< GitHub177> [bitcoin] mrCertified opened pull request #7867: deleted Configure.ac restore bits to all networks(%master%masterCode[{rLi}]) (master...patch-1) https://github.com/bitcoin/bitcoin/pull/7867
< GitHub91> [bitcoin] theuni opened pull request #7868: net: Split DNS resolving functionality out of net structures (master...net-cleanup-resolve) https://github.com/bitcoin/bitcoin/pull/7868
< cryptocoder> hi everyone
< cryptocoder> not sure if this is the right place for this, but how come the windows release for core 0.12 does not seem to have zmq support in it?
< jonasschnelli> cryptocoder: IIRC there were problems with static linking...
< * jonasschnelli> is searching the exact reason
< cryptocoder> ah! thank you jonasschnelli. I was hoping i’m not missing something obvious
< jonasschnelli> cryptocoder: I think you could hack the depends/ build system to link it dynamically.
< jonasschnelli> Or compile it on a Windows machine (not cross-compiled), but not sure how this exactly works.
< jonasschnelli> sipa, wumpus: reindex with LMDB took ~11h (same machine where a full sync with master took 2h20'). Now reindexing the master levelDB node.
< jonasschnelli> wumpus: LMDB reindex with -dbcache=9000: >11h, levelDB with -dbcache=8000: 2h25'.
< jonasschnelli> (the IBD from random peers was a couple of minutes faster... i'm confused)
< jonasschnelli> I wonder where the performance bottleneck with LMDB is.
< sipa> that's very strange!
< btcdrak> Completely backwards
< sipa> especially since neither should have touched the database at all
< sipa> with such dbcache
< shangzhou> sipa: http://bitcoin.sipa.be/ data is not up to date
< sipa> shangzhou: thanks, fixing
< sipa> shangzhou: done
< shangzhou> thanks @sipa
< sipa> i upgraded my node to 0.12 and the new rpcauth mechanism, but didn't give the credentials to the script generating the website
< wumpus> jonasschnelli: interesting - so a reindex is slow with LMDB, but a sync from another node is fast? Greg did a benchmark with a sync from another node and LMDB came out much faster: https://github.com/laanwj/bitcoin/tree/2016_04_mdb#x86_64 . Also wonder where the bottleneck is, but will add it to the performance results
< jonasschnelli> wumpus: Both are slower. IBD and reindex.
< wumpus> okay
< wumpus> have you tried with the default dbcache as well?
< wumpus> maybe the 5GB write transaction is what gets it
< jonasschnelli> No... you mean a reindex with default dbcache?
< wumpus> yes. You've done so with dbcache 9000, which means it fills to about ~5gb without using the database, then it writes everything at once
< wumpus> lmdb seems to shine in read latency, but it could be the big write is slow
< wumpus> with such a large dbcache it never reads so that part isn't measured
< jonasschnelli> wumpus: From looking at the "log speed" (very efficient benchmark technique :) ), lmdb seems to be slower during non-write operations.
< jonasschnelli> But I'll now compare a reindex with no other options.
< wumpus> but with such a high dbcache, syncing from scratch, it doesn't touch the database at all
< jonasschnelli> That is what is really strange...
< jonasschnelli> maybe another change from your lmdb branch causes this.
< jonasschnelli> Haven't had time to debug it though.
< wumpus> if everyone used dbcache=9000 we wouldn't need a database at all, we could just store the utxo set in a linear file
< jonasschnelli> Indeed. Just hope there is no crash before the big write. :)
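A toy sketch of the point wumpus is making here (made-up names, not Bitcoin Core's actual CCoinsViewCache): with a cache large enough to hold the whole UTXO set, the backing database only sees one big batched write at the end, so the engine behind it barely matters during sync.

    # Toy model only: updates accumulate in memory and hit the backing store
    # in a single batched flush when the cache limit is exceeded (or at
    # shutdown). With a limit larger than the UTXO set, flush() runs once.
    class ToyCoinsCache:
        def __init__(self, backing_db, cache_limit_entries):
            self.db = backing_db            # dict-like stand-in for LevelDB/LMDB
            self.cache = {}
            self.limit = cache_limit_entries

        def add_coin(self, outpoint, coin):
            self.cache[outpoint] = coin
            if len(self.cache) > self.limit:
                self.flush()                # only reached if the cache overflows

        def flush(self):
            self.db.update(self.cache)      # one big batched write
            self.cache.clear()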
< sipa> jonasschnelli: can you run with debug=bench, and show the resulting debug.log?
< jonasschnelli> sipa: ah. Right. Let me do that.
< * jonasschnelli> is shutting down bitcoind with -dbcache=9000,... waits...
< sipa> --- 3 hours later ---
< jonasschnelli> Hei! There is an SSD! :)
< jonasschnelli> Example bench (lmdb): height: 82804, Connect total: 0.14ms [22.08s]
< jonasschnelli> Verify 0 txins: 0.03ms (0.000ms/txin) [15.22s], Index writing: 0.05ms [3.19s]
< jonasschnelli> Maybe index writing?
< wumpus> height 82804 isn't very interesting yet :)
< jonasschnelli> Yeah. Can't start top down. :)
< wumpus> but anyhow, in one light I like your result jonasschnelli, it would mean that leveldb is ok+ and there's no need to even spend time investigating switching to something else
< wumpus> if both sqlite and lmdb came out slower - then again, Greg's benchmark showed something completely different, which is curious
< wumpus> you're using a SSD? don't know what his storage device was
< sipa> unless lmdb's final full UTXO set dump took ~9h longer than leveldb's, i can't explain jonasschnelli's result
< jonasschnelli> Yes. Let's first find out where the time/cycles are consumed.
< jonasschnelli> wumpus: Yes. I'm using SSD. The write speed is somewhere around 1GB/s.
< jonasschnelli> Verify 1245 txins: 131.58ms (0.106ms/txin) [6447.02s]
< jonasschnelli> Is the value in the [] the total value so far?
< sipa> yes
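For reference, a throwaway parser for those -debug=bench lines, assuming the "Connect total: Xms [Ys]" format quoted above, where the bracketed number is the running total in seconds:

    import re

    # Quick-and-dirty reader for -debug=bench lines of the form
    #   "Connect total: 0.14ms [22.08s]"
    # The bracketed value is the cumulative time in seconds so far.
    BENCH_RE = re.compile(r"Connect total: ([0-9.]+)ms \[([0-9.]+)s\]")

    def last_connect_total(debug_log_path):
        total = None
        with open(debug_log_path) as f:
            for line in f:
                m = BENCH_RE.search(line)
                if m:
                    total = float(m.group(2))   # running total in seconds
        return total

    # e.g. compare last_connect_total() of the lmdb node's debug.log against
    # the leveldb node's at the same block height.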
< jonasschnelli> Running two hours so far. Looks like most time is consumed for verify (as expected). But still not clear why lmdb takes ~5 times longer...
< jonasschnelli> Current Master at block 200'000: Connect total: 33.82ms [331.59s]
< jonasschnelli> LMDB at block 200'000: Connect total: 125.55ms [1466.35s]
< sipa> are you sure the LMDB is with dbcache set high?
< * jonasschnelli> is checking if the wumpus lmdb branch uses libsecp
< sipa> jonasschnelli: show me the verify lines
< jonasschnelli> no. Now it's with default dbcache
< sipa> both leveldb and lmdb?
< jonasschnelli> yes.
< sipa> oh, ok
< jonasschnelli> LMDB block 200k: Verify 1231 txins: 121.49ms (0.099ms/txin) [1385.86s]
< jonasschnelli> leveldb: Verify 1231 txins: 28.88ms (0.023ms/txin) [262.67s]
< sipa> are they running with the same -par?
< jonasschnelli> Yes. Passed only -debug=bench
< jonasschnelli> nononono!
< jonasschnelli> lmdb node -> git branch -> master!
< sipa> ?
< jonasschnelli> I'm actually not testing against LMDB, I'm testing against non-libsecp master from Nov 1st!
< jonasschnelli> Dammit!
< sipa> lol
< sipa> libsecp was merged on nov 5th :)
< sipa> thanks for benchmarking the improvements we've made over the past 5 months, then... they turn out to be significant
< jonasschnelli> Somewhere during a git reset I forgot to run "git checkout 2016_04_mdb".
< GitHub190> [bitcoin] mrbandrews opened pull request #7871: Manual block file pruning. (master...ba-manual6) https://github.com/bitcoin/bitcoin/pull/7871
< sdaftuar> so, MAX_OPS_PER_SCRIPT includes op codes not executed? i didn't expect that.
< Chris_Stewart_5> ^^^^
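A simplified model of the rule sdaftuar is pointing at (a toy, not the real EvalScript): the op counter is bumped for every non-push opcode before the executed-branch check, so opcodes inside a skipped OP_IF arm still count toward MAX_OPS_PER_SCRIPT.

    # Toy sketch of the counting rule being discussed, not Bitcoin Core code.
    MAX_OPS_PER_SCRIPT = 201
    OP_16 = 0x60                      # opcodes above OP_16 count as "ops"

    def count_ops(script_ops):
        """script_ops: list of (opcode, f_exec) pairs, where f_exec says
        whether the opcode sits on a branch that would actually execute."""
        n_op_count = 0
        for opcode, f_exec in script_ops:
            if opcode > OP_16:
                n_op_count += 1                     # counted regardless of f_exec
                if n_op_count > MAX_OPS_PER_SCRIPT:
                    raise ValueError("script error: op count exceeded")
            if not f_exec:
                continue                            # skipped, but already counted
            # ... execute the opcode here ...
        return n_op_count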
< Chris_Stewart_5> even more interesting, if MAST is implemented what does that mean???
< Chris_Stewart_5> if I understand correctly MAST only reveals branches of our control structure that are actually executed
< sipa> exactly
< sipa> and for the ones not executed, you give their hash
< Chris_Stewart_5> sipa: Does MAST change the data structure from a List to a Tree inside of interpreter?
< sipa> that depends on the implementation
< sipa> it's just a generic idea
< sipa> i haven't looked at jl2012's specific proposal
< Chris_Stewart_5> sipa: Going off what we were talking about yesterday, are we constrained to realistically implementing this as a list for fear of unintended consensus changes?
< sipa> no
< sipa> no script versions can easily use a completely independent interpreter
< sipa> *new script versions
< Chris_Stewart_5> ahh ok
< jl2012> sipa: It's like your tree signature. I just compact everything into 3 arguments: position, path, script. (Actually I borrowed your original segwit code when the commitment was a Merkle tree)
< jl2012> and the depth is implied by the size of path
< jl2012> Chris_Stewart_5: it's very similar to P2SH and P2WSH. Just with 2 extra arguments
< Chris_Stewart_5> jl2012: I understand that part; for the specific implementation details I'm getting caught up on 'Position' and 'Path' and how they are different
< Chris_Stewart_5> It seems that if you have the path that the script takes in the tree you could derive its position..
< jl2012> First you divide the size of Path by 32, which is the Depth of the tree
< instagibbs> Chris_Stewart_5, I believe path is the hashes in the tree, the position will tell you how to build the branch
< instagibbs> jl2012, you might want to make it explicit, as I had trouble understanding it first go around as well
< instagibbs> and if I'm wrong, doubly so :P
< jl2012> instagibbs, yes, you are right
< Chris_Stewart_5> hmm ok
< instagibbs> I had to read between the lines tbh, since path had to be a multiple of 32 bytes, i inferred it was a sha hash
< jl2012> for Depth (d), you may have at most 2^d possible Position
< jl2012> Position = 0 means the leftmost position in the tree
< Chris_Stewart_5> jl2012: Leftmost... what exactly does that mean? The leftmost leaf node if you were to draw the tree out?
< instagibbs> yes
< jl2012> yes
< jl2012> same as the Merkle Root in the block header
< instagibbs> jl2012, mind if I write a clarification text? do I PR directly against the bip repo or yours?
< jl2012> instagibbs, please feel free, just a direct PR to BIP repo, thanks
< Chris_Stewart_5> So path is essentially a vector of sha256 hashes.. why exactly do we need the position arg again? instagibbs said for building the branch, can you be more explicit than that? Somehow reconstructing the script from the hashes?
< sipa> Chris_Stewart_5: if your leaves are (a,b,c,d) then root=H(H(a,b),H(c,d)), right?
< Chris_Stewart_5> yes
< sipa> if you want leaf b, you need to reveal a and H(c,d), right?
< Chris_Stewart_5> yes
< sipa> so your path would be (a,H(c,d))
< sipa> if you want leaf c, however, you reveal (d,H(a,b))
< Chris_Stewart_5> so position = 3?
< sipa> if you reveal b, position = 1
< sipa> if you reveal c, position = 2
< sipa> etc
< Chris_Stewart_5> ahh zero based index
< Chris_Stewart_5> gotcha. Thanks.
< sipa> of course :p
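Putting sipa's worked example into code, a minimal sketch of recomputing the root from (leaf, path, position), assuming the same double-SHA256 pairing as the block-header Merkle tree and sibling hashes listed from the leaf level upward (jl2012's actual serialization may differ, e.g. depth being implied by len(path)/32):

    import hashlib

    def dsha256(data):
        # Bitcoin-style double SHA-256
        return hashlib.sha256(hashlib.sha256(data).digest()).digest()

    def root_from_path(leaf_hash, path, position):
        """Recompute the Merkle root from a leaf hash, its sibling hashes
        (path, leaf level first) and the leaf's 0-based position."""
        h = leaf_hash
        for sibling in path:
            if position & 1:                 # current node is a right child
                h = dsha256(sibling + h)
            else:                            # current node is a left child
                h = dsha256(h + sibling)
            position >>= 1
        return h

    # With (hashed) leaves a, b, c, d: root = H(H(a,b), H(c,d)).
    # Revealing b means path = [a, H(c,d)], position = 1;
    # revealing c means path = [d, H(a,b)], position = 2.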
< * instagibbs> fortran user detected
< Chris_Stewart_5> lol
< jl2012> programmers count from 0
< Chris_Stewart_5> I think I forgot to turn on the CS part of my brain this morning... it was almost like I was a normal person for a while :P
< Chris_Stewart_5> *looks around for more coffee*
< instagibbs> let me know of quibbles etc
< jl2012> thanks!
< instagibbs> the merkle branch also has to be "minimal", but I figure anyone who skims the merkle root function would arrive at that
< instagibbs> and really should just copypasta that function if need be
< instagibbs> maybe branch singular already means that, no idea
< wumpus> jonasschnelli: hahah oops! good to know that's the issue, that was an old branch
< wumpus> I remember having made a similar mistake at least once, testing another branch than I thought I was testing, then spending quite some time debugging why it didn't work as expected. I think around the time of the boost to evhttpd switch.
< jl2012> instagibbs: what do you mean by minimal?
< instagibbs> it only deals with the nodes it needs to compute a single path
< jl2012> I think it's implied by the design
< Chris_Stewart_5> probably better to be explicit..
< instagibbs> Copying and pasting the tiny function call is pretty explicit and no work
< jl2012> Chris_Stewart_5, instagibbs: more comments added to the reference implementation https://github.com/jl2012/bips/blob/bip114ref/bip-0114.mediawiki#Reference_Implementation
< instagibbs> ACK :)
< jonasschnelli> Now correct: LMDB branch sync from random peers took 2h 30min up to progress=1
< jonasschnelli> Now comparing reindex with default dbcache
< jonasschnelli> First shutdown of the LMDB IBDed node with dbcache=9000 took just a couple of seconds (write speed didn't feel different from leveldb).
< GitHub79> [bitcoin] morcos opened pull request #7874: Improve AlreadyHave (master...speedAlreadyHave) https://github.com/bitcoin/bitcoin/pull/7874
< cfields_> mm, what's the real-world use-case for getaddednodeinfo rpc?
< cfields_> the resolving logic is kinda wonky, and i'm not sure it's worth trying to maintain compatibility with the net refactor
< cfields_> BlueMatt_: ^^. Looks like you added it. In particular, the issue is that (for dns entries) it does a resolve in the rpc, though that doesn't represent the ips that the network thread will end up trying
< cfields_> so I'm not sure that it's really worth trying to enumerate them. seems like keeping a map of dns->resolved would be enough to determine if a dns entry is connected or not, which i think is the useful info there?
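Roughly the bookkeeping cfields_ is suggesting, sketched with hypothetical names (not existing Bitcoin Core code): record what each added DNS entry resolved to when the network thread connected, then "is this added node connected?" becomes a set intersection rather than a fresh resolve in the RPC.

    # Hypothetical sketch only; names and structure are invented.
    added_node_resolutions = {}   # "seed.example.org" -> {"1.2.3.4:8333", ...}

    def record_resolution(dns_name, resolved_addr):
        # called by the network thread when it actually connects
        added_node_resolutions.setdefault(dns_name, set()).add(resolved_addr)

    def added_node_connected(dns_name, connected_addrs):
        """connected_addrs: set of addresses of currently connected peers."""
        return bool(added_node_resolutions.get(dns_name, set()) & connected_addrs)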
< sipa> cfields_: i doubt it is important to keep its exact semantics
< cfields_> ok