< achow101> is there any particular reason to be gathering coverage data for leveldb?
< achow101> cfields_: ^^ (you added it 4 years ago)
< gmaxwell> achow101: when reporting you can easily filter it out by passing through lcov -r
< gmaxwell> I think it might be of interest at some point to know figures for how much of leveldb that our tests cover.
< gmaxwell> though kinda odd to do it by default
< achow101> lcov -r is really slow, but I have a workaround for that :)
< achow101> I don't think having it is necessary so we can just skip gathering data for it altogether
< gmaxwell> faster computer. :P
< gmaxwell> the way the lcov stuff is set up in our build seems kinda convoluted to me, but I'm not maintaining it. I normally just do it with CFLAGS/CCFLAGS overriding and building.
< sipa> gmaxwell: achow101 reimplemented lcov -r in python, making it run 60x faster
< sipa> (20 minutes -> 20 sec)
< gmaxwell> lol
< bitcoin-git> [bitcoin] jtimon closed pull request #9579: Net: Trivial-review: Make SendMessages easier to review (master...0.15-split-sendmessages) https://github.com/bitcoin/bitcoin/pull/9579
< bitcoin-git> [bitcoin] achow101 opened pull request #10552: [Test] Tests for zmqpubrawtx and zmqpubrawblock (master...zmq-raw-tests) https://github.com/bitcoin/bitcoin/pull/10552
< Chris_Stewart_5> bitcoind in -regtest activates all soft forks (+ segwit) and enforces all Policy flags after generating ~500 blocks right?
< bitcoin-git> [bitcoin] fanquake closed pull request #10477: Use C++ initializer to initialze map and implement map comparator as const (master...master) https://github.com/bitcoin/bitcoin/pull/10477
< kallewoof> fanquake: I think you got the @name wrong in the comment on #10477 btw. (cg31 opened, you said @benma)
< gribble> https://github.com/bitcoin/bitcoin/issues/10477 | Use C++ initializer to initialze map and implement map comparator as const by cg31 · Pull Request #10477 · bitcoin/bitcoin · GitHub
< wumpus> hhm thinking whether to add statistics such as number of incoming/outgoing connections, connections per bind point, to getnetworkinfo or just make a statistics script based on getpeerinfo on top for myself
< wumpus> the # in/out connections is in the GUI, so in principle it makes sense to add it to RPC
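A rough sketch of what that RPC-side addition could look like inside getnetworkinfo in rpc/net.cpp; the connections_in/connections_out field names and the helper are assumptions for illustration, not an agreed interface:

```cpp
#include "net.h"       // g_connman, CConnman
#include "univalue.h"

// Hypothetical helper: append the same in/out counts the GUI already shows.
static void AppendConnectionCounts(UniValue& obj)
{
    if (!g_connman) return;
    obj.push_back(Pair("connections",     (int)g_connman->GetNodeCount(CConnman::CONNECTIONS_ALL)));
    obj.push_back(Pair("connections_in",  (int)g_connman->GetNodeCount(CConnman::CONNECTIONS_IN)));
    obj.push_back(Pair("connections_out", (int)g_connman->GetNodeCount(CConnman::CONNECTIONS_OUT)));
}
```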
< fanquake> kallewoof oops, corrected.
< bitcoin-git> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/e801084decf4...6c2d81f34dc4
< bitcoin-git> bitcoin/master 0abc588 practicalswift: [tests] Remove printf(...)
< bitcoin-git> bitcoin/master 6c2d81f Wladimir J. van der Laan: Merge #10524: [tests] Remove printf(...)...
< bitcoin-git> [bitcoin] laanwj closed pull request #10524: [tests] Remove printf(...) (master...u-for-unsigned-int) https://github.com/bitcoin/bitcoin/pull/10524
< bitcoin-git> [bitcoin] practicalswift opened pull request #10553: Simplify "bool x = y ? true : false". Remove unused function and trailing semicolon. (master...minor-cleanups) https://github.com/bitcoin/bitcoin/pull/10553
< bitcoin-git> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/6c2d81f34dc4...71ab6e553856
< bitcoin-git> bitcoin/master 227ae9b practicalswift: [tests] Use FastRandomContext instead of boost::random::{mt19937,uniform_int_distribution}
< bitcoin-git> bitcoin/master 71ab6e5 Wladimir J. van der Laan: Merge #10547: [tests] Use FastRandomContext instead of boost::random::{mt19937,uniform_int_distribution}...
< bitcoin-git> [bitcoin] laanwj closed pull request #10547: [tests] Use FastRandomContext instead of boost::random::{mt19937,uniform_int_distribution} (master...remove-boost-random-dependency) https://github.com/bitcoin/bitcoin/pull/10547
< bitcoin-git> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/71ab6e553856...35e7f13f68f9
< bitcoin-git> bitcoin/master 246a02f practicalswift: Use std::unordered_{map,set} (C++11) instead of boost::unordered_{map,set}
< bitcoin-git> bitcoin/master 35e7f13 Wladimir J. van der Laan: Merge #10548: Use std::unordered_{map,set} (C++11) instead of boost::unordered_{map,set}...
< bitcoin-git> [bitcoin] laanwj closed pull request #10548: Use std::unordered_{map,set} (C++11) instead of boost::unordered_{map,set} (master...unordered_map) https://github.com/bitcoin/bitcoin/pull/10548
< bitcoin-git> [bitcoin] laanwj pushed 3 new commits to master: https://github.com/bitcoin/bitcoin/compare/35e7f13f68f9...9c248e39f280
< bitcoin-git> bitcoin/master 5b75c47 Andrew Chow: Add a valid opcode sanity check to CScript...
< bitcoin-git> bitcoin/master ac4e438 Andrew Chow: Sanity check transaction scripts in DecodeHexTx...
< bitcoin-git> bitcoin/master 9c248e3 Wladimir J. van der Laan: Merge #10481: Decodehextx scripts sanity check...
< bitcoin-git> [bitcoin] laanwj closed pull request #10481: Decodehextx scripts sanity check (master...decodehextx-sanity) https://github.com/bitcoin/bitcoin/pull/10481
< bitcoin-git> [bitcoin] somdoron opened pull request #10554: ZMQ: add publishers for wallet transactions. (master...zmq_wallet_tx) https://github.com/bitcoin/bitcoin/pull/10554
< bitcoin-git> [bitcoin] jnewbery opened pull request #10555: [tests] various improvements to zmq_test.py (master...zmqtestimprovements) https://github.com/bitcoin/bitcoin/pull/10555
< bitcoin-git> [bitcoin] jnewbery opened pull request #10556: Move stop/start functions from utils.py into BitcoinTestFramework (master...testframeworkstopstart) https://github.com/bitcoin/bitcoin/pull/10556
< bitcoin-git> [bitcoin] morcos opened pull request #10557: Make check to distinguish between orphan txs and old txs more efficient. (master...dontcheckoutputs) https://github.com/bitcoin/bitcoin/pull/10557
< bitcoin-git> [bitcoin] morcos opened pull request #10558: Address nits from per-utxo change (master...10195nits) https://github.com/bitcoin/bitcoin/pull/10558
< jtimon> travis didn't run for https://github.com/bitcoin/bitcoin/pull/9176
< bitcoin-git> [bitcoin] achow101 closed pull request #9522: [RPC] Fix decoderawtransaction decoding of segwit txs (master...fix-decoderawtx) https://github.com/bitcoin/bitcoin/pull/9522
< arubi> luke-jr made a comment about changing that 0x00 to 0xff. in case segwit is retried, this is very good input imo..
< sipa> yeah we should have used 0xff
< sipa> or even better... we should never have used normal transaction encoding for partial transactions
< arubi> right, I'd love for a blob of tx data to tell me what it wants me to do with it
< * sipa> proposes ASN.1
< * sipa> hides
< arubi> haha, make the wallet even fatter
< Chris_Stewart_5> arubi: so some sort of marker that says 'needs signature'?
< arubi> the scriptsig area itself can describe what it wants without hurting current layout
< arubi> "oops, this is unsigned input" or maybe a prevtxid "oops, not funded..."
< arubi> I mean, it depends at what stage it's parsed.. hard to tell for "decoderawtransaction" :\
< bitcoin-git> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/9c248e39f280...29f80cd230c3
< bitcoin-git> bitcoin/master 3fb81a8 practicalswift: Use list initialization (C++11) for maps/vectors instead of boost::assign::map_list_of/list_of
< bitcoin-git> bitcoin/master 29f80cd Wladimir J. van der Laan: Merge #10545: Use list initialization (C++11) for maps/vectors instead of boost::assign::map_list_of/list_of...
< bitcoin-git> [bitcoin] laanwj closed pull request #10545: Use list initialization (C++11) for maps/vectors instead of boost::assign::map_list_of/list_of (master...list_of) https://github.com/bitcoin/bitcoin/pull/10545
< bitcoin-git> [bitcoin] sdaftuar reopened pull request #10357: Allow setting nMinimumChainWork on command line (master...2017-05-chainwork-commandline) https://github.com/bitcoin/bitcoin/pull/10357
< gmaxwell> Meeting in ten minutes.
< sipa> DOINK
< wumpus> #startmeeting
< lightningbot> Meeting started Thu Jun 8 19:00:04 2017 UTC. The chair is wumpus. Information about MeetBot at http://wiki.debian.org/MeetBot.
< lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
< wumpus> #bitcoin-core-dev Meeting: wumpus sipa gmaxwell jonasschnelli morcos luke-jr btcdrak sdaftuar jtimon cfields petertodd kanzure bluematt instagibbs phantomcircuit codeshark michagogo marcofalke paveljanik NicolasDorier jl2012 instagibbs
< jonasschnelli> hi
< sdaftuar> hi
< sipa> hi
< achow101> hi
< wumpus> proposed topics?
< kanzure> hi.
< sipa> UI interaction with pertxout upgrade?
< gmaxwell> #bitcoin-core-dev Meeting: wumpus sipa gmaxwell jonasschnelli morcos luke-jr btcdrak sdaftuar jtimon cfields petertodd kanzure bluematt instagibbs phantomcircuit codeshark michagogo marcofalke paveljanik NicolasDorier
< jonasschnelli> sipa: ack
< wumpus> ok, let's do high priority for review first, then that
< wumpus> #topic high priority for review
< sipa> #10148
< gribble> https://github.com/bitcoin/bitcoin/issues/10148 | Use non-atomic flushing with block replay by sipa · Pull Request #10148 · bitcoin/bitcoin · GitHub
< luke-jr> I've rebased multiwallet etc
< wumpus> luke-jr: great!
< sipa> (^ will double our effectively available dbcache)
< wumpus> luke-jr: saw there were some new review comments, did you address them yet?
< luke-jr> wumpus: I believe all review comments are addressed or answered
< wumpus> added 10148
< sipa> thanks!
< jtimon> still waiting on my current one, thanks wumpus for benchmarking
< jonasschnelli> #8694 now passes gitian, will re-utack soon
< gribble> https://github.com/bitcoin/bitcoin/issues/8694 | Basic multiwallet support by luke-jr · Pull Request #8694 · bitcoin/bitcoin · GitHub
< wumpus> jtimon: a very nice gain!
< luke-jr> oh, there was a comment asking for tests..
< luke-jr> I didn't write any, although I agree they would be nice
< wumpus> jtimon: unfortunately bitcoind nuked my log so I can't get the timings, but 26% savings in block hash operations is nice
< jtimon> wumpus: nice
< jtimon> we're talking about #10339 in case someone is lost
< gribble> https://github.com/bitcoin/bitcoin/issues/10339 | Optimization: Calculate block hash less times by jtimon · Pull Request #10339 · bitcoin/bitcoin · GitHub
< wumpus> luke-jr: well I'd say add those later, this is only the first in a series towards multiwallet after all
< sipa> wumpus: status on account to label conversion?
< luke-jr> indeed, it might not be possible to test right now since it's not exposed to RPC
< BlueMatt> wumpus: "in block hash operations" hmm? can you be more specific?
< gmaxwell> Related to hashing, does anyone here have AMD ryzen yet?
< BlueMatt> whats that compared to total runtime?
< wumpus> BlueMatt: I don't know about time, I just instrumented the block header hash function
< gmaxwell> BlueMatt: it's tiny vs total runtime, I assume-- but it should impact latency on block handling somewhat.
< luke-jr> gmaxwell: no, but I'd been holding out for SHA2 acceleration before upgrading, so I might get one if AMD is competing with Intel again
< sipa> related topic: sha2/crc32 hw accel?
< wumpus> as I said, I lost the timings (blame bitcoind nuking the log if it's too large)
< gmaxwell> luke-jr: Ryzen has the sha2 instructions... so I'm asking to find out if anyone will be able to test support for them. :)
< luke-jr> gmaxwell: right, that's why ;)
< luke-jr> I just haven't had time to investigate the pros/cons otherwise yet
< wumpus> sipa: no progress on that yet
< gmaxwell> In any case, 98% of the work needed to support sha2 instructions is the same as using SSE4 SHA2 which will itself be a non-trival speedup.
< wumpus> crc speedup should be possible after the leveldb upgrade that sipa PRed
< luke-jr> gmaxwell: we removed SSE4 stuff years ago, but I'm not sure if it was used for non-mining
< sipa> luke-jr: it was only used for mining at the time
< gmaxwell> BlueMatt: IIRC I instrumented and measured accepting a block at tip hashed the header ~6 times or so.
< gmaxwell> luke-jr: it was mining only.
< wumpus> luke-jr: that was a different one, the N-way-parallel
< sipa> yup 4-way parallel SSE2, iirc
< gmaxwell> and it was a vector SHA2 that had to work on 4 elements at a time.
< jtimon> gmaxwell: that's before or after the patch?
< gmaxwell> What we should be implementing now is the one at a time SIMD sha2. I believe matt uses it in fibre but without autodetection.
< sipa> it seems we've strayed from the topic?
< gmaxwell> jtimon: before.
< luke-jr> I wonder if it'd be worth using 4-way for merkle trees etc
< BlueMatt> gmaxwell: yea, but if its not measurable on the total, unlikely to be worth the effort and complexity.....an alternative - do we have CBlockIndex*es in most of those places? pblockindex->GetBlockHash() is free and simpler than passing hashes around
< jonasschnelli> Will have access to a Ryzen in a couple of days via Hetzner
< * luke-jr> wonders if the GCC compile farm has a Ryzen
< morcos> Without doing a thorough review of 10339, is the speedup worth it as opposed to the tradeoff of making the code a little more complicated/involved? Who was it that said the primary point of source code is to communicate between developers?
< jonasschnelli> #10339
< morcos> It just seems weird to pass a block and separately its hash into a bunch of functions
< gribble> https://github.com/bitcoin/bitcoin/issues/10339 | Optimization: Calculate block hash less times by jtimon · Pull Request #10339 · bitcoin/bitcoin · GitHub
< jtimon> BlueMatt: what's wrong with passing hashes around?
< BlueMatt> DoPartOfConnectBlock(42 args)
< wumpus> #topic Optimization: Calculate block hash less times by jtimon
< BlueMatt> we already have too many args in many of those functions
< instagibbs> "fewer" :P
< wumpus> well if hashing is a bottleneck, the obvious optimization is to just do it less
< sipa> instagibbs: i didn't want to say anything :)
< BlueMatt> wumpus: is it?
< wumpus> if hashing is not a bottleneck, then going after SSE and such makes no sense either
< sipa> BlueMatt: i believe that in all places where we already have CBlockIndex, no new block hashes are computed
< sipa> all of the cases in 10339 are before having a CBlockIndex
< wumpus> so if #10339 is not an improvement then we can leave hashing alone
< gribble> https://github.com/bitcoin/bitcoin/issues/10339 | Optimization: Calculate block hash less times by jtimon · Pull Request #10339 · bitcoin/bitcoin · GitHub
< gmaxwell> Well my suggestion on that hash thing was just to cache the hash in the block object, but Pieter said he prefered this.
< jtimon> instagibbs: calculate fewer times? thanks
< gmaxwell> (I also implemented that, though in a way that wasn't correct for the mining code.)
< BlueMatt> gmaxwell: I highly prefer that, though making CBlock const ala CTransaction is a bit more involved
< sipa> gmaxwell: if we create another CBlock for the cases where that's not needed
< sipa> (like GetTransaction and mining code)
< jtimon> Note also that we're sometimes getting the hash from the index and sometimes from the CBlock
< wumpus> right, calculating it eagerly can also result in overhead
< morcos> I'd just like to see some evidence, even a little bit, that this optimization is worth it. 26% fewer hashing operations results in what savings? Or do people not agree it makes the code a bit more cumbersome
< sipa> i don't see much problem in passing a hash along with a block if you know it
< BlueMatt> passing in the hash as a part of ProcessNewBlock? yuck
< sipa> it's certainly a bit of complication, but a trivial one
< wumpus> morcos: sure, my point is just that if 26% fewer hashing operations isn't worth it, then using special instructions to gain a few % isn't worth it either
< CodeShark> seems like there are probably lower hanging fruit in optimizations but meh
< BlueMatt> I just spent a bunch of time getting fewer args there
< morcos> wumpus: well hashing occurs a lot more often than just in the calculation of these block hashes
< BlueMatt> to separate the consensus code from net_processing, now we're adding more coupling :(
< jtimon> it seems to me that the reservations are mostly a question of style
< gmaxwell> morcos: My reason for bringing it up is that I believe the repeated hashing is on the latency critical path for block propagation to the tune of maybe a millisecond-ish.
< jtimon> I mean calculating it eagerly can penalize in some cases, right
< gmaxwell> wumpus: these optimizations just prevent repeated hashing of the headers, in terms of total bytes hashed while syncing thats pretty small even though we do have a high repetition factor.
< gmaxwell> jtimon: what case would computing it early penalize?
< wumpus> gmaxwell: ok...
< CodeShark> BlueMatt: my preference is to sacrifice a little performance for better architecture ;)
< sdaftuar> i will try to benchmark the effect on validation speed for this
< wumpus> CodeShark: I think that's a fair argument
< sipa> wumpus: transaction hashing accounts for far more than block header hashing, but not on the critical path
< jtimon> gmaxwell: when the validation fails for some other reason before you need the hash?
< * BlueMatt> would be more ok if we calculated it at the top of ProcessNewBlock and pass that one around, but still not ideal
< BlueMatt> passing it in to ProcessNewBlock is.....ugh
< gmaxwell> jtimon: I don't think there is any validation failure path where we don't hash... after all, we'll want to know _what_ block hash failed to validate.
< jtimon> BlueMatt: you could have said that when each function had a separate commit, but I can leave that out
< BlueMatt> or, really, chnging the signatures of stuff in validation.h for this without a benchmark showing its a real win :(
< jtimon> gmaxwell: yeah, fair enough
< * morcos> has said his piece and is willing to let the people who spent more time thinking about this decide
< wumpus> ok, clear, better to close it I guess, would have made sense to have this discussion earlier
< jtimon> so we come back to only the "ugh" argument
< gmaxwell> But I still don't understand why sipa prefered to do this jtimon patch instead of making the object cache it. FWIW, just monkey patching it 'works' and the only obvious issue is mining.
< sipa> gmaxwell: and every other read from disk
< jtimon> I preferred this because I think it's simpler than the cache, even if it's "ugh"
< gmaxwell> sipa: I don't understand your comment.
< sipa> i consider adding more arguments to validation-specific functions less invasive than changing a primitive datastructure
< sipa> we can do both, if needed
< wumpus> sipa: I tend to agree
< sipa> gmaxwell: i believe we often read blocks from disk without caring about its hash, because we already know it
< wumpus> sipa: just passing extra arguments is easier to reason about than extending primitive datastructures
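For readers following along, a minimal sketch of the two approaches being weighed; the names CheckBlockHeaderWithHash and CBlockWithHash are made up for illustration and are not the actual #10339 changes:

```cpp
#include "primitives/block.h"
#include "uint256.h"

// (a) Pass the already-known hash alongside the object (the #10339 direction):
//     callers that computed or looked up the hash hand it down instead of
//     letting each callee call block.GetHash() again.
bool CheckBlockHeaderWithHash(const CBlockHeader& header, const uint256& hash);

// (b) Cache the hash inside the object (gmaxwell's suggestion): computed
//     lazily, at most once, at the cost of touching a primitive datastructure.
class CBlockWithHash
{
public:
    CBlockHeader header;
    const uint256& GetHash() const
    {
        if (m_hash.IsNull()) m_hash = header.GetHash();
        return m_hash;
    }
private:
    mutable uint256 m_hash;
};
```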
< gmaxwell> sipa: the read funcion computes it!
< sipa> gmaxwell: ?
< sipa> currently, yes, as a checksum
< sipa> if we care about the checksum functionality, we should add a crc
< gmaxwell> // Check the header
< gmaxwell> if (!CheckProofOfWork(block.GetHash(), block.nBits, consensusParams))
< wumpus> jtimon: I agree, caching is always somewhat risky and error prone, this is more explicit and clear
< gmaxwell> return error("ReadBlockFromDisk: Errors in block header at %s", pos.ToString());
< wumpus> then again if it's not worth it, performance-wise, we shouldn't do it at all
< wumpus> not in another way either
< sipa> making the block hash part of CBlock makes us unable to skip computing that hash
< sipa> even in places where we don't care about it
< gmaxwell> I think I previously gave figures on the latency impact of caching.
< jtimon> how can I benchmark this to convince morcos?
< gmaxwell> sipa: Where are those places?
< wumpus> I don't know, I tried
< sipa> gmaxwell: every time we read a block from disk, we already have its hash (because that's how we look it up!)
< sdaftuar> sipa: don't we also hash all the transactions when we read a block from disk?
< sipa> gmaxwell: recomputing it there serves as nothing more than a checksum
< sdaftuar> surely that dominates all this
< morcos> sipa: i know i said i'd shut up, but you just read from DISK, who cares if you hash 80 bytes?
< wumpus> might be time for next topic, if this is not a perf issue, might be better to discuss it after meeting
< sipa> sdaftuar: good point.
< gmaxwell> sdaftuar: we don't. I used to think we did but we do not.
< sipa> yes, we do
< gmaxwell> sipa: where?
< sipa> deserializing a transaction computes its hash
< gmaxwell> oh right, what we don't do is validate the hashroot.
< sipa> let's discuss this after the meeting
< gmaxwell> k
< sipa> next topic?
< wumpus> #topic UI interaction with pertxout upgrade
< jtimon> in any case, BlueMatt morcos if you already knew you didn't like this, I would have appreciated knowing it earlier, maybe a provisional concept NACK or something
< sipa> so, since 10195, we'll have a possibly lengthy (minutes on decent hardware, possibly much longer elsewhere) upgrade process from the old per-txid utxo db to the per-txout one
< morcos> jtimon: yes sorry, i only looked at the PR during todays meeting, also why i said i'd stop arguing about it
< sipa> that needs some GUI interaction, i believe
< wumpus> jtimon: same here, this should have been clear sooner, otherwise I wouldn't have spent as much time on it
< sipa> or perhaps even in bitcoind
< morcos> wumpus: in general perf improvements shoudl include data in the PR request when possible I think
< wumpus> jtimon: it's kind of disappointing to discover this does what it says on the tin, save ~1/4th of block hash operations, just to discover that's not even important
< jonasschnelli> Can you not just use uiInterface.Progress() like in VerifyDB?
< wumpus> jtimon: that kind of annoys me
< sipa> jonasschnelli: that doesn't let you interrupt
< morcos> sipa: yeah i noticed that it doesn't even output to debug log that it's doing an upgrade, right?
< sipa> it should log
< sipa> "Upgrading database..."
< luke-jr> jonasschnelli: I'd think users should be able to send txs during the upgrade.
< morcos> oh
< jonasschnelli> luke-jr: but RPC is also not ready in this case? Right?
< gmaxwell> morcos: it logs, but you won't see it if you have leveldb logging turned on. :P
< sipa> nothing works during the upgrade
< sipa> there is no full node available
< jonasschnelli> why would you then need interruption?
< wumpus> it should at least show a progress indicator and be interruptible indeed, ideally
< sipa> because maybe it takes too long
< sipa> and they want to continue another time
< wumpus> yes, because the user may want to do something else with the computer, having not expected to have to spend a long time upgrading
< gmaxwell> by interruption this means 'crash', not continue running without the upgrade.
< jtimon> anyway, I would still like to benchmark it if anyone can help me (not sure what to do)
< jonasschnelli> Ah.. the update can be done in multiple steps..
< sipa> yes
< jonasschnelli> well that would also need communication then
< wumpus> gmaxwell: well 'exit' not crash
< gmaxwell> same difference
< morcos> sipa: yeah i missed that somehow, sorry
< sipa> crashing "should" be fine
< gmaxwell> it better be! :) (also, I've tested it a fair bit)
< jonasschnelli> [Updating 0%] \n You can shutdown and continue later [Shutdown-Button] <-- something like that?
< wumpus> jtimon: benchmarking the reindex chainstate maybe?
< gmaxwell> jonasschnelli: like that.
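A rough sketch of that interaction, assuming the upgrade loop can checkpoint between batches; uiInterface.ShowProgress and ShutdownRequested exist today, but the batching/resume behaviour and the helper name are assumptions:

```cpp
#include "init.h"          // ShutdownRequested()
#include "ui_interface.h"  // uiInterface.ShowProgress
#include "util.h"          // LogPrintf, _()

// Called from inside the per-batch upgrade loop (hypothetical placement).
// Returns false if the user asked to shut down; the upgrade would then stop
// after the current batch and resume on the next start.
static bool ReportUpgradeProgress(int percent_done)
{
    uiInterface.ShowProgress(_("Upgrading UTXO database"), percent_done);
    LogPrintf("Upgrading database... %d%%\n", percent_done);
    return !ShutdownRequested();
}
```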
< luke-jr> what if you crash, run the old version, and then the new one again?
< wumpus> jtimon: I still wonder why -stopatblock doesn't work
< gmaxwell> luke-jr: you get to keep both pieces? (I believe the old version will tell you the database is corrupt and stop there.... but I haven't tested that, so it's a good question.)
< sipa> wumpus: no; that's dominated by tx validation/deserialization... a good benchmark tests block validation given fully populated mempool and sigcache etc
< jonasschnelli> How about logging? Can we log similar to the VerifyDB [the non-newline % steps]?
< sipa> jonasschnelli: sure
< wumpus> sipa: ok...
< jonasschnelli> I can work on that. Seems to fit my skillset. :)
< luke-jr> gmaxwell: IMO users may get tired of waiting and prefer to start the upgrade over in exchange for being able to use the old version in the meantime
< jtimon> sipa: perhaps it could be as simple as using -upgradeutxodb=1 once or something of the sort?
< wumpus> sipa: good luck doing that in a deterministic way, though
< sipa> maybe VerifyDB or something like that can pass a bool saying "interruptible" ?
< gmaxwell> luke-jr: sounds like a case we should handle.
< sipa> jtimon: you can't not do it
< sipa> jtimon: i mean, -upgradeutxodb=0 would mean just exiting immediately :)
< wumpus> no, the upgrade needs to be done
< luke-jr> we could prompt before starting to warn the user, but IMO we'd probably want to begin ASAP, even if we hold off destroying backward compat until the user OKs it
< gmaxwell> meh.
< wumpus> sure, you could warn...
< gmaxwell> Keep in mind that on a reasonably fast computer the upgrade does not take long.
< jtimon> no, I mean once you set -upgradeutxodb=1 once it doesn't matter what you set anymore
< sipa> it's a new major release, i don't care about backward compatibility; the user can always reindex if they really need to
< morcos> so just to be clear, right now if someone went back to an old version, there is some chance they'd screw themselves right?
< morcos> (and need to reindex)
< wumpus> morcos: yes, it simply doesn't work. We should warn in the release notes.
< sipa> morcos: i believe that with very high probability the startup sanity check will fail
< luke-jr> pruned nodes cant reindex
< gmaxwell> morcos: old version rejects with a "your database is corrupted" though luke was asking an interesting question about what happens if you do this mid-conversion.
< wumpus> we could have added logic to 0.14.2 to detect the new format and give a more specific error
< morcos> sipa: yeah that was my question, how far do you have to get along in the upgrade before your sanity check would fail with high probability
< gmaxwell> sipa: I did test that case, how could it not fail?
< sdaftuar> we only go back 6 blocks now in the startup sanity check i think
< luke-jr> wumpus: IMO it'd be better to destroy the upgrade so it starts over, and work (for incomplete upgrades)
< sdaftuar> isn't it possible you don't come across any bad entries?
< gmaxwell> wumpus: hm? I don't think that would be the right way, the right way would be to make the new version more incompatible.
< wumpus> gmaxwell: ok...
< sipa> there is a trivial change we can make that guarantees old versions will treat it as an empty db
< luke-jr> change the dir name?
< sipa> haha
< wumpus> gmaxwell: I just mean a specific error like 'the utxo database is in a newer format than supported by this version' would be clearer than a general sanity check error
< sipa> yes, i've considered that too
< morcos> empty db == corrupt and leave it so the new version can continue
< gmaxwell> no there is just the height index entry.
< morcos> wumpus +1
< wumpus> but I don't know ,sorry for suggesting it
< gmaxwell> wumpus: oh, sure that would have been okay!
< morcos> that's at least something we should do for future changes
< sipa> wumpus: unfortunately it doesn't have a version number
< jtimon> luke-jr: we could move the main datadir to ./bitcoin/main maybe
< sipa> agree
< luke-jr> using a new db would also mean users from pre-obfuscation get that enabled..
< sipa> a new db is a possibility
< wumpus> sipa: we could have added one! but yes, too late.
< sipa> though i'd prefer to not need 2x disk storage during the upgrade
< sipa> not too late
< morcos> sipa: what was the trivial change you were referring to
< sipa> morcos: change the record name for the 'best block hash' entry
< gmaxwell> the database stores a record to indicate the current tip; we can just change that.
< sipa> and set the old one to something invalid
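A minimal sketch of that record rename using txdb-style keys; DB_HEAD_BLOCKS is an assumed name (the non-atomic flushing PR touches the same records), and the helper is illustrative only:

```cpp
#include "dbwrapper.h"
#include "uint256.h"

static const char DB_BEST_BLOCK = 'B';   // key pre-upgrade versions look for
static const char DB_HEAD_BLOCKS = 'H';  // assumed new key name

// Write the tip under the new key and drop the old one, so a pre-upgrade
// bitcoind sees "no best block" instead of misreading per-txout entries.
static bool WriteNewTipRecord(CDBWrapper& db, const uint256& tip)
{
    CDBBatch batch(db);
    batch.Erase(DB_BEST_BLOCK);
    batch.Write(DB_HEAD_BLOCKS, tip);
    return db.WriteBatch(batch);
}
```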
< luke-jr> [19:35:19] <sipa> though i'd prefer to not need 2x disk storage during the upgrade <-- IMO unavoidable if the "user gets tired of waiting and runs old version" is desired to work okay
< gmaxwell> FWIW the non-atomic flushing change already changes those records around.
< sipa> yup
< sipa> well, in a backward compatible way
< gmaxwell> I don't think we should support user gets tired of waiting.
< gmaxwell> too high a cost for too low a benefit.
< sipa> it's even easier if it doesn't need to be backward compatible
< morcos> gmaxwell: yes we shouldn't support it, but it might be nice to make sure that if they try it they don't corrupt their database, so that at worst they have to reindex
< gmaxwell> morcos: yes, in practice that is the case but it isn't guaranteed. We could make it guaranteed.
< wumpus> gmaxwell: well just exiting and continuing the upgrade later would be ok
< gmaxwell> wumpus: yea, that works except we don't have an exit button other than kill -9 :)
< sipa> wumpus: also, i don't understand why the CompactRange doesn't work for you
< sipa> if it's not guaranteed (or even not likely) to work, maybe the 2x disk storage is inevitable
< sipa> and we're better off explicitly creating a new db
< wumpus> sipa: I don't know either, could try it again if you think it's a fluke...
< sipa> i'll test it myself again too
< gmaxwell> I tested an earlier range compacting patch and it did work.
< gmaxwell> I'll also test.
< sipa> monitor disk usage during upgrade, with and without compactrange patch
< sipa> would be nice
< sipa> and maybe this discussion depends on the outcome there
< gmaxwell> if indeed compacting isn't reliable we should change to create a new database. IMO
< sipa> agree
< wumpus> it might have to do with my test framework, I'll copy the database directory next time instead of using my usual hardlink hack
< gmaxwell> (FWIW, the reason I checked compacting before is because I was going to suggest changing to a new database if it didn't work.)
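For reference, the leveldb call that the "compactrange patch" boils down to; how it is wired through Bitcoin's CDBWrapper is an assumption:

```cpp
#include <leveldb/db.h>

// After the old-format records have been erased, ask leveldb to rewrite all
// levels so the space held by deleted entries is actually reclaimed.
// Passing null for both ends compacts the entire key range.
static void CompactWholeDatabase(leveldb::DB& db)
{
    db.CompactRange(nullptr, nullptr);
}
```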
< wumpus> other topics?
< sipa> crc32 leveldb 1.20?
< gmaxwell> ^
< wumpus> #topic crc32 leveldb 1.20
< jtimon> gmaxwell: agreed on cost/benefit for supporting interruption, much easier to require confirmation with a warning "this may take a while and cannot be interrupted" or something
< gmaxwell> sipa: Why is travis failing?
< sipa> i personally really dislike the approach they're using (which seems to be the advised way even), which requires a separate object compiled with different flags
< sipa> gmaxwell: i assume incompatibility with our own build system integration of leveldb
< wumpus> compiling a separate object with special flags is pretty much the standard way of doing it
< sipa> they're even abusing it... they're calling the new object without knowing that the CPU supports it
< wumpus> ok, that's not good
< gmaxwell> Compiling a new object with different flags is standard and correct, calling it without knowing, is wrong.
< wumpus> yes
< wumpus> the detection function should not be in that object
< sipa> the approach in #10377 is so much easier
< gribble> https://github.com/bitcoin/bitcoin/issues/10377 | Use rdrand as entropy source on supported platforms by sipa · Pull Request #10377 · bitcoin/bitcoin · GitHub
< wumpus> simple as that
< wumpus> that only works because you don't spell out the instruction but put it in binary; if you spelled out the instruction it would have to be compiled with a special flag
< wumpus> you can't use e.g. sse4.2 instructions if the object isn't compiled with that
< gmaxwell> IIRC MSVC has banned inline ASM, if we don't care about MSVC then we're free to do other things. However, you need to do the object thing if you want to use code that accesses simd via intrinsics, for something like rdrand or clmul it's no big deal to use asm.
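A minimal sketch of the byte-encoded approach wumpus describes (x86-64 with GCC/Clang inline asm assumed; illustrative, not necessarily the exact #10377 code):

```cpp
#include <stdint.h>

// RDRAND is emitted directly as bytes, so this translation unit needs no
// special compiler flag; it must still only be called after CPUID reports
// RDRAND support.
static uint64_t GetRdRand64()
{
    uint8_t ok;
    uint64_t r;
    do {
        // 0x48 0x0f 0xc7 0xf0 = rdrand %rax; setc captures the carry flag
        __asm__ volatile(".byte 0x48, 0x0f, 0xc7, 0xf0; setc %1"
                         : "=a"(r), "=q"(ok) :: "cc");
    } while (!ok); // RDRAND can transiently fail; retry until it succeeds
    return r;
}
```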
< sipa> ok ok
< wumpus> gmaxwell: yes, intrinsics are more compatible
< cfields_> here!
< gmaxwell> so we need to fix leveldb to not call into that object without having done the detection. Anything else needed there?
< wumpus> I don't think leveldb's way is wrong, except for putting the detection function in the instruction-set specific object
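A rough sketch of the split being described, assuming GCC/Clang on x86-64; file and function names are illustrative, not leveldb's actual symbols:

```cpp
// crc32c_sse42.cc -- compiled with -msse4.2 into its own object; contains only
// code that the runtime check below has already approved.
#include <nmmintrin.h>
#include <stddef.h>
#include <stdint.h>
uint32_t AcceleratedCRC32C(uint32_t crc, const char* buf, size_t size)
{
    for (size_t i = 0; i < size; ++i)            // simplified byte-at-a-time update
        crc = _mm_crc32_u8(crc, static_cast<uint8_t>(buf[i]));
    return crc;
}

// port.cc -- compiled without special flags; the detection lives here so it is
// safe to run on any CPU and decides whether to call into the object above.
#include <cpuid.h>
static bool HaveSSE42()
{
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx)) return false;
    return (ecx & (1u << 20)) != 0;              // CPUID.01H:ECX bit 20 = SSE4.2
}
```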
< BlueMatt> oh, i was supposed to mention cfields_ would be late
< cfields_> heh, thanks
< jtimon> maybe we can open an issue on their repo or something?
< BlueMatt> you're welcome :)
< sipa> jtimon: the issue is in our own build system integration, i'm sure
< sipa> not upstream
< sipa> oh, you mean for calling the machine specific object? yeah
< wumpus> wouldn't upstream have the same problem then?
< gmaxwell> jtimon might have been referring to the detection in the wrong object, thats just wrong.
< sipa> yeah, agree
< sipa> if they ever look at issues...
< jtimon> gmaxwell: sipa right
< gmaxwell> we should submit a fix, it should be trivial.
< wumpus> yes better to submit a fix, submitting an issue likely won't help
< sipa> cfields_: oh!
< cfields_> or are you guys talking about something else?
< sipa> probably not
< sipa> let's try
< gmaxwell> okay, we have actions here.
< wumpus> lol <long discussion> oh, cfields did it already
< cfields_> that enables the runtime detection and separate object, but iirc there's still a little generic code that may end up with the wrong instructions
< sipa> cfields_: yup...
< gmaxwell> that just needs to be moved into another file, I expect.
< sipa> yup
< sipa> let's do that independently
< wumpus> yes, move all the code out of it except for the function itself
< gmaxwell> should be a trivial patch we can take locally and send upstream.
< cfields_> i even linked it on the PR! :)
< cfields_> heh, sorry I was late. Movers showed up at 3:00 exactly :\
< gmaxwell> We'll need to do the same thing for SSE4 (and sha2 instruction) sha2.
< jtimon> cfields_: that doesn't seem related, the meeting started at 21:00 :p
< wumpus> from what I remember that's actual assembly code from intel, in .s format, not using intrinsics
< gmaxwell> ah. okay!
< luke-jr> jtimon: 3:00 is meeting time for east USA
< jtimon> luke-jr: sure, bad joke, sorry
< cfields_> i suspect I missed this discussion too, but I have intrinsics done for sha2
< gmaxwell> I usually expect corporate code to be intrinsics because having good performance is not enterprise enough. :P
< wumpus> it's even in yasm format
< cfields_> wumpus: ah, i'll shut up :)
< gmaxwell> I'm very excited about getting the faster hash in, with all our other improvements, I'm now expecting an 8% IBD speedup.
< gmaxwell> if not more.
< wumpus> cfields_: you mean sha-specific instructions? the stuff I quote is from before that
< * jtimon> was used to the other notation and used to assemble with nasm, but I'm pretty sure we don't want that
< cfields_> wumpus: yea, intrinsics for intel sha1/2
< wumpus> it just implements a SHA using SSE4/AVX, so it works on older architectures..
< sipa> jtimon: ha, cool
< sipa> wumpus: no license issues?
< jtimon> sipa: not cool, I should have learned with the AT&T notation, no?
< gmaxwell> sipa: it's permissively licensed IIRC.
< cfields_> wumpus: ah, nice. I suspect that's where the leveldb discussion came from. We can re-use the build-time logic for sure.
< wumpus> sipa: it's either MIT or BSD
< sipa> jtimon: just swap the argument order :)
< gmaxwell> Before the metting closes, I'd like to remind people that Matt's caching change #10192 is a 31% block connection speedup. It's done, rebased current, and needs review.
< gribble> https://github.com/bitcoin/bitcoin/issues/10192 | Cache full script execution results in addition to signatures by TheBlueMatt · Pull Request #10192 · bitcoin/bitcoin · GitHub
< wumpus> #topic cache full script execution
< jtimon> sipa: yeah, I think that's mostly it, and some minor things I think, but I was too lazy to translate the programs I had
< gmaxwell> I don't know that there is too much to say about 10192. It saves a lot of operations when checking if an already validated tx is cached. It's fairly straightforward.
< sipa> yeah, let's do it
< wumpus> seems it had a lot of review, no (ut)ACKs though
< gmaxwell> I'm working on an ACK now. Just want other people to look, it sat for a long time needing rebase.
< wumpus> right
< morcos> on the list
< sdaftuar> +1
< wumpus> it's already on the high priority for review list
< gmaxwell> The non-atomic flush stuff is also still outstanding, but I think it might get slightly changed based on todays per txo discussion.
< sipa> cfields_: your commit (after a trivial cherry-pick) works
< wumpus> #endmeeting
< lightningbot> Meeting ended Thu Jun 8 20:00:33 2017 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
< gmaxwell> wumpus: thanks!
< luke-jr> #10512's travis failure is being weird :/
< gribble> https://github.com/bitcoin/bitcoin/issues/10512 | Rework same-chain from abusing DoS banning, to explicit checks by luke-jr · Pull Request #10512 · bitcoin/bitcoin · GitHub
< luke-jr> it seems like adding debugging logging is fixing the test
< sipa> ah, a heisenbu
< sipa> +g
< kanzure> heisentypu
< gmaxwell> aside, I have a fair amount of distaste for intrinsics; they're slightly more portable but for a long time they had poor performance because they left register allocation up to the compiler and it usually did it wrong. But newer compilers have been much better about register allocation around simd code.
< cfields_> luke-jr: double the debug logging, close as fixed.
< luke-jr> oh wait, I'm wrong. *rubs eyes*
< luke-jr> forgot I had started a bisect last night
< jtimon> I have the feeling that some who didn't like #10339 may not like #8498 either for similar reasons
< gribble> https://github.com/bitcoin/bitcoin/issues/10339 | Optimization: Calculate block hash less times by jtimon · Pull Request #10339 · bitcoin/bitcoin · GitHub
< wumpus> for ARM32, intrinsics were pretty bad, apparently there were some changes in ARM64 arch making them better suited for optimization by compilers
< gribble> https://github.com/bitcoin/bitcoin/issues/8498 | Optimization: Minimize the number of times it is checked that no money... by jtimon · Pull Request #8498 · bitcoin/bitcoin · GitHub
< jtimon> BlueMatt: morcos ^^
< gmaxwell> Sorry about earlier with jtimon's caching patch, I thought I had expressed the view that I thought the change was adding complexity -- on reread of my comment I can see that my effort to be more constructive (in the form of offering "why not just cache") backfired.
< gmaxwell> I don't really think the complexity it adds is that big a deal.
< wumpus> but if reducing the number of header hashing operations isn't worth it in the bigger scheme of things, it's a boondoggle
< jtimon> gmaxwell: I think you were clear: you preferred a cache, but since there were more comments and you didn't say anything else, I assumed it wasn't a strong preference
< sipa> wumpus: i'm not convinced it's not worth it
< cfields_> sipa: i can patch up leveldb to move all ops to that file, if you're not already working on it
< jtimon> is there a benchmarking tutorial somewhere ?
< sipa> wumpus: but it will (if at all) only be relevant for blocks with pre-cached utxos, pre-checked sigs, already loaded...
< sipa> which is what matters for relay, not reindex
< wumpus> sipa: well for what it's worth it also helped with the number of hashes in reindex
< sipa> wumpus: i very much doubt it will be significant if you could all SHA256 operations (as opposed to just block header hashes)
< sipa> *count
< jtimon> yeah, it should reduce the number of hashes when validating a block
< jnewbery> luke-jr: I've noticed an issue with zmq_test.py not exiting correctly. #10555 fixes that and also adds a timeout for individual tests (which means travis should show more useful output if one of the tests hangs). Try rebasing your PR on that and see if it helps.
< gribble> https://github.com/bitcoin/bitcoin/issues/10555 | [tests] various improvements to zmq_test.py by jnewbery · Pull Request #10555 · bitcoin/bitcoin · GitHub
< wumpus> well the point of the pull was to reduce header hashes, so that's what I counted
< morcos> jtimon: I
< sipa> i'm sure it reduces header hashes (and thanks for the concrete evidence for that)... but there are probably 2 orders of magnitude more hashes in computing merkle trees
< wumpus> but I don't care, won't spend any more time on it at least
< sipa> :)
< sipa> cfields_: that should go in our leveldb repo then
< sipa> ... or upstream
< morcos> jtimon: I'd need to look at 8498 more closely, but that one doesn't seem to me to add that much complication.. it might be a nicer layout.. but i need to spend more time thinking it through
< cfields_> sipa: sure, first one, then the other
< gmaxwell> jtimon: it wasn't a strong preference, I'm okay with that PR if pieter is okay with it. Though morcos seems to have stronger views.
< jtimon> morcos: thank you for the fast look, that already helps
< jtimon> gmaxwell: yeah, that's why I should learn to benchmark these things, I'm sure it isn't hard and it's just that I've never tried
< morcos> like i said, i personally don't love the direction that 10339 goes, but if other people have already done the review and like it, i don't think its worth further debate to stop it
< morcos> it seems like premature optimization that makes the code less easy to use to me, but not something terrible
< sipa> right - perhaps there is no noticable performance gain, in which case it's obviously not worth it
< sipa> i sort of assumed there would be, and if that's the case, i think passing along a hash of an object with the object itself is a very reasonable thing to do
< sipa> but perhaps all of this is premature
< wumpus> right
< wumpus> it's all up to finding a way to benchmark it then
< sipa> i believe we should perhaps postpone this until after #10192
< gribble> https://github.com/bitcoin/bitcoin/issues/10192 | Cache full script execution results in addition to signatures by TheBlueMatt · Pull Request #10192 · bitcoin/bitcoin · GitHub
< sipa> and then use the benchmark strategy that BlueMatt used there
< sipa> or use some replay that feeds transactions and blocks into bitcoind over p2p...
< jtimon> I'll go look at that benchmarking strategy
< gmaxwell> I think there is a simpler way to argue the performance: benchmark hashing a header, and then count how many hashes the change saves. Total saved time is no more than the product.
< wumpus> the playback would work using morcos's data
< jtimon> so how did you count that it was about 6?
< jtimon> wumpus: thanks
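A minimal sketch along the lines gmaxwell suggests, using the existing src/bench harness (the benchmark name and placement are illustrative): time one header hash, then multiply by the number of hashes the change saves per block to bound the total win.

```cpp
#include "bench/bench.h"
#include "primitives/block.h"

// Cost of a single 80-byte header hash (double-SHA256).
static void BlockHeaderGetHash(benchmark::State& state)
{
    CBlockHeader header;
    header.nNonce = 0;
    while (state.KeepRunning()) {
        header.nNonce++;   // vary the input so each iteration hashes fresh data
        header.GetHash();
    }
}
BENCHMARK(BlockHeaderGetHash);
```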
< bitcoin-git> [bitcoin] morcos opened pull request #10559: Change semantics of HaveCoinInCache to match HaveCoin (master...haveunspentincache) https://github.com/bitcoin/bitcoin/pull/10559
< gmaxwell> Am I alone in thinking we should have more comments in the code and commit message vs PRs? E.g. when I wanted to cite the multiply/shift hash trick for the mailing list, I first looked at the source -- no explanation of what it's doing, then the commit message that added the change, all it said was allow non-power-of-two cache sizes -- the PR text had the citation I wanted https://github.com/bitcoin/bitcoin/pull/9533
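For reference, the trick being cited maps a uniformly distributed 32-bit hash onto an arbitrary bucket count with a multiply and a shift instead of a modulo; a minimal standalone version (illustrative, not the exact #9533 code):

```cpp
#include <stdint.h>

// Take the high 32 bits of the 64-bit product: the result lies in [0, n) and
// is (approximately) uniformly distributed when x is uniform over 32 bits.
static inline uint32_t MapIntoRange(uint32_t x, uint32_t n)
{
    return (uint32_t)(((uint64_t)x * (uint64_t)n) >> 32);
}
```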
< wumpus> no, more (useful) source code comments would be nice
< wumpus> I'm always happy if people add them, or other documentation for that matter
< morcos> agreed, i've at least started to incorporate most of my PR messages in the commits
< morcos> but more source code commentary would be good too, although it sometimes gets a bit stale when a major PR changes a lot of pieces
< wumpus> yes that's the drawback, keeping comments up to date is extra maintenance work
< wumpus> ideal would be self-illustrating code
< wumpus> but uh, yeah
< luke-jr> gmaxwell: I agree
< luke-jr> instead of answering questions on PRs, we should interpret it as a bug that it's underdocumented ;)
< wumpus> I've tried, there's many times I've encouraged people to add a code comment instead of just mentioning the rationale in a PR/commit message
< wumpus> in any case being able to find it in the commit message or even PR is still better than if it was some discussion on IRC :)
< luke-jr> heh
< bitcoin-git> [bitcoin] practicalswift closed pull request #10343: Remove redundant on-the-same-line-repetition of type names (DRY): RepeatedTypeName foo = static_cast<RepeatedTypeName>(bar) (master...auto) https://github.com/bitcoin/bitcoin/pull/10343
< bitcoin-git> [bitcoin] practicalswift opened pull request #10560: Remove non-existing "force safe mode" (-testsafemode). Remove unused constants. (master...noop-modes) https://github.com/bitcoin/bitcoin/pull/10560
< bitcoin-git> [bitcoin] practicalswift opened pull request #10561: Remove duplicate includes (master...remove-duplicate-includes) https://github.com/bitcoin/bitcoin/pull/10561