< gmaxwell> Our current addnode behavior is more than a little obnoxious. We won't bring up an addnode if it would leave us with more than the maximum outbound connections, even if none of our existing connections are addnoded. This means that if your network burps you can end up disconnecting all your addnoded peers, and then fill them back in with random peers.
< gmaxwell> Even when I sat around manually disconnectnode-ing outbounds that weren't my addnodes, it took quite a while before it tried an addnode rather than connecting something else.
< gmaxwell> I think the behavior should be: we should track whether peers are addnoded. If we have unconnected addnodes, and less than MAX connected addnodes, and less than MAX+1 total connected... try adding an addnode.
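The rule gmaxwell proposes can be sketched as a simple predicate. All names and the MAX value here are illustrative, not Bitcoin Core's actual identifiers:

```python
MAX_OUTBOUND = 8  # hypothetical limit for the sketch

def should_try_addnode(unconnected_addnodes, connected_addnodes,
                       total_connected, max_outbound=MAX_OUTBOUND):
    """Return True if we should attempt one more addnode connection:
    unconnected addnodes exist, fewer than MAX addnodes are connected,
    and we are below MAX+1 total connections."""
    return (unconnected_addnodes > 0
            and connected_addnodes < max_outbound
            and total_connected < max_outbound + 1)
```

Under this rule an addnode attempt is allowed to temporarily push the node one past the normal outbound limit, with eviction cleaning up afterwards.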
< btcdrak> gmaxwell: I have the same complaint. I just never articulated it.
< gmaxwell> I think we introduced this misbehavior around 0.9.x or something.
< gmaxwell> a long time ago addnode would just go over the connection limit.
< gmaxwell> ... and then separately, every N seconds, if we have >MAX outbound connections, pick one to evict. (where it would avoid evicting whitelisted and addnode peers, then ones that have recently sent us blocks, then txn, then prefer to keep long uptime).
< gmaxwell> The same eviction process could then be used to rotate the outbound connections. (just randomly try to connect to too many, and then let eviction do its thing.)
< sipa> gmaxwell: indeed, and we already have eviction code
< sipa> just adding a bool to select for incoming or outgoing would be a large part already
< gmaxwell> the current eviction code only evicts incoming. I think our strategy for outgoing should be somewhat different... in particular, we have far fewer outgoing.
< sipa> for rotation we'd need better logic
< sipa> but for preferring addnode over random outgoing maybe not so much
< midnightmagic> +1 for addnode prioritization :-)
< gmaxwell> yes. I think sort by whitelist/addnode/most-recent-to-send-block/dice would be fine for evicting with the addnode case.
< gmaxwell> kinda odd that it would let you partition it by kicking off all your useful peers in favor of some dumb addnodes you plugged in.. but ::shrugs::
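A minimal sketch of the eviction ordering gmaxwell describes (whitelist/addnode protected, then most-recent-to-send-block, then dice), assuming hypothetical per-peer fields `whitelisted`, `addnode`, and `last_block_time`:

```python
import random

def pick_outbound_eviction_candidate(peers):
    """Pick one outbound peer to evict: never a whitelisted or addnoded
    peer if any other candidate exists; among the rest, evict the peer
    whose last block was least recent, breaking ties randomly ("dice")."""
    candidates = [p for p in peers
                  if not p['whitelisted'] and not p['addnode']]
    if not candidates:
        return None  # only protected peers remain
    oldest = min(p['last_block_time'] for p in candidates)
    stale = [p for p in candidates if p['last_block_time'] == oldest]
    return random.choice(stale)
```

Running this periodically whenever the outbound count exceeds MAX is the "connect to too many, let eviction do its thing" rotation idea.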
< sipa> yes, if you had to be whitelisted in order to get an addnode, we could just have addnode bypass the connection limit
< sipa> maybe something for post host authentication
< gmaxwell> right.
< gmaxwell> well baby steps
< gmaxwell> sipa: could just be a feature of the p2p auth stuff, once we have that. If mutual auth is successful it bypasses limits.
< gmaxwell> (or at least a non-file descriptor related limit)
< gmaxwell> sipa: another corner case to consider: say you set 8 addnode peers. They all ban you.
< midnightmagic> can I ask for a ptr to where the p2p auth is being worked on?
< sipa> midnightmagic: blocked by first having encryption in; see bip 151
< midnightmagic> ok
< gmaxwell> I guess that banning example suggests having some kind of addnode backoff... and making sure the auto evict runs slow enough after the connect that it won't kick a good peer to make room for a connection that is banned.
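The backoff gmaxwell suggests might look like a standard capped exponential delay. The base and cap values here are placeholders, not anything from Bitcoin Core:

```python
def addnode_retry_delay(failures, base=60, cap=3600):
    """Seconds to wait before retrying an addnode peer that keeps
    failing (e.g. because it banned us): double the delay per
    consecutive failure, up to a cap."""
    return min(base * (2 ** failures), cap)
```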
< midnightmagic> I see a note of the mitm mitigation not being addressed and identity authentication: was this discussed somewhere that I missed?
< midnightmagic> (I would be happy with "sometime a few weeks ago in -wizards" or like, I don't mind doing my own legwork but would appreciate a general pointer)
< sipa> to address mitm, you need authentication
< sipa> jonas started on an authentication bip as well in parallel with bip 151
< sipa> but that was postponed to first get encryption done
< jonasschnelli> sipa, midnightmagic: there is some stuff from greg: https://people.xiph.org/~greg/auth1.txt
< jonasschnelli> Also the auth bip should probably be done before starting with the bip151 implementation
< midnightmagic> ok
< gmaxwell> I think it's fine to implement before auth is done, so long as whoever is implementing doesn't mind taking some small risk of disruption if changes to the auth design change the encryption.
< midnightmagic> gmaxwell: any thought to using proof-of-storage as ephemeral key ratchets? :-)
< midnightmagic> er. that was meant generally, not just to gmax. :)
< midnightmagic> and, would a group sig in log or constant space be possible for client group-membership proof non-interactively?
< gmaxwell> I don't think group sigs fit well in this context, unfortunately. The server can tell if you are peer X by basically giving you a dummy keyring where all the keys are junk except yours.
< gmaxwell> you can't tell that it did this to you if it was successful in deanoning you
< gmaxwell> (the log size group sigs also take linear time to sign/verify... we're not actually super bandwidth constrained in this application-- so I think a log sized one doesn't buy us much)
< gmaxwell> The constant sized ones use pairing crypto, so then there is a bunch of extra crypto to bring in.
< midnightmagic> thank you
< gmaxwell> the chaumtoken auth was the best private auth I could come up with so far.
< jouke> I'm running a node on ipv4 and onion. I have a lot of peers that have the same "addr" IP, and my addrlocal says: "127.0.0.1". Are those connected through Tor but advertising that IP? The subver of those peers differs as well.
< gmaxwell> if someone connects into you via onion, their addr will be 127.0.0.1:<something> and the addrlocal should be your onion address.
< gmaxwell> though they can send whatever they want in addrlocal.
< jouke> Does the addrlocal field in getpeerinfo specify which IP address that node is connected to?
< gmaxwell> No it is whatever address the remote party claims they think is your address.
< jouke> Oh right
< jouke> Thanks.
< jouke> And the addr field is the address my node communicates with?
< gmaxwell> yes, which is 127.0.0.1 for inbound onion peers.
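A tiny helper illustrating the heuristic above; it only assumes getpeerinfo's `addr` field (and, as gmaxwell notes, addrlocal can't be trusted, since the remote party chooses what to send there):

```python
def looks_like_inbound_onion(peer):
    """Return True if a getpeerinfo-style dict plausibly describes an
    inbound peer that reached us over our onion service: such peers
    show addr 127.0.0.1:<port>."""
    return peer.get('addr', '').startswith('127.0.0.1:')
```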
< jouke> I have 57 peers with the same addr IP :-/
< gmaxwell> presumably 52.51.something.
< jouke> No
< jouke> Those I have as well
< jouke> On other nodes
< jouke> It is actually a /32 IP
< midnightmagic> that sounds super annoying.
< gmaxwell> What do you mean by a "/32 IP" ... all IPs are /32... because they're hosts not networks. :P
< jouke> The bytessent/bytesrecv ratio isn't as skewed as with those 52.51
< jouke> gmaxwell: just one and the same IP, not multiple IPs from the same subnet.
< gmaxwell> I strongly recommend reporting abusive hosts to their providers, fwiw.
< jouke> Hetzner...
< midnightmagic> maybe it's a tor exit node or something
< gmaxwell> midnightmagic: well thats easily checked. :) (indeed, don't report tor exits)
< midnightmagic> Ah. Hetzner. Why am I not surprised. :-/
< jouke> I was thinking of having a script that would look at the upload/download ratio and ban nodes with a bad ratio. But wouldn't have worked in this case.
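The ad-hoc check jouke describes could be sketched like this, over getpeerinfo's `bytessent`/`bytesrecv` fields. The threshold is arbitrary, and an adversary can trivially adjust its traffic to evade it:

```python
def skewed_peers(peers, max_ratio=10.0):
    """Return the addrs of peers whose sent/received byte ratio exceeds
    max_ratio; `peers` is a list of getpeerinfo-style dicts."""
    flagged = []
    for p in peers:
        sent, recv = p.get('bytessent', 0), p.get('bytesrecv', 0)
        ratio = sent / recv if recv else float('inf')
        if ratio > max_ratio:
            flagged.append(p['addr'])
    return flagged
```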
< gmaxwell> you can do various ad-hoc things personally, but most don't work broader than that.. too easily avoided. the 52.x sybils could start sending junk traffic... and just waste more of your bandwidth.
< gmaxwell> They cause no harm except privacy loss, and the solution to that is to fix things so they can't hurt privacy.
< gmaxwell> though if they're irritating you, http://0bin.net/paste/EH8fTyfotJSU3Udw#TCHJfA7qDt5odgG+gW4dG9aUEYkJQo6oOTmsuXBAJke works.
< gmaxwell> ah seems that misses 52.51.180.197 and 52.51.204.60
< GitHub50> [bitcoin] jonasschnelli opened pull request #8135: [OSX] fix make deploy when compiling on OSX (master...2016/06/makedeploy) https://github.com/bitcoin/bitcoin/pull/8135
< jouke> gmaxwell: heh, thanks :)
< GitHub104> [bitcoin] jonasschnelli pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/715e9fd7454f...58725ba89dfc
< GitHub104> bitcoin/master 2692e1b fanquake: [Doc] Simplify OS X build notes...
< GitHub104> bitcoin/master 58725ba Jonas Schnelli: Merge #8029: [Doc] Simplify OS X build notes...
< GitHub56> [bitcoin] jonasschnelli closed pull request #8029: [Doc] Simplify OS X build notes (master...osx-build-notes) https://github.com/bitcoin/bitcoin/pull/8029
< GitHub85> [bitcoin] jonasschnelli pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/58725ba89dfc...ee1533e262e7
< GitHub85> bitcoin/master 16698cb UdjinM6: PR #7772 is not enough to fix the issue with QCompleter, use event filter instead of `connect`
< GitHub85> bitcoin/master ee1533e Jonas Schnelli: Merge #8129: Fix RPC console auto completer...
< GitHub132> [bitcoin] jonasschnelli closed pull request #8129: Fix RPC console auto completer (master...fixRPCAutoCompleter_bitcoin) https://github.com/bitcoin/bitcoin/pull/8129
< iniana> What can I do to debug the painfully slow sync of my core node on a linux machine? Currently at height 410k and it progresses around 1 block every 3 seconds on average.
< jonasschnelli> iniana: what processor? how much RAM? disk type?
< jonasschnelli> 1 block every 3 seconds is very likely disk speed or CPU.
< jonasschnelli> If you have enough memory, you should try to pass -dbcache=4000 (4GB)
< jonasschnelli> Even passing -dbcache=2000 can help
< jonasschnelli> (default is 300)
< iniana> jonasschnelli: blockchain is stored on hdd, dbcache was set to 2000. I have a feeling it is because it is run within a Qubes VM. Is there any way to measure disk I/O?
< jonasschnelli> iniana: I guess you can use iostat on linux
< jonasschnelli> iniana: did you enable multicore on your VM? But yes,.. running in a VM can slow down the whole thing.
< jonasschnelli> Best fix: don't stress it and wait a couple of days. :)
< iniana> jonasschnelli: yes, set to 8 VCPUs, 8GB max memory, 4GB initial memory. Am at height 410k, so should only have to wait a couple of hours :)
< jonasschnelli> iniana: yes. What CPU?
< jonasschnelli> On my Xeon/SSD I could sync a node in a VirtualBox VM within 3-4h.
< iniana> core i7, 2yo, not sure on exact specs
< jonasschnelli> iniana: try running Core with --debug=bench
< jonasschnelli> You will see if it's the disk latency or the ECDSA verification
< iniana> I managed to sync a pruned node on a whonix VM (tor-only) a lot faster than what I am experiencing now. Only difference is that was on SSD. But I have used the same HDD before and it was a lot faster then. Some combo of VM and HDD doesn't mix I guess.
< iniana> Ah great! Will try that debug flag
< phantomcircuit> iniana, does qubes pass the cpu flags?
< phantomcircuit> also what version are you running?
< iniana> phantomcircuit: What are the cpu flags? Am running Qubes 3.1, Core 0.12.1
< iniana> The following line seems to be the one that takes longest to generate: - Connect 647 transactions: 597.49ms (0.923ms/tx, 0.509ms/txin) [102.29s]
< iniana> also - Verify 4484 txins: 2407.61ms (0.537ms/txin) [104.76s]
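The -debug=bench lines iniana pasted can be parsed mechanically; this regex is inferred from the two example lines above only, not from any documented log format:

```python
import re

# Matches lines like:
#   - Connect 647 transactions: 597.49ms (0.923ms/tx, 0.509ms/txin) [102.29s]
BENCH_RE = re.compile(r'- (?P<what>.+?): (?P<ms>[\d.]+)ms .*\[(?P<total_s>[\d.]+)s\]')

def parse_bench_line(line):
    """Extract (label, milliseconds for this block, cumulative seconds)
    from a bench log line; return None if the line doesn't match."""
    m = BENCH_RE.search(line)
    if not m:
        return None
    return m.group('what'), float(m.group('ms')), float(m.group('total_s'))
```

Summing the per-label cumulative times over a run shows where the sync is spending its time.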
< wumpus> looks like most time is spent on verification
< wumpus> how many cores are allocated to that VM?
< wumpus> bitcoin core makes heavy use of parallelism for verification
< wumpus> ok 8 VCPUs should be fine
< iniana> Hmm.. There was a sequence of a couple of hundred blocks which were processed very quickly (10/s maybe) after which it resumed with 1 block per 3s again.
< sipa> are you reindexing or downloading?
< iniana> downloading
< sipa> that will explain the speed differences
< sipa> we download multiple blocks at once
< sipa> but they don't arrive in the right order
< iniana> Are transactions that can't be verified yet during sync added to the mempool or someplace else?
< sipa> so validation only progresses when the first missing one arrives
< sipa> you mean transactions in blocks we download? no
< sipa> the mempool only contains validated transactions
< iniana> ah ok
< sipa> say you're synced up to block 10000
< sipa> and you're downloading the 100 blocks that follow
< sipa> and 10001 through 10099 have arrived
< sipa> but 10000 has not
< sipa> then 10000 arrives
< sipa> and suddenly you can validate 100 blocks at once
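sipa's 100-block example can be simulated directly; this toy just tracks heights, not real block data:

```python
def process_arrivals(tip, arrivals):
    """Blocks arrive out of order, but validation only advances when the
    next height after the tip is present. Return the tip height after
    each arrival."""
    pending = set()
    tips = []
    for height in arrivals:
        pending.add(height)
        while tip + 1 in pending:   # consume any contiguous run
            pending.remove(tip + 1)
            tip += 1
        tips.append(tip)
    return tips
```

The burst iniana saw is the moment the missing block arrives and a buffered run validates all at once.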
< iniana> Still weird that it is progressing this slowly though. I mean, it was the same speed when reindexing a previously synced data directory I had copied over. It had all the data and did no downloading. How come there was that sequence of blocks that could be progressed so quickly?
< iniana> When I restarted bitcoin core recently the verification of the previous 288 blocks was also that slow.
< sipa> during reindex you also saw some sequence of blocks that were fast?
< sipa> not all blocks have the same number of transactions
< sipa> the initial 288 being slow is due to the cache not being warmed up
< sipa> the database entries still have to be loaded from disk
< iniana> no, during reindex I never saw a fast sequence, though I could have missed it.
< phantomcircuit> sipa, the p2p block fetching logic needs to be hardened against people being annoying
< phantomcircuit> there's some very very very slow nodes
< iniana> Sure, not all blocks are the same size, but from a couple of hundred blocks at height 411k, I bet most of them had a lot of transactions.
< phantomcircuit> but worse there's some that are ridiculously bursty
< phantomcircuit> they'll be fast for some blocks and then take 10 seconds to respond to another
< sipa> iniana: you have high dbcache set in both?
< iniana> sipa: yes, 2000
< sipa> whenever the dbcache fills up, it needs to be written to disk, and afterwards processing is slower for a while
< sipa> you can see when this happens when the cache size in the UpdateTip lines falls back to 0
< sipa> but that causes sudden slowness, not sudden fastness
< iniana> Cache currently at 1258MiB, and it is consistently slow.
< sipa> are you running with debug=bench ?
< iniana> yes
< iniana> sipa: Here is a snippet from my log http://pastebin.com/6jWv29uj
< sipa> all your time is lost on loading database entries
< sipa> either those are blocks that spend a lot of very old inputs that are not cached
< sipa> or your disk is very slow
< iniana> iostat reports 7.8M read and 2.1M write for that particular block device. Doesn't feel like it is the bottleneck.
< sipa> they're all very small reads
< sipa> what kind of disk?
< iniana> sipa: ok. I guess this is the performance I have to live with until I upgrade to an SSD?
< sipa> i have a spinning disk, and numbers are far better than what you're seeing
< sipa> 18s to load all the inputs is crazy
< iniana> yes, I have used the same disk on a non-virtual machine with much better results.
< sipa> ah, maybe it's due to the VM setup
< wumpus> yes VM can make some difference
< wumpus> there are some curious interactions between fdatasync and VMs for example
< wumpus> probably because the VM 'disk' is one file in the host OS
< sipa> wumpus: this is slowness in reads, not writes
< sipa> so sync is not the issue
< wumpus> reads shouldn't be affected by VM
< wumpus> not much, at least
< wumpus> unless it is some weird compressed image format
< iniana> I'm using Qubes (xen) if that helps
< wumpus> what can make a difference in i/o speed is to make sure that you're using a paravirtualized disk, and not e.g. emulating IDE
< phantomcircuit> iniana, 200+ms to load a block from disk is terrible
< iniana> Could be the way I attach the block device to my VM (qvm-block), will explore this further.
< wumpus> yes qemu allows configuring a lot of things related to emulated devices
< wumpus> I'd say it is only worth it if you intend to do a lot of syncing, I mean once your node syncs the i/o and CPU usage drops by lots and this all matters very little
< iniana> wumpus: I guess, but it would be annoying if, when I have to turn off the machine, I then have to wait for hours until it is synced up again when I turn it on. Plus it's fun getting to know how stuff works :)
< wumpus> sure, as a learning experience it makes sense
< sipa> iniana: at your sync speed, it takes around 1-2 hours to sync a week of blockchain data
< sipa> that's terrible
< sipa> some people sync the whole chain in 4 hours
< iniana> sipa: yes, it's very frustrating
< sipa> but i don't have any advice but checking your vm disk setup
< phantomcircuit> iniana, it's xen based
< iniana> phantomcircuit: yes
< phantomcircuit> i would say
< phantomcircuit> "good luck" fixing that issue
< phantomcircuit> :/
< iniana> You say I shouldn't waste my time?
< phantomcircuit> i'd say go ask the qubes people
< GitHub8> [bitcoin] jonasschnelli opened pull request #8136: Log/report in 10% steps during VerifyDB (master...2016/06/init_checkblocks) https://github.com/bitcoin/bitcoin/pull/8136
< iniana> How would attaching the disk as a samba or nfs share instead of via xen work out? Has anyone tried loading blockchain data from a remote server that way?
< sipa> iniana: no, you need mmap support
< sipa> network filesystems don't offer that
< phantomcircuit> sipa, do we still require mmap?
< sipa> leveldb does
< phantomcircuit> hmm
< sipa> i think
< kinlo> isn't requiring mmap a good thing from a performance point of view?
< sipa> yes
< phantomcircuit> "meh"
< sipa> i think you can disable it
< sipa> i think we may not use mmap on 32-bit
< wumpus> loading the blockchain data from a network filesystem should work, just not having the database directories there (if it doesn't support mmap)
< wumpus> I've succesfully used a network block device to store both though
< wumpus> haven't tried samba nor nfs
< jonasschnelli> wumpus: is there a startup argument to set the blockfiles directory?
< wumpus> I've also shared synced node data between windows and linux locally; linux' ntfs implementation surprisingly works well enough for leveldb
< jonasschnelli> Or do you need some ln -s?
< wumpus> no, but you can put stuff all over the place with symbolic links
< jonasschnelli> Right...
< wumpus> being able to specify different directories as arguments would be mildly useful, although for people that don't know what they're doing it's a foot shooting cannon
< wumpus> (e.g. we only acquire a lock on the data directory, not on the subdirectories individually, so if you can specify them elsewhere someone may do crazy things like point two bitcoinds at the same place)
< jonasschnelli> hmm.. travis is strange today:
< wumpus> scary
< wumpus> happens every time?
< sipa> try pressing more buttons
< wumpus> these are not even our own downloads, but travis pre-installing packages that fails
< wumpus> so less scary, probably just something with the repository that is misconfigured
< wumpus> dockerproject's package repository that is, not our repository
< GitHub10> [bitcoin] pstratem opened pull request #8137: Improve CWallet API with new AccountMove function. (master...2016-06-02-cwallet-accountmove) https://github.com/bitcoin/bitcoin/pull/8137
< wumpus> pushing more buttons doesn't seem to help
< MarcoFalke> sipa, but this would require including main.h in wallet.h?
< sipa> MarcoFalke: why in wallet.h?
< sipa> the constant would just disappear from the .h file
< sipa> wallet.cpp would call main to query the current relay fee
< MarcoFalke> Oh, you mean current relay fee and not minimum relay fee.
< sipa> even the minimum relay fee is configurable, so indeed :)
< sipa> or even the minimum confirmable feerate
< sipa> as we should never create outputs that are uneconomical to spend
< sipa> and the fee estimate is the best guess for that
< phantomcircuit> sipa, im seeing pull tester failures on master
< phantomcircuit> can you (or someone else) check?
< phantomcircuit> wumpus, ^
< wumpus> still the apt-get issue?
< wumpus> nothing we can do there
< wumpus> docker needs to wake up on this https://github.com/docker/docker/issues/23203
< sipa> MarcoFalke: also, unrelated... i don't think we should care much about header file dependencies (apart from making compilation time and memory larger, they don't hurt modularity)... what matters is dependencies between (.h,.c) pairs on each other, and wallet.cpp already depends on main.cpp
< phantomcircuit> wumpus, no im testing locally
< MarcoFalke> ok
< sipa> the reasoning being: you already can't use the functionality from wallet.{cpp,h} without having main.{cpp,h} available, so there already is a dependency
< jonasschnelli> MarcoFalke: sipa: what about introducing a nodemodel.{cpp/h} that could act as a wallet/node API?
< sipa> how is that different from clientmodel?
< jonasschnelli> clientmodel is GUI
< jonasschnelli> nodemodel would be core/wallet
< sipa> i don't understand
< sipa> clientmodel is the abstraction of the client/node that the GUI uses to communicate with core
< jonasschnelli> Main reason for this could be to have all node interaction in a single file (better readability, better source for later decoupling)
< jonasschnelli> Yes. It could also be clientmodel...
< sipa> ah, you mean a lower level
< jonasschnelli> yes.
< sipa> what sort of code would move there?
< jonasschnelli> Fee estimation, relay fee, etc.
< jonasschnelli> broadcast
< sipa> i wouldn't put wallet-related things there (as those shouldn't be lumped together long term)
< sipa> but if there are pieces of code shared between gui and rpc, it makes sense to move them to a common abstraction layer below
< wumpus> wallet.cpp can depend on main.cpp, but not the other way around
< wumpus> I'm not sure adding all node interaction in a single file is a step forward; we're trying to modularize things beyond that, e.g. net is different from blockchain handling
< jonasschnelli> Yes. I agree, moving it to a single file would only increase code readability, ... and not sure if this is worth a PR.
< sipa> wumpus: agree
< sipa> i think we're perhaps better off spending time to move code from RPC calls (especially the complicated code inside RPCs that grabs cs_main) to somewhere in the core code instead, so that the RPC is just a single call
< wumpus> clientmodel has way too many responsibilities
< wumpus> (but it's fine for the GUI, and was a big step forward from satoshi's stuff)
< wumpus> yes, that makes sense, ideally only main functions would lock cs_main
< wumpus> on the other hand at this point I'd not like moving things into main
< wumpus> main has the same issue
< wumpus> but after net refactors etc, sure
< wumpus> I don't think RPC functions necessarily need to be just a single call, what matters is that operations that need to happen under cs_main lock and act on main data structures would be better off in main, but these could be the minimal possible encapsulated operations
< sipa> wumpus: +1
< sipa> anything that grabs a lock should be moved to the module that manages that lock
< sipa> (in practice, that often means introducing callback functions etc... though)
< wumpus> aren't we happy with c++11 lambdas then
< sipa> i should learn to use those :)
< sipa> but yes
< wumpus> cfields taught me how to use them in Zurich
< wumpus> I'm impressed, the functionality is so flexible it seems almost un-c++
< wumpus> (e.g. how code in a lambda can use variables from the enclosing function is more reminiscent of dynamic languages)
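The dynamic-language behaviour wumpus is comparing to can be seen directly in Python, where closures capture enclosing variables implicitly (the rough analogue of a C++11 lambda capturing by reference):

```python
def make_counter():
    count = 0  # enclosing-scope variable captured by the inner function

    def increment():
        nonlocal count  # mutate the captured variable, not a local copy
        count += 1
        return count

    return increment
```

Each call to `make_counter()` yields an independent closure over its own `count`, just as each C++ lambda instance carries its own captures.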
< wumpus> phantomcircuit: if you run it locally you should have an error message?
< gmaxwell> GCC nested functions can do that too.
< phantomcircuit> wumpus, i have lots of error messages
< wumpus> didn't know that! there's not really precedent, c++ nested classes cannot access members from enclosing classes, like in java static inner classes
< phantomcircuit> im checking again after a git clean -fdx
< wumpus> phantomcircuit: so the autotests fails?
< wumpus> or the qa tests?
< phantomcircuit> wumpus, rpc pull tests fail
< phantomcircuit> make check tests pass
< MarcoFalke> which rpc-test?
< phantomcircuit> a bunch of the wallet related ones
< phantomcircuit> i'll clarify in a minute when im done recompiling
< phantomcircuit> and now everything passes
< phantomcircuit> >.>
< phantomcircuit> nvm
< GitHub38> [bitcoin] jonasschnelli opened pull request #8138: Add maximal amount-of-transactions limit to checkblock/CVerifyDB (master...2016/06/verify_db) https://github.com/bitcoin/bitcoin/pull/8138
< sipa> ping for review: https://github.com/bitcoin/bitcoin/pull/7948 (RPC: augment getblockchaininfo bip9_softforks data)
< sipa> ping for review: https://github.com/bitcoin/bitcoin/pull/7967 ([RPC] add feerate option to fundrawtransaction)
< sipa> phantomcircuit: are you seeing this:
< sipa> MarcoFalke: ^
< MarcoFalke> sipa: any zombie bitcoinds? ps -A |grep bitcoind
< sipa> nope
< MarcoFalke> reproducible?
< sipa> retried the test walletbackup.py itself and got a different error
< sipa> now trying rpc-tests.py again, and it seems to run ok
< sipa> at least it doesn't instantly fail like before
< MarcoFalke> Have you been running ./rpc-tests.py in parallel?
< MarcoFalke> (This doesn't work)
< sipa> no
< sipa> if you mean multiple instances of rpc-tests.py simultaneously, no
< MarcoFalke> jup
< GitHub121> [bitcoin] sipa pushed 4 new commits to master: https://github.com/bitcoin/bitcoin/compare/ee1533e262e7...ec45cc5e2766
< GitHub121> bitcoin/master 84c13e7 Wladimir J. van der Laan: chain: Add assertion in case of missing records in index db
< GitHub121> bitcoin/master 6030625 Wladimir J. van der Laan: test: Add more thorough test for dbwrapper iterators...
< GitHub121> bitcoin/master 269a440 Matt Corallo: Add test for dbwrapper iterators with same-prefix keys.
< GitHub78> [bitcoin] sipa closed pull request #7992: Extend #7956 with one more test. (master...16-5-7956-update) https://github.com/bitcoin/bitcoin/pull/7992
< GitHub30> [bitcoin] sipa closed pull request #8051: Seemingly fix walletbackup.py failure (master...fixwalletbackup) https://github.com/bitcoin/bitcoin/pull/8051
< GitHub20> [bitcoin] sipa opened pull request #8139: Fix interrupted HTTP RPC connection workaround for Python 3.5+ (master...fixwalletbackup) https://github.com/bitcoin/bitcoin/pull/8139
< GitHub100> [bitcoin] mrbandrews opened pull request #8141: Continuing port of java comparison tool (master...ba-comptool) https://github.com/bitcoin/bitcoin/pull/8141
< BakSAj> hi
< wumpus> hi
< BakSAj> bitcoin core live meeting now?
< wumpus> yes
< BakSAj> good :-)
< sipa> yes
< BakSAj> any outlook on merging segwit to master?
< wumpus> #startmeeting
< lightningbot> Meeting started Thu Jun 2 19:00:23 2016 UTC. The chair is wumpus. Information about MeetBot at http://wiki.debian.org/MeetBot.
< lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
< luke-jr> aww, too fast
< luke-jr> I was going to propose a topic very unseriously explicitly outside the meeting. :P now it's time to be serious
< petertodd> hi
< wumpus> I guess segwit is the recurring topic, any other topic proposals?
< gmaxwell> jtimon: Cory: morcos: sdaftuar: btcdrak: phantomcircuit: jonasschnelli:
< jonasschnelli> Here!
< sipa> cfields_!
< gmaxwell> I can give some updates on compactblock testing, since it seems that Matt isn't around (or if he shows up, I expect he can)
< wumpus> great
< wumpus> #topic segwit review
< wumpus> let's start with that then
< sipa> My plan right now is making the BIP9/GBT changes, removing segnet, removing the positive witness flag, and then creating a parallel rebase
< sipa> which has a clean history but leads to the same tree
< sipa> if that is something wanted at this point
< wumpus> 111 comments on PR #7910, that must be a record
< luke-jr> "positive witness flag"?
< sipa> luke-jr: originally, there was a flag to the serialize code to say "serialize witnesses!"; at some point we realized that the opposite was better, as almost always you want to serialize witnesses, and there are just a few exceptions
< sipa> so i introduced both temporarily, and required that exactly one of both was set
< luke-jr> oh, I thought having it explicit was a good idea
< sipa> luke-jr: one reason for the opposite is that a failure where you miss the positive flag will not be detected usually, but failure to set a negative flag will
< sipa> luke-jr: i thought the same thing initially
< instagibbs> sipa, are you removing new wallet functionality as well?
< sipa> instagibbs: ?
< luke-jr> sipa: what downsides are there to the current explicit flag?
< sipa> 21:06:22 < sipa> luke-jr: one reason for the opposite is that a failure where you miss the positive flag will not be detected usually, but failure to set a negative flag will
< instagibbs> I assume you don't want users to have segwit addresses pre-rollout
< petertodd> luke-jr: indeed, I might have very well written it with a separate CWitnessTx class :)
< luke-jr> sipa: I don't understand that answer. :<
< instagibbs> (unless that would still be a testing branch)
< sipa> luke-jr: if we use a positive flag only, and we miss setting that flag somewhere, it won't easily be detected, as wherever that data goes, it will likely also accept tx without witness
< luke-jr> right, I'm talking about where we have both positive and negative flags..
< sipa> ah, that is also an option
< sipa> it's just more code changes scattered all over the place
< luke-jr> less likely to have it accidentally wrong, though
< gmaxwell> With segwit in place there really isn't much further concern about getting the wrong flags, it should only have been an issue with the migration where support had to be added in many places (like RPCs) that might have less test coverage.
< gmaxwell> If you get it wrong in the future, the thing you changed simply won't work right. And hopefully you'll be testing the thing you just changed.
< gmaxwell> I don't have a strong or strongly held opinion however.
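sipa's argument about flag polarity can be illustrated with a toy serializer (field names are invented for the sketch): with a negative flag, a caller that forgets to pass anything still emits the witness, so the rare witness-stripping cases must be explicit and a mistake produces visibly witness-less output.

```python
def serialize(tx, without_witness=False):
    """Toy serializer with the *negative* flag convention: witness data
    is included by default and must be explicitly excluded."""
    data = list(tx['inputs'])
    if not without_witness and tx.get('witness'):
        data += tx['witness']
    return data
```

With the opposite (positive-flag) convention, forgetting the flag silently drops witnesses, and the receiving code would likely accept the witness-less transaction anyway, so the bug goes undetected.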
< luke-jr> we have a lot of outstanding PRs that may need to be updated even if they don't conflict
< jtimon> ack on cleaning history while removing the testnet
< wumpus> instagibbs: well in principle, master is a testing branch
< sipa> there are a few things we need in first, though
< petertodd> jtimon: you mean segnet?
< wumpus> jtimon: you mean removing the segnet? don't make him remove testnet too :)
< jtimon> petertodd: I believe sipa meant removing segwit's testnet
< gmaxwell> sipa: in any case, what do you think will let you get the code merged sooner?
< sipa> #7749 and #8083
< wumpus> but yes ACK on cleaning up the history before merge
< gmaxwell> I like cleaning history for sure.
< gmaxwell> Future me thanks you.
< sipa> well it definitely has to happen before merge
< jonasschnelli> As said, we can ACK the sha256sum of the diff (if someone cares about integrity of an ACK)
< sipa> the question whether the time is now for that
< luke-jr> (GBT VB as well)
< sipa> jonasschnelli: git internally has tree hashes
< jonasschnelli> sipa: Nice. That should work.
< sipa> So please review #7749 and #8083
< luke-jr> did we get anywhere with the fundrawtransaction issue?
< sipa> luke-jr: i can't remember the conclusion there
< wumpus> q: is everything in the segwit PR meant to go in at once, or will it be split up into, say, a pull with consensus/network changes, following up with wallet and GUI changes
< jonasschnelli> Re 8083: im finalizing the seeder part (nits and filterbitmap-whitelist)
< wumpus> #action review #7749 and #8083
< luke-jr> sipa: IIRC, 1) it's a problem; 2) we won't change consensus code to fix it
< luke-jr> I don't know of any actual resolution to the problem :/
< gmaxwell> wumpus: If sipa does the "rewrite history to result in the same code" thing, then if it is split into PRs they will need to go in one right after another to preserve the "same code" property.
< gmaxwell> (I think)
< sipa> wumpus: that's a good question; the history is broken into sections already
< sipa> so we can decide later
< sipa> though... maybe not
< gmaxwell> I suppose sipa should show "same code" at one point, then split, and the remaining parts could then change.
< sipa> the unit/rpc/p2p tests rely on the wallet code
< wumpus> gmaxwell: I don't have a strong opinion either way, if this change is sufficiently atomic it makes sense to do it at once, but see e.g. instagibbs's comment about wallet addresses
< wumpus> maybe hide it behind an option in the beginning?
< gmaxwell> My recommendation for the wallet parts was to just hide the changed functionality there behind a hidden configuration option that the tests could use.
< jtimon> well, the wallet code can maybe be introduced disabled for users with a constant to be removed later or something?
< sipa> maybe we can disable addwitnessaddress before the softfork
< wumpus> gmaxwell: right
< sipa> that would make sense
< wumpus> jtimon: hey we're all saying the same :)
< jtimon> wumpus: yes
< sipa> yay, agreement
< sipa> i have another question
< gmaxwell> sipa: what is the meaning of life?
< sipa> 42
< jonasschnelli> :)
< gmaxwell> thats an answer, not a question!
< luke-jr> he has both the answer and a question
< gmaxwell> We're going to need to build a bigger computer...
< sipa> jl2012 brought up the issue that our witness program definition is limited to 16 versions, and that is not easy to extend without introducing a new witness storage
< sipa> there is an easy solution, by allowing witness programs to be slightly larger
< sipa> but this is a hard forking change
< luke-jr> sipa: it is? I thought the version could be any length?
< sipa> which, if done unconditionally, could hardfork testnet
< sipa> luke-jr: nope, has to be OP_0 through OP_16
< luke-jr> :/
< luke-jr> why limit it?
< jtimon> then it would be something to move to the later hardfork, no?
< sipa> jtimon: i don't like relying on that
< gmaxwell> Oh ... how'd that happen? in any case, you can simply use 0..16 and then use another byte after that one.
< luke-jr> jtimon: we cannot assume there will ever be a hardfork.
< sipa> gmaxwell: max 32 bytes can follow
< gmaxwell> okay thats broken.
< * luke-jr> doesn't see the purpose of restricting the witness-triggering sPK like that
< jtimon> well, the plan was to deploy segwit as a softfork
< sipa> jtimon: changing it is a HF with respect to the current SW code
< gmaxwell> My assumption was that it would be arbritary and extended by just adding more bytes when the space ran out.
< sipa> jtimon: not with respect to bitcoin
< sipa> jtimon: so we can change it before merge while leaving SW a SF
< jtimon> sipa: oh, I see, it would still be a SF to bitcoin. ok
< sipa> gmaxwell, luke-jr: the reason was that constant size utxos are nice
< gmaxwell> sipa: testnet segwit rules can be changed so activation doesn't begin until later.
< gmaxwell> So at worst that would be a reindex for testnet nodes, no?
< sipa> gmaxwell: ... segwit is already activated on testnet
< gmaxwell> yes, so?
< luke-jr> sipa: they're already non-constant size.
< sipa> luke-jr: ?
< gmaxwell> sipa: move it forward, reindex.
< jtimon> testnet4 ?
< gmaxwell> luke-jr: he means in the future.
< gmaxwell> sipa: in any case, 16 is unacceptably few.
< sipa> agree
< sipa> gmaxwell: hmm, ok
< instagibbs> The bip doesn't read like it would be a HF, but maybe I'm being obtuse
< gmaxwell> I don't think constant size is as interesting as constant bound. so if you wanted to say that the version was limited to N bytes for some small N that would be fine. 4294967296 versions should be enough for anyone.
< luke-jr> gmaxwell: in the future, if we softfork out the current long sPKs, we can softfork a limit on witness sPK length as well
< sipa> fair enough, will do
< sipa> limit to 36 or 40 bytes or so
< jtimon> instagibbs: it would be a HF only for testnet3 which has already deployed "old segwit"
< instagibbs> jtimon, no I mean, anything where version is 1+ has no consensus meaning yet
< gmaxwell> sipa: cool (also, testnet behavior has never been in a merged, much less released, version; I don't mind breaking it)
< instagibbs> when it's not simply 0
< sipa> instagibbs: the problem is that something of the form "1 + 33 bytes" is not treated as a witness program, and is not allowed to have any witness data
< sipa> we can extend the definition of a witness program, but it would require a new witness structure
< sipa> as old nodes won't relay witness data with something they don't treat as a witness output
< sipa> (and that rule is necessary to prevent spam)
< sipa> anyway, ok, will just change the rule
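For reference, a minimal sketch of the witness-program rule being settled on here, as it was later specified in BIP141 (a version byte OP_0 through OP_16 followed by a single direct push of 2-40 bytes). This is illustrative Python, not Bitcoin Core's script interpreter:

```python
# Sketch of the BIP141 witness program definition: version byte
# OP_0..OP_16 followed by one direct push of 2-40 bytes. Anything
# else is not a witness program. Illustrative only.

OP_0 = 0x00
OP_1 = 0x51
OP_16 = 0x60

def witness_program(script: bytes):
    """Return (version, program) if script is a witness program, else None."""
    # shortest: version byte + push(2) = 4 bytes; longest: version + push(40) = 42
    if len(script) < 4 or len(script) > 42:
        return None
    ver_op = script[0]
    if ver_op != OP_0 and not (OP_1 <= ver_op <= OP_16):
        return None
    push_len = script[1]
    # the program must be a single direct push of 2..40 bytes
    if push_len < 2 or push_len > 40 or len(script) != push_len + 2:
        return None
    version = 0 if ver_op == OP_0 else ver_op - OP_1 + 1
    return version, script[2:]

# P2WPKH shape: OP_0 <20-byte hash>
assert witness_program(b'\x00\x14' + b'\x11' * 20) == (0, b'\x11' * 20)
# the "1 + 33 bytes" case from the discussion qualifies under the widened rule
assert witness_program(b'\x51\x21' + b'\x22' * 33) == (1, b'\x22' * 33)
# a 41-byte program is too long
assert witness_program(b'\x00\x29' + b'\x33' * 41) is None
```

This captures why the change was needed: under the earlier "max 32 bytes can follow" rule, a `1 + 33 bytes` output would not be treated as a witness program at all, and so could carry no witness data.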
< sipa> other topics?
< wumpus> yes
< luke-jr> old nodes won't relay anything with witness data they can't verify anyway..
< sipa> (or more segwit review related points)
< wumpus> #topic compactblock testing
< sipa> luke-jr: old nodes in this case being witness v0 nodes
< instagibbs> luke-jr, I'm not convinced, but I think fixing now is better anyways so whatever
< luke-jr> sipa: yes, I also mean these
< wumpus> @gmaxwell
< luke-jr> IMO make any length limit a relay policy only.
< sipa> we'll discuss further in #segwit-dev
< luke-jr> k
< instagibbs> ack
< jtimon> compackblock
< gmaxwell> Okay. So matt has built a parallel relay network that uses compact blocks and the UDP network block coding stuff.
< sipa> i rebased BlueMatt's compactblock patch on top of the shared_ptr mempool change, and gmaxwell and i have been successfully running it across the internet
< sipa> ^ and that's more interesting
< gmaxwell> ^ a number of people-- including some large miners-- are running both Sipa's rebase, and Matt's PR without the rebase on public nodes.
< gmaxwell> Which are also connecting to matt's nodes.
< wumpus> good to hear
< gmaxwell> (I got bored with the simulated topology lab testing, this is potentially more interesting)
< sipa> 2016-06-02 18:36:13.412286 Successfully reconstructed block 0000000000000000014a6a924544dd097d538314281bebd95c89e50726e7d2ef with 1 txn prefilled, 2708 txn from mempool and 0 txn requested
< sipa> 2016-06-02 18:37:46.728092 Successfully reconstructed block 000000000000000001943282946e9579fd84def95df577ebb8bcda3b8d9f4d06 with 1 txn prefilled, 1529 txn from mempool and 0 txn requested
< sipa> 2016-06-02 18:43:32.713909 Successfully reconstructed block 000000000000000000499e1485726cd87003e7a6400118f8c061171748665612 with 1 txn prefilled, 1102 txn from mempool and 3 txn requested
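The log lines above show the compact block flow working: the sender prefills a few transactions (at minimum the coinbase) and sends short IDs for the rest; the receiver fills in from its own mempool and requests only what's left. A toy sketch of that flow — real BIP152 short IDs are 6-byte SipHash values with per-block keys; truncated SHA256 here is just a stand-in to keep the example self-contained:

```python
# Toy sketch of compact block relay in the spirit of BIP152.
# Real short IDs are per-block-keyed SipHash; truncated SHA256 is
# used here only so the example runs standalone.
import hashlib

def short_id(txid: bytes) -> bytes:
    return hashlib.sha256(txid).digest()[:6]

def make_compact(block_txids, prefill_indexes=(0,)):
    """Sender side: prefill some txns (e.g. the coinbase), short-ID the rest."""
    prefilled = {i: block_txids[i] for i in prefill_indexes}
    shortids = [short_id(t) for i, t in enumerate(block_txids)
                if i not in prefill_indexes]
    return prefilled, shortids

def reconstruct(prefilled, shortids, mempool_txids):
    """Receiver side: fill from mempool, return what must still be requested."""
    by_short = {short_id(t): t for t in mempool_txids}
    from_mempool, missing = [], []
    for sid in shortids:
        if sid in by_short:
            from_mempool.append(by_short[sid])
        else:
            missing.append(sid)
    return from_mempool, missing

prefilled, shortids = make_compact([b'coinbase', b'tx1', b'tx2', b'tx3'])
filled, missing = reconstruct(prefilled, shortids, [b'tx1', b'tx2'])
# 1 txn prefilled, 2 txn from mempool, 1 txn requested -- the same shape
# as the "Successfully reconstructed block" log lines above
assert len(prefilled) == 1 and len(filled) == 2 and len(missing) == 1
```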
< wumpus> yes, always good to test with real world data as well
< gmaxwell> I don't know that there is much to report, it's working as expected. :) Sipa's rebase on the sharedptr is pretty nice.
< instagibbs> gmaxwell, the rebase includes the predicted transactions?
< BakSAj> nice
< gmaxwell> As it also eliminates transaction duplication from the relay pool, and eliminates a fair bit of transaction copying.
< wumpus> is this branch available somewhere?
< sipa> instagibbs: it only sends the coinbase directly
< instagibbs> sipa, ah k
< gmaxwell> instagibbs: no, right now I don't believe anyone is running something with fancy prediction. I think we'll leave that out in the initial PR. Easily added later.
< wumpus> #link sipa's rebase of compactblocks on top of PR #8126: https://github.com/sipa/bitcoin/commits/compactblocks
< wumpus> #action review PR #8126
< gmaxwell> if other developers here are interested in running nodes connected to these nodes, lemme know and I'll give you addresses to connect to.
< wumpus> I'm interested
< gmaxwell> I should take an action to setup a couple on published addresses too, for people to connect to without asking. :)
< wumpus> yes, that always works best :)
< wumpus> any other topics?
< luke-jr> is sipa's rebase different enough that we ought to switch to reviewing that instead?
< gmaxwell> Yes, though they may get DDOS attacked, which is harmless but would waste time sorting out the issue. :)
< sipa> luke-jr: it's less code than the original :)
< wumpus> gmaxwell: you mean thoroughly stress-tested :)
< * gmaxwell> stabs
< gmaxwell> Does anyone know the current CPFP status?
< wumpus> #topic current CPFP status
< sipa> gmaxwell: review #7598
< luke-jr> afaik no show-stoppers found, but more review needed; there's a dep PR to get in first though
< wumpus> #action review #7598
< sipa> it's a blocker/dependency for CPFP (#7600)
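For context, the child-pays-for-parent idea under review in #7600: a low-fee parent becomes worth mining when a high-fee child spends it, so block candidates are scored by the feerate of a transaction together with its unconfirmed ancestors rather than its individual feerate. A hedged sketch — the transaction fields and numbers below are illustrative, not Bitcoin Core's data structures:

```python
# Sketch of ancestor-package feerate scoring, the core of CPFP mining.
# Tx representation here (dicts with fee/size/parents) is invented for
# illustration; Bitcoin Core tracks ancestor state incrementally.

def package_feerate(tx, txs_by_id):
    """Feerate of tx together with all its unconfirmed ancestors."""
    seen, stack = set(), [tx["id"]]
    fee = size = 0
    while stack:
        tid = stack.pop()
        if tid in seen or tid not in txs_by_id:
            continue  # already counted, or ancestor is confirmed
        seen.add(tid)
        t = txs_by_id[tid]
        fee += t["fee"]
        size += t["size"]
        stack.extend(t["parents"])
    return fee / size

mempool = {
    "parent": {"id": "parent", "fee": 100,  "size": 200, "parents": []},
    "child":  {"id": "child",  "fee": 5000, "size": 150, "parents": ["parent"]},
}
# Alone, the parent pays only 100/200 = 0.5; scored as a package with its
# child it pays (100 + 5000) / (200 + 150), so the pair gets mined together.
assert package_feerate(mempool["parent"], mempool) == 0.5
assert round(package_feerate(mempool["child"], mempool), 1) == 14.6
```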
< gmaxwell> I've been struggling a bit with too many PRs outstanding at once that I want to test.
< wumpus> #link CPFP is that like 'strong AI should be here in less than 20 years'
< wumpus> EH
< luke-jr> :<
< wumpus> (sorry, copy paste messup)
< gmaxwell> Merging them is a pain. (thanks to sipa for merging a lot of things recently!)
< sipa> i've been going through all PRs... there are so many decent-but-not-quite-finished ones in the queue...
< wumpus> I've lost overview a bit
< wumpus> any PRs that should be close to be able to be merged?
< wumpus> sipa: yes, I've noticed
< jonasschnelli> IMO https://github.com/bitcoin/bitcoin/pull/7957 can be merged (safe, tool only)
< gmaxwell> Mostly my minor difficulty there just comes from many things touching the same parts, and a lot of that was actually my fault. (e.g. opening three PRs at once that all conflicted with each other. :) )
< luke-jr> I'd like to see key generation type merged so there's no risk of other wallet upgrades conflicting it since these wallets are in the wild
< sipa> 17:19:13 < sipa> ping for review: https://github.com/bitcoin/bitcoin/pull/7948 (RPC: augment getblockchaininfo bip9_softforks data)
< sipa> 17:20:03 < sipa> ping for review: https://github.com/bitcoin/bitcoin/pull/7967 ([RPC] add feerate option to fundrawtransaction)
< wumpus> jonasschnelli: agree, but that one is probably not blocking anything
< sipa> also 7997
< jonasschnelli> I'm requesting permission to merge [docs] and [tools] PRs
< wumpus> jonasschnelli: sure
< gmaxwell> sounds fine, I know we sometimes don't get enough review on those, esp docs. Please feel empowered to nag people to review things.
< jonasschnelli> Okay. Will try to focus on trivial docs PRs, so wumpus and sipa can take care about the corish ones.
< wumpus> the problem with doc pulls is usually that they get review comments but the author disappears for large spans of time
< sipa> luke-jr: agree
< sipa> luke-jr: will do final review and reack
< cfields_> luke-jr: that looked good to me last time I checked. I'll re-review as well.
< sipa> also #7825 is good
< sipa> and #7942
< jonasschnelli> Also have a look at https://github.com/bitcoin/bitcoin/pull/7946 (could speed up things a little bit)
< wumpus> #7942 has an unaddressed comment by sipa
< sipa> tiny nit :)
< jonasschnelli> nit is nit!
< wumpus> that's not always clear to me whether it should block merging
< wumpus> (I usually at least wait for the author to respond)
< sipa> the author is active, he probably just missed it
< sipa> jonasschnelli just pinged
< wumpus> ok good
< luke-jr> oh, topic: 0.11.next
< jonasschnelli> luke-jr, I guess we already discussed the 0.11 maintenance?
< wumpus> ok
< wumpus> #topic 0.11.next
< luke-jr> jonasschnelli: ?
< sipa> 0.11 goes to critical fixes only when 0.13 is released, right?
< jtimon> luke-jr: 0.11.next is supposed to include csv but not segwit, right?
< jonasschnelli> I had in mind we BP to 0.12, 0.13 and only security stuff to 0.11?
< luke-jr> jtimon: unless it gets delayed until segwit is merged, I guess
< wumpus> is there any urgent reason to do a 0.11 release?
< luke-jr> sipa: unless someone decides to maintain it longer
< luke-jr> wumpus: CSV support, at least
< wumpus> right
< jtimon> wumpus: is there any reason not to do it while things are backported and tested?
< gmaxwell> looking at the network I'm not seeing any evidence of need to maintain 0.11 extensively, also we called for people running older versions in operations and got crickets in response, AFAIK.
< * jonasschnelli> checking the seeder
< wumpus> jtimon: developer overhead
< sipa> jtimon: is it tested?
< jtimon> wumpus: well, that's luke-jr's problem, isn't it?
< luke-jr> gmaxwell: we did? I don't recall seeing the "call for", and I know for a fact that Eligius relies on 0.11 for now
< jonasschnelli> 88 0.11 nodes
< jtimon> sipa: I don't know, I'm not reviewing or testing myself, but if luke-jr gets review and testing...
< sipa> luke-jr: perhaps the time is better spent in upgrading eligius then :)
< luke-jr> jtimon: which is unlikely, hence the history of git/rc only stable stuff :P
< luke-jr> sipa: probably.
< gmaxwell> esp since master will hopefully have CPFP soon.
< luke-jr> jonasschnelli: what?
< wumpus> yes, interest in older major versions is extremely low
< btcdrak> there is a backport PR for 0.11 for CSV etc. but we sort of semi decided back then it was not urgent and much more risky.
< luke-jr> I see 1768 0.11 nodes.
< btcdrak> the BIP68 backport in particular is complex
< jtimon> my only point is that we shouldn't use the "we only promise to maintain the last 2 versions" as an artificial limitation beyond review and testing. If people are interested in working on that...
< gmaxwell> well in particular, interest in _updates_ for old versions is low...
< wumpus> gmaxwell: yes that's what I mean...
< luke-jr> we should remove the promise if we're not going to uphold it.
< wumpus> jtimon: the problem is that people are not interested
< gmaxwell> what promise?
< jtimon> well "promise"
< gmaxwell> also, not "not going"-- not able.. without people to test you really can't provide good releases.
< jtimon> wumpus: by people you mean users or reviewers/testers?
< gmaxwell> not doing someone a service to put out a barely tested update.
< wumpus> I mean if luke-jr really wants to handle a release by himself I'm not going to protest
< gmaxwell> ^ agreed.
< luke-jr> https://bitcoincore.org/en/lifecycle/ mentions something; whether promise or able or not, it should be updated if we can't do it
< btcdrak> the CSV backport PR was https://github.com/bitcoin/bitcoin/pull/7716
< btcdrak> we did pretty much decide not to merge it.
< luke-jr> wumpus: I can't get testing/reviews by myself.
< jonasschnelli> luke-jr: sorry,.. wrong file: we have 743 0.11x nodes and 1786 0.12.x nodes... so yes. CSV for 0.11 makes sense.
< luke-jr> I can maintain stable branches, but releases seem unlikely to work out at this point
< wumpus> but there's so much happening on master right now, and 0.13 release is near, I can't promise I will be able to spend any time on it (except gitian building / uploading executables)
< jonasschnelli> Yes. 0.11 is certainly not something for wumpus (would be a waste of his time)
< gmaxwell> Without people interested in using it, we can't get much platform qualification which is a lot of the difference between a branch and a release.
< wumpus> so if you want to do this: please create your own 0.11 branch, tag, do the release notes etc
< sipa> jtimon: i have 1093 'good' 0.11 nodes
< sipa> eh, jonasschnelli:
< jonasschnelli> good is not good enough...
< jonasschnelli> cat dnsseed.dump | grep " 100.00% 100.00% 100.00% " | grep "Satoshi:0.11." | wc -l
< sipa> anyway, besides the point
< sipa> we can't do releases without people interested in running and testing
< wumpus> yes, meeting time over
< sipa> oh, that too
< wumpus> #endmeeting
< lightningbot> Meeting ended Thu Jun 2 19:59:24 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
< jtimon> maybe we can merge it into the 0.11 branch without doing a release?
< wumpus> jtimon: sure. luke-jr can run his own 0.11 branch, I could just take that over
< gmaxwell> could be done, I suppose. But thing is: soft forks are more or less safe to run without, but a broken update may be less safe.
< gmaxwell> not that I think there is a lot of risk there.
< jtimon> and if it ever gets tested, release, otherwise there's never a 0.11.next
< wumpus> well if no one runs it, it doesn't create much risk either :)
< gmaxwell> Good point.
< luke-jr> wumpus: should I continue to maintain stable branches in a separate repo (the old one was on Gitorious which is dead, so a new location needs deciding if so), or would it make sense to do it on the main repo now (would require push access for me, at least in the stable branches)?
< luke-jr> [20:00:18] <gmaxwell> could be done, I suppose. But thing is: soft forks are more or less safe to run without, but a broken update may be less safe. <-- my main concern with backporting of the recent softforks
< wumpus> luke-jr: well you don't strictly need push access; I could just pull the 0.11 branch from your repo when you say so
< jtimon> can't we just merge PRs in the 0.11 branch in the main repo?
< btcdrak> luke-jr: morcos specific concern was the safety of BIP68 backport
< luke-jr> wumpus: that'd work too I guess
< btcdrak> the backports were done in #7716
< jtimon> or that, I'll shut up, you people coordinate
< btcdrak> The codebase is significantly different between 0.11 and 0.12 with regards to BIP68
< luke-jr> it probably would make sense to have a separate repo for stable in general, though, so we can't accidentally confuse PRs against the wrong branch
< wumpus> at least the github merge script now automatically gets the branch to merge against from github
< gmaxwell> luke-jr: is there any thing I could help do to get eligius off of 0.11 and to 0.12.1 (and maybe master with CPFP?)
< luke-jr> gmaxwell: my current target is Knots 0.12.1 including CPFP there, so the big step is backporting CPFP in a way that can be turned on/off (which AIUI, the CPFP dep makes easier); I don't think there's a good way to divide this work, however
< gmaxwell> luke-jr: ugh.
< gmaxwell> I don't see what benefit you get from 0.12.1 with such a large and invasive backport.
< luke-jr> as opposed to everything else in master? :x
< gmaxwell> which has a lot more eyes on it, and in the mining relevant codepaths, the changes for CPFP are probably the bulk of the changes.
< gmaxwell> also, look ahead a bit, and you'd have to forward port that backport onto segwit.
< luke-jr> hmm
< BakSAj> i understand the need for backporting in enterprise software, where upgrades might get messy, but what exactly is the point in btc, where the process is quite simple...?
< luke-jr> not sure it'd be less work to forward-port Knots to master either, though
< luke-jr> BakSAj: consensus systems carry even more risk in upgrades, than enterprise
< gmaxwell> BakSAj: there are many large businesses that use Bitcoin too, and some that have complex customizations.
< sipa> luke-jr: what other kinds of changes does Knots have?
< gmaxwell> BakSAj: unfortunately, though we know these deployments exist-- we seldom hear from people involved with them.
< luke-jr> sipa: http://bitcoinknots.org/ has the full list, but relevant to Eligius.. *skims over list* #7149, #7533, and spamfilter; so maybe easier than backporting CPFP
< BakSAj> i wonder if they do patching
< BakSAj> otherwise its just a waste of dev that can go to master version
< luke-jr> would be nice to get #7149 reviewed and merged finally.
< btcdrak> This was the summary regarding the backports, for the record https://bitcoincore.org/en/meetings/2016/03/31/#softfork-backports
< BakSAj> ok, thanks for clarification
< BakSAj> i hope 0.12.1 will activate soon, so you guys can move on with segwit and get lightning finally
< BakSAj> anybody happen to know when e.g. f2pool moves to 0.12.1 ?
< GitHub51> [bitcoin] sipa pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/ec45cc5e2766...f972b04d63eb
< GitHub51> bitcoin/master 0bf6f30 Pedro Branco: Prevent multiple calls to ExtractDestination
< GitHub51> bitcoin/master f972b04 Pieter Wuille: Merge #7825: Prevent multiple calls to ExtractDestination...
< GitHub30> [bitcoin] sipa closed pull request #7825: Prevent multiple calls to ExtractDestination (master...enhancement/prevent-multiple-calls-extractdestination) https://github.com/bitcoin/bitcoin/pull/7825
< GitHub138> [bitcoin] sipa pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/f972b04d63eb...a82f03393a32
< GitHub138> bitcoin/master 9805f4a Kaz Wesley: mapNextTx: use pointer as key, simplify value...
< GitHub138> bitcoin/master a82f033 Pieter Wuille: Merge #7997: replace mapNextTx with slimmer setSpends...
< GitHub69> [bitcoin] sipa closed pull request #7997: replace mapNextTx with slimmer setSpends (master...setSpends) https://github.com/bitcoin/bitcoin/pull/7997