< luke-jr> this seems way too premature; only 2% of the network's upgraded :/
< gmaxwell> unfortunately, the issue was made public.
< luke-jr> :/
< gmaxwell> it wasn't clear if it would propagate or die out, but since it was moderately easy to discover on your own, even just the rumors of it risked someone exploiting it.
< achow101> gmaxwell: luke-jr: there's an r/btc thread about someone finding the bug
< gmaxwell> achow101: it's referenced in the message. 'circulating'
< jamesob> Wow, that was fast
< promag> jnewbery: fyi #14283
< gmaxwell> should I PR the extended test case?
< luke-jr> gmaxwell: I'd wait
< gmaxwell> OK.
< luke-jr> did anyone mail the announcement ML about 0.16.3? I don't think I saw one..
< achow101> luke-jr: there was an announcement for it
< luke-jr> no
< nanotube> would it make sense to propose a 'contact' page on bitcoin.org similar to the one on bitcoincore.org? it appears it is non-trivial to find where to privately report security issues unless one knows to go to bitcoincore.org, since earlz had to come asking on #bitcoin-dev and -core-dev for a way to report.
< gmaxwell> nanotube: :( I feel really uncomfortable with people going to bitcoin.org for that kind of information.
< harding> nanotube: from any page on Bitcoin.org: top menu, Participate, Development, "to report an issue, please see the Bug Reporting page", then Responsible Disclosure.
< gmaxwell> but it's there
< gmaxwell> yea
< nanotube> yes not ideal, but people probably still go there unless they know to check bitcoincore.org... so, good that it has the bug reporting page in there somewhere.
< echeveria> there used to be bitcoin-security, but that was handled sort of poorly.
< echeveria> the contents of it ended up being published when someone stole the old satoshi email address.
< kanzure> hi. mailing list admin has hit a bug and is unusable at the moment.
< kanzure> We're sorry, we hit a bug!
< kanzure> Please inform the webmaster for this site of this problem. Printing of traceback and other system information has been explicitly inhibited, but the webmaster can find this information in the Mailman error logs.
< sipa> great.
< echeveria> I was looking at one of the public internet mapping tools for bitcoin core versions. there's a pretty disturbing number of hosts that have 8332 open.
< echeveria> is there some tool or setup guide that is telling people to open this port? I thought it was pretty difficult (not a single switch) to get Bitcoin Core to bind the RPC interface to 0.0.0.0.
< echeveria> of 8000 IPv4 nodes, 1142 have RPC port 8332 responding to a SYN.
< gmaxwell> maybe honeypots?
< gmaxwell> as you note, you must take extra steps to bind..
< echeveria> I don't think so. they're over a huge number of different hosts, old and new.
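[editor's note: for context, a minimal sketch of the kind of probe echeveria describes — checking whether a host completes a TCP handshake on the default mainnet RPC port 8332. The addresses and timeout below are placeholders for illustration, not anything referenced in the discussion.]

    #!/usr/bin/env python3
    """Check whether candidate node addresses answer on TCP 8332 (default mainnet RPC port)."""
    import socket

    RPC_PORT = 8332

    def rpc_port_open(host, timeout=3.0):
        """Return True if the host completes a TCP handshake on the RPC port."""
        try:
            with socket.create_connection((host, RPC_PORT), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        # Example input: addresses collected from a node-mapping tool (placeholders).
        for ip in ["203.0.113.1", "198.51.100.7"]:
            print(ip, "open" if rpc_port_open(ip) else "closed/filtered")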
< ken2812221_> MarcoFalke: Your gpg signing key has expired
< Jmabsd> where in the code does qualification and validation of a segwit transaction start?
< Jmabsd> mostly validation.cpp's AcceptToMemoryPoolWorker
< Jmabsd> Where is the code that checks the witness merkle root in the coinbase transaction?
< emilengler> If I download the linux tarball, will I be able to select a download path for the blockchain ?
< luke-jr> yes, if you know how or use the GUI
< luke-jr> better topic for #Bitcoin
< emilengler> Ok I will keep this in mind excuse me
< provoostenator> I used invalidateblock on a remote node to go back to ~ 475000, but lost the connection after a few hours. The last debug message is from an hour ago, an updatetip down to 508954. It's in a weird state.
< provoostenator> Memory usage was swinging between 5GB and 8GB. I was able to shut it down via rpc, though the last message was "net thread exit" which sounds like an unclean exit.
< provoostenator> Restarting the node, now it's "Replaying blocks", "Rolling back ... 542229" and down from there.
< sdaftuar> provoostenator: what version bitcoind was it?
< provoostenator> sdaftuar: v0.17.0rc2 (I was actually dumb enough to not upgrade it before doing this)
< provoostenator> I also don't know if invalidateblock is supposed to work for such a huge rollback. Though if not, then perhaps the documentation should warn against that.
< sdaftuar> well, i think we do want it to work
< provoostenator> Also, the RPC call is blocking. Does getting disconnected have any bearing on that?
< sdaftuar> no, the invalidateblock function should continue even after the rpc client disconnects, i think
< sdaftuar> i believe if you had waited long enough, it probably would have finished?
< sdaftuar> but it might be several hours
< provoostenator> The logs also suggest it continued for about 30 minutes after disconnecting.
< sdaftuar> disconnecting blocks is heavily disk-bound. when i last looked at it (on different hardware than i use today), i think i noticed i could disconnect on the order of 3-5 blocks/second, on average
< provoostenator> It got about halfway in 2-3 hours, so indeed it looked like it would have made it.
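[editor's note: rough arithmetic for the figures above: rolling back from the ~542,229 tip to ~475,000 is about 67,000 blocks, which at sdaftuar's 3-5 blocks/second estimate works out to roughly 3.5-6 hours, consistent with "about halfway in 2-3 hours".]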
< provoostenator> This is an iMac with SSD and plenty of memory.
< sdaftuar> we used to have an issue where the memory usage could grow sort of unbounded, as disconnected blocks would have their transactions added to the mempool
< sdaftuar> but that was fixed
< provoostenator> Yeah the weird thing I noticed is how dbcache kept growing as it was disconnecting.
< sdaftuar> but your comment about 5-8GB of memory has me slightly concerned
< provoostenator> The machine has 64 GB so it didn't run out.
< sdaftuar> what is -dbcache set to?
< provoostenator> 5000, so that's bad
< provoostenator> The mempool is just the default, so that shouldn't have grown so much, right?
< sdaftuar> yeah assuming the code works correctly, the mempool's memory usage would have been bounded pretty well
< provoostenator> Last log entry had cache=4650.7MiB
< sdaftuar> oh so that seems good then
< provoostenator> The 5-8 GB RAM usage was an hour after the last log entry, when I reconnected, found through "top".
< sdaftuar> alright well maybe this is all expected (crappy) behavior. i don't know of any clever ideas to speed up block disconnection, unfortunately.
< sdaftuar> maybe someone could implement https://github.com/bitcoin/bitcoin/issues/8037
< provoostenator> Maybe invalidateblock could have a "don't bother adding to the mempool" option?
< provoostenator> I just noticed I have txindex=1, so that could be another issue.
< sdaftuar> provoostenator: yeah that's fair but i suspect it would still take hours
< gmaxwell> provoostenator: it's not clear to me what you're saying you saw
< gmaxwell> provoostenator: was it still rolling back when you stopped it?
< gmaxwell> if it was then it just sounds like expected behavior.
< provoostenator> gmaxwell: rolling back was after I restarted (it's still doing that now).
< gmaxwell> I've rolled back all the way to block 0 many times, though not recently.
< gmaxwell> provoostenator: yea, it'll keep going until it finishes.
< kanzure> mailing list bug has been resolved; can someone send the post-mortem link to the mailing list subscribers plzkthx? like https://bitcoincore.org/en/2018/09/20/notice/
< sipa> kanzure: didn't BlueMatt send one?
< provoostenator> Before I disconnected (a few hours ago) it was doing "UpdateTip: new best ..." in reverse order, as expected from doing invalidateblock
< kanzure> i don't see one in the mod queue
< provoostenator> The logs show it kept doing that for 30 mins after I disconnected from the machine.
< provoostenator> When I logged back into the machine, bitcoind was still running, using 5-8 GB of RAM (it was actually going up and down in the space of minutes), but log wasn't updating. I then stopped it via rpc and restarted.
< provoostenator> So it seems it was still doing _something_, despite not logging.
< luke-jr> FYI, I still haven't gotten anything for 0.16.3/CVE from https://bitcoincore.org/en/list/announcements/join/ yet
< gmaxwell> provoostenator: what you're seeing without the logs is the atomic flush roll forward probably.
< luke-jr> sipa: ^ since you are one of the 3 who can apparently send those
< sdaftuar> provoostenator: gmaxwell: it does seem surprising i guess that an unclean shutdown happened?
< sdaftuar> how would that be possible if you just use rpc to stop the node?
< provoostenator> gmaxwell: what is an "atomic flush roll forward"?
< sdaftuar> provoostenator: on startup, we detect if the utxo state wasn't finished being written as of what we think our tip is.
< sdaftuar> in that situation, we have a rollback / rollforward mechanism to fix the utxo
< sdaftuar> by disconnecting blocks that are no longer on our chain, and replaying the blocks that might need applying to the utxo state
< sdaftuar> that should only happen after an unclean shutdown though
< gmaxwell> not to change subject but anyone know what this is? https://www.reddit.com/r/Bitcoin/comments/9hrusk/orhpan_blocks/e6e4zhk/?context=3
< gmaxwell> sdaftuar: indeed. I missed that the shutdown was supposed to be clean.
< provoostenator> "duplicate block" seems to mean that it already processed it, nothing to do with orphans.
< kanzure> *poke* postmortem email plzkthx
< provoostenator> Ok, so roll back made it to 509650 and all seems well. Except the node seems to have forgotten I invalidated block 485000, because it jumped right into IBD and is moving forward again.
< provoostenator> I guess that's because the node doesn't check the full block index at launch.
< provoostenator> Restarted, now using v0.17.0rc4, doing another invalidateblock. Memory usage is almost 2GB higher than cache= shown in the logs, and seems to outpace it.
< provoostenator> I've also turned off the index.
< gmaxwell> what are the actual dirty page counts?
< gmaxwell> we recently realized that OS cached pages in mmaped files show up in res.
< provoostenator> Also, it's still going even though bitcoin-cli stop said it would stop. I'll look at the dirty page counts...
< provoostenator> (note to self: do not google "top dirty pages")
< sipa> lol
< gmaxwell> oh sorry, pmap -x $(pidof bitcoind) | tail -n 1 | tr -s ' ' | cut -d' ' -f 4
< provoostenator> macOS doesn't have pmap, but vmmap gives me this summary: https://gist.github.com/Sjors/6b01711ccd0f96128c7db5230c85ae8f
< provoostenator> Also a long list of "mapped file", e.g. many "locks/index/*.ldb"
< gmaxwell> k, so ~2GB of your resident size is mapped files.
< gmaxwell> whats your dbcache setting?
< provoostenator> dbcache=5000 MB, the log currently says cache=2300 MiB, so that part makes sense?
< provoostenator> It's just the other 6 GB that needs explaining. Memory usage is now 10 GB. 38 more and the machine is going to OOM, which I'm not going to allow.
< provoostenator> MALLOC_TINY is now at 8.7 GB, so that seems to be the thing that's mooning.
< provoostenator> (actually this is still v0.17.0rc2, sorry, though hopefully that doesn't matter here)
< provoostenator> (no, it is v0.17.0rc4)
< provoostenator> kill had no effect, kill -9 did
< provoostenator> Getting fairly consistent behavior now, even with disablewallet=1. bitcoin-cli stop seems to stop the RPC server, but not the invalidation process. Curious if anyone can reproduce. I'll let it sync to the tip before trying again.
< sipa> if invalidateblock does not succeed, its state isn't written
< sipa> it first disconnects the blocks, and then marks them as invalid
< sipa> so at startup they will be connected again if bitcoind was killed in the middle
< provoostenator> That makes sense. I wonder if it matters that I was essentially interrupting IBD with that invalidateblock call. Memory usage seemed way worse than what I saw earlier today.
< sipa> invalidateblock also keeps a list of transactions to re-add to the mempool after the invalidation completes
< sipa> i assume that's the memory usage you see
< provoostenator> Is it also not abortable once in progress?
< sipa> no
< provoostenator> Ok, so in that case the way to roll back a long way would be to do it in smaller increments.
< sipa> right
< sipa> that should work
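[editor's note: a sketch of the incremental approach endorsed just above, using only standard RPCs (getblockcount, getblockhash, invalidateblock) driven through bitcoin-cli. The step size, target height, and datadir handling are assumptions for illustration; keeping each call short bounds per-call memory growth and stays well under the RPC client timeout.]

    #!/usr/bin/env python3
    """Walk the tip back to TARGET_HEIGHT in bounded steps instead of one huge invalidateblock."""
    import subprocess

    CLI = ["bitcoin-cli"]          # add -datadir / -rpc* options as needed
    TARGET_HEIGHT = 475000         # where the tip should end up (example value)
    STEP = 1000                    # blocks to disconnect per invalidateblock call

    def cli(*args):
        return subprocess.check_output(CLI + list(args), text=True).strip()

    height = int(cli("getblockcount"))
    while height > TARGET_HEIGHT:
        next_height = max(TARGET_HEIGHT, height - STEP)
        # Invalidating the block at next_height + 1 makes next_height the new tip.
        block_hash = cli("getblockhash", str(next_height + 1))
        cli("invalidateblock", block_hash)
        height = int(cli("getblockcount"))
        print("tip is now", height)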
< gmaxwell> sipa: the mempool usage is limited.
< sipa> gmaxwell: how?
< sipa> gmaxwell: it doesn't look like DisconnectedBlockTransactions enforces any memory limits
< gmaxwell> MAX_DISCONNECTED_TX_POOL_SIZE
< sipa> oh
< provoostenator> There's also this open issue: #9027
< gribble> https://github.com/bitcoin/bitcoin/issues/9027 | Unbounded reorg memory usage · Issue #9027 · bitcoin/bitcoin · GitHub
< sipa> yup
< sipa> i was expecting the code to be elsewhere, my bad
< provoostenator> I take great pride in doing stupid things that lead to a new release candidate, so hopefully you'll find something :-)
< gmaxwell> provoostenator: are you running txindex?
< provoostenator> No, I did the first time today, but turned that off in more recent attempts.
< provoostenator> Re incremental approach: I rolled back ~10,000 blocks using about 13GB of RAM, cache=2200 at the peak. Sounds like it's holding all transactions in memory.
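[editor's note: back-of-the-envelope check, not a measurement: blocks around that height average on the order of 1 MB serialized, so ~10,000 disconnected blocks kept alive in memory, plus deserialized-transaction and bookkeeping overhead, can plausibly reach the low tens of GB — consistent with the 13 GB observed.]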
< provoostenator> But then it gets weird. ERROR: AcceptToMemoryPoolWorker: Consensus::CheckTxInputs: ... bad-txns-premature-spend-of-coinbase, tried to spend coinbase at depth 92
< provoostenator> InvalidChainFound: invalid block [the block I invalidated]
< gmaxwell> thats normal.
< provoostenator> Yeah, but then it starts syncing again.
< provoostenator> Ok, now I think I destroyed my chain :-) At boot: "assertion failed: (!setBlockIndexCandidates.empty()), function PruneBlockIndexCandidates, file validation.cpp, line 2547"
< sipa> provoostenator: i found the issue
< sipa> it's specific to InvalidateBlock
< provoostenator> sipa: nice!
< provoostenator> sipa: is it because disconnectpool holds on to transactions which reference a shared_ptr<CBlock> pblock, so those don't get deallocated?
< sipa> provoostenator: no
< sipa> the event queue holds on to the shared_ptr<CBlock> objects in callbacks to BlockDisconnected
< sipa> and InvalidateBlock doesn't limit the size of the queue
< sipa> provoostenator: could you check whether this issue also occurs when Rewinding?
< sipa> create a 0.13.0 node, sync it to tip, and then upgrade to 0.17+
< sipa> i suspect it is, and if that's the case, i would consider it a release blocker
< provoostenator> That's the rewind that happens if you upgrade a non-segwit node to a segwit node?
< sipa> yup
< provoostenator> I'll give it a try this weekend or early next week. Getting a bit late here. Maybe someone else gets to it first.
< sipa> thanks!
< provoostenator> I'm not looking forward to doing another round of release notes for the 0.14 and 0.15 backports :-)
< MarcoFalke> About #14289, was it ever supported to call invalidateblock on a block very far back?
< gribble> https://github.com/bitcoin/bitcoin/issues/14289 | Unbounded growth of scheduler queue · Issue #14289 · bitcoin/bitcoin · GitHub
< sipa> MarcoFalke: i would say no, but it'd be a nice-to-have if it worked
< sipa> having invalidateblock 100000 blocks deep use a massive amount of memory is not a blocker, i think
< MarcoFalke> Ok, that was my impression, because every time I tried that it would lock the node up until I got impatient and Ctrl+C'd out
< MarcoFalke> If that is supported with reasonable memory guarantees, we should add a test/benchmark so it doesn't randomly regress
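[editor's note: a very rough skeleton of the kind of functional test suggested above, written against the framework in test/functional roughly as it existed around 0.17 (node.generate was still a node RPC then). It only checks that a deep invalidateblock/reconsiderblock completes correctly on regtest; a real test or benchmark would also need to bound memory and runtime, which is the property under discussion. Class name and block counts are placeholders.]

    #!/usr/bin/env python3
    """Sketch: deep invalidateblock on regtest completes and is reversible."""
    from test_framework.test_framework import BitcoinTestFramework

    class DeepInvalidateBlockTest(BitcoinTestFramework):
        def set_test_params(self):
            self.num_nodes = 1
            self.setup_clean_chain = True

        def run_test(self):
            node = self.nodes[0]
            node.generate(2000)                        # build a long regtest chain
            tip_height = node.getblockcount()
            target = tip_height - 1500                 # invalidate far back
            invalid_hash = node.getblockhash(target)
            node.invalidateblock(invalid_hash)
            assert node.getblockcount() == target - 1  # tip rolled back below target
            node.reconsiderblock(invalid_hash)
            assert node.getblockcount() == tip_height  # chain restored

    if __name__ == "__main__":
        DeepInvalidateBlockTest().main()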
< MarcoFalke> Also my key is de-expired, but I am having issues uploading it to keyservers.
< MarcoFalke> All of them return some obscure proxy error or timeout or ...
< gmaxwell> MarcoFalke: yes, it has worked fine since #9208.
< gmaxwell> the rpc will disconnect, because the rpc timeout isn't long enough for it to finish, but a node will happily work its way back to block 0.
< Murch> MarcoFalke: Luckily Keyservers may soonish be a thing of the past: https://wiki.gnupg.org/WKD