< fanquake>
Qt for WebAssembly "Tech Preview". I guess they were listening to our IRC discussion..
< bitcoin-git>
[bitcoin] achow101 opened pull request #13307: Replace coin selection fallback strategy with Single Random Draw (master...srd-fallback) https://github.com/bitcoin/bitcoin/pull/13307
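For context on the PR title, here is the general idea of Single Random Draw as a minimal, hedged C++ sketch (not achow101's actual code in #13307): shuffle the wallet's spendable outputs and add them one at a time until the target amount is covered. Amounts are plain integers standing in for satoshi values.

// Sketch only: the general idea of Single Random Draw coin selection;
// not the implementation in bitcoin/bitcoin#13307.
#include <algorithm>
#include <cstdint>
#include <cstdio>
#include <random>
#include <vector>

using Amount = int64_t;  // stand-in for satoshi amounts

// Returns a randomly drawn subset of utxos whose sum covers target,
// or an empty vector if the whole set is insufficient.
std::vector<Amount> SingleRandomDraw(std::vector<Amount> utxos, Amount target)
{
    std::shuffle(utxos.begin(), utxos.end(), std::mt19937{std::random_device{}()});
    std::vector<Amount> selected;
    Amount total = 0;
    for (Amount value : utxos) {
        selected.push_back(value);
        total += value;
        if (total >= target) return selected;
    }
    return {};  // not enough funds
}

int main()
{
    std::vector<Amount> wallet{5000, 12000, 300000, 7500};
    for (Amount v : SingleRandomDraw(wallet, 10000)) {
        std::printf("selected input: %lld\n", (long long)v);
    }
}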
< jonasschnelli>
roasbeef: Thanks! Is there a paper or a specs? Or just the code?
< bitcoin-git>
[bitcoin] martinus opened pull request #13309: Faster unit tests: directly operate with CMutableTransaction (master...SignSignature-with-CMutableTransaction) https://github.com/bitcoin/bitcoin/pull/13309
< eshays>
hey guys, my Bitcoin Core program didn't close correctly and upon relaunch I'm stuck on replaying blocks at 0%
< eshays>
in the debug log I'm being shown that it's currently rolling forward numbers
< eshays>
but I've had it running for 24 hours and it's still on 0%
< booyah>
corruption of a kind that makes the recovery function hang forever? eshays: can you attach gdb to it (gdb -p $(pidof bitcoind)) and see which function it is in?
< booyah>
eshays: in gdb, command: "thread apply all bt" (and press enter). Pastebin the result
< eshays>
I'm sorry, I'm pretty unaware of all the terms, where do I find gdb?
< booyah>
eshays: sorry, forgot you are on Windows. In this case just pastebin some portion of debug.log from the end (check there are no confidential IP addresses or BTC addresses). After that, imo, shut down the node, make a copy of the entire Bitcoin data directory to have a snapshot of the corrupted files, and start again to see if it's the same
< eshays>
start the whole sync again?
< booyah>
<eshays> version v0.16.0 running Windows 10; 2TB hard drive with at least 1.5TB free (and include this information if you post a GitHub issue)
< booyah>
eshays: after saving debug.log, shut down the node fully (e.g. kill the process) and start it again. But I think it will try to continue; it should not redownload the chain from scratch
< eshays>
if I'm closing and opening debug.log and it is progressing with new entries, is that a good sign?
< eshays>
2018-05-23 09:46:41 Rolling forward 0000000000000000001f0ffad4402279360668ba5b071ee3206971199bd732f7 (492215)
2018-05-23 09:46:50 Rolling forward 0000000000000000006d7c1822119613361d0fdb4594ec68990a37d1c42472da (492216)
2018-05-23 09:46:57 Rolling forward 00000000000000000046d5cabb24f80c5f60e614cb9450b14f5f86548dbac7af (492217)
2018-05-23 09:47:03 Rolling forward 0000000000000000001a5530ee669c4438fa706d53f8e5135f8256488df66
< booyah>
eshays: looks like progress. current block is 523985.
< eshays>
I had been syncing with maybe 15000 blocks left when it closed.
< echeveria>
yes, if you close it improperly you partially lose sync progress.
< booyah>
the bug seems to be that it was stuck for 24 hours, right? What exactly do you mean by stuck, eshays?
< eshays>
it appears to be continually rolling forward in the debug log but never moving off 0%
< booyah>
eshays: the progress information is in the GUI, right?
< eshays>
yes it says "replaying blocks...
< eshays>
"press q to shutdown"
< eshays>
"0%"
< booyah>
maybe the GUI rolling forward message should include (last block: %d)
< eshays>
thanks anyway guys, I'm gonna let it sit and continue rolling forward.
< booyah>
eshays: let us know on #bitcoin if it all worked out, see you
< eshays>
will do
< luke-jr>
I don't think the rolling forward reports any progress info
< booyah>
luke-jr: he pasted above that it does?
< booyah>
"00000000000000000046d5cabb24f80c5f60e614cb9450b14f5f86548dbac7af (492217) 2018-05-23 09:47:03 Rolling forward"
< luke-jr>
that's the debug log
< luke-jr>
I'm talking about the % reported to the GUI
< luke-jr>
so it will say 0% the whole time until it finishes
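A minimal sketch of the kind of fix being discussed: report a percentage to the GUI from inside the replay loop instead of leaving it at 0% until the end. BlockIndexStub and the show_progress callback below are simplified stand-ins for Bitcoin Core's CBlockIndex and GUI progress signal, and this is not necessarily how promag's #13310 (announced further down) implements it.

// Sketch only: stand-ins for Bitcoin Core types; the real change would
// live in ReplayBlocks() in validation.cpp.
#include <cstddef>
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

struct BlockIndexStub { int nHeight = 0; };

// Stand-in for the GUI progress callback (title, percent).
using ShowProgressFn = std::function<void(const std::string&, int)>;

void ReplayOneBlock(const BlockIndexStub&) { /* roll one block forward */ }

void ReplayBlocksWithProgress(const std::vector<BlockIndexStub>& to_replay,
                              const ShowProgressFn& show_progress)
{
    const std::size_t total = to_replay.size();
    for (std::size_t i = 0; i < total; ++i) {
        ReplayOneBlock(to_replay[i]);
        // Push progress after every block so the GUI moves off 0%.
        const int percent = total ? static_cast<int>(((i + 1) * 100) / total) : 100;
        show_progress("Replaying blocks...", percent);
    }
}

int main()
{
    std::vector<BlockIndexStub> blocks(5);
    ReplayBlocksWithProgress(blocks, [](const std::string& title, int pct) {
        std::printf("%s %d%%\n", title.c_str(), pct);
    });
}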
< provoostenator>
Why (and where) is dbcache wiped after each prune event? IIUC prune deletes block storage files, while dbcache is mostly the UTXO set, which doesn't change.
< luke-jr>
provoostenator: prune events cause dbcache to get flushed out to disk, so you don't need to [potentially] replay the blocks you're about to prune
< provoostenator>
Is that process described in more detail anywhere?
< provoostenator>
luke-jr: what do you mean by replaying a block? Is that in the event of a crash?
< luke-jr>
that's the main difference between dbcache and flushed to disk, yes
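A minimal sketch of the ordering constraint luke-jr is describing, with hypothetical FlushChainstateToDisk and DeleteBlockFiles helpers standing in for Bitcoin Core's real flush and pruning code: the cached UTXO state has to be durable on disk before the block files it was built from are deleted, because recovering from a crash without that flush would mean replaying blocks that are no longer available locally.

// Sketch only: hypothetical helpers, not Bitcoin Core's validation.cpp.
#include <cstdio>
#include <vector>

struct BlockFile { int number; };

void FlushChainstateToDisk()
{
    // Write the in-memory UTXO cache (dbcache) out to the chainstate DB.
    // After this, a crash can resume from the flushed state without
    // re-reading any block data.
    std::printf("chainstate flushed\n");
}

void DeleteBlockFiles(const std::vector<BlockFile>& files)
{
    for (const BlockFile& f : files) {
        std::printf("pruned blk%05d.dat\n", f.number);
    }
}

void PruneWithFlush(const std::vector<BlockFile>& prunable)
{
    // Order matters: flush first, prune second. If the files were deleted
    // before the flush, crash recovery would need to replay blocks that
    // no longer exist on disk.
    FlushChainstateToDisk();
    DeleteBlockFiles(prunable);
}

int main()
{
    PruneWithFlush({{0}, {1}, {2}});
}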
< provoostenator>
Would it make sense to have a --I_feel_lucky flag that doesn't flush cache during IBD and just starts from scratch if there is a crash?
< luke-jr>
I doubt it would make a huge performance difference
< luke-jr>
what might help is doing some kind of CoW of the dbcache, and flushing it to disk in parallel rather than pausing the sync for it
< provoostenator>
Indeed only if it made a big difference. I'm still trying to figure out why a larger dbcache and bigger prune events are slower than master.
< provoostenator>
I don't think it's the duration of the flush events, though I haven't checked.
< provoostenator>
CoW?
< provoostenator>
Maybe some way to write to disk what's needed, but not remove it from memory?
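One way to read the CoW (copy-on-write) suggestion, and provoostenator's "write to disk but keep it in memory" variant, as a rough sketch: snapshot the cache under a lock, hand the snapshot to a background thread to write out, and let validation keep mutating the live cache in the meantime. Everything below (the map, DiskWrite, the locking) is hypothetical illustration, not Bitcoin Core's actual CCoinsViewCache flush, which runs synchronously.

// Sketch only: a copy-on-write style background flush of an in-memory
// cache; the types and DiskWrite() are hypothetical.
#include <cstdint>
#include <cstdio>
#include <mutex>
#include <thread>
#include <unordered_map>

using Outpoint = uint64_t;  // stand-in for COutPoint
using CoinData = int;       // stand-in for a Coin entry

std::unordered_map<Outpoint, CoinData> g_cache;  // the live "dbcache"
std::mutex g_cache_mutex;

void DiskWrite(const std::unordered_map<Outpoint, CoinData>& snapshot)
{
    // Pretend to batch-write the snapshot to the chainstate database.
    std::printf("flushed %zu entries in the background\n", snapshot.size());
}

std::thread FlushInBackground()
{
    // "Copy" step: take a consistent snapshot under the lock...
    std::unordered_map<Outpoint, CoinData> snapshot;
    {
        std::lock_guard<std::mutex> lock(g_cache_mutex);
        snapshot = g_cache;
    }
    // ...then write it out without pausing validation, which can keep
    // updating g_cache while the flush runs.
    return std::thread(DiskWrite, std::move(snapshot));
}

int main()
{
    {
        std::lock_guard<std::mutex> lock(g_cache_mutex);
        g_cache[1] = 100;
        g_cache[2] = 200;
    }
    std::thread flusher = FlushInBackground();
    {
        // Validation continues mutating the live cache during the flush.
        std::lock_guard<std::mutex> lock(g_cache_mutex);
        g_cache[3] = 300;
    }
    flusher.join();
}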
< jonasschnelli>
roasbeef, bitconner: why static 16 bytes of entropy? Is that considered "enough"?
< provoostenator>
I'm running #12404 on a t2.medium EC2 instance (4 GB RAM, 2 CPUs), using dbcache=3000. After 72 hours it's at block 435K (while master is at 487K and #11658 is at 506K). There were 10 prune events. They take less than a minute, but blocks can take multiple minutes to process.
< gribble>
https://github.com/bitcoin/bitcoin/issues/11658 | During IBD, when doing pruning, prune 10% extra to avoid pruning again soon after by luke-jr · Pull Request #11658 · bitcoin/bitcoin · GitHub
< jonasschnelli>
roasbeef, bitconner: 2nd, has AEZ had enough cryptanalysis?
< luke-jr>
provoostenator: I'm not sure EC2 is usable for benchmarking.. isn't it cloud stuff?
< provoostenator>
luke-jr: well, it's slow stuff, but maybe it's slow in a cloud-specific way that's not relevant for any other conceivable device?
< luke-jr>
provoostenator: cloud is about resource sharing; so one VM might get more real-world CPU time than another depending on the VMs it shares hardware with
< luke-jr>
for a reliable benchmark comparison, you should really use real hardware doing nothing else
< provoostenator>
It's not CPU bound though.
< luke-jr>
the most often shared resource is I/O
< provoostenator>
Not as good as real computers, but in my experience they're still reasonably consistent. If so, then it's still useful to know what it is about a large cache that could slow things down.
< bitcoin-git>
[bitcoin] promag opened pull request #13310: Report progress in ReplayBlocks while rolling forward (master...2018-05-replayblocks-progress) https://github.com/bitcoin/bitcoin/pull/13310
< bitcoin-git>
bitcoin/master 0bf4318 Wladimir J. van der Laan: net: Serve blocks directly from disk when possible...
< bitcoin-git>
bitcoin/master 7f4db9a Wladimir J. van der Laan: Merge #13151: net: Serve blocks directly from disk when possible...
< bitcoin-git>
[bitcoin] laanwj closed pull request #13151: net: Serve blocks directly from disk when possible (master...2018_05_direct_from_disk) https://github.com/bitcoin/bitcoin/pull/13151
< MarcoFalke>
wumpus: Thanks for going through the high priority list.
< MarcoFalke>
I feel like we should put #13253 on there
< moneyball>
Hello, thanks to jimpo reaching out to GitHub, their support got back to us with the following good news: "You should be seeing fewer unicorns as of this morning! We've shipped a change that improves our caching (easy thing to do in computer science) which should improve your situation.
< moneyball>
Unicorns are real and they may still exist but if you see one and refresh it should go away (cache updates). If you're seeing a "double unicorn", in general that's awesome but actually meaning you saw a unicorn after refreshing from a unicorn, please reach out. Drop me the PR that is problematic and we'll dig into it further. I hope you don't see any double unicorns."