< GitHub92>
[bitcoin] gmaxwell opened pull request #8644: [0.13 backport] Check for compatibility with download in FindNextBlocksToDownload (0.13...findnext_backport) https://github.com/bitcoin/bitcoin/pull/8644
< gmaxwell>
Do we want to backport, "Reduce default number of blocks to check at startup"? It's a trivial fix to long startup time complaints, and arguably the dbcache increase in 0.13 was a startup time regression for some people.
< sipa>
i would be fine with that, though calling it a bugfix is a stretch
< gmaxwell>
well I gave a pragmatic argument for it being one.
< gmaxwell>
but I don't think I needed to, it's pretty obviously riskless with no "interface" impact.
< sipa>
yes, i agree it has very low risk
< gmaxwell>
which, beyond 'fix', I think should be the goal for point releases: nothing that is going to turn a working deployment into a broken one.
< luke-jr>
sounds reasonable to do, in any case
< GitHub176>
[bitcoin] gmaxwell opened pull request #8646: [0.13 backport] Reduce default number of blocks to check at startup (0.13...reduceblocks_backport) https://github.com/bitcoin/bitcoin/pull/8646
< gmaxwell>
wumpus: feel free to close my backport prs if you don't find them helpful, I was just feeling useless simply saying X or Y should be backported rather than doing it.
< cfields>
it's pretty hackish, needs a cleanup. But should be enough to determine if it fixes the problem
< cfields>
wumpus: let me know if ^^ is what you had in mind
< cfields>
that not only eliminates polling, but also drops cs_main locking, so i should think it'd be a good bit quicker
< GitHub186>
[bitcoin] JeremyRubin opened pull request #8650: Make tests much faster by replacing BOOST_CHECK with FAST_CHECK (master...faster_tests) https://github.com/bitcoin/bitcoin/pull/8650
< GitHub124>
bitcoin/master 854f1af Pieter Wuille: Make the dummy argument to getaddednodeinfo optional
< GitHub124>
bitcoin/master 91990ee Wladimir J. van der Laan: Merge #8272: Make the dummy argument to getaddednodeinfo optional...
< GitHub133>
[bitcoin] laanwj closed pull request #8272: Make the dummy argument to getaddednodeinfo optional (master...optionaladdnodedummy) https://github.com/bitcoin/bitcoin/pull/8272
< gmaxwell>
jeremyrubin: I had someone 'testframeworkize' one of my tests in another project and it caused it to go from a 45 second runtime to something like an hour.
< jeremyrubin>
gross :/
< jonasschnelli>
Hmm... IBDing from a local peer (same host) seems to take much longer than bootstrapping from random peers.
< gmaxwell>
early in the sync it fetches too few blocks at a time, so fetching from a single peer is slow. hardly matters overall.
< gmaxwell>
other than that, they'll obviously contend a bit for IO
< gmaxwell>
I'm not aware of any other reasons for slowdowns other than that.
< paveljanik>
jonasschnelli, OS X?
< jonasschnelli>
paveljanik: no, debian on my Intel(R) Xeon(R) CPU E3-1275 v5 @ 3.60GHz
< paveljanik>
I had the same issue on OS X in the past. unplugging the network cable, turning off firewall helped 8)
< gmaxwell>
are you only looking at the first 150k blocks? or are you seeing the rate per block slower later?
< paveljanik>
the process named powerd took 100% of CPU power
< jonasschnelli>
gmaxwell: It took more than 1h for progress=0.035480
< jonasschnelli>
(whereas it did sync with random peers up to progress 1.0 in ~3h [2 months ago though])
< gmaxwell>
better to look at time vs height, e.g. grab a chunk of 100 blocks wherever it is and compare the blocks per second at that height with the same height in your prior logs.
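gmaxwell's height-vs-time comparison can be sketched as a small log-parsing helper. This is a hypothetical sketch, not a Core tool; the `UpdateTip: ... height=N` line format with a leading `YYYY-MM-DD HH:MM:SS` timestamp is an assumption based on typical debug.log output.

```python
# Hypothetical sketch: pull UpdateTip lines out of two debug.log runs and
# compare blocks/sec over the same 100-block window at the same height.
import re
from datetime import datetime

TIP_RE = re.compile(r"^(\S+ \S+) UpdateTip: .*height=(\d+)")

def tip_times(lines):
    """Map block height -> timestamp for every UpdateTip line."""
    tips = {}
    for line in lines:
        m = TIP_RE.match(line)
        if m:
            tips[int(m.group(2))] = datetime.strptime(m.group(1),
                                                      "%Y-%m-%d %H:%M:%S")
    return tips

def blocks_per_sec(tips, start, count=100):
    """Blocks/sec over [start, start+count], or None if not logged."""
    t0, t1 = tips.get(start), tips.get(start + count)
    if t0 is None or t1 is None:
        return None
    return count / (t1 - t0).total_seconds()
```

Running `blocks_per_sec` at the same `start` height against both logs gives the apples-to-apples rate comparison described above.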
< jonasschnelli>
Also... my extremely scientific measure of how fast the tail -f output scrolls through... did tell me it's slower
< jonasschnelli>
I guess I'll let it sync up to h400'000 and then compare it against a sync from random peers
< jonasschnelli>
The only differences are: -prune=550 and connecting to a node that's running on the same machine (in sync, should not cause a huge slowdown)
< * jonasschnelli>
needs to write an IBD benchmark tool
< gmaxwell>
prune might make it slower.
< jonasschnelli>
the prune data eviction seems pretty fast
< gmaxwell>
after all it's writing things only to delete them afterward.
< jonasschnelli>
it's running on a 1GB/s SSD
< gmaxwell>
I doubt anyone has benchmarked prune vs not; if you told me that it was doing some linear scan of all the blocks to decide what to prune, every block-- and was thus massively slower, I wouldn't be too surprised.
< gmaxwell>
oh obviously, it needs to make sure that after a restart it can continue, so it has to at least be consistent to the point where it pruned.
< jonasschnelli>
What is the reason for the txn_count=0x00 in the headers p2p message?
< sipa>
because CBlockHeader did not originally exist
< sipa>
and headers messages just contained a CBlock with 0 transactions
< jonasschnelli>
thanks.
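sipa's point is visible in the wire format: because each entry of a `headers` message is serialized as a block with an empty transaction vector, every 80-byte header is followed by a varint transaction count of zero. A minimal sketch (field values are arbitrary, not a real block):

```python
# Sketch of a single "headers" message entry: an 80-byte block header
# followed by txn_count = 0x00, the varint saying "no transactions follow".
import struct

def serialize_header_entry(version, prev_hash, merkle_root, time, bits, nonce):
    header = struct.pack("<i32s32sIII", version, prev_hash, merkle_root,
                         time, bits, nonce)
    assert len(header) == 80   # nVersion + two hashes + time + bits + nonce
    return header + b"\x00"    # the txn_count byte jonasschnelli asked about

# Arbitrary example values, purely illustrative:
entry = serialize_header_entry(2, b"\x00" * 32, b"\x11" * 32,
                               1462060800, 0x1d00ffff, 42)
```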
< jeremyrubin>
Is it correct that the maximum number of items deleted from sigcache while checking a block is max sigops?
< sipa>
i believe so
< jeremyrubin>
And is that "MAX_BLOCK_SIGOPS_COST" or is that something else
< jeremyrubin>
because I see a 20k number floating around too
< jeremyrubin>
but MAX_BLOCK_SIGOPS_COST=80k
< sipa>
yes, 4x factor from segwit
< sipa>
sigops in non-witness part count as 4, in witness part they count as 1
< jeremyrubin>
so if you have 0 sigops in witness, you can have the full 80k?
< sipa>
if you have 0 sigops in non-witness
< sipa>
yes
< jeremyrubin>
ok cool
< sipa>
pre segwit the max is 20k
< sipa>
post segwit it is 80k
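The accounting sipa describes can be written down as a small sanity check. The two constants match Bitcoin Core's consensus code; the helper function itself is just an illustration.

```python
# Segwit sigop accounting: legacy (non-witness) sigops are scaled by 4,
# witness sigops count as 1, and the total is checked against the 80k
# cost limit -- so legacy-only blocks still top out at the old 20k.
WITNESS_SCALE_FACTOR = 4
MAX_BLOCK_SIGOPS_COST = 80_000
MAX_LEGACY_SIGOPS = MAX_BLOCK_SIGOPS_COST // WITNESS_SCALE_FACTOR  # 20k

def block_sigop_cost(legacy_sigops, witness_sigops):
    return legacy_sigops * WITNESS_SCALE_FACTOR + witness_sigops

# A block with 0 non-witness sigops can carry the full 80k in the witness:
assert block_sigop_cost(0, 80_000) <= MAX_BLOCK_SIGOPS_COST
# while 20k legacy sigops already exhaust the whole budget:
assert block_sigop_cost(20_000, 0) <= MAX_BLOCK_SIGOPS_COST
assert block_sigop_cost(20_001, 0) > MAX_BLOCK_SIGOPS_COST
```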
< jonasschnelli>
with disabled pruning it also took >1h for progress=0.031407... now trying a sync from random peers
< gmaxwell>
you're killin me with the testing with just the first 3%.
< jonasschnelli>
gmaxwell: heh. You think comparing the first 3% does not make sense?
< jonasschnelli>
I just try to not waste time. :)
< gmaxwell>
it's very unrepresentative, as I explained earlier. We only fetch 16 blocks per peer at a time ... early in the chain this is far too few to keep the system well pipelined, because the blocks take no work to process. :)
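A back-of-the-envelope model of that effect. Only the 16-block per-peer window comes from the discussion; the round-trip time, validation rate, and the model itself are illustrative assumptions, not measurements.

```python
# Toy throughput model: a 16-block per-peer download window means a single
# peer is network-limited to window/RTT blocks/sec until per-block
# validation cost takes over -- which it barely does for the tiny,
# near-empty early-chain blocks.
BLOCKS_IN_FLIGHT_PER_PEER = 16

def sync_rate(peers, rtt_s, validate_rate):
    """Blocks/sec: the lesser of the network-limited and CPU-limited rates."""
    network_limited = peers * BLOCKS_IN_FLIGHT_PER_PEER / rtt_s
    return min(network_limited, validate_rate)

# With near-free validation (say 5000 blocks/s) and a 250 ms request
# round-trip, one peer caps out at 64 blocks/s while 8 peers reach 512 --
# hence the slow start against a single peer.
single_peer = sync_rate(1, 0.25, 5000)
eight_peers = sync_rate(8, 0.25, 5000)
```

Later in the chain, where validation dominates, both configurations hit the `validate_rate` cap and the single-peer penalty fades, matching the claim that the difference "goes away later".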
< jonasschnelli>
gmaxwell: Okay. I see. But anyway, a sync with a single local peer will then always take longer, right?
< gmaxwell>
yes, somewhat. the difference will only be in the initial part of the chain, but go away later-- at least, whatever issue was due to that.
< gmaxwell>
it's like the 0.13 vs 0.12 reindex: the 0.13 reindex spends 20 minutes at the front reading headers, so it starts off 20 minutes behind, but it still finishes earlier.
< gmaxwell>
but if you were to compare reindex for 0.13 vs 0.12 for the first 3% you would likely find 0.13 was much slower. :)
< GitHub117>
[bitcoin] sstone opened pull request #8653: doc (trivial): add tip for cross-builds on ubuntu (master...wip-doccrosscompile) https://github.com/bitcoin/bitcoin/pull/8653
< sipa>
cfields: looks good to me
< sipa>
jonasschnelli: 3%, how many blocks is that?
< * jonasschnelli>
checking the logs
< jonasschnelli>
sipa: approx 211000
< jonasschnelli>
My assumption is that IBD with a single peer is much slower... but can't prove it right now.
< sipa>
jonasschnelli: that seems very wrong
< sipa>
it should take minutes to get to 211000
< sipa>
not an hour
< sipa>
the effect greg talks about is real, but shouldn't last more than a few minutes
< jonasschnelli>
2016-09-02 08:58:49 UpdateTip: new best=0000000000000345b371caa3f829cacbe2b4d38ecd15a5a02031efae79934d15 height=211000
< jonasschnelli>
2016-09-02 07:56:40 Bitcoin version v0.13.99.0-df98230
< jonasschnelli>
sipa: I guess it's caused by --enable-debug
< sipa>
oh, yes, that will slow things down tremendously
< jonasschnelli>
sry for the noise... I should finally remember to disable debug mode when benchmarking.
< jonasschnelli>
I'm working on more authentic IBD benchmarks.
< cfields>
jonasschnelli: ok. apples to apples IBD is a good idea though. I'll do some syncs over the weekend
< jonasschnelli>
cfields: Yes. I was fooled (again) by the bias of --enable-debug and --prune
< cfields>
jonasschnelli: ah, heh. I've fallen into the same trap. Turns out profiles generated with -O0 (for better info) aren't at all representative of real-world usage
< cfields>
so it makes sense that it'd be a major performance killer
< gmaxwell>
jonasschnelli: I think you got caught on debug before; you realize now I'm gonna start asking you: "Are you benchmarking with enable-debug again?" :P
< jonasschnelli>
heh... yes.
< btcdrak>
Jonas 'debug' schnelli
< sdaftuar>
cfields: i don't think i have seen segwit.py fail locally for the reason mentioned in #8532
< sdaftuar>
gmaxwell: please set up an irc autoresponder so the rest of us don't forget to ask him too :)
< cfields>
sdaftuar: ah, ok
< sdaftuar>
it seems like the most common reason my rpc tests fail is the communicate() timeout thing i reported in #8649, and then new test runs failing when old bitcoind's haven't been killed from prior test runs (sigh)
< GitHub85>
[bitcoin] jl2012 opened pull request #8654: Reuse sighash computations across evaluation (rebase of #4562) (master...sighashcache) https://github.com/bitcoin/bitcoin/pull/8654
< wumpus>
cfields: looking
< wumpus>
cfields: looks good to me, doesn't even require any new signals!
< wumpus>
cfields: any reason to not put the signal subscription/unsubscription in StartRPC/StopRPC?
< wumpus>
cfields: oh duh, those are general methods
< wumpus>
cfields: if there was a rpcblockchain.init/deinit, it would fit there
< wumpus>
cfields: but this is ok; maybe mark the RPC calls that they are primarily meant for testing and/or set them as hidden?
< wumpus>
(e.g. I can see complaints from people hanging the RPC server this way with no clue what they're doing)
< sipa>
agree
< cfields>
wumpus: yep, makes sense
< cfields>
and yes, i kinda abused the existing signals there. I figured you'd be grumpier about it, hence the poke rather than PR :)
< wumpus>
cfields: well the important thing is that all other processing has been done at the time of the signal; does this guarantee that?
< wumpus>
(I guess the answer is yes, as there is no async processing, although it could depend on the order in which the handlers are called?)
< sipa>
by default i think handlers are called in the order they were added
< cfields>
wumpus: i believe it's fine, but i'll double-check.
< cfields>
wumpus: since it abuses the ui signal, i'm not sure we need to worry about the order? Looking to see who else receives that
< wumpus>
sipa: yes, but that is a really fragile thing to rely on
< wumpus>
cfields: ok, yes that makes sense, if the UI signal is last
< wumpus>
cfields: indeed there's no need to be worried doing things concurrently with the GUI
< wumpus>
(usually that won't be running anyhow during RPC testing, although it's possible)
< cfields>
it doesn't guarantee that we've acted on relaying appropriately, though all we could guarantee there anyway is that we've queued up some messages
< cfields>
right
< cfields>
yes, only other user is qt for updating gui
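The ordering question in this exchange (whether the UI signal's handler really runs last) can be illustrated with a toy dispatcher. This is a Python stand-in, not boost::signals2, though signals2 likewise invokes slots in connection order by default:

```python
# Toy signal dispatcher: slots fire in the order they were connected, so a
# handler is only "last" by accident of registration order -- the fragility
# wumpus points out about relying on it.
class Signal:
    def __init__(self):
        self._slots = []

    def connect(self, slot):
        self._slots.append(slot)

    def emit(self, *args):
        for slot in self._slots:      # connection order, nothing stronger
            slot(*args)

calls = []
sig = Signal()
sig.connect(lambda: calls.append("process"))  # core processing handler
sig.connect(lambda: calls.append("notify"))   # RPC/GUI notification handler
sig.emit()
# "notify" runs after "process" only because it was connected later.
```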
< GitHub78>
[bitcoin] paveljanik opened pull request #8655: Do not shadow variables (trivials) (master...20160902_Wshadow_trivials) https://github.com/bitcoin/bitcoin/pull/8655
< phantomcircuit>
jonasschnelli, the p2p network contains a bunch of nodes which exist only to monitor people and as a side effect have bizarre behaviour
< GitHub12>
[bitcoin] paveljanik opened pull request #8656: Do not shadow global variable fileout (master...20160902_Wshadow_fileout) https://github.com/bitcoin/bitcoin/pull/8656
< instagibbs>
sipa, where is chainActive loaded on init? Having trouble tracking that down.
< sipa>
instagibbs: ActivateBestChain?
< sipa>
LoadBlockIndex?
< instagibbs>
eh yes, but it got moved around and I'm flailing
< instagibbs>
I'll track it down
< instagibbs>
ah i misunderstood what wallet init was doing with it, nevermind, thanks
< GitHub70>
[bitcoin] paveljanik opened pull request #8658: WIP/DO NOT MERGE: Remove unused statements in serialization (master...20160902_nVersion_serialization_cleanup) https://github.com/bitcoin/bitcoin/pull/8658