< wumpus>
phantomcircuit: I don't think rpc_wallet_tests.cpp should ideally exist at all
< wumpus>
phantomcircuit: it is from before we had the RPC functional tests framework
< wumpus>
unit tests that check RPC behavior, except at the most basic level, aren't really unit tests
< wumpus>
so if someone makes sure that everything worthwhile in rpc_wallet_tests.cpp is also covered by the functional RPC wallet tests, the file can go
< phantomcircuit>
wumpus: great, I'm pretty sure we're already doing that, and this file is doing a bunch of really weird stuff
< GitHub103>
[bitcoin] paveljanik opened pull request #8446: BIP9 parameters on regtest cleanup (master...20160802_shadow_bip9params) https://github.com/bitcoin/bitcoin/pull/8446
< wumpus>
cfields: I don't get it, I can't compile master for ARM anymore: https://github.com/bitcoin/bitcoin/issues/8447 but in Travis it works. Could the difference be that I'm compiling my own toolchain?
< wumpus>
I guess I'll try in a trusty VM
< wumpus>
maybe this is a false alarm and the g++ compiler produced by crosstool lacks some feature needed for the C++11 threading primitives, or I've just forgotten to pass some setting
< morcos>
still a bit more work to go before PRs, and we probably need to figure out how to apply NicolasDorier's hash cache as well
< morcos>
the tipcache is a permanent CCoinsViewCache in front of pcoinsTip, instead of creating a new one on each ConnectBlock
< morcos>
it's prepopulated every 30 secs with the inputs needed for a block created by CreateNewBlock (takes 13ms on average)
< sipa>
ha
< morcos>
and it's only cleared after a block is connected
< morcos>
this also serves to keep pcoinsTip itself properly warm
< sipa>
interesting
< morcos>
so all these tests are with about 550M of total caches (300M dbcache, 100M sigcache, approx 150M tipcache), or 450M dbcache if not using the tipcache
< morcos>
so overall CPU running time is of course increased when using the tipcache, but it happens outside the critical path, and it's quick enough (13ms every 30 secs) that I don't think it's hurting anything
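(For context, a minimal sketch of the tipcache idea morcos describes above; InitTipCache/WarmTipCache and the surrounding structure are illustrative guesses, not actual Bitcoin Core code, written loosely against the coins API of that era. CCoinsViewCache, pcoinsTip, ConnectBlock and CreateNewBlock are the real names being discussed.)

    #include <memory>
    #include "coins.h"              // CCoinsViewCache, CCoins
    #include "main.h"               // pcoinsTip, the global chainstate cache (2016 layout)
    #include "primitives/block.h"   // CBlock, CTransaction, CTxIn

    // A long-lived cache layered on top of pcoinsTip, kept for the life of the
    // node instead of the temporary CCoinsViewCache that ConnectBlock normally
    // creates and throws away.
    static std::unique_ptr<CCoinsViewCache> tipcache;

    // Created once after the chainstate caches exist (e.g. during init).
    void InitTipCache() { tipcache.reset(new CCoinsViewCache(pcoinsTip)); }

    // Run roughly every 30 seconds: take the block CreateNewBlock would produce
    // and touch each of its inputs, pulling those coins up into tipcache (and,
    // transitively, keeping pcoinsTip warm). Per the discussion this averages ~13ms.
    void WarmTipCache(const CBlock& candidate)
    {
        for (const CTransaction& tx : candidate.vtx) {
            if (tx.IsCoinBase()) continue;                   // coinbase spends nothing
            for (const CTxIn& txin : tx.vin) {
                tipcache->AccessCoins(txin.prevout.hash);    // miss -> fetched from pcoinsTip / LevelDB
            }
        }
    }

    // tipcache is only cleared after a block is actually connected, so the
    // warmed entries are available on the critical path inside ConnectBlock.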
< sipa>
morcos: i can do benchmarks on a 52-core system, if you have code for me to test
< sipa>
s/core/thread/
< jeremyrubin>
woooo
< morcos>
ah, ok good, yeah I have 16 cores / 32 threads
< jeremyrubin>
How many cores?
< sipa>
2 chips, each 13 cores, each 2 threads
< gmaxwell>
s/13/14/
< morcos>
I tried setting the scriptcheck threads to 24 or 32 and it slowed down a lot in all versions
< sipa>
oops, 56, not 52
< jeremyrubin>
how is the memory bus configured?
< jeremyrubin>
I'm curious how the lockfree stuff is implemented
< jeremyrubin>
nvm
< jeremyrubin>
(but in general, if someone understood the lockfree stuff better there are a bunch of low-hanging-fruit optimizations to investigate, although as is it's already fast enough to forget about for a while)
< morcos>
in any case the question there will only be whether it gets slower, as there isn't much room to get faster from more parallelism without changes. Waiting on verification is pretty minimal now.
< morcos>
I suppose it could help a lot on startup/reindex, where you don't have warm caches (sigcache or pcoinsTip)
< jeremyrubin>
quick q: what's the best way to give up CPU cycles?
< jeremyrubin>
e.g. sleep(0), std::this_thread::yield()
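(A standalone illustration of the two options being weighed; nothing here is Bitcoin Core code.)

    #include <atomic>
    #include <chrono>
    #include <thread>

    // Spin until another thread sets `done`, giving up the CPU on each pass.
    // std::this_thread::yield() just hints the scheduler to run something else;
    // sleeping for a zero duration (the closest portable analogue of sleep(0))
    // goes through the timer machinery, and what it does with a zero argument
    // is OS/implementation dependent.
    void wait_for(const std::atomic<bool>& done)
    {
        while (!done.load(std::memory_order_acquire)) {
            std::this_thread::yield();
            // alternative being discussed:
            // std::this_thread::sleep_for(std::chrono::milliseconds(0));
        }
    }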
< sipa>
jeremyrubin: I wonder about the same thing
< gmaxwell>
jeremyrubin: it's NUMA, with 4-way parallel memory on each of the two chips.
< morcos>
sipa: have you figured out what the optimal number of scriptcheck threads is for you to do a reindex right now?
< sipa>
morcos: I have not; will do
< sipa>
morcos: how do you benchmark? use -debug=bench numbers?
< morcos>
sipa: yes, that's what I've been doing
< morcos>
although for a reindex, I guess the actual elapsed time is most interesting
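(An illustrative invocation for that kind of measurement, not taken from the log; -reindex, -par, -dbcache and -debug=bench are real options, the values are just examples matching the figures mentioned above.)

    bitcoind -reindex -par=16 -dbcache=300 -debug=bench   # per-block validation timings end up in debug.log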
< sipa>
for the graph you posted above... where do those numbers come from?
< morcos>
that's using sdaftuar's simulation mode over the first 3 days of May (470 blocks)
< morcos>
so it's a bit biased by having no mempool at the beginning of that period...
< NicolasDorier>
Having seen that independently, I tried to write a test to reproduce the bug on regtest. However, I could not reproduce it: if the block was rejected for time-too-new and later rebroadcast, it worked fine and was reconsidered. I haven't tried with headers-first propagation though
< gmaxwell>
NicolasDorier: maybe if it's rejected during the startup checks.
< gmaxwell>
e.g. set the time forward, accept a block, shut down, turn the time back, restart.
< sipa>
NicolasDorier: i don't understand the logs there
< sipa>
the "CheckBlockHeader(): block timestamp too far in the future" message is something that implies a header is being received that at the time of receipt is too far in the future
< sipa>
while the "AcceptBlockHeader: block is marked invalid" message is about the header being marked as invalid in the database
< sipa>
the two should never occur together
< sipa>
as the first should result in us not storing the header at all
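(A condensed sketch of the ordering sipa is describing; the real logic lived in main.cpp at the time, and this only shows the shape of it.)

    // If CheckBlockHeader() rejects the header (e.g. "block timestamp too far in
    // the future"), acceptance returns before the header is ever added to
    // mapBlockIndex, so that same header can never later be found in the index
    // with a failed status, which is what produces "block is marked invalid".
    bool AcceptBlockHeaderSketch(const CBlockHeader& header, CValidationState& state)
    {
        auto it = mapBlockIndex.find(header.GetHash());
        if (it != mapBlockIndex.end()) {
            // Only an already-stored header can trip the "block is marked invalid" path.
            if (it->second->nStatus & BLOCK_FAILED_MASK)
                return false;
            return true;
        }

        // Context-free checks, including the timestamp-too-far-in-the-future one.
        if (!CheckBlockHeader(header, state))
            return false;            // nothing is stored; the header never reaches the index

        // ... contextual checks, then the header is added to the block index ...
        return true;
    }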