< gmaxwell>
no those figures are from using the scalar sha2 code.
< gmaxwell>
AFAIK the only real place we can make good use of the parallel sha2 SSE code would be in hash tree computation, but that is complicated as you know.
< gmaxwell>
IIRC the SIMD scalar sha2 is ~2x faster than ours, and the SIMD parallel sha2 is 3x faster than ours.
< luke-jr>
merkle trees probably aren't a significant amount of hashing I think?
< gmaxwell>
They're actually tons, because every node in it is three compression function runs, and there are txn*2 nodes in total.
< gmaxwell>
so a block can have something like 24000 compression function runs.
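(A back-of-the-envelope check of that figure; a sketch only, and the ~4000-transaction block size is an assumption. The 3-per-node count comes from double-SHA256 of a 64-byte node: two compression runs for the first SHA256 pass plus one for the second.)

    // Rough reproduction of the ~24000 figure, with hypothetical numbers.
    #include <cstdio>

    int main() {
        const long txns = 4000;              // assumed transactions per block
        const long nodes = 2 * txns;         // "txn*2 nodes in total"
        const long compressions = 3 * nodes; // 3 compression runs per node
        std::printf("~%ld compression function runs\n", compressions); // ~24000
    }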
< luke-jr>
hmm
< sipa>
so 8ms?
< sipa>
that's significant
< sipa>
assuming 3 GHz and 15 cpb for sha256
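(For the record, the arithmetic behind that estimate: 24000 compressions x 64 bytes per compression block is ~1.5 MB hashed; at 15 cycles/byte that is ~23M cycles, which at 3 GHz comes to ~7.7 ms, i.e. roughly 8 ms.)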
< TD-Linux>
<wumpus> gmaxwell: it's a bit scary though as the external process will be able to keep a reference, and have all your key data :) <- you can seal the fd and verify the seal in the sandboxed process to eliminate this vulnerability
< gmaxwell>
yes, it's non-trivial in terms of validation latency.
< TD-Linux>
er actually disregard me, there is no read seal
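(For context, the mechanism here is Linux memfd sealing, per fcntl(2). A minimal sketch of the verification step the sandboxed process would do, which also spells out the read-seal gap just conceded: every available seal restricts modification, and none revokes the sender's read access.)

    #ifndef _GNU_SOURCE
    #define _GNU_SOURCE
    #endif
    #include <fcntl.h>

    bool FdIsSealedImmutable(int fd)
    {
        int seals = fcntl(fd, F_GET_SEALS);
        if (seals == -1) return false; // not a memfd, or sealing unsupported
        // These seals stop the sender from resizing or writing the memory,
        // and from changing the seal set itself; but there is no read seal,
        // so the sender can always keep reading the key data.
        const int want = F_SEAL_SEAL | F_SEAL_SHRINK | F_SEAL_GROW | F_SEAL_WRITE;
        return (seals & want) == want;
    }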
< jeremyrubin>
hm Bitcoin Unlimited just added ~parallel block validation~ but I'm pretty sure it has no performance benefit
< jeremyrubin>
(in case anyone is looking at what they implemented)
< TD-Linux>
jeremyrubin, correct, it is not a performance enhancement but an attempt to "fix" quadratic hashing
< jeremyrubin>
wait what
< jeremyrubin>
The parallel block validation?
< TD-Linux>
yes
< sipa>
it means that the node doesn't stall if your block takes a day to validate
< TD-Linux>
yup, it'll just burn a core for a day
< TD-Linux>
of course under the assumption that the block also gets orphaned.
< jeremyrubin>
Yeah isn't that block hopefully just going to orphan
< jeremyrubin>
if it takes that long to validate
< jeremyrubin>
hmm
< sipa>
also, since our validation is already parallel anyway, it makes both blocks slower
< jeremyrubin>
yeah that's why I thought it was useless
< jeremyrubin>
a core can only do so many IPC
< TD-Linux>
jeremyrubin, thus the question "how long is too long"
< jeremyrubin>
Well... that seems to be a hard fork then?
< jeremyrubin>
Because now you'll partition old nodes trying to validate whatever monster block (assuming it crashes/kills your node on old hardware)
< TD-Linux>
jeremyrubin, it is likely to cause forks, yes.
< TD-Linux>
by itself it is not a hard fork.
< jeremyrubin>
yeah, sorry was slightly imprecise with terminology
< jeremyrubin>
it seems they don't mention it being a quadratic hashing fix in the documentation
< jeremyrubin>
Also it seems quadratic hashing isn't really a problem before this either, just wait until the block gets orphaned?
< gmaxwell>
yea, it clearly has some unexplored interactions with selfish mining too. E.g. if you mine an empty block, now you hold it locally for a bit, comfortable that if there is a block race, you'll win even though you announced later... you can be up to the typical validation time late in your announcement.
< gmaxwell>
jeremyrubin: quadratic hashing is a _huge_ problem if you've ripped out the blocksize limit and done nothing about it.
< gmaxwell>
a block could take days to hash and shut down the network. :P (except for collaborating miners that know to 'optimize out' checking that transaction) :)
< jeremyrubin>
Do we have a provision for abandoning a block mid validation if a longer header chain is seen?
< jeremyrubin>
Probably a tighter way to address same concern.
< gmaxwell>
it's not a concern for us. As it's not really easy to make excessively slow blocks in the consensus rules, and segwit completely fixes quadratic hashing.
< gmaxwell>
could basically worry about it once it _ever_ would have made a difference, rather than adding complexity now.
< gmaxwell>
(the complexity would be that that longer chain may be invalid, so you'll have to go back to validating the other thing, seems messy as heck)
< jeremyrubin>
well... it can be good "defensive" code in case a new complexity attack is ever found.
< gmaxwell>
sure but has to be weighed against the complexity of the fix and the risk it implies.
< jeremyrubin>
I think it's a simple rule; always be trying to validate the largest-POW chain
< TD-Linux>
if such an attack was made, simply stalling is a pretty good failure option
< gmaxwell>
(maybe if someone implemented it they'd find it was easy... though just testing it makes me feel uneasy, concurrency is really hard to test well.)
< jeremyrubin>
fair!
< gmaxwell>
jeremyrubin: we do that, subject to the fact that while validating a block we're effectively non-concurrent, so we won't learn about the longer chain until we're done. So really the complexity there is just in safely increasing the concurrency. Which might be an independent good, e.g. a side effect of the changes for the block testing stuff we were talking about a day ago.
< TD-Linux>
certainly I'd hope for more tests than parallel validation has :^)
< gmaxwell>
we don't really have a good test harness for testing concurrency. data-race freedom doesn't mean that a parallel algorithm will yield expected results in all ordering sequences.
< TD-Linux>
some sort of framework that would cause all mutexes to block until the test explicitly lets them continue would be neat.
< gmaxwell>
TD-Linux: well rr actually has neat stuff for making threaded execution deterministic, that I think could be enlisted into being a concurrency fuzzing tool.
< gmaxwell>
e.g. replay up to a given point... and then repeat the replay with many different values given to the RNG that schedules the threads, and see if you get different results.
< TD-Linux>
gmaxwell, well if you want to fuzz rather than be explicit, doesn't rr's chaos mode already count?
< gmaxwell>
oh does it already do this? lol
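(For reference, chaos mode is driven from the command line; a minimal invocation, with test_bitcoin as an example binary name, looks like:)

    $ rr record --chaos ./test_bitcoin   # randomized scheduling to provoke races
    $ rr replay                          # deterministic replay of the recording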
< luke-jr>
[01:01:39] <gmaxwell> yea, it clearly has some unexplored interactions with selfish mining too. E.g. if you mine an empty block, now you hold it locally for a bit comfortable that if there is a block race, you'll win even though you announced later… you can be up to the typical validation time late in your announcement. <-- they may claim this is a good thing, since it incentivises smaller blocks
< luke-jr>
although in fact it incentivises blocks which meet relay-network policy even better
< luke-jr>
or rather, the most-limited relay policy
< gmaxwell>
the key point about selfish mining is that it gives excess returns to larger miners. So "incentivizes smaller blocks at the expense of decentralization" ... missing the point. :P Also, not smaller but empty: the validation time difference between a block and a slightly smaller one is negligible, you have to make the block empty to reliably cut in front of others.
< gmaxwell>
the latest BIP152 stuff is much more policy durable than prior stuff, since it will retain transactions rejected for policy reasons and still use them to reconstruct blocks.
< luke-jr>
that's not a good thing IMO
< luke-jr>
the network policy putting pressure on miners is a desirable trait
< luke-jr>
although most-restrictive isn't the ideal either, so meh
< gmaxwell>
'network policy' doesn't matter, other miners policy matters. And moreover: doublespend is 'policy' that is not in the miners control.
< gmaxwell>
Without the extra pool, someone spamming doublespends can considerably slow propagation.
< luke-jr>
true, but if that's the only concern, we would want to limit the extra pool to just double spends
< gmaxwell>
luke-jr: consider, without it there is pressure to not increase the minimum feerate in your mempool.
< gmaxwell>
because it will make you slower to accept blocks from others with a lower minimum feerate.
< gmaxwell>
(though I think there should probably be separate extra pools for different kinds of rejections; adding that complexity would have delayed getting it in)
< gmaxwell>
luke-jr: also keep in mind that any time miners experience delays, the easiest solution for them is to just centralize their pooling more.
< gmaxwell>
they're not going to sit and go "oh that sucks, I'll twiddle my policy."
< * luke-jr>
notes he didn't oppose extra-pool :P
< gmaxwell>
fair enough.
< luke-jr>
#8694 is finally ready for final review
< jeremyrubin>
I'm debugging something; is there a threadsafe way to access pcoinsTip during ConnectBlock? I want to be able to access it from a script check
< jeremyrubin>
I'm guessing I would have to add locks around the usage
< jeremyrubin>
curious if anyone's done this before and if the performance decrease is bad
< jeremyrubin>
(I think it might be doable to parallelize checking the inputs)
< jeremyrubin>
hm I think i have something workable using shared_mutex
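(A minimal sketch of that shared_mutex pattern, using hypothetical stand-in types rather than Bitcoin Core's; the real pcoinsTip is a CCoinsViewCache guarded by cs_main, with no reader/writer split of its own. std::shared_timed_mutex needs C++14.)

    #include <map>
    #include <shared_mutex>
    #include <string>

    struct CoinsCache {
        std::map<std::string, int> coins; // stand-in for the coins view
        mutable std::shared_timed_mutex mutex;

        // Script-check threads: shared lock, so lookups run in parallel.
        bool Have(const std::string& outpoint) const {
            std::shared_lock<std::shared_timed_mutex> lock(mutex);
            return coins.count(outpoint) > 0;
        }

        // Block-connect thread: exclusive lock while mutating the cache.
        void Add(const std::string& outpoint, int value) {
            std::unique_lock<std::shared_timed_mutex> lock(mutex);
            coins[outpoint] = value;
        }
    };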
< Lightsword>
wumpus, would it be easy to also do block notifications over the unix socket?
< Lightsword>
current ckpool local block notification method is basically to execute a binary that then writes to a unix socket for notification of a block
< wumpus>
Lightsword: everything that can be done over the current P2P port can also be done over the UNIX socket
< wumpus>
eh, RPC port
< wumpus>
I guess what you're looking for is #7949
< Lightsword>
wumpus, not long polling, having bitcoind itself write to another app’s listening unix socket for notifications
< wumpus>
conceptually it's the same, apart from who calls who. In both cases the listeners get immediate notification. Longpolling is simpler as bitcoind doesn't need to keep track of who to notify, that's implied by who is listening
< wumpus>
block notifications can also be broadcast over zeromq
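(For completeness, the zeromq route: start bitcoind with -zmqpubhashblock=tcp://127.0.0.1:28332, the address being an example, and subscribe from the other side. A minimal sketch against the libzmq C API:)

    #include <zmq.h>
    #include <cstdio>

    int main()
    {
        void* ctx = zmq_ctx_new();
        void* sub = zmq_socket(ctx, ZMQ_SUB);
        zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "hashblock", 9);
        zmq_connect(sub, "tcp://127.0.0.1:28332");
        while (true) {
            // Each notification arrives as a multi-frame message: the topic,
            // the payload (a 32-byte block hash here), and any trailing frames.
            int more = 0;
            size_t more_len = sizeof(more);
            do {
                zmq_msg_t frame;
                zmq_msg_init(&frame);
                zmq_msg_recv(&frame, sub, 0);
                std::printf("frame: %zu bytes\n", zmq_msg_size(&frame));
                zmq_msg_close(&frame);
                zmq_getsockopt(sub, ZMQ_RCVMORE, &more, &more_len);
            } while (more);
        }
    }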
< jeremyrubin>
how does one CreateNewBlock s.t. a witness is added?
< jeremyrubin>
I'm having trouble writing a unit test using TestChain100Setup once segwit activates
< sipa>
you need at least 3 retarget periods worth of blocks
< jeremyrubin>
ah so I need to be at > 432
< sipa>
right
< sipa>
1 period before signalling starts, another before it's locked in, and a third before it is active
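(Putting those numbers together, a sketch of such a test on regtest, where a retarget period is 144 blocks; it assumes the fixture's CreateAndProcessBlock helper mines blocks that signal the deployment:)

    // Hypothetical unit test: extend the 100-block fixture chain until
    // segwit's BIP9 deployment has gone started -> locked_in -> active.
    BOOST_FIXTURE_TEST_CASE(segwit_witness_block_test, TestChain100Setup)
    {
        CScript scriptPubKey = CScript() << ToByteVector(coinbaseKey.GetPubKey())
                                         << OP_CHECKSIG;
        // Fixture leaves the chain at height 100; mine to height 432 so the
        // third period completes and the deployment becomes active.
        while (chainActive.Height() < 3 * 144) {
            CreateAndProcessBlock({}, scriptPubKey);
        }
        // From here CreateNewBlock should add the witness commitment.
    }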