< GitHub136>
[bitcoin] paveljanik opened pull request #8677: Do not shadow upper local variable 'send', prevent -Wshadow compiler warning. (master...20160907_Wshadow_8606) https://github.com/bitcoin/bitcoin/pull/8677
< OxADADA>
sup
< GitHub150>
[bitcoin] jonasschnelli opened pull request #8678: [Qt][CoinControl] fix UI bug that could result in paying unexpected fee (master...2016/09/qt_cc_ui_radrio_fix) https://github.com/bitcoin/bitcoin/pull/8678
< skyraider>
trying to make setup.py install find a package with wheels-only (no sdist) archives on pypi. pythonwheels.com claims setuptools supports wheels, but https://github.com/pypa/setuptools/issues/558 claims setuptools does not support wheels. i'm running into "No local packages or working download links found for mypackage==myversion".
< GitHub128>
[bitcoin] MarcoFalke closed pull request #8673: Trivial: Fix obvious assignment/equality error in test (master...fix_arith_tests_trivial) https://github.com/bitcoin/bitcoin/pull/8673
< sipa>
MarcoFalke oops, i missed you already had a backport
< morcos>
sipa: i'm trying to dive back into things now, and picking up the benchmarking for speeding up ConnectBlock to try and help jeremyrubin's stuff get finalized. It appears that connecting transactions slowed down somewhat significantly in master. I think this is due to 8524.
< morcos>
For now those hash calculations are just extraneous, but even post-segwit, i think the effect is you've moved the hashing from something that's parallelized to something that's not
< morcos>
the benefit of solving the O(n^2) problem dominates of course in the worst case, but in the typical case, i think this is maybe a slowdown we'd like to avoid
< morcos>
just wondering if you'd thought about any of this, or whether it makes sense to store these hashes from ATMP?
< sipa>
morcos: i'm perfectly fine with moving the hashing to a more parallellizable place
< gmaxwell>
Why was it parallel before but not now? I don't think we care if there is parallel hashing within a single transaction (that seems too fine grained to me), so long as multiple transactions could run in parallel?
< sipa>
all sighash precalc now runs in the main thread
< morcos>
its tricky to parallelize because then you need to synchronize a cache for them right... which is what you were trying to avoid
< gmaxwell>
but the cache need not be shared across transactions
< morcos>
gmaxwell: yeah, what sipa said. the main thread, which is connecting the transactions, is going to end up being the bottleneck. not sure how many script verification cores are required for that to be the case, but it's good to not push more and more stuff into that
< sipa>
though it does run simultaneously with other transactions' normal signature checks
< morcos>
perhaps there is a design that puts all scriptchecks from a single tx onto a single thread
< sipa>
but yes, for the average case, moving more into the main thread is not good
< morcos>
but that breaks down for a really big tx
< gmaxwell>
ah, main thread. but .. yes, I think you should keep the scriptchecks for a single transaction together, process them as a group, and share the hashing along with them. ... it wouldn't exploit parallelism for big transactions, but I don't know if that's really needed.
< jeremyrubin>
i think maybe if you can estimate by #inputs or something
< jeremyrubin>
and split it if too big
< jeremyrubin>
but you won't be parallel if you have, say, one big txn
< gmaxwell>
previously the dispatch overhead made that irrelevant-- but perhaps with jeremyrubin's work the overhead is low enough to make that kind of parallelism useful for something.
< morcos>
so what are your thoughts on caching the hashes for a tx in AcceptToMemoryPool and then the main thread can still check that first before calculating them itself, so maybe that would be pretty fast
< morcos>
would need only simple synchronization on that b/c it's only accessed by ATMP and the main thread
< jeremyrubin>
What about using some kind of atomic future for the hashes
< jeremyrubin>
First script check to get there evaluates it and fills it in?
< gmaxwell>
uh the cacheline bouncing, it hurts.
< gmaxwell>
morcos: similar to how we have the transaction id hash just calculated once (hopefully) and carried around with the transaction?
< jeremyrubin>
Well...
< jeremyrubin>
actually not that bad
< jeremyrubin>
MESI state only causes invalidation on write
< jeremyrubin>
which happens once
< gmaxwell>
there is no actual meaningful concurrency here, however.
< gmaxwell>
only false concurrency created by logistics. (to the extent this hashing is done multiple times, it's only because it's being done redundantly)
< gmaxwell>
morcos: or store validation flags in the mempool and assume valid if the same flags are still in effect ...
< sdaftuar>
gmaxwell: gah
< morcos>
gmaxwell: ha, the validation cache, that scares me!
< morcos>
but yeah i think easy enough to just store the hashes
< morcos>
ok, well not an emergency, just wanted to see what thoughts you guys had, will circle back when we have a proposed change
< Chris_Stewart_5>
jeremyrubin: With your new pull request (#8670), are you suggesting writing a new testing framework specific to bitcoin from scratch?
< jeremyrubin>
The content of the tests will not change, just the runner.
< jeremyrubin>
Chris_Stewart_5: ^^
< sipa>
jeremyrubin: feature request: a command line argument to make the binary crash on test failure
< sipa>
so a core dump file gets created
< jeremyrubin>
sipa: please put it in the issue to keep it catalogued
< jeremyrubin>
but good suggestion; I was trying to get core dumps on travis for a while and couldn't
< kanzure>
what were you trying? the after_failure stuff wasn't working for you or something? or it did work, but couldn't find the actual core dumps, and therefore couldn't upload those somewhere?
< kanzure>
btw i also think running gdb bt might be good enough in after_failure
< kanzure>
perhaps once for each core dump too
< sipa>
jeremyrubin: agree, will report on issue
< kanzure>
jeremyrubin: for 8670 perhaps the silly xml outputs should be considered, and (separately) compatibility with mutation testing.
< jeremyrubin>
kanzure: how about jsons the kids like those these days
< * BlueMatt>
stabs kanzure
< kanzure>
yeah but i forget the name of the json test output 'standard'/format
< BlueMatt>
kanzure: how about the first person who needs that can implement it
< gmaxwell>
ugh
< kanzure>
BlueMatt: the issue text is asking for it
< kanzure>
ok whatever. i don't care.
< kanzure>
it would be more efficient to complain about the issue text in particular in the future :)
< BlueMatt>
yes
< veleiro>
I'm trying to build v0.13.0 from source on a beaglebone black with debian jessie. i had to compile db4.8 from source, and i installed libboost-all-dev, but in the ./configure stage the error i see is "configure: error: No working boost sleep implementation found." I went through https://github.com/bitcoin/bitcoin/issues/3003 with no success
< veleiro>
also, the build readme isn't clear about compiling boost from source
< sipa>
that's surprising
< sipa>
libboost-all-dev should have a sleep implementation
< sipa>
is it possible you have multiple boost versions side by side?
< veleiro>
its possible that i may have tried to compile boost from source when i went through this before, but that was a few weeks ago. but wouldn't ./configure use the system package version unless specified otherwise?
< veleiro>
i'll try to remove all boost and see whats left over
< sipa>
it searches in many places
< sipa>
what os is this?
< sipa>
and distribution
< veleiro>
debian 8 jessie
< veleiro>
on armv7
< sipa>
that should work fine
< sipa>
you can also try to do a depends build
< sipa>
which builds all dependencies for you and creates a static build
< cfields>
in that commit the wait() is directly after the calculation, so it can only be worse. Would need to experiment with how early we can begin hashing to offload the most
< veleiro>
error in depends build too :( got any ideas? I couldnt find anything right off: "error: toolset gcc initialization: error: provided command 'armv7l-unknown-linux-gnueabihf-g++' not found" (gcc version 4.9.2)
< jeremyrubin>
cfields: that was fast
< jeremyrubin>
Hm
< jeremyrubin>
I like the general idea...
< jeremyrubin>
Initially I figured that the scriptcheck threads could, on a first-visited policy, compute these hashes.
< jeremyrubin>
Obviously putting them earlier is better
< jeremyrubin>
Also std::mutex is :/
< jeremyrubin>
nicer to have it coordination-free.
< jeremyrubin>
I have an idea
< jeremyrubin>
CBlock is immutable?
< jeremyrubin>
What if you just pass a pointer to it
< jeremyrubin>
and have the background thread ONLY do all the hashes
< cfields>
jeremyrubin: sure, it was just a quick hack. I figured it'd be helpful to have the scriptcheck threads do it, but that got complicated in my head pretty quickly. Figured it'd be worth experimenting before committing to the complication
< cfields>
jeremyrubin: i don't think the overhead of the mutex/condvar is significant enough to throw off the "is it worth doing" tests :)
< cfields>
jeremyrubin: and sure, makes sense to do on the per-block level, but only if we don't end up caching in ATMP