<@wumpus>
to me it looks like it's just spending more time waiting
< MarcoFalke>
tearing down the nodes takes ages for tiny tests compared to the test's actual run time
< gmaxwell>
unfortunately with the test running on shared vm infrastructure timings are probably not all that useful.
<@wumpus>
it probably doesn't help rpc performance from python that authproxy calls log.debug for all data that comes in and goes out, pretty-printing everything even though usually it's discarded
<@wumpus>
(not likely the cause of my slowdown, just noticed)
< MarcoFalke>
Indeed, no wall clock, but maybe cpu_time, memory_peak and io could help.
<@wumpus>
I'm on to something maybe, a getnewaddress call takes 0.013565 on the one system, 0.168683 (more than ten times as much) on the other. Could be slow i/o, but that slow?
<@wumpus>
this does not seem to extend to most other RPC calls (though I haven't looked at them all)
< gmaxwell>
well getnewaddress is syncing the wallet... so fsync time?
< sipa>
we do a db sync operation after every new address
< sipa>
jinx
<@wumpus>
if so we need a flag to disable that for the tests
<@wumpus>
fsync slow makes sense, I've noticed that before, I think it tries to sync the entire partition image
< gmaxwell>
there is that eatmydata thing that could be used with tests.
<@wumpus>
cool, didn't know about that one
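
For context, a minimal POSIX sketch (illustrative only, not Bitcoin Core's wallet code) of the cost being discussed: the write itself returns quickly from the page cache, while the fsync() after every new key is what makes getnewaddress sensitive to a slow (virtual) disk, and eatmydata sidesteps exactly this by LD_PRELOADing a small library that turns fsync()/fdatasync() into no-ops.

    #include <unistd.h>

    // Sketch: append a record and force it to stable storage. The fsync()
    // blocks until the (possibly virtualised) disk acknowledges the data,
    // which is where the slow getnewaddress time goes.
    bool WriteRecordDurably(int fd, const void* data, size_t len)
    {
        if (write(fd, data, len) != (ssize_t)len) return false;
        return fsync(fd) == 0; // a no-op under eatmydata
    }
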
< sipa>
s/partition/filesystem
< sipa>
i think? or is it literally the disk block cache?
<@wumpus>
yes, filesystem
<@wumpus>
or not sure really
<@wumpus>
it might well be trying to sync the entire virtual disk to the host's disk
< sipa>
but syncing of a filesystem needs dependency information between sectors, or you may end up with an inconsistent state
<@wumpus>
I suspect it's something like that at least: fsync() inside the VM has file granularity, but qemu calling fsync() has complete file system granularity
< sipa>
so even if it's a disk level cache, it needs information from the filesystem to order the writes
<@wumpus>
so not only the wallet is fsynced, but also all the other things the tests do such as writing tons of log files
<@wumpus>
anyhow I'll try with eatmydata and see if it resolves the slowdown
< bitcoin-git>
[bitcoin] laanwj opened pull request #10220: Experiment: test: Disable fsync in travis tests (master...2017_04_tests_eatmydata) https://github.com/bitcoin/bitcoin/pull/10220
<@wumpus>
so apparently on travis, eatmydata gives a 2x gain (thanks for testing MarcoFalke), not as good as in my VM (I may have some misconfiguration) but still nice
< sipa>
nice indeed!
< SopaXorzTaker>
offtopic PSA
< SopaXorzTaker>
wumpus, sipa, the large bitcoin collider client script is untrustworthy
< SopaXorzTaker>
refrain from running it until the author gives explanations
< sipa>
don't worry, i had no intention of running it
<@wumpus>
SopaXorzTaker: good sleuthing, but no, you don't have to be afraid I run random scripts from the internet on anything important, let alone bitcoin-related ones
< SopaXorzTaker>
wumpus, yeah
< SopaXorzTaker>
but this script actually does remote code execution
<@wumpus>
the whole premise is a bit scammy; it reminds me of the trojans in the 90's whose control component had a trojan too. So everyone using the trojan to grief other people got owned themselves too...
< luke-jr>
what is it even supposed to do?
<@wumpus>
people running this script try to steal coins by generating random private keys. This is incredibly unlikely to succeed, and even if it worked it'd be wrong in various ways
< luke-jr>
lol
< luke-jr>
bruteforcing privkeys is just ridiculous to attempt, but to do it with *Perl code*? lololol
< luke-jr>
at face value, it's obvious the only purpose is to be a backdoor
< SopaXorzTaker>
luke-jr, well
< SopaXorzTaker>
it's actually done with an inner C program which is natively compiled by the script
< SopaXorzTaker>
actually, the bruteforcing has some results
< sipa>
and it seems to have OpenCL code too
< SopaXorzTaker>
there were some addresses deliberately generated with weak PRNGs
<@wumpus>
yes, if it mimics specific bad PRNGs (or bad brainwallets) instead of simply randomly generating keys it can certainly turn up something
< sipa>
including things put there by the script's author :)
< SopaXorzTaker>
wumpus, yes
< SopaXorzTaker>
there is a so-called puzzle transaction
< SopaXorzTaker>
with 32 BTC
< SopaXorzTaker>
look it up
< SopaXorzTaker>
each address uses a key one bit stronger than the previous one
< SopaXorzTaker>
(there's 256 addresses)
< SopaXorzTaker>
eg. 0000..0001
< SopaXorzTaker>
0000..0011
< SopaXorzTaker>
0000..0101
< SopaXorzTaker>
0000..1011
< SopaXorzTaker>
and so on
<@wumpus>
yes, for the author it could be very profitable, and no need to bruteforce at all, just stealing all the wallets of people running this
<@wumpus>
in a way it's the classical con, make people believe something that's too good to be true
< bitcoin-git>
[bitcoin] TheBlueMatt opened pull request #10221: Stop treating coinbase outputs differently in GUI: show them at 1conf (master...2017-04-no-coinbase-display-lag) https://github.com/bitcoin/bitcoin/pull/10221
< morcos>
I'm happy enough to just exclude it in all my bitcoin.conf files, but I just want to see if everyone else is ok with the leveldb log spam... it prints a lot of useless messages right at startup (where everything else you care about is printed)
< morcos>
I'm not sure exactly what information we're expecting from the leveldb logging, so maybe there is a better solution that concentrates on that information, or at least cleanly aggregates it at startup... it's not really clear to me how to make much use of what it does log
< sipa>
only when you enable the relevant debug category?
< morcos>
sipa: i assume, but my habit is to always enable all
<@wumpus>
the more debugging is added, the less useful it becomes to run with debug=all
<@wumpus>
though I'm fine with a debug=alllowvolume or such if you want to add that, which excludes at least leveldb and libevent
<@wumpus>
but usually the recommendation is to add debug categories only when troubleshooting a certain subsystem; this became even easier with the RPC call to turn on/off individual debug flags
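
To illustrate the category gating being talked about here, a small self-contained sketch (hypothetical names, not the actual Bitcoin Core logging code): the idea is that a chatty subsystem such as leveldb is only formatted and written out when its category was enabled, so debug=all pays the full cost while a targeted debug=net does not.

    #include <cstdio>
    #include <set>
    #include <string>

    // Categories enabled at startup from -debug=<category> (or "all").
    static std::set<std::string> g_debug_categories;

    static bool CategoryEnabled(const std::string& cat)
    {
        return g_debug_categories.count("all") > 0 || g_debug_categories.count(cat) > 0;
    }

    // Because this is a macro, the formatting work and argument evaluation
    // are skipped entirely when the category is off.
    #define DEBUG_LOG(cat, ...) \
        do { if (CategoryEnabled(cat)) std::printf(__VA_ARGS__); } while (0)

    // e.g. DEBUG_LOG("leveldb", "compaction finished: %s\n", summary.c_str());
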
< jtimon>
is there any advantage to Q_FOREACH over c++11 foreach?
<@wumpus>
no
<@wumpus>
fairly sure c++11 foreach will work with qt objects too
< jtimon>
I mean, not in a way that it can't be solved
< sipa>
hmm, seems to be about some specifics with Qt containers
< jtimon>
the reason I ask is that I was trying to remove PAIRTYPE and it seems Q_FOREACH requires it too. I'm not completely sure though; I'm compiling with Q_FOREACH removed first, and then I'll try again without removing Q_FOREACH in case we prefer to keep it, but only -j4 since I'm on the laptop...
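
For reference, a sketch of the replacement jtimon is describing, using a plain std::map (the old Q_FOREACH/PAIRTYPE form is shown only as a comment): PAIRTYPE existed because a bare comma inside a macro argument splits it into two arguments, whereas a C++11 range-based for loop takes a real type and needs no such workaround, and it works with Qt containers as well.

    #include <cstdio>
    #include <map>
    #include <string>

    void PrintBalances(const std::map<std::string, int>& balances)
    {
        // Old style: Q_FOREACH(const PAIRTYPE(std::string, int)& item, balances)
        // New style: note the map's value_type is std::pair<const Key, T>.
        for (const std::pair<const std::string, int>& item : balances) {
            std::printf("%s: %d\n", item.first.c_str(), item.second);
        }
    }
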
< bitcoin-git>
bitcoin/master c9e31c3 Warren Togami: Clarify importprivkey help text with example of blank label without rescan...
< bitcoin-git>
bitcoin/master 50a1cc0 MarcoFalke: Merge #10207: Clarify importprivkey help text ... example of blank label without rescan...
< bitcoin-git>
[bitcoin] MarcoFalke closed pull request #10207: Clarify importprivkey help text ... example of blank label without rescan (master...importprivkey) https://github.com/bitcoin/bitcoin/pull/10207
< sipa>
BlueMatt: i'm going to add a WIP to the title, i'm not comfortable with merging until there are more substantial tests
< BlueMatt>
yea, I was starting to feel the same way... I mean, alternatively we could drop the multi-head support and only support single-action (i.e. a series of connects/disconnects) between full flushes, which would simplify things and go back a bit more to how it was before you rewrote a bunch
< BlueMatt>
sipa: ^
< sipa>
BlueMatt: i think it is fine to only test the single-head case for now
< sipa>
at worst, the result is not backward compatible once we need multihead
< BlueMatt>
sipa: well my point was just that the implementation of the multi-head-handling case is complicated enough that it adds a ton of review burden
< BlueMatt>
esp pre-utxo-db-format-change
< BlueMatt>
may be easier to just do it single-head-only, then do utxo-db, then change to multi-head... if we break compat there it's ok
< sipa>
the multihead code and pertxout are orthogonal, i think
< BlueMatt>
i haven't dug too much into pertxout yet, but shouldn't it simplify things, or is there still a concept of per-tx CCoins everywhere above the db?
< BlueMatt>
my assumption was the review for this would be much simpler if you don't have to think about making sure entire transaction objects are correct, and instead there are just add/remove-output operations
< BlueMatt>
at least i found it much easier to review prior to the latest changes, even ignoring the handle-disconnect stuff
< gmaxwell>
I think they turn out to be pretty much orthogonal.
< gmaxwell>
okay, that's a point.
< sipa>
BlueMatt: the Clean call is indeed a possible violation of that orthogonality...
< BlueMatt>
that was my primary example, indeed
< sipa>
i don't think there are any others, but it is a fair point that pertxout is breaking backward compatibility already, so perhaps attempting to already support multihead isn't actually worth it
< sipa>
but i think the complexity is mostly in testing
< sipa>
the difference in implementation between multihead and single head is just that loop and building of a set
< BlueMatt>
ok, i found the building of a set hard to reason about :p
< sipa>
fair enough, but you can reason about the cases that are relevant for single head?
< BlueMatt>
probably? dunno, i was more tired today than previous days, so there may also be a skew there :p
< sipa>
well, maybe it is best to explain the full algorithm, and the reasoning why it is correct, in prose in the comments