< GitHub94>
[bitcoin] wtogami opened pull request #8033: Fix Socks5() connect failures to be less noisy and unnecessarily scary (master...proxy_fail_too_scary) https://github.com/bitcoin/bitcoin/pull/8033
< jonasschnelli>
I'm impressed by how Bitcoin Core performs on a $29 computer (Pine64). Progress=~0.5 in <24h. dbcache=1500. Using a cheap/slow USB stick.
< btcdrak>
jonasschnelli: did your Pine64 arrive already?
< jonasschnelli>
Yes. It's syncing next to me.
< wumpus>
jonasschnelli: nice result
< jonasschnelli>
I really think this machine could allow my long-term goal: a full node in a box for ~50USD.
< jonasschnelli>
Nice casing, an LED as a status indicator. Could come with a 128GB USB stick with the blockchain preloaded. People can reindex/re-IBD if they like (the LED could indicate the re-index)
< jonasschnelli>
"your bank at home"
< wumpus>
sipa: how do you measure exact cycle counts in #8020?
< wumpus>
jonasschnelli: such a thing has always been my plan too, but yes, up to now devices have always been too weak for that. People using bitcoind on an RPi are just practicing masochism.
< jonasschnelli>
wumpus: Agree. Odroid and Pine are capable. RPi is probably not.
< * jonasschnelli>
is also wondering how sipa does measure the cycles...
< jonasschnelli>
can you measure it with gdb by stepping single instructions?
< jonasschnelli>
(on ASM level)
< wumpus>
I think he uses 'performance counters' with some profiler; it's certainly possible to count instructions and cycles with Linux's 'perf', for example, on a larger scale, but I've never been able to do so on a per-function level
< wumpus>
so yes, I wonder which exact software he uses
< wumpus>
try 'perf stat ls' for example
< wumpus>
of course the number of cycles will be different per CPU type, even between different vendors and models
< wumpus>
but still it's nice to be able to measure it that precisely
< wumpus>
what is possible with perf is 'sampling', e.g. making it probe the counters a certain number of times per second; it's possible to create some nice flame graph diagrams that way, showing where most of the resources are spent: http://www.brendangregg.com/flamegraphs.html
< wumpus>
there is even a 'perf top' to see what processes/functions consume most CPU cycles globally
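For the per-function numbers discussed above, the same counters that 'perf stat' reads can also be opened directly from a program. Below is a minimal sketch (Linux only, not from the Bitcoin Core tree) that counts CPU cycles around one region of code with perf_event_open(2); the busy loop stands in for whatever function is being measured, and error handling is kept to a minimum.

    // Count CPU cycles around a code region via perf_event_open(2),
    // the same kernel interface 'perf stat' uses. Linux only.
    #include <linux/perf_event.h>
    #include <sys/ioctl.h>
    #include <sys/syscall.h>
    #include <unistd.h>
    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    static long PerfEventOpen(perf_event_attr* attr)
    {
        // pid=0, cpu=-1: measure this thread on any CPU; no group, no flags.
        return syscall(__NR_perf_event_open, attr, 0, -1, -1, 0);
    }

    int main()
    {
        perf_event_attr attr;
        std::memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_CPU_CYCLES; // or PERF_COUNT_HW_INSTRUCTIONS
        attr.disabled = 1;
        attr.exclude_kernel = 1;
        attr.exclude_hv = 1;

        const long fd = PerfEventOpen(&attr);
        if (fd == -1) {
            std::perror("perf_event_open");
            return 1;
        }

        ioctl(fd, PERF_EVENT_IOC_RESET, 0);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);

        volatile uint64_t sink = 0;
        for (uint64_t i = 0; i < 1000000; ++i) sink += i; // stand-in for the function under test

        ioctl(fd, PERF_EVENT_IOC_DISABLE, 0);

        uint64_t cycles = 0;
        if (read(fd, &cycles, sizeof(cycles)) != static_cast<ssize_t>(sizeof(cycles))) std::perror("read");
        std::printf("cycles: %llu\n", (unsigned long long)cycles);
        close(fd);
        return 0;
    }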
< jonasschnelli>
(but the perf stats above are from a different computer)
< wumpus>
Features : fp asimd crc32
< * jonasschnelli>
installing perf on the Pine64
< wumpus>
I'd expected so: perf stat and friends by far work best on x86
< wumpus>
performance counter support for other CPUs is still catching up, though it sometimes works
< jonasschnelli>
Would openCL be something to speed up SHA256 batch calculation? At least for desktop pcs?
< wumpus>
in any case from what I understand this means you can use the vsha256hq_u32 vsha256h2q_u32 vsha256su0q_u32 vsha256su1q_u32 NEON intrinsics on that board
< wumpus>
I don't know yet how to implement sha256::Transform with them; I lost interest since it's not possible with my board, but it should be possible :-)
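For reference, here is how those four intrinsics fit together, assuming an ARMv8 CPU with the crypto extension and a compiler flag along the lines of -march=armv8-a+crypto. This is only the two building blocks, not a full sha256::Transform, and QuadRound/ScheduleUpdate are names made up for the sketch, not Bitcoin Core code.

    #include <arm_neon.h>

    // Four SHA-256 rounds. abcd/efgh hold the eight working variables as two
    // 4-lane vectors; wk holds the next four message words with their round
    // constants already added (W[i..i+3] + K[i..i+3]).
    static inline void QuadRound(uint32x4_t& abcd, uint32x4_t& efgh, uint32x4_t wk)
    {
        const uint32x4_t abcd_prev = abcd;
        abcd = vsha256hq_u32(abcd, efgh, wk);
        efgh = vsha256h2q_u32(efgh, abcd_prev, wk);
    }

    // Message schedule extension: produces W[i..i+3] from the previous sixteen
    // words (w0 = W[i-16..i-13], w1 = W[i-12..i-9], w2 = W[i-8..i-5], w3 = W[i-4..i-1]).
    static inline uint32x4_t ScheduleUpdate(uint32x4_t w0, uint32x4_t w1, uint32x4_t w2, uint32x4_t w3)
    {
        return vsha256su1q_u32(vsha256su0q_u32(w0, w1), w2, w3);
    }

A full Transform would load the 512-bit block, byte-swap it with vrev32q_u8, run sixteen of these quad-rounds while extending the schedule, and add the result back into the state.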
< jonasschnelli>
Hmm... yes. This sounds like another weekend project. :)
< jonasschnelli>
But free weekends are so precious and rare!
< wumpus>
sha256 is an inherently linear operation, so I'm not sure how well it lends itself to OpenCL parallelization. Indeed, maybe if you can manage to queue up a lot of different things to be SHA256'ed at once
< wumpus>
same for doing secp256k1 operations in opencl
< wumpus>
at the least, GPUs became a lot better with integer operations compared to the time I used it a lot, partially thanks to bitcoin mining :-)
< jonasschnelli>
Yes. But the main problem is probably how to split off batches and not lose performance when syncing back the worked-down batches.
< wumpus>
yes, exactly, usually the problem is how to structure the work at a higher level
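As an illustration of that higher-level structuring, here is a CPU-side sketch (not OpenCL, and not Bitcoin Core code) that hashes many independent messages concurrently. CSHA256 is the hasher from Bitcoin Core's crypto/sha256.h; Hash256 and HashBatch are names made up here, and a real implementation would bound the number of worker threads instead of spawning one task per message.

    #include <crypto/sha256.h> // Bitcoin Core's CSHA256

    #include <array>
    #include <future>
    #include <vector>

    using Hash256 = std::array<unsigned char, CSHA256::OUTPUT_SIZE>;

    // Hash every message independently. Each task is self-contained, so the
    // only "higher level" work is splitting the input and collecting results.
    std::vector<Hash256> HashBatch(const std::vector<std::vector<unsigned char>>& msgs)
    {
        std::vector<std::future<Hash256>> futs;
        futs.reserve(msgs.size());
        for (const auto& msg : msgs) {
            futs.push_back(std::async(std::launch::async, [&msg] {
                Hash256 out;
                CSHA256().Write(msg.data(), msg.size()).Finalize(out.data());
                return out;
            }));
        }
        std::vector<Hash256> results;
        results.reserve(futs.size());
        for (auto& f : futs) results.push_back(f.get());
        return results;
    }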
< wumpus>
in any case I think there is low hanging fruit in the form of better CPU implementations
< jonasschnelli>
Right. And I don't expect good GPUs in most bitcoind machines.
< jonasschnelli>
(mostly VPS/servers or barebones machines)
< wumpus>
e.g. there are also some practical issues with GPUs, they tend to be even less reliable (on average) than CPUs, and prone to overheating
< wumpus>
that too, who wants to use their high end gaming machine to sync the chain (except to show off how fast it can be done)
< gmaxwell>
BlueMatt: sipa: Another proposed implementation tweak for compact blocks: the sender can use the formula to decide the short ID length to send. The receiver can then also use the formula, and if their mempool is too big for the number of bytes sent, it can just use the top subset of the mempool.
< gmaxwell>
BlueMatt: if you get sipa's implementation tweaks in, I'll get some public nodes up running it. Maybe with a little hack to continually INV the top of the mempool to you every block, in order to hotstart you. (otherwise you have to run it for a day to get realistic hit rates)
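The formula itself is not spelled out in this discussion, so the sketch below only captures the idea with a placeholder rule built on the point made later (the false-positive rate is driven by how many mempool transactions get compared against the short IDs). ShortIdBytesFor and MaxPoolEntriesFor are invented names, not the real compact blocks design or any Bitcoin Core API.

    #include <cmath>
    #include <cstddef>
    #include <cstdint>

    // Placeholder rule: with b-byte short IDs there are 2^(8b) possible values,
    // so matching n_block IDs against n_pool mempool entries gives roughly
    // n_pool * n_block / 2^(8b) expected false matches. The sender picks the
    // smallest b that keeps this under a target, using its own mempool size.
    size_t ShortIdBytesFor(size_t n_pool, size_t n_block, double max_expected_fp = 0.01)
    {
        for (size_t b = 4; b < 8; ++b) {
            const double space = std::ldexp(1.0, 8 * static_cast<int>(b)); // 2^(8b)
            if (static_cast<double>(n_pool) * static_cast<double>(n_block) / space <= max_expected_fp) return b;
        }
        return 8;
    }

    // Receiver side of the same rule: given the b the sender chose, the largest
    // number of (feerate-sorted) mempool entries it can match against the short
    // IDs while staying under the same target.
    size_t MaxPoolEntriesFor(size_t b, size_t n_block, double max_expected_fp = 0.01)
    {
        if (n_block == 0) return SIZE_MAX;
        const double space = std::ldexp(1.0, 8 * static_cast<int>(b));
        const double limit = max_expected_fp * space / static_cast<double>(n_block);
        return limit >= static_cast<double>(SIZE_MAX) ? SIZE_MAX : static_cast<size_t>(limit);
    }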
< gmaxwell>
jonasschnelli: awesome.
< jonasschnelli>
Breaking up the CHDChain data model would save another 10-20 lines, but would lead to an ugly design.
< GitHub74>
bitcoin/master 0fd5997 Patrick Strateman: Fix insanity of CWalletDB::WriteTx and CWalletTx::WriteToDisk
< GitHub74>
bitcoin/master 373b50d Wladimir J. van der Laan: Merge #8028: Fix insanity of CWalletDB::WriteTx and CWalletTx::WriteToDisk...
< GitHub54>
[bitcoin] laanwj closed pull request #8028: Fix insanity of CWalletDB::WriteTx and CWalletTx::WriteToDisk (master...2016-05-09-cwalletdb-writetx) https://github.com/bitcoin/bitcoin/pull/8028
< BlueMatt>
gmaxwell/sipa: yea, thinking about it I'm really not a fan of the sender calculating the size in compact blocks...it is really awkward that the sender is picking a value based on their own mempool size assuming the receiver has the same size
< gmaxwell>
BlueMatt: it's harmless, because the receiver can artificially reduce their effective mempool size if the sender picked too small a value. (and if the sender picked too large, that's harmless too, just a bit more bandwidth)
< BlueMatt>
not if, e.g., the sender was just brought online, so the mempool isn't the top of the peer's mempool, but a different random set
< gmaxwell>
what does that have to do with anything?
< gmaxwell>
if so, they may assume the peer's mempool is smaller than it is, send only 5 bytes when they should have sent 6 and the peer will end up having to gettxn as if they only used the top 10000 txn in their mempool.
< BlueMatt>
gmaxwell: hmm? if the sending peer just came online, then their mempool is small, but random, not the "top X" txn.
< gmaxwell>
the content of the sending peer's mempool isn't important.
< BlueMatt>
no, but its size matters
< BlueMatt>
oh, i see your point though
< gmaxwell>
right.
< gmaxwell>
it just means they may go too small, but if they do, the worst that happens is the receiver needs to gettxn as if the receiver's mempool was also smaller.
< gmaxwell>
(though it could still use the extra txn in an attempted gettxn-less reconstruction)
< gmaxwell>
e.g. use the whole pool; success? if so, stop. Else remove everything except the top X (based on size), and gettxn.
< BlueMatt>
still, makes me uncomfortable for the sender to pick a shortid size based on their mempool size when what actually matters is the receiver's mempool, or, really, the miner's mempool
< BlueMatt>
like, this falls apart the second a miner picks a tx not from the top of the fee-sorted pool
< BlueMatt>
or with cpfp or something
< gmaxwell>
what matters is pretty much exclusively the receiver's mempool. The driving factor in the fp rate is how many txn will be compared against the short IDs.
< BlueMatt>
yesyes, but you're suggesting using the "top X" from your mempool
< gmaxwell>
it doesn't fall apart, it just gets more approximate. In reality though you're talking about a corner case. 5 bytes is good for mempools significantly larger than we typically have.
< BlueMatt>
hmm...lemme get more coffee and think, I may just be being tired
< gmaxwell>
just means that you're going to gettxn a few extra txn when the sender goes too small; this could be further improved by making the sender do the table amount +1.
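A sketch of the fallback described above: try to reconstruct against the whole local mempool first (hoping for a gettxn-less hit), and only if that fails, behave as if the mempool were as small as the sender assumed and request the rest. Everything below is simplified and hypothetical (plain 64-bit short IDs, no collision handling); it is only meant to show the control flow.

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct Reconstruction {
        std::vector<size_t> missing; // block positions that must be fetched via gettxn
    };

    // Match the block's short IDs against the first `limit` entries of the
    // feerate-sorted pool; return the block positions that did not resolve.
    static std::vector<size_t> Match(const std::vector<uint64_t>& block_ids,
                                     const std::vector<uint64_t>& sorted_pool_ids,
                                     size_t limit)
    {
        std::unordered_map<uint64_t, size_t> index;
        const size_t n = std::min(limit, sorted_pool_ids.size());
        for (size_t i = 0; i < n; ++i) index.emplace(sorted_pool_ids[i], i);

        std::vector<size_t> missing;
        for (size_t pos = 0; pos < block_ids.size(); ++pos) {
            if (index.find(block_ids[pos]) == index.end()) missing.push_back(pos);
        }
        return missing;
    }

    Reconstruction Reconstruct(const std::vector<uint64_t>& block_ids,
                               const std::vector<uint64_t>& sorted_pool_ids,
                               size_t sender_pool_limit)
    {
        // First attempt: use the whole local mempool, extra transactions included.
        if (Match(block_ids, sorted_pool_ids, sorted_pool_ids.size()).empty()) {
            return Reconstruction{};
        }
        // Fallback: act as if our mempool were only as large as the sender
        // assumed, and gettxn whatever the top subset cannot provide.
        return Reconstruction{Match(block_ids, sorted_pool_ids, sender_pool_limit)};
    }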
< Chris_Stewart_5>
Hmm, are the arguments for OP_PUSHDATA parsed as unsigned numbers?
< sdaftuar>
MarcoFalke: hi -- i was pretty sure the bip9-softforks test would be failing for everyone, but maybe it's a local problem
< MarcoFalke>
Is it failing when you try bitcoin/master?
< sdaftuar>
the failure is only because the script is outputting to stderr, and the stderr check is introduced in that pull
< sdaftuar>
so i take it you don't get this error when you run locally? "BDB3028 /tmp/testly60vwvd/blocks.db: unable to flush: No such file or directory"
< MarcoFalke>
nope
< MarcoFalke>
This happens after the nodes are shut down?
< sdaftuar>
yeah at the end of the test
< sdaftuar>
i think it's because the test is deleting the directory that contains the file used by the blockstore
< sdaftuar>
(it does this over and over, sort of a layer violation)
< sdaftuar>
but i don't know why this would only be affecting me and not you...
< sdaftuar>
which python do you use?
< MarcoFalke>
Shouldn't be the python version
< MarcoFalke>
It fails for you on py2 and py3
< MarcoFalke>
which bdb are you running?
< sdaftuar>
hm, not sure how to determine that?
< MarcoFalke>
Should be the default version if you don't pass opts to ./configure
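For the "which bdb" question above, one way to check what a build is actually linked against (shown only as an illustration, independent of what the test framework does) is Berkeley DB's own db_version() call:

    #include <db.h>    // Berkeley DB C API
    #include <cstdio>

    int main()
    {
        int major = 0, minor = 0, patch = 0;
        const char* banner = db_version(&major, &minor, &patch);
        std::printf("%s (%d.%d.%d)\n", banner, major, minor, patch);
        return 0;
    }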