< GitHub143>
bitcoin/master aa62b68 Pieter Wuille: Benchmark rolling bloom filter
< GitHub143>
bitcoin/master 1953c40 Pieter Wuille: More efficient bitsliced rolling Bloom filter...
< GitHub143>
bitcoin/master f17032f Wladimir J. van der Laan: Merge #7934: Improve rolling bloom filter performance and benchmark...
< GitHub9>
[bitcoin] laanwj closed pull request #7934: Improve rolling bloom filter performance and benchmark (master...benchrollingbloom) https://github.com/bitcoin/bitcoin/pull/7934
< phantomcircuit>
paveljanik, wallet.cpp was being compiled incorrectly
< phantomcircuit>
(or rather was compiling when it shouldn't have been)
< phantomcircuit>
ccache -C
< phantomcircuit>
and the issue disappeared
< paveljanik>
strange
< paveljanik>
never seen such issue with ccache...
< phantomcircuit>
haven't seen that kind of problem with it in... well yeah ever
< GitHub6>
[bitcoin] pstratem opened pull request #8028: Fix insanity of CWalletDB::WriteTx and CWalletTx::WriteToDisk (master...2016-05-09-cwalletdb-writetx) https://github.com/bitcoin/bitcoin/pull/8028
< phantomcircuit>
i like how the trivial patch to change keypool size has tons of activity
< phantomcircuit>
:P
< paveljanik>
phantomcircuit, you are not the first though. MAX_BLOCK_SIZE...
< jonasschnelli>
:)
< phantomcircuit>
ha
< phantomcircuit>
now i just need someone to review 8028
< phantomcircuit>
it's almost as trivial
< * phantomcircuit>
looks at jonasschnelli
< * jonasschnelli>
will look at 8028
< jonasschnelli>
I never liked the wtx.WriteToDisk()
< gmaxwell>
BlueMatt: I think pieter told you that your compact block recovery should request the txn when there is a collision. You should also have the recovery check the orphan pool for txn.
< MarcoFalke>
jonasschnelli, 8018 looks good. Mind addressing the nit?
< jonasschnelli>
MarcoFalke: yes. Will do in a couple of hours. Need to finish the open work before I can checkout the branch. :)
< MarcoFalke>
sure, take your time.
< * sipa>
casually mentions git-worktree, which allows checking out two branches simultaneously
< wumpus>
git-worktree is great, it requires a pretty new git though, unfortunately
< jonasschnelli>
hmm... I probably should check this.
< jonasschnelli>
I also have multiple working copies to switch between for work
< wumpus>
yes I simply have multiple clones now
< jonasschnelli>
But losing focus means losing productivity. :)
< wumpus>
will switch to git-worktree when I upgrade more widely to ubuntu 16.04
< sipa>
jonasschnelli: you need to upgrade your brain to a multicore one
< jonasschnelli>
haha... I already run on 8 cores!
< sipa>
wumpus: no problems with ubuntu 16.04 here
< jonasschnelli>
core-dev organizing, Pine64/Odroid installation, hardware wallet work, libbtc and managing a family... :)
< wumpus>
sipa: no problems here either, but there is no supported upgrade path from 14.04 to 16.04 yet
< sipa>
upgrading worked fine here
< wumpus>
so all my 16.04 installations are either upgraded from 15.10 or new ones
< gmaxwell>
Hurray for the multiple clone workflow!
< gmaxwell>
join us
< wumpus>
well what Ubuntu says is that "14.04 LTS to LTS upgrades will be enabled with the 16.04.1 LTS point release, in approximately 3 months time.". I think it's the safer option. I can wait 3 months for that anyhow :)
< sipa>
wumpus: ha, i didn't bother to look that up and just used the dev upgrade... it has always worked fine in the past :)
< sipa>
and if it fails, it's not like reinstalling is a disaster
< * sipa>
zZzZ
< wumpus>
heh :)
< wumpus>
I've had very mixed experiences with upgrading, usually it goes fine, but I've also had it crash during upgrade once, or apparently upgrade fine only to hit an avalanche of errors at the next start
< wumpus>
worst upgrade experience I ever had was with slackware, a bug in a documentation(!) package upgrader caused a rm -rf /, when I was wondering why it was taking so long it was nearly too late. These days I keep good backups.
< gmaxwell>
wumpus: there have been a couple of those incidents over the years.
< wumpus>
ouch
< gmaxwell>
one of the popular linux music players ("amarok"?) had a system("rm -rf "||unquoted_file_name); for when you told it to delete a file...
< gmaxwell>
better not delete "dance / greatesthits.mp3"
< kinlo>
sigh, what's wrong with a call to unlink() ? why do ppl call the shell so much
< wumpus>
yes, seems absurd to use the shell for that. Usage of system() is usually a code stink
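A minimal C++ sketch of the contrast being drawn here, assuming POSIX unlink(2); the unsafe variant is an illustration of the bug described above, not code from any actual player:

    #include <cstdio>    // perror()
    #include <cstdlib>   // system()
    #include <string>
    #include <unistd.h>  // unlink()

    // DANGEROUS: the filename is pasted into a shell command unquoted, so
    // "dance / greatesthits.mp3" splits into three arguments, one of which
    // is "/" -- and rm -rf happily removes the filesystem root.
    void DeleteViaShell(const std::string& path)
    {
        std::string cmd = "rm -rf " + path;
        std::system(cmd.c_str());
    }

    // Safe: unlink(2) takes the path as a single argument; no shell is
    // involved, so no word splitting or metacharacter expansion happens.
    void DeleteViaUnlink(const std::string& path)
    {
        if (unlink(path.c_str()) != 0)
            perror("unlink");
    }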
< jonasschnelli>
can the cache-sizes be changed during runtime?
< jonasschnelli>
Things like nCoinCacheUsage
< wumpus>
no
< wumpus>
(I don't think there's any deep reason for that not being possible though)
< wumpus>
at least the size of the coin cache could be trivially changed, it's just a threshold
< gmaxwell>
some kinds of datastructures are fairly hard to resize at runtime. ... bloom filters being an example.
< wumpus>
the leveldb caches on the other hand probably require re-opening the database
< wumpus>
true, but I don't think we have any user-sizable bloom filters
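A sketch of why the coin cache limit is "just a threshold": the flush path compares current usage against a limit each time it runs, so making that limit mutable at runtime would suffice. The names mirror the Bitcoin Core code of the time (nCoinCacheUsage, DynamicMemoryUsage), but this is an illustration, not the actual FlushStateToDisk() logic:

    #include <atomic>
    #include <cstddef>

    // Illustrative runtime-adjustable coin cache limit; an RPC handler
    // could simply store a new value here between flush checks.
    std::atomic<size_t> nCoinCacheUsage{100 * (1 << 20)};  // 100 MiB default

    template <typename CoinsCache>
    void MaybeFlush(CoinsCache& cache)
    {
        // Flush dirty cache entries to the on-disk view once in-memory
        // usage crosses the (now adjustable) threshold.
        if (cache.DynamicMemoryUsage() > nCoinCacheUsage.load())
            cache.Flush();
    }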
< paveljanik>
so rebasing the branch is the solution
< paveljanik>
yup
< wumpus>
yes, but asking everyone to rebase their pull request because of completely unrelated issues is messy
< phantomcircuit>
i think travis is screwing up on 8026
< paveljanik>
yes, so if travis is using the old config and new test scripts from master, then yes, reverting the mentioned commit can help
< phantomcircuit>
hmm maybe not
< phantomcircuit>
oh i see
< phantomcircuit>
heh
< phantomcircuit>
fundrawtransaction.py tries to encrypt the wallet...
< GitHub96>
[bitcoin] laanwj opened pull request #8030: test: Revert fatal-ness of missing python-zmq (master...2016_05_revert_zmq_req_tests) https://github.com/bitcoin/bitcoin/pull/8030
< jonasschnelli>
The idea I had with resizing the caches was that Bitcoin-Qt could ask for a db cache size during first run (like the datadir); then it could allocate ~>1GB, and after IBD reduce it back to 100MB.
< jonasschnelli>
Using 1.5GB would probably reduce IBD from a couple of days to a couple of hours
< wumpus>
sounds like a good idea to me, probably the same option should apply when the node has been down for a while and is catching up
< wumpus>
it should be optional of course; not everyone cares about fast sync speed, some people prefer it to happen in the background with as little system load (cpu, memory) as possible
< jonasschnelli>
right
< wumpus>
but a choice when first launching the software would make sense
< wumpus>
sync as fast as possible, but hogging the computer, or sync slowly and steadily in the background, this could also set -par
< jonasschnelli>
wumpus: what was the reason you started with lmdb? Did you have problems compiling leveldb for Odroid?
< wumpus>
leveldb was horribly slow on my odroid-C2
< jonasschnelli>
I just successfully compiled (and running IBD) bitcoin-core master for Pine64
< wumpus>
I noticed high I/O latency, but had nothing to compare to, so I wondered whether another database would do better
< wumpus>
haven't had any problems with stability of leveldb on aarch64
< jonasschnelli>
wumpus: But i guess you ran with a high DB cache?
< wumpus>
yes, reasonably high, though the device has 2GB of memory so some care has to be taken not to invite the OOM killer
< jonasschnelli>
I guess if i run with -dbcache=1500 it will result in only tiny disk i/o?
< wumpus>
as long as the entire utxo set doesn't fit in memory you'll have quite a lot of disk i/o, especially after flushes
< jonasschnelli>
Right. This is a point.
< wumpus>
anyhow I wanted a lmdb branch to be able to compare databases, many people said they were going to try with lmdb over the years and no one actually did it :) And ARM going 64-bit as well makes it somewhat more attractive.
< wumpus>
I also have a dummydb branch, whose point is to have the entire utxo set in memory but in a more compact format than expanded -dbcache
< wumpus>
e.g. an utxo dump takes about 1.5GB, whereas a sync with -dbcache=<lots> goes to about 5.5GB of memory usage, so there is some scope there (not enough to fit everything into the 2GB of memory of the odroid probably, there will always be some overhead, but okay)
< jonasschnelli>
wumpus: what i/o (disk) do you use? SDCard? USB/SSD?
< wumpus>
I've tried with external USB2 hdd and with a network block device mounted over a gigabit link
< wumpus>
sdcard is a pretty bad idea for the database, I've worn at least one USB stick that way :)
< jonasschnelli>
Right. We discussed that before. A gigabit link is probably the fastest i/o (next to the GPIO).
< wumpus>
there's no SATA on the board unfortunately or I'd have used that
< wumpus>
yes the network I/O is fast
< wumpus>
in throughput at least, latency was still disappointing here
< jonasschnelli>
Hmm... good point. Latency might be very important for the utxo set
< jonasschnelli>
Pine64<->USB2.0<->HDD should result in 30MB/s r/w. Not very powerful but should be enough for a full node.
< wumpus>
as for databases my (inconclusive) results seem to be that lmdb was somewhat better in query latency, but leveldb seems to be faster in writing
< jonasschnelli>
Disabling bloom filter should be done then.
< wumpus>
yes the throughput is good enough
< wumpus>
especially for block storage
< jonasschnelli>
But I guess also the latency. GBit Eth. has probably higher latencies (regarding utxo set)
< MarcoFalke>
wumpus, have you tried clearing the travis cache? If you look at 7938 there are also other pulls messed up (This one prob due to the cpp11 switch) ...
< MarcoFalke>
I am just guessing that it is caused by corrupt cache, though.
< wumpus>
I've tried clearing the caches for a few pulls, yes, what I have not tried is clearing all the caches
< wumpus>
(afraid it will hamper all testing)
< wumpus>
but I can try clearing 7938, sure
< MarcoFalke>
As I understand it, it should only make it slower until there is a commit to -master. But if you prefer to disable the zmq error, fine with me.
< MarcoFalke>
I'd rather have more travis failures than fewer, because tests can only tell you something is wrong; they can never tell you something is right.
< MarcoFalke>
So the alternative would be to ignore unrelated errors as best as possible
< MarcoFalke>
if they accumulate to high numbers, though, it might be better to disable the test...
< wumpus>
but false positives are bad, the missing zmq error prevents all RPC tests from being run
< wumpus>
it's preferable to just not run the zmq test - especially if the pull in question doesn't change anything zmq related at all
< MarcoFalke>
I am assuming it would fail the other test as well because there is something wrong with the cached python version
< wumpus>
I don't think there is anything wrong with the cached python version, apart from the missing python3-zmq package
< wumpus>
I think it's simply a matter of the wrong package being installed
< wumpus>
e.g. python-zmq is installed instead of python3-zmq
< wumpus>
so the test fails to find it
< paveljanik>
But anyway, this is the only test that needs a separate package installed. I think it should be written so it "succeeds" when there is no such package. It would make testing easier on clean installs, shorten the documentation, etc.
< wumpus>
I'm open to other solutions, but 'all old pulls have to be rebased' is not acceptable
< MarcoFalke>
paveljanik, The error message is pretty clear how to fix the issue on the user side (#ERROR: \"import zmq\" failed. Set ENABLE_ZMQ=0 or ...
< paveljanik>
MarcoFalke, yes, of course. But it slows down the work a bit.
< paveljanik>
wumpus, yes, definitely. It can also help us to understand the problem.
< MarcoFalke>
and I want travis to fail if python-zmq is missing instead of just returning 'succeed'
< wumpus>
as this would mean you could no longer submit pulls based on older commits for no good reason
< paveljanik>
right now, we are all guessing only.
< jonasschnelli>
gmaxwell, sipa: for keypools containing HD keys: do we need two keypools? one for internal (change), one for external usage?
< sipa>
yes
< sipa>
but i don't think that's a necessity for a first working version
< jonasschnelli>
sipa, okay will start using only the external chain.
< jonasschnelli>
Also,.. i will re-derive the external child key in each GenerateNewKey to keep the diff small and tight.
< jonasschnelli>
sipa, if we only have one hd master key in the database, would a static AES IV be okay for encrypting/decrypting the master key?
< jonasschnelli>
Or do i have to generate a random IV and store it in the same data-record?
< jonasschnelli>
nm, ... need to do the latter
< sipa>
you can just make the master key itself a wallet key
< sipa>
and store the corresponding pubkeyhash in the wallet
< jonasschnelli>
meh..
< sipa>
so you automatically get the encryption benefits
< jonasschnelli>
thought about that. But do we really want to lose the CExtKey metadata?
< sipa>
how do you mean?
< jonasschnelli>
This would really be hard to extend then (if we would use a CKey for the master key)
< sipa>
why?
< jonasschnelli>
What if someone wants to later start a wallet with a xpriv at m/44/0/0?
< jonasschnelli>
And how would we distinguish between a normal CKey and a CKey that's used as a bip32 masterkey?
< jonasschnelli>
Just over the metadata?
< sipa>
just make it a ckey that's not added to the keypool
< sipa>
so it's never exposed as a receive address
< sipa>
maybe add a field to its metadata to indicate that it's not a wallet key
< jonasschnelli>
hmm... but ckey is used for crypted keys, right? At least during db load we need to identify the ckey that is used for a master key.
< sipa>
why?
< jonasschnelli>
hmm... right, we could just store the pubkeyhash somewhere to identify the master key...
< jonasschnelli>
But all "ckey" objects get passed into LoadCryptedKey
< sipa>
yes, that's exactly what you want?
< jonasschnelli>
What if the wallet is unencrypted?!
< sipa>
then it's stored as an unencrypted key
< jonasschnelli>
heh...
< sipa>
also exactly what you want
< jonasschnelli>
Where do we store the chaincode? Unencrypted in metadata?
< jonasschnelli>
Ah.. wait.
< jonasschnelli>
We always use setMaster()
< sipa>
yes, you'd have a hdderive wallet record which stores 1) the pubkeyhash of the master key 2) the path for derivation 3) how many keys have been derived already
< sipa>
you could avoid 3, and derive it at startup
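A hedged sketch of the record sipa describes; the struct and field names are illustrative, not actual Bitcoin Core wallet serialization:

    #include <cstdint>
    #include <string>

    // Illustrative only: one "hdderive" record per derivation chain.
    // CKeyID is Bitcoin Core's 160-bit public key hash type.
    struct CHDDeriveRecord {
        CKeyID masterKeyID;          // 1) pubkeyhash of the master wallet key
        std::string derivationPath;  // 2) e.g. "m/1", the path keys derive from
        uint32_t nDerivedKeys;       // 3) child keys derived so far
                                     //    (optional: recomputable at startup)
    };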
< jonasschnelli>
Okay. I'll see. This is simpler. Thanks for the sparring.
< jonasschnelli>
3 is necessary to refill the keypool i guess
< sipa>
right, but when generating a new key, you can just check whether you already have the newly derived key
< sipa>
and in that case, increment the (in-memory only) counter and try again
< jonasschnelli>
okay... this could get slow if your keypool gets bigger
< sipa>
perhaps you can even use exponential backoff, so it's only log(n) time in the number of keys
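A sketch of that exponential backoff: gallop upward in doubling steps, then binary search the boundary, so only O(log n) derivations are needed. HaveDerivedKey() is a hypothetical helper that derives m/1/<index> and checks whether the wallet already holds the resulting key; the sketch assumes indices are used densely from 0 upward:

    #include <cstdint>

    bool HaveDerivedKey(uint32_t index);  // hypothetical, see above

    uint32_t FirstUnusedIndex()
    {
        uint32_t lo = 0, step = 1;  // invariant: all indices below lo are used
        while (HaveDerivedKey(lo + step - 1)) {
            lo += step;             // gallop: skip 1, 2, 4, 8, ... keys at a time
            step *= 2;
        }
        uint32_t hi = lo + step - 1;  // first unused index lies in [lo, hi]
        while (lo < hi) {
            uint32_t mid = lo + (hi - lo) / 2;
            if (HaveDerivedKey(mid)) lo = mid + 1; else hi = mid;
        }
        return lo;
    }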
< jonasschnelli>
I don't use a flexible keypath for now, so your 2) is not yet necessary.
< jonasschnelli>
I will just use m/1/<key>
< jonasschnelli>
To avoid another 50L of code
< sipa>
ok
< luke-jr>
CodeShark: I think this is broken: if (pindex->nVersion > VERSIONBITS_LAST_OLD_BLOCK_VERSION && (pindex->nVersion & ~nExpectedVersion) != 0)
< gmaxwell>
BlueMatt: sipa suggests the sender choose the size, and send a byte with the number of bytes, constrained to 4-8.
< gmaxwell>
BlueMatt: sipa can probably suggest a decision table based on block/mempool size.
< BlueMatt>
gmaxwell: yea, I was thinking I might have to split udp vs tcp for size, but that's a good point that you could even lookup-table-it
< BlueMatt>
though I'm not sure how useful that would really be
< sipa>
BlueMatt: the formula is just log2(mempool_txn * unmatched_txn / failure_rate)
< sipa>
if that's over 40, use 6 bytes, otherwise 5
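Worked through as a sketch, with gmaxwell's 4-8 byte constraint applied (illustrative names, not the actual implementation): for example, 50,000 mempool txn, 2,500 unmatched txn and a 10^-6 failure rate give log2(50000 * 2500 / 1e-6) ≈ 46.8 bits, hence 6 bytes.

    #include <algorithm>
    #include <cmath>

    // Illustrative: pick the sender-chosen short-ID width in bytes.
    int ShortIdBytes(double mempool_txn, double unmatched_txn, double failure_rate)
    {
        double bits = std::log2(mempool_txn * unmatched_txn / failure_rate);
        int bytes = bits > 40.0 ? 6 : 5;         // sipa's rule above
        return std::max(4, std::min(bytes, 8));  // constrained to 4..8 bytes
    }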
< gmaxwell>
BlueMatt: did you see my comment about adding the orphan pool to your search?
< BlueMatt>
gmaxwell: I did not
< BlueMatt>
valid point, but "meh"
< gmaxwell>
BlueMatt: I instrumented my copy and a lot of the missed txn are in the orphan pool.
< BlueMatt>
WTF :(
< gmaxwell>
actually it makes a lot of sense until the node has been up a long time. the reason is that txn get stuck outside of the chain due to missing parents thanks to trickling.
< gmaxwell>
it'll be much better with 0.13 widely deployed, but it's cheap to check the orphan pool.
< BlueMatt>
yea, understood
< gmaxwell>
(we also need to fix some of the orphan pool issues that the trickle improvements exposed)
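A hedged sketch of the orphan-pool check gmaxwell suggests, using a hypothetical orphanPool map and ComputeShortId() helper rather than the actual data structures of the time:

    #include <map>

    // Hypothetical: orphan txn indexed by txid, and the short-ID function
    // used by the compact block scheme.
    extern std::map<uint256, CTransaction> orphanPool;
    uint64_t ComputeShortId(const CTransaction& tx);

    // After the mempool lookup misses, also scan the orphan pool before
    // falling back to requesting the transaction from the peer.
    bool FindInOrphanPool(uint64_t shortid, CTransaction& txOut)
    {
        for (const auto& entry : orphanPool) {
            if (ComputeShortId(entry.second) == shortid) {
                txOut = entry.second;
                return true;
            }
        }
        return false;  // not found: request the txn explicitly
    }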