< GitHub64>
[bitcoin] luke-jr opened pull request #8293: Bugfix: Allow building libbitcoinconsensus without any univalue (master...sys_univalue_opt) https://github.com/bitcoin/bitcoin/pull/8293
< GitHub96>
[bitcoin] jmcorgan closed pull request #8284: Backport remaining commits for out-of-tree builds from master to 0.12 branch (0.12...build-oot-0.12) https://github.com/bitcoin/bitcoin/pull/8284
< sdaftuar>
proposed topic for the meeting today: wrapping up the mining-related changes to 0.13.0 (please see #8294)
< jtimon>
since when does AcceptToMemoryPoolWorker take a bool* pfMissingInputs parameter instead of just logging a validation error and returning false?
< jtimon>
that's a consensus check, do we plan to pass bool* pfMissingInputs to libconsensus' verifyTx too?
< sipa>
since 0.1.0
< sipa>
it's literally been there in every release ever
< sipa>
i think we shouldn't keep the old algorithm just because we're not sure if the new one works
< sipa>
we can always revert code if needed
< sipa>
the current reason for keeping it is that when -blockminsize, -blockmaxsize, or -blockprioritysize are given, the new algorithm does not work (due to missing accounting)
< sipa>
if we fix the accounting, and make the new algorithm support those parameters, the old algorithm can go
< sdaftuar>
sipa: right, and that is addressed in the first commit of #8295
< gmaxwell>
the only gain I see is that if there is some stupidity in the new one, the old one is a switch away... but the new one has a reasonable amount of testing, and if it has some issue it seems unlikely that it's so major that just fixing it wouldn't be the preferred solution.
< sipa>
ok, great; i haven't looked
< jtimon>
I don't remember addScoreTxs, but ack on removing blockminsize
<@wumpus>
it's a bit late in the release to be removing arguments, on the other hand it was already not working so let's do it...
<@wumpus>
as long as it's documented in the release notes
< gmaxwell>
it should be removed but not cause the daemon to fail to start if specified.
< sdaftuar>
wumpus: agree, the release notes need a bunch of writeup for all the mining changes
<@wumpus>
or e.g. emit a warning/error if it is provided
< sipa>
i am perfectly fine with just making -blockminsize work for 0.13 with CPFP, and removing it in 0.14
< sipa>
sdaftuar: unless you think that would be much more work (i think it would be deleting 2 lines to stop supporting it)
<@wumpus>
sounds like that would be a waste of work, if it is going to be removed anyway, but I don't know how much work it is
< sdaftuar>
sipa: it'd be adding code to support it. i think it's probably easy, but i haven't thought carefully about it. probably a one line change.
< sipa>
i expect it to be trivial to make it work, and also trivial to remove
<@wumpus>
if it's trivial to make it work, let's go for that
< sipa>
also, i think it's utterly pointless in the current mining environment
< sdaftuar>
sipa: agreed.
< jtimon>
emit warning seems a good option
< gmaxwell>
I think it's a goofy option.
<@wumpus>
if it's not trivial, or annoying to test, let's get rid of it
< gmaxwell>
less options good.
< sipa>
ack, we could just test for its value, and fail at startup if a nonzero value is passed
< sdaftuar>
we have no tests for it now, i believe.
< sipa>
it has no tests
< sipa>
pretty sure
< sdaftuar>
my preference would be to get rid of it, rather than add a test to make sure it works.
< sipa>
sdaftuar: will review your patch
<@wumpus>
right, that's fine
< jtimon>
gmaxwell: why don't you want it to fail if the argument is provided?
< gmaxwell>
(sorry, just creating a ping macro for future use)
< michagogo>
Oh
< michagogo>
Hi
< sdaftuar>
jtimon: my preference for not causing failure is that that would be extra code to write/test/review. silently ignoring it seems easier, and harmless in this situation.
< gmaxwell>
jtimon: because it's a mostly meaningless setting that some people are going to have by virtue of copy and pasting things.
< sipa>
meh, ok
< kanzure>
i nominate jtimon for double inclusion in the ping, but otherwise ACK
<@wumpus>
I don't really like ignoring options, that can be really frustrating
<@wumpus>
'wtf doesn't this work', oh, then after an hour of debugging you notice that the functionality has been removed
< sipa>
agree
< sdaftuar>
wumpus: in this case though, you wouldn't have been noticing the old behavior "working"
<@wumpus>
at least add a warning, I don't care about failing
< jtimon>
I'm fine with either failing or giving a warning
< sdaftuar>
i can add a warning, sure.
< sipa>
let's discuss this when the actual PR is created
< jtimon>
fair enough
< sdaftuar>
i included removal of -blockminsize in 8295
< sdaftuar>
i can fix to warn if the argument is given
<@wumpus>
great
< gmaxwell>
okay warning fine.
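(An editorial aside for context: what is being agreed on here is to keep startup working but print a warning when the retired -blockminsize option is still set. A minimal sketch of that kind of deprecated-option handling, written as hypothetical Python pseudocode rather than the actual C++ init code:)

```python
def check_retired_options(config):
    """Warn, but do not abort, when a no-longer-supported option is present."""
    retired = {
        "-blockminsize": "block assembly no longer supports a minimum block size",
    }
    return [f"Warning: option {opt} is ignored ({reason})"
            for opt, reason in retired.items() if opt in config]

# A node started with a stale config keeps running but logs the warning:
for warning in check_retired_options({"-blockminsize": "250000", "-dbcache": "300"}):
    print(warning)
```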
< sdaftuar>
one thing i wanted to mention, if we do decide to remove addScoreTxs(), we could also remove the mining_score from the mempool multiindex
< gmaxwell>
wumpus: I agree in principle except for the fact that a LOT of people have copied the "example config" from the bitcoin wiki and are setting all kinds of crazy things. :)
< sdaftuar>
i didn't include that in 8295, but wanted to mention it in case we want to remove for 0.13
< sdaftuar>
it would give us a small memory savings in the mempool, but not sure if that's a change that we'd rather wait for 0.14 to make
<@wumpus>
gmaxwell: yes, indeed, it's an option that's bound to be misinterpreted/misused anyway
< gmaxwell>
Delete all the code.
< sdaftuar>
jtimon: it builds on removing addScoreTxs i think?
< jtimon>
sdaftuar: oh, I see
< sdaftuar>
jtimon: but yes i could have split those two commits out
< jtimon>
mhmm
< gmaxwell>
Okay, anything more on this that needs to happen in the meeting?
< jtimon>
whatever, minor point, I just don't think I can ack the pr like that because I haven't been following some of the refactors in miner much
< sdaftuar>
i think we can move on, thanks
< jtimon>
well, refactors and changes
<@wumpus>
sdaftuar: I'd say if the mempool fields are removed after your changes, it's not much of a risk; if everything is alright the compiler checks that nothing is referring to them
< sdaftuar>
wumpus: ok great, thanks
< sdaftuar>
i'll defer that until after 8295 is merged anyway
< sipa>
petertodd: yes, i think it is a good point
< petertodd>
sipa: cool, thanks
< sdaftuar>
sipa: you noticed a related issue as well, right?
< petertodd>
sipa: (mainly wanted to know if I needed to post a retraction!)
< sipa>
sdaftuar: the sigops counting? i believe that is not vulnerable to mauling (the verb related to 'malleability', apparently)
< sipa>
(also not found by me)
< gmaxwell>
#topic segwit
< sdaftuar>
sipa: it is, because we don't verify that the witness script matches the commitment in the pubkey output
< sdaftuar>
right?
< petertodd>
sdaftuar: no, I think we do that prior to sigops checking
< sdaftuar>
i'll take another look. pretty sure sipa and i discussed and concluded that we didn't.
< sipa>
sdaftuar: something to look into
< petertodd>
sdaftuar: yeah, wouldn't hurt to take another look - that review was a bunch of work and I may have been a bit faded by the end :)
< sipa>
i don't think these are serious problems (it allows preventing a node from getting a transaction, but there are other means for that, like announcing it and not responding)
< sdaftuar>
petertodd: i'm specifically referring to the sigopcount checks in accepttomemorypool, not consensus level checking
< sipa>
but they should be understood
< petertodd>
sdaftuar: yeah, I _thought_ I looked at that, but may have missed it - mempool was the last thing I looked at
< sipa>
petertodd: i'm really glad you did that review, by the way - the writeup makes a lot of things more communicatable
< petertodd>
sipa: thanks! yeah, mainly wanted to demystify the process of doing review
< gmaxwell>
I don't think it's of major importance, but "Bitcoin XT" recently started making only outbound connections to other XT nodes. In combination with segwit, I believe this will partition them. Because they are such a small number of nodes I don't think it warrants much attention, but something to be aware of.
< gmaxwell>
petertodd: yea, nice work.
< sdaftuar>
gmaxwell: we have a little time before that will happen, as we haven't activated the preferential peering yet
< gmaxwell>
sdaftuar: indeed.
< petertodd>
gmaxwell: ...
< sipa>
segwit by the way does not _only_ make outbound connections to segwit
< sipa>
but it strongly prefers them
< petertodd>
sipa: yeah, I noticed that - after 40 tries you connect to non-segwit outgoing
< gmaxwell>
sipa: right, I think that would be sufficient to partition there.
< sipa>
petertodd: yes, totally arbitrary number
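(An illustrative sketch of the preferential peering described above: prefer NODE_WITNESS peers for outbound slots and only fall back to a non-segwit peer after many tries. Hypothetical Python; the 40-try cutoff comes from petertodd's comment, the helper name and data layout are made up:)

```python
import random

NODE_WITNESS = 1 << 3  # service bit advertised by segwit-capable peers

def pick_outbound(candidates, max_witness_tries=40):
    """Prefer witness-capable peers; accept any peer after enough failed tries."""
    peer = None
    for _ in range(max_witness_tries):
        peer = random.choice(candidates)  # candidates: list of {"addr": ..., "services": ...}
        if peer["services"] & NODE_WITNESS:
            return peer
    # after max_witness_tries misses, connect to the non-witness peer rather than stall
    return peer
```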
< jl2012>
gmaxwell: they should fix that. We just need to let them know
< petertodd>
wouldn't be a crazy idea to have a mode that disabled the DoS banning for incoming nodes btw...
< sipa>
petertodd: DoS banning, or also DoS disconnection?
< gmaxwell>
petertodd: one can set that threshold arbitrarily high at least.
< petertodd>
sipa: both, to be able to (at least temporarily) work around bugs related to DoS banning
< petertodd>
sipa: equally, have a small subset of nodes that can be set to ignore the DoS banning
< gmaxwell>
Patch accepted to make low-dos-score a ranking criterion in eviction protection logic, which would also make running with a very high dos threshold more reasonable.
< petertodd>
gmaxwell: good idea
< gmaxwell>
jl2012: I was just made aware of it, but sure.
< gmaxwell>
my past expirence with that project has not been very good.
<@wumpus>
so a different ban threshold for incoming versus outgoing connections?
< petertodd>
jl2012: there's an issue open in the XT repo for it already
< btcdrak>
jl2012: there is a ticket open already
< gmaxwell>
petertodd: we should probably just brainstorm some about connection management stuff, there are a number of neat improvements we can make.
< sipa>
if we're thinking about such refactors, the DoS score should probably also attenuate over time
< petertodd>
wumpus: I think the main thing is just to have different thresholds, period, for some nodes, so that while you won't waste bandwidth on everyone, you keep a few peers connected regardless in case you're dealing with a DoS-related bug and should be distributing the data anyway
< sipa>
otherwise longer-lived connections will accumulate a score they don't deserve
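(A sketch of the attenuation idea: decay the accumulated misbehavior score exponentially so long-lived, well-behaved connections do not creep toward the ban threshold. Hypothetical Python; the one-hour half-life is an arbitrary placeholder:)

```python
import math
import time

class PeerScore:
    """Misbehavior score that decays over time instead of only accumulating."""

    def __init__(self, half_life=3600.0):
        self.score = 0.0
        self.last_update = time.time()
        self.half_life = half_life  # seconds for the score to halve (placeholder value)

    def _decay(self):
        now = time.time()
        self.score *= math.exp(-math.log(2) * (now - self.last_update) / self.half_life)
        self.last_update = now

    def misbehaving(self, points):
        self._decay()
        self.score += points
        return self.score
```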
<@wumpus>
petertodd: so a sort of halfway-whitelisting
< petertodd>
wumpus: also, along those lines, distributing invalid blocks with valid PoW isn't a crazy idea
< petertodd>
wumpus: yup
< gmaxwell>
petertodd: such a thing could also turn those peers into blocksonly.
< gmaxwell>
since that's all we care about for anti-partitioning.
< petertodd>
gmaxwell: yeah, that's a good idea
< gmaxwell>
and yet it almost completely eliminates dos concerns.
< petertodd>
blocksonly on-DoS + relaying of invalid blocks w/ valid PoW in a separate "may be invalid" message would satisfy everything I think
<@wumpus>
sipa: attenuating theDoS score over time makes sense, a very slow DoS attack isn't really a DoS attack (apart from keeping a connection slot open, but that's not what the score is to protect against)
< sipa>
theDos? sister of GLaDoS?
< petertodd>
sipa: lol
< btcdrak>
omg
<@wumpus>
then again it's not really an anti-DoS score is it, it won't get triggered on accidental over-use of resources, more a misbehavior score
<@wumpus>
hehe
< sipa>
yes, i think there may be reasons to introduce something like a "resource usage score" which is distinct from "misbehaviour"
<@wumpus>
the cake is a lie
< sipa>
and which gets used to decide which peers to disconnect in favor of others
< sipa>
but never causes a ban
<@wumpus>
yes, indeed
< sipa>
not a disconnection, unless there is a pressing reason to disconnect someone anyway
< petertodd>
wumpus: might not be a bad idea to go through the DoS stuff and remove bans in cases where the misbehavior doesn't take much cpu time to detect (and then use it in the anti-partitioning score instead)
< gmaxwell>
yea, I think there is a lot we can do here.
< gmaxwell>
So-- the issue of dbcache that I brought up last week can we talk about that for a bit?
< sipa>
ack topic
<@wumpus>
sure
< gmaxwell>
#topic dbcache
<@wumpus>
petertodd: agreed
< gmaxwell>
Last week I brought up that reindex with defaults was SUPER slow on my laptop, been a while since I'd reindexed with default dbcache.
< gmaxwell>
This resulted in more benchmarking, and wumpus has a PR to increase the default dbcache to 300mb and also clamp the leveldb cache usage (so that it doesn't go gigantic as dbcache increases)
< gmaxwell>
I did testing to confirm that the leveldb cache sizes don't have much effect.
< gmaxwell>
they seem like they have more effect with txindex enabled, however.
< * jonasschnelli>
will test 8273
<@wumpus>
which makes sense, txindex has a completely different access pattern than the utxo set
<@wumpus>
it's almost write-only
< sipa>
morcos: no more work on #6936?
< gmaxwell>
As a related aside, txindex is fairly slow; it makes an 'infinite'-dbcache reindex about 25% slower.
<@wumpus>
so we could cap that at a different value
< sipa>
txindex adds many many more writes
< gmaxwell>
wumpus: yes, ran a test with 32mb which has finished but I didn't have time before the meeting to collect the results, will shortly after.
< sipa>
though they should be configured to bypass all caches
<@wumpus>
yes txindex is very slow, just a lot of data
< gmaxwell>
my testing is on a fast spinning disk system, which I think is probably the right thing to test for on this.
< sdaftuar>
sipa: oh, morcos and i were just talking about whether there were things we were working on that used the mining score index, but i think we had both forgotten about #6936.
<@wumpus>
even the idea of indexing every transaction on the block chain is crazy
< jonasschnelli>
My last tests showed me that a reindex (master) took twice as long as a normal IBD (from random peers) from scratch...
<@wumpus>
there are much slower implementations possible though, like stashing everything into mysql :-)
< sipa>
jonasschnelli: wut?!
< gmaxwell>
One result of my testing is showing that even the 300MB dbcache is really not big enough to give good reindex performance. We should consider something for 0.14 to give mempool memory to dbcache during ibd/reindex or something.
< gmaxwell>
jonasschnelli: that last test was before the signature checking fix?
< jonasschnelli>
sipa: 5.5h reindex against ~3h IBD
<@wumpus>
jonasschnelli: whoa
< jonasschnelli>
I'm redoing it now...
< gmaxwell>
jonasschnelli: when was it?
< sipa>
jonasschnelli: what part of that was real reindex, and what was chainstate rebuilding?
<@wumpus>
I thought we solved that recently
< jonasschnelli>
a couple of weeks ago after sipas work
< jonasschnelli>
sipa: just a plain -reindex
< jonasschnelli>
with dbcache=8000
< sipa>
that's painful
< gmaxwell>
Reindex now has that header scan, it adds about 20 minutes on my test hosts that a sync wouldn't have.
< sipa>
jonasschnelli: can you benchmark -reindex-chainstate vs IBD?
<@wumpus>
that can't make the difference between 3h and 5.5h though
< gmaxwell>
but I tested with and without signature checking, and in both cases reindex in master is faster than 0.12.1.
< jonasschnelli>
sipa: okay. Will do.
<@wumpus>
20 minutes difference would be ok
< sipa>
it may depend on your disk
< gmaxwell>
I didn't test an IBD but I can easily do that. We do probably lose some time due to out of order blocks.
< jonasschnelli>
Also there is the potential_deadlock_detected that stops my node (-debug) every couple of minutes..
< sipa>
if it's a VM with a file-backed storage which is on a network filesystem stored on an encrypted ZFS container...
< sipa>
jonasschnelli: i hope your normal benchmarks are not with -debug builds
<@wumpus>
lock debugging slows down the sync too
< sipa>
additionally, we need to fix that bug
< sipa>
i'll look into that lockorder
<@wumpus>
but yes, the deadlock detected in the network code seems to be a real deadlock
< jonasschnelli>
wumpus: yes. Agree. But i have compare -debug IBD against -debug reindex
< sipa>
jonasschnelli: meh, don't do that
< sipa>
debug is not ever going to give you a realistic benchmark
< jonasschnelli>
developers are supposed to run --enable-debug software! :)
<@wumpus>
do we have an issue open for that deadlock issue btw?
< sipa>
jonasschnelli: yes, but not for benchmarking
< gmaxwell>
#topic net deadlock
< jonasschnelli>
wumpus: no. Let me open one.
< sipa>
jonasschnelli: thanks
< jonasschnelli>
sipa: Yes. Agree... will benchmark again without -debug
< sipa>
jonasschnelli: please list your exact commit (so we can identify the line numbers)
< jonasschnelli>
okay. Good point.
< gmaxwell>
#action jonasschnelli to open issue on lockorder deadlock report that he's seeing
< gmaxwell>
Any other topics?
< gmaxwell>
okay Seems quiet, perhaps we can end a bit early.
< sdaftuar>
maybe bip152?
<@wumpus>
seemingly not :)
< gmaxwell>
ah okay!
< sdaftuar>
not sure if that's worth bringing up here
< gmaxwell>
#topic BIP152
< gmaxwell>
So sipa has SW BIP 152 working on testnet and a number of people testing it.
<@wumpus>
good to hear
< petertodd>
gmaxwell: as in, segwit bip152 right? +1
< btcdrak>
I dont recall seeing a PR for that yet?
< gmaxwell>
petertodd: yes.
<@wumpus>
there's no PR for that
< sipa>
btcdrak: intentionally not
< sipa>
1) this allows us to test interaction between non-SW-BIP152 testnet nodes and changed nodes
< sipa>
2) needs some BIP changes (in BIP144, BIP152, or a separate bip entirely)
< petertodd>
sipa: +1
< sipa>
3) not necessary before the segwit softfork
<@wumpus>
not necessary, but would be good to have it before the activation, otherwise CB will suddenly disable
< sipa>
indeed; it is necessary imho before activation in master
< gmaxwell>
Thats my view.
<@wumpus>
right, for 0.12 it makes no difference, there's no cb there anyhow
< gmaxwell>
jinx
< sipa>
agree
< gmaxwell>
In any case, it's working, didn't require anything too complicated. (says the guy who didn't write it)
< sdaftuar>
sipa, gmaxwell: i still think it'd be better if the information in a blocktxn message could be understood without any state. did i make any progress trying to convince you?
<@wumpus>
there's no need to hurry it anyhow, CB and segwit separately are already huge changes, better to test them in isolation for now
< gmaxwell>
sdaftuar: only a very small amount.
< sipa>
sdaftuar: no strong opinion yet
< gmaxwell>
sdaftuar: we can talk more post meeting.
< sdaftuar>
sure
< gmaxwell>
okay, Anything else, we're almost out of time? :)
< petertodd>
ack endmeeting
< gmaxwell>
#endmeeting
< lightningbot>
Meeting ended Thu Jun 30 19:58:51 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
< gmaxwell>
May your usage of the remaining minute be productive.
< petertodd>
heh, my clock must be off... :)
< petertodd>
brb, gonna fix ntp :)
< sipa>
doh, too late
< gmaxwell>
sdaftuar: So; I'm still trying to come up with a case where the offsets would be useful where we are not already keeping an equivalent amount of state. There are DOS reasons why we can't, say, loadbalance getblocktxn fragments to all peers that have inved. But even if we did, keeping a list of indexes per peer is negligible additional state.
< sdaftuar>
sipa: btw morcos was away from his desk, but says he can look at the coins cache stuff again after 0.13 is branched off
< sipa>
sdaftuar: sounds good
< sdaftuar>
gmaxwell: basically you'd be tracking your last getblocktxn request to each peer
< sdaftuar>
i agree you could do something like that, but this is also complicated, as you'd have to ensure you don't have more than one in-flight
< gmaxwell>
right, effectively the same information the peer would send back over the wire, you'd keep in memory. You can have more than one in-flight per peer but not more than one for a given block.
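(A sketch of the per-peer bookkeeping gmaxwell is describing: remember which indexes were requested from each peer for each block, and allow at most one in-flight getblocktxn per (peer, block). Hypothetical Python; the class and method names are illustrative, not Bitcoin Core's:)

```python
class BlockTxnRequests:
    """Track outstanding getblocktxn requests, one per (peer, block hash)."""

    def __init__(self):
        self._in_flight = {}  # (peer_id, block_hash) -> list of requested indexes

    def request(self, peer_id, block_hash, indexes):
        key = (peer_id, block_hash)
        if key in self._in_flight:
            raise ValueError("getblocktxn already in flight for this block from this peer")
        self._in_flight[key] = list(indexes)

    def match_response(self, peer_id, block_hash, received_count):
        """On blocktxn, recover which indexes the received transactions correspond to."""
        indexes = self._in_flight.pop((peer_id, block_hash), None)
        if indexes is None or len(indexes) != received_count:
            return None  # unsolicited or mismatched response
        return indexes
```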
< sdaftuar>
i guess my point is that it's a trivial amount of bandwidth use in order to do less bookkeeping in code
< gmaxwell>
But at the same time we could probably even DOS a peer that getblocktxned the same block multiple times. (and we should perhaps consider doing that)
< sdaftuar>
i think that's a bit too aggressive -- there are legitimate reasons you could do that i think
< sdaftuar>
ie if block reconstruction failed in an unexpected place
< gmaxwell>
it's maybe 0.4% overhead
< sdaftuar>
but you got insight from another peer
< sdaftuar>
i just don't want to overly limit what we can do with the p2p messages we're adding
< sdaftuar>
0.4% is probably an upper bound on the overhead
< gmaxwell>
yes.
< gmaxwell>
sdaftuar: I agree on not limiting the usefulness but I really am just trying to get a case where I agree that it's actually helpful beyond reducing the state size slightly in an already stateful exchange.
< gmaxwell>
Because there is a cost-- the admittedly trivial overhead, and the extra error conditions to test for.
< sdaftuar>
yeah, it'd be more useful if i had an implementation of something that used it that i could point to, which i sadly don't
< gmaxwell>
I had a couple ideas, but then found that they didn't actually work. :)
< sdaftuar>
my intuition is that messages which can be interpreted on their own are more helpful for writing code. this isn't super relevant for bitcoind, but imagine how much better it would be to look at historical data that can stand on its own, versus requiring state
< sdaftuar>
or writing tests that do strange things
< gmaxwell>
hah but that's part of my point: it adds more corner cases that need to be tested. E.g. what happens when the indexes weren't what you asked for but were correct, what happens if the txn are what you asked for but the indexes aren't?
< * gmaxwell>
steps out for lunch and to ruminate.
< * sdaftuar>
has to leave to catch a train anyway, will think about it some more
< midnightmagic>
hrm.. I like meetbot.
< morcos>
sipa: yes i'm happy to take another pass at keeping the cache warm. but what's your current state of thinking with keeping the utxos by txid instead of by individual utxo? we had discussed at one point changing that?
< sipa>
i think we should switch to individual utxo
< sipa>
but it's a pretty nontrivial change
< morcos>
yeah...
< morcos>
i wonder what the current overhead is
< sipa>
the one pain point is storage of per-tx information (height and coinbaseness)
< sipa>
and nversion
< sipa>
copied into each utxo, or still have some per txid state
< morcos>
yeah maybe there is some hybrid that makes the most sense.. anyway, ok, just food for thought
< sipa>
in the storage format that means refcounting per txid, though
< sipa>
yuckness
< sipa>
or a read after write to see if any entries remain, and if not, delete the txid data
< morcos>
yeah, i was thinking more the latter, we kind of already do that right?
< morcos>
at the disk level for pruning
< morcos>
ugh sorry, bad choice of words, but I think it's called that in the code
< morcos>
maybe you could keep a bit field of the spentness of various vout#s in the txid level state, so when it's in memory, you know whether any uncached vout#s are still unspent.
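(A toy sketch of the hybrid morcos floats: store each output under its own key, keep the per-tx data (height, coinbaseness, nVersion) once per txid, and use a small bitfield to track which outputs are still unspent so the per-tx record can be dropped when the last one is spent. Hypothetical Python, purely to illustrate the data layout:)

```python
from dataclasses import dataclass, field

@dataclass
class TxMeta:
    height: int
    coinbase: bool
    n_version: int
    unspent_mask: int = 0          # bit i set => vout i is still unspent

@dataclass
class UtxoSet:
    meta: dict = field(default_factory=dict)   # txid -> TxMeta, stored once per tx
    coins: dict = field(default_factory=dict)  # (txid, vout) -> serialized output

    def add(self, txid, vout, coin, height, coinbase, n_version):
        m = self.meta.setdefault(txid, TxMeta(height, coinbase, n_version))
        m.unspent_mask |= 1 << vout
        self.coins[(txid, vout)] = coin

    def spend(self, txid, vout):
        self.coins.pop((txid, vout), None)
        m = self.meta.get(txid)
        if m is not None:
            m.unspent_mask &= ~(1 << vout)
            if m.unspent_mask == 0:    # last output gone: drop the per-tx record
                del self.meta[txid]
```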
< gmaxwell>
won't the key overhead make the dataset much larger? what's the average outputs per key in the database today?
< luke-jr>
I don't see why the new algorithm can't work with blockmaxsize etc?
< gmaxwell>
because it will produce wrong results if blockmaxsize is the limited commodity.
< gmaxwell>
wrong in the sense of not picking the optimal block subject to that constraint.
< zooko>
Hey Bitcoin core devs. Over in the Zcash project we occasionally do things which might be independently useful to Bitcoin core.
< zooko>
One of those is a performance measurement of a transaction with the maximal number of inputs.
< zooko>
We're currently talking about that test/measurement over in #zcash and Nathan mentioned that it might be of use to core.
< luke-jr>
gmaxwell: if we're hoping to see that constraint unnecessary in the next year, I don't think we need it to be "ideal"; but if sdaftuar wants to support it better, that's fine with me too
< gmaxwell>
I think its fine if its not ideal. given the current climate we can expect people to just turn it as high as possible in any case.
< gmaxwell>
just as they're doing with the current target.
< gmaxwell>
dgenr8: your intended behavior will partition the network and leave XT isolated.
< gmaxwell>
as your only connections from core nodes will be inbound and after segwit activates core will avoid making outbound connections to non-segwit enabled hosts; already that behavior is dangerous because there are relatively few hosts that it is willing to outbound connect to, making chance partitioning and vulnerability to sybil attacks worse.
< sipa>
gmaxwell: that's not the reason. the algorithm needs per-package byte sizes, which are not counted currently
< sipa>
gmaxwell: it's not just that it's suboptimal
< sipa>
without adding package-accounted sizes, it just cannot work
< gmaxwell>
sipa: hm why doesn't it just construct assuming that there is no size limit, then truncate when the block is full.
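(A sketch of the construct-then-truncate idea, assuming per-package sizes were available -- which, per sipa's point above, is exactly the accounting that is currently missing. Hypothetical Python over (fee, size) tuples, not the actual miner code:)

```python
def select_then_truncate(packages, max_block_size):
    """Pick packages greedily by feerate as if size were unlimited, then stop when full.

    packages: iterable of (fee, size) tuples for candidate packages (CPFP groups).
    """
    ordered = sorted(packages, key=lambda p: p[0] / p[1], reverse=True)
    selected, total = [], 0
    for fee, size in ordered:
        if total + size > max_block_size:
            break  # truncate; a smarter version might keep scanning for smaller packages
        selected.append((fee, size))
        total += size
    return selected, total
```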
< gmaxwell>
[pp.getblock(pp.getblockhash(x))['size'] for x in range(353213,417725)] ... man rpc from python is slow, that's running at 9 blocks per second and is going to take two hours to finish.
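(Most of that two hours is per-call round trips; Bitcoin Core's JSON-RPC server accepts batched requests, so the same scan can send hundreds of calls per HTTP request. A rough sketch, with placeholder URL and credentials:)

```python
import requests

RPC_URL = "http://127.0.0.1:8332"      # placeholder node endpoint
RPC_AUTH = ("rpcuser", "rpcpassword")  # placeholder credentials

def rpc_batch(calls):
    """Send a list of (method, params) pairs as one JSON-RPC batch request."""
    payload = [{"jsonrpc": "1.0", "id": i, "method": m, "params": p}
               for i, (m, p) in enumerate(calls)]
    replies = requests.post(RPC_URL, json=payload, auth=RPC_AUTH).json()
    replies.sort(key=lambda r: r["id"])  # batch replies are matched by id, not order
    return [r["result"] for r in replies]

def block_sizes(first, last, chunk=500):
    sizes = []
    for start in range(first, last, chunk):
        heights = range(start, min(start + chunk, last))
        hashes = rpc_batch([("getblockhash", [h]) for h in heights])
        blocks = rpc_batch([("getblock", [h]) for h in hashes])
        sizes.extend(b["size"] for b in blocks)
    return sizes
```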