< gmaxwell>
phantomcircuit: not clearing would significantly reduce address propagation, I think? like... if the ratio of addresses to filter size is small enough it's the same as never clearing.
< phantomcircuit>
gmaxwell, which addresses though?
< phantomcircuit>
real node count, addresses in addrman, or addresses that random peers push out to the network?
< gmaxwell>
well if real node count is low then basically learning about them depends on there being enough bogus addresses to flush out the table... seems like a dumb thing to count on!
< jgarzik>
heh, same comment on the PR
< phantomcircuit>
oh right
< phantomcircuit>
i forgot that PushAddress is what checks addrKnown
< gmaxwell>
:)
< phantomcircuit>
i believe that even with a very low real node/filter size ratio the node would still effectively broadcast its own address
< phantomcircuit>
for all outbound connections we broadcast our own address
< phantomcircuit>
gmaxwell, hehe also AdvertizeLocal -> AdvertiseLocal
< phantomcircuit>
gmaxwell, the logic in AdvertizeLocal is slightly different from the logic in the "version" message handler, is there a reason for that?
< phantomcircuit>
oh and the logic in the "version" handler ignores fDiscover
< phantomcircuit>
hmm except IsInitialBlockDownload() is going to be true fairly often
< phantomcircuit>
gmaxwell, im thinking that the IsInitialBlockDownload check shouldnt gate AdvertiseLocal in the "version" handler
< phantomcircuit>
(i bet that's mostly the reason why people show up in #bitcoin and ask why they have no inbound connections)
< CodeShark>
I would perhaps like to merge code that just moves the IsSuperMajority checks into a separate unit before going full force on version bits. This change would not affect any behavior and only touches main.cpp in very few places. Any objections to this approach?
< CodeShark>
should be easy to review...and easy to rebase against
< CodeShark>
The reason being that if we're going to still use ISM for deployments I'd like to start organizing them better
< CodeShark>
So that when we do the first version bits deployments they are easier to review
< gmaxwell>
So it seems someone might have taken my comment about canonicizing transactions to heart, as someone is complaining to me about mutants, and they say that they think the smaller of the mutants is the mutated one.
< gmaxwell>
and that it started for them around midnight utc.
< gmaxwell>
if someone here is doing that, you probably should cut it out. If nothing else, it'll complicate scanning the blockchain and figuring out which wallet software needs to be nagged to produce low-S signatures.
< CodeShark>
if we rebase the current ISM soft forks against my soft forks unit, we can more easily decide how to deploy them later on without having to rebase them again
< CodeShark>
we can either continue to use ISM, we can deploy them one at a time, or several together, or eventually use versionbits
< rusty>
CodeShark I can only really judge after I've seen the result, so use your judgement I guess.
< CodeShark>
I can rebase 68, 112, and 113 against it
< CodeShark>
as well as 65
< CodeShark>
I just want to see if there are any potential serious objections so I don't waste my time doing it
< rusty>
CodeShark that patch does too much. There's unnecessary code motion, collapse of variables, and a new return state.Invalid() which I don't see a source for.
< CodeShark>
which code motion are you referring to in particular?
< rusty>
CodeShark ah, this is more than one commit I'm looking at. One sec.
< CodeShark>
yeah, ignore the commit structure - just look at the final compare
< CodeShark>
I'll be sure to structure the commits to make them easier to review
< rusty>
CodeShark Hmm, IsValidVersion seems like a new concept; I can't see where the equiv code is removed. Maybe because you commented out rather than removing?
< CodeShark>
it replaces the explicit calls to IsSuperMajority in ContextualCheckBlockHeader
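A rough sketch of what such a consolidated check might look like (the names and version table are a guess at CodeShark's approach, not his actual PR; IsSuperMajority and nMajorityRejectBlockOutdated are the existing pieces it would wrap):

```cpp
// Hypothetical consolidation of the per-BIP version checks from
// ContextualCheckBlockHeader into a single table-driven helper.
bool IsValidVersion(int32_t nVersion, const CBlockIndex* pindexPrev,
                    const Consensus::Params& params)
{
    // Reject nVersion < v once a supermajority of recent blocks is at
    // version v: BIP34 raised the floor to 2, BIP66 to 3, BIP65 to 4.
    for (int32_t v : {2, 3, 4}) {
        if (nVersion < v &&
            IsSuperMajority(v, pindexPrev, params.nMajorityRejectBlockOutdated, params))
            return false;
    }
    return true;
}
```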
< CodeShark>
I'll be sure to make those movements in a single commit
< CodeShark>
yes, I did comment out what it replaces in main.cpp
< rusty>
Right, so overall it's not too bad, but I would be super paranoid and keep eg. "fEnforceBIP30" var, so diff is minimized. Then cleanup unnecessary vars at the end.
< rusty>
Make it super bite-size digestible. People may want a backport...
< CodeShark>
yeah, I want to try to change as little as possible in main.cpp.
< CodeShark>
Eventually I'd like the forks to be clearly visible within the consensus code and structured nicely...but that can wait
< phantomcircuit>
gmaxwell, ha
< CodeShark>
one of the goals, rusty, is to separate the review of BIPs for their features from the review of the deployment strategy
< CodeShark>
would be nice to disentangle the two
< CodeShark>
it would also be beautiful if we could avoid as much rebasing as possible from BIP proposal to BIP deployment
< CodeShark>
we keep getting stuck on stupid crap that continues to delay deployment unnecessarily
< gmaxwell>
versionbits must be backported unless we want to delay its deployment until 0.12 is the oldest thing we'd backport to. :(
< gmaxwell>
when 'backported' it could be reimplemented, of course.
< rusty>
CodeShark: Agreed, a laudable goal.
< gmaxwell>
I think sipa's initial plan might have been to do that... e.g. using an efficient implementation in new code and a 'walks the last 1000 blocks every block' implementation in backports.
< CodeShark>
walking back 1000 blocks every block isn't enough - we need to map state every activation interval in the block index
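For reference, the "walk the last 1000 blocks every block" approach gmaxwell mentions is essentially the existing ISM counter; a minimal sketch (the signature approximates Bitcoin Core's IsSuperMajority of this era):

```cpp
// Count how many of the previous nToCheck blocks are at or above
// minVersion; the supermajority holds once nRequired are found.
static bool IsSuperMajority(int minVersion, const CBlockIndex* pstart,
                            unsigned int nRequired, unsigned int nToCheck)
{
    unsigned int nFound = 0;
    for (unsigned int i = 0; i < nToCheck && nFound < nRequired && pstart != nullptr; i++) {
        if (pstart->nVersion >= minVersion)
            ++nFound;
        pstart = pstart->pprev;
    }
    return nFound >= nRequired;
}
```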
< phantomcircuit>
gmaxwell, what do you think about setting random bits to 0 in the bloom filter instead of clearing it at a fixed interval?
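A toy sketch of the idea phantomcircuit is floating (this is not Bitcoin Core's CBloomFilter API, just an illustration): rather than wiping addrKnown wholesale at a fixed interval, clear a few random bits periodically, so old entries decay gradually and address propagation never stalls completely.

```cpp
#include <cstdint>
#include <random>
#include <vector>

class DecayingBloomFilter {
    std::vector<uint8_t> bits;
    std::mt19937 rng{std::random_device{}()};
public:
    explicit DecayingBloomFilter(size_t nBytes) : bits(nBytes, 0) {}
    void SetBit(size_t i)        { bits[i / 8] |= uint8_t(1u << (i % 8)); }
    bool TestBit(size_t i) const { return bits[i / 8] & (1u << (i % 8)); }
    // Instead of Clear(): zero k randomly chosen bits. Entries whose
    // bits get cleared read as unknown again and will be re-relayed.
    void Decay(size_t k) {
        std::uniform_int_distribution<size_t> pick(0, bits.size() * 8 - 1);
        for (size_t n = 0; n < k; ++n) {
            size_t i = pick(rng);
            bits[i / 8] &= uint8_t(~(1u << (i % 8)));
        }
    }
};
```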
< CodeShark>
in any case, I'm fine with backporting versionbits itself after 0.12
< CodeShark>
the implementation is the easy part - it's getting it reviewed and acked that's a bitch :p
< gmaxwell>
we're pretty screwed on backport review bandwidth.
< CodeShark>
point is the current proposal won't require backporting
< CodeShark>
or will minimize it considerably
< gmaxwell>
there is always backporting even if the code is unchanged. :)
< CodeShark>
backporting will probably amount to changing a handful of lines in main.cpp
< CodeShark>
or are you also taking into account the proposed code movements for consensus and other stuff?
< gmaxwell>
More that I just mean that most of the 'backporting' work is actually the review to figure out if the changes are really safe in the new context.
< CodeShark>
yes, which is why I'm primarily interested in first merging stuff that will minimize such risks later on
< CodeShark>
it's a horrendous bottleneck
< CodeShark>
lots of people are frustrated
< CodeShark>
a good number of people doing real quantum leap type work on Bitcoin are waiting on changes that are taking forever
< CodeShark>
and it seems that we're wasting a lot of precious expert dev time having them basically repeat the same type of grunt work for each PR
< CodeShark>
(I'm looking at you, sipa)
< CodeShark>
to be clear, I don't want to sound like I'm complaining - this is not a complaint...I want to help Bitcoin Core be more efficient
< phantomcircuit>
that's interesting someone reported "non-canonical ReadCompactSize()" in their error field thing
< phantomcircuit>
see the same thing... on all nodes
< GitHub176>
[bitcoin] paveljanik opened pull request #6748: Rewrite help texts for features enabled by default (master...configure_disable) https://github.com/bitcoin/bitcoin/pull/6748
< phantomcircuit>
gmaxwell, at this point it seems like it would be best if BIP62 rules were made IsStandard
< gmaxwell>
phantomcircuit: IIRC they aren't even all implemented, and as far as bip62 goes many were only to be applied to higher versioned transactions which don't exist.
< gmaxwell>
we also have relatively low confidence that they're complete.
< gmaxwell>
(like... I don't think anyone has even ever tried implementing them and AFLing to try to get a passing mutant)
< phantomcircuit>
gmaxwell, sure... but it raises the bar a bit
< gmaxwell>
but .. it doesn't. it would have no effect on the current mutations if they're ecdsa based, until people started producing elevated version transactions.
< phantomcircuit>
gmaxwell, mutate to low-s, and reject high-s as non-standard even without higher version numbers
< phantomcircuit>
it's not a security consideration
< phantomcircuit>
just a nuisance reduction
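The low-S rule being discussed: an ECDSA signature (r, s) verifies equally as (r, n - s), where n is the curve order, so requiring s <= n/2 picks one canonical form and closes that malleability vector. A sketch using libsecp256k1's normalize helper (assuming a library version that ships it):

```cpp
#include <secp256k1.h>

// Flip a high-S signature to its canonical low-S twin in place.
// secp256k1_ecdsa_signature_normalize returns 1 if the input was
// high-S (and got flipped), 0 if it was already canonical.
bool NormalizeLowS(const secp256k1_context* ctx, secp256k1_ecdsa_signature* sig)
{
    return secp256k1_ecdsa_signature_normalize(ctx, sig, sig) == 1;
}
```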
< gmaxwell>
phantomcircuit: okay, that's the institutionalize-the-attack thing I mentioned a day ago.
< phantomcircuit>
wallets producing high-s are already broken; they'd be just as broken as they are now
< gmaxwell>
I'd really like to figure out who is producing high S and nag them to change.
< gmaxwell>
I went to go measure it today but of course the attacks obscure the results. :)
< phantomcircuit>
follow the crumbs of users complaining about weirdness with their wallet :P
< phantomcircuit>
or maybe they'll stop
< gmaxwell>
phantomcircuit: we should probably go write the nuclear canonicize transaction patch at least.
< phantomcircuit>
yup
< phantomcircuit>
gmaxwell, btw im trying to go back and get that trickle logic fix patch ready to commit
< phantomcircuit>
thus my questions from before about AdvertiseLocal
< phantomcircuit>
gmaxwell, well lets see, of the things on the BIP62 list the first is BIP66, right?
< wumpus>
<phantomcircuit> is there a reason that ARM is the fastest travis build? <- yes a) no GUI b) doesn't run tests
< Luke-Jr>
eh, why no GUI?
< Luke-Jr>
I just figured it was because it ran first :P
< CodeShark>
how can I run the same tests travis runs locally?
< Luke-Jr>
wumpus: is there a reason for unbanning lightningbot?
< wumpus>
Luke-Jr: something with depends, building Qt was somewhat tricky on ARM
< wumpus>
CodeShark: qa/pulltester/rpc-tests.py
< wumpus>
Luke-Jr: it's the logging bot
< Luke-Jr>
oh
< CodeShark>
thanks, wumpus
< Luke-Jr>
wumpus: btw, I believe 5574 and 6349 are good to go
< aj>
wumpus: this time for sure :-/
< aj>
wumpus: logs are backfilled from my scrollback too fwiw
< BlueMatt>
lol, you reg'd gmaxweII???
< midnightmagic>
lotta scammers.
< wumpus>
where bitcoin developers do their ritual tribal name-swapping dance
< CodeShark>
hmm, when I run the tests locally it succeeds - but travis is choking
< BlueMatt>
oh, I love those ones :(
< gmaxwelIl>
BlueMatt: there was a scammer using it in the past; also freenode is about to expire a bunch of nicks that haven't been used for a while.
< * BlueMatt>
misses when he ran the tester, then I could ssh in and figure it all out :)
< jonasschnelli>
Hmm... I think we could speed up Travis by moving to the new platform. It would require getting rid of "sudo" and "apt-*" calls and migrating to installing packages via the YAML structure.
< gmaxwelIl>
Luke-Jr: write the patch, I'll gladly ack/merge, just a "This document is a work in progress and is not complete, implemented, or otherwise suitable for deployment."
< jonasschnelli>
BlueMatt: an additional CI is still possible. Maybe no github auto-comment feature. I think some people run it.
< BlueMatt>
jonasschnelli: you miss the point: when the tester failed, I could manually fix it for my pulls. adding new testers just means more failures, not the ability for me to have privileged status by fixing my own pulls :p
< CodeShark>
is there anything else I can do to diagnose the travis test failure issue?
< wumpus>
CodeShark: travis servers tend to be very busy, this has, for me at least, triggered race conditions that wouldn't appear locally
< CodeShark>
so what's the workaround?
< BlueMatt>
load your workstation a ton before running locally?
< CodeShark>
lol
< BlueMatt>
also run tests on a workstation with like 20 cores?
< wumpus>
run your tests while running a parallel build of bitcoin core with gcc in the background?
< wumpus>
that will help to fill up memory as well as cores :-)
< BlueMatt>
wumpus: yea, if only my workstation would actually fill up its cores running "make -j"
< BlueMatt>
but, it doesnt :(
< BlueMatt>
damn 40 threads
< wumpus>
BlueMatt: if X is high enough (though it will probably fill up memory first)
< BlueMatt>
wumpus: no, not "make -jX", "make -j"
< BlueMatt>
as in, run it ALL at once
< wumpus>
but it creates a lot of nice I/O delays. Another option is the latter part of the chain verification with -par=high-number
< CodeShark>
are you saying the race conditions are in Bitcoin Core? or in the travis environment?
< wumpus>
BlueMatt: ooh I didn't know that one
< wumpus>
CodeShark: in your code change, usually
< CodeShark>
I didn't touch any threading stuff whatsoever
< BlueMatt>
CodeShark: time to dust off the cpu miner!
< wumpus>
CodeShark: where is it failing btw?
< BlueMatt>
wumpus: yes, its only useful when you have 64GB memory lying around that has half your hdd in cache anyway, plus lots of threads to eat the work quick
< BlueMatt>
ok, that shit was bugging me, now its 50 :)
< GitHub108>
[bitcoin] petertodd opened pull request #6750: Recent rejects backport to v0.11 (0.11...recent-rejects-v0.11-backport) https://github.com/bitcoin/bitcoin/pull/6750
< btcdrak>
Luke-Jr: gmaxwell: I asked coinkite to change their blog post
< DocHex>
btcdrak: (it's been updated) if you have a link to something about what's missing in BIP62, or what remains to be done, I'm happy to link to that on the blog too.
< sdaftuar>
BlueMatt: one other thought about the mempool limiting algorithm -- i was thinking about how, in TrimToSize, we're skipping over any packages that have the new tx as a descendant
< sdaftuar>
it seems a little counterintuitive to me that we might let in a new transaction that ends up joining some package near the bottom of the mempool, while it may have been able to evict some higher up package (and set the min relay fee higher than the bottom package in the mempool)
< sdaftuar>
i'm not sure there's any big problem here that needs solving, i haven't yet thought of any new fundamental attacks, but it made me wonder if this modified algorithm might be easier to reason about:
< sdaftuar>
when a new transaction comes in, let it in if it exceeds the current prevailing relay fee (and all the usual other checks). after accepting it, call TrimToSize() and remove packages from the bottom until we're under the size limit, and update the relay fee based on whether anything was evicted
< sdaftuar>
(at that point, we could set our notion of whether this tx was accepted or rejected based on whether it's still in the mempool at the end)
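In pseudo-C++, sdaftuar's alternative reads roughly like this (the helper names are hypothetical, not the actual #6722 code):

```cpp
#include <algorithm>

// Accept first, trim after: admit the tx if it clears the prevailing
// relay fee, evict the lowest-feerate packages until under the byte
// limit, then report acceptance based on whether the tx itself survived.
bool AcceptThenTrim(CTxMemPool& pool, const CTransaction& tx,
                    size_t nSizeLimit, CFeeRate& minRelayFee)
{
    if (!PassesUsualChecks(tx, minRelayFee))   // fees, scripts, policy...
        return false;
    pool.addUnchecked(tx);
    while (pool.DynamicMemoryUsage() > nSizeLimit) {
        // Remove the package with the lowest descendant feerate, and
        // raise the relay floor when anything is evicted so cheaper
        // replacements don't immediately churn back in.
        CFeeRate evicted = pool.RemoveWorstPackage();
        minRelayFee = std::max(minRelayFee, evicted);
    }
    return pool.exists(tx.GetHash());
}
```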
< morcos>
BlueMatt: your change to use only package fee rate and not max: any reasoning behind that?
< morcos>
I can't think of a problem off the top of my head. i know there were problems with earlier versions if you didn't use the max, but maybe they've all gone away now that we evict (and re-sort) on every pass.
< morcos>
but it certainly changes behavior.
< cfields>
jonasschnelli: sorry i missed your question yesterday, i didn't realize my irc client had shut down. It did seem unusually quiet, though :)
< cfields>
Missed the meeting too :\
< cfields>
wumpus / sipa: I'm working to have something to show wrt network code asap. It's taking much longer than expected, I've hit snags and had to refactor several times now
< cfields>
ultimately I've decided not to mess with much at the individual message level yet, so the general flow there is pretty much the same as now
< wumpus>
"i didn't realize my irc client had shut down" hehe that has happened to me too
< wumpus>
nice cfields!
< wumpus>
any problems with libevent? or just with our code structure?
< cfields>
wumpus: nah, no problems. Just lots of sudden "oh, so that's how that works. well shit." moments
< CodeShark>
cfields: as you probably know, I've also dabbled a bit in this messaging stuff...if you would like me to review anything I'd like to have a look
< cfields>
CodeShark: great, thanks
< jgarzik>
cfields, feel free to ask questions
< jgarzik>
cfields, there is a lot of subtlety. for example, we stop reading input if writing output pauses. fixes some TCP windowing attacks (along with some related logic).
< morcos>
btcdrak: typo? its just different text, oh you mean typo on the link?
< btcdrak>
yeah the link is to branch bip68 rather than the master branch, ping @maaku
< morcos>
what's the right way to leave feedback on the bip text
< morcos>
the terminology "reduced by 2^14" was confusing to me, also there are small typos
< morcos>
open an issue?
< btcdrak>
Open a PR directly
< btcdrak>
or if you dont have time PM me the issues and I'll make a PR quickly
< morcos>
Yeah I didn't want to PR because I'm not sure what the history of arriving at the phrase "reduced by" is, but I found it confusing. that same sentence has a typo "heigh" instead of "height". If I find more, I'll PR them, but I'm going to review pulls now
< btcdrak>
ty
< btcdrak>
does anyone know which block BIP66 enforcement began on, specifically?
< CodeShark>
I think 358806
< CodeShark>
it's probably something really stupid - according to the cli, it's still activating the best chain when it runs
< maaku>
sdaftuar: it's a one-line fix to make the CLTV soft-fork do 68, 112, and 113 as well
< btcdrak>
maaku, we need to bump transaction version to 2 also right?
< BlueMatt>
morcos: the reasoning was it's no longer useful to do so
< BlueMatt>
(and simplicity is better)
< BlueMatt>
sdaftuar: hmmmmmm, yeaaaaa
< BlueMatt>
sdaftuar: yay more loc removal :)
< jgarzik>
maaku, oh? (RE one-line)
< morcos>
BlueMatt: For example: if A->B,C and B,C -> D. D = 90 B = 10, C = 10 A = 70. Then the package sorts before B or C, but the max sorts after
< morcos>
so would you really want to boot A from your mempool? it might be the most expensive immediately minable tx in there?
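One hedged reading of that example, assuming equal transaction sizes and that the numbers are fees (A is the parent of B and C, which are both parents of D):

```
descendant-package score(A) = (70 + 10 + 10 + 90) / 4 = 45   <- sorts lowest
descendant-package score(B) = (10 + 90) / 2           = 50
descendant-package score(C) = (10 + 90) / 2           = 50
With the pure package score, A is evicted first even though its own
feerate (70) makes it the best immediately minable tx in the pool;
with max(own, package), A scores max(70, 45) = 70 and outlives B and C.
```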
< maaku>
ok maybe 2-3 lines. basically the CLTV soft-fork does a supermajority check and sets a script validation flag. 112 would set a different script validation flag, and 68 and 113 would set a lock-time flag
< morcos>
its not clear to me that max is better, but its a difference worth considering.
< maaku>
the pull requests for 68, 112, and 113 are specifically coded to support the same soft-fork rollout as cltv
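Sketching the "2-3 lines" maaku describes (the BIP112/68/113 flag names below are illustrative; only the CLTV gate exists in the PR under discussion):

```cpp
unsigned int flags = SCRIPT_VERIFY_P2SH | SCRIPT_VERIFY_DERSIG;
unsigned int nLockTimeFlags = 0;
// The same ISM supermajority gate that enforces BIP65 would also flip
// the new script and locktime flags for BIPs 112, 68, and 113.
if (IsSuperMajority(4, pindex->pprev, consensusParams.nMajorityEnforceBlockUpgrade,
                    consensusParams)) {
    flags |= SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY;   // BIP65
    flags |= SCRIPT_VERIFY_CHECKSEQUENCEVERIFY;   // BIP112 (illustrative name)
    nLockTimeFlags |= LOCKTIME_VERIFY_SEQUENCE;   // BIP68  (illustrative name)
    nLockTimeFlags |= LOCKTIME_MEDIAN_TIME_PAST;  // BIP113 (illustrative name)
}
```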
< morcos>
maaku: i've just started my review of these things, is there an analysis anywhere of why we don't need the mempool-only versions to be released for a while before we do the soft fork
< BlueMatt>
ok, TrimToSize is now 10 LOC
< morcos>
for instance do we know there are no txs which would fail?
< morcos>
well not none, but you know what i mean, they aren't being regularly generated
< morcos>
quite honestly this seems like kind of a lot of new code to me, and it might make more sense to have it as mempool-only first before softforking it in
< maaku>
morcos: why would we?
< morcos>
or at least have it in master for a while before releasing a soft fork with it
< morcos>
why would we what?
< maaku>
what would be the reason to have them in master as mempool only?
< maaku>
for some period of time that is, it obviously makes sense to break up the PRs that way
< morcos>
so they get a lot more real world testing
< BlueMatt>
morcos: hmmm...how did I convince myself that it was never used? :/
< BlueMatt>
(anymore)
< maaku>
(1) they can be (and have been) tested on elements alpha, (2) i'm not sure what real world testing can be done until it is a consensus rule that can't already be done against the PR before merging it
< morcos>
i mean i can see why waiting for 0.12 to release mempool only and months later for a soft fork roll out is too long to wait. but i find it hard to imagine that i personally will be comfortable with acking it for a soft fork in the next couple weeks
< morcos>
1) is a good point, i hadn't thought about that
< morcos>
thats not to say i would NACK it, not at all, just i'd depend on other people being really comfortable with it
< morcos>
anyway, i'll keep reviewing, maybe comfort will come quicker, and at the very least we should get the mempool pulls merged as quickly as we can
< maaku>
morcos: maybe there's a reason to have it tested as mempool-only. but i'd like to actually see a test plan.
< btcdrak>
morcos we should get these ACKd, the ISM softfork would be a separate PR anyway, so there's no reason you couldnt ACK these as they are mempool only anyhow.
< morcos>
BlueMatt: certainly before you evicted on every pass, it was a requirement to avoid certain types of degenerate situations.. i remember being glad it was there on several occasions. i _think_ its not required any more, but i'm not sure whether its better or worse with it.
< morcos>
btcdrak: yes agreed, see my prior mesg
< BlueMatt>
morcos: well its obviously non-optimal in the case you just proposed
< BlueMatt>
welll, maybe not
< BlueMatt>
so I'm gonna leave it in
< morcos>
BlueMatt: leave what in?
< BlueMatt>
hmm, I dunno, so the metric I was targeting (which it seems we can hit) is to optimize average feerate in mempool
< morcos>
I'd vote for reverting it to max, but I can't make a concrete argument thats better.
< maaku>
morcos: my frustration re: this is that I've seen stuff get shelved on multiple occasions for reasons such as "to get more real world testing", but really all that seems to happen is delay because nothing gets done in the interim
< maaku>
if we need to delay for mempool testing, then that's fine, but we need to be explicit about what needs to be tested before advancing to the next stage
< morcos>
BlueMatt: I don't think you should be worrying about optimizing for that metric. Worry about attacks and not doing anything stupid.
< BlueMatt>
suresure
< BlueMatt>
i wasnt suggesting deciding based on that metric, just that it's the metric for which we can make an algorithm that is optimal and isn't trivially attack-able
< BlueMatt>
in the above case, using a max() will result in a larger mempool with lower feerate
< BlueMatt>
which I kinda like
< BlueMatt>
so, yea, I'd prefer leaving it in
< BlueMatt>
(I already pushed that, anyway)
< sdaftuar>
maaku: i've been reviewing #6312, and one thing jumps out at me
< jgarzik>
What are good pre-reqs for time based limiting? bluematt's stuff
< sdaftuar>
i don't understand how we're using the new LockTime() function in ContextualCheckBlock
< jgarzik>
?
< sdaftuar>
pcoinsTip isn't being updated between transactions in the caller
< sdaftuar>
so i don't get how the LockTime code can be doing the right thing, in the case of a chain of transactions within a block?
< sdaftuar>
i know it's not relevant for the mempool-only case, but it seems like a problem when we want to make it part of consensus
< maaku>
sdaftuar: in that case it isn't being called with pcoinsTip, it is using the block validation view
< maaku>
the consensus code doesn't use pcoinsTip
< sdaftuar>
ContextualCheckBlock() does, right?
< morcos>
jgarzik: i didn't understand your question. i think BlueMatt is adding time based expiration similar to sipa's commit from 6557 right?
< maaku>
sdaftuar: you are correct. hrm. working around sipa's nits broke that
< morcos>
BlueMatt: was the time based commit going to be added to 6722? what else is left to do? i'm kind of waiting til you say "done" to start diving into it again.
< maaku>
(prior versions would skip inputs that weren't found)
< jgarzik>
ditto
< jgarzik>
morcos, BlueMatt: I thought time-based was "later" but may have misunderstood.
< sdaftuar>
i don't think skipping would be safe either, would it?
< sdaftuar>
you might have a transaction that can't be spent until a block after its parent...
< sdaftuar>
i'm not sure but i don't believe that ConnectBlock redoes the LockTime() tests (?)
< morcos>
maaku: btw, i think the answer to my question a few mins ago is that its already non-standard to create txs which violate the new rules, because they have to be nVersion=2. i hadn't caught that that makes it safer to go straight to soft fork.
< BlueMatt>
morcos: I was figuring I'd just leave the timeout-ing to right afterwards, and adding multiple txn in one step, too
< BlueMatt>
so....go review?
< BlueMatt>
all thats left that I know of is writing tests
< BlueMatt>
it may be very broken right now and I wouldnt know it
< morcos>
BlueMatt: you didn't address the DynamicMemoryUsage calculation then. its already set up to include a time index, which wouldn't be part of your pull
< morcos>
i commented on the pull about that earlier
< morcos>
i say just add the Expire() commit, its short and simple and basically stands alone
< jgarzik>
BlueMatt, +1 :)
< morcos>
and i think it really helps to reason about how the whole mempool is kept limited
< morcos>
(but definitely hold off on the multiple txs piece! i'm not sure i even agree thats something we want to do, RBF is so much superior)
< sdaftuar>
wow that is way shorter code
< maaku>
sdaftuar: I'm investigating -- that was my memory that ConnectBlock redoes the LockTime() tests with context
< maaku>
but I'm having trouble finding it
< jgarzik>
Does CLTV/CSV enable a situation where you can create tiered time locks: funds frozen on chain. At time X, pubkey A can redeem. At time X+N, pubkeys (A,B) can redeem.
< maaku>
jgarzik: yes that's the driving purpose
< jgarzik>
The idea is a custodian could hold a backup key for the later time period, if funds are not claimed by active and aware participants.
< btcdrak>
jgarzik: yup.
< gmawell>
jgarzik: of course, as its just a script opcode. :)
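A sketch of jgarzik's tiered lock as a script, reading "pubkeys (A,B)" as a 2-of-2 (the custodian-backup variant would put only B's key in the later branch); the spender picks the branch with the final stack item:

```
IF
    <time X> CHECKLOCKTIMEVERIFY DROP
    <pubkey A> CHECKSIG
ELSE
    <time X+N> CHECKLOCKTIMEVERIFY DROP
    2 <pubkey A> <pubkey B> 2 CHECKMULTISIG
ENDIF
```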
< maaku>
morcos: different CheckLockTime (ugh, sorry about that)
< maaku>
the interpreter.cpp one is a method of SignatureChecker
< morcos>
oh yeah of course, thats horrible
< maaku>
yeah didn't even notice until now
< * btcdrak>
just had a cup of coffee
< btcdrak>
*gmaxwell
< BlueMatt>
morcos: it stands well alone is exactly why it should be its own pr :p
< morcos>
BlueMatt: i regreted that choice of words before I hit Enter
< BlueMatt>
:)
< BlueMatt>
morcos: yes, but the earlier comment was "the 9 is wrong"
< BlueMatt>
not what it should be
< morcos>
i think its supposed to be 3 * the number of indices
< BlueMatt>
so the 9 should be 3
< morcos>
so 6
< BlueMatt>
and then its fine?
< BlueMatt>
oh, wait, what
< morcos>
no
< morcos>
unfortunately sdaftuar and i broke it in master already
< BlueMatt>
the 9 should be 6 and then its fine?
< morcos>
DynamicMemoryUsage is 9 (and hsould be 6) in master
< BlueMatt>
whose code is it anyway? (tm)
< sdaftuar>
was my fault :)
< morcos>
GuessDynamicMemoryUsage is new and should be the same as master
< morcos>
sorry
< morcos>
same as DynamicMemoryUsage
< BlueMatt>
ahh, ok
< BlueMatt>
I'll go fix both, then
< morcos>
so both should be 6 unless you add expiration
< morcos>
which is what you should do
< morcos>
so then they should both be 9
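The multiplier being haggled over is the per-entry node overhead of the boost::multi_index container holding the mempool: each ordered index costs roughly three pointers per entry, hence 3 * (number of indices). A hedged sketch of the accounting:

```cpp
// Two indices (hash- and feerate-ordered) -> 6 pointers per entry;
// a third, time-sorted index for expiration brings it to 9.
size_t GuessMemPoolUsage(size_t nEntries, size_t nIndices)
{
    return nEntries * (sizeof(CTxMemPoolEntry) + 3 * nIndices * sizeof(void*));
}
```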
< BlueMatt>
I'll just do 6 and let you add expiration :)
< morcos>
(if i remain silent does it cause you to reconsider?)
< BlueMatt>
heh
< BlueMatt>
I'm gonna go rebase the whole pr
< morcos>
+!
< morcos>
1
< phantomcircuit>
sipa, what ever happened to the normalized transaction id stuff?
< morcos>
maaku: btcdrak: its really confusing for someone who hasn't been following all along when the BIPs and the PRs are just completely different
< morcos>
for CSV, the BIP seems to be describing an earlier version?
< morcos>
where is the most up to date version of BIP-112?
< btcdrak>
morcos, the BIP text needs updating, let me push that
< btcdrak>
(for 112)
< morcos>
btcdrak: i know BIPs are supposed to be descriptive, not prescriptive, but it sure would be helpful for proposed soft forks like this if there is a clear consensus on the semantics separately and first, and then the code can be reviewed for compliance with that
< btcdrak>
BIP68 was ironed out exactly that way. BIP112 just needs tweaking because of the references to BIP68.
< morcos>
ok fair enough
< maaku>
sorry btcdrak had authored new text for BIP 112. I didn't realize it hadn't been pushed
< morcos>
maaku: oh i didn't realize the name conflict on CheckLockTime goes away with CSV, that's good
< morcos>
oh spoke too soon, sorry
< BlueMatt>
morcos/jgarzik/sdaftuar: ok, pushed a new version of #6722
< BlueMatt>
because expiry is so simple I went ahead and merged that
< BlueMatt>
I also squashed everything into reasonably reviewable bites
< jgarzik>
BlueMatt, does expiry occur even if mempool is not full?
< BlueMatt>
yes
< jgarzik>
good
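With a time-sorted index in place, Expire() amounts to a walk from the oldest entry (the helper names below are hypothetical, not the merged code):

```cpp
// Remove everything that entered the mempool before cutoffTime,
// whether or not the pool is full. Descendants are removed too, so
// no child is left orphaned by its parent expiring.
void Expire(CTxMemPool& pool, int64_t cutoffTime)
{
    while (!pool.empty()) {
        const CTxMemPoolEntry& oldest = pool.OldestEntry();  // front of the time index
        if (oldest.GetTime() >= cutoffTime)
            break;
        pool.RemoveWithDescendants(oldest.GetTx().GetHash());
    }
}
```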
< morcos>
BlueMatt: great! just in time for me to call it a week however. will review asap
< morcos>
why'd you make it 72 hours though
< morcos>
way too small
< BlueMatt>
because we dont have large-scale cpfp deployed by miners :(
< morcos>
if blocks are on average 90% full, then a 300MB mempool could last most of a week
< BlueMatt>
and the mempool limiting stuff assumes that
< morcos>
ah
< morcos>
ok will review with that in mind
< jgarzik>
BlueMatt, with 10,000 txs in mempool, what is the expense of Expire() + TrimToSize() for each new tx? (wall clock time /cpu cycles)
< morcos>
jgarzik: i did some measurements on old versions, and it was pretty fast, and its been made much much faster with matt's algo
< morcos>
the slow stuff is already merged which is calculating the descendant packages
< BlueMatt>
in terms of real cost, untested; in terms of complexity, its something like n*m^2*log(m), where n is the number of packages to remove and m is the number of txn in each package
< morcos>
it wouldn't be a bad idea to measure it again now
< jgarzik>
as long as it's operating off a sorted index it's fine. just want to avoid a bunch of whole-mempool walks in edge/worst cases. reading now...
< morcos>
but the real question is yep, exactly, edge cases
< morcos>
so its more about thinking of whats the worst case we can encounter
< BlueMatt>
if you have a bunch of tiny packages, then its just n, which isnt so bad
< BlueMatt>
if you have very large packages, then n is 1
< BlueMatt>
the worst-case, really, is when a large single txn comes in to replace a single package which has a TON of tiny txn
< morcos>
no i meant like the attack scenarios we were describing
< morcos>
shoot, i forgot i was supposed to draft an email to dev about reduced package sizes
< jgarzik>
expiry can also create orphans
< morcos>
CalculateDescendants makes sure all the descendants are found and also removed
< dgenr8>
expiring at 2 hours, and happily relying on replacement to bump fees is an interesting strategy
< jgarzik>
I think 2 hours is way too short for expiry
< BlueMatt>
honestly I think it should be closer to 12/24 with the current mempool limiting stuff
< * dgenr8>
good to look at the latest fee estimates
< jgarzik>
48 hours is my pref
< BlueMatt>
jgarzik: you forget the mempool limiting stuff assumes cpfp
< BlueMatt>
and miners dont have it
< BlueMatt>
so anything that is in mempool because of cpfp is essentially there until it times out
< jgarzik>
miners run big mempools anyway
< BlueMatt>
that is irrelevant
< jgarzik>
few use CPFP or descendents at all
< BlueMatt>
yes, thats my point
< jgarzik>
zero conf security actively incentivizes against it
< jgarzik>
& malleation
< BlueMatt>
what???
< BlueMatt>
as a miner, its clearly in your best interest to use cpfp
< BlueMatt>
there just isnt code to do so, so most dont
< BlueMatt>
in any case, the point is, because miners dont do it, random nodes' mempools accept things that they hold because cpfp makes them have lots of fees
< BlueMatt>
so they hold these txn only because of cpfp and will hold them until they expire
< BlueMatt>
hence why a low expiry time makes sense
< jgarzik>
BlueMatt, in practice there are few descendants and malleation attacks make them even less likely
< BlueMatt>
what does this have to do with miners?
< BlueMatt>
yes, users dont use cpfp
< BlueMatt>
they should be using rbf
< jgarzik>
thus miners don't use cpfp
< jgarzik>
let's consider what -is- before -should be- :)
< BlueMatt>
miners dont because its not much extra income and the patches arent well-reviewed, not because they actively choose not to
< BlueMatt>
yes, thats what I'm saying.......
< BlueMatt>
of course all the mempool stuff being written today is written assuming we do cpfp in mining soon
< jgarzik>
if there is no user cpfp traffic, miners do not use cpfp (nor have an incentive to deploy it or bother with it much at all)
< BlueMatt>
which we should
< BlueMatt>
there is some, however
< BlueMatt>
and thus they should be using it
< BlueMatt>
in *any* case, writing a cpfp-inclusive mining thing based on mempool tracking will make mining a TON more efficient
< BlueMatt>
so we should do that
< jgarzik>
In theory sure. But it's optimizing for the "last 0.0001%". Bitcoin isn't great for long chains of unconfirmed transactions. Sure miners should capture those, but users and services will not generate them.
< BlueMatt>
we need to rewrite the mining stuff anyway
< jgarzik>
Just noting the context. If it comes for free, great. But don't put a lot of effort into supporting something users won't much use.
< BlueMatt>
because it sucks
< BlueMatt>
including cpfp in that is the logical decision
< jgarzik>
Including rarely used features isn't necessarily logical.
< BlueMatt>
in any case, this is way off topic from the original discussion - the mempool expiry time
< BlueMatt>
rarely used because you also cant use it, because miners dont care about it
< BlueMatt>
so its also a chicken-and-egg problem
< BlueMatt>
of course the real solution is rbf, but...well, people are against that for political reasons
< dgenr8>
if fee(12) and fee(24) were equal, should txes expire at 12?
< BlueMatt>
?
< dgenr8>
see #'s above
< jgarzik>
no it's not quite chicken-and-egg. users are incentivized against it because long chains of unconfirmed transactions do not work well in the bitcoin system. that's not specific to CPFP but relevant to evaluating CPFP.
< dgenr8>
if they were equal, that amount gets you confirmed in 12 and nothing happens for the next 12 blocks
< BlueMatt>
jgarzik: indeed, hence why we need RBF :p
< jgarzik>
BlueMatt, so you agree CPFP is mostly useless? :)
< BlueMatt>
no, i dont agree its mostly useless
< jgarzik>
it's a miner "nice to have, in the rare occasions the code is triggered"
< BlueMatt>
its the next-best thing when people are against rbf for bullshit reasons
< BlueMatt>
its useful for the system as a whole
< BlueMatt>
very useful
< BlueMatt>
its just not as nice as rbf
< dgenr8>
RBF solves an important need. but that need is not more important than not destroying all confidence in 0conf
< BlueMatt>
dgenr8: I think you're comparing different numbers? estimatefee 12 is in 12 blocks, no? not 12 hours?
< jgarzik>
The businesses like Coinbase and Bitpay are fine with RBF -- if and only if there is a secure instant transaction tech in place
< dgenr8>
yes, im talking about 12 blocks = 2 hours
< jgarzik>
need a replacement first, hard requirement
< jgarzik>
*zero conf replacement
< BlueMatt>
the funny thing is rbf has almost zero effect on 0conf security
< BlueMatt>
its so hilariously insecure already that adding rbf to the mix doesnt make it materially worse
< jgarzik>
It matters from a practical standpoint
< BlueMatt>
people using 0confs today are doing so by betting most customers are not trying to rip them off
< BlueMatt>
which works fine for many people who have lots of customers
< BlueMatt>
and who do other kyc stuff
< BlueMatt>
so, they can keep doing that
< BlueMatt>
with or without rbf
< jgarzik>
Outside the theoretical world, deploying full-bore RBF is antisocial to several existing payment flows at major businesses
< dgenr8>
today, if you pay with a highly standard tx and miner respects first-seen, it's not that easy to get double-spent
< BlueMatt>
in any case, this is my point - people object to rbf because, whatever, which means we need cpfp
< BlueMatt>
jgarzik: how?!
< BlueMatt>
dgenr8: not true
< jgarzik>
Attacks become quite a bit easier
< dgenr8>
ah you'll want to enter my "double-spend me" contest
< BlueMatt>
dgenr8: also, the way "the big guys" are mostly doing it doesnt even check that miners have seen a tx
< BlueMatt>
or that its not double spent anywhere
< jgarzik>
There was an RBF attack spree the instant that Chinese pool turned it on
< BlueMatt>
jgarzik: not really, actually
< BlueMatt>
jgarzik: yea, because it was publicized
< jgarzik>
...thus reality != theory
< BlueMatt>
so dont publicize it on reddit
< jgarzik>
There is real end user negative impact
< BlueMatt>
still, you're just proving my point - people object to rbf, so we need cpfp
< jcorgan>
guys, can you take it to #bitcoin-dev or #bitcoin?
< BlueMatt>
because we need *something* for this usecase
< jgarzik>
CPFP is harmless, so no NAK. Just will be rarely used feature.
< jgarzik>
Similar to code miners should have that sweeps anyone-can-pay outputs into their wallet.
< BlueMatt>
then how do wallets who paid too little fee fix their stuck txn?
< dgenr8>
i actually think mempool expiration at 2 hours is the solution to replacement
< jgarzik>
Miners -should- be doing that, just like they -should- be deploying CPFP, a just-in-case capture. It won't be used much at all though.
< jgarzik>
dgenr8, the lower the limit, the greater the traffic on the network
< dgenr8>
depends on the numbers then
< jgarzik>
yep, as with anything in life there is a zen balance to be achieved :)
< morcos>
ahh i go away for 30 mins
< morcos>
you guys are touching on this, but its a very important point
< morcos>
we're conflating two completely different things
< morcos>
CPFP as a user action for helping a stuck tx is completely separate from CPFP as a mining algo
< morcos>
i couldn't care less about the first
< morcos>
but the second is just properly maximizing miner income
< morcos>
its very important that core includes a good algorithm for that
< morcos>
otherwise big miners that have the money and talent to do their own custom mining algos will have a significant advantage over small miners
< BlueMatt>
jeff's point is that it really doesnt matter today (go ask big miners - they dont care about fees, really)
< morcos>
its another example of where the default has to be really close enough to optimal
< BlueMatt>
however, if you have users using it for stuck txn, then it does matter
< btcdrak>
this really is a discussion for #bitcoin-dev or #bitcoin
< morcos>
this is exactly a discussion for core-dev; we're trying to figure out how important it is to include CPFP in core's mining algo
< BlueMatt>
the big miners I've heard from dont care about fees, and are perfectly willing to forgo them to make bitcoin work properly for users - if users' wallets are using cpfp to unstick transactions, then it becomes required for bitcoin to work properly for users
< btcdrak>
CPFP is incredibly inefficient compared to RBF
< Luke-Jr>
is someone going to rewrite CPFP?
< BlueMatt>
indeed, but there are many rejections to rbf
< morcos>
btcdrak: exactly we're not talking about CPFP for users
< Luke-Jr>
btcdrak: they're not competitive, they solve different problems
< BlueMatt>
Luke-Jr: i dont think anyone has signed up to do so, yet
< BlueMatt>
Luke-Jr: it requires adding more mempool tracking
< * Luke-Jr>
not entirely sure why his CPFP was closed then.
< BlueMatt>
but it should all be inverse duplication of whats already there
< morcos>
sdaftuar and i are rewriting mining to be more optimal (which includes tracking ancestor package fee rates)
< BlueMatt>
Luke-Jr: the way to do it is to do ancestor package tracking
< BlueMatt>
ahh, ok, good
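A sketch of what ancestor-package selection for mining could look like (interface hypothetical; morcos/sdaftuar's actual rewrite may differ): greedily take the transaction whose unconfirmed ancestor set has the best aggregate feerate, add the whole set parents-first, and update what remains.

```cpp
// Greedy ancestor-feerate block assembly. The ancestor score of a tx is
// (its fee + fees of all unconfirmed ancestors) / (their total size),
// the mirror image of the descendant tracking the limiter maintains.
void AssembleBlock(CTxMemPool& pool, CBlockTemplate& tmpl, size_t nMaxBlockSize)
{
    while (tmpl.SizeSoFar() < nMaxBlockSize) {
        const CTxMemPoolEntry* best = pool.BestAncestorScoreEntry();
        if (best == nullptr || !tmpl.Fits(best->AncestorSize()))
            break;
        for (const CTransaction& tx : best->AncestorsTopoSorted())
            tmpl.AddTransaction(tx);                 // parents before children
        pool.UpdateAncestorStateForRemaining(*best); // re-score what's left
    }
}
```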
< morcos>
i'm going to stop calling it CPFP so people don't think i'm talking about adding a user feature
< Luke-Jr>
CPFP is what it's always been called, and doesn't imply anything like user features..
< btcdrak>
how is CPFP not a user feature?
< BlueMatt>
btcdrak: its both a user feature and a miner feature, is the point
< Luke-Jr>
morcos: if you're going to work on this stuff, it'd be nice to avoid the centralised policy problem we have now before it gets too deep ;)
< Luke-Jr>
btcdrak: users don't care *how* their transactions get confirmed
< morcos>
BlueMatt: i think that should be doable
< BlueMatt>
morcos: its easy, just not done yet
< morcos>
Luke-Jr: i'm thinking that the policy is adjusted by using your priorizeTransaction functionality
< Luke-Jr>
morcos: that is not sufficient/reasonable.
< morcos>
ha, well thats hard enough on its own
< morcos>
why is that not sufficient
< btcdrak>
OT: curious, who here, other than jgarzik and dgenr8, is opposed to RBF?
< morcos>
btcdrak: heh, and i get yelled at. :)
< Luke-Jr>
because then people need to figure out how to translate their policy into relationship with the default policy
< Luke-Jr>
it's already hard enough to implement policies; forcing people to implement *and* translate is even more work
< btcdrak>
morcos: if you cant beat them, join them? :-P
< Luke-Jr>
the goal is to make policy changes *easier*, not harder
< morcos>
ok, i think its worth me learning more about that, so maybe i'll ping you next week to try to learn a bit more about what policies look like
< morcos>
BlueMatt: to your point about miners not caring, i agree maybe the urgency isn't there at this exact second
< morcos>
but it could be important soon, and we shouldn't fall behind the curve
< morcos>
anyway, i really have to run now... good night!
< btcdrak>
I mean, especially now that f2pool has RBF there's no reason we shouldnt add it to core
< btcdrak>
FSS-RBF at the very least..
< Luke-Jr>
frankly there was never a reason not to
< BlueMatt>
morcos: sure
< Luke-Jr>
other than perhaps encouraging miners not to use Core's policy code.. but that's a failure so far
< btcdrak>
The funny thing is the clutching onto insecure 0conf will go away when we have real instant payment via payment channels/LN
< BlueMatt>
btcdrak: yupp
< BlueMatt>
we're all looking forward to that :)
< btcdrak>
:)
< Luke-Jr>
btcdrak: well, that's not really relevant to merge-or-not, since miners can run RBF whether it's in Core or not
< BlueMatt>
does eligius still do rbf?
< btcdrak>
Luke-Jr: right, but I mean the people who are objecting on the grounds of 0conf lose their basis
< phantomcircuit>
jgarzik, it's the businesses' responsibility to ensure that users are creditworthy if they are accepting unconfirmed transactions
< dgenr8>
btcdrak: do you need to replace your tx like, pronto? can you wait 2 hours?
< phantomcircuit>
jgarzik, realistically they cannot rely on unconfirmed transactions to confirm when it is in the financial best interest of miners to run full replace by fee
< phantomcircuit>
jgarzik, long term it's going to do much more harm to them if we even try to make it less dangerous
< btcdrak>
dgenr8: think about fee bumps
< dgenr8>
btcdrak: yes, i'm asking how high-frequency fee-bumps need to be
< phantomcircuit>
jgarzik, also there is CPFP traffic on mainnet today
< phantomcircuit>
it's in the Schildbach wallet
< BlueMatt>
oh really?
< BlueMatt>
heh, thats cool
< phantomcircuit>
BlueMatt, it basically is useless though because miners aren't running cpfp
< phantomcircuit>
Luke-Jr, is eligius running cpfp?
< BlueMatt>
phantomcircuit: indeed
< BlueMatt>
cool that its deployed though, thats pretty awesome
< Luke-Jr>
yes, Eligius has been doing CPFP since like 2011 or 2012
< Luke-Jr>
which also means unless it gets updated for 0.12, we're going to have to stick to 0.11 ..
< BlueMatt>
I'm sure there will be a patch to do it on 0.12, even if not merged