<bitcoin-git>
[bitcoin] fanquake opened pull request #32496: depends: add Qt `-ltcg` for Darwin, drop it for Windows (master...depends_qt_ltcg) https://github.com/bitcoin/bitcoin/pull/32496
Holz has joined #bitcoin-core-dev
mudsip has joined #bitcoin-core-dev
mudsip has quit [Client Quit]
brunoerg has quit [Remote host closed the connection]
SpellChecker has quit [Remote host closed the connection]
SpellChecker has joined #bitcoin-core-dev
<gmaxwell>
I've still never really understood why reorgs can't be handled by a list of "non-conflicted transactions which were previously confirmed and which must be reconfirmed with the highest priority", with absolutely no computational complexity or limits on that list... and then a mempool of additional never-confirmed transactions which treats everything in that deconfirmed list as confirmed, and is
<gmaxwell>
managed as the mempool normally is.
<gmaxwell>
so then in a reorg a block is filled by trying to stuff back in all the deconfirmed transactions (in their original order from the node's perspective), and then filling in what remains from the mempool.
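A minimal sketch of that block-filling order, assuming a hypothetical RestorePool of deconfirmed, non-conflicted transactions kept in their original confirmation order; all names and types below are illustrative, not Bitcoin Core code.

    #include <cstdint>
    #include <deque>
    #include <vector>

    // Illustrative stand-in for a transaction reference.
    struct TxRef {
        int64_t fee{0};
        int32_t vsize{0};
    };

    // Deconfirmed, non-conflicted txs in original confirmation order
    // (deepest reorged block first, then intra-block order).
    using RestorePool = std::deque<TxRef>;

    // Fill a block template: restored txs first, then the normal mempool
    // selection (assumed already in a valid, ancestor-respecting order).
    std::vector<TxRef> BuildBlockAfterReorg(const RestorePool& restorepool,
                                            const std::vector<TxRef>& mempool_selection,
                                            int32_t max_block_vsize)
    {
        std::vector<TxRef> block;
        int32_t used{0};
        for (const TxRef& tx : restorepool) {
            if (used + tx.vsize > max_block_vsize) break;   // keep original order, no re-sorting
            block.push_back(tx);
            used += tx.vsize;
        }
        for (const TxRef& tx : mempool_selection) {
            if (used + tx.vsize > max_block_vsize) continue; // fill whatever space remains
            block.push_back(tx);
            used += tx.vsize;
        }
        return block;
    }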
<gmaxwell>
And with a disclosed note that in the event of a reorg, the mining logic favors reconfirming transactions over income maximization on the basis that system health is better for the miner's bottom line than scraping out the absolute highest income...
<_aj_>
if there's a higher fee(rate) conflict with one of those transactions, a miner will probably prefer to mine that conflict rather than confirming the old one at "highest priority" ?
<gmaxwell>
_aj_: I don't really think so, particularly given that the reorg is a very rare event... but someone having a significant theft as a result of reorg may be incredibly damaging to the miner. Like great you got 0.001 BTC more but the public drama means all the bitcoin you earned is now worth 10% less.
<instagibbs>
you'd still have to deal with oversizedness, so I think this is a question about how things are shoved back into txgraph vs the proposed trimming function?
<gmaxwell>
It might be the case that if the difference was some huge bogon fee they might prefer it but (1) if there is a bogon fee it's probably some theft attempt, and (2) all this is so rare the expected cost of making the pro-social move is negligible, and a lot more could be earned by doing stuff like making template update latency non-terrible (it's very terrible today, high fee txn are often missed
<gmaxwell>
10s of seconds after they're well relayed).
<gmaxwell>
instagibbs: oversizedness in what sense?
<instagibbs>
cluster size/count (count being most important)
<gmaxwell>
That's my point. no you shouldn't.
<instagibbs>
hm
<gmaxwell>
There should just be a vector of txn that were previously confirmed and non-conflicted (so still consensus valid)... and block building logic should just stuff them all back in. Mempool should treat them as confirmed, not a part of clusters.
<instagibbs>
so you have a bunch of stuff you can't actually tell how good it is, you get a new tx, you'll just ignore that too?
<gmaxwell>
Perhaps I'm just stupid, would not be the first time. I had this convo with sipa before and got a response like yours, and I don't get it.
<instagibbs>
ditto hah
<gmaxwell>
instagibbs: I think mempool should treat previously (non-conflicted) confirmed stuff as confirmed, and mining should work on reconfirming all that stuff. Then you don't have to worry about it violating cluster limits, etc. because it's not part of clusters.
<gmaxwell>
It's just a list of stuff you'd like to try to get back into blocks before resuming the regularly scheduled programming.
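One way to read "treat them as confirmed" is as an extra coins layer over the UTXO set, loosely analogous in spirit to Bitcoin Core's CCoinsViewMemPool; the sketch below is hypothetical, with illustrative names, not existing classes.

    #include <map>
    #include <optional>
    #include <utility>

    // Illustrative placeholders: an outpoint is (txid, output index).
    using OutPoint = std::pair<long long, unsigned>;
    struct Coin { long long value{0}; };

    // Minimal UTXO-lookup interface.
    class UtxoView {
    public:
        virtual ~UtxoView() = default;
        virtual std::optional<Coin> GetCoin(const OutPoint& out) const = 0;
    };

    // Outputs created by restore-pool transactions appear as ordinary confirmed
    // coins, so children of restored txs are evaluated by the mempool exactly as
    // if their parents were still in the chain: no shared cluster, no CPFP link.
    class RestoreOverlayView : public UtxoView {
        const UtxoView& m_base;                       // the real chainstate UTXO set
        std::map<OutPoint, Coin> m_restored_outputs;  // outputs of restore-pool txs
    public:
        RestoreOverlayView(const UtxoView& base, std::map<OutPoint, Coin> restored)
            : m_base(base), m_restored_outputs(std::move(restored)) {}
        std::optional<Coin> GetCoin(const OutPoint& out) const override
        {
            if (auto it = m_restored_outputs.find(out); it != m_restored_outputs.end()) {
                return it->second;
            }
            return m_base.GetCoin(out);
        }
    };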
<_aj_>
had a discussion with luke along similar lines once, but the idea was just to mark previously-confirmed txs as un-replaceable but still in the regular mempool
<_aj_>
it seems like a good idea to me, especially if it simplifies things/makes them more efficient, but i don't have any confidence we'd be willing to make a "paternalistic" policy like that and stand up against the criticism of it
<gmaxwell>
Right but now there is a bunch of hairball about putting that stuff in the mempool and having it frequently break cluster limits and how do you handle that particularly because all of those limits are motivated by computational complexity concerns.
<gmaxwell>
It's justifiable on an entirely or almost entirely non-paternalistic basis, because otherwise you have this cluster limits problem.
<gmaxwell>
Also it's not like the reinstated transactions are going to be *bad* in terms of income. Particularly in the ordinary case of a 1 block reorg.
pseudoramdom has joined #bitcoin-core-dev
<_aj_>
last time i looked, the 1-block reorgs had 99.9% the same transactions already anyway
<_aj_>
sadly, i no longer remember how to look that up
<gmaxwell>
_aj_: it was previously (years ago) when I looked.
<_aj_>
yeah, it was at least a year ago when i looked
<gmaxwell>
Like there is this annoying computational problem, and it just so happens that doing the "good for the network health" thing can avoid it. Probably the socially good thing is in the miner's selfish self-interest, but even if it's against it, probably the loss is very small. But all that aside it solves the limits problem... and does so in a way that doesn't make the network health _worse_ where the
<gmaxwell>
node evicts perfectly valid already-confirmed txn just because they broke a cluster limit.
pseudoramdom has quit [Ping timeout: 248 seconds]
<_aj_>
the idea is the ordering is just exactly how it appeared when it was confirmed?
<gmaxwell>
And it isn't as if the current (or post-cluster) mining logic is actually the true income maximizing logic -- e.g. the node won't keep around multiple conflicting pools and decide at the moment of template construction which makes the most money. It's not even particularly hard to do that (like just calling out to a MIP solver with a pretty trivial problem gets you the answer), but it doesn't
<gmaxwell>
result in a sensible "implied relay logic" that makes sense.
<gmaxwell>
_aj_: yeah or at least the starting point of my suggestion is that, you could apply some sorting (restricted to O(n log n) optimizations) but I think the idea works with just preserving the original order.
<gmaxwell>
You just can't apply any O(N^2) or worse algorithm to an arbitrarily long reorg... without having to potentially cut transactions out that were confirmed to manage the complexity blowup.
pablomartin_ has joined #bitcoin-core-dev
Guest70 has joined #bitcoin-core-dev
Guest70 has quit [Client Quit]
<gmaxwell>
and for system health, I think it's very good and important to restore the 'most confirmed' (greatest depth) transactions first.
<gmaxwell>
but like even if you don't give a fuck about system health, you got some contract that sells your coins the instant they're mined yadda yadda... *something* has to be done here because just dumping the reorg into the mempool doesn't work because of potential complexity blowup. May well be that what I suggest earns more income directly, anyways, because otherwise you'd drop more profitable txn
<gmaxwell>
because they violated cluster limits.
<_aj_>
well, i liked the non-replacable idea, and this is better than that, so +1 fwiw
<gmaxwell>
And as far as the health argument goes, you can make a lot of very subjective arguments for the merits of things like RBF or filtering or whatever about non-confirmed txn. But all this is pretty subjective and the system can't make any promises about unconfirmed stuff... But "this was already N confirmed" is an extremely unambiguous metric and confirmation is the point where
<_aj_>
maybe worth it to do a write-up of the approach/implications, getting chinese/etc translations, and consulting miners/pools before implementing it
<gmaxwell>
we-the-tech-community has always said that expectations ought to start forming there. It would also be completely technically practical for the network to relay near-tip orphans and for miners to prioritize their distinct txn, though that isn't done today... and probably no one will bother implementing because orphans are rare enough.
<_aj_>
near-tip stale blocks you mean?
<gmaxwell>
but unlike "miners should always prioritize FIRST SEEN" ... priortizing stuff that has actually been mined (and particularly was YOUR best tip) is actually reasonable.
<gmaxwell>
_aj_: Exactly, near-tip stale blocks. To be clear, I'm not suggesting doing that now, but I think it would be reasonable to do (as it's kind of the logical conclusion of the policy I suggest, so if you hated near-tip inclusion as a priority thing then maybe you should hate my suggestion)
pseudoramdom has joined #bitcoin-core-dev
<gmaxwell>
(and given that stale blocks are relatively rare, it's arguably not worth implementing even if you fully buy my argument that those txn should be prioritized where they are non-conflicted)
<_aj_>
you lose the "i've already validated this block, so i know the txs are fine" for that code path
<gmaxwell>
_aj_: yeah but the node is usually pretty idle, validate in background. It's got tip-POW so it's not a DOS vector.
<_aj_>
validating in the background could make reorging to that block and a new child of it faster (and would certainly make rejecting a child of it faster if the stale block turned out to be invalid)
<_aj_>
not sure if any of that would also affect the selfish-mining stuff
<gmaxwell>
_aj_: best counterargument (beyond it not being worth impl complexity) I'm aware of is that there is a harm to consensus to help anyone get *access* to a fork from your tip, but I think it's a pretty thin issue.
pseudoramdom has quit [Ping timeout: 276 seconds]
<_aj_>
is it a harm to consensus to know that there's an equal-work valid tip out there that may have more hashrate mining on it though?
<gmaxwell>
_aj_: yes it would, which is at least a theoretical problem: if there is a deep fork, bitcoin can stop converging when the cost to switch starts approaching the interblock time. This was actually seen in practice in some shitty altcoin called 'liquid bitcoin' or something like that which had a fixed difficulty-- the network shattered into a thousand forks once the reorg time exceeded the interblock time.
<gmaxwell>
_aj_: I think everyone has a self interest in trying to avoid there being any additional hashrate conflicting with the tip they're on.
pyth has quit [Quit: Leaving]
<_aj_>
not dealing with an alternative fork that's more than 1 block deep seems fine -- ie, don't disconnect more than 1 block for bg validation of a fork
<_aj_>
gmaxwell: i've had that thought: logical conclusion is that you should make the most impressive mistakes you can, and document them thoroughly
<gmaxwell>
_aj_: I mean in terms of theoretical weakness/attacks, let's say that a 50 block reorg takes more than 10 minutes for most nodes to process. And for whatever reason (attack, internet cut, etc) such a fork forms in Bitcoin-- the result may be that the system never reconverges to a single new chain without human intervention, or takes a very very long time.
<gmaxwell>
but I think I'm off in the weeds about pretty theoretical risks.
<gmaxwell>
To 'solve' that risk it would be sufficient to track multiple near-best chains which have received a new block within n*average_block_time, as in maintain a cloned chainstate for each of them so new blocks in them could be validated without any switching costs.
<gmaxwell>
But oy, lots of software complexity for a theoretical rather than practical risk, that in practice would just get resolved by intervention if it ever happened.
<glozow>
question: If you have the "previously confirmed txns pool" and see a child of one of those, do you put the child in that pool or the mempool?
<_aj_>
the mempool, unless it was a child in a confirmed block
<gmaxwell>
glozow: in my thinking, the mempool. The previously confirmed pool is just treated as confirmed from the perspective of the mempool.
<gmaxwell>
as _aj_ says, unless it was itself previously confirmed.
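A sketch of that routing rule, with hypothetical containers: disconnected non-conflicted transactions go to the restore pool, while a newly seen child of a restored transaction goes to the regular mempool, its parent treated as confirmed.

    #include <set>
    #include <vector>

    // Toy transaction: an id plus the ids of its parents.
    struct Tx {
        long long id{0};
        std::vector<long long> parents;
    };

    struct Pools {
        std::set<long long> restorepool; // previously confirmed, non-conflicted
        std::set<long long> mempool;     // never-confirmed transactions
    };

    // Called once per disconnected block, deepest block first, so the restore
    // pool keeps original confirmation depth/order. Conflict checks against
    // the new chain are omitted in this sketch.
    void OnBlockDisconnected(const std::vector<Tx>& block_txs, Pools& pools)
    {
        for (const Tx& tx : block_txs) pools.restorepool.insert(tx.id);
    }

    // A newly received transaction goes to the regular mempool even if a
    // parent sits in the restore pool: that parent is treated as confirmed,
    // so it adds nothing to the child's cluster.
    void OnNewTransaction(const Tx& tx, Pools& pools)
    {
        pools.mempool.insert(tx.id);
    }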
Cory38 has quit [Quit: Client closed]
<_aj_>
the mempool, unless it was a child in a pow-valid block (confirmed or perhaps stale)
Cory38 has joined #bitcoin-core-dev
<glozow>
how long would a tx persist in previously confirmed pool?
<_aj_>
gmaxwell: depends how many new txs were on the other side of the reorg i guess, a simple invalidate/reconsider of 50 blocks seems to take a couple of minutes on not very quick hw
<_aj_>
until it's mined or conflicted by a confirmed tx
<gmaxwell>
Essentially I suggest the node behavior is "once confirmed, always confirmed, unless it gets conflicted"
<_aj_>
i guess if it's mined, technically that's also conflicted by a confirmed tx
<glozow>
how would we decide on evictions for child-of-previously-confirmed in the mempool? it also gets evicted if it's lower feerate right?
<glozow>
as in, it's compared with all the other stuff in mempool
<gmaxwell>
glozow: It might be prudent to time it out, e.g. there is a softfork unknown to you that makes it no longer consensus valid to other participants. But I think my general concept is forever.
<_aj_>
same as we do for child-of-confirmed-tx in the mempool
<gmaxwell>
_aj_: right, in my view it's just treated like its confirmed.
<_aj_>
gmaxwell: wow, that would be horrible -- if you had hashpower, you'd stick it in a block, and everyone would reject your block
<gmaxwell>
_aj_: I mean thats generally the problem with softforks, if someone does mine something invalid then you end up with multiple rejected blocks-- that already happens.
<_aj_>
gmaxwell: you'd normally reject it from your mempool for being non-standard, but maybe you wouldn't reject it here for that reason
pseudoramdom has joined #bitcoin-core-dev
<gmaxwell>
_aj_: yes but people build on the invalid block. We've seen it in practice in prior softforks, where every time a miner allowed a consensus-invalid block there would be several additional blocks built on it.
<gmaxwell>
during one softfork, 50btc, one of the larger pools at the time, spent an entire month producing invalid blocks, many of which triggered other miners to produce invalid-because-child blocks. The pool was PPS too so they didn't lose much hashpower...
<glozow>
would you not still run validation on the disconnected transactions?
<gmaxwell>
glozow: sure absolutely you're only trying to mine transactions YOU think are valid.
<gmaxwell>
glozow: aj's point is that the network may have other ideas about their validity.
pablomartin_ has quit [Ping timeout: 244 seconds]
pablomartin_ has joined #bitcoin-core-dev
<gmaxwell>
_aj_: of course you could also further restrict any "restoration" transactions to ones that don't violate forward compatibility standardness rules to reduce that risk. I don't really have a strong opinion one way or another, and could argue it either way.
Christoph_ has quit [Quit: Christoph_]
<gmaxwell>
_aj_: the principle of FAFO applies to both sides, non-upgraded-miner producing invalid blocks due to not enforcing a newly active consensus rule, or a user relying on confirmation of a transaction that is non-standard due to well understood forward compat rules. I *generally* prefer to screw the miner when those two conflict, because they're the more sophisticated party.
pseudoramdom has quit [Ping timeout: 252 seconds]
<gmaxwell>
_aj_: but otoh the new consensus rule might be some rogue bullshit performed by just a couple mining pools, the smaller miner doesn't even know about the new rule, etc.
<gmaxwell>
_aj_: but if the majority hashpower is out to get them they can invalidate stuff that isn't blocked by forward compat rules anyways so ::shrugs::
<gmaxwell>
like right now today two or three miners constituting >>50% hashpower could decide to blacklist some address, maybe belonging to a known bad actor... and then everyone else is producing blocks that they'll exclude from their chain, and will keep doing so 'forever' (until human intervention).
<gmaxwell>
so I don't think restoration creates a vulnerability to attack that doesn't just already exist.
<gmaxwell>
(restoration being the word I'm using to describe trying hard to confirm deconfirmed transactions, if it wasn't clear)
pseudoramdom has joined #bitcoin-core-dev
<gmaxwell>
_aj_: as to why I favor restoring transactions you wouldn't mine generally, the biggest reason is because restoring them is necessary to restore their children, which you would have mined. like right now today if I'm not an idiot I don't author transactions that spend unconfirmed non-standard coins. But once they're confirmed I don't care if they're standard at all, and I expect that right
<gmaxwell>
now a large percentage of the tip-circulating coins have non-standardness somewhere in their recent causal history.
<gmaxwell>
like someone does some bullshit monkey jpeg crap and the change from that eventually ends up in an exchange wallet, and so on.
<_aj_>
maybe after some timeout you just start moving restoration-txs back into the mempool (relaying to other peers as you do so, and evicting things that violate cluster limits)
enochazariah has joined #bitcoin-core-dev
<_aj_>
like 72 blocks (12h)
<gmaxwell>
_aj_: yeah I think that is a reasonable backstop. The timeout could also just be some function of their order. like you can assume the network is generally trying to restore and if a txn isn't immediately restored there may be a reason why not. So like if there is a 2 deep reorg, the oldest txn are 2 blocks old and maybe you want to kick them out by the time they're N blocks old. If you
<gmaxwell>
assume *everyone* is restoring you'd kick them out after literally the next block, but some slack makes sense.
saikasyap has quit [Quit: Client closed]
<gmaxwell>
_aj_: more than I was thinking. why not just assume miners are restoring, except for some jokers who aren't, and kick stuff out after a couple blocks more than it would take if everyone was restoring?
pseudoramdom has quit [Ping timeout: 272 seconds]
<gmaxwell>
Again I'm not strongly opinionated on this one. I think if you don't try to restore forward-compat-violating txn then it's pretty reasonable to have a long expiration. If you restore literally everything you may want to evict fast to minimize risk that you're trying to restore something that is invalid but you don't know it's invalid. Of course, the two could be hybridized...
<gmaxwell>
(and to be clear I mean evict from the restore-pool to the mempool, which may or may not mean dropping entirely due to policy/cluster/etc limits)
<_aj_>
gmaxwell: i think you'd drop to the mempool in the reverse order to how you'd mine; i think if the main thing we're protecting against is miners who aren't following current consensus rules, then risking up to half a day of lost hashpower isn't going to cause massive angst, and gives plenty of time to catch up any txs for even a long reorg
<_aj_>
gmaxwell: revalidating previously confirmed txs to see if they follow standardness rules seems annoying, so i dismissed the hybridized approach :(
<_aj_>
(it seemed especially annoying to try to do it straight away when you want to do a getblocktemplate asap after the reorg, so i think that means "restoring txs you wouldn't mine generally" also makes sense)
saikasyap has joined #bitcoin-core-dev
<instagibbs>
_aj_ I suppose when "evicting" from "restore pool" back into the mempool , one could do essentially the reverse of what I was doing for another purpose. Use known cfr of non-oversized clusters, and remove things backwards until not oversized, then commit
<gmaxwell>
_aj_: hm I was thinking the forward compat rules were decidable as a pure function of the txn, you don't need the inputs. But I'm not particularly sure about that.
<gmaxwell>
I don't have a strong opinion but I'm not the greatest fan of handling the eviction that way, just because it's an exceptional case that shouldn't happen, so it's easier to just pretend it's newly received.
<_aj_>
gmaxwell: not for OP_SUCCESS and so forth
<gmaxwell>
_aj_: darn. okay thats a big strike against checking.
Christoph_ has joined #bitcoin-core-dev
<_aj_>
gmaxwell: yeah :( DISCOURAGE_UPGRADEABLE_PUBKEYTYPE can be conditional on arbitrary script logic too
<gmaxwell>
But I think restorepool's mission can be achieved even if you evict very quickly, which mitigates harm.
<_aj_>
so depth of reorg + fudge factor blocks?
<gmaxwell>
yeah. Basically assume that the network is going to try to restore ASAP, and if it fails to do so, too bad so sad, you get back in line.
<gmaxwell>
also if you didn't get double spent in fudge_factor blocks, good odds you never will be.
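The "depth of reorg + fudge factor" expiry discussed here, as a small sketch; FUDGE_FACTOR_BLOCKS is an illustrative stand-in for whatever slack would actually be chosen, not a value proposed in the discussion.

    // Once the new chain has grown past the point where a restoring network
    // would already have re-mined a deconfirmed tx, kick it back to ordinary
    // mempool processing.
    constexpr int FUDGE_FACTOR_BLOCKS{2}; // illustrative slack, not a proposed value

    bool ShouldEvictFromRestorePool(int depth_when_deconfirmed,
                                    int blocks_mined_since_reorg)
    {
        return blocks_mined_since_reorg > depth_when_deconfirmed + FUDGE_FACTOR_BLOCKS;
    }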
<_aj_>
gmaxwell: "I'm not the greatest fan of handling the eviction that way" -- sorry, which way?
<gmaxwell>
_aj_: I understood you were suggesting that a restore-evict should be added to a cluster in a priority way, so e.g. cluster limits evict something else. I'm not a huge fan simply because it requires processing the whole cluster, and means there is a bunch of reorg special-case mempool logic, which this restorepool idea was trying to eliminate.
<_aj_>
instagibbs: oh, i guess when you evict from the restorepool, you have to take all the descendants in the mempool, and validate them as a package, in case there's cpfp stuff? but you also want to remember you've already done the consensus/standardness validation for the descendants
<_aj_>
gmaxwell: oh, no, treating it as new sounds fine to me; i just meant cluster limits would apply now and something would get evicted
<gmaxwell>
ah okay.
<gmaxwell>
there is some unavoidable complexity I think, like say the evicted txn is the parent of all txn in the cluster.
pablomartin_ has quit [Ping timeout: 248 seconds]
<instagibbs>
_aj_ well, if it's undersized, you'll end up being able to evaluate the new chunks, and we're relying on PoW for anti-DoS. How that's advertised on relay, i haven't considered it
<gmaxwell>
and cluster is overlimits with it in there.
<_aj_>
gmaxwell: i think package logic might already deal with that (with the only problem being we can't relay it currently, but that's not an issue here). **handwave** **look, a dinosaur!!**
pablomartin_ has joined #bitcoin-core-dev
<gmaxwell>
yeah I'm not too concerned with relay in the sense that, assuming the reorged block was well propagated, there doesn't need to be any, as ~all nodes will just be doing the same operation.
<gmaxwell>
I mean it's fine to try relaying it.
<_aj_>
instagibbs: relay is just belts and suspenders / best effort; everyone running this new logic should already have the txs...
<gmaxwell>
_aj_: jinx
<instagibbs>
im not worrying too much when big reorgs...
<instagibbs>
s/big//
<_aj_>
what are you worrying about then?
<instagibbs>
you're the one who brought up package relay not me! :)
<_aj_>
okay, worry about it when there's a spec then!
<bitcoin-git>
[bitcoin] BrandonOdiwuor opened pull request #32501: RPC: removeprunedfunds should take an array of txids (master...removeprunedfunds-array) https://github.com/bitcoin/bitcoin/pull/32501
<_aj_>
instagibbs: i just mean if you take X from the restorepool with descendants A B C in the mempool, then remove A,B,C from the mempool and call ProcessNewPackage([X,A,B,C]) maybe that already does 95% of the right thing
<instagibbs>
ah sure, the CFR would be well-known as long as XABC isn't oversized
<instagibbs>
(and I think you can use ABC CFR knowledge to trim a bit smarter)
<_aj_>
well A,B,C might have been in different clusters initially, and bring in whatever else was in those clusters as a result, but yeah
<instagibbs>
yes
<gmaxwell>
That would actually be the most common case too, as mempool cluster size is 1+e on average, and many txn are spending outputs in the last block or two. Bringing back in a parent will tend to merge clusters.
<_aj_>
maybe annoying if your next tx to move from restorepool to mempool is Y which is X's parent
<instagibbs>
you have to keep backing it out, but I don't see a fundamental issue there
<_aj_>
yeah, probably so rare in practice that annoying doesn't matter
<gmaxwell>
well the eviction should be in the order of original confirmation I think?
<_aj_>
keeping trying to confirm what were the deepest confirmed txs for the longest time also seems okay?
<gmaxwell>
I mean you can evict parents 'first' but you have to evict all the children with them if you do.
<gmaxwell>
_aj_: yeah I guess my thinking was mostly "this one really should be back in already, if it's not, there is an issue with it"
<instagibbs>
yes, and you'll have to handle that due to conflicts via other blocks
<instagibbs>
anyways
<_aj_>
i don't think you want to evict everything at once because you still have to validate standardness for all of them; didn't see any obvious midpoint between 1-by-1 and everything
<_aj_>
gmaxwell: fair
<_aj_>
gmaxwell: so maybe 1-by-1 from the front, but grab all its children
<gmaxwell>
That's what I was thinking (well, hadn't thought through the children part, but you're right that it's necessary)
<_aj_>
gmaxwell: then let the package logic discard stuff that's too big and validate everything in the package, because validating a package all at once is fine anyway
<instagibbs>
in that case, oversized results degenerates to Trim from PR?
<instagibbs>
(in batch)
<_aj_>
PR=Package Relay? yeah, i think so
<instagibbs>
sorry this one #31553 (I still have to review it hence this discussion)
<instagibbs>
shove everything in, make best-effort undersized txgraph
Christoph_ has quit [Quit: Christoph_]
<_aj_>
instagibbs: ah, not sure at that detail, but it sounds good
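A sketch of the eviction flow settled on above: take the deepest-confirmed transaction from the restore pool, pull its in-mempool descendants along, and resubmit them as one package so the normal package/cluster-limit logic decides what survives. The container types and the submit_package hook are illustrative; the real analogue would be Bitcoin Core's package acceptance path (ProcessNewPackage) mentioned above, whose exact signature is not reproduced here, and the single-parent model is a toy simplification.

    #include <algorithm>
    #include <deque>
    #include <functional>
    #include <vector>

    struct Tx { long long id{0}; long long parent{0}; }; // single-parent toy model

    void EvictOneFromRestorePool(std::deque<Tx>& restorepool,
                                 std::vector<Tx>& mempool,
                                 const std::function<void(const std::vector<Tx>&)>& submit_package)
    {
        if (restorepool.empty()) return;
        Tx x = restorepool.front();          // deepest-confirmed tx first
        restorepool.pop_front();

        // Pull X's direct descendants out of the mempool so they can be
        // re-evaluated together with X as one package.
        std::vector<Tx> package{x};
        auto it = std::stable_partition(mempool.begin(), mempool.end(),
                                        [&](const Tx& t) { return t.parent != x.id; });
        package.insert(package.end(), it, mempool.end());
        mempool.erase(it, mempool.end());

        // Package validation applies cluster limits and drops whatever does
        // not fit, instead of special-casing reorg handling inside the mempool.
        submit_package(package);
    }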
Talkless has joined #bitcoin-core-dev
pseudoramdom has joined #bitcoin-core-dev
enochazariah has quit [Quit: Client closed]
pseudoramdom has quit [Ping timeout: 276 seconds]
BlueMoon has joined #bitcoin-core-dev
mudsip has joined #bitcoin-core-dev
pseudoramdom has joined #bitcoin-core-dev
mudsip has quit [Client Quit]
pseudoramdom has quit [Ping timeout: 260 seconds]
BlueMoon has quit [Quit: Client closed]
pseudoramdom has joined #bitcoin-core-dev
<bitcoin-git>
[bitcoin] davidgumberg opened pull request #32502: wallet: Drop unused fFromMe from CWalletTx (master...5-14-25-dead-code-removal) https://github.com/bitcoin/bitcoin/pull/32502
pseudoramdom has quit [Ping timeout: 260 seconds]
pablomartin_ has quit [Ping timeout: 244 seconds]
jespada has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
saikasyap has quit [Quit: Client closed]
saikasyap has joined #bitcoin-core-dev
PaperSword1 has joined #bitcoin-core-dev
PaperSword has quit [Ping timeout: 245 seconds]
PaperSword1 is now known as PaperSword
PaperSword1 has joined #bitcoin-core-dev
PaperSword has quit [Ping timeout: 248 seconds]
PaperSword1 is now known as PaperSword
PaperSword1 has joined #bitcoin-core-dev
PaperSword has quit [Ping timeout: 248 seconds]
PaperSword1 is now known as PaperSword
Guest3748 has quit [Ping timeout: 276 seconds]
jespada has joined #bitcoin-core-dev
Talkless has quit [Quit: Konversation terminated!]