< bitcoin-git>
[bitcoin] meshcollider opened pull request #17204: Do not turn OP_1NEGATE in scriptSig into 0x0181 in signing code (sipa) (master...201910_1negate_rebase) https://github.com/bitcoin/bitcoin/pull/17204
< bitcoin-git>
[bitcoin] meshcollider closed pull request #13084: Do not turn OP_1NEGATE in scriptSig into 0x0181 in signing code (master...201804_keepnegone) https://github.com/bitcoin/bitcoin/pull/13084
< bitcoin-git>
[bitcoin] sadrasabouri reopened pull request #17202: Travis CI bug in macOS environment #17178 solved (master...issue#17178) https://github.com/bitcoin/bitcoin/pull/17202
< luke-jr>
when writing, it literally uses the path saved from when it was read.
< achow101>
luke-jr: I was testing #15454 (which uses 11082) and it wasn't working as I was expecting. I may just be doing something wrong in my implementation
< luke-jr>
achow101: it may be helpful to clarify that in your comment so people don't assume it's a blocking problem ;)
< achow101>
well it might be, I didn't debug it super far, but IIRC moving the file to the datadir worked
< achow101>
i'll debug further today
< bitcoin-git>
[bitcoin] practicalswift opened pull request #17208: Make all tests pass UBSan without using any UBSan suppressions (master...ubsan-warnings) https://github.com/bitcoin/bitcoin/pull/17208
< jeremyrubin>
BlueMatt: sdaftuar: I've been looking at descendant tracking recently; morcos suggested I ping you both. Similar to https://github.com/bitcoin/bitcoin/pull/15681/files, I'd like to make an exception for OP_SECURETHEBAG transactions where a descendant does not get counted if its parent is an OP_SECURETHEBAG and it traces through ancestors to some confirmed output, which is an OP_SECURETHEBAG. This property would 'prove' that the
< jeremyrubin>
data is immutable, and therefore impossible to replace without a reorg. The issue otherwise is that an OP_SECURETHEBAG transaction tree with a large number of recipients and a small radix quickly surpasses the descendants limit (large radixes can be OK). See https://github.com/bitcoin/bitcoin/compare/master...JeremyRubin:stb-with-rpc?expand=1 for more details on the implementation
< jeremyrubin>
I guess concretely, my question is whether you have thoughts on how best to make such modifications
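A minimal Python sketch of the exception being proposed, as described above; all names here (`children`, `is_immutable_stb_spend`) are hypothetical, not anything in the codebase. Descendants that only extend a confirmed OP_SECURETHEBAG covenant chain would be skipped when tallying against the limit:

```python
def counted_descendants(txid, children, is_immutable_stb_spend):
    """Descendant count for limit purposes. `children` maps a txid to
    its in-mempool child txids; `is_immutable_stb_spend(txid)` is a
    hypothetical predicate that is True when every path from the
    transaction up to a confirmed output goes through OP_SECURETHEBAG
    covenants (so it cannot change without a reorg)."""
    count = 0
    for child in children.get(txid, []):
        if is_immutable_stb_spend(child):
            continue  # provably immutable subtree: exempt from the limit
        count += 1 + counted_descendants(child, children, is_immutable_stb_spend)
    return count
```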
< sdaftuar>
i'm a bit confused -- why are we talking about proving data immutability? the descendant tracking limit is for limiting cpu overhead when updating package state (eg for transaction selection in CreateNewBlock)
< jeremyrubin>
Hm ok -- my understanding was that desc tracking was to limit the amount of churn caused by replacing an ancestor
< sdaftuar>
my recollection of the history there is that we realized we needed some limit (less than 1MB) in order to prevent stuffing the mempool full of stuff that would never get mined
< jeremyrubin>
(which is why -- if the ancestor is irreplaceable/immutable, then there is 0 churn)
< sdaftuar>
eg if you have a high fee child with 1MB of 0-fee ancestors, you have a problem
< sdaftuar>
pretty sure this was possible in old versions of bitcoin core
< jeremyrubin>
> the mempool is full a new transaction must be able to pay not only for the
< jeremyrubin>
> transaction it would evict, but any dependent transactions that would be
< jeremyrubin>
> removed from the mempool as well. In order to make sure this is always
< jeremyrubin>
> feasible, I'm proposing 4 new policy limits
< sdaftuar>
then we later realized when we did mempool eviction that there is a small amount of free-relay possible, up to the size of a single descendant package
< sdaftuar>
so making those packages smaller was good for that reason as well, and for updating mempool state after a reorg
< sdaftuar>
(the latter being a very minor reason)
< sdaftuar>
then when we did ancestor-feerate mining, we observed that small package sizes help a lot with ensuring that code stays fast
< sdaftuar>
because there's potentially a lot of descendant walking in order to update package feerates as transactions are selected
< sdaftuar>
there, the number of descendants is the relevant thing
< sdaftuar>
anyway these are all just heuristics, and there's room to tweak any given one, but i think it could be problematic to leave anything unbounded
< jeremyrubin>
Hm ok -- what about caching for some of this stuff?
< jeremyrubin>
E.g., for OP_SECURETHEBAG you can cache the walks (immutably so)
< sdaftuar>
you mean caching mempool package statistics?
< sdaftuar>
i'm not sure what that means/in what context
< jeremyrubin>
Maybe let's jump up a level or two for a sec
< jeremyrubin>
Are you familiar with OP_SECURETHEBAG
< sdaftuar>
COSHV right
< jeremyrubin>
yeah
< sdaftuar>
yeah let's call it coshv :)
< jeremyrubin>
Well it's more than just the outputs that have to be hashed
< jeremyrubin>
but anyways
< sdaftuar>
oh the inputs (aside from witnesses) are hashed as well i guess?
< jeremyrubin>
no
< jeremyrubin>
You hash everything except for the inputs, plus the number of inputs
< jeremyrubin>
And the sequences you hash too
< jeremyrubin>
but not the COutpoint
< sdaftuar>
ah ok
< jeremyrubin>
You don't hash the COutpoints because you don't know them when you're constructing the Bag Hash
< jeremyrubin>
You only know it once you make the tree and spend from a specific output
< jeremyrubin>
then you can fill in the rest of the txids
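To make the construction concrete, a minimal Python sketch of the Bag Hash as described above: hash everything except the input COutpoints, while still committing to the input count and each input's nSequence. The field encodings and the single SHA256 here are illustrative assumptions, not the actual OP_SECURETHEBAG serialization:

```python
import hashlib
import struct

def bag_hash(version, locktime, sequences, serialized_outputs, num_inputs):
    """Illustrative 'Bag Hash': commit to everything in a transaction
    except the input COutpoints (not known when the hash is built)."""
    h = hashlib.sha256()
    h.update(struct.pack('<i', version))      # nVersion
    h.update(struct.pack('<I', locktime))     # nLockTime
    h.update(struct.pack('<I', num_inputs))   # input count, but no outpoints
    for seq in sequences:
        h.update(struct.pack('<I', seq))      # per-input nSequence
    h.update(serialized_outputs)              # all outputs: amounts + scripts
    return h.digest()
```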
< jeremyrubin>
If you look at the RPC code this shows how this works
< jeremyrubin>
first you loop over the destinations and add a table of templates (transactions with no COutpoint, but where you know the Bag Hash from the script that will create them); then you iterate over the outputs, filling in the templates
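A schematic of that two-pass expansion, with hypothetical names (`Template`, `txid_of`); this is the shape of the construction, not the actual RPC code:

```python
from dataclasses import dataclass

@dataclass
class Template:
    """A transaction with everything fixed except its input COutpoint
    (so its Bag Hash is already computable)."""
    outputs: list            # (amount, bag_hash_of_child_or_None) pairs
    outpoint: tuple = None   # (parent_txid, vout), filled in during pass two

def expand(root, templates, txid_of):
    """Pass two: walk down from the root, and once a parent's txid is
    known, fill in the COutpoint of each child template. `templates`
    is a table keyed by Bag Hash; `txid_of` is a hypothetical
    serialize-and-hash helper."""
    queue = [root]
    while queue:
        parent = queue.pop()
        parent_txid = txid_of(parent)
        for vout, (_amount, child_hash) in enumerate(parent.outputs):
            child = templates.get(child_hash)
            if child is not None:
                child.outpoint = (parent_txid, vout)
                queue.append(child)
```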
< jeremyrubin>
So let's say we have 1000 recipients.
< jeremyrubin>
And we use a radix of 200. Then we have one transaction with 5 outputs, and 5 transactions with 200 outputs.
< jeremyrubin>
This fits within the default descendant tracking params.
< sdaftuar>
ok
< jeremyrubin>
Now we use a radix of 4 (which is optimal for specific use cases)
< jeremyrubin>
Then we get a much deeper tree
< jeremyrubin>
like 5 levels I think
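A quick sanity check of those depth estimates, using a simple ceil-log model (my arithmetic, assuming each transaction has exactly `radix` outputs):

```python
import math

def tree_levels(recipients, radix):
    """Levels of transactions needed to fan out to `recipients` leaves
    with `radix` outputs per transaction."""
    return max(1, math.ceil(math.log(recipients, radix)))

print(tree_levels(1000, 200))  # 2: a 5-output root plus five 200-output txs
print(tree_levels(1000, 4))    # 5, matching the estimate above
```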
< sdaftuar>
yeah that's a lot of transactions!
< sdaftuar>
anyway i think if there are reasonable use cases that would benefit from bumping the numbers a bit, we can certainly bump the numbers
< sdaftuar>
but i don't think we're likely to increase by a factor of 10 or something
< jeremyrubin>
Well so here's the thing that would be worth looking at
< jeremyrubin>
specifically in this case
< jeremyrubin>
We know those txns can never change if the root is confirmed
< * sdaftuar>
has to run (will catch up later)
< jeremyrubin>
kk
< jeremyrubin>
So it's really not that many transactions
< jeremyrubin>
So at the base layer you have 250 transactions
< jeremyrubin>
A layer up, you have 63
< jeremyrubin>
(round to 64)
< jeremyrubin>
then 16
< jeremyrubin>
then 4
< jeremyrubin>
then 1
< jeremyrubin>
so a total of 334 txns
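Those per-level counts follow from repeated ceiling division; a short sketch reproducing them:

```python
def level_counts(recipients, radix):
    """Transactions per level, leaves first."""
    counts = []
    n = -(-recipients // radix)   # ceil(1000/4) = 250 leaf transactions
    while n > 1:
        counts.append(n)
        n = -(-n // radix)        # each level needs ceil(n/radix) parents
    counts.append(1)              # the root
    return counts

counts = level_counts(1000, 4)
print(counts, sum(counts))        # [250, 63, 16, 4, 1] 334
```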
< jeremyrubin>
But size wise, the interior nodes are all small so it's only 70808 bytes
< jeremyrubin>
(interior and leaf)
< jeremyrubin>
So if you compare to the case of a non-OP_SECURETHEBAG txn, which is like 42000 bytes, it's not too much extra bytes-wise.
< jeremyrubin>
Fees only need to be like 40% lower for this to be worth it purely fees wise...
< jeremyrubin>
Anyways
< jeremyrubin>
The issue is that as soon as a branch or two expands (or you expand out a couple levels) you saturate your descendant limit
< jeremyrubin>
Which sucks because people on different ends of the tree are entirely different parties
< jeremyrubin>
And if you saturate at multiple interior nodes and no leaves, it makes it difficult for you to do CPFP
< jeremyrubin>
So the solution that I'd like to explore is some kind of special casing to discount things which are OP_SECURETHEBAG
< harding>
Use CSV to enforce an ordering on who can spend when?
< jeremyrubin>
harding: that's inefficient
< jeremyrubin>
They should be consumable as needed
< jeremyrubin>
And we don't know who wants out first
< jeremyrubin>
CSV therefore makes us use more bandwidth than needed to extract an output
< harding>
I was thinking to CSV(1 block) the interior nodes so that nobody could expand more than one level at a time.
< jeremyrubin>
hmm. But what if fees are low now, and not in an hour
< jeremyrubin>
And that doesn't solve the problem in any case
< jeremyrubin>
that's a block level fix
< jeremyrubin>
we need a mempool fix
< harding>
It doesn't?
< harding>
Ok.
< jeremyrubin>
AFAIK we'll still lock up the mempool for such txns
< sipa>
is that a problem?
< jeremyrubin>
sipa: read scrollback
< sipa>
all participants know that once the bag output is confirmed, the committed-to transactions will follow?
< jeremyrubin>
Yes
< jeremyrubin>
But it would be nice if they could all be immediately put into the mempool
< sipa>
so is it a problem that not all of them fit in the mempool simultaneously?
< jeremyrubin>
Yes
< sipa>
ok
< jeremyrubin>
It's not a blocker for the technique working at all
< sipa>
right, i see
< jeremyrubin>
it's just that they should go in the mempool so they can be used
< harding>
jeremyrubin: I think using CSV to require all inputs to be confirmed would prevent the transactions from being grouped into packages, so the only problem would be an n-block delay for trees of depth n.
< jeremyrubin>
harding: I'm not sure -- does it? (also we don't need a CSV because STB lets you set a sequence)
< jeremyrubin>
If a txn isn't in a package do ancestor rules apply?
< jeremyrubin>
Are there other ways to break ancestor rules?
< sipa>
packages are implicitly defined by ancestors
< sipa>
OAOB
< sipa>
i believe
< harding>
jeremyrubin: no, otherwise most transactions would be in packages extending back thousands of transactions to when their coins were mined. :-)
< jeremyrubin>
But you break the packages at a confirmed ancestor
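A minimal sketch of that rule (a package's in-mempool ancestor set, with confirmed parents terminating the walk); `mempool` here is a hypothetical map from txid to its parents' txids:

```python
def in_mempool_ancestors(txid, mempool):
    """Unconfirmed ancestor set of `txid`. Parents that are not in the
    mempool are confirmed, so the walk stops there -- which is why a
    package 'breaks' at a confirmed ancestor."""
    ancestors, stack = set(), [txid]
    while stack:
        for parent in mempool.get(stack.pop(), []):
            if parent in mempool and parent not in ancestors:
                ancestors.add(parent)
                stack.append(parent)
    return ancestors
```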
< jeremyrubin>
Ah I guess I see your point
< jeremyrubin>
CSV provably says these can't be mined as a package
< jeremyrubin>
so we could modify around that?
< jeremyrubin>
But I think we really want to be able to do them all in a single block, because we don't want to impose the latency at the time you decide to spend
< jeremyrubin>
otherwise people will have to expand more than necessary to be 'ready'
< sipa>
you can't guarantee everything going into one block anyway
< jeremyrubin>
sure
< jeremyrubin>
But that's different than provably cannot be in one block
< sipa>
maybe
< jeremyrubin>
Best-available QoS at a given price vs. at best n blocks
< sipa>
it feels to me like OP_STB is exactly making it less important when things get settled on chain
< jeremyrubin>
Well so when you are expanding that's precisely the time where you care
< jeremyrubin>
because then you're spending to someone new
< jeremyrubin>
and then that spend will be unconfirmed for at least N more blocks than absolutely needed
< sipa>
ah, i thought you'd do it in the general vicinity of low-fee times
< jeremyrubin>
hopefully, yes
< sipa>
of course it'd be nice if you can always guarantee optimal efficiency of getting things into the chain
< jeremyrubin>
But you also might just need it at a certain time
< sipa>
but it feels like a minor issue to me that it's a few blocks extra
< sipa>
ok
< harding>
How significant is this problem, though? I mean, can't we just recommend that coshv trees never have more than x leaves?
< jeremyrubin>
x is small
< jeremyrubin>
< 25
< harding>
That's fair.
< jeremyrubin>
And what's annoying is the likely solution people will have is to use a large radix, like 200
< jeremyrubin>
Which is compatible with current rules for a larger tree (e.g., 1000)
< jeremyrubin>
Which is OK
< jeremyrubin>
larger radix uses less overall on-chain extra data (which is prunable tho)
< jeremyrubin>
But passes more expense onto a spender
< jeremyrubin>
Which means that the fee savings are less effective
< jeremyrubin>
4 is the optimal radix for users -- minimizes the extra data they pay for
< harding>
Eh, I guess x is really 1 given that the receiver can just create their own descendant transactions.
< jeremyrubin>
correct
< jeremyrubin>
which is the same issue that matt points out
< jeremyrubin>
that we want to always permit '1 more' descendant
< harding>
So this radix stuff doesn't directly apply unless coshv gets some sort of mempool exception, I think.
< jeremyrubin>
Which radix stuff?
< jeremyrubin>
No the radixes still work
< jeremyrubin>
you just need to compute the radix such that the number of transactions and total size are under the descendants limit
< harding>
They work, but I don't think having a larger radix provides any additional immunity to transaction pinning.
< jeremyrubin>
Ah
< jeremyrubin>
because the spend-from issue?
< harding>
Yeah.
< jeremyrubin>
I think that might be right.
< jeremyrubin>
Well
< jeremyrubin>
No
< jeremyrubin>
that's not completely true
< jeremyrubin>
You can still do it
< jeremyrubin>
It just makes it work less well, especially if someone uses a low fee rate
< jeremyrubin>
So you'd want some anyone-can-spend nodes in the tree
< jeremyrubin>
for people to be able to replace by fee with high fee txns
< jeremyrubin>
to bump interior nodes through confirmation
< jeremyrubin>
which frees up the other half of the tree
< jeremyrubin>
So you *can* get it to work with the existing stuff
< jeremyrubin>
It's just much more annoying than coding an exception for these immutable type txns.
< harding>
As an aside, it occurs to me that if you have a large tree of internal nodes and also a few receivers trying to CPFP bump the leaves, they could cause the minimum feerate of the mempool to increase faster than the package feerate of the coshv root node, causing the whole package to be evicted from the mempool.
< jeremyrubin>
Huh. This sounds like a general issue?
< harding>
Yeah, I was just thinking that. Maybe I just figured out for myself why we have both a desecendant limit *and* an ancestor limit. :-)
< harding>
Oh, goodness, that's probably what sdaftuar was talking about at the beginning of this thread: <sdaftuar> then we later realized when we did mempool eviction that there is a small amount of free-relay possible, up to the size of single descendant package
< harding>
jeremyrubin: so reading Suhas's comments earlier in the thread, I think maybe an issue with your proposal is that the package wouldn't be immutable if the receivers could CPFP fee bump their receiving transactions (the leaves). E.g., for each CPFP transaction, there would be a cascade of changes to the package fee through the tree. The mempool could refuse to accept mutable children from outside the tree, but then the users
< harding>
couldn't fee bump.
< jeremyrubin>
hm
< jeremyrubin>
This is why i'm saying that there can be some caching going on within the tree
< jeremyrubin>
But I see what you mean
< jeremyrubin>
That a leaf change guarantees log(N) updates through the tree
< jeremyrubin>
But I don't care to change the ancestor limit
< jeremyrubin>
just descendant
< jeremyrubin>
Which I think makes it OK
< jeremyrubin>
20 ancestors is PLENTY
< jeremyrubin>
20 descendants is paltry
< jeremyrubin>
20 ancestors can cover like 4**20, which is big enough
< harding>
jeremyrubin: I dunno, `getrawmempool true` returns various "descendant*" fields to me, so even though I think you're right that only log(n) entries need their "modified" fees updated, it seems that descendant information might need to be updated too?
< harding>
Oh, I guess that's not recursive since it doesn't include the modified rates. Sorry.
< jeremyrubin>
correct
< jeremyrubin>
Log N upwards, O(2N) downwards
< jeremyrubin>
But then it can be worse if the leaf is doing what you said
< jeremyrubin>
which is why it makes sense maybe, for OP_STB, to have a branched descendants limit
< jeremyrubin>
descendants depth
< jeremyrubin>
rather than raw count
< jeremyrubin>
But I guess there's a question of special casing vs. a generally safe technique
< jeremyrubin>
I'm confident special casing can work, but maybe not as a general technique for all outputs
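A sketch of the depth-instead-of-count idea under hypothetical names (`children` map, `max_depth` value); nothing like this exists in the codebase, it is just the proposal made concrete:

```python
def descendant_depth(txid, children):
    """Depth of the in-mempool descendant tree below `txid`.
    `children` is a hypothetical map from txid to child txids."""
    kids = children.get(txid, [])
    if not kids:
        return 0
    return 1 + max(descendant_depth(kid, children) for kid in kids)

def stb_within_limits(txid, children, max_depth=25):
    """Hypothetical check: for a provably-immutable OP_SECURETHEBAG
    subtree, limit descendant *depth* rather than raw count, so a
    narrow-radix tree isn't rejected just for having many nodes."""
    return descendant_depth(txid, children) <= max_depth
```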
< bitcoin-git>
[bitcoin] achow101 opened pull request #17211: Allow fundrawtransaction and walletcreatefundedpsbt to take external inputs (master...fundtx-external-inputs) https://github.com/bitcoin/bitcoin/pull/17211