<ariard>
yes you could construct txC as conflicting with multiple inputs from unrelated chains of transactions, that way gaining a batching effect as an adversary
<sdaftuar>
glozow: i was thinking more about package validation (re: package feerate definition and #26711)
<gribble>
https://github.com/bitcoin/bitcoin/issues/26711 | [WIP] validate package transactions with their in-package ancestor sets by glozow · Pull Request #26711 · bitcoin/bitcoin · GitHub
<sdaftuar>
glozow: can we say that it should never be the case that the "package feerate" of a transaction exceeds the transaction's own feerate?
<sdaftuar>
conceptually i assume that even after #26711, that would still be possible -- imagine a "diamond" transaction chain, where you have a low fee parent A, 2 children B and C that each can't quite pay for A on their own, but once joined via a transaction D that spends them both, results in the package making it in.
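The diamond topology sdaftuar describes can be checked with a quick sketch. The fees and sizes below are made up for illustration; `MIN_FEERATE` stands in for whatever acceptance feerate the mempool enforces.

```python
# Hypothetical numbers for the "diamond": low-fee parent A, children B and C
# (each spending A), and a transaction D spending both B and C.
MIN_FEERATE = 1.0  # sat/vB, stand-in for the mempool's acceptance feerate

txs = {  # name: (fee in sat, size in vB)
    "A": (0, 1000),
    "B": (900, 200),
    "C": (900, 200),
    "D": (1000, 200),
}

def feerate(names):
    fee = sum(txs[n][0] for n in names)
    size = sum(txs[n][1] for n in names)
    return fee / size

# B and C each fail to pay for A on their own...
assert feerate(["A", "B"]) < MIN_FEERATE
assert feerate(["A", "C"]) < MIN_FEERATE
# ...but once D joins them, the whole diamond clears the threshold.
assert feerate(["A", "B", "C", "D"]) >= MIN_FEERATE
```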
<sdaftuar>
it's not clear to me whether we want those diamond transaction packages in our mempool or not!
<sdaftuar>
on further thought, letting in diamond chains can be bad, because you could imagine in the worst case that our mempool could be full of transactions that we wouldn't ever mine.
<sdaftuar>
so maybe the easiest protection is just to require that when evaluating ancestor packages for mempool inclusion, if the child transaction's own feerate is not
<sdaftuar>
at least as big as the package feerate, that we abort
<glozow>
sdaftuar: I suppose one could say the diamond shouldn't get in - similar idea to a long chain of 0-fee transactions with a big fee-bumper at the bottom. I'm not sure "if the child transaction's own feerate is not at least as big as the package feerate, that we abort" is sufficient. If we have 2 parents, 1 bumping the other, and a low-feerate child spending both, we should keep the parents.
<sdaftuar>
yes i agree with that. i was just thinking that whenever we consider a (sub-)package for validation, that a final sanity check would be that the child tx feerate exceed the (sub-)package feerate, or else we reject/move on
<sdaftuar>
so that's still consistent with the idea of 26711
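The final sanity check sdaftuar proposes can be sketched as a single comparison. This is illustrative pseudocode, not Bitcoin Core's actual validation logic; the cross-multiplication just avoids float division.

```python
def passes_child_feerate_check(child_fee, child_size, pkg_fee, pkg_size):
    """Accept a (sub-)package only if the child's own feerate is at least
    the (sub-)package feerate -- i.e. the child really is the fee-bumper.
    Fees in sat, sizes in vB; names are illustrative."""
    return child_fee * pkg_size >= pkg_fee * child_size

# child at 5 sat/vB bumping a 2 sat/vB ancestor package: accepted
assert passes_child_feerate_check(500, 100, 2000, 1000)
# child at 0.5 sat/vB, below its 2 sat/vB package: reject / move on
assert not passes_child_feerate_check(100, 200, 2000, 1000)
```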
<sdaftuar>
one concern i have is that even with a check like this, there may be some topologies we still shouldn't let in for some sort of DoS reason
<sdaftuar>
in thinking about potential DoS vectors i have a new concern that i'm not quite sure how to think about, relating to package validation:
<sdaftuar>
package validation gives us a way to have (eg) a zero fee tx in the mempool, if it comes in with a child that pays for it.
<sdaftuar>
if that child were to get RBF'ed by a replacement tx that drops the zero fee parent, then we'll end up with a transaction in the mempool that will never be mined
<instagibbs>
unless it's trimmed
<instagibbs>
yes
<sdaftuar>
if that transaction is large, it could be used as a parent of incoming transactions which will also never be mined
<sdaftuar>
this only works when the mempool is not full -- or else as instagibbs notes it would be trimmed
<sdaftuar>
but this still seems vaguely bad?
<glozow>
I was thinking about this the other day as well. let me find my writeup...
<instagibbs>
actually it relies on the mempool being pretty much empty?
<sdaftuar>
instagibbs: yeah
<instagibbs>
otherwise the only difference is min relay vs free
<sdaftuar>
i think my concern is that the mempool shouldn't have things in it that would never get mined, eg because they're below minrelay fee (which we compare against in the mining code iirc)
<instagibbs>
Yeah I get it
<glozow>
I suppose then, in `TrimToSize()`, we should evict low-descendant-feerate junk even when we are not at capacity
<sdaftuar>
specifically things below minrelayfee (i think its ok if things are below mempool minfee). that may work, not sure if there are any downsides to that
<glozow>
that makes sense to me. apart from TrimToSize, do we want to try to calculate the txns that will be below minrelay fee due to replacement?
<instagibbs>
unpaid-for-things getting evicted from a wallet perspective is fine, we'll just send a package again
<sdaftuar>
glozow: reading your gist now... i guess the worst case is that we drop 2400 additional transactions from the mempool due to this type of behavior?
<instagibbs>
so are there two concepts: allowing existing child-less txns to hang around for 2 weeks, vs letting a *new* tx in that we wouldn't mine?
<instagibbs>
one we've already validated, holding onto it isn't that expensive(?)
<sdaftuar>
instagibbs: i think that's fair but its complicated to stop a new tx from chaining off it, at first glance
<instagibbs>
right, single txn evaluation would have to be modified
<sdaftuar>
instagibbs: yes and in a way that i think isn't really possible. imagine the same scenario of leaving a zero fee transaction in, but before you drop the child which paid for it, you first add a bunch of small lowfeerate children
<instagibbs>
mhmm yeah
<sdaftuar>
it would be hard to tell (after the large fee paying child is replaced) whether the parent should be allowed or not
<instagibbs>
i can see
<glozow>
sdaftuar: yes, the worst case I could come up with is 50 packages, 1 child each bumping 24 parents. replacement conflicts directly with 1 parent from each package. 100 total to meet Rule 5, 23*50=1150 total txns that are now below minrelay fee but weren't counted in the 100.
<instagibbs>
from a usability perspective I think dropping is fine, so now it would "just" be a DoS thing
<sdaftuar>
glozow: nitpicking but i believe 100 packages should be possible, each of which has 1 child bumping 24 parents and another input that is confirmed. replacement can conflict with each of the 100 children (double spending the confirmed input only, and dropping the package), resulting in 100 conflicts total and 2400 additional low fee parents that would have to be dropped
<sdaftuar>
new tx would have no in-mempool parents
<glozow>
ahhh good point yes! 100 packages is possible
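sdaftuar's sharper bound can be recomputed directly. The constants below come from the conversation (BIP 125 Rule 5's 100-replacement limit, 24 low-fee parents per package); the variable names are illustrative.

```python
# Worst case per sdaftuar: 100 packages, each with 1 child bumping 24 parents
# plus one confirmed input; the replacement double-spends only the confirmed
# inputs, conflicting with each child directly.
MAX_REPLACEMENTS = 100      # BIP 125 Rule 5: at most 100 txns replaced
PARENTS_PER_PACKAGE = 24    # zero/low-fee parents CPFP'ed by each child

packages = MAX_REPLACEMENTS           # one directly-conflicting child each
evicted_children = packages * 1
stranded_parents = packages * PARENTS_PER_PACKAGE

assert evicted_children == 100        # all counted against Rule 5
assert stranded_parents == 2400       # below-minrelay parents left behind
```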
<sdaftuar>
it may be worth a standalone PR that adds removal of txs below the minrelayfee to TrimToSize and see if any reviewers spot an issue with that?
<sdaftuar>
though i guess it'll be the interaction of that and package validation that we need to really evaluate
<sdaftuar>
another option is that TrimToSize() can just bail early after a certain number of evictions, and continue again later
<sdaftuar>
it's probably not critical that we evict everything immediately (if we're not full)
<instagibbs>
Sorry, what does doing it in stages accomplish?
<glozow>
bail early because we don't want `TrimToSize()` to take a long time?
<sdaftuar>
glozow: yeah
<instagibbs>
ah :)
<sdaftuar>
i dunno, maybe it's fast, i have no idea :)
<glozow>
we would want to finish these evictions before we validate another transaction for acceptance, right?
<sdaftuar>
glozow: i agree that would be simplest
<sdaftuar>
the alternative would be to add complexity to our logic to prevent relay of a transaction that we ultimately drop
<glozow>
what would preventing relay look like? it would fail a fee filter when we announce (though it's possible we've already announced it). would we add another filtering step when we respond to getdata?
<sdaftuar>
i don't really know - i was just imagining coming up with a special case for what happens when we validate a transaction at a time when we know TrimToSize() didn't finish, and needing a way to delay relay of anything until that completes
<sdaftuar>
seems messy
<sdaftuar>
the issue here being that transactions whose own feerates pass feefilter, but which have a (eg) zero-fee parent that shouldn't be in the mempool, would not be accepted
<sdaftuar>
hopefully evicting 2500 transactions is fast enough, relative to transaction validation, that we can ignore this question
<glozow>
if we're confident that it's bounded (2400 seems somewhat ok), it seems like evicting in TrimToSize() seems like the simplest approach. another would be for CTxMemPool to mark things as "to be lazily removed" and not hand those entries out
<sdaftuar>
makes sense
<instagibbs>
eviction benchmark needs some beefing :)
<glozow>
yeah, can write a bench
<instagibbs>
one exists, it's just very trivial
<sdaftuar>
i think the right metric is as a percentage of transaction validation time... unfortunately that can be slow for a tx with a lot of inputs, and very slow if an adversary wants it to be... but from that perspective taking a little more time in the mempool will seem minor
<sdaftuar>
oh... i just realized that TrimToSize() can't really do what we need here, for the same reason as why it would be hard to change single-tx validation to not allow using a parent that should have been removed
<sdaftuar>
it's the same scenario i wrote above: imagine zero-fee txA arrives with txB, which pays for it.
<sdaftuar>
before txB is removed from the mempool via an RBF that drops A as a parent, imagine that txC and txD arrive and are children of txA.
<sdaftuar>
you can construct txC and txD so that the descendant feerate of A is greater than the minrelayfee, but neither C nor D have an ancestor feerate greater than the minrelayfee
<sdaftuar>
so again we'd end up with transactions in the mempool that never get mined.
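Concrete numbers make this construction easy to see. The fees and sizes are made up for illustration, with minrelay at 1 sat/vB.

```python
MINRELAY = 1.0  # sat/vB

# zero-fee txA with two children C and D, as in sdaftuar's scenario
txs = {"A": (0, 1000), "C": (999, 100), "D": (999, 100)}  # (fee sat, size vB)

def feerate(names):
    return sum(txs[n][0] for n in names) / sum(txs[n][1] for n in names)

# A's *descendant* feerate clears minrelay, so descendant-based trimming keeps it...
assert feerate(["A", "C", "D"]) >= MINRELAY
# ...yet neither child's *ancestor* feerate does, so the mining code never picks them.
assert feerate(["A", "C"]) < MINRELAY
assert feerate(["A", "D"]) < MINRELAY
```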
<sdaftuar>
i'm starting to wonder if we should try to staple packages together, so that if any part gets RBF'ed out, the whole package has to go
<sdaftuar>
that would be akin to how single-transaction validation works today
<instagibbs>
basically track how sub-packages get accepted together
<sdaftuar>
right
<sdaftuar>
if we could implement that, i think it would solve DoS concerns, but raise potential incentive compatibility concerns
<instagibbs>
things getting kicked out when they would have improved the next template?
<instagibbs>
or something else
<sdaftuar>
yeah something like that. an RBF that doesn't pay for the whole package's eviction but just a small piece of it, etc.
<sdaftuar>
oh i guess the way i stated it, you'd get logical contradictions too. sigh.
<sdaftuar>
not sure how to solve this
<glozow>
I do think it could be beneficial to track what transactions got in together, or somehow keep track of/update "fee dependencies" between `CTxMemPoolEntry`s? it would also help with persisting packages through a dump/load
<bitcoin-git>
[bitcoin] JeremyRubin closed pull request #23309: [WIP] Add a basic python REST API Server Wrapper (master...rest-python) https://github.com/bitcoin/bitcoin/pull/23309
<bitcoin-git>
[bitcoin] JeremyRubin closed pull request #22954: [TESTS] Allow tx_invalid.json tests to include flag rules for if_unset: [A,B,C] then_unset: [D] (master...if_unset_then_unset) https://github.com/bitcoin/bitcoin/pull/22954
<bitcoin-git>
[bitcoin] JeremyRubin closed pull request #22876: [TESTS] Update Transaction Tests to permit setting a flag as always on and disabling the exhaustive failure test (master...update-txvalid-json) https://github.com/bitcoin/bitcoin/pull/22876
<sdaftuar>
glozow: gets tricky with RBF. imagine a package comes in, and then the child is replaced with a new child (which has the same ancestry). you'd swap out the old child for the new child in your package tracking?
<sdaftuar>
now imagine it has a somewhat different ancestry...
<glozow>
yeah...
<_aj_>
glozow, sdaftuar: what do you think of a "mempoolreplace" rpc, that allows you to add a tx package [A,B,C] to your mempool, despite conflicts X,Y,Z, ignoring all rbf rules (at most optionally trying to re-add X,Y,.. to the mempool and doing so if they're valid rbf replacements of A,B,C..)
<glozow>
correct me if i'm wrong - today, you could have a parent + 2 children at the bottom of the mempool, where the parent's descendant feerate is above mempool min feerate but each child's ancestor package is below mempool min feerate. so not evicted, but also not great. though everything is above min relay fee, so not entirely wasted space
<glozow>
_aj_: do you mean that, after the RPC finishes, all 6 transactions would be in our mempool?
<sdaftuar>
glozow: yeah i think there's a distinction in my mind between mempool min fee and minrelayfee, because once you go below minrelayfee the mining code will always ignore it
<sdaftuar>
so if you are able to get stuff in below minrelayfee, that is a free relay DoS on the network. seems like it's bounded because once the mempool fills up, it goes away, but i wonder if we can do better
<_aj_>
glozow: after the RPC finishes, either (a) A,B,C are in your mempool or, (b) it's as-if you had A,B,C in your mempool and then received X,Y,... via (package) relay
<_aj_>
glozow: X,Y,.. conflicts with A,B,C so you should never have all of them in your mempool
<sdaftuar>
_aj_: i don't think i understand the problem you're trying to solve
<_aj_>
glozow: (the idea being that this would allow you to build an alternative layer 2 relay network and populate your mempool using it, if there's some subset of transactions where regular rbf is sub-optimal)
<sdaftuar>
would those transactions then be relayed on the p2p network, once accepted to the mempool?
<_aj_>
sdaftuar: not sure, i'm assuming "no"
<sdaftuar>
ok i definitely do not understand what would be achieved then :)
<glozow>
sdaftuar: right so if, as a short term workaround, all transactions have to meet min relay fee but packages can cpfp things below mempool min fee, seems like we would be on par with current situation? not ideal, but trying to get a sense of what -0 looks like.
<sdaftuar>
a transaction relay network that is separate from the one we maintain?
<sdaftuar>
glozow: are you suggesting that we rewrite TrimToSize() to look at ancestor packages and evict ones that are below minrelayfee?
<_aj_>
sdaftuar: if you have a tx X which you want mined, you relay it over p2p but it's pinned by Y and goes nowhere. so you also relay it over L2-relay, and hope that a miner is on L2-relay. i think this is what would be necessary to make that possible
<sdaftuar>
_aj_: got it, now i understand
<glozow>
_aj_: if it's a fee-related pin, prioritisetransaction() may help?
<sdaftuar>
_aj_: i think you'll run into the same issues we have in Bitcoin Core's logic wrt DoS. fundamentally we don't have a total ordering on transaction desirability, and that leads to the DoS concerns we have
<_aj_>
glozow: could be a non fee-related pin, but also seems like that would also allow you to prevent D,E,F at higher feerates from replacing A,B,C in the normal way, which would be annoying (i guess you could prioritisetransaction, sendrawtx, de-prioritisetransaction?)
<glozow>
sdaftuar: I guess I'm (lightly) suggesting / wondering what it looks like to only use package feerate for the mempool min feerate, and not min relay fee check. so you could bump a 1sat/vB tx when the mempool's full, but you can't bump a 0sat/vB tx.
<sdaftuar>
glozow: ahh i see
<glozow>
_aj_: yeah if you want it to be replaceable normally afterwards, you could just undo the prioritisetransaction after it's in.
<_aj_>
sdaftuar: that would then be L2-relay's problem, the idea being you only apply L2-relay to some very limited subset of txs where you can come up with a sane solution to that problem (and this is just an rpc for such a thing to use)
<sdaftuar>
glozow: yeah i think that works?
<glozow>
sdaftuar: (though I know that makes package acceptance much less cool...)
<sdaftuar>
glozow: if we can get anything that works i think we should call it a win :)
<sdaftuar>
all this rbf/package stuff seems too hard...
<sdaftuar>
_aj_: fundamentally i'd agree that we could offer infrastructure (such as RPCs) to allow for off-network transaction relay, and outsource the DoS problems
<_aj_>
"can't bump a 0sat/vb tx" ?
<glozow>
okay so then we'd be at -0. and let's say we then loosen to "with package relay, you can bump a v3 transaction below min relay fee" ? then we could do all our evictions inside of `TrimToSize()` - there can only be 100 i think, and none of them can have descendants.
<sdaftuar>
i can't recall if we require that rbf'ing a v3 transaction requires the replacement to be v3 itself?
<glozow>
sdaftuar: it doesn't, no
<sdaftuar>
i guess either way, you can't chain a v2 child on a v3 0fee transaction so maybe doesn't matter
<sdaftuar>
so the rule is that non-v3 transactions must always have minrelayfee on them?
<glozow>
yep
<sdaftuar>
i think that works... will give it more thought
<glozow>
yeah. so any 0-fee transaction can only be in a package of size 2. Limit 100 replacements = limit 100 such 0-fee transactions afterwards. None of them can have any other descendants.
<_aj_>
glozow: can't you have n 0-fee txs as a package of size n+1 ?
<glozow>
_aj_: no, because of other v3 rules. v3 can only have v3 ancestors, and only v3 descendants.
<glozow>
and a v3 can only have 1 child
<glozow>
and only 1 parent
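The v3 topology limits glozow lists can be condensed into one predicate. This is a conceptual sketch of the proposed rules, not Bitcoin Core's API; the lists hold the versions of a tx's unconfirmed parents and children.

```python
def v3_topology_ok(version, parent_versions, child_versions):
    """Sketch of the proposed v3 limits: a v3 tx may have at most one
    unconfirmed parent and at most one child, and all of its unconfirmed
    relatives must themselves be v3. Non-v3 txns aren't constrained here."""
    if version != 3:
        return True
    if len(parent_versions) > 1 or len(child_versions) > 1:
        return False
    return all(v == 3 for v in parent_versions + child_versions)

# one zero-fee v3 parent plus its single v3 fee-bumping child: allowed
assert v3_topology_ok(3, [3], [])
# a child trying to bump n > 1 zero-fee parents at once: ruled out
assert not v3_topology_ok(3, [3, 3], [])
```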
<_aj_>
glozow: right, so n 0-fee parents and 1 hihg-fee child that cpfp's all of them?
<instagibbs>
there's a pin behind every blade of grass
<glozow>
(in case the comment doesn't load) Let's say an adversary wants to pin a tx B (make it hard to replace). They add a high-fee descendant of B that also spends from a large, low-feerate mempool transaction, C. The child may pay a very large fee but not actually be fee-bumping B if its overall ancestor feerate is still lower than B's individual feerate. For example, assuming the default ancestor size limit is 101KvB, B is 1000vB paying 2sat/vB,
<glozow>
and C is 99KvB paying 1sat/vB, adding a 1000vB child of B and C increases the cost to replace B by 101Ksat.
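glozow's pinning arithmetic can be reconstructed from the stated numbers (default ancestor size limit 101 kvB). The child can pay up to roughly 101 ksat before its ancestor feerate reaches B's own feerate, i.e. before it actually fee-bumps B.

```python
# B: 1000 vB at 2 sat/vB; C: 99,000 vB at 1 sat/vB; child spends both.
B_size, B_feerate = 1_000, 2       # -> B pays 2,000 sat
C_size, C_feerate = 99_000, 1      # -> C pays 99,000 sat
child_size = 1_000                 # vB

ancestor_size = B_size + C_size + child_size
assert ancestor_size == 101_000    # exactly at the default ancestor size limit

# The child fee at which its ancestor feerate just reaches B's own feerate --
# anything below this adds fees to B's replacement cost without bumping B:
max_nonbumping_fee = B_feerate * ancestor_size - (B_size * B_feerate + C_size * C_feerate)
assert max_nonbumping_fee == 101_000   # the ~101 ksat added to B's replacement cost
```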
<_aj_>
okay, goes in my "makes package acceptance much less cool..." but "if we can get anything that works i think we should call it a win" bucket
<glozow>
instagibbs: hashtag v3 fixes this
<instagibbs>
reasoning about this stuff is so hard :(
<_aj_>
not being able to pay for multiple parents means v3-ephemeral isn't a replacement for sighash_group anymore
<_aj_>
(not that sighash_group doesn't have all the same pinning issues itself)
<instagibbs>
yeah
<_aj_>
huh
<_aj_>
hmm, un-"huh", that was a dumb thought bubble
<_aj_>
glozow: is it interesting to test this on signet/inquisition in general; or only in the context of eltoo btw? (the cpfp 0-fee txs stuff needs miner support presumably)
<instagibbs>
V3 is still interesting for anything that isn't batching, imo
<instagibbs>
batch bumps* I mean
<instagibbs>
batched payouts, still interesting
<_aj_>
just not sure what parts are fine to just test on mainnet vs are worth exploring in something more controlled that's not just regtest
<glozow>
v3 in general should make stuff less pinnable and our rbf heuristics more accurate. replacing a v3 transaction doesn't require the RBF carve out (where if we have 1 conflict, we add its descendant size to the descendant count to avoid overestimating). we might also be able to get rid of cpfp carve out if double anchor outputs are no longer necessary
<_aj_>
i don't think we could get rid of cpfp-carveout until there were ~0 old ln channels remaining; ie years and years away?
<instagibbs>
correct
<instagibbs>
well.... maybe they gave up on using the carveout
<instagibbs>
can't recall
<_aj_>
the carveout's there so you can always use your anchor, even if your counterparty is trying to otherwise hit descendant limits
<_aj_>
(iirc)
<instagibbs>
you're assuming you see the other's commit tx in mempool, then spend the anchor
<instagibbs>
or attacker does the other side
<instagibbs>
if no one actually implements it, we can probably speed up removal...
<_aj_>
yeah; i don't believe in ignoring the mempool :)
<instagibbs>
feel free to yell at LN implementations :D
<glozow>
yes sorry i should have said "if double anchor outputs are no longer *used*"
<instagibbs>
I guess the risk is evil nodes run by _aj_ *would* scrape the mempool and pin honest users
<instagibbs>
when trying to submit their own commit tx
<_aj_>
instagibbs: anchors are only relevant if the mempool's not empty (otherwise the native fee on the commitment will get it mined) and the dual anchors are only needed in case your counterparty is trying to prevent the channel from actually closing for some reason (waiting for a htlc to expire? keeping your funds hostage?). if both those things are true, you either need an L2-relay to bypass the
<_aj_>
pinning, or need to see what's going on in the mempool to add your own anchor... i think you know the txid of their commitment, so you could construct a spend for your anchor tx off their commitment despite not having their signature for their commitment, and just optimistically broadcast that to random nodes
<instagibbs>
gossip over LN p2p is kinda interesting, you won't necessarily know any old commit tx txid(?) but if a LN peer sees it, they might take your bump of it
<_aj_>
if it's an old commit, you claim all the funds as soon as it hits the blockchain, and it doesn't really much matter when that happens?
<instagibbs>
my point was you can't blindly spam the spend of the latest commit tx based on knowing the txid. If it's in your mempool you're good to go of course
<instagibbs>
or gossiped
<_aj_>
sure you can? if an old commitment was posted, it'll just be ignored
<instagibbs>
We must be talking about different things
<_aj_>
if they haven't posted a commitment tx, yours relays; if they've posted their current version and pinned it, your anchor spend spamming works; if they posted an old one, then you wait and punish. you win in each case?
<_aj_>
(spamming the anchor tx pointlessly while waiting is fine, since you can't distinguish the latter two cases)
<bitcoin-git>
[bitcoin] brunoerg opened pull request #26714: test: add coverage for unparsable `-maxuploadtarget` (master...2022-12-maxuploadtarget-parse-test) https://github.com/bitcoin/bitcoin/pull/26714
<instagibbs>
If you mean old commit tx are too dangerous for attacker to use as a pin, sure
<instagibbs>
they could still do it, but then their risk goes through the roof
<_aj_>
i mean, if they successfully use it as a pin, you don't lose any money?
<_aj_>
for ln-penalty, not eltoo of course
<instagibbs>
if you never see it, you can't penalize it
<instagibbs>
if it gets mined game over
<instagibbs>
(for attacker)
<_aj_>
i suppose (1. pin old commitment. 2. wait for htlcs to timeout. 3. let old commitment expire, unmined. 4. publish current commitment, claim expired htlcs) works, if successful
<bitcoin-git>
[bitcoin] achow101 opened pull request #26715: Introduce `MockableDatabase` for wallet unit tests (master...test-wallet-corrupt) https://github.com/bitcoin/bitcoin/pull/26715
<darosior>
I didn't look into it, but on the surface it sounds similar to having annotations within the descriptor directly.
<achow101>
how would that information get from the user to the size estimation?
<achow101>
since it could change per transaction
<darosior>
Yeah, a downside is that it would be static
<darosior>
(A downside to hardcoding it within the descriptor)
<sipa>
yeah it's more flexible if it can be done on a per-signature level, but that's a UX nightmare
<sipa>
but there may be useful static information as well
<darosior>
But i guess it's already an improvement in some potential common usecases with Taproot, like having keyspend for day-to-day and a timelocked cold key for recovery. You can tell the wallet you use day-to-day to expect that all transactions you will be creating with it will be for the keyspend, and to not overestimate.
<sipa>
e.g. you know that any signature that your bitcoin core wallet will be involved in, will have access to that wallet's private key
<sipa>
if it's not in a hardware wallet
<achow101>
it would definitely be useful to indicate that the keypath is never going to be used
<darosior>
Hmm you could emulate that with partial descriptors? Only import the complete descriptor for the branches you may use on this wallet. And when estimating the size the wallet ignores unknown branches.
<sipa>
or even better: the key path is always going to be used
<sipa>
darosior: Yeah, you could, but I think that's only part of the picture, because within one script the same thing applies
<sipa>
And there you very much may want to know everything, even information about execution branches you won't use.
<furszy>
separate thing, about the overpaying topic: if we overpay, coin selection would increase its chances to create a change output, which is something that we usually try to avoid.
<darosior>
sipa: wouldn't it be uncommon to have different paths within a single branch?
<darosior>
And for pre-Taproot you don't care, since you always publish the whole script anyway
<sipa>
But even pre-taproot your witness estimation may very much depend on which keys are available.
<achow101>
can we use descriptors to remove the need for the dummy signer entirely? Even with size estimation, we still need the dummy signer for filling PSBTs
<sipa>
I think post-partial-descriptors we can actually move the signing logic into descriptors, rather than the other way around.
<sipa>
And anything that interacts with signing like size estimation too.
<darosior>
Yes i was thinking of going in this direction too.
<achow101>
that could be useful
<achow101>
anything else to discuss?
<darosior>
I need to leave unfortunately. Thank you all for the discussion, happy to see there is interest in this.
<sipa>
Murch had a topic I think
<furszy>
i have a one too.
<achow101>
Murch1: what was your topic?
<Murch1>
Ah
<Murch1>
No, just a ceterum censeo
<Murch1>
The Bitcoin Core wallet should really keep track of its UTXO pool instead of recalculating it from its list of transactions every time it needs it
<achow101>
furszy: go ahead
<furszy>
Murch1: agree, I actually have a card on my board to move in that direction too.
<furszy>
if we agree, we could "standardize" it and create a guideline for it.
<achow101>
I've actually tended to avoid using exceptions since we don't have general exception handling
<achow101>
I've only really used them in places where there was no other way to return an error
<achow101>
IIRC an exception will kill the gui
<Murch1>
furszy: Awesome!
<achow101>
but having some guidelines would probably be useful
<Murch1>
Err, just to be clear, that was directed at furszy's UTXO pool plans
<furszy>
yeah, I know achow101. I ended up adding a try catch to the GUI create tx flow because of it.
<furszy>
but.. it sounds that we are avoiding having a somewhat "good" library structure because our qt framework can handle exceptions in a generic manner
<furszy>
*can't
<furszy>
which isn't something related to the wallet's module. It's a GUI issue.
<achow101>
sure, but we need to consider the effects of throwing exceptions on the rest of the software
<achow101>
it would be worthwhile to write up some guidelines and fix qt simultaneously
<achow101>
given that we already do throw exceptions in some places that can already be problematic for the gui
<furszy>
by fixing, you mean the places that require catching the exception?
<furszy>
or by trying to find a way to catch them all at once
<achow101>
trying to catch them all generically
<achow101>
like we do for the rpc
<furszy>
yeah ok, I briefly tried it in the past but IIRC, there wasn't a way to catch exceptions on slots generically (at least not a way that worked fine at that time..)
<achow101>
hmm, that's something to look into again
<achow101>
just generally though, we should try to avoid things that will cause crashes
<furszy>
well. another possibility is to move all the gui backend calls to be done on a generic worker that catches all the exceptions.
<furszy>
that could also help in terms of the gui freezing time
<furszy>
sort of "dispatcher"
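furszy's "dispatcher" idea can be sketched generically: route every GUI-to-backend call through one wrapper that catches exceptions and returns a result object, instead of installing try/catch per slot. This is a pure illustration (names and shape are hypothetical, not Bitcoin Core's Qt code).

```python
from dataclasses import dataclass
from typing import Any, Callable, Optional

@dataclass
class CallResult:
    ok: bool
    value: Any = None
    error: Optional[str] = None

def dispatch(fn: Callable, *args, **kwargs) -> CallResult:
    """One generic catch point for backend calls, akin to the RPC layer's
    generic exception handling, so an uncaught throw can't kill the GUI."""
    try:
        return CallResult(ok=True, value=fn(*args, **kwargs))
    except Exception as e:
        return CallResult(ok=False, error=str(e))

# usage: the GUI inspects CallResult rather than letting exceptions propagate
res = dispatch(lambda: 1 / 0)
assert not res.ok and "division" in res.error
```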
<achow101>
yeah, there's definitely improvements that could be made there
<furszy>
yep. Will see if I can cook something up for it.
<achow101>
my main point was that when we write guidelines for how errors should be emitted, we need to keep in mind how the rest of the project will handle those
<achow101>
and what we could do to make those parts handle them better