< andytoshi>
hehe, i did actually learn a lot about core
< achow101>
I think you've run into why everyone is afraid of changing coin selection. the logic is too hard to follow
< sipa>
it's probably a good reason to justify changes
< sipa>
the existing logic is a painful mess, but at least my impression was that it was very unlikely to actually fuck up
< andytoshi>
i mean, this is a real corner case, achow and i weren't sure if we could make it trigger without simulating weird fee behavior, and even then
< andytoshi>
what's surprising is that it is a "coin selection" bug which is entirely outside of SelectCoins, it's the retry-loop in CreateTransaction
< achow101>
unfortunately CreateTransaction is considered part of coin selection behavior because of the loop
< fanquake>
wumpus / sipa: may just want to block kengendron251
< sipa>
fanquake: going to wait
< sipa>
his last message made it sound like a mistake
< wumpus>
what did they do? looking at the profile it's either a spammer or a 12 year old :)
< fanquake>
Posted a bit of spammy crap, but now it also seems like they've managed to subscribe to the repo, and can't figure out how to unsubscribe.
< aj>
deleted comment from 20317 says "I was on this planet well before the computer" so rules out the 12yo theory? :)
< bitcoin-git>
[bitcoin] ajtowns opened pull request #20353: configure: Support -fdebug-prefix-map (master...202011-ccache-debug-prefix) https://github.com/bitcoin/bitcoin/pull/20353
< bitcoin-git>
[bitcoin] practicalswift opened pull request #20355: fuzz: Check for addrv1 compatibility before using addrv1 serializer/deserializer on CSubNet (master...fix-sub_net_deserialize) https://github.com/bitcoin/bitcoin/pull/20355
< az0re>
both ECONOMICAL and CONSERVATIVE suggest *way* too high fees, and I suspect that estimatesmartfee is creating its own reality here: It's suggesting high fees, and so the mempool gets a bunch of really high fee txes, and so the estimator keeps suggesting really high fees
< az0re>
However, txes are clearing at way, way lower feerates
< sipa>
az0re: that's kind of expected; the estimator does not look at the mempool, but only at the rate at which feerates confirm
< az0re>
And if all installed estimators suddenly cut their suggested feerates 10x I suspect nothing would change but people saving a TON on tx fees
< sipa>
so it's inherently delayed
< az0re>
sipa: Looking at the rate at which feerates confirm should *also* result in a much lower suggested fee
< az0re>
In the last 12h I've seen txes with feerates as low as 3 sat/vbyte clear in a single block
< az0re>
OK, no idea how these estimates are calculated, but the super sharp drop off is a red flag
< sipa>
may be worth opening an issue for
< darosior>
az0re: it's expected. It's possible, but there are not historically enough of them for the estimator to choose this bucket.
< darosior>
For example, if you had a new mode, say RECKLESS, with a far lower confirmation probability than ECONOMICAL, then it'd probably give you those estimates.
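darosior's bucket description can be sketched roughly as follows. This is a deliberately simplified, hypothetical illustration, not Bitcoin Core's actual CBlockPolicyEstimator: the `Bucket` struct, the thresholds, and `EstimateFee` are all invented for the sketch; only the idea (a bucket is chosen once enough historical confirmations back it) comes from the conversation.

```cpp
#include <vector>

// Hypothetical feerate bucket: how many txs at this feerate we tracked,
// and how many of those confirmed within the target.
struct Bucket {
    double feerate;   // sat/vB
    int confirmed;    // txs from this bucket confirmed within the target
    int tracked;      // txs from this bucket observed in total
};

// Return the lowest feerate whose bucket has both enough samples and a
// high enough historical success rate; fall back to the highest bucket
// when no bucket qualifies. Buckets are assumed sorted by feerate.
double EstimateFee(const std::vector<Bucket>& buckets,
                   int min_samples, double min_success)
{
    double fallback = buckets.empty() ? 0.0 : buckets.back().feerate;
    for (const Bucket& b : buckets) {
        if (b.tracked >= min_samples &&
            double(b.confirmed) / b.tracked >= min_success) {
            return b.feerate;  // lowest qualifying bucket wins
        }
    }
    return fallback;
}
```

Under this model, a 3 sat/vB bucket that only has a handful of tracked txs is skipped even if every one of them confirmed, which is one way "not historically enough of them" can keep the estimate pinned high.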
< az0re>
darosior: "not historically enough of them" -- of what?
< phantomcircuit>
az0re, the problem is that setting the fee rate too high is a known cost, while setting it too low tends to leave people with stuck transactions
< phantomcircuit>
the latter being much worse for 90% of people
< az0re>
phantomcircuit: I totally understand that, and I totally understand being conservative. However:
< az0re>
1. Isn't this why there is CONSERVATIVE and ECONOMICAL? `1 ECONOMICAL` in this case really should not be giving me 200 sat/vbyte!
< az0re>
2. We should still see a smooth drop off. But `1 CONSERVATIVE` and `10 CONSERVATIVE` are identical.
< az0re>
And it extends beyond 1 and 10; very frequently I see huge swathes of the estimates sitting at exactly the same feerate
< az0re>
If `1 CONSERVATIVE` gave 200, `10 CONSERVATIVE` gave 40, and `1 ECONOMICAL` gave 20 I would have nothing to complain about
< az0re>
<phantomcircuit> [...] result in people with transactions that are stuck
< az0re>
Also, isn't this why RBF is a thing? :)
< az0re>
I understand that doesn't necessarily solve the problem in all cases, but it should reduce the need for caution about suggesting too-low feerates, especially for ECONOMICAL
< az0re>
Finally, I will stop spamming after this I swear, but the bimodal distribution of the mempool seems fundamentally broken to me. In my idealized vision of the world, there would be monotonically increasing mempool pressure with decreasing feerate.
< az0re>
The current structure suggests to me that the auction mechanics of the mempool are not really working correctly
< queip>
long term, would bitcoind start with 1 s/vB and then in the background keep slowly RBFing it until it confirms? In the transaction you would specify how long you want to wait, how long you can wait, an absolute limit (right before which it sets the very max fee), and the max fee
< queip>
e.g. dust consolidations would be like 144, 1440, 6000, 100 [s/vB] while regular onchain shopping could be 3, 6, 24, 1000
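queip's schedule could look something like the sketch below. Everything here is hypothetical: no such background bumping exists in bitcoind, and the function name, the parameters, and the linear bump between the comfortable target and the hard limit are illustrative choices, not a proposal's actual design.

```cpp
#include <algorithm>

// Hypothetical RBF schedule: feerate (sat/vB) to use after waiting
// `blocks_waited` blocks, given a comfortable wait `target`, a hard
// deadline `hard_limit`, a starting feerate, and an absolute max feerate.
double ScheduledFeerate(int blocks_waited, int target, int hard_limit,
                        double start, double max_feerate)
{
    if (blocks_waited >= hard_limit) return max_feerate;  // out of time: pay up
    if (blocks_waited <= target) return start;            // still happy to wait
    // Bump linearly between the comfortable target and the hard limit.
    double frac = double(blocks_waited - target) / (hard_limit - target);
    return std::min(max_feerate, start + frac * (max_feerate - start));
}
```

With queip's dust-consolidation numbers (target 144, hard limit 6000, max 100 s/vB), this sits at the starting feerate for the first day and only ramps up as the deadline approaches.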
< bitcoin-git>
bitcoin/master 1dfe19e Wladimir J. van der Laan: Merge #20153: wallet: do not import a descriptor with hardened derivations...
< bitcoin-git>
[bitcoin] laanwj merged pull request #20153: wallet: do not import a descriptor with hardened derivations into a watch-only wallet (master...importdesc_silent_fail) https://github.com/bitcoin/bitcoin/pull/20153
< nanotube>
sipa: yes, i still run gribble. mempool command fetches data from mempool.space
< nanotube>
az0re: fees command pulls from blockstream.info api
< bitcoin-git>
[bitcoin] dongcarl opened pull request #20359: depends: Various config.site.in improvements and linting (master...2020-11-config-site-cleanup) https://github.com/bitcoin/bitcoin/pull/20359
< stevenroose>
I'm reading "A heap is used so that not all items need sorting if only a few are being sent." in net_processing.cpp.
< stevenroose>
I don't think I understand how the C++ heap implementation works; does it order lazily?
< sipa>
stevenroose: the STL implementation shouldn't matter
< sipa>
it constructs a heap in O(n) time from the input elements
< sipa>
then you can extract the top element from it in O(log n) time
< sipa>
so if you only need m elements, it's O(n + m*log(n)) work
< sipa>
while full sorting would need O(n*log(n)) work
< stevenroose>
oooh, I'm reading that's just the way a heap works, didn't know that. didn't know it had those properties, fancy
< sipa>
yeah, it's actually more of a priority queue algorithm than a sorting algorithm
< sipa>
heapsort first structures the input into a heap, and then iteratively extracts the top level, shrinking the heap and using the freed space to store the extracted element
< sipa>
*top element
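sipa's complexity argument maps directly onto the STL heap primitives (`std::make_heap` is O(n), each `std::pop_heap` is O(log n), so taking m of n elements costs O(n + m log n) versus O(n log n) for a full sort). A minimal self-contained illustration; the `top_m` helper is made up for the example and is not the net_processing.cpp code:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Extract the m largest elements of v in descending order without fully
// sorting v: build a max-heap in O(n), then pop the top m times.
std::vector<int> top_m(std::vector<int> v, std::size_t m)
{
    std::make_heap(v.begin(), v.end());          // O(n), max-heap at v[0]
    std::vector<int> out;
    for (std::size_t i = 0; i < m && !v.empty(); ++i) {
        std::pop_heap(v.begin(), v.end());       // moves max to back, O(log n)
        out.push_back(v.back());
        v.pop_back();                            // shrink the heap
    }
    return out;
}
```

Popping all n elements this way, writing each into the space freed at the back, is exactly heapsort; stopping after m is what saves the work when only a few items are being sent.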
< meshcollider>
real_or_random: done ^
< * luke-jr>
wonders where the generational gap is between "computers take time to do things" and "wtf? why isn't it done yet?"
< stevenroose>
sipa: thanks
< luke-jr>
(I didn't learn to care about optimisation until maybe ~18yo or so)