<Sjors[m]>
Though if we want to rename the utility eventually, it's always better to do it early.
S3RK_ has quit [Ping timeout: 276 seconds]
vasild_ is now known as vasild
arejula27 has joined #bitcoin-core-dev
<vasild>
jarolrod: Excellent!
PaperSword has joined #bitcoin-core-dev
<vasild>
ajonas: b10c: Thanks for https://adamjonas.com/bitcoin/coredev/retro/coredev-2024-retro/! "Interested in Learning More About ... Network layer optimizations +6". What's that? Let's discuss this, here or in #bitcoin-core-network-layer-optimizations or elsewhere...
<vasild>
(#bitcoin-core-network-layer-optimizations IRC channel, not hashtag ;)
Guyver2 has joined #bitcoin-core-dev
arejula27 has quit [Ping timeout: 252 seconds]
jonatack has quit [Ping timeout: 252 seconds]
mudsip has joined #bitcoin-core-dev
pyth has joined #bitcoin-core-dev
purpleKarrot has joined #bitcoin-core-dev
jonatack has joined #bitcoin-core-dev
infernix has joined #bitcoin-core-dev
mudsip has quit []
instagibbs5 has joined #bitcoin-core-dev
instagibbs has quit [Ping timeout: 248 seconds]
instagibbs5 is now known as instagibbs
jonatack has quit [Ping timeout: 244 seconds]
arejula27 has joined #bitcoin-core-dev
arejula27 has quit [Ping timeout: 268 seconds]
jonatack has joined #bitcoin-core-dev
arejula27 has joined #bitcoin-core-dev
greypw1495085 has quit [Quit: Connection reset by beer]
<glozow>
sipa: you're correct that each peer will have a vector of items corresponding to each announcement, so that's how we'll track the count per peer
<glozow>
however that refactoring doesn't really make sense to do before we change the trimming loop
<sipa>
glozow: got it
<sipa>
glozow: do we have a bound on the sizes of m_work_set within each peer's structure?
<glozow>
no, we don't
<glozow>
I think we could consider flushing it more proactively
<glozow>
because you could theoretically have a ton of nullptrs in there
<glozow>
Er sorry, they aren't pointers. I mean wtxids corresponding to transactions that don't exist
<glozow>
Perhaps when a workset grows large (e.g. to 100), we delete the items that don't exist before appending more
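A minimal sketch of what that proactive flushing could look like, with stand-in types and a hypothetical helper name (illustrative only, not actual TxOrphanage code):

```cpp
// Illustrative sketch only, not actual TxOrphanage code: once a peer's work
// set grows past a threshold, drop wtxids whose transactions are no longer in
// the orphan map before appending more work. Wtxid and OrphanTx are stand-ins.
#include <cstddef>
#include <cstdint>
#include <map>
#include <vector>

using Wtxid = uint64_t;  // stand-in for a 256-bit wtxid
struct OrphanTx {};      // stand-in for the stored orphan entry

void MaybeFlushWorkSet(std::vector<Wtxid>& work_set,
                       const std::map<Wtxid, OrphanTx>& orphans,
                       size_t flush_threshold = 100)
{
    if (work_set.size() < flush_threshold) return;
    // Erase entries whose transaction has already been removed from the orphan map.
    std::erase_if(work_set, [&](const Wtxid& w) { return orphans.count(w) == 0; });
}
```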
<sipa>
keeping m_work_set.size() - m_announcements_per_peer.size() bounded may be enough
<sipa>
hmm, but we may need to think about the maximum amount of work a single incoming announcement can impose when trimming the m_work_sets down.
<sipa>
if you can quickly find the ones whose wtxid is gone, that's not a problem, but if you need to loop for that, perhaps it is
<glozow>
you'd iterate through the work set and query m_orphans, which is a map
<glozow>
mm, wonder if we should make it an unordered_map
johnny9dev584508 has joined #bitcoin-core-dev
<sipa>
right, but if you have N elements in work_set, and N-1 of those are in m_orphans, and it's too big, you may need N-1 iterations to find the one that's missing, which is O(N log N)
jonatack has quit [Ping timeout: 244 seconds]
<sipa>
eh, O(N), sorry, it's unordered
jonatack has joined #bitcoin-core-dev
<sipa>
still, a single incoming announcement may trigger up to max_announcements evictions; if each of those needs an O(N)-cost trimming of m_work_sets, that may be a problem, though i'm not sure if there actually is a scenario with such a cost
<sipa>
i guess you'd do the announcement evictions first, and then trim the m_work_sets
<glozow>
I might be misunderstanding, but I'd envision that we trim work sets in AddChildrenToWorkSet, not when we LimitOrphans
thoragh has quit [Ping timeout: 260 seconds]
<sipa>
Ah, yes, that is sufficient.
<sipa>
My thinking was that in order to bound memory usage, we need to limit how much bigger the work set can be than the number of announcements of a peer (because the latter is already bounded and thus accounted for, and adding a constant term on top of that suffices), so that if announcements are removed, that may mean reducing m_work_set too... but there is really no reason why it cannot be delayed
<sipa>
until the next time something is added to the work set.
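A sketch of where that deferred trim could sit, reusing the stand-in types and the hypothetical MaybeFlushWorkSet helper from the earlier sketch (the name echoes AddChildrenToWorkSet, but the signature is illustrative, not the real interface):

```cpp
// Illustrative: the lazy trim runs at the point where new work is appended,
// so announcement eviction never has to walk the work sets; this keeps
// work_set.size() within a bounded distance of the peer's announcement count.
void AddChildrenToWorkSetSketch(std::vector<Wtxid>& work_set,
                                const std::map<Wtxid, OrphanTx>& orphans,
                                const std::vector<Wtxid>& spending_children)
{
    MaybeFlushWorkSet(work_set, orphans);  // deferred trim of stale wtxids
    for (const auto& wtxid : spending_children) {
        work_set.push_back(wtxid);         // then append the new work
    }
}
```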
bugs_ has joined #bitcoin-core-dev
jespada has quit [Ping timeout: 248 seconds]
jespada has joined #bitcoin-core-dev
eugenesiegel has joined #bitcoin-core-dev
<glozow>
Yeah, the workset can't grow with AddTx, which is nice. Another problem though is that the workset can have duplicates... so this thing can get pretty big
<sipa>
oooh
<sipa>
that may be an issue
<glozow>
Yeah...
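One way duplicates could be kept out without giving up append-order processing is a parallel membership set; a rough sketch with stand-in types, not something proposed in the discussion:

```cpp
// Illustrative sketch: guard appends with an unordered_set so the same wtxid
// is never queued twice, while the vector preserves processing order.
#include <cstdint>
#include <unordered_set>
#include <vector>

using Wtxid = uint64_t;  // stand-in for a 256-bit wtxid

struct PeerWorkSet {
    std::vector<Wtxid> queue;           // wtxids to reconsider, in order
    std::unordered_set<Wtxid> members;  // O(1) duplicate check

    bool Add(const Wtxid& wtxid)
    {
        if (!members.insert(wtxid).second) return false;  // already queued
        queue.push_back(wtxid);
        return true;
    }
};
```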
<glozow>
While we're at it, maybe we think about size of the outpoints map too
<glozow>
(memusage + tax is starting to look better huh)
<bitcoin-git>
[bitcoin] fanquake opened pull request #31820: build: consistently use `CLIENT_NAME` in libbitcoinkernel.pc.in (master...reuse_client_name) https://github.com/bitcoin/bitcoin/pull/31820
<sipa>
glozow: it would be nice if we could have a formula of the form memusage <= A + B*tx_weight
<sipa>
so then the reservation can be set to A*num_tx + B*sum_weights, for the num_tx/sum_weight of the largest concurrently-relayed orphan set we want to resolve
<glozow>
as in, we define the memory usage metric to be A + B*weight?
brunoerg has quit [Remote host closed the connection]
brunoerg has joined #bitcoin-core-dev
<sipa>
glozow: right, rather than use real measured memory usage, use A + weight*B as an approximation for it, which is simple enough to be tied to real use cases (which will be about num_tx, weight, num_inputs, ...), since those don't map directly onto real memory usage
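A tiny sketch of that approximation; A and B below are placeholder values assumed for illustration, not proposed constants:

```cpp
// Illustrative only: approximate per-orphan memory as A + B*weight instead of
// measuring DynamicMemoryUsage(tx), so the per-peer reservation can be stated
// in terms of a concrete use case.
#include <cstdint>

constexpr int64_t A{100};  // assumed fixed per-orphan overhead, in bytes
constexpr int64_t B{1};    // assumed bytes of usage per weight unit

constexpr int64_t ApproxOrphanUsage(int64_t tx_weight) { return A + B * tx_weight; }

// Reservation sized so num_tx orphans totalling sum_weight WU always fit:
constexpr int64_t PeerReservation(int64_t num_tx, int64_t sum_weight)
{
    return A * num_tx + B * sum_weight;
}
```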
<instagibbs>
"tied to real use cases"?
<glozow>
instagibbs: it's still easy to explain what the orphanage limit is
<instagibbs>
ah :)
brunoerg has quit [Ping timeout: 265 seconds]
<sipa>
well, "one maximally-sized ancestor set will resolve" e.g.
<instagibbs>
i.e., given a fixed A/B, we set the per peer reservation to a value that would protect 24 orphans at 404/24 kWU?
<instagibbs>
each*
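Plugging that case into the A*num_tx + B*sum_weights formula, with placeholder A and B and taking the maximal standard ancestor set as 404,000 WU split across 24 orphans:

```cpp
// Worked instance with placeholder constants; the only numbers taken from the
// discussion are the 24 orphans and the ~404 kWU of total weight.
#include <cstdint>

constexpr int64_t A{100};               // placeholder per-orphan overhead (bytes)
constexpr int64_t B{1};                 // placeholder bytes per weight unit
constexpr int64_t NUM_ORPHANS{24};
constexpr int64_t SUM_WEIGHT{404'000};  // 101 kvB ancestor size limit * 4

constexpr int64_t reservation{A * NUM_ORPHANS + B * SUM_WEIGHT};
static_assert(reservation == 406'400);  // 2'400 + 404'000 bytes per peer
```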
<glozow>
hold on I'm not really understanding how we solve for A and B
<glozow>
or is this formula supposed to encompass both of the dos scores?
jespada has quit [Quit: My Mac has gone to sleep. ZZZzzz…]
Guest32 has joined #bitcoin-core-dev
Guest32 has quit [Client Quit]
<sipa>
glozow: just memory usage
<bitcoin-git>
[bitcoin] Christewart opened pull request #31823: tests: Add witness commitment if we have a witness transaction in `FullBlockTest.update_block()` (master...2025-02-07-featureblockpy-witnesscommitment) https://github.com/bitcoin/bitcoin/pull/31823
<sipa>
my thinking is: we try to find a "simple" formula for computing an approximate/upper bound on actual memory usage for a transaction (maybe it involves more parameters than just num_tx and weight; it could also use num_inputs, num_outputs, ...)
<sipa>
because if the formula is just DynamicMemoryUsage(tx), then it's unclear how high to set the reservation for each peer to satisfy a particular use case like "a maximal ancestor set will resolve"
<instagibbs>
if you add in a num_inputs term, for example, you'd have to make sure the maximal number of inputs is allowed for the "maximal ancestor set". Might be easier to overestimate and say, degenerately, that they're proportional to weight
jarthur has joined #bitcoin-core-dev
<sipa>
yeah
<sipa>
right
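A back-of-the-envelope version of that overestimate; the ~41-byte minimum serialized input size is an assumption added here, not something stated above:

```cpp
// Illustrative: a serialized input is at least ~41 bytes (36-byte outpoint +
// 1-byte script length + 4-byte nSequence), i.e. at least 164 WU, so the input
// count can be coarsely over-bounded by weight alone.
#include <cstddef>

constexpr size_t MIN_INPUT_WEIGHT{41 * 4};
constexpr size_t MaxInputsForWeight(size_t weight) { return weight / MIN_INPUT_WEIGHT; }

static_assert(MaxInputsForWeight(404'000) == 2'463);  // maximal ancestor set
```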
eugenesiegel has quit [Quit: Client closed]
eugenesiegel has joined #bitcoin-core-dev
Earnestly has quit [Ping timeout: 252 seconds]
Earnestly has joined #bitcoin-core-dev
Guyver2 has left #bitcoin-core-dev [Closing Window]