< bitcoin-git>
[bitcoin] ajtowns opened pull request #19130: doc: make it easier to work out size of bloom filter (master...202005-bloom-doc) https://github.com/bitcoin/bitcoin/pull/19130
< bitcoin-git>
[bitcoin] jonathanschoeller opened pull request #19131: refactor: Fix unreachable code in init arg checks (master...fix-Wunreachable-code-loop-increment) https://github.com/bitcoin/bitcoin/pull/19131
< fanquake>
practicalswift asked for #18288 to be their high-prio PR, so I've added it.
< gribble>
https://github.com/bitcoin/bitcoin/issues/18288 | build: Add MemorySanitizer (MSan) in Travis to detect use of uninitialized memory by practicalswift · Pull Request #18288 · bitcoin/bitcoin · GitHub
< Kiminuo>
Hi, can I make my PR https://github.com/bitcoin/bitcoin/pull/18210 more attractive for review somehow? I have just rewritten the first post to articulate more clearly what the PR does.
< jonatack>
Kiminuo: rebasing promptly helps, and giving review is a good way to (a) see the review bottleneck and (b) receive review back
< jonatack>
Kiminuo: because there are more people who open PRs than people who review them
< bitcoin-git>
[bitcoin] vasild opened pull request #19132: qt: lock cs_main, m_cached_tip_mutex in that order (master...lock_order_m_cached_tip_mutex) https://github.com/bitcoin/bitcoin/pull/19132
< vasild>
Either a deadlock (infinite hang) or a crash if the (possible) deadlock is detected. I am not sure whether the deadlock-detection code is always enabled - in release builds and for all compilers?
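For context, a minimal illustration (in Python, not Bitcoin Core code; the mutex names are taken from the PR title) of the lock-order inversion being described: two threads that acquire the same pair of locks in opposite orders can each end up holding the lock the other is waiting for.

```python
import threading
import time

cs_main = threading.Lock()
m_cached_tip_mutex = threading.Lock()

def thread_a():
    # Acquires cs_main first, then m_cached_tip_mutex.
    with cs_main:
        time.sleep(0.1)  # widen the race window for the demo
        with m_cached_tip_mutex:
            pass

def thread_b():
    # Acquires the same locks in the opposite order: a lock-order inversion.
    with m_cached_tip_mutex:
        time.sleep(0.1)
        with cs_main:
            pass

# Run both together and the process can hang forever: each thread
# holds the lock the other one needs.
for f in (thread_a, thread_b):
    threading.Thread(target=f).start()
```

(In Bitcoin Core itself, builds compiled with DEBUG_LOCKORDER detect such inversions at runtime and abort; release builds do not define it.)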
< fanquake>
vasild: how is it an issue since 0.19 if the PR that supposedly introduced it (#17993) was only merged 3 days ago?
< vasild>
well, the problem does not exist in 0.19
< fanquake>
Right. So the issue only currently exists in master?
< vasild>
yes
< fanquake>
If so, there is no need to backport anything.
< vasild>
by "regression since 0.19" I mean that the problem does not exist in 0.19 but would exist in 0.20 (if released without a fix)
< vasild>
right, I think no need to backport anything
< fanquake>
I don't see how it can exist in 0.20 if it's only been merged into master
< fanquake>
It will be fixed in master, and should hopefully never exist in any release.
< vasild>
It _would_ exist in 0.20 if 0.20 is released without a fix.
< vasild>
All I am saying is that this is a new problem that IMO should be fixed before the release. If the problem existed in 0.19 (an old one), maybe it would have been less important to get the fix into 0.20.
< fanquake>
vasild: the 0.20 release is currently based on the 0.20 branch: https://github.com/bitcoin/bitcoin/tree/0.20 (not master), so if the PR that introduced the issue (#17993) has only been merged into master, then there is no issue with 0.20.
< Kiminuo>
jonatack: My PR is buried at this point, so I don't really expect anyone to find it.
< Kiminuo>
and rebasing also does not make much sense, because it only takes up CI resources that other PRs could use
< Kiminuo>
(I don't modify C++ code)
< vasild>
fanquake: oh, this is my confusion - I assumed that 0.20 was yet to be branched from master, sorry. So 0.20 does not have the bug, meaning it only needs to be fixed before 0.21! :)
< fanquake>
vasild: 👍
< vasild>
What about attaching a "lock order" number to each mutex (hardcoded in the source code) and enforcing that locking always happens in ascending order?
< hebasto>
vasild: maybe some of the thread safety annotations could be useful?
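A rough sketch of the "lock order number" idea, in Python for brevity; the names and ranks here are illustrative, not proposed code. Each mutex carries a hardcoded rank, and a thread-local stack of held ranks lets the wrapper reject any acquisition that is not in ascending order:

```python
import threading

_held = threading.local()  # per-thread stack of currently held ranks

class OrderedLock:
    """A mutex with a hardcoded rank; out-of-order acquisition raises."""
    def __init__(self, rank, name):
        self.rank = rank
        self.name = name
        self._lock = threading.Lock()

    def __enter__(self):
        stack = getattr(_held, "stack", [])
        if stack and stack[-1] >= self.rank:
            raise RuntimeError(
                f"lock order violation: taking {self.name} (rank "
                f"{self.rank}) while holding a lock of rank {stack[-1]}")
        self._lock.acquire()
        stack.append(self.rank)
        _held.stack = stack
        return self

    def __exit__(self, *exc):
        _held.stack.pop()
        self._lock.release()

# The ranks encode the single global order: cs_main before m_cached_tip_mutex.
cs_main = OrderedLock(10, "cs_main")
m_cached_tip_mutex = OrderedLock(20, "m_cached_tip_mutex")

with cs_main:
    with m_cached_tip_mutex:  # ascending ranks: allowed
        pass

try:
    with m_cached_tip_mutex:
        with cs_main:         # descending ranks: rejected, not deadlocked
            pass
except RuntimeError as e:
    print(e)
```

Clang's thread safety analysis can express the same constraint statically in C++ (e.g. with acquired-before/acquired-after annotations), which is presumably what hebasto is pointing at.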
< gwillen>
MarcoFalke: I'm curious what inspired you to fix all the codespell typos a few days ago -- I was never sure if those only show up for me, or for mac users, or what
< gwillen>
I always assumed they must not be in whatever dictionary Travis uses, or something
< jarthur>
promag: definitely, at least vs a single RPC per HTTP request. A new HTTP request is expensive. In dynamic languages with expensive call stacks, like Python and JavaScript, not having to go up and down the stack as much for multiple sends carries a performance benefit. JSON-RPC pipelining is better on memory usage than batching, just harder to program support for.
< promag>
at least vs single RPC per HTTP request <- yeah I don't mean this case
< promag>
so in the case where the connection is already established, the only saving is on the HTTP protocol overhead, right?
< promag>
because for the server, batching is actually worse, at least memory-wise
< aj>
sipa: "after how much time are we okay with attackers knowing everything that was in the mempool at the time?" -- if 99.9% of tx's are relayed to 99.9% of nodes within X seconds, then 2*X seconds? (fsv of 2 and 99.9%)
< sipa>
aj: maybe that's even too strong a question
< sipa>
"after how much time beyond the time it takes to send an INV are we ok with attackers knowing our mempool"
< sipa>
if the answer is: whenever we send an INV, attackers can know our entire mempool before that point... it's very easy
< jarthur>
promag: right, assuming the server supports pipelining multiple requests on the same connection (like JSON-RPC 2.0 TCP Transport w/ JSON Splitting, Stratum Mining TCP Transport, Stratum Wallet TCP Transport etc), I don't see an advantage batching would have.
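To make the batching-vs-pipelining distinction concrete, a hypothetical sketch (the endpoint, credentials, and the newline-delimited framing are placeholder assumptions; bitcoind's HTTP interface does not offer the raw-TCP framing shown):

```python
import json
import socket

import requests  # third-party; pip install requests

URL = "http://127.0.0.1:8332/"  # placeholder JSON-RPC endpoint
AUTH = ("user", "pass")         # placeholder credentials

# Batching: one HTTP request whose body is a JSON array. The server has
# to buffer the whole request array and the whole response array in
# memory -- promag's point about batching being worse for the server.
batch = [{"jsonrpc": "2.0", "id": i, "method": "getblockhash", "params": [i]}
         for i in range(3)]
replies = requests.post(URL, auth=AUTH, data=json.dumps(batch)).json()

# Pipelining: over a raw TCP transport with newline-delimited JSON,
# requests are written as soon as they are ready and replies are read
# back as they arrive, so neither side holds the whole batch at once.
def pipelined(host, port, reqs):
    with socket.create_connection((host, port)) as s:
        f = s.makefile("rw")
        for req in reqs:  # write everything without waiting for replies
            f.write(json.dumps(req) + "\n")
        f.flush()
        return [json.loads(f.readline()) for _ in reqs]
```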
< sipa>
aj: if so, we'd just keep track of the last timestamp when we sent out a tx INV, and respond to requests for anything that was in the mempool before that point
< sipa>
we'd still need a relay pool + bloom filter for dealing with things not in the mempool
< aj>
sipa: "whenever we send an INV, an attacker can know the presence or absence of every tx that entered our mempool prior to the latest tx in that INV" maybe. i don't see a lot of benefit to doing much else, since an attacker could just make more connections to get around it?
< sipa>
but the latter can go away once fast NOTFOUND handling is widespread on the network
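A toy model of the scheme sipa is describing, with invented names: remember when we last sent this peer a tx INV, serve GETDATA only for transactions that entered the mempool before that time, and fall back to the relay pool for things announced but no longer in the mempool (the part that fast NOTFOUND handling would eventually make unnecessary):

```python
import time

class Peer:
    def __init__(self):
        self.last_inv_time = 0.0  # when we last sent this peer a tx INV

mempool = {}     # txid -> (tx, time it entered our mempool)
relay_pool = {}  # txid -> tx, for announced txs no longer in the mempool

def send_tx_inv(peer, txids):
    peer.last_inv_time = time.time()
    # ... actually serialize and send the INV message ...

def handle_getdata(peer, txid):
    entry = mempool.get(txid)
    if entry and entry[1] <= peer.last_inv_time:
        return entry[0]          # was in the mempool before our last INV
    if txid in relay_pool:
        return relay_pool[txid]  # announced, but evicted/mined since
    return None                  # otherwise reply NOTFOUND
```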
< aj>
sipa: is this protecting against anything other than tracing transaction sources?
< sipa>
i don't think so
< sipa>
aj: here is a downside: for a very new connection, by observing the difference between what we actually announce to a peer and what they can request (after our first INV)... they get an exact point distinguishing transactions we learned right before and right after the connection was created
< aj>
sipa: ha. could hack around that by not sending the first INV's worth of txs to them?
< sipa>
does that help?
< sipa>
right, it does a bit
< aj>
sipa: it makes it no more leaky than any time you get an INV after the two prior INVs had fewer than 100 txids, i think?
< aj>
wish it were easier to keep all the relay-related rules in my head at the same time