< aj>
sipa: why wouldn't you solve per-peer dandelion DoS by just round-robining between peers proposing dandelion txs? (and limiting the per-peer incoming dandelion queue)
< sipa>
aj: elaborate?
< aj>
sipa: incoming dandelion tx goes into a per-peer queue, that's capped by size. pick a rate limit for outgoing dandelion txs, use poisson delays to do outgoing dandelion txs at most at that rate limit. when sending an outgoing dandelion tx, pick your own tx if you have one to send, otherwise send peer X's earliest dandelion tx and increment X so you use the next peer next time.
< aj>
(or use peer Y's oldest dandelion tx where Y has a dandelion tx and no peer Z, X <= Z < Y, has a dandelion tx)
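aj's round-robin scheme can be sketched roughly as follows (a hypothetical illustration, not Bitcoin Core code; `StemRouter`, `MAX_QUEUE`, and the queue layout are all assumptions made for this sketch):

```python
from collections import deque

MAX_QUEUE = 10  # hypothetical per-peer cap on queued stem txs


class StemRouter:
    """Round-robin between peers' dandelion queues, per aj's sketch."""

    def __init__(self, num_peers):
        self.queues = [deque() for _ in range(num_peers)]
        self.next_peer = 0  # X: the peer whose turn comes next

    def enqueue(self, peer, tx):
        """Queue an incoming stem tx; on overflow, return the oldest
        entry so the caller can fluff it instead."""
        q = self.queues[peer]
        if len(q) >= MAX_QUEUE:
            return q.popleft()
        q.append(tx)
        return None

    def pick(self, own_tx=None):
        """Called at each Poisson-delayed outgoing-stem slot."""
        if own_tx is not None:
            return own_tx  # always prefer our own tx
        n = len(self.queues)
        # find peer Y >= X (wrapping) with a queued tx, then advance X past Y
        for i in range(n):
            y = (self.next_peer + i) % n
            if self.queues[y]:
                self.next_peer = (y + 1) % n
                return self.queues[y].popleft()
        return None
```

a peer that floods at max rate only overflows its own queue; the other queues keep getting their round-robin turns.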
< sipa>
and if the queue overflows, just convert into a real tx?
< aj>
if the queue overflows, fluff the oldest entry in the queue? maybe?
< sipa>
aj: interesting, i think that could work
< aj>
\o/
< glozow>
dumb question, can u rate limit by sigops?
< sipa>
glozow: sigops are (for the purposes of fee estimation/standardness) converted to vsize
< sipa>
a transaction's effective vsize is max(real vsize, 20*sigops)
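a minimal illustration of sipa's formula; the factor 20 matches Bitcoin Core's default `-bytespersigop=20` setting (function name here is just for illustration):

```python
BYTES_PER_SIGOP = 20  # Bitcoin Core default (-bytespersigop)


def effective_vsize(real_vsize, sigops):
    # effective vsize = max(real vsize, 20 * sigops), as sipa describes
    return max(real_vsize, BYTES_PER_SIGOP * sigops)
```

so a 250-vbyte tx with 25 sigops is rate-limited as if it were 500 vbytes.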
< glozow>
ohhh right
< sdaftuar>
sipa: aj: just getting caught up here. it's been a while since i've thought about this, but i think anything that uses a rate limit can be trivially attacked, to force worst case behavior--
< sipa>
sdaftuar: yeah, that's also what i remember from thinking about it earlier
< sdaftuar>
and it seems to me that worst case behavior is just one-hop dandelion. one-hop dandelion is probably still pretty good (would be nice to see a writeup on that), but if that's all we can guarantee, it might be a lot easier to just implement one-hope dandelion and call it a day
< sipa>
but in this design, if you have a few peers that constantly send you stem txn at max rate, they'll just cause their own queue to overflow, and not affect relay of other peers' stems i think?
< glozow>
what's one-hop dandelion? like only 1 stem?
< sipa>
glozow: "when you have a new tx, only send it to one peer, and don't add it to your own mempool (except after a delay)"
< glozow>
mm, thanks sipa
< sdaftuar>
with dandelion you take all your inbound transactions and funnel them to 1 or 2 outbounds, so i think you could cause your outbound peers to fluff all your transactions even if none of your inbounds are exceeding the rate limit
< sipa>
need to think more about this
< aj>
i think if average dandelion stem length is 10, and there's 10k publicly reachable nodes, then the average node only stems 1/1000th of the number of txs it sees flooded, so if you set the dandelion rate limit to 10% of your tx rate limit, and only have 100 tx-relay peers, no honest peer will hit the rate limit?
< aj>
(i think the bip's proposed 10% odds of fluffing gives an average path length of ~10)
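the ~10 figure follows from the geometric distribution: if each hop fluffs with probability p = 0.1 and otherwise relays the stem, the expected number of hops is 1/p = 10. a quick sketch (helper names are made up for this illustration):

```python
import random


def expected_stem_length(p_fluff):
    # E[hops] for a geometric distribution:
    # sum over k>=1 of k * p * (1-p)^(k-1) = 1/p
    return 1.0 / p_fluff


def simulate_stem_length(p_fluff, trials=100_000, seed=1):
    # Monte Carlo check under the same assumption: each hop
    # independently fluffs with probability p_fluff
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        hops = 1
        while rng.random() >= p_fluff:
            hops += 1
        total += hops
    return total / trials
```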
< sdaftuar>
sipa: aj: did i ever share a writeup of the issues i found with dandelion with you guys? i know i analyzed this at some length a couple years ago, but i need to find my notes
< aj>
sdaftuar: "one-hope dandelion" is a great typo :)
< sdaftuar>
lol
< sipa>
sdaftuar: i only know of your bitcoin SE answer
< aj>
sdaftuar: pretty sure i wasn't paying attention in depth at that time
< sdaftuar>
i remember writing up something more detailed explaining why i thought we needed to double the mempool (ie, allocate stempool memory equal to the mempool) to implement dandelion without introducing DoS vectors
< sdaftuar>
but i need to find that again
< aj>
sdaftuar: i think one-hop dandelion would already solve all the protocol work; so upgrading to n-hop dandelion would be a per-node "relay-policy" update after that too
< sipa>
one-hope dandelion could even be done without any protocol changes (if we expect the peer to always convert a stempool tx to a real one, just send it as a normal one)
< aj>
aww
< sdaftuar>
one of the things i remember discussing with morcos was that we could implement a modified version of dandelion where we fall back to fluffing everything if any kind of DoS scenario seemed to be happening... his suggestion (if i remember right) is that such an outcome would strictly be better than what we do today
< sipa>
i remember that too, and i remember not being convinced it's worth the implementation complexity in that case
< sdaftuar>
my concern was that i didn't like the idea of advertising that we implement Dandelion (as described in the paper) but in practice we get behavior worse than that whenever adversarial conditions strike, which is something that is likely unobservable
< sdaftuar>
but if we just implemented one-hop dandelion, i think that's pretty trivial -- it's basically just a wallet behavior
< aj>
it's changing RelayTransactions to have a flag to only pick a couple of outbound nodes, instead of everyone?
< sdaftuar>
the only downside to that is there's no writeup of how that improves privacy
< sdaftuar>
but it probably does
< sdaftuar>
aj: yeah something like that, and not accepting the transaction to our own mempool
< sipa>
sdaftuar: i remember talking to giulia fanti at FC about this; she mentioned a paper on one-hop dandelion (or some other simplified version), but never saw anything of it
< sdaftuar>
on some timer
< sdaftuar>
sipa: yes i recall discussing the same with her. i was hoping we'd get an analysis :)
< aj>
there's no actual value to spamming dandelion, it's just a DoS, right? (spamming the mempool by comparison lets you use it as distributed storage or a broadcast medium)
< sdaftuar>
yeah free relay (use network bandwidth for free)
< sipa>
and possibly it may reduce the privacy of those legitimately trying to use dandelion
< sipa>
not sure if you would call that "value"
< sdaftuar>
sipa: right, depending on what we do in response
< aj>
but stemming doesn't let you choose who it gets relayed to?
< sdaftuar>
it's hard to discuss without a specific proposal but i think if there's a scenario where (say) exceeding a rate limit causes everything to be fluffed, we probably should assume that an attacker will find a way to exceed the rate limit whenever they care to deanonymize other transactions
< aj>
i guess if you control 2% of the network, you've got 20% chance of your msg hitting a node you control, and that's only 220 nodes
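aj's estimate can be checked with the same stem model: if an attacker controls a fraction f of candidate hops and stems run ~10 hops, the chance at least one hop is attacker-controlled is 1 - (1 - f)^10, assuming independent uniform hop selection (`p_attacker_on_stem` is a made-up helper name):

```python
def p_attacker_on_stem(f, hops=10):
    # chance that at least one of `hops` independent uniform
    # hop choices lands on an attacker-controlled node
    return 1 - (1 - f) ** hops
```

for f = 0.02 this gives roughly 0.18, close to the 20% figure above.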
< sdaftuar>
i think morcos' point to me was that we still get *some* benefit in that scenario, but i figure we might as well just code up and promise what we can actually deliver -- if it's just one-hop dandelion, that's pretty simple too
< sipa>
(very much brainstorming) if the issues with dandelion are purely due to its funneling effect... would it make sense to have an alternative that doesn't have that (e.g. define a random routing network between all your peers, but keep it bijective)?
< aj>
funneling effect?
< sdaftuar>
aj: at any given moment, a small fraction of nodes will fluff most of the transactions
< sipa>
funneling = the fact that it maps many input peers to few output peers
< sipa>
which is what i think interferes with setting rate limits, because whatever rate limit you are willing to accept on your inputs will be lower than what you may be producing as output
< sipa>
even under honest conditions
< aj>
is that true after the first hop? the only nodes receiving stems are reachable nodes, but for a reachable node, the number of incoming connections from reachable nodes on average should be less than the number of outgoing connections?
< aj>
mmm
< aj>
seems like the best idea is to resurrect the sims, see how one hop dandelion performs, and then re-sim more complicated things?
< sdaftuar>
aj: the reason dandelion achieves privacy is because we assume that fluffing is the observable behavior (and the stem phase is mostly unobservable), and so the only way we can get privacy is by having a small fraction of nodes fluff more than their fair share of transactions (otherwise, an attacker could try to learn the mapping of origin node to fluff node, roughly)
< aj>
sdaftuar: hmm, i thought the argument (at least for n-hop dandelion) was that the mapping between source node and fluff node was unpredictable and changed regularly
< sdaftuar>
i think if there was just a bijection of input node to output node, that an attacker could just learn the network by connecting to all the listening nodes and relaying one transaction and seeing where it came out?
< sdaftuar>
you'd have to repeat it as peers cycled, but seems like not too hard to learn
< sdaftuar>
i think multiple input to one output is how you make observed transactions indistinguishable (as far as source goes)
< bitcoin-git>
[bitcoin] practicalswift opened pull request #21169: fuzz: Add RPC interface fuzzing. Increase fuzzing coverage from 65% to 70%. (master...fuzzing-rpc) https://github.com/bitcoin/bitcoin/pull/21169
< bitcoin-git>
[bitcoin] martinus opened pull request #21170: bench: Add benchmark to write JSON into a string (master...2021-02-benchmark-BlockToJsonVerboseWrite) https://github.com/bitcoin/bitcoin/pull/21170
< michaelfolkson>
I'm not 100 percent clear on when you should open an issue in the Core repo if you get an error you are not expecting so posting here. Maybe this should be in #bitcoin even
< michaelfolkson>
Regardless I upgraded one of my 0.19 nodes (Ubuntu) to 0.21 and got this error message
< michaelfolkson>
"2021-02-13T11:42:47Z ERROR: DeserializeFileDB: Failed to open file /home/michael/.bitcoin/anchors.dat"
< michaelfolkson>
Is that expected?
< michaelfolkson>
anchors.dat was only introduced in 0.21 right? But why should it be an error message? It is expected behavior if upgrading
< michaelfolkson>
(not to have an anchors.dat file)
< michaelfolkson>
I guess not having the file is a reasonable motivation for an error message
< michaelfolkson>
I'm just misunderstanding when it should be labeled ERROR and when it should be "nothing to worry about" logging
< jonatack>
michaelfolkson: that "Failed to open file" error message if no anchors.dat is found seems to be expected, if a bit surprising after an upgrade, see addrdb.cpp::DeserializeFileDB() and streams.h::CAutoFile::IsNull(), and git grep -ni "failed to open" shows that this seems to be the standard error message on file.IsNull()
< jonatack>
that said, idk if there is a bool flag or function for "we just upgraded" that could be checked...
< jonatack>
(if the version of bitcoind is persisted somewhere)
< jonatack>
i'm not aware of one
< jonatack>
could it be added to settings.json?
< michaelfolkson>
Instinctively when I see ERROR I think "Uh oh something is wrong" rather than "Don't worry this file isn't yet created but we can create it for you"
< michaelfolkson>
Presumably if you installed 0.21 from scratch (ie not upgrading from 0.19 to 0.21) you would also get that error message
< jonatack>
sure. my guess is the added complexity required to handle that more-or-less-one-off case wasn't considered to be worth it, if it was considered
< bitcoin-git>
[bitcoin] jonatack opened pull request #21171: Save client version to the settings file on shutdown (master...persist-version-on-shutdown) https://github.com/bitcoin/bitcoin/pull/21171
< jonatack>
michaelfolkson: ^
< michaelfolkson>
Cool, I think this could be useful though others may need to be convinced on what exactly it will be used for
< michaelfolkson>
Potentially it is a first step to ironing out some kinks in the upgrading process?
< michaelfolkson>
Recognizing when an upgrade is happening and providing better logging around it to the user?
< jonatack>
Yes, anything that could benefit from knowing if the client version changed since last shutdown.
< michaelfolkson>
Nice, thanks for thinking about it and opening a PR on it. Wasn't expecting that :)
< jonatack>
:)
< luke-jr>
I'm not sure we want to behave the same if some user sets -version=foo?
< luke-jr>
maybe calling it "lastrunversion" or something less accident-prone
< jonatack>
luke-jr: yes, hesitated to call the field "last_run_version" and went for the simpler "version", pending feedback (I'm confused by what you mean with "user sets -version=foo")
< luke-jr>
jonatack: anything in settings.json doubles as a bitcoin.conf option at least, on?
< luke-jr>
no?
< jonatack_>
gleb: in case it's helpful, after your latest push the erlay PR now builds for me cleanly and the functional tests run without --enable-debug, but I am still seeing the many lock order errors I described in the PR comments after building with --enable-debug
< jonatack_>
luke-jr: checking
< jonatack_>
luke-jr: yes, seems so. hm, -version isn't a conf option
< jonatack_>
afaik, but point taken to maybe use a more specific field name
< jonatack_>
luke-jr: thanks
< luke-jr>
np
< prayank>
only "labelled" addresses belong to address book in bitcoin core wallet?
< luke-jr>
prayank: not anymore
< luke-jr>
IIRC
< prayank>
Interesting. I have a few questions.
< luke-jr>
prayank: define "labelled" ☺
< luke-jr>
in particular, pay attention to the distinctions in c5966a87d1f
< prayank>
2. According to comments in CWallet::IsChange, payment to a script that is ours, but is not in the address book, is a "change address". Is this correct or is it more complex?
< luke-jr>
prayank: it can be in the address book now
< luke-jr>
just not with a label
< prayank>
luke-jr: What is address book in bitcoin core wallet? What makes an address belong to address book?