< jonasschnelli>
gmaxwell: yes. rekeying is done after a fixed amount of traffic in bytes. But re-hashing the secret would not change anything if the ECDH of logged traffic can be broken?
< gmaxwell>
jonasschnelli: right, we should also perhaps consider rekeying once an hour. What rekeying accomplishes, assuming the old key is deleted, is that if a system is compromised you can't extract the keys from memory to decrypt traffic you logged before compromise.
< gmaxwell>
The reason I suggest triggering on time is that if you have e.g. an SPV client, it might be days until it has transferred 1GB of traffic, which might make it interesting to try to seize other nodes a target under observation was connected to in order to decrypt their traffic. Admittedly a really fringe risk, but it should be ~free to avoid.
< gmaxwell>
(basically a similar motivation to why we don't log IPs by default)
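A minimal sketch of the trigger being discussed, rekeying after a fixed byte count or one hour, whichever comes first; the struct, names, and limits here are hypothetical placeholders, not Bitcoin Core's actual code:

    #include <chrono>
    #include <cstdint>

    // Hypothetical session state, for illustration only.
    struct EncryptedSession {
        uint64_t bytes_since_rekey{0};
        std::chrono::steady_clock::time_point last_rekey{std::chrono::steady_clock::now()};

        static constexpr uint64_t REKEY_BYTES{1ull << 30};      // placeholder traffic limit
        static constexpr std::chrono::hours REKEY_INTERVAL{1};  // "once an hour"

        // On rekey, the old key must also be securely erased so that a later
        // memory compromise cannot decrypt previously logged traffic.
        bool NeedsRekey() const {
            return bytes_since_rekey >= REKEY_BYTES ||
                   std::chrono::steady_clock::now() - last_rekey >= REKEY_INTERVAL;
        }
    };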
< sipa>
the only reason not to rekey on every message is performance, right?
< gmaxwell>
Right.
< gmaxwell>
sha2 is slower than chacha.. :)
< gmaxwell>
interestingly, I'm not aware of any well-known cipher mode which natively has irreversible state.
< sipa>
chacha takes a 256-bit key, and produces blobs of 512 bits of output
< sipa>
why not say encrypt every message with the current encryption key, and then afterwards extract another 256 bits from the stream, which become the new encryption key?
< sipa>
chacha has 0 initialization cost
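For concreteness, a sketch of the construction sipa is describing: encrypt the message with the current key, then draw 256 more bits of keystream and make them the next key. The chacha20_keystream signature is a hypothetical stand-in, not any particular library's API, and as gmaxwell notes below this is not a well-studied mode:

    #include <cstddef>
    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Assumed primitive: fill `out` with `len` keystream bytes for (key, nonce).
    void chacha20_keystream(const unsigned char key[32], uint64_t nonce,
                            unsigned char* out, size_t len);

    void encrypt_and_rekey(unsigned char key[32], uint64_t nonce,
                           unsigned char* msg, size_t msg_len)
    {
        std::vector<unsigned char> stream(msg_len + 32);
        chacha20_keystream(key, nonce, stream.data(), stream.size());
        for (size_t i = 0; i < msg_len; ++i) msg[i] ^= stream[i]; // encrypt in place
        std::memcpy(key, stream.data() + msg_len, 32);            // next 256 bits -> new key
        // A real implementation would use a secure memzero that cannot be
        // optimized away; the point is that the old key is unrecoverable.
        std::memset(stream.data(), 0, stream.size());
    }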
< gmaxwell>
Because that's not a well-studied construct. It would also be 50% of the speed of using it normally.
< sipa>
why would it be slower?
< sipa>
ah, if you assume the messages are small i guess
< gmaxwell>
ah I thought you meant per 512 bits of output rather than per protocol message.
< gmaxwell>
we could also email djb to ask; it might well be that someone has published on a mode that does this. Though I think elsewhere, where this concern was addressed, it was always just addressed by rekeying at a higher level rather than at the block cipher level.
< gmaxwell>
Though as I was saying, I think it's kind of a fringe concern; if we want to do something complicated, I'd rather it be armoring against ECDH break than N-th level optimizations to how fast we forget keying material.
< gmaxwell>
(or even better, get the indistinguishable authentication protocol finished)
< sipa>
right; i'm mostly wondering why "use a prng-based stream cipher, and after each message, read the next encryption key from the stream" isn't a common construction
< gmaxwell>
Because almost everything has key init costs?
< gmaxwell>
also because the whole reason you normally use a stream cipher is random access.
< gmaxwell>
there have been some 'reuse resistant' quasi-stream-cipher proposals; perhaps some of those get irreversibility as a side effect. dunno.
< fanquake>
wumpus 13938 should be ok to go in
< fanquake>
Also 13808
< Varunram>
is the bot dead?
< jonasschnelli>
gmaxwell, sipa: the new protocol makes encryption optional, therefore the question arises whether detecting the key handshake versus a version message is sane
< jonasschnelli>
I guess it's acceptable to assume a version message (and not a key) when we detect the message magic and the rest of a legacy header
< jonasschnelli>
I guess it's almost impossible to derive a pubkey that starts with the network magic & version part of the header
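A sketch of that detection, assuming the connection starts either with a pubkey or with a legacy header; the function is hypothetical, though the mainnet magic bytes are real:

    #include <cstddef>
    #include <cstring>

    // If the first 16 bytes are the network magic plus the zero-padded
    // "version" command, treat the peer as legacy. A pubkey happening to start
    // with exactly these bytes is astronomically unlikely.
    bool looks_like_legacy_version(const unsigned char* buf, size_t len)
    {
        static const unsigned char MAINNET_MAGIC[4] = {0xf9, 0xbe, 0xb4, 0xd9};
        static const char VERSION_CMD[12] = "version"; // remaining bytes are zero
        if (len < 4 + 12) return false;
        return std::memcmp(buf, MAINNET_MAGIC, 4) == 0 &&
               std::memcmp(buf + 4, VERSION_CMD, 12) == 0;
    }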
< reald0ff1>
hi
< reald0ff1>
Can someone please provide me with download stats (or at least shares in %) of Bitcoin Core for the different platform versions (Win, Linux, OSX, etc.)?
< reald0ff1>
that would be very helpful for my master's thesis
< reald0ff1>
I would very much appreciate it if someone could help me with that question
< harding>
reald0ff1: I don't know if anyone has that information for BitcoinCore.org, sorry. In addition, the binaries can also be downloaded from Bitcoin.org (maintained by a different team) or via a torrent (with optional magnet URI) that contains the binaries for all platforms.
< reald0ff1>
well thanks for the answer. I think I will try to contact bitcoincore.org and bitcoin.org via email to the website maintainers. Maybe some of them could provide me with stats
< reald0ff1>
I'm developing a security tool for cryptocurrency users and I selected Windows as the target platform. I have the feeling that most users use Windows (I am not talking about devs, etc.)
< reald0ff1>
however, it would still be nice to have some stats to back up that "feeling"
< devmob>
hi, I'd really like to know how bitcoin does gossip, like how the gossip protocol is implemented
< jonasschnelli>
gmaxwell: about dolbeau's Chacha20 AVX/SSSE3 implementation: "Beware: those implementations are purely designed for speed on recent Intel architectures (mostly Haswell and newer), and ARMv8 (64 bits) with the crypto extension. They were not verified to be resistant to side channel attacks."
< jonasschnelli>
The latter would probably require further analysis, since timing side channel attack resistance seems to be one of the big benefits of chacha20 (I may be wrong though)
< MarcoFalke>
sipa: Wouldn't the stempool make every mempool action half as fast (since everything would have to be done once for the mempool and then again for the stempool)?
< MarcoFalke>
Also I am not sure about the memory overhead of having the mempool duplicated
< MarcoFalke>
The transactions are shared ptrs, but still...
< sipa>
MarcoFalke: well, dandelion needs some way of dealing with unconfirmed dependencies
< sipa>
MarcoFalke: the reference code the authors posted included a stempool, though i commented on memory usage concerns
< MarcoFalke>
I know that the BIP mentions a stempool
< MarcoFalke>
Agree that we need to handle dependencies
< sipa>
an alternative would be to have a 2-tier mempool, where each transaction has a flag whether it's public or not
< sipa>
and accepting a public tx ignores (and kicks out) any nonpublic conflicts
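A sketch of that two-tier idea with stand-in types (not txmempool.cpp's actual structures): conflicts are modeled as overlapping spent outpoints, and accepting a public tx evicts non-public conflicts:

    #include <algorithm>
    #include <cstdint>
    #include <set>
    #include <utility>
    #include <vector>

    using Outpoint = std::pair<uint64_t, uint32_t>; // (txid stand-in, vout)

    struct Entry {
        std::set<Outpoint> spends; // outpoints this tx spends
        bool is_public;            // false while still in the dandelion stem phase
    };

    // Accepting a public tx ignores, and kicks out, any non-public conflicts.
    void accept_public(std::vector<Entry>& pool, Entry incoming)
    {
        pool.erase(std::remove_if(pool.begin(), pool.end(), [&](const Entry& e) {
            if (e.is_public) return false; // public conflicts handled by normal policy
            for (const auto& op : e.spends)
                if (incoming.spends.count(op)) return true; // evict non-public conflict
            return false;
        }), pool.end());
        incoming.is_public = true;
        pool.push_back(std::move(incoming));
    }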
< MarcoFalke>
That sounds like every single line of txmempool.cpp would have to be amended with an if(public) else ...
< sipa>
i doubt that, tbh
< sipa>
most of it is just data structure maintenance which would be unaffected
< sipa>
but i don't think it'd be a trivial change either
< sipa>
the set of non-public transactions in general should be very small
< sipa>
as you expect every non-public tx to become a public one after some time (and the auto-fluff essentially guarantees that after some timeout)
< sipa>
so perhaps the "stempool excluding mempool" can be small and have lower consistency requirements
< sipa>
like, we run ATMP to accept things into it, but don't require that it is at all times consistent with the actual mempool
< sipa>
as things expire quickly from the extra set, it can have a tight memory limit and not much avenue for dos
< MarcoFalke>
sipa: The dos protection should happen per edge (peer) and not on the global stempool, no?
< MarcoFalke>
The stempool limit would only be a fallback limit
< MarcoFalke>
We wouldn't want one peer to use up all the stempool capacity
< sipa>
right
< MarcoFalke>
Also, I am certain that we leak information by using the global (shared among all peers) stempool
< sipa>
so perhaps it can even be a per-peer small set of unconfirmed dandelion txn, which you use to do dependency checks for dandelion txn coming from that peer
< sipa>
which has much clearer privacy and dos reasoning
< MarcoFalke>
You'd forward but then later discard dandelion txs
< sipa>
well the combined set of those extra txn is your set of to-fluff things
< MarcoFalke>
So if an attacker sends the same dandelion tx twice, with an rbf one on another route, they can guess part of the route
< sipa>
how so?
< MarcoFalke>
(talking about the shared stempool) Not the per-peer set of txs
< sipa>
i like the per-peer set :)
< sipa>
i think you're right that there is risk in a global stempool
< sipa>
the per-peer set sounds like it wouldn't need much more than a way to pass in an extra map with txn to ATMP
< MarcoFalke>
That would have a compute overhead
< sipa>
hardly, i think
< MarcoFalke>
(re-calculating the set of dependencies for all txs)
< MarcoFalke>
just to check one tx
< sipa>
no no
< MarcoFalke>
why?
< sipa>
just something that feeds into the logic that looks up the utxos being spent
< sipa>
"if not found in mempool or chainstate, also look here'
< sipa>
but you don't do complete conflict analysis or replacement or whatever in those extra sets
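A sketch of that lookup order with stand-in types (not CCoinsView): resolve each input against chainstate, then mempool outputs, then the extra set, and deliberately skip the "already spent by another mempool txn" check:

    #include <cstdint>
    #include <map>
    #include <optional>
    #include <utility>

    using Outpoint = std::pair<uint64_t, uint32_t>; // (txid stand-in, vout)
    struct Coin { int64_t value; };
    using CoinMap = std::map<Outpoint, Coin>;

    std::optional<Coin> lookup_input(const Outpoint& op, const CoinMap& chainstate,
                                     const CoinMap& mempool_outs, const CoinMap& extra_outs)
    {
        if (auto it = chainstate.find(op); it != chainstate.end()) return it->second;
        if (auto it = mempool_outs.find(op); it != mempool_outs.end()) return it->second;
        if (auto it = extra_outs.find(op); it != extra_outs.end()) return it->second;
        return std::nullopt; // unknown input: reject (or treat as orphan)
    }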
< MarcoFalke>
So you could send a tx that spends an output and also the output that was used to create that output (assuming 1in-1out txs for now)?
< sipa>
right
< sipa>
perhaps you could even permit double spends inside the extra set
< MarcoFalke>
So a peer could drain your allowance
< MarcoFalke>
for free
< sipa>
what allowance?
< MarcoFalke>
"allowance" = txs your dandelion destinations accept
< MarcoFalke>
num tx/minute or whatever
< sipa>
i'm confused
< MarcoFalke>
I think we just concluded that the "cheap check" (pass in set of previous txs) can lead to thinking an invalid tx is valid
< MarcoFalke>
so we'd forward invalid txs
< sipa>
right
< sipa>
well, not invalid
< sipa>
but conflicting, yes
< MarcoFalke>
They'd never be accepted to a real mempool
< MarcoFalke>
never, as in consensus-invalid
< sipa>
that depends on the order those extra txn get added to people's mempool
< sipa>
no, you do full consensus validation
< MarcoFalke>
So you need to calculate all mempool dependencies and stuff
< sipa>
how so?
< sipa>
validity is just a) can we find the inputs b) are those inputs not yet spent by another mempool txn c) do scripts validate
< sipa>
i suggest skipping just (b) for dandelion relay
< MarcoFalke>
Though, for a) you use the set of {mempool inputs} OR {prev dandelion txs inputs}
< sipa>
right
< MarcoFalke>
so if a dandelion tx spends a mempool tx...
< sipa>
but whenever the mempool changes, you don't update the extra sets, so they can grow inconsistent with eachother
< sipa>
but i don't think that's a problem; you'll notice when trying to fluff
< MarcoFalke>
hmm, give me a sec
< MarcoFalke>
How do I draw a picture in irc?
< sipa>
haha
< sipa>
we should discuss this on the ML though
< MarcoFalke>
Assume mempool has one output: A. Assume dandelion tx spends this input A and creates output B. We send this dandelion tx. Assume another dandelion tx spends {A,B} and creates output C, which is valid, since we use the set of outputs in the mempool and previous dandelion txs, but the tx itself is consensus invalid. Send this tx. Repeat with {A,C}->D, {A,D}->E ... for free
< sipa>
i see your point.
< MarcoFalke>
I hope you prove me wrong, because I also like the per peer set
< sipa>
is there some rate limiting on dandelion txn per peer?
< MarcoFalke>
In my implementation, yes
< sipa>
is there in the BIP? (i haven't read the latest draft)
< MarcoFalke>
not explicitly mentioned
< MarcoFalke>
Maybe there is in the appendix (reference implementation), haven't looked too closely at that, though
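For illustration, one way per-peer rate limiting could look, a plain token bucket; the numbers are placeholders, and this is not what the BIP or any particular implementation specifies:

    #include <algorithm>
    #include <chrono>

    struct StemRateLimiter {
        double tokens{10.0};        // assumed burst allowance
        double refill_per_sec{0.1}; // assumed steady-state rate
        std::chrono::steady_clock::time_point last{std::chrono::steady_clock::now()};

        // Returns whether one more dandelion tx from this peer is accepted.
        bool Allow()
        {
            const auto now = std::chrono::steady_clock::now();
            tokens = std::min(10.0, tokens + refill_per_sec *
                     std::chrono::duration<double>(now - last).count());
            last = now;
            if (tokens < 1.0) return false; // peer exceeded its allowance
            tokens -= 1.0;
            return true;
        }
    };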
< sipa>
if you disallow replacement of dandelion txn, it becomes a lot easier
< MarcoFalke>
yeah, but we don't want to kill rbf for dandelion txs
< sipa>
and perhaps that's not crazy; you can replace, but first need to wait until the dandelion relay has settled into the mempool
< MarcoFalke>
I'd rather enforce rbf
< MarcoFalke>
(which is what my cache is effectively doing, I think)
< sipa>
but you don't support dependencies between dandelion txn, or do you?
< MarcoFalke>
nope
< MarcoFalke>
You'd have to use rbf to "eat up" all dependencies
< sipa>
replacement generally seems to be something that happens on the scale of hours, and certainly longer than interblock time
< sipa>
both in use cases and incentives
< sipa>
while dependent transactions can be on the scale of seconds
< sipa>
(blobs of interdependent txn)
< MarcoFalke>
What about the use case of "replacement to avoid a change output-round-trip"
< MarcoFalke>
i.e. avoid long chains of unconfirmed txs
< sipa>
if you're doing that on a scale of seconds to minutes you should probably just batch better
< MarcoFalke>
hmm, starting to like that idea
< gmaxwell>
jonasschnelli: It's somewhat implausible to me that someone managed to make a sidechannel-vulnerable chacha20 which was also fast. I'm happy to review them for it.
< gmaxwell>
sipa: a two-layer mempool sounds hard to not screw up and accidentally leak data.
< gmaxwell>
A per peer stempool (which of course shares the actual tx data itself across all peers) makes sense to me.
< gmaxwell>
but it requires you to augment the protocol to route the dependencies along the same path as the parent.
< gmaxwell>
which might have privacy implications... I think none of the research on dandelion so far really considered chains of unconfirmed txn.
< gmaxwell>
(I'd _generally_ expect that routing children along the same path as parents would be privacy improving, but there may be factors like leaking out of the stem at different points that have bad effects like reducing the privacy of the whole chain to that of the weakest one)
< sipa>
gmaxwell: read on
< sipa>
ah, you already saw the per-peer idea
< gmaxwell>
I also agree that we don't need to care about stem transactions getting invalidated by mempool txn. But I think we do want to check them against each other. In particular I shouldn't be able to give you 100 distinct spends of the same coin and have you route them all out to the same peer. To send two of them to two different peers would be ducky.
< sipa>
gmaxwell: yeah, if you don't care about replacing txn while they are not in the mempool that sounds easy
< sipa>
it means you don't need the dependency tracking or replacement or whatever logic
< sipa>
just verify against the combined set of chainstate+mempool+ peer-specific set of unconfirmed dandelion txn
< gmaxwell>
right, but again, if dandelion parents are peer specific, we must endeavor to route children along the same path as parents, otherwise they'll propagate poorly.
< sipa>
dandelion already does that; it has a per-peer destination peer
< sipa>
so subsequent transactions will go to the same outgoing peer
< gmaxwell>
sipa: it has _two_.
< sipa>
unless there is a shuffle in between
< sipa>
only one per incoming peer
< sipa>
two globally
< gmaxwell>
oh right, okay.
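A sketch of that routing property with hypothetical names (not the reference implementation): two global destinations, each incoming peer pinned to one of them until the next shuffle, so chained txn from the same origin leave along the same path:

    #include <cstddef>
    #include <cstdint>
    #include <map>
    #include <vector>

    using NodeId = int64_t;

    struct DandelionRouter {
        std::vector<NodeId> destinations; // assumed non-empty; two peers, re-picked per epoch
        std::map<NodeId, NodeId> route;   // incoming peer -> pinned destination

        NodeId DestinationFor(NodeId incoming)
        {
            auto it = route.find(incoming);
            if (it == route.end()) {
                // Deterministic pick for brevity; a real implementation would
                // choose randomly and clear `route` on every shuffle.
                NodeId dest = destinations[static_cast<size_t>(incoming) % destinations.size()];
                it = route.emplace(incoming, dest).first;
            }
            return it->second;
        }
    };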
< MarcoFalke>
What is the use case for tx chains of dandelion txs?
< gmaxwell>
MarcoFalke: uh, being able to spend your funds without waiting for a block.
< sipa>
MarcoFalke: what is the use case for tx chains in general? :)
< sipa>
same answer
< MarcoFalke>
why a block? you can wait for a fluff
< sipa>
that's ~minute or so?
< gmaxwell>
if someone pays you 1 BTC, you spend 0.1 ... now your wallet interface needs to randomly _fail_ and tell you that you can't spend again, until a fluff has happened?
< sipa>
you're right, waiting for a block is not relevant here
< MarcoFalke>
yeah, I mean if we don't allow replacement of dandelion txs, we might as well not allow chains
< sipa>
MarcoFalke: i disagree
< MarcoFalke>
and ask people to batch if the time between spends is ~1 minute
< sipa>
there is a different timescale
< gmaxwell>
That would mean that we couldn't use dandelion as the standard way to announce transactions; if that were the decision, I'd say we shouldn't bother implementing it at all.
< sipa>
as i said before, i think it's reasonable if replacement only works on a timescale of minutes/hours
< gmaxwell>
Ideally people should batch, sure, but someone cannot guarantee that they won't need to make another payment 40 seconds after the last.
< sipa>
but dependencies need to work in seconds
< gmaxwell>
Why wouldn't we allow replacements?
< MarcoFalke>
Would be more expensive to check
< MarcoFalke>
potentially scales with the number of txs in this edge's cache (stem)
< gmaxwell>
you don't actually 'replace' the transaction, but you can relay a transaction that conflicts with the peer's stempool if it would otherwise pass the replacement criteria.
< sipa>
gmaxwell: i think that's an order of magnitude more complex to implement
< gmaxwell>
how so? you have a map of tx parents. It's just like the orphan pool.
< gmaxwell>
in any case, I don't see a fundamental reason to not allow replacement... it would probably be fine to skip it for now due to complexity.
< sipa>
gmaxwell: the rules for replacement are a complex piece of policy... that depends on relay fee, discard fee, mempool size, cyclic dependency checks, ...
< MarcoFalke>
^
< sipa>
all of those don't really have a direct translation to multiple layers of mempool
< gmaxwell>
so uh, how would we handle a dandelion txn which would be a replacement for something in the mempool?
< sipa>
we shouldn't?
< MarcoFalke>
That works
< gmaxwell>
Then I think it's busted.
< sipa>
heh?
< MarcoFalke>
Of course you can replace mempool txs with dandelion txs
< sipa>
oh, ugh.
< sipa>
of course that needs to work
< MarcoFalke>
I mean, maybe only once, but it works
< gmaxwell>
again: I think if we cannot make dandelion the standard way to announce txn, we should not deploy it. And if it kills replacement of long-ago-announced txn, then we can't do that.
< sipa>
right
< MarcoFalke>
agree
< sipa>
i don't think that's an issue though
< gmaxwell>
It's simple in any case: see if ATMP would accept, and if so it's eligible for stem relay if not conflicted in the peer's stem cache.
< sipa>
dandelion tx validation operates on the sum of mempool + extra txn
< sipa>
but it doesn't need to deal with replacements
< sipa>
just validation against that set
< gmaxwell>
I think we can also 'support replacement' by fluffing anything that passes ATMP but conflicts with our stem cache.
< sipa>
MarcoFalke gave an example above where that's busted
< MarcoFalke>
sipa: I said it works
< MarcoFalke>
[16:55] <MarcoFalke> That works
< sipa>
oh? what about your a/b, a/c, a/d example?
< MarcoFalke>
Well, that is what I meant by "I mean, maybe only once, but it works"
< gmaxwell>
I'm not following.
< MarcoFalke>
We fell back to the earlier discussion
< gmaxwell>
okay
< MarcoFalke>
[15:54] <MarcoFalke> Assume mempool has one output: A. Assume dandelion tx spends this input A and creates output B. We send this dandelion tx. Assume another dandelion tx spends {A,B} and creates output C, which is valid, since we use the set of outputs in the mempool and previous dandelion txs, but the tx itself is consensus invalid. Send this tx. Repeat with {A,C}->D, {A,D}->E ... for free
< sipa>
if ATMP needs to do complex replacement checks w.r.t things already in the extra set, it becomes hard
< sipa>
replacement checks against the mempool of the form "would this be accepted to the mempool" are easy
< gmaxwell>
the combination of replacement and chaining is cancer. :(
< MarcoFalke>
yup
< MarcoFalke>
So pick one
< sipa>
however, if replacement within the extra set is not allowed, it's easy enough - discard anything that conflicts with the extra set already
< gmaxwell>
Well we can support replacement for non-chained, and also support chaining.
< sipa>
otherwise, validate against the mempool with full policy check, getting utxos from the extra set as needed
< gmaxwell>
and for the kind of replacement we don't support, I think we could still queue the transaction and not propagate it but fluff it when it times out.
< sipa>
if accepted, put it in the extra set (which is limited in size, and automatically expires through auto fluffing)
< gmaxwell>
so at least chained replacements work; they just might have worse privacy/propagation.
< sipa>
and fluffing is just implemented as adding to the local mempool... which means that stuff that has been invalidated by intermediate mempool action just gets ignored
< gmaxwell>
so the criteria for going into the extra-set are "doesn't need a parent in the extraset and passes ATMP, OR it needs a parent in the extraset, doesn't conflict with the extra set, and with the parent it's consensus valid/standard"
< gmaxwell>
and if you get something that conflicts with the extraset, and doesn't pass ATMP, you throw it in the orphanmap. It'll get connected once the parents get fluffed.
< gmaxwell>
Then: replacement works, chaining works, and chaining+replacement turns into orphans which still work after the parents fluff.
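Pulling those rules together as a sketch; the types and predicates are hypothetical stand-ins (only ATMP, AcceptToMemoryPool, names a real Bitcoin Core concept):

    struct Tx;
    struct ExtraSet {
        bool HasParentOf(const Tx&) const;   // hypothetical: extra set holds a parent?
        bool ConflictsWith(const Tx&) const; // hypothetical: double-spends the extra set?
    };
    bool WouldPassATMP(const Tx&);                        // full policy check vs the mempool
    bool ValidOnTopOfParents(const Tx&, const ExtraSet&); // consensus-valid/standard with parents

    enum class StemDecision { Relay, Fluff, Orphan, Reject };

    StemDecision ClassifyStemTx(const Tx& tx, const ExtraSet& extra)
    {
        if (extra.ConflictsWith(tx)) {
            // A conflicting tx that would pass ATMP is fluffed (replacement of a
            // long-ago stem tx); one that would not is parked as an orphan and
            // reconnects once the parents fluff.
            return WouldPassATMP(tx) ? StemDecision::Fluff : StemDecision::Orphan;
        }
        if (!extra.HasParentOf(tx)) {
            return WouldPassATMP(tx) ? StemDecision::Relay : StemDecision::Reject;
        }
        // Chained case: a parent lives in the extra set.
        return ValidOnTopOfParents(tx, extra) ? StemDecision::Relay : StemDecision::Reject;
    }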
< gmaxwell>
I totally agree that wallets should be batching and whatnot, but consider: we don't even have a friendly way to do that... There is no doohickey in bitcoin core where you can queue a payment, have it draft it, but not send it, waiting for either more payments it can be batched with, timeout, or shutdown trigger.
< MarcoFalke>
So fluffing a chained dandelion tx also fluffs its parents? (even though one of the parents might still be "traveling" on a stem)
< gmaxwell>
that's why I was saying 'weakest in the chain' above. :(
< MarcoFalke>
Yeah, so the suggestion would be to avoid chaining, but support it
< sipa>
don't fluff things which have an unfluffed parent?
< MarcoFalke>
You'd be keeping them much longer in the cache/embargo on average, and thus using more space for chained txs than for unchained ones
< MarcoFalke>
A child times out, but you couldn't fluff it because the parent's timeout is in the future
< sipa>
i feel like there should perhaps be something where a dependency in the extra set results in the two txn being merged into a package
< sipa>
and then have the timeout for the package become a weighted average of the individual timeouts or so