< bitcoinvolumetra> hey guys have 305,000 coins to liquidate at a discount. satoshi test will be supplied to a wallet of your choice on the grounds the buyer can provide validation of funds. preferably mt199/799 supplied to the sellers bank by the buyers bank. any questions please pm me.
< sipa> bitcoinvolumetra: go away
<@sipa> bitcoinvolumetra: this is a development channel, not a marketplace. Also, if you were really selling 2 billion dollars worth of BTC, you wouldn't be randomly asking strangers on irc
< midnightmagic> he ain't got no time for your logic
< gmaxwell> Full Billionaire mode.
<@sipa> *double* billionaire mode
< luke-jr> XD
< luke-jr> sipa: maybe it is at such a large discount that it isn't $2B? :P
< * midnightmagic> makes head-exploding motions with hands but nothing happens
< ossifrage> Now that I'm not running out of file descriptors for sockets, I've been auditing my connections and was surprised how many IPs have multiple connections to my node
< gmaxwell> spys.. they'll probably stop doing that once 0.17 is widely deployed.
< gmaxwell> since it defeats what they're trying to accomplish.
< gmaxwell> (by connecting multiple times they effectively speed up how fast transactions are relayed to them, making it easier to determine origins)
< ossifrage> One IP, 111.6.90.203, has 6 connections; whois says it belongs to chinamobile
< ossifrage> I wonder if that is actually legit traffic from heavily NATed users?
< gmaxwell> could be, in some cases.. reasons like that are why we don't outright limit.
< wumpus> it's possible, though I'd expect the chance is very low that a bunch of NATed users would all connect to your node, of all things, at the same time
< wumpus> two of them, okay, but six?
< wumpus> especially if they connected around the same time, too, and have the same agent string, it's clear
< wumpus> did I mention I miss the github merge bot here already
< luke-jr> wumpus: DNS seeds and how light wallets use them can result in a lot of nodes connecting to a small set of listening nodes sometimes
< wumpus> luke-jr: that's a fair point, so indeed, I don't see it as impossible just unlikely
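A minimal sketch of the connection audit discussed above: group the peers reported by bitcoin-cli getpeerinfo by IP and flag addresses with several simultaneous connections, surfacing the agent strings and connect-time spread wumpus mentions. It assumes a running node with bitcoin-cli on PATH; the threshold of 3 is an arbitrary illustration.

    import json
    import subprocess
    from collections import defaultdict

    peers = json.loads(subprocess.check_output(["bitcoin-cli", "getpeerinfo"]))
    by_ip = defaultdict(list)
    for p in peers:
        ip = p["addr"].rsplit(":", 1)[0]  # strip the port
        by_ip[ip].append(p)

    for ip, conns in sorted(by_ip.items(), key=lambda kv: -len(kv[1])):
        if len(conns) >= 3:
            agents = {c["subver"] for c in conns}
            spread = max(c["conntime"] for c in conns) - min(c["conntime"] for c in conns)
            print("%s: %d connections, agents=%s, connect-time spread=%ds"
                  % (ip, len(conns), agents, spread))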
< gmaxwell> sipa: skip RW locks and go straight to URCU https://lwn.net/Articles/573424/ ? :P
< wumpus> so according to #12624 it's time to split off the 0.17 branch today
< gribble> https://github.com/bitcoin/bitcoin/issues/12624 | Release schedule for 0.17.0 · Issue #12624 · bitcoin/bitcoin · GitHub
< wumpus> let's see how far we get...
< kallewoof> wumpus: I think I've reviewed all the 0.17 PR's that I feel comfy reviewing
< kallewoof> is there anything else that needs doing besides review?
< wumpus> we'll probably want to put up the release notes in the wiki
< wumpus> so that people can work on the remaining items in #12391
< gribble> https://github.com/bitcoin/bitcoin/issues/12391 | TODO for release notes 0.17.0 · Issue #12391 · bitcoin/bitcoin · GitHub
< wumpus> I'm also working on doing a final translations update before the branch-off
< wumpus> but as for things holding back the branch, yes I think reviewing the ones tagged 0.17 is most important now
< * kallewoof> can't even find the wiki link from github, so going to leave that one for someone who's not clueless :P
< kallewoof> got it. I'll double check if anything was updated and re-review.
< kallewoof> even better is to actually try the code and not just stop at utACKing...
< kallewoof> Ohh... no wonder i couldn't find it
< kallewoof> Thanks
< fanquake> wumpus I think 13808 is ready. Also 13938, but I assume it must break something for 0.17, if it's been tagged with 0.18?
< kallewoof> Huh... https://pastebin.com/cC29k9Sz
< kallewoof> I tried [ createpsbt "[]" "[{\"$(./bitcoin-cli getnewaddress)\":0.01}]" ], which worked fine. But decodepsbt errors for the result.
< * kallewoof> should probably read up on how these commands work
< wumpus> I don't get why #13938 is labeled 0.18, but I don't really like it either so I'll leave it like this
< gribble> https://github.com/bitcoin/bitcoin/issues/13938 | refactoring: Cleanup StartRest() by DesWurstes · Pull Request #13938 · bitcoin/bitcoin · GitHub
< wumpus> really tired of these twiddle-the-code-a-bit-without-really-doing-anything PRs
< wumpus> agree 13808 is ready
< wumpus> thanks everyone for reviewing that one it was needed !
< fanquake> Plenty more refactoring type PRs to review heh
< fanquake> 13899 may be ready, but probably needs at least one tested ACK from someone using GCC
< wumpus> nothing wrong with refactoring, but man, 'this function cannot fail at the moment' so we have to propagate this through the entire call chain upwards!!! seems somewhat self-defeating
< wumpus> so I'd say we close that kind
< wumpus> redundant declarations meh
< fanquake> wumpus I foresee a lot of closed PRs in the coming hours
< wumpus> fanquake: haha if it was only up to me
< wumpus> no, removing redundant declarations is okay, but I don't see it as a focus point for 0.17
< fanquake> Seems like there is "some" agreement around #13666 ? Still tagged for 0.17.
< gribble> https://github.com/bitcoin/bitcoin/issues/13666 | Always create signatures with Low R values by achow101 · Pull Request #13666 · bitcoin/bitcoin · GitHub
< fanquake> I'm re-reading the thread, as I remember there were some concerns about it early on, but don't feel overly confident reviewing the changes there.
< wumpus> fanquake: I'm somewhat reluctant on that one
< wumpus> maybe someone can convince me why it's important to merge that last minute before 0.17
< wumpus> ah the signature size counting issue for feerate was resolved
< wumpus> and it has plenty of utACKs
< wumpus> so I think it should just go in
< wumpus> fanquake: reducing the entropy of the nonce was the biggest concern it seems
< fanquake> wumpus thanks, just merging/running tests etc
< wumpus> and making signing time double, but that seems unimportant and negligible in comparison to utxo selection and such when sending a transaction
< MarcoFalke> wumpus: I think https://github.com/bitcoin/bitcoin/pull/13905 can go in to 0.17 as doc bug fix (tiny one)
< MarcoFalke> manpages need to be regenerated anyway
< MarcoFalke> (after the version bump)
< MarcoFalke> lol, why do we have appveyor as ci check?
< fanquake> MarcoFalke appveyor?
< MarcoFalke> Oh well, only on this commit
< MarcoFalke> because it was also pushed to some other remote (with appveyor)
< fanquake> huh ok
< MarcoFalke> Going to merge #13905 as well.
< gribble> https://github.com/bitcoin/bitcoin/issues/13905 | docs: fixed bitcoin-cli -help output for help2man by hebasto · Pull Request #13905 · bitcoin/bitcoin · GitHub
< MarcoFalke> I think after that we are good to branch off
< MarcoFalke> (and backport any remaining bugfixes)
< MarcoFalke> later on
< wumpus> yay
< MarcoFalke> Oh wait, we should merge the loose release notes file into the master doc
< MarcoFalke> Otherwise they will be sitting around forever
< wumpus> I've already put the release notes in the wiki, probably want to do that there
< wumpus> there's no need to do this before the branch, after the branch I'm going to remove all release notes (including the loose ones) on the master branch
< fanquake> So backport 13917 and 13788 after branch off?
< wumpus> if there is anything we *know* that should be backported and is open now, I say we should merge it right now
< wumpus> backporting post-branch is for things we don't know about yet
< fanquake> One needs a rebase, the -disable-asm changes I'm not sure about.
< MarcoFalke> Yeah, needs a rebase and re-ACK. That might take some days
< wumpus> yes the latter seems to be controversial
< wumpus> ok agreed, probably makes no sense to hold up rc1 for a few days for that
< wumpus> might even postpone it to 0.17.1
< fanquake> Sounds like it's time to branch then
< wumpus> might want to do a hardcoded seed update first
< wumpus> BLOCK_CHAIN_SIZE was recently adapted afaik
< wumpus> the chainparams stats too
< fanquake> Yes, to 220 I think.
< wumpus> last hardcoded seeds update was in January, so I'm going to do that
< MarcoFalke> sounds good
< wumpus> any versions that should be removed from PATTERN_AGENT while building the list? (it has 0.13.0 and up after adding 0.16.[012])
< fanquake> I guess 0.13 has reached its "end of life"
< fanquake> Or did, at the start of the month https://bitcoincore.org/en/lifecycle/
< fanquake> Does that warrant removal?
< wumpus> it did, is that enough reason, or does it need any specific reason to exclude it based on compatibility?
< wumpus> I don't think it matters much, there shouldn't be many 0.13.x nodes around anymore
< fanquake> I guess it's just 0.13.1+ that have segwit support
< wumpus> I'll just remove it
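For context, the seed list is built by contrib/seeds/makeseeds.py, which keeps only nodes whose user agent matches a PATTERN_AGENT regex. A minimal sketch of the change being discussed; the exact pattern here is illustrative, not the committed one:

    import re

    # Accept 0.14.x through 0.16.x agents; 0.13.x dropped as end-of-life.
    PATTERN_AGENT = re.compile(r"^/Satoshi:0\.(14|15|16)\.(\d+)")

    for agent in ["/Satoshi:0.16.2/", "/Satoshi:0.15.1/", "/Satoshi:0.13.2/"]:
        print(agent, "keep" if PATTERN_AGENT.match(agent) else "drop")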
< wumpus> #13951
< gribble> https://github.com/bitcoin/bitcoin/issues/13951 | Hardcoded seeds update pre-0.17 branch by laanwj · Pull Request #13951 · bitcoin/bitcoin · GitHub
< wumpus> I'll wait for some acks on that
< MarcoFalke> The default for GetDesirableServiceFlags didn't change, so looks correct
< wumpus> so I think it's ready for branch?
< fanquake> It would seem that way
< fanquake> manpages? Or they can be done later on
< wumpus> that needs to be done after upping the versions
< MarcoFalke> lets do it!
< fanquake> on time!
< wumpus> yes this is fantastic, we didn't have to postpone it this time :)
< wumpus> versions bumped, if you want to do an update-manpages PR go ahead
< wumpus> (need a separate one for master and 0.17)
< TheHoliestRoger> thank you for your hard work all
< MarcoFalke> no pressing need to bump on master for now
< MarcoFalke> Probably enough to do it twice a year or so
< MarcoFalke> But the gitian-descriptor would need to be bumped on master?
< TheHoliestRoger> you got 3 gitian sigs yet?
< wumpus> MarcoFalke: yes, good point, the gitian descriptor needs to be bumped there to avoid overlapping caches
< wumpus> TheHoliestRoger: lol :)
< TheHoliestRoger> what you lolling at me for? :)
< TheHoliestRoger> the thank you was serious
< MarcoFalke> It's been like 6 minutes since the branch
< TheHoliestRoger> was waiting to merge 0.17 into my altcoin
< MarcoFalke> TheHoliestRoger: There is altcoin-dev on irc
< MarcoFalke> not here please
< TheHoliestRoger> eh?
< MarcoFalke> This is bitcoin-core-dev
< TheHoliestRoger> I know, that's why I just thanked you guys for your hard work...
< fanquake> I opened a PR for the descriptors
< fanquake> Can do the 0.17 manpages as well
< TheHoliestRoger> MarcoFalke: are we not supposed to talk in here or something?
< jonasschnelli> Can I safely assume that the first socket read is always > 32 bytes (connecting a new peer)?
< jonasschnelli> Or should the encryption handshake be splittable in socket reads (makes implementation a bit more complex)?
<@sipa> you cannot make such an assumption
< wumpus> you always need to handle smaller reads
< wumpus> it's possible in the worst case for it to get fragmented into 32 1-byte reads
< sipa> also, is that really a problem? there is already code for reading data into buffers; you shouldn't need to rewrite it
< wumpus> yes
< jonasschnelli> Yes. You're right... but since the message can also be a standard version message, the in-code handling is quite ugly
< jonasschnelli> But yeah... shouldn't be a problem
< jonasschnelli> I guess I'm getting lazy and should take a rest. :)
< wumpus> we can all use a rest :)
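A minimal sketch of the buffered reading sipa and wumpus describe: never assume one recv() returns a whole message; accumulate until enough bytes arrive, worst case one byte at a time. Illustrative Python only; Bitcoin Core's actual buffering lives in its C++ net code.

    import socket

    def read_exact(sock, n):
        """Read exactly n bytes, tolerating arbitrary fragmentation
        (worst case: n one-byte reads)."""
        buf = bytearray()
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed connection mid-message")
            buf += chunk
        return bytes(buf)

    # e.g. a 32-byte handshake blob must be read like this, not with one recv(32):
    # handshake = read_exact(conn, 32)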
< sipa> ooh, a branch
< instagibbs> !!!
< gribble> Error: "!!" is not a valid command.
< lightningbot> instagibbs: Error: "!" is not a valid command.
< sipa> MarcoFalke, gmaxwell: about the problem of having interdependent dandelion transactions... i think there is no issue
< sipa> thanks to orphan handling
< sipa> when a dandelion tx progresses to a normal tx (either because the random percentage chance triggered, or because the timeout passed), it is simply processed as if it were received at that time as a normal tx
< sipa> that means if a child fluffs before its parent, it will simply become an orphan (which is processed later when the parent fluffs, or is received from the network)
< sipa> if the parent fluffs first, no change in behaviour
< gmaxwell> well I think that isn't really quite right. Because in the protocol now, if someone hands you an orphan, that's also an implicit INV for the parents...
< sipa> well in this case there would just not be someone to send the inv to
< gmaxwell> I think it's silly to have parents with longer fluff time than children, any increase in privacy is imaginary, because you know a parent was created before the child, so once you see the child you have a maximum timestamp for the parent.
< sipa> this approach will cause parent and child to be effectively fluffed at the same time, in case the parent would be longer
< gmaxwell> yes, but perhaps we should do that explicitly rather than depending on the rather lossy orphan handling.
< sipa> the alternative is to reduce the parent's fluff time in case a child is present... but that sounds like a possibly observable bias
< sipa> ok, that's fair
< gmaxwell> but if that's an observable bias, so is sending the child first, since from a "knowledge about tx times" perspective, they're the same.
< sipa> right
< gmaxwell> The only difference is that sending the child first makes us rely on orphan processing, even worse because we'll ignore the getdata for the parent.
< sipa> but i think it's semantically the right (and simplest) thing to do: in case a child expires before its parent, delay it until the parent expires
< gmaxwell> I think thats right and fine.
< gmaxwell> basically for every txn compute an expiry time and take the max of its own and its maximum parent's.
< gmaxwell> and make the expire time processing obey the order by using depth as a tiebreaker or similar.
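A minimal sketch of that rule: a stem transaction's effective expiry is the max of its own timer and its parents', with ancestor depth as the tiebreaker so parents always fluff first. All names here are hypothetical; no such code exists in Bitcoin Core.

    def effective_expiry(txid, stem):
        """stem: dict txid -> (own_expiry_time, parent_txids); hypothetical."""
        own, parents = stem[txid]
        return max([own] + [effective_expiry(p, stem) for p in parents if p in stem])

    def stem_depth(txid, stem):
        _, parents = stem[txid]
        return 1 + max((stem_depth(p, stem) for p in parents if p in stem), default=0)

    def fluff_order(stem):
        # process parents before children when expiry times tie
        return sorted(stem, key=lambda t: (effective_expiry(t, stem), stem_depth(t, stem)))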
< sdaftuar> gmaxwell: sipa: in the case of receiving a dandelion child transaction prior to the (dandelion-)parent transaction having made it to the mempool -- it seems to me like it would be simpler to just delay (dandelion-)relay of the child until the parent is in the mempool
< sipa> sdaftuar: that wasn't what the discussion was about, though
< sdaftuar> the concern about breaking one's wallet (not being able to transact for some window) doesn't seem to apply, in that we need to solve that anyway
< sdaftuar> sipa: oh maybe i misunderstood your earlier discussion
< sipa> sdaftuar: it's about the case where the timer for a parent transaction goes off because that of a child (both of which are dandelion transactions, which were received in order)
< sipa> *before that
< sdaftuar> sorry. yes, i understood that's what you were just talking about
< sdaftuar> i thought greg said earlier that it was a non-starter to delay relay of a child transaction until the parent fluffed
< sdaftuar> (i think in the normal dandelion case)
< sdaftuar> greg said yesterday "if someone pays you 1 BTC, you spend 0.1... now your wallet interface needs to randomly fail and tell you that you can't spend again until a fluff has happened"
< sdaftuar> we already need to do something for our wallet because until the fluff has happened, the change output won't be in your own mempool
< sdaftuar> and hence won't be spendable
< sipa> ugh, yes - that's an extra complication
< sdaftuar> that something can be looking into our own wallet's set of stem transactions, for instance
< sdaftuar> but we can still slow down relay of children without totally breaking our wallet
< sdaftuar> (if we need to)
< sdaftuar> my first thought is that t
< sdaftuar> oops
< instagibbs> also comes into consideration with checking balance, yes?
< instagibbs> (with current code)
< gmaxwell> sdaftuar: I think we can slow the relay down, we just can't _drop_ the transaction.
< gmaxwell> which is what I thought was being previously proposed.
< sdaftuar> ah okay
< gmaxwell> though slowing it might also turn out to be not great, I don't think it's a non-starter.
< gmaxwell> e.g. you make a chain of 6 transactions... and you're waiting minutes for the last to be broadcast even though you made them all within seconds?
< sdaftuar> well i was thinking that delaying transactions whose parents aren't all available as either confirmed inputs or in the mempool might prevent some dos attacks
< sdaftuar> but now i am coming up with new dos attacks that don't even rely on that
< sdaftuar> say you have 1 mempool transaction with thousands of outputs
< sdaftuar> i send you 1000 child transactions, each spending a different output
< sdaftuar> only the first 25 or so will be accepted
< sdaftuar> due to chain limits
< sdaftuar> but how does a dandelion relayer avoid relaying them all? i think you would need a stempool
< sdaftuar> but then a global stempool seems like it might introduce information leakage about routes
< sdaftuar> while per-peer stempools seem like unacceptably awful overhead
< gmaxwell> well a "per peer stempool" is really needs to be per output peer, I believe. Also the transactions could be shared between them.
< sdaftuar> i think if you have any stempool sharing between peers you could start to infer whether a pair of target nodes are dandelion routing to each other
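A minimal sketch of the chain-limit check sdaftuar describes: before stem-relaying a child, count its unconfirmed ancestors in a (hypothetical) stempool and refuse once the default 25-ancestor limit would be exceeded. This is a design sketch only, not code from any PR.

    DEFAULT_ANCESTOR_LIMIT = 25

    def accept_for_stem_relay(parent_txids, stempool):
        """stempool: dict txid -> set of in-stempool parent txids (hypothetical)."""
        seen = set()
        stack = [p for p in parent_txids if p in stempool]
        while stack:
            t = stack.pop()
            if t in seen:
                continue
            seen.add(t)
            stack.extend(q for q in stempool[t] if q in stempool)
        # the new transaction itself counts toward the chain limit
        return len(seen) + 1 <= DEFAULT_ANCESTOR_LIMIT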
< nmnkgl> MarcoFalke: Could you point me to anything that would prevent bitcoin core nodes from having crazy everlasting loops in case there is a ring of whitelisted peers? In regular mode the loop will break at least because of using INVs. That may be a very unlikely case though.
< sdaftuar> nmnkgl: are you referring to a dandelion routing loop? that sounds unfortunate but seems like it's mitigated with a 10% chance of fluffing on every hop, no?
< gmaxwell> Hm? well dandelion as was proposed on the list has a stempool, so you'll never stem relay the same transaction twice.
< sdaftuar> yeah that too. i think marco's PR doesn't have one (yet?)
< nmnkgl> Yes, Dandelion. For non-whitelisted peer loops the solution is just "do nothing once I hear it second time, and eventually fluff". For whitelisted peer loops it would be *send transaction back and forth (statistically not more than 10 times I guess)*
< gmaxwell> why is whitelisted making a difference in your example?
< sipa> i think we should ignore the whitelist status for dandelion
< sipa> its scope shouldn't be extended further
< gmaxwell> We should change DEFAULT_WHITELISTFORCERELAY to off in any case. Armory said they don't need or want it anymore when we last asked IIRC.
< gmaxwell> it's really a weird and specialized thing.
< nmnkgl> gmaxwell: because in the implementation we will relay from whitelisted even if we hear it second time. That's not an issue in regular protocol cause we relay it through INV-GETDATA, and it will go out very fast
< gmaxwell> nmnkgl: only if WHITELISTFORCERELAY is on, which we should turn off because it's a disaster that screws up the usefulness of whitelisting in the first place.
< gmaxwell> the only reason I didn't rip that out eons ago was because there was a belief that some parties depended on it, but the only evidence we have for that has since (I think) been invalidated.
< jeremyrubin> Was discussing with wumpus the viability of timing attacks on PR 13666, he suggested I ask here.
< jeremyrubin> Essentially additional bits leak out if the signature takes longer to generate.
< sipa> jeremyrubin: you're right, but i believe it doesn't matter
< sipa> how much does it help you to know (e.g.) that something took 10 tries to generate?
< jeremyrubin> Was going to paste in your response
< jeremyrubin> 1 sec...
< sipa> ah
< jeremyrubin> sipa: "Yes, indeed. Information theoretically this is a leak. But the only way an attacker can find out whether a particular nonce is in that subset of 1/1024 is by doing 10 EC multiplications on the preceeding nonces... the exact operation we're trying to make expensive."
< sipa> ah, seems i commented on this before :)
< jeremyrubin> Yeah -- I somewhat agree, I can't think of a use for this information off the top of my head, especially since the nonces are uncorrelated because of the hash in their generation
< jeremyrubin> But as only an armchair cryptographer I don't feel comfortable asserting that this information is entirely useless
< sipa> especially since the hash also includes the message, even a precomputed table (assuming there was space to store it) wouldn't help you reduce the search space
< jeremyrubin> Correct.
< jeremyrubin> I guess here's an example: Let's say we have a signature we observed to take 2 signing periods. Now, in order to grind the key out, we can filter the keyspace based on guessing keys for that specific message and aborting if the deterministic nonce takes only 1 try to generate.
< sipa> right, but that test requires an EC multiplication
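For reference, a toy version of the grinding in #13666 and the timing channel under discussion: candidate nonces are derived deterministically and retried until R's x coordinate has its top bit clear (a 71-byte DER signature). The attempt count follows a Geometric(1/2) distribution, so taking 10 tries happens for only ~1/1024 of (key, msg) pairs, and checking any one key guess costs the attacker the same EC multiplications. The nonce hash here is a stand-in; real signing uses RFC6979 via libsecp256k1.

    import hashlib

    # secp256k1 parameters
    P = 2**256 - 2**32 - 977
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
    G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
         0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

    def ec_add(a, b):
        if a is None: return b
        if b is None: return a
        if a[0] == b[0] and (a[1] + b[1]) % P == 0:
            return None  # opposite points: point at infinity
        if a == b:
            lam = 3 * a[0] * a[0] * pow(2 * a[1], -1, P) % P
        else:
            lam = (b[1] - a[1]) * pow(b[0] - a[0], -1, P) % P
        x = (lam * lam - a[0] - b[0]) % P
        return (x, (lam * (a[0] - x) - a[1]) % P)

    def ec_mul(k, pt):
        out = None
        while k:
            if k & 1:
                out = ec_add(out, pt)
            pt = ec_add(pt, pt)
            k >>= 1
        return out

    def grind_low_r(privkey, msg):
        """Retry nonces until R is 'low' (top bit of R.x clear)."""
        for ctr in range(256):
            k = int.from_bytes(hashlib.sha256(
                privkey.to_bytes(32, "big") + msg + bytes([ctr])
            ).digest(), "big") % N
            rx = ec_mul(k, G)[0]
            if rx < 2**255:  # low R: r needs no DER padding byte -> 71-byte sig
                return ctr + 1
        raise RuntimeError("practically unreachable")

    tries = grind_low_r(0x1234, b"message")
    print("low R found after", tries, "attempt(s)")  # signing time leaks this count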
< jeremyrubin> So here's my question
< jeremyrubin> Let's say I have another test I've observed to be 2 signatures
< jeremyrubin> that is k1 and k2 such that serializesize(k{1,2} x G) = 71
< jeremyrubin> err
< jeremyrubin> ignore 14:51
< jeremyrubin> Another signature I've observed to be 2 signing periods
< jeremyrubin> that bit of information should be independent, right?
< jeremyrubin> It still requires an ec mul to test directly
< sipa> right
< sipa> i believe the best this can do is reduce the keyspace by N by doing N EC multiplications, with different messages
< jeremyrubin> but is there any correlation such that if I know sersize(k1 x G) = 71 and sersize(k2 x G) =71
< sipa> no, they're independent outputs of a hash function with different inputs
< jeremyrubin> it would allow me to more efficiently test (k1 + k2) x G = 71
< sipa> if there is such a correlation, sha256 is broken
< jeremyrubin> I guess I'm asking about ec addition
< jeremyrubin> like if k1 G is sersize 71
< jeremyrubin> and k2 G is sersize 71
< jeremyrubin> do we know anything about k1+k2?
< sipa> ah; i believe there is no such test, but no proof that there isn't
< jeremyrubin> (nothing to do with sha256)
< jeremyrubin> If such a test did exist
< jeremyrubin> Then you could use this timing attack
< sipa> i think we may reasonably weaken the statement that it would require an attacker to perform 2^256 sha256 hashes to meaningfully reduce the keyspace
< sipa> which is far slower than a direct 2^128 EC DLP attack
< sipa> but this is a good point
< jeremyrubin> Ah also
< jeremyrubin> I have the bit that k1_0 is 72 sersize
< jeremyrubin> and k2_0 is 72 sersize
< jeremyrubin> I think that's the bit that gets leaked that's more interesting
< sipa> perhaps - but it also requires hashes to discover
< jeremyrubin> Yeah
< jeremyrubin> Anyways, it's far from a practical attack... I just would prefer that we draft exactly what the attack is and what its impact could be before releasing 13666
< sipa> yes, a writeup of the concerns and reasoning would be useful
< jeremyrubin> I'm interested to work on it, but it's a bit beyond my depth as a cryptographer
< jeremyrubin> Incidentally, the timing attack can be addressed by generating 256 signatures always, partitioning on serialized size, and then picking a random one from the small size partition.
< jeremyrubin> 256x signing time might be not worth it, but I think this should basically work ;) (and if none are 71 bytes in 256, then re-do -- I think we're good leaking bits 1/2^256 times)
< sipa> that too results in a bias
< jeremyrubin> What's the bias there?
< sipa> (an equally harmless one, imho, though)
< sipa> lower probability for R values that are the outcome of key/msg that have higher than average number of low-R results
< sipa> (in their set of 256)
< jeremyrubin> How is this leaked? we reveal 1 at random out of the low-R set?
< jeremyrubin> Ah can't be random, right
< sipa> yes, but it's biased
< sipa> again, i don't think this would be a concern
< jeremyrubin> so would have to be with some deterministic sort, pick 1
< sipa> but if leaks of any kind are a concern, this is one too
< sipa> nope; still biased
< jeremyrubin> (yep, just making the correction that made my algo non-det)
< jeremyrubin> Trying to understand the bias
< sipa> a correct solution would be to introduce a small amount of true randomness in the hash :)
< sipa> as that's unobservable to an attacker
< jeremyrubin> then no longer deterministic ;)
< sipa> precisely.
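A minimal sketch of that fix, continuing the toy signer above: mix fresh random bytes into the nonce derivation, so the grind count is no longer a deterministic, attacker-recomputable function of (key, msg). libsecp256k1 exposes this idea as the extra-entropy input of its RFC6979 nonce function.

    import os, hashlib

    def nonce_candidate(privkey, msg, ctr, rand32):
        # rand32 is drawn once per signature, e.g. rand32 = os.urandom(32)
        return int.from_bytes(hashlib.sha256(
            privkey.to_bytes(32, "big") + msg + rand32 + bytes([ctr])
        ).digest(), "big")

    # then grind exactly as before, but pass the same rand32 to every attempt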
< jeremyrubin> I still don't quite get the bias being leaked.... it's just the original 1 bit of low-R/high-R?
< sipa> but some R values have higher probability than others
< jeremyrubin> Interesting... didn't know that
< jeremyrubin> I thought it's supposed to be uniform
< sipa> the message is public
< sipa> so we assume m is fixed
< sipa> your algorithm is to take the secret private key, and feed it through 256 different hash functions (essentially; in practice they also take the msg as input, but we treat that as fixed part of the hash functions)
< sipa> agree with me so far?
< jeremyrubin> yes
< sipa> then map those 256 hash outputs to points, and look at their lowrness (i love that word)
< jeremyrubin> haha, yep
< sipa> and then pick randomly one of the lowr ones (which will approximately be 128, but could be more or less)
< jeremyrubin> * deterministically
< sipa> oh, deterministically!
< sipa> then it's even easier
< sipa> your entire construct is a deterministic function from private key to point
< sipa> which you can test for by iterating
< jeremyrubin> No? The message is included.
< sipa> the message is known to the attack, which we can treat as part of the definition of the hash function
< jeremyrubin> fair
< sipa> *attacker
< jeremyrubin> So how is a bit leaked?
< jeremyrubin> (or a < bit)
< sipa> the entire private key is leaked :)
< sipa> information theoretically
< jeremyrubin> really? Now I'm lost..
< gmaxwell> jeremyrubin: "grind against the key" is an uninteresting attack. oh I was just about to point out what sipa did, that if you assume a computationally unbounded attacker every signature leaks the key.
< gmaxwell> (or just publishing the pubkey leaks the key)
< sipa> jeremyrubin: information theoretically the attacker can try every private key, and look at which R point comes out of each, and compare it with the R you produced
< sipa> jeremyrubin: that leaks the entire private key :)
< sipa> (this is a problem that every deterministic algorithm has, and it's not interesting because we don't care about information theoretical security)
< gmaxwell> also, as an aside, your conjectural 'weakness' goes away if we use the extra random input.
< * jeremyrubin> sighs
< gmaxwell> Which I suppose we should just do regardless, unless configured otherwise.
< sipa> if you care about _computational_ security, a bias only matters if it lets you meaningully speed up the attack
< jeremyrubin> I'm just talking about the timing attack
< sipa> so am i
< gmaxwell> jeremyrubin: also RFC6979 _inherently_ has this kind of timing attack because of the restriction into the scalar range.
< jeremyrubin> Yes, the above potential attack about correlation in serializable size under addition
< gmaxwell> If the HMAC_DRBG result is larger than N it tries again.
< jeremyrubin> Interesting.
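A simplified sketch of that RFC6979 detail: the HMAC_DRBG output is rejected and regenerated whenever it falls outside [1, n-1], so even plain RFC6979 has a data-dependent retry loop (vanishingly rare for secp256k1, whose order is close to 2^256). The bits2octets padding rules of RFC 6979 §3.2 are omitted here.

    import hmac, hashlib

    def rfc6979_nonce(privkey32, msghash32, n):
        V = b"\x01" * 32
        K = b"\x00" * 32
        K = hmac.new(K, V + b"\x00" + privkey32 + msghash32, hashlib.sha256).digest()
        V = hmac.new(K, V, hashlib.sha256).digest()
        K = hmac.new(K, V + b"\x01" + privkey32 + msghash32, hashlib.sha256).digest()
        V = hmac.new(K, V, hashlib.sha256).digest()
        while True:
            V = hmac.new(K, V, hashlib.sha256).digest()
            k = int.from_bytes(V, "big")
            if 1 <= k < n:
                return k  # accept
            # out of range: retry -- the timing channel gmaxwell mentions
            K = hmac.new(K, V + b"\x00", hashlib.sha256).digest()
            V = hmac.new(K, V, hashlib.sha256).digest()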
< sipa> jeremyrubin: sorry for bringing up information theoretical security; i just want to point out that "leaking a bit" inherently isn't a problem
< sipa> the problem is leaking information that leads to a faster attack
< sipa> because every deterministic algorithm leaks *all* the bits already, but not in a usable way
< jeremyrubin> Well, is secp256k1 bijective ;)
< gmaxwell> do you mean is the scalar range mappable onto points sG both directions? of course. But the mapping is (we hope!) only efficiently computable one way.
< gmaxwell> If your attacker has unbounded computing time, however, he could compute it both ways.
< gmaxwell> Which is why it has only computational security.
< sipa> yes, information theoretically ECDSA is trivial to break overall
< sipa> again my point: leaking a bit isn't what matters
< jeremyrubin> be back in a... bit
< gmaxwell> jeremyrubin: Consider the legacy signer. You produce a stream of signatures. Roughly half are low R, roughly half are high R. The low R ones are ones where the selected K got the low R on the first try, and the ones with a high R are ones where it was high on the first try. Every deterministic algorithm always 'leaks' all the data, but the 'leak' is not useful.
< sipa> right; the legacy signer already leaks the 'bit' of information whether or not the first was low R or not
< sipa> the only thing added is that it's the total number of attempts, and not just whether it was 0 or nonzero
< gmaxwell> It's essentially the same 'leak' as you're talking about with timing. (technically timing is more than 1 bit, but you can get that by not just looking at low vs high R)
< sipa> (possibly, to a timing attacker)
< gmaxwell> e.g. it's isomorphic to looking at R and counting how many leading 0s it has.
< sipa> to a non-timing attacker the new algorithm actually leaks less