< gmaxwell> jonasschnelli: maybe when you post a patch for the encryption, I'll go and quickly staple newhope to it and we see how we feel about the combination.
< jonasschnelli> gmaxwell: Sure. My current priority is encryption, then auth, then NH/PQ. I can also try the NH stuff earlier..
< gmaxwell> sipa: 13989 is doing avx512 sha256d64, and gets a 19% speedup for the MerkleRoot benchmark.
< wumpus> time to tag rc1?
< wumpus> I'm sure enough issues will come up during gitian building for the first time with bionic (although I was able to do so successfully), but I think it makes sense to start testing?
< fanquake> wumpus sounds good
< wumpus> ok! let's do it then
< wumpus> yesterday no one protested either so...
< fanquake> Just take that as silent optimism/agreement or something
< wumpus> I'm just trying to avoid forgetting a dumb step (yes, version has been bumped)
< wumpus> * [new tag] v0.17.0rc1 -> v0.17.0rc1
< wumpus> there we go
< fanquake> That's what rc's are for anyways. Wouldn't be the first time we've fixed something up quickly with an rc2 etc.
< fanquake> wew!
< wumpus> 0.17.0rc1 gitian sigs (unsigned) up, wonder if anyone reproduces
< jonasschnelli> wumpus: still building: https://bitcoin.jonasschnelli.ch/build/743
< jonasschnelli> windows and osx match though... linux: will see
< wumpus> awesome !
< jonasschnelli> linux is a match as well
< jonasschnelli> Benchmarks for v2 message format composing (encrypted):
< jonasschnelli> 100 blocks: V1 legacy (dblSHA): 1.43978, V2 (ChaCha20/Poly1305): 1.42594
< jonasschnelli> (and this is with SHA SSE4.1/AVX versus non-NI accelerated chacha)
< fanquake> thanks for the quick build ken2812221, more matching sigs
< ken2812221> fanquake: nice
< wumpus> jonasschnelli: nice
< promag> with 0.17 branched, #13529 can be reviewed
< gribble> https://github.com/bitcoin/bitcoin/issues/13529 | Use new Qt5 connect syntax by promag · Pull Request #13529 · bitcoin/bitcoin · GitHub
< promag> wumpus: ^
< promag> maybe I should push a rebased commit?
< wumpus> promag: will review
< wumpus> looks like you first need to address the issues, apparently you did break some things!
< promag> wumpus: yeah I saw that after writing the above :(
< gmaxwell> jonasschnelli: well the reason I was offering to try sticking on newhope is that I _think_ I can add it in with only a few lines of code changed. If we're of the view that we'll want it long term, then maybe it'll make sense to do up front if it really does turn out to be that simple.
< jonasschnelli> gmaxwell: Yes. Agree. I'm just too deep into implementing the ecdh/chachapoly stuff right now... but as soon as some tests are finished, I'm happy to try the NH implementation.
< jonasschnelli> I need to research (or ask you) a bit more, though
< gmaxwell> OKAY but thats why I was offering to help. :P
< gmaxwell> Sure. Fortunately it's really simple.
< jonasschnelli> What implementation are you looking at?
< jonasschnelli> Doesn't it also require SHA3?
< gmaxwell> No.
< jonasschnelli> ok
< gmaxwell> it uses chacha20 internally just as a random number generator.
< gmaxwell> oh actually there is a sha3 impl there too. I guess it's using that to hash the final state.
< jonasschnelli> gmaxwell: replace that with sha2? -> https://github.com/newhopecrypto/newhope-usenix/blob/master/ref/newhope.c#L58?
< gmaxwell> yes, we can just replace that with sha2.
< gmaxwell> You see how it works though? https://github.com/newhopecrypto/newhope-usenix/blob/master/ref/test/test_newhope.c#L36 Initiator runs newhope_keygen and sticks senda on their message, responder runs newhope_sharedb with that as an argument and gets the shared keyb and sendb to send, initiator gets sendb and feeds it to newhope_shareda and gets the same shared secret out.
< gmaxwell> The messages are constant length (1824 and 2048 bytes IIRC).
< gmaxwell> And we'd probably go and change the API slightly so that it takes randomness as an argument rather than reading /dev/urandom itself.
< gmaxwell> though for an initial test that could be ignored.
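The keygen/sharedb/shareda call sequence gmaxwell outlines can be mimicked with a toy stand-in. The sketch below uses classic modular Diffie-Hellman over a Mersenne prime purely to show the message flow and API shape; it has nothing to do with newhope's lattice math, and the function names only mirror the reference implementation:

```python
import secrets

# Toy stand-in for the newhope flow. The math here is classic modular DH, NOT
# newhope's lattice construction; only the one-message-each-way call sequence
# (keygen -> sharedb -> shareda) mirrors the reference API.
P = 2**127 - 1   # small Mersenne prime, toy parameter only
G = 3

def keygen():
    """Initiator: make a secret and the bytes to send (send_a)."""
    sk = secrets.randbelow(P - 2) + 1
    return sk, pow(G, sk, P)

def sharedb(send_a):
    """Responder: consume send_a, output the shared key plus send_b."""
    b = secrets.randbelow(P - 2) + 1
    return pow(send_a, b, P), pow(G, b, P)

def shareda(sk, send_b):
    """Initiator: consume send_b, recover the same shared key."""
    return pow(send_b, sk, P)

sk, send_a = keygen()            # initiator -> responder: send_a
key_b, send_b = sharedb(send_a)  # responder -> initiator: send_b
key_a = shareda(sk, send_b)
assert key_a == key_b
```

The real newhope functions have this same shape, which is why the exchange can piggyback on a handshake that already sends one blob each way.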
< jonasschnelli> gmaxwell: so it basically works the same as DH (in terms of network interactions)?
< jonasschnelli> Could we append send_a (the message) to the ecdh-32byte-pubkey?
< jonasschnelli> I guess the NH handshake doesn't have to happen under the ECDH encrypted channel?
< gmaxwell> Correct on all counts.
< gmaxwell> It's not _exactly_ the same as DH in terms of network interactions because in DH both alice and bob can send their pubkeys at the same time, but in newhope alice sends then bob sends. But we're not using DH that way anyway.
< gmaxwell> So we can implement it just by appending send_a to the initiator pubkey, and send_b to the responder pubkey.
< gmaxwell> then we just hash the secrets that come out of both DH and newhope.
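The "hash the secrets that come out of both" combiner could look like this minimal sketch; the exact KDF and input ordering are assumptions here, not anything the discussion specifies:

```python
import hashlib

def combine_secrets(ecdh_secret: bytes, newhope_secret: bytes) -> bytes:
    # Hypothetical hybrid combiner: an attacker must break BOTH key
    # exchanges to recover the session key, since the output requires
    # knowing both halves of the hash preimage.
    return hashlib.sha256(ecdh_secret + newhope_secret).digest()

session_key = combine_secrets(b"\x11" * 32, b"\x22" * 32)
assert len(session_key) == 32
```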
< jonasschnelli> gmaxwell: I see. Whats the length of the newhope secret output? 32b?
< gmaxwell> 32 bytes, yes.
< jonasschnelli> Okay. From a protocol level, this seems easy.
< jonasschnelli> Are the NH messages identifiable (DPI-ish)?
< gmaxwell> No, not more than the secp256k1 public keys.
< jonasschnelli> ok. then ECDH doesn't need to go first
< jonasschnelli> yeah. I don't see a reason why we should not add newhope to the handshake (unless I completely misunderstand it)
< gmaxwell> The only arguments I can make against it are that (1) it might turn out to not add much security (e.g. if newhope gets broken), and then we're stuck carrying it around burning bytes, loc, and cpu cycles on it (2) implementers that insist on writing everything from scratch instead of using public domain C code will have a harder time, (3) more stuff to integrate.
< gmaxwell> For other applications, like using it in SSL the size and speed impacts matter more. For us, I think they're basically irrelevant. Newhope is as fast as (or maybe even faster than, with AVX in use) ECDH.
< jonasschnelli> I hope my implementation for detecting a version message or a key-handshake will still work since it's now longer than 32b
< gmaxwell> You should just be able to read the first N bytes and decide, then keep reading.
< jonasschnelli> yeah... avoid pubkeys starting with the netmagic, read 4 bytes and continue with the handshake when not equal to the net magic
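jonasschnelli's 4-byte peek can be sketched as a tiny demultiplexer. It assumes mainnet's message-start bytes and that, as he says, the initiator grinds its ephemeral pubkey so it never begins with those bytes:

```python
# Hypothetical demultiplexer for the first 4 bytes of a fresh connection.
# f9beb4d9 are the mainnet message-start bytes; an initiator would grind
# its ephemeral pubkey so it never begins with them.
NET_MAGIC = bytes.fromhex("f9beb4d9")

def classify_first_bytes(first4: bytes) -> str:
    if first4 == NET_MAGIC:
        return "legacy-v1"     # plaintext protocol, version message follows
    return "encrypted-v2"      # anything else: start of a handshake pubkey

assert classify_first_bytes(bytes.fromhex("f9beb4d9")) == "legacy-v1"
assert classify_first_bytes(b"\x02" + b"\xab" * 3) == "encrypted-v2"
```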
< wumpus> RISC-V node (on actual hw) started syncing
< sipa> wooho
< gmaxwell> wumpus: \O/
< gmaxwell> now just submit some patches upstream to add hardware SHA2 and ECC so that the next one will finish in reasonable time. :P
< sipa> just add a HW instruction vrfybtcnblk
< sipa> wait, what did the R in RISC stand for again?
< wumpus> hehe
< gmaxwell> well, brainfuck has fewer operations, so clearly RISCV isn't RISC? :P
< wumpus> it stands for Rich Instruction Set Computing
< sipa> ah.
< sipa> gmaxwell: try whitespace
< sipa> i guess whitespace has more instructions, but just encoded in 3 characters
< gmaxwell> wumpus: as opposed to constrained instruction set computing?
< wumpus> though tbh at this point I'm less worried about performance than about obscure chip and compiler issues
< wumpus> gmaxwell: exactly!
< gmaxwell> jonasschnelli: in any case, newhope is a member of the class of fast PQ schemes that are newer and more likely to turn out to be insecure against even classical computers. But the only alternatives that aren't have properties that would make us not use them.. e.g. the McEliece public keys are like 1MB.
< gmaxwell> jonasschnelli: oh, actually we'd want to be using this implementation https://github.com/newhopecrypto/newhope/tree/master/ref as it's their NIST submission, and has a substantial simplification to the protocol innards.
< jonasschnelli> ok. thanks... I'll look into it.
< wumpus> apparently the MMC interface on the HiFive unleashed is really slow, either due to the current kernel version, or due to some hardware limitation
< wumpus> the debian/fedora developers use nbd, though currently I have a 100mbit switch connected for the embedded LAN so that's not going to be great either :-)
< wumpus> but all in all I'm quite happy with this, for the first larger-scale ASIC implementation of a new architecture it's cool how far this gets
< sipa> the risc-v instruction encoding is pretty cool
< sipa> everything is natively a 32 bit instruction, but there are optional extensions that compress certain instructions down
< wumpus> yes how they do variable-length is interesting
< sipa> but that compression can be implemented as a pure postprocessing step in the assembler
< wumpus> isa : rv64imafdc I think -c is the 16-bit compressed instruction extension, so it has that
< wumpus> M=Integer Multiplication and Division, A=Atomic instructions, F=32-bit fp, D=64-bit fp, C=Compressed instructions
< sipa> no B=bit manipulation?
< wumpus> apparently not!
< wumpus> "This chapter is a placeholder for a future standard extension to provide bit manipulation instructions."
< sipa> ah
< wumpus> the version of the spec I have calls it future, at least
< sipa> i guess basic bit manipulation is available in the base set of instructions
< wumpus> yes: ANDI/ORI/XORI
< wumpus> they're in the mandatory part
< wumpus> I guess the idea of -B will be to provide a more extensive set, say for direct injection/extraction of bit sequences, popcount, count leading/trailing bits, etc
< jonasschnelli> gmaxwell: I guess a configuration option that would allow ecdh only handshake would make little sense?
< jonasschnelli> A downside with newhope could be the implementation burden if we also want SPV clients to adopt it.
< gmaxwell> jonasschnelli: I don't think it would make sense to make it optional.
< gmaxwell> jonasschnelli: there are, for example, java implementations and whatnot.
< promag> meeting?
< * luke-jr> pokes wumpus
< wumpus> hello
< sdaftuar> hi
< wumpus> #startmeeting
< lightningbot> Meeting started Thu Aug 16 19:03:54 2018 UTC. The chair is wumpus. Information about MeetBot at http://wiki.debian.org/MeetBot.
< lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
< jonasschnelli> Yeah... hi
< meshcollider> hi
< promag> hi
< wumpus> #bitcoin-core-dev Meeting: wumpus sipa gmaxwell jonasschnelli morcos luke-jr btcdrak sdaftuar jtimon cfields petertodd kanzure bluematt instagibbs phantomcircuit codeshark mi
< wumpus> chagogo marcofalke paveljanik NicolasDorier jl2012 achow101 meshcollider jnewbery maaku fanquake promag provoostenator
< gmaxwell> darn, just beat me to it.
< achow101> hi
< sipa> hi
< wumpus> apparently I divided michagogo in two
< sdaftuar> ouch
< meshcollider> Hope he's not too cut up about it
< wumpus> PSA: we've tagged 0.17.0rc1, let us know if you have any trouble gitian-building due to the upgrade of the guest to Ubuntu 18.04/bionic
< jonasschnelli> As mentioned it's a bit sad that we dropped debian 9 (newest version) as gitian host,... but apparently compiling lxc was simpler than expected
< wumpus> now that 0.17 release cycle is started, it might be time to bring back "high priority for review" as a topic
< wumpus> I have some instructions for debian 9 here: https://gist.github.com/laanwj/c62e101bfd68718f0686926dfd10666b
< jtimon> hi
< wumpus> (yes, you need to build lxc and debootstrap from source)
< jonasschnelli> Nice wumpus
< wumpus> we should probably integrate that into the documentation
< jonasschnelli> Yes. Until lxc 2.1.1 is supported by debians apt
< wumpus> yes, 3.x is probably overkill
< luke-jr> wumpus: did we ever get a solution for making a bionic base VM? :/
< jonasschnelli> Works fine here with 2.1.1
< luke-jr> (for qemu)
< achow101> also, with the suite bump, don't forget to do bin/make-base-vm again
< wumpus> luke-jr: I don't think so, I think LXC and Docker are the only options at the moment
< luke-jr> achow101: last I checked that doesn't work
< achow101> luke-jr: there's a fork of vmbuilder that works for bionic
< wumpus> but the fork might be worth a try !
< wumpus> anyhow -- please ask in this channel if you have any problems getting gitian running
< wumpus> #topic High priority for review
< wumpus> https://github.com/bitcoin/bitcoin/projects/8 there's only one PR in there at the moment, #13100
< gribble> https://github.com/bitcoin/bitcoin/issues/13100 | gui: Add dynamic wallets support by promag · Pull Request #13100 · bitcoin/bitcoin · GitHub
< sipa> i'd like #13723
< gribble> https://github.com/bitcoin/bitcoin/issues/13723 | PSBT key path cleanups by sipa · Pull Request #13723 · bitcoin/bitcoin · GitHub
< sipa> (it's the basis for further psbt/descript integration)
< instagibbs> https://github.com/bitcoin/bitcoin/pull/13968 bugfixes for psbt stuff too(0.17 backport)
< instagibbs> #13968
< gribble> https://github.com/bitcoin/bitcoin/issues/13968 | [wallet] couple of walletcreatefundedpsbt fixes by instagibbs · Pull Request #13968 · bitcoin/bitcoin · GitHub
< ken2812221> #13866
< gribble> https://github.com/bitcoin/bitcoin/issues/13866 | utils: Use _wfopen and _wfreopen on Windows by ken2812221 · Pull Request #13866 · bitcoin/bitcoin · GitHub
< wumpus> 13723 added
< jtimon> if high priority is still for blockers, https://github.com/bitcoin/bitcoin/pull/13311 is kind of a blocker for https://github.com/bitcoin/bitcoin/pull/8994 which is itself a blocker for other things I wanted to do
< wumpus> the other two as well
< wumpus> #13311
< gribble> https://github.com/bitcoin/bitcoin/issues/13311 | Dont edit Chainparams after initialization by jtimon · Pull Request #13311 · bitcoin/bitcoin · GitHub
< promag> wumpus: can you replace 13100 with #13529?
< gribble> https://github.com/bitcoin/bitcoin/issues/13529 | Use new Qt5 connect syntax by promag · Pull Request #13529 · bitcoin/bitcoin · GitHub
< wumpus> promag: ok
< wumpus> jtimon: added
< jtimon> yeah, thanks
< achow101> I would like #12493
< gribble> https://github.com/bitcoin/bitcoin/issues/12493 | [wallet] Reopen CDBEnv after encryption instead of shutting down by achow101 · Pull Request #12493 · bitcoin/bitcoin · GitHub
< promag> ty
< wumpus> achow101: ok, yes, probably makes sense to merge that early in the 0.18 cycle
< wumpus> achow101: it sounds sort of risky
< gmaxwell> achow101: did we ever work through the problems that were preventing "don't create a wallet until a key is requested or until encryption is added" ?
< gmaxwell> As that would also get rid of the shutdown on encryption for many users.
< achow101> gmaxwell: the problems with that were backups IIRC
< achow101> gmaxwell: actually that problem was with generate keys on use, not create wallet on use
< achow101> in that case, I don't think so. but I also don't remember the problems with create wallet on use. I think I just never got around to implementing it
< gmaxwell> right. I thought there were some dumb bugs with create on use that were going to get fixed as a side effect of pending multiwallet work.
< wumpus> any other proposed topics?
< sipa> short announcement: we're working on an extension to descriptors to support nested and/or/threshold constructions
< wumpus> cool!
< sdaftuar> topic suggestion: open floor for people to share what they
< jonasschnelli> nice
< sdaftuar> re wokring on
< sdaftuar> (since we don't have any other topics apparently :P)
< sipa> i like wok rings
< wumpus> #topic open floor for people to share what they are working on
< jonasschnelli> Working on p2p level encryption for a couple of weeks (that's why I'm pretty quiet on github). Will open a PR in 1-2 weeks.
< wumpus> sipa: so this is for further improvements to scantxoutset I suppose?
< sipa> wumpus: and all the things :)
< wumpus> right
< sipa> wumpus: so you can import and(xpub/...,or(xpub/...,xpub/...)) into your wallet as watch-only chain for example
< achow101> wumpus: hopefully for eventually a replacement-ish thing to the wallet
< sipa> and get psbt to sign for it
< instagibbs> so excite
< wumpus> that's neat
< jonasschnelli> sipa: how would/could that lead to xpub watch only wallets?
< sipa> jonasschnelli: yes
< sipa> that's the goal of the descriptors
< instagibbs> achow101, not sure if he was joking, but he said he wouldn't have to implement it if I did the HWI version. I'll keep leaning on him
< jonasschnelli> That would be a great feature... could also be extended to the GUI including coin selection (send screen), which wouldn't sign/broadcast (obviously) but would create a PSBT (file)
< instagibbs> my guess is for any hope of Core support of HWW, we want to not have to support a bunch of PSBT drivers...
< instagibbs> oh sorry wrong window
< jonasschnelli> instagibbs: you mean more support than the PSBT?
< sipa> jonasschnelli: seems reasonable yes
< sipa> i need to think through how to integrate things in the wallet itself, so you can import descriptors
< sipa> and how to make it compatible with all existing RPCs etc
< gmaxwell> sipa: not just and/or/threshold but also CSV and hashlocks?
< instagibbs> gmaxwell, yes
< sipa> gmaxwell: oh yes
< instagibbs> (oh rhetorical)
< sipa> but my goal is that the wallet eventually consists of a bunch of descriptors with metadata (and labels and transactions, and precomputed pubkeys to replace keypools)
< gmaxwell> If lots of people can help sipa finish that work it would be good. :P
< sipa> but independently andytoshi and i started looking into how we can efficiently compile arbitrary and/or/k-of-n/locktime/hash expressions to script
< sipa> which would just plug into descriptors, and then be available to everything that uses them
< wumpus> so I"ve been working on the RISC-V support, today I was able to do basic bring-up of the hardware (HiFive Unleashed) and test the gitian-built executables, which work!
< wumpus> been able to run test_bitcoin successfully and sync part of the chain, I'll keep the node running
< wumpus> probably the first RISC-V bitcoin node in the world
< jonasschnelli> \o/ nice!
< gmaxwell> nmkgl and I have been working on reconciliation based transaction relay. (With sipa's help too, that's why I want people to help finish the descriptor work, ... :P )
< sdaftuar> gmaxwell: as in set reconciliation, eg to sync mempools?
< sipa> sdaftuar: yup
< gmaxwell> sdaftuar: Yes, but I believe we found a better design that doesn't sync the mempools directly.
< sdaftuar> neat, i'd be interested in learning more if you guys have a summary at some point.
< sipa> sdaftuar: it uses a more computationally expensive set reconcilation protocol than IBLT, but it's much more space efficient
< gmaxwell> muuuch more.
< sipa> (basically the amount of data transferred is equal to the expected difference between the two sets)
< sipa> but it's in the order of 40ms here to find 300 differences
< gmaxwell> I also came up with a new reconciliation protocol with a different performance tradeoff (much faster to decode, but slightly less space efficient), though it doesn't look like we'll have cause to use it.
< gmaxwell> sdaftuar: in any case the short summary of what we're thinking now vs before is that instead of reconciling mempools, reconcile the transactions that the peers would have otherwise INVed to each other.
< gmaxwell> this avoids cases like a mempool policy difference causing transactions to get 'stuck' in the difference set forever until they get mined.
< sipa> oooh nice
< sipa> i missed that
< gmaxwell> Then it can just be coupled with some simple mechanism to fast-start an empty or stale mempool.
< gmaxwell> (I have a proposal for that too, just a simple one/two message protocol)
< instagibbs> would this be complementary to an IP-hiding protocol like dandelion?
< sdaftuar> ah, that sounds neat
< BlueMatt> hmm, and then you could bucket and do reconciliation slowly for low-feerate buckets, or no reason to do that anymore?
< BlueMatt> instagibbs: I'd presume so, this would be during fluff-phase :p
< sipa> instagibbs: yes, orthogonal
< sipa> this is changing the diffusion phase
< sipa> not the stem
< instagibbs> BlueMatt, not exactly fluffing anymore :)
< gmaxwell> instagibbs: it's orthogonal, the main motivation is getting rid of the really high bandwidth overhead for rumoring.
< instagibbs> sipa understood
< gmaxwell> Probably some differences in context here. :)
< gmaxwell> Right now, ignoring peers that IBD, the vast majority of node bandwidth is wasted on INV rumoring.
< BlueMatt> gmaxwell: wait, was that a response to my question?
< gmaxwell> BlueMatt: sure, it could be split by feerate too, though I'm not sure we'll have a reason to because the reconcilation is so efficient.
< BlueMatt> hmm, guess I need to think about this more
< sdaftuar> do you have an estimate on the bandwidth reduction?
< gmaxwell> No, don't have a mature enough simulator yet. Also I haven't measured overheads since we improved inv batching.
< gmaxwell> But previously INV overhead was something like 80% of node bandwidth (excluding IBDing peers).
< gmaxwell> And this should mostly eliminate that overhead.
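gmaxwell's ~80% figure is plausible from back-of-envelope arithmetic; all numbers below are illustrative assumptions, not measurements from the discussion:

```python
# Illustrative assumptions only, not measurements:
INV_ENTRY = 4 + 32    # inv vector entry: 4-byte type + 32-byte txid
PEERS = 8             # outbound connection count
AVG_TX_SIZE = 250     # assumed average transaction size in bytes

# A transaction body crosses a node's links roughly once, but its
# announcement crosses nearly every link.
inv_bytes = INV_ENTRY * PEERS                     # announcement bytes per tx
overhead_share = inv_bytes / (inv_bytes + AVG_TX_SIZE)
assert inv_bytes > AVG_TX_SIZE    # announcements already outweigh the payload
```

With more peers (and getdata/header framing on top), the announcement share only grows, which is the overhead reconciliation targets.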
< wumpus> looks like we've run out of topics, but not out of time yet
< gmaxwell> In any case, we've been largely off in number theory land optimizing the recon itself... but I think we've got what we need there now. :)
< sdaftuar> btw i've been thinking about dandelion recently, trying to work through anti-DoS measures for stem routing
< gmaxwell> sdaftuar: Good! This seems to be one of those things that is easy in theory (if either you ignore getting it right or ignore how hard it is to implement) but hard in practice. :)
< wumpus> would be good to reduce the DoS surface there, yes
< wumpus> #endmeeting
< lightningbot> Meeting ended Thu Aug 16 20:00:03 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
< sdaftuar> ploinkidoof?
< jonasschnelli> rekeying and message queues is a nightmare
< sipa> it would be much easier if it were deterministic rather than negotiated
< jonasschnelli> sipa: not sure. Since you need to decrypt the length in chacha20poly1305@openssh
< jonasschnelli> Or maybe whenever a message exceeds the limit, the following one will be encrypted with the new key
< jonasschnelli> but the time limit could be tricky,... the byte limit probably not
< gmaxwell> wait why are rekeying and message queues a nightmare.
< sipa> gmaxwell: because parsing protocol messages happens before processing them
< jonasschnelli> gmaxwell: because decomposing and queuing (where you decrypt) is done before processing
< sipa> a rekey means you need to undo the parsing for unprocessed things
< gmaxwell> the rekeying should be handled at the decryption layer, when you decrypt and find a rekey message then the very next bytes out you decrypt with the new stuff.
< jonasschnelli> I'm currently trying an approach where I pause the read channel when I detect a rekey during parsing
< sipa> i guess it would be easier if rekeying was done at a meta layer
< sipa> say a flag bit in the encrypted packet
< sipa> rather than an actual protocol message
< gmaxwell> Use e.g. an out-of-range length.
< jonasschnelli> Rekeying when the decrypted length is out of range seems fragile though,...
< jonasschnelli> I kinda like gmaxwell approach of putting the rekeyin logic in the decryption handler
< gmaxwell> Why?
< jonasschnelli> gmaxwell: isn't it technically possible that you get a valid length (<MAX_LENGTH) even with an invalid key?
< gmaxwell> so? if your stream is corrupted you'll dsync, fail auth, and disconnect.
< jonasschnelli> gmaxwell: is the probability of a valid length with an unexpected key low enough that a reconnect would be an acceptable workaround?
< gmaxwell> I think we're probably talking past each other.
< jonasschnelli> heh...
< gmaxwell> what I'm trying to suggest is that to signal rekeying, under the old key, you send a specific length value which we would otherwise never use (due to it being out of range)
< jonasschnelli> Assume I rekey whenever decrypting yields an invalid length... I guess there is a probability that a length decrypted with the now-invalid key is still within the MAX_SIZE boundary.... right?
< gmaxwell> No.
< jonasschnelli> aha! I see
< jonasschnelli> Wait..
< gmaxwell> You don't decrypt the same data again. You see an invalid length, you rekey and throw out that length and continue.
< jonasschnelli> So... peer A asks for a rekey by setting an invalid length encrypted under the old key, then the next message will use the new key?
< gmaxwell> Yes.
< jonasschnelli> So the message with the invalid length is a dummy, right?
< gmaxwell> yes.
< gmaxwell> I dunno if that's easier than just handling the rekey message at the decryption layer.
< jonasschnelli> Could the dummy message be a rekey message. :)
< gmaxwell> I was just presenting it as an alternative. :)
< gmaxwell> sure. But it could also, for example be an empty message (length 0) which maybe is easier structurally to handle.
< jonasschnelli> Yes. I like the invalid length approach since it doesn't require the parsing message logic
< jonasschnelli> Length 0 would be an option and would not be confused with a real invalid decryption
< jonasschnelli> But length 0 would also be prone to DPI I guess
< jonasschnelli> Since it would always be the smallest packet
< jonasschnelli> (could artificially blow it up though)
< sipa> sdaftuar: my one line summary: we have a way to compute a 'sketch' from a set of N bit elements, with a size of M*N (so equal in size to M elements), in such a way that you can recover the contents from a sketch as long as there aren't more than M elements in it. Now, XORing two sketches gives you a sketch of the set of elements that are in one of the two input sets (but not both)
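In the trivial capacity-1 case, sipa's description degenerates to a plain XOR of the elements, which already shows the cancellation property he relies on (pinsketch/BCH generalizes this to capacity M with M field elements):

```python
from functools import reduce

def sketch_m1(elements):
    """Toy capacity-1 sketch: just the XOR of all N-bit elements.
    pinsketch generalizes this to capacity M, using M field elements."""
    return reduce(lambda a, b: a ^ b, elements, 0)

alice = {0x1A2B, 0x3C4D, 0x5E6F}
bob = {0x1A2B, 0x3C4D, 0x5E6F, 0x7777}   # one extra element

# XORing the two sketches cancels shared elements pairwise, leaving
# exactly a sketch of the symmetric difference.
combined = sketch_m1(alice) ^ sketch_m1(bob)
assert combined == 0x7777
```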
< gmaxwell> this protocol doesn't resist traffic analysis.
< jonasschnelli> Yes... right,..
< jonasschnelli> But with the 32-byte pure pubkey handshake and the encrypted length, we are pretty stealthy and make life a bit harder for DPI configurators
< gmaxwell> jonasschnelli: I don't see how the length0 thing changes anything.
< gmaxwell> the length is still encrypted.
< jonasschnelli> I think the length-0 packet could still contain a payload of the size of an inv
< jonasschnelli> Yes. I just thought that (if someone analyses packet bursts) a rekey with a length-0 payload would be easy to identify ... but probably so with other commands.
< jonasschnelli> But I guess my point is weak... length 0 seems to be the most advanced idea for how to trigger a rekey on the encryption layer
< gmaxwell> Sending a minimum length (just len and auth tag) message is no more or less identifiable than any other size. If socket handling merges multiple messages, they're not identifiable at all, and if socket handling splits every message, the lengths are implicitly visible.
< gmaxwell> the only downside I see to length 0 is that we might otherwise want to use them for keepalives. :)
< gmaxwell> (which you might want to send more often than once per ten minutes)
< gmaxwell> but I guess we ping at the protocol level, so nevermind. :P
< jonasschnelli> We could also use MAX_INT32 for the rekey
< gmaxwell> I forget how the length is encoded. Is it encoded with a variable length encoding?
< gmaxwell> sipa, sdaftuar: An astute reader might notice that a set of M N bit elements can be communicated in log2(2^N choose M) bits. so this scheme is not quite perfectly efficient, but it's close.
< jonasschnelli> gmaxwell: length is a fixed 4-byte uint32
< jonasschnelli> encrypted with a key only used for chacha20-crypt the length
< gmaxwell> jonasschnelli: oh, okay, for some reason I thought we had something with lower overhead there. I guess it doesn't matter much since most of our messages are fairly long.
< gmaxwell> jonasschnelli: indeed MAX_UINT32 could just be a length 0 message that triggers rekey.
< sdaftuar> sipa gmaxwell: what's the failure mode/fallback scenario if a sketch can't be reconciled?
< gmaxwell> sdaftuar: Sketch data can be incrementally sent. so if M wasn't enough I can just send you one more value and now you have M+1.
< sdaftuar> oh neat
< sipa> it can even be incrementally computed
< sdaftuar> does that require saving the old sketch?
< sdaftuar> oh
< sipa> (but computing a sketch is very cheap, recovering data from one is expensive)
< jonasschnelli> gmaxwell: ideally, there would be a way to flag the rekey in a non-extra message,.. i.e. a flag in the first message using the new key
< gmaxwell> In practice we'll have a limit on the maximum M, both for memory reasons storing the sketches and computation (decode is quadratic). ... so if we exceeded that, you'd just fail to relay those transactions since the last reconcile with that peer, better luck next time around.
< jonasschnelli> (to avoid accessing the push message logic from within the encryption logic)
< gmaxwell> sdaftuar: The sketch decode has two steps, the first step is perfectly incremental. So we waste no cpu getting the sketch data a little at a time. The second step is not incremental, though we can arrange things so that we can be pretty sure if it'll be successful or not before starting it.
< gmaxwell> jonasschnelli: well just steal the most significant bit of the length.
< gmaxwell> 2^31 byte messages should be enough for anyone.
< jonasschnelli> But megablocks!
< jonasschnelli> Yes. Great idea
< gmaxwell> 2^31 is still 2GB. :P
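Stealing the length field's most significant bit as a rekey flag, as gmaxwell suggests, might look like this; the little-endian 4-byte encoding and the flag position are illustrative assumptions:

```python
import struct

REKEY_BIT = 0x80000000  # hypothetical: steal the MSB of the 4-byte length

def encode_length(length: int, rekey: bool = False) -> bytes:
    assert length < REKEY_BIT        # 2^31 bytes should be enough for anyone
    return struct.pack("<I", length | (REKEY_BIT if rekey else 0))

def decode_length(raw: bytes):
    (word,) = struct.unpack("<I", raw)
    return word & 0x7FFFFFFF, bool(word & REKEY_BIT)

# A length-0 dummy message carrying the rekey flag:
assert decode_length(encode_length(0, rekey=True)) == (0, True)
assert decode_length(encode_length(1024)) == (1024, False)
```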
< * jonasschnelli> got called to bed
< gmaxwell> night
< gmaxwell> sdaftuar: the actual construction of this system is a binary BCH code, with a message 2^n bits long which has up to M 'errors' (the set difference) in it that we need to correct. So obviously, we should call it something like BCH faulting transaction relay. :P
< sdaftuar> just bch relay sounds nice and concise
< * warren> blinked hard upon reading BCH
< sdaftuar> anyway that sounds awesome, i assume it's close enough that we can hope to have it for 0.18?
< gmaxwell> I don't see why it couldn't be done in a couple months, assuming that as we implement we don't get stuck on stupid protocol issues.
< gmaxwell> or fall too far down the number theory rathole of micro-optimizing the set reconciliation.
< sdaftuar> :)
< sipa> sdaftuar: the algorithm is based on a project called 'pinsketch', which relies on a library called NTL, which is LGPL and a lot of code
< gmaxwell> it's really quite a lovely problem to work on. :)
< sipa> sdaftuar: so we reimplemented it from scratch in a few hundred lines, and it's faster too :)
< sdaftuar> yeah i look forward to being nerdsniped when you share your results
< sdaftuar> sipa: lol
< gmaxwell> and came up with a bunch of moderately smart optimizations...
< sdaftuar> i was about to groan about dealing with a 3rd party library and lgpl, i should have known better than to think you'd let that be a problem :)
< gmaxwell> sdaftuar: that was what held me off from doing more of this a year ago.
< sipa> sdaftuar: curiously, the author of pinsketch is someone i've met at real world crypto; he did a talk on UTXO commitment data structures
< luke-jr> achow101: MarcoFalke: that VMBuilder fork doesn't actually work :<
< luke-jr> at least not for bionic
< kevink> I'm using Bitcoin Core's API and am wondering what's the difference between `duplicate-inconclusive` and `duplicate` when calling `submitblock`. I'm just trying to verify that a block exists in the blockchain and `submitblock` seems like the simplest way to do that.
< sipa> kevink: try getblockheader instead
< phantomcircuit> kevink, submitblock is definitely not the way to do that, getblockheader is
< kevink> The reason I wanted to use submitblock was because I'm encoding and decoding the blockdata into an image using https://gist.github.com/laanwj/51f276c44ba9882bb4b27cc6f3a499a4 and wanted to check that it also remained a valid block after being decoded.
< sipa> compute its hash, and query for it
< gmaxwell> I guess we don't have a decoderawblock rpc that would construct the hash for you!
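Computing the hash yourself, as sipa suggests, is just a double-SHA256 of the 80-byte serialized header, displayed byte-reversed; here checked against the mainnet genesis block:

```python
import hashlib

def block_hash(header80: bytes) -> str:
    """double-SHA256 of the 80-byte serialized header, byte-reversed hex."""
    assert len(header80) == 80
    h = hashlib.sha256(hashlib.sha256(header80).digest()).digest()
    return h[::-1].hex()

# Mainnet genesis header: version | prev hash | merkle root | time | bits | nonce
genesis = bytes.fromhex(
    "01000000" + "00" * 32
    + "3ba3edfd7a7b12b27ac72c3e67768f617fc81bc3888a51323a9fb8aa4b1e5e4a"
    + "29ab5f49" + "ffff001d" + "1dac2b7c"
)
assert block_hash(genesis) == (
    "000000000019d6689c085ae165831e934ff763ae46a2a6c172b3f1b60a8ce26f"
)
```

Feed that hex hash to `getblockheader` and a non-error reply confirms the block is in the chain.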