< bitcoin-git>
bitcoin/master 62d3f50 Jon Atack: qa: fix deprecated log.warn in feature_dbcrash test
< bitcoin-git>
bitcoin/master 2410088 fanquake: Merge #16491: qa: fix deprecated log.warn in feature_dbcrash test
< bitcoin-git>
[bitcoin] fanquake merged pull request #16491: qa: fix deprecated log.warn in feature_dbcrash test (master...test-fix-feature_dbcrash-warn-deprecation) https://github.com/bitcoin/bitcoin/pull/16491
< kallewoof>
achow101: https://github.com/bitcoin/bitcoin/pull/16440#discussion_r308332901 I basically need the wallet interface to give me a way to sign things from the QT side, and this was the only approach I could think of. sipa suggested I ping you to make sure I wasn't trampling on stuff with the descriptor wallet changes you are working on.
< hugohn>
sipa: fwiw I found the issue, the redeem script was expanded in the signing provider but it was a _different_ provider. the caller needs to retrieve the expanded provider (instead of using the original one it got from Parse()) before doing the lookup.
< sipa>
hugohn: ha
< hugohn>
silly mistake :D so the correct importing flow is to TopUp(), which generates the cache, then call GetScriptPubKeyMan() to retrieve an expanded provider using the fresh cache, then do DetermineOutputType.
< hugohn>
*GetSigningProvider, not GetScriptPubKeyMan
< bitcoin-git>
[bitcoin] Bushstar reopened pull request #15709: wallet: Do not add "setting" key as unknown (master...walletdb-settings) https://github.com/bitcoin/bitcoin/pull/15709
< meshcollider>
fanquake: for example, if you have an old wallet, the legacy wallet structure is wrapped as a scriptpubkeymanager, and all address types would point to it
< meshcollider>
fanquake: in a pure descriptor wallet you would probably have 6 separate scriptpubkeymanagers though yes
< meshcollider>
if you have a wpkh() descriptor for your bech32 addresses it wouldnt make sense for the legacy output type to point to that for example
< fanquake>
meshcollider: 👍
< bitcoin-git>
[bitcoin] practicalswift closed pull request #16312: tests: Reduce memory usage by 0.5 GB when compiling script_tests.cpp (master...fix-absurd-memory-usage-when-compiling-script_build) https://github.com/bitcoin/bitcoin/pull/16312
< jonasschnelli>
sipa, BlueMatt: I'm unsure about the fixed 4-byte message string. We would "waste" 3 bytes on each inv compared to the short-ID approach. I don't expect many collisions with a space of 256-12 possible short-IDs.
< jonasschnelli>
New commands that are not broadly used on the network could still use a string,.... and only get a short ID after some time (to avoid collisions).
< jonasschnelli>
What we could do instead of using the values 1-12 for string-based message commands, ... we could use value 0 as an indicator for a 4-byte string-based command.
< jonasschnelli>
(compared to the fixed 4 bytes command it would be +1 byte)
< jonasschnelli>
The argument "but parsing variable length!" I don't buy completely,.. because one needs to deal with vlen anyways
< jonasschnelli>
you need to verify the MAC tag before deserializing,... so one needs to read the whole content of the stream anyways. Maybe I don't see the point, but messages are vlen anyway.
< jonasschnelli>
Skipping a message requires one to read the "header" (in the v2 case, the command ID/name)
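The short-ID scheme being discussed could be sketched like this. This is an illustrative toy, not the BIP324 wire format: the `SHORT_IDS` table is made up, and the rule "a first byte in 1..12 means a string command of that length follows" is taken from the messages above.

```python
# Toy codec for the scheme discussed above: well-known commands get a
# 1-byte short ID; other commands are sent as a length-prefixed string,
# signalled by a first byte in the range 1..12 (the string's length).
# The SHORT_IDS assignments here are hypothetical.

SHORT_IDS = {"tx": 13, "block": 14, "inv": 15}      # hypothetical table
NAMES = {v: k for k, v in SHORT_IDS.items()}

def encode_command(cmd: str) -> bytes:
    if cmd in SHORT_IDS:
        return bytes([SHORT_IDS[cmd]])              # single byte on the wire
    assert 1 <= len(cmd) <= 12
    return bytes([len(cmd)]) + cmd.encode("ascii")  # length byte, then string

def decode_command(buf: bytes) -> tuple[str, int]:
    """Return (command, number of bytes consumed)."""
    first = buf[0]
    if 1 <= first <= 12:                            # varlen string command
        return buf[1:1 + first].decode("ascii"), 1 + first
    return NAMES[first], 1                          # short-ID lookup
```

The encoding is self-delimiting, so a parser always knows where the command ends and the payload begins.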
< elichai2>
I finished with the descriptors but I'm trying to figure out `IsSolvable` right now, and I'm not sure how taproot fits in the SigningProvider API. the first thing to check, obviously, is to convert the point to P2PK and check if it can sign for that key. but if it can't I need to somehow be able to reconstruct the tree out of the signing provider. I'm thinking of adding it as a member, but is it okay that there
< elichai2>
might be duplicate scripts between that tree and the `scripts` map? or maybe I should store a tree of `CScriptID`s there and then get the right scripts through that map and try to solve for all the possible leaves (and for the first that returns true, I'll return true)
< elichai2>
hmm, and even if I have the correct tweak I need to find the correct internal key, and I guess brute-forcing the tweak against all the pubkeys in the map isn't a good solution heh
< elichai2>
so I can negate the tweak and add it to the taproot key to get the internal key. cool
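The recovery elichai2 mentions is just group algebra: if the taproot output key is internal_key + t*G, then adding (-t)*G gives back the internal key. A toy model, using integers modulo a prime in place of real curve points (not usable for actual secp256k1 math):

```python
# Toy model only: integers mod P stand in for curve points, and k*G is
# plain modular multiplication. Real code would use secp256k1 point
# arithmetic; this just illustrates the negate-and-add algebra.
P = 2**127 - 1   # an arbitrary Mersenne prime for the toy group
G = 7            # toy "generator"

def tweak_add(key: int, t: int) -> int:
    # taproot_key = internal_key + t*G
    return (key + t * G) % P

def untweak(tweaked: int, t: int) -> int:
    # internal_key = taproot_key + (-t)*G
    return (tweaked + (-t % P) * G) % P
```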
< BlueMatt>
jonasschnelli: I really hope we're not so starved for bandwidth that we can't afford 3 extra bytes
< BlueMatt>
and if we are, we can just have more things in each inv, no?
< jonasschnelli>
BlueMatt: What does afford mean in that context? :-) We can afford it, but it's more a question of waste/optimizing.
< jonasschnelli>
BlueMatt: bundling inv's.. sure. Not always possible.
< BlueMatt>
over-optimizing is the root of all evil :p
< jonasschnelli>
Sure. But I think we are in an acceptable range of optimizing.
< BlueMatt>
I mean if we're in a place where we can't bundle invs, then we have low tx traffic, and we can afford a few more bytes per message without anyone caring about the bw usage
< BlueMatt>
if we're not, we can bundle more
< BlueMatt>
I just dont really think an extra 3 bytes on a 32 byte thing to send really has much cost
< BlueMatt>
in exchange for a constant length header
< BlueMatt>
which is way easier to deal with
< sipa>
jonasschnelli: i don't care much about variable length either, but it seems a sticking point for BlueMatt
< sipa>
jonasschnelli: but also just diminishing returns; there is a 16 byte MAC per message; maybe saving 8 bytes per message matters, but saving 3 more is quickly becoming negligible for all but the shortest messages
< sipa>
elichai2: IsSolvable means "you know enough to sign under all conditions, if you had the private keys"
< sipa>
elichai2: taproot descriptors would always be solvable
< elichai2>
i'm talking about a different `IsSolvable` haha
< elichai2>
in sign.cpp
< elichai2>
(the one that's used in descriptors_test.cpp )
< bitcoin-git>
bitcoin/master 0c78e49 practicalswift: tests: Switch one of the Travis jobs to an unsigned char environment (-fun...
< bitcoin-git>
bitcoin/master 74f1a27 Wladimir J. van der Laan: Merge #15134: tests: Switch one of the Travis jobs to an unsigned char env...
< bitcoin-git>
[bitcoin] laanwj merged pull request #15134: tests: Switch one of the Travis jobs to an unsigned char environment (-funsigned-char) (master...unsigned-char) https://github.com/bitcoin/bitcoin/pull/15134
< sipa>
elichai2: the concept of solvable there needs to go away i think
< sipa>
elichai2: it captures whether you're able to estimate fees
< elichai2>
wait what. I thought it checks if you have the right script+private key
< elichai2>
like, it literally tries to produce a signature
< jonasschnelli>
BlueMatt, sipa: sorry... was on the phone. BlueMatt: what are the benefits of a constant length "header"?
< jonasschnelli>
Also, V2's "header" is the encrypted length & MAC TAG
< jonasschnelli>
A 32-bit field for defining the message type seems very large for the message variety we are expecting in the next years.
< jonasschnelli>
I also don't fully buy into the anti-collision argument
< jonasschnelli>
Maybe, if we consider collisions to be a thing in the future, we could say short-ID tables are bound to a certain net-protocol version?
< sipa>
elichai2: it basically checks if you can sign with a dummysigner (which doesn't need private keys)
< sipa>
elichai2: you shouldn't need taproot-specific logic; just signing logic
< emilengler>
How does bitcoin-qt detect when Ctrl-L is pressed? I can't find anything like a QShortcut or a QAction
< jonasschnelli>
[16:46:57] <BlueMatt>in exchange for a constant length header [...] which is way easier to deal with
< jonasschnelli>
^^^ can you elaborate a bit more on this?
< elichai2>
so I need to "try" producing signatures for the scripts, right? because I can't assume it's using a dummy signer that will return true for whatever pubkey. so I need to try with the internal key as P2PK and then with each script until `SignStep` returns true
< jonasschnelli>
You have a raw packet on the stream [3 bytes encrypted length] + [ ? encrypted payload ] + [ 16 bytes MAC]
< jonasschnelli>
(varlen already)
< jonasschnelli>
you check the mac and decrypt
< jonasschnelli>
then "deserialize" the message ([ ? payload ]), where the header is actually the message command (which is varlen, compared to the 24-byte V1 headers)
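The receive path jonasschnelli describes could be sketched end to end. A hedged toy only: HMAC-SHA256 truncated to 16 bytes stands in for Poly1305, a hash-based keystream stands in for ChaCha20, and (unlike the real proposal) the 3-byte length field is left unencrypted for simplicity.

```python
# Toy model of [3-byte length] + [encrypted payload] + [16-byte MAC]:
# verify the MAC first, only then decrypt. The primitives are stand-ins.
import hmac, hashlib

KEY = b"k" * 32  # toy symmetric key shared by both sides

def keystream(n: int) -> bytes:
    # deterministic toy keystream; real code derives this from ChaCha20
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(KEY + counter.to_bytes(8, "little")).digest()
        counter += 1
    return out[:n]

def xor_cipher(data: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(data, keystream(len(data))))

def seal(plaintext: bytes) -> bytes:
    ct = xor_cipher(plaintext)
    length = len(ct).to_bytes(3, "little")
    mac = hmac.new(KEY, length + ct, hashlib.sha256).digest()[:16]
    return length + ct + mac

def open_packet(packet: bytes) -> bytes:
    length = int.from_bytes(packet[:3], "little")
    ct = packet[3:3 + length]
    mac = packet[3 + length:3 + length + 16]
    expect = hmac.new(KEY, packet[:3] + ct, hashlib.sha256).digest()[:16]
    if not hmac.compare_digest(mac, expect):  # check the MAC first...
        raise ValueError("bad MAC")
    return xor_cipher(ct)                     # ...then decrypt
```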
< phantomcircuit>
jonasschnelli, it's infinitely easier to parse a constant length header than a variable length one
< jonasschnelli>
phantomcircuit: easier in what context?
< jonasschnelli>
I mean the message that follows very likely also has variable-length fields
< sipa>
elichai2: yes, you need taproot signing logic
< sipa>
if you have that, IsSolvable will work automatically
< elichai2>
great. so i'll add a map of taproot trees to `FlatSigningProvider` and recalc the internal key by negation, Thanks :)
< bitcoin-git>
[bitcoin] MarcoFalke opened pull request #16493: test: Bump rpc_timeout in feature_dbcrash (master...1907-testRpcTimeoutBump) https://github.com/bitcoin/bitcoin/pull/16493
< sipa>
elichai2: i'd add TaprootPath entries to SigningProvider (with a map from keyid, or from pubkey, unsure), one for each spending path
< sipa>
elichai2: for key paths, it would contain the tweak and the pubkey it's derived from
< sipa>
elichai2: for script paths it would contain leaf version, script, and Merkle path
< sipa>
the advantage is that this approach works even when you don't know all scripts
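The per-spending-path entries sipa sketches might look like this; the class and field names are hypothetical, not an actual Bitcoin Core API:

```python
# Hypothetical SigningProvider entries, one per spending path, as
# described above: key paths carry the tweak plus the base pubkey;
# script paths carry leaf version, script, and Merkle path.
from dataclasses import dataclass

@dataclass
class TaprootKeyPath:
    internal_pubkey: bytes    # pubkey the output key is tweaked from
    tweak: bytes              # the taproot tweak

@dataclass
class TaprootScriptPath:
    leaf_version: int         # e.g. 0xc0 for tapscript v0
    script: bytes             # the leaf script itself
    merkle_path: list[bytes]  # sibling hashes from leaf to root
```

Storing paths separately rather than the whole tree is what lets a signer participate even when it only knows some of the spending paths.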
< elichai2>
hmm so you're suggesting to split it to paths in the first place for the situation that you don't know all the paths
< sipa>
right
< elichai2>
although i'll argue that you shouldn't be able to sign for an address that you don't know all the paths for
< sipa>
i don't
< elichai2>
because that means you might not really "own" the coins
< sipa>
there is a difference between treating a coin as yours, and participating in signing
< sipa>
the descriptor side of things will always have the whole tree, and is used for determining what is yours
< elichai2>
and at the end of the path, do you think it should be a CScript, or a CScriptID into the map of scripts that already exists (and which will probably already have those scripts, because I'm using descriptors for the leaves, so ExpandHelper will fill it up)?
< elichai2>
sipa: right
< sipa>
i think it can be CScript
< elichai2>
even with the redundancy?
< sipa>
the scripts shouldn't be in the normal script map i think
< sipa>
as they're distinct anyway
< sipa>
as in: they ought to be a distinct namespace, because taproot scripts have different semantics
< hugohn>
sipa: wouldn't that complicate the new IsMine() logic, which is now just a look up in the "regular" scripts map?
< elichai2>
sipa: but if I have `tap(key, [p2k(...), p2kh(...)])` I want to use the PKDescriptor and PKHDescriptor descriptors
< elichai2>
and in the end it's still `OP_CHECKSIG`
< sipa>
hugohn: the IsMine logic will just be descriptors
< sipa>
in no way should taproot stuff influence the old legacy IsMine code
< sipa>
elichai2: ok, so?
< sipa>
elichai2: expanding a PK descriptor gives you a script; inside a taproot descriptor you put that in the tree rather than directly in the script map
< hugohn>
yeah I mean for native descriptor wallets, not the legacy ones. i.e., DescriptorScriptPubKeyMan::IsMine() . That will have to change if we add tapscripts to a different map, no?
< sipa>
hugohn: no
< elichai2>
I just realized that when I'm expanding I use `MakeScripts` on the subdescriptors and not `ExpandHelper`, so it wouldn't automatically get added to the FlatSigningProvider
< sipa>
hugohn: there is a map of scriptPubKeys
< sipa>
hugohn: that is not the same map as the mapScripts in signing providers
< elichai2>
makes sense. so the ExpandHelper could also generate all the possible script paths and add them to the provider
< sipa>
right, or the taproot specific code does that
< elichai2>
right
< elichai2>
btw, I see that your code right now hard-codes `0xc0` in the consensus code and doesn't compose it in any way. I'm wondering if it's worth the effort to really add a leaf version already, or to do the same (which isn't good from an engineering standpoint, but I'm not sure it's worth the effort given that other parts hard-code it)
< sipa>
elichai2: i think it makes sense that descripts for now only support tapscript v0
< sipa>
*descriptors
< sipa>
i think for psbt extensions we do want to support encoding other tapscript versions
< elichai2>
👍 yeah. probably won't see a new tapscript version for a while haha
< hugohn>
sipa: you're right I got those 2 mixed up, oops. So it's the `LookupHelper` for SigningProvider that will have to change to account for tapscripts, IsMine won't.
< sipa>
hugohn: i don't think the descriptor wallet code will need to change at all
< sipa>
hugohn: taproot descriptors will still be populating the list of scriptPubKeys to watch for
< sipa>
the mapScripts/taproot tree elichai2 and i were talking about is for internal scripts, not scriptPubKeys
< elichai2>
yep. the scriptPubKey is the easiest part.
< phantomcircuit>
jonasschnelli, it means you can just copy a buffer into the fields without much real logic
< sipa>
jonasschnelli: well for new message types they can be selected to not collide with existing ones
< hugohn>
sipa: right descriptor part won't change, but the Signing Provider will need to change, right? as it involves looking up for things to sign which include internal scripts?
< sipa>
hugohn: signingprovider will get extra fields yes
< sipa>
but they shouldn't really interact with descriptor wallets
< sipa>
jonasschnelli: but it also supports "uncoordinated new messages", because until there are 1000s of message types, the chance of collisions will be negligible
< sipa>
elichai2: i think BlueMatt's comment is mostly for the sake of other implementations
< jonasschnelli>
phantomcircuit: I get the point. But in our context, with the V2 encryption packet that is already vlen, this point has little weight IMO
< elichai2>
oh. but they will need to implement the variable size type/functions anyway for the rest of the stuff in bitcoin. so why not use the same type we use everywhere?
< BlueMatt>
elichai2: we actually dont for the headers
< sipa>
jonasschnelli: i'm personally perfectly fine with the approach you suggested, but given BlueMatt's criticism i found this idea of just shortening the field perhaps a useful middle ground
< jonasschnelli>
Yes. I think in general it's an acceptable idea... but...
< elichai2>
BlueMatt: oh really? ok. sorry, I don't know much about the p2p layer so i'll remove myself from this conversation ha
< BlueMatt>
jonasschnelli: the header is not, no? and its a different layer - net vs protocol/net_processing deserialization
< BlueMatt>
the net code isnt aware of what the message contains, or how to deserialize it
< BlueMatt>
and it is dead simple
< BlueMatt>
but, honestly, I dont think its *really* worth it. I find variable length headers distasteful given the million and one issues that have existed in implementing them, but if you really, really want to save 3 bytes, I'm not gonna argue for them
< jonasschnelli>
In current v2: the headers is the short-ID where ID 1-12 is a varlen string message-command
< BlueMatt>
dont think endlessly debating it is really worth it, that is
< BlueMatt>
right, yes, I was under the impression that was the only variable length in the message header/the part that net has to deal with
< * BlueMatt>
-> lunch
< sipa>
well, the payload is also variable size, obviously
< jonasschnelli>
BlueMatt: From a design perspective, I dislike variable-length headers as much as you do. But I think in our case it's fine and the complexity it adds is near 0.
< jonasschnelli>
sipa: excactly.
< BlueMatt>
the complexity it adds is, however, non-0, and the cost it removes *is* 0 :p
< hugohn>
sipa: descriptors do depend on the Signing Providers to expand their cache, so they interact somewhat? (which was my bug yesterday). But descriptors don't need to know how SPs generate the cache (and therefore won't need to change at all for Taproot to work).
< jonasschnelli>
BlueMatt: I disagree. :)
< sipa>
hugohn: yes, descriptors very much interact with signingproviders
< sipa>
hugohn: but the wallet code shouldn't
< jonasschnelli>
That's why I'd like to hear what the "non-0 complexity" is,.. why a vlen headers adds complexity in our case (not in a general case)
< jonasschnelli>
BlueMatt: ^
< hugohn>
sipa: right. ok it's clear to me now. thanks!
< sipa>
jonasschnelli: actually one issue about the variable-length header scares me a bit, namely that the encryption is designed to hide message types
< sipa>
though i need to read up on the openssh chacha/poly1305 design to understand it better
< jonasschnelli>
sipa: Yeah, that's maybe a point. Though, in most cases, you'll never ever use variable length messages.
< jonasschnelli>
BIP324 defines short ids (single byte command) for every message.
< jonasschnelli>
New messages could do the same,...
< sipa>
jonasschnelli: there may be coordination issues there
< jonasschnelli>
using string based message commands is more for extensions that don't want to deal with short IDs
< jonasschnelli>
sipa: eventually,... but as long as all new message types come with a short ID, I don't think there is.
< jonasschnelli>
The fallback to string base commands is unused in the ideal case
< jonasschnelli>
except if a fork-coin adds shit messages and doesn't bother with adding a short ID
< phantomcircuit>
jonasschnelli, it means you can have a simple bent pipe that isn't doing decryption but does understand the headers
< jonasschnelli>
And I do think 3 bytes matter. Especially since we know that almost 50% of our messages are <=64 bytes.
< jonasschnelli>
Which means 3 bytes may be ~5% of the message.
< jonasschnelli>
(BTCFlood though)
< jonasschnelli>
phantomcircuit: ehm?
< jonasschnelli>
If you can't do the decryption, you won't be able to read the payload length, and therefore would never get to the point where a fixed-length header would matter, no?
< sipa>
jonasschnelli: let me think things through the next days
< sipa>
hugohn: yw
< jonasschnelli>
sipa: Sure. Thanks for digging in!
< bitcoin-git>
[bitcoin] promag opened pull request #16494: wallet: Drop support to serialize OldKey (master...2019-07-drop-oldkey-serialize) https://github.com/bitcoin/bitcoin/pull/16494
< phantomcircuit>
jonasschnelli, the length is encrypted?
< jonasschnelli>
jup
< phantomcircuit>
i guess that makes sense
< phantomcircuit>
shrug
< BlueMatt>
jonasschnelli: to answer your question, in a general case, the way you write a network layer (eg the way bitcoin core does it today) is step 1) read message header, step 2) check stuff about the header, step 3) allocate buffer for the size it told you, step 4) read message. this keeps things nice and simple, you dont have much allocation complexity and you avoid most buffer overflow issues if you misallocate in your network stack.
< BlueMatt>
hence my (strongly-held) view that variable-length, when it is reasonably avoidable, is always bad protocol design. obviously there are tradeoffs, but more lines of code that allocate and then read into a buffer in async networking have been the source of such a huge number of buffer overflow vulns over the years....
< BlueMatt>
in this case its not as much an issue cause at least its only twice
< BlueMatt>
so, again, not worth endless debate around it, I'm happy to not care anymore.
< BlueMatt>
just answering your question of why
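The four-step read loop BlueMatt describes could be sketched like this; the field sizes and layout are illustrative, not Bitcoin's actual v1 header:

```python
# Fixed-length-header read loop: read header, sanity-check it, allocate
# exactly the advertised size, then read the body. Toy layout.
import io, struct

HEADER_FMT = "<12sI"                 # command + payload length (toy layout)
HEADER_LEN = struct.calcsize(HEADER_FMT)
MAX_SIZE = 4 * 1000 * 1000           # reject absurd allocations up front

def read_message(sock) -> tuple[bytes, bytes]:
    header = sock.read(HEADER_LEN)               # 1) read the fixed header
    command, length = struct.unpack(HEADER_FMT, header)
    if length > MAX_SIZE:                        # 2) check stuff about it
        raise ValueError("oversized message")
    payload = sock.read(length)                  # 3) allocate + 4) read body
    return command.rstrip(b"\x00"), payload
```

Because the header is constant-size, the only allocation driven by remote input is bounded by the checked length field.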
< jonasschnelli>
thanks! i totally agree with all of your points... but... we deal with inner messages (decrypted)
< jonasschnelli>
the buffer is allocated and the data has been read regardless of a fixed-length header or not (in our case of encrypted messages)
< sipa>
jonasschnelli: the 'encrypted length' field covers the sum of message type and payload, or not?
< sipa>
(i think it should)
< jonasschnelli>
the payload of an encrypted packet is [message command short ID + message payload]
< jonasschnelli>
so the encrypted length is on the network layer
< sipa>
ok, so the encrypted length is the length of the encrypted payload, not just the message payload?
< jonasschnelli>
yes
< BlueMatt>
jonasschnelli: I'm not sure you read my point? my pont was that most protocols have variable length bodies, but those are handled at a different level than the constant-length header.
< sipa>
BlueMatt: right, but here it is still a fixed length header + variable length message; the command is just the first few bytes of that variable length message
< sipa>
so there are no additional variable length buffers or so involved
< sipa>
just a decision where the command ends and where the message payload begins
< jonasschnelli>
yes.
< jonasschnelli>
eventually the v2 header is the 3 byte payload length
< jonasschnelli>
I see all the design points. but I don't think there are practical costs for a variable length message command in v2
< jonasschnelli>
(though I hope I'm right)
< sipa>
jonasschnelli: so... either you split the command from the payload on the fly (before you've validated the MAC), or you do it afterwards
< sipa>
if you do it on the fly there is a risk of creating a decryption oracle (not saying there is... but running any code based on decrypted but unauthenticated data is a red flag)
< sipa>
if you do it afterwards, it may be hard to avoid a copy
< jonasschnelli>
I see...
< sipa>
with fixed command size you could split on the fly without that risk
< jonasschnelli>
hmm?
< sipa>
it'd just be "first 3 bytes go here, other bytes go there"
< sipa>
without looking at what those bytes are
< sipa>
(3 or 4 or whatever)
< jonasschnelli>
I see
< sipa>
there actually is another solution: putting the information on variable length command or not in the length field
< sipa>
but that's... even less conventional
< jonasschnelli>
so.. you're saying: before we decrypt, we place the 4-byte command string into one buffer and the payload into another
< sipa>
if decryption is in-place, sure
< jonasschnelli>
then decrypt the fixed-length command and not need to deal with buffers anymore
< jonasschnelli>
I think this is a valid point
< sipa>
let me write things in pseudocode
< promag>
hi, make on depends/ gives "./google/protobuf/stubs/common.h:38:10: fatal error: 'assert.h' file not found" .. hints?
< jonasschnelli>
though, we could optimize for that case and avoid a copy if the message command is a 1-byte short ID
< jonasschnelli>
and if not, we do a copy.
< sipa>
jonasschnelli: copy should always be avoidable
< jonasschnelli>
yes.
< phantomcircuit>
sipa, putting the mac after the data is just asking for implementers to decrypt the payload before checking the mac
< jonasschnelli>
I need to go through the code...
< phantomcircuit>
(although i do understand it being after makes it easier to stream data out)
< sipa>
phantomcircuit: but it's also the only option if you want to avoid a copy
< phantomcircuit>
sipa, uh why
< phantomcircuit>
wait a copy on the sender or a copy on the receivers side?
< sipa>
phantomcircuit: you'd need to buffer the unencrypted data when sending
< sipa>
or the encrypted data, also possible
< phantomcircuit>
sipa, it's probably getting buffered anyways but in smaller pieces
< sipa>
maybe
< phantomcircuit>
either way objects being serialized are going to be copied into a buffer
< sipa>
sure, but putting MAC first means an additional buffering either way on top of whatever else is being done
< phantomcircuit>
but there's no easy way as of now i think to serialize directly into the buffer you're going to use for the send() call
< sipa>
ok, fair, put otherwise: you're going to need to go over your data twice
< phantomcircuit>
i think right now you'd do something like, serialize a bunch of stuff into a buffer forming an unencrypted message, copy chunks for chacha20 and mac, then send the chunk and the mac after you've processed all the chunks
< phantomcircuit>
but if you just do the chacha20 encryption of the buffer with the unencrypted message in place you can easily calculate the mac before without a copy
< sipa>
the mac is computed over the encrypted data
< phantomcircuit>
yeah that's what i said
< sipa>
so you can do one pass of encrypting-in-place + mac, send mac, and then go back over your encrypted data and send it out
< sipa>
if the mac goes last, you can interleave encryption and sending
< phantomcircuit>
yeah but you cant deallocate anything anyways so it's just a very small latency win
< sipa>
for large messages it may matter, but agree it's a small point
< phantomcircuit>
as long as it's very clear you cant decrypt things before checking the mac im sure it's fine... but doing so is going to look like a performance/latency win to a naive implementer
< phantomcircuit>
i mean our largest message is like 2MB right?
< sipa>
well you very much can decrypt before checking mac
< sipa>
you just can't use the data before doing so
< phantomcircuit>
chacha20 + mac over 2MB on even an rpi3 is going to be basically instant
< sipa>
or can't make any decision that the other party can observe based on that decrypted data even
< phantomcircuit>
right
< phantomcircuit>
i cant see that being an issue in most languages... but things that are very very "async" might try to decrypt, consume and calculate the mac in parallel
< sipa>
phantomcircuit: 7.5 ms on a RK3399 ARM64 CPU to decrypt/auth 1MB of data
< sipa>
that's not nothing
< sipa>
the poly1305 code can be optimized more, though
< jonasschnelli>
sipa: I don't think we need to copy
< jonasschnelli>
We have a CDataStream where the encrypted payload sits. Then we decrypt that "buffer" with ChaCha (after the MAC check). We read the possibly variable-length command from the now-decrypted CDataStream,... then directly deserialize from that CDataStream
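That no-copy flow could be sketched with a minimal stand-in for CDataStream (the names here are hypothetical):

```python
# One buffer holds the decrypted payload; reading the (possibly
# variable-length) command just advances an offset, and message
# deserialization continues from the same buffer.
class Stream:
    """Minimal stand-in for CDataStream: one buffer, one read offset."""
    def __init__(self, data: bytes):
        self.buf = memoryview(data)   # no copy of the underlying bytes
        self.pos = 0

    def read(self, n: int) -> memoryview:
        out = self.buf[self.pos:self.pos + n]
        self.pos += n
        return out

def read_command(s: Stream) -> str:
    first = s.read(1)[0]
    if 1 <= first <= 12:                   # varlen string command
        return bytes(s.read(first)).decode("ascii")
    return f"shortid-{first}"              # short-ID table lookup in real code

# After read_command(), the same Stream (with pos already past the
# command) is handed to message deserialization: no second buffer.
```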
< sipa>
jonasschnelli: yeah, that's fair
< jonasschnelli>
Looking at the (old) V2 code,... I think there is no copy involved
< sipa>
it'd be nice to be able to decrypt on the fly though, so if you have a slow CPU + slow network, the message is already mostly decrypted by the time the last part of the packet arrives
< jonasschnelli>
So the variable length message command is more or less part of the message (which is vlen anyways)
< sipa>
and you can, and it's probably fine... just a bit scary
< jonasschnelli>
sipa: decrypting on the fly is orthogonal to the fixed length command string, right?
< sipa>
ah i see, with that CDataStream trick it is
< sipa>
you effectively leave the command inside the CDataStream you give to the net processing code, but with the offset placed after it
< jonasschnelli>
yes
< sipa>
yup, agree
< jonasschnelli>
Also, the important part of "on the fly" is probably precomputing the ChaCha20 stream up to a certain "cache" size.
< jonasschnelli>
We then eventually only do xors in the network code
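A sketch of that precompute-then-XOR idea, with a hash-based stub standing in for the real ChaCha20 keystream (class name and structure are hypothetical):

```python
# Keystream is generated ahead of time, off the hot path; the network
# thread then only XORs bytes as they arrive.
import hashlib

class PrecomputedCipher:
    def __init__(self, key: bytes, cache_size: int = 4096):
        self.key, self.counter, self.cache = key, 0, b""
        self.refill(cache_size)

    def refill(self, n: int) -> None:
        # off the hot path; real code would run ChaCha20 here
        while len(self.cache) < n:
            block = hashlib.sha256(
                self.key + self.counter.to_bytes(8, "little")).digest()
            self.cache += block
            self.counter += 1

    def xor_arrived(self, chunk: bytes) -> bytes:
        # hot path: consume precomputed keystream, XOR, done
        ks, self.cache = self.cache[:len(chunk)], self.cache[len(chunk):]
        return bytes(a ^ b for a, b in zip(chunk, ks))
```

Since XOR-with-keystream is its own inverse, two instances initialized with the same key can encrypt and decrypt symmetrically.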
< sipa>
heh, good point
< sipa>
ok, i agree that the ability to decode on the fly isn't an argument against variable length commands
< jonasschnelli>
The remaining question is whether the variable length command increases message type detectability
< sipa>
not if the encrypted length covers the command as well
< jonasschnelli>
Yes. It does. So I also think it's not an argument.
< jonasschnelli>
Which makes me again think saving 3 bytes is worth the variable length command
< sipa>
i like the perspective of "the command is just the first bytes of the message payload, and we detect the command after allocating/receiving/decrypting/authenticating the whole payload"
< jonasschnelli>
Maybe one could think passing a CDataStream to the next layer with a pointer not pointing to the buffer start is ugly stuff
< jonasschnelli>
sipa: Yes. I think that definition clears up things
< sipa>
so i'm fine with whatever
< jonasschnelli>
Somehow I care about those 3 bytes... maybe wrongly.
< sipa>
including BlueMatt's suggestion earlier of only supporting 1-byte and 12-byte commands
< jonasschnelli>
Though there is a yes/no whether a V2 inv is larger than a V1 inv.
< jonasschnelli>
though with a 4-byte fixed-length command, it would still be 1 byte smaller than V1.
< bitcoin-git>
[bitcoin] MarcoFalke opened pull request #16497: gui: Generate bech32 addresses by default (take 2, fixup) (master...1907-guiBech32Take2) https://github.com/bitcoin/bitcoin/pull/16497
< sipa>
gleb: have you seen the discussion last thursday on changes to wallet rebroadcasting? i wonder if you have any thoughts on interaction with erlay
< bitcoin-git>
[bitcoin] TheBlueMatt reopened pull request #16324: Get cs_main out of the critical path in ProcessMessages (master...2019-07-peerstate-initial-moves) https://github.com/bitcoin/bitcoin/pull/16324
< BlueMatt>
^ seeking concept review :)
< BlueMatt>
also seeking ibd benchmarks from multiple peers....(jamesob :p)
< gleb>
sipa: I think it would work right away by just applying the same reasoning and looking at it as if the transaction was new
< sipa>
gleb: but if different nodes had inconsistent rebroadcasting rules (or even randomization in whether/when to try rebroadcasting), erlay would suffer
< gleb>
Oops, you are right, it would announce already known transactions potentially.
< gleb>
There would be 2 kinds of nodes: those which have it in the mempool and those which don't. For the latter we can't do anything — we can only apply the default protocol (flooding/erlay) (unless we pass a flag along with a tx, but we're not gonna do this)
< gleb>
For the former — I guess according to the rebroadcast reasoning they won't re-relay anything if it's received from somewhere (only if their own timer triggers).
< gleb>
So the question is really how we should act at the node which triggers rebroadcast.
< sipa>
an alternative to rebroadcasting is of course mempool (full mempool, not relay) reconciliation...
< gleb>
Which is essentially just batching rebroadcasts over an hour or whatever.
< gleb>
Yeah, I think exchanging sketches of rebroadcasts every hour or so might work. It's really mempool reconciliation done right (no point to reconcile obviously known to both parties fresh txs or low-fee garbage)
< gleb>
I'm not sure the estimation would work as well as in Erlay — I can't tell how often rebroadcasts would happen, how different the rules will be, and all that.
< sipa>
it can be orthogonal of course; post-erlay the rebroadcasts can just stay normal invs too
< sipa>
though if their bandwidth is considerable, we'll want a reconciliation based solution, whether by integrating into erlay, or by having a separate independent reconciliation step for rebroadcasts
< gleb>
Yeah. I think we can't do much if rebroadcast uses regular invs though, as I explained above w.r.t 2 kinds of nodes. Anything coming to my mind about "integrating into erlay" is really complicated separate things.
< gleb>
So in this case, if Erlay's estimation formula adjusts to this stuff we're good; otherwise we might lose a bit of efficiency.
< gleb>
In the case of regular inv rebroadcast *
< phantomcircuit>
BlueMatt, it's 10% faster to skip the bip30 checks
< BlueMatt>
hmm, fun.
< BlueMatt>
phantomcircuit: well you at least need to fix sdaftuar's comment on the pr, but mostly I'm highly dubious of reorg conditions around the bip30 shit
< BlueMatt>
if you write a test that tests shit like that, I'll be happy :p
< BlueMatt>
fanquake: care2merge #16433
< gribble>
https://github.com/bitcoin/bitcoin/issues/16433 | txmempool: Remove unused default value MemPoolRemovalReason::UNKNOWN by MarcoFalke · Pull Request #16433 · bitcoin/bitcoin · GitHub
< jonasschnelli>
BlueMatt: hell.. It's 4am for poor fanquake. Let me have a look. :)
< BlueMatt>
oh, ehh, bad timezone conversion, was thinking it was later than it was
< sipa>
I thought fanquake lived in a timeless realm
< bitcoin-git>
bitcoin/master 39763b7 Jonas Schnelli: Merge #16433: txmempool: Remove unused default value MemPoolRemovalReason:...
< BlueMatt>
jonasschnelli: right, I mean no reason to need or care about anoher ack on something that trivial
< jonasschnelli>
sure
< jimpo>
I assume the concern is some pathological case where there's a stale chain where some of the early stale blocks are more than 1 month old but the latest blocks in the chain are less than 1 month old?
< BlueMatt>
jimpo: right, that's what I was referencing
< jimpo>
In that case, it's plausible that the node has been online less than a month and synced up the earlier stale blocks, since the chain is "active" in some sense
< BlueMatt>
jimpo: yea, doesn't matter
< BlueMatt>
no, specifically it doesn't matter because it doesn't reveal information
< BlueMatt>
if you are BLOCK_VALID_SCRIPTS at the top of the fork, then you are clearly also BLOCK_VALID_SCRIPTS everywhere down the fork path
< BlueMatt>
so hiding the fact that you have those blocks... well... you're not hiding anything
< jimpo>
well yes, that too
< jimpo>
I mean, I still think having the BlockRequestAllowed check in addition to the BLOCK_VALID_SCRIPTS check is valuable
< jimpo>
but just on the stop block should be fine
< BlueMatt>
hmm? no my point with that comment was that BlockRequestAllowed already does the BLOCK_VALID_SCRIPTS check for you
< BlueMatt>
so....no reason to call it again
< jimpo>
yes, I agree with that
< jimpo>
I am removing the redundant check
< jimpo>
but it's still better to do the full BlockRequestAllowed check instead of just the BLOCK_VALID_SCRIPTS check
< BlueMatt>
not just better, you must
< jimpo>
just to prevent fingerprinting, right? Or is there another reason?
< BlueMatt>
right
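The logic BlueMatt and jimpo are converging on can be sketched as follows. This is a hypothetical simplification (names, dict shape, and the one-month constant are illustrative, not Bitcoin Core's actual `BlockRequestAllowed`): the request check subsumes the validity check, so callers don't re-check BLOCK_VALID_SCRIPTS separately, and it refuses old stale blocks to avoid fingerprinting.

```python
# Hypothetical sketch of an anti-fingerprinting block-request check:
# serve a block only if (a) its scripts validated, and (b) it is not a
# stale block old enough that admitting we have it identifies our node.

ONE_MONTH = 30 * 24 * 60 * 60  # seconds; illustrative threshold

def block_request_allowed(block, tip_time):
    # Validity is checked here, so callers need not repeat the
    # BLOCK_VALID_SCRIPTS check (BlueMatt's point above).
    if not block["valid_scripts"]:
        return False
    # Refuse sufficiently old blocks off the active chain.
    if not block["in_active_chain"] and tip_time - block["time"] > ONE_MONTH:
        return False
    return True
```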
< phantomcircuit>
BlueMatt, either I'm missing something or this will cause reorg issues in exactly the same way an invalid script would
< phantomcircuit>
except that it doesn't let a miner steal coins just destroy them
< phantomcircuit>
so like
< phantomcircuit>
why
< BlueMatt>
phantomcircuit: ehhh, the cache flushing shit is....complicated
< BlueMatt>
phantomcircuit: go read the comment(s) there, and then test it anyway :P
< phantomcircuit>
BlueMatt, there's no cache flushing issues
< phantomcircuit>
the BIP30 checks are literally always a cache miss down to disk unless the block's invalid
< phantomcircuit>
sdaftuar, did i read your comment right?
< jimpo>
BlueMatt: To be clear, you agree that there's no need to additionally check BlockRequestAllowed on the start block right? Is your reasoning the same as mine?
< phantomcircuit>
BlueMatt, oh and the 10% is probably the smallest improvement possible; an rpi with a spinning disk would almost certainly see more than 10%
< BlueMatt>
jimpo: thats my understanding yes, not sure what your reasoning is
< BlueMatt>
phantomcircuit: ah, right, sorry, I'm mis-thinking about this
< phantomcircuit>
BlueMatt, wait does leveldb do a bloomfilter lookup for key existence?
< sipa>
yes
< phantomcircuit>
so maybe it doesn't do a disk lookup, and instead it's just the cost of the bloomfilter that's being saved?
< sipa>
yes
< sipa>
and the bloomfilter is likely cached in ram
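Why a negative existence lookup can be nearly free: a Bloom filter answers "definitely not present" with a few in-RAM bit tests and no disk read, which is what makes the BIP30 txid checks above cheap when LevelDB's filter is cached. A minimal sketch of the data structure (sizes and hash scheme are illustrative, not LevelDB's actual filter policy):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: 'no' answers are definitive, 'yes' may be a
    false positive. A miss means no disk lookup is needed at all."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _positions(self, key: bytes):
        # Derive num_hashes bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            h = hashlib.sha256(bytes([i]) + key).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, key: bytes):
        for p in self._positions(key):
            self.bits[p // 8] |= 1 << (p % 8)

    def maybe_contains(self, key: bytes) -> bool:
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(key))
```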
< jonasschnelli>
#proposedmeetingtopic bitcoin-dev mailing list moderation
< BlueMatt>
oh boy
< jonasschnelli>
heh... don't fear
< jonasschnelli>
It's just about ramping up more people so we can have actual debates without +48h lags
< phantomcircuit>
sipa, well it's most definitely faster without the 34m lookups at least
< phantomcircuit>
it also made generating (very bad) data on leveldb Get calls vs dbcache easier
< emilengler>
I can't figure out where the bitcoin-qt RPC console detects when Ctrl-L is pressed. Using grep I've only found how Ctrl-Q is handled. Can someone tell me where to find it?
< fanquake>
promag: it’ll be a missing SDK, assuming you’re building for macOS
< bitcoin-git>
[bitcoin] instagibbs opened pull request #16500: GetFee should round up to avoid undershooting feerate (master...fix_filter_mempool_mistmatch) https://github.com/bitcoin/bitcoin/pull/16500
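The rounding issue #16500 addresses can be shown in a couple of lines: computing an absolute fee from a sat/kvB fee rate with truncating integer division can undershoot the intended rate, while ceiling division cannot. This is an illustrative sketch, not Bitcoin Core's `CFeeRate` code.

```python
# Illustrative fee computation from a fee rate in sat per 1000 vbytes.

def get_fee_floor(feerate_sat_per_kvb: int, vsize: int) -> int:
    # Truncating division: the resulting fee can imply a rate slightly
    # below feerate_sat_per_kvb, undershooting e.g. a relay minimum.
    return feerate_sat_per_kvb * vsize // 1000

def get_fee_ceil(feerate_sat_per_kvb: int, vsize: int) -> int:
    # Ceiling division (-(-a // b)): never undershoots the target rate.
    return -(-(feerate_sat_per_kvb * vsize) // 1000)
```

For example, at 999 sat/kvB and 100 vbytes, truncation yields 99 sat (an effective 990 sat/kvB), while rounding up yields 100 sat.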
< bitcoin-git>
[bitcoin] ronaldstoner opened pull request #16501: Update configure.ac to remove extra no in $use_tests (master...master) https://github.com/bitcoin/bitcoin/pull/16501
< bitcoin-git>
[bitcoin] ronaldstoner closed pull request #16501: Update configure.ac to remove extra no in $use_tests (master...master) https://github.com/bitcoin/bitcoin/pull/16501
< fanquake>
promag: oh, did you recently update your macOS or Xcode?
< promag>
fanquake: nope, I don't recall doing it, at least recently
< promag>
why?
< fanquake>
In newer versions of Xcode some of the headers our depends packages rely on aren’t where they used to be, and there’s a script you can run that’ll put them back. See this comment: https://github.com/bitcoin/bitcoin/pull/16392#issuecomment-515268092 and the one I’m referencing.
< fanquake>
Otherwise unsure atm.
< promag>
should it work if I have the previous sdk?
< promag>
fanquake: ^
< fanquake>
promag: no, in that PR it’ll be looking for the newer one. Thinking about that, we could drop the requirements.
< fanquake>
Although, at least it’s easy to download and extract the new SDK on a mac.
< phantomcircuit>
BlueMatt, what am i missing
< bitcoin-git>
[bitcoin] ariard opened pull request #16503: Remove p2pEnabled from Chain interface (master...2019-07-remove-p2p-chain) https://github.com/bitcoin/bitcoin/pull/16503