< BlueMatt>
anyone around who might have a node which restarts occasionally, could you grep for "prev block not found" in debug.log and report the frequency?
< jonasschnelli>
Holy! The GitHub traffic on weekends is extremely intense. Catching up after a weekend (people were supposed to not work on weekends... once) takes 1-2h! :)
< sipa>
weekends are highly overrated
< jonasschnelli>
Yeah! Tell that to my wife. :)
< GitHub84>
[bitcoin] promag opened pull request #7857: Add fee to fundrawtransaction (master...enhancement/add-fee-to-fundrawtransaction) https://github.com/bitcoin/bitcoin/pull/7857
< BlueMatt>
hmm, the sendheaders stuff is pretty bitcoin-core-specific
< BlueMatt>
we DoS a node for sending us a header that doesnt connect to our chain, because bitcoin core keeps track of what it thinks its peers have for headers (and assumes they never throw away headers)
< BlueMatt>
sipa: ^
< BlueMatt>
if, for example, a peer didnt want to keep track of what it thought we had in our block-headers-tree, it couldnt implement sendheaders
< sipa>
you can always send all headers along the new branch you're switching to
< BlueMatt>
also, we assume a node doesnt have a given set of headers based on only the most recently received or sent hash/header...this works fine for non-forked chains, but if the chain forks at the tip, we may very well fall back to sending invs
< BlueMatt>
sipa: but we dont do this
< BlueMatt>
sipa: also, why not just remove the DoS?
< BlueMatt>
sipa: and if it doesnt connect, just send a getheaders for further-back headers
< sipa>
which case are you talking about, a non-core peer sending headers to core, or the other way around?
< BlueMatt>
in this case I'm talking about core-to-core
< BlueMatt>
from my (not so in depth) reading, this looks like it can very easily fall back to invs if the chain is forked near the tip
< sipa>
it should send all new headers along the new branch
< sipa>
unless there are too many, in which case it intentionally falls back to using invs
< BlueMatt>
eg we just got a new block on our chaintip B (based on A), but we only know they have C (which we may not even have, but lets say its based on A)
< sipa>
then we send the header for B
< BlueMatt>
I dont think it will? It will only ever send invs/headers that were added to vBlockHashesToAnnounce
< BlueMatt>
no, because B wont be in vBlockHashesToAnnounce
< BlueMatt>
only D (based on B) will be
< BlueMatt>
in any case, it is much more flexible to handle the headers-msg-that-didnt-connect case by requesting headers with our locator and seeing what they come back with, instead of giving that peer DoS points
< BlueMatt>
would make it much easier for other implementors (they can skip all block inv sending, though that would be a bit less efficient)
< sipa>
hmm, i'd still like to understand the use case
< BlueMatt>
sipa: the use-case is simpler code for other implementors (or maybe us in the future)
< sipa>
we have D-B-A as tip, a peer has C-A as tip
< BlueMatt>
sipa: they could even skip the inv logic
< BlueMatt>
sipa: that is a separate bug
< sipa>
let's talk one thing at a time
< sipa>
but the peer will have the headers for D and B already
< sipa>
eh, B
< sipa>
so just sending D should be fine?
< BlueMatt>
this only works if either the last header we sent them or the last header they sent us is B
< sipa>
it's possible that B was sent just as an inv
< sipa>
and the peer hasn't getdata'd yet
< BlueMatt>
no, its worse
< BlueMatt>
they could have had B already from another peer
< BlueMatt>
well, yea, i suppose it would be an inv, then
< BlueMatt>
but they wouldnt request it
< sipa>
we would still have either sent an inv or a header, indeed
< BlueMatt>
anyway, minor bug, just means if the chain is forked when you first connect to a peer, you will probably send the block as an inv instead of a header
< BlueMatt>
butttt, this does feed into my more general protocol suggestion/question here, which is that we seem to have half-assed the sendheaders stuff....I can get that if you're sending a lot you might prefer invs over headers, but if you're only sending a few, this semi-duplicated logic doesnt make sense to me
< BlueMatt>
why not just always use headers and if you got headers that you cant connect, treat them as invs and request more headers
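
A minimal sketch of the alternative being proposed here, with illustrative names rather than the actual net_processing code: on a headers message that does not connect, request more headers with a locator instead of assigning DoS points.

    // Hypothetical handler sketch, not Bitcoin Core's actual code: when the
    // first announced header does not attach to our block index, treat the
    // message like an inv and ask the peer to fill the gap with a locator,
    // rather than penalizing it.
    void ProcessHeadersMessage(CNode* pfrom, const std::vector<CBlockHeader>& headers)
    {
        if (headers.empty()) return;
        if (!HaveBlockIndex(headers[0].hashPrevBlock)) {  // HaveBlockIndex is illustrative
            // Old behavior this thread objects to:
            //   Misbehaving(pfrom->GetId(), 20);
            // Alternative: ask for headers from our best header back to a
            // common ancestor, and let the peer's response reconnect us.
            PushGetHeaders(pfrom, chainActive.GetLocator(pindexBestHeader));  // illustrative
            return;
        }
        // ... otherwise process the connecting headers as usual ...
    }
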
< wumpus>
oh, that's too bad, I saw a "Merge button" setting under repository settings, I thought because we use a custom script we could disable it entirely. Unfortunately "You must select at least one option"
< wumpus>
going to request this
< sipa>
wumpus: that will be interesting :)
< wumpus>
jonasschnelli: hah! and remember that if you take all time zones into account, I think there's only one full day it's 'weekend' everywhere
< Chris_Stewart_5>
Is CScriptNum(0) bitwise equivalent to OP_0?
< Chris_Stewart_5>
aka CScriptNum(0) OP_0 OP_EQUAL would push an OP_TRUE on the stack?
< Chris_Stewart_5>
sanity check for my own sake..
< sipa>
CScriptNum(0) is a C++ expression
< sipa>
OP_0 is a script operator
< sipa>
OP_0 pushes an empty bytearray onto the execution stack
< sipa>
which, if fed as input to CScriptNum will indeed produce the same as feeding it the number 0
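
As a quick sanity check of what sipa describes, a small standalone sketch using the real CScriptNum class (the test program itself is illustrative):

    #include <cassert>
    #include <vector>
    #include "script/script.h"  // CScriptNum, as in Bitcoin Core

    int main()
    {
        // OP_0 pushes an empty byte vector; decoding that vector as a
        // CScriptNum yields the same value as constructing it from 0.
        std::vector<unsigned char> empty;                   // what OP_0 pushes
        assert(CScriptNum(empty, false) == CScriptNum(0));  // both are zero
        return 0;
    }
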
< Chris_Stewart_5>
interesting. Is that just something we have to live with since Satoshi put it there, or have we put that in after the fact?
< Chris_Stewart_5>
but the little script I gave above would evaluate to OP_FALSE then, since an empty byte vector != 0, right?
< Chris_Stewart_5>
Seems like CScriptNum(OP_0) is a little strange.. surprised that even type checks
< sipa>
Chris_Stewart_5: "CScriotNum(0) OP_0 OP_EQUAL" has no meaning, i cannot answer your question
< sipa>
CScriptNum(0) is C++
< sipa>
it is not something that makes sense in a bitcoin script
< sipa>
Chris_Stewart_5: CScriptNum(OP_0) unfortunately works due to C++ type conversion (OP_0 is part of the opcodes enum, and gets automatically converted to an int when passed to CScriptNum's constructor)
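
To make the pitfall concrete (the opcode values below are from the real opcodetype enum; the snippet is illustrative):

    // OP_0 happens to be 0x00 in the opcodetype enum, so CScriptNum(OP_0)
    // equals CScriptNum(0) only by coincidence. The implicit enum-to-int
    // conversion is the trap: OP_1 is 0x51, so the second line compiles
    // but the value is 81, not the script number 1.
    CScriptNum a(OP_0);  // value 0, by accident
    CScriptNum b(OP_1);  // value 81 (0x51), NOT 1
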
< Chris_Stewart_5>
sipa: is the number 0 and the empty byte vector bitwise equivalent in Script? I think that is what I was really trying to ask
< Chris_Stewart_5>
I'm just trying to understand how exactly it is handled, it seems in the interpreter these lines are how it is interpreted
< Chris_Stewart_5>
I'm not sure how the idea of 0 is handled in most PLs, is it represented as 0x00 or just an empty byte vector... I guess i'll have to do more research on that later :-)
< sipa>
Chris_Stewart_5: in most languages, numbers and byte arrays are different data types, so the question does not even arise
< sipa>
but in Script, yes, an empty stack element is interpreted as the number 0 when fed to an operator that expects a number
< Chris_Stewart_5>
Thanks!
< sipa>
there are other valid serializations for 0, though.
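
For instance, without minimal-encoding enforcement several distinct byte vectors all decode to zero; a short illustration using the real CScriptNum constructor:

    // With fRequireMinimal = false, each of these decodes to the number 0:
    //   {}        the empty vector (minimal form, what OP_0 pushes)
    //   {0x00}    zero with an explicit byte
    //   {0x80}    "negative zero" (sign bit set, magnitude zero)
    std::vector<unsigned char> negzero{0x80};
    assert(CScriptNum(negzero, false) == 0);  // fRequireMinimal = true would throw
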
< Chris_Stewart_5>
sipa: Are you planning on doing something similar to tx_valid/invalid.json as you did with your recent pull request?
< Chris_Stewart_5>
wrt to script_valid/invalid.json
< sdaftuar>
gmaxwell: sipa: i was looking at 7840, and i think i must not understand gmaxwell's comment about "by eliminating queued entries from the mempool response and responding only at trickle time, this makes the mempool no longer leak transaction arrival order information..."
< sdaftuar>
it seemed to me that if we are responding to mempool messages only at trickle time, then we might as well include everything and clear the relay queue for that peer instead?
< sipa>
sdaftuar: that would allow you to bypass the rate limiting by continuously requesting mempool
< gmaxwell>
you already bypass the rate limit via the mempool response itself though.
< sdaftuar>
ah
< sdaftuar>
but with my approach, you can learn about more than just 35 transactions or whatever
< gmaxwell>
At least my view is that it's not necessary to enforce the limit in that case. The limit is there to soften traffic surges when floods of new transactions come in. If you're making a mempool request, you clearly don't care. ... the alternative would be to add the mempool to the inv-to-send set and subject it to the rate limiter... but that would make mempool requests rather slow. :)
< sipa>
that makes sense
< sdaftuar>
so if we're not concerned about rate limiting being weakened by a mempool-attacker, is there any downside to responding with everything?
< gmaxwell>
I think it should respond with everything*, and remove them from the set-to-send as it goes. (*ignoring my general complaint about there existing any P2P message that sends megabytes of response to a single query)
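
A rough sketch of that behavior, with hypothetical helper names (only setInventoryTxToSend resembles the actual per-peer set introduced around #7840):

    // Hypothetical: answer a "mempool" request at trickle time by sending
    // everything that passes the peer's current filters, removing each txid
    // from the pending announcement set so it is not announced twice.
    void RespondMempool(CNode* pto, const std::vector<uint256>& vMempoolTxids)
    {
        for (const uint256& txid : vMempoolTxids) {
            if (!PassesFeeFilter(pto, txid)) continue;   // hypothetical helper
            if (!PassesBloomFilter(pto, txid)) continue; // hypothetical helper
            pto->setInventoryTxToSend.erase(txid);       // clear from relay queue
            PushTxInv(pto, txid);                        // hypothetical helper
        }
    }
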
< sdaftuar>
ok that makes sense to me, though i don't know if there's any point to relaying something that has been evicted in between deciding to relay and trying to relay
< sdaftuar>
it's an unlikely edge case regardless
< gmaxwell>
It probably shouldn't relay it.
< sdaftuar>
actually, with the rate limiting, we should probably remove things that are mined or evicted prior to relay, right?
< sdaftuar>
it's conceivable that you could have a big backlog if someone dumped their mempool at you
< gmaxwell>
Not a PR yet because I wanted pieter's input before going and ripping out the feerate from the RelayTransaction prototype everywhere.
< gmaxwell>
So that applies the bloom and fee-filter right before relaying, not on the queue... so the effects of changing these filters are not delayed.
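
Sketched as a trickle-time loop (names hypothetical), the point is that the filters run when a transaction is popped for sending, not when it was queued, so a filter change takes effect immediately:

    // Each trickle interval: pop queued txids and apply the peer's *current*
    // bloom and fee filters at send time.
    while (!pto->setInventoryTxToSend.empty() && nSent < nMaxPerTrickle) {
        uint256 txid = PopNextToSend(pto);  // hypothetical: highest feerate first
        if (!PassesFeeFilter(pto, txid) || !PassesBloomFilter(pto, txid))
            continue;  // filtered now, not at queue time
        PushTxInv(pto, txid);
        ++nSent;
    }
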
< sdaftuar>
that seems nice, and makes the code clearer too i suspect
< sdaftuar>
i guess at the expense of slightly more memory, especially for bloom-filtering peers?
< sdaftuar>
i'm not sure i understand the privacy gain though
< gmaxwell>
Very slightly, but it doesn't increase the worst case.
< sdaftuar>
agreed
< gmaxwell>
sdaftuar: without that change, I can rotate my filtering to 'time tag' arrivals.
< gmaxwell>
So even though the queuing destroys timing information, by modulating my filtering I could restore some of it.
< sdaftuar>
assuming a single peer connection?
< sdaftuar>
or multiple?
< gmaxwell>
If it's done with just a single peer, it means you'll miss some transactions. More effective with multiple.
< gmaxwell>
But it's justifiable just on the basis of making the relay more accurate. Likewise, something that was evicted shouldn't get relayed. Though that's corner-case enough that I don't know if it's worth checking.
< sdaftuar>
ok i think i see what you're talking about, makes sense.
< sdaftuar>
do you think it's worth trying to drain a peer's tx relay set faster if there's a big backlog? i'm a little worried that this thing is bounded only by the size of the mempool, and drains at a fixed rate
< sdaftuar>
100k transactions would take 4 hours or so, and in the meantime you're using a bunch of memory and slowing down the heapify, i suppose
< gmaxwell>
I'd rather drop stragglers than drain much faster.
< sdaftuar>
yeah that seems better
< sdaftuar>
otherwise a peer who dumps you garbage ruins relay for everyone...
< gmaxwell>
right now someone flooding at the boundary of what you'll accept turns you into a big relay amplifier, spamming people with junk. The fee filter will help that a lot, but not completely. I was thinking about using a map on id->{depth,feerate,insert time}, and using the first two for sorting (without taking the mempool lock), the feerate for filtering (again no mempool lock), and the insert time
< gmaxwell>
to expire things that sat around too long.
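
A sketch of the structure gmaxwell floats here; entirely hypothetical, nothing in this form was merged:

    // Per-peer relay queue entry keyed by txid. Depth and feerate are cached
    // so sorting and fee-filtering need no mempool lock; the insert time lets
    // stragglers be expired instead of draining the queue faster.
    struct QueuedTx {
        unsigned int nDepth;  // ancestor depth, used with feerate for ordering
        CFeeRate feerate;     // cached feerate, checked against the peer's feefilter
        int64_t nInsertTime;  // when queued; expire if it sits around too long
    };
    std::map<uint256, QueuedTx> mapTxToSend;

    // Expiry pass: drop anything queued longer than nTimeout seconds.
    void ExpireStale(std::map<uint256, QueuedTx>& m, int64_t now, int64_t nTimeout)
    {
        for (auto it = m.begin(); it != m.end();) {
            if (now - it->second.nInsertTime > nTimeout) it = m.erase(it);
            else ++it;
        }
    }
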
< JackH>
!seen bramc
< gribble>
I have not seen bramc.
< GitHub88>
[bitcoin] sdaftuar opened pull request #7862: Use txid as key in mapAlreadyAskedFor (master...inv-to-txid-mapalreadyaskedfor) https://github.com/bitcoin/bitcoin/pull/7862
< Chris_Stewart_5>
Why does the DISCOURAGE_UPGRADABLE_NOPS flag allow a OP_NOP?
< Chris_Stewart_5>
i.e. this test case ["1", "NOP", "P2SH,STRICTENC,DISCOURAGE_UPGRADABLE_NOPS", "OK", "Discourage NOPx flag allows OP_NOP"]
< instagibbs>
OP_NOP isn't in the set of NOPx
< instagibbs>
although that probably doesn't answer your question
< instagibbs>
OP_NOP is 0x61 while the other NOPs are 0xb0-0xb9, so they were probably created for different reasons
< instagibbs>
Meaning OP_NOP will probably not be relied on for future softforks, especially since it isn't currently discouraged
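
For reference, the relevant values, excerpted from the real opcodetype enum in script/script.h:

    enum opcodetype {
        OP_NOP   = 0x61,  // original no-op; the interpreter executes it freely,
                          // and DISCOURAGE_UPGRADABLE_NOPS does not cover it
        // ...
        OP_NOP1  = 0xb0,  // NOP1..NOP10 occupy 0xb0..0xb9; this range is
        OP_NOP10 = 0xb9,  // reserved for soft forks and is what the flag rejects
    };
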