< kanzure>
"The must be at least one output whose scriptPubKey is a single 36-byte push" word missing?
< phantomcircuit>
sipa: the three preparation commits in 7910 seem like they could be separate pr's
< phantomcircuit>
(and appear to be trivial to review as such)
< phantomcircuit>
as well as the "don't check genesis block" commit
< phantomcircuit>
any reason they aren't proposed as separate pr's?
< NicolasDorier>
question: during a headers-first sync, is chainActive updated to the last block as soon as the header chain is received and then rewound if a block turns out to be invalid, or is chainActive only updated as blocks are verified?
< sipa>
NicolasDorier: chainActive is the best known fully validated chain
< NicolasDorier>
thanks
< sipa>
phantomcircuit: they are
< sipa>
phantomcircuit: they're in #7749
< murch>
btcdrak: I saw that you last updated the SegWit adoption overview: https://bitcoincore.org/en/segwit_adoption/ A lot of discussion has been linking to that table; is there a way that one could get a status update from the other projects, so that the table is up to date as SegWit discussion recommences right now?
< murch>
btcdrak: What I meant is: is there some sort of communication channel that could be used to get companies to check whether they should be updating their status on the table? I think it would help show that secondary work on SegWit is progressing.
< btcdrak>
murch not sure I follow you.
< murch>
btcdrak: E.g. adam3us has referred to the table 16 days ago (https://www.reddit.com/r/Bitcoin/comments/4d3pdg/clearing_the_fud_around_segwit/d1nxxq6) as showing that secondary projects are working on SegWit. I've called him out on that, because the table doesn't actually show that. (I'm Xekyo on reddit). I would like to propose that development teams listed on that table get nudged to update their status, because it is my understanding
< murch>
The table is still the same as 16 days ago. :)
< murch>
My intent is to point out an easy improvement that could be made in the communication with the broader Bitcoin community. Unless I'm mistaken and the table is up-to-date. Then I'll rest my case of course. ;)
< gmaxwell>
You shouldn't expect it to change much until it's out there on the network. For anything that isn't a full node there is no use in having it ready ahead of the network, so I would expect most things to stay in a "working on it" state for a bit yet.
< murch>
Okay, I guess I'm mistaken then.
< sipa>
i think it's good in general to ask teams to keep updating their status
< sipa>
"in progress" can mean a lot of things
< sipa>
and there could be a "ready to roll out whenever the network is ready" status
< gmaxwell>
Indeed, but probably in the form of getting something more granular than in progress.
< gmaxwell>
like ... "in testing on segnet/testnet"
< murch>
Completely different topic: I'm going to have a meeting later today about a potential master thesis in the area of Bitcoin. I've proposed to do my master thesis on Coin Selection (on which I've done some simulation work before, e.g. gmaxwell had commented then). I've seen that CoinSelection is scheduled to be "looked at" for 0.13. Do you think it would make sense to have a more systematic analysis of CoinSelection even though that's scheduled?
< sipa>
I don't think you should see that "looked at" as anything more than that, and certainly not a promise that it will be investigated thoroughly or overhauled :)
< sipa>
And further analysis would be very useful.
< sipa>
For example,
< murch>
Cool, because I'd like to work on something useful for my thesis. :)
< gmaxwell>
it's tagged as to be looked at because we're in need of some stopgap, since the behavior now is not great. But I don't think we're planning on doing much more than a bandaid.
< sipa>
creating multiple change outputs is an open question (when to do it, how to do it, what effects it has on the UTXO set in your wallet, and potentially indirectly the size of created transactions)
< murch>
sipa: Oh, interesting. I hadn't thought of that. But shouldn't transactions aim to create the fewest necessary new UTXOs?
< sipa>
murch: there are 2 reasons i know of to create multiple outputs: 1) privacy (if one of the change outputs looks identical to at least one of the payments, you can't tell anymore which of the outputs is the real change)
< sipa>
murch: another is that it may result in a more balanced set of wallet UTXOs to pick from, and that that may result in better coin selection for future transactions
< murch>
sipa: Yeah, one of my proposed algorithms then tried to create change of similar size to the target instead of "smallest possible". That's definitely an avenue I'd like to further investigate.
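A minimal sketch of the change-mimicking idea discussed here; SplitChange and its logic are illustrative assumptions, not Bitcoin Core's actual coin selection code:

    // Illustration only: split the change so one piece equals the payment,
    // making the real change output indistinguishable from the payment output.
    #include <cstdint>
    #include <utility>

    using CAmount = int64_t; // satoshis, as in Bitcoin Core

    // Returns {decoy, remainder}; falls back to a single change output when
    // there is not enough change to mirror the payment amount.
    std::pair<CAmount, CAmount> SplitChange(CAmount change, CAmount payment)
    {
        if (change > payment) return {payment, change - payment};
        return {change, 0};
    }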
< sipa>
another question: does preferably picking outputs sent to the same address whenever one such output is already used (because you've paid the privacy tax already) help?
< sipa>
it's a complicated problem, because its priorities are unclear: privacy of address linkage, current transaction size, and future transaction sizes
< murch>
Yeah, that's why I think it's a good topic for a thesis. You can try a lot of different things, you can simulate and model it well, and finding a better solution might benefit the project. :) Anyway, I'm glad I asked, because I felt that the topic might become obsolete with the SegWit discount and the scheduled review.
< gmaxwell>
no, not at all.
< sipa>
segwit should be orthogonal
< sipa>
all it does is change the cost metric for computing fees
< murch>
sipa: I'll have to see if I can find more information on the occurrence of address reuse.
< murch>
sipa: Yeah, but I was afraid that the small changeset that was finally adopted from my last CoinSelection push has even made UTXO growth worse
< murch>
gmaxwell: Great! :)
< sipa>
from an email i have from satoshi over 5 years ago:
< sipa>
Another case is breaking large change outputs into two random sizes to increase backup safety so you're not rewriting your entire savings in every transaction. It would create more varied sizes for SelectCoins to choose from in the future. Some may also want to do it to smooth out priority use or increase privacy wrt the amounts.
< gmaxwell>
murch: yes, I believe it has; but that's not your fault. We discussed it then.
< murch>
the pruning of inputs was added then, although my simulation hadn't conclusively shown that it would be an improvement, and I'm afraid that it now contributes to keeping small UTXOs from being consolidated
< gmaxwell>
sipa: I think we should always split by default at some threshold.. it's silly to make outputs which are hundreds of btc or more.
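A toy sketch of the split-above-a-threshold idea from the quoted email and gmaxwell's comment; the threshold handling and RNG use are assumptions for illustration, not wallet code:

    // Illustration only: break change above a threshold into two random pieces,
    // so a single output does not carry the bulk of the wallet's balance.
    #include <cstdint>
    #include <random>
    #include <vector>

    using CAmount = int64_t; // satoshis

    std::vector<CAmount> SplitLargeChange(CAmount change, CAmount threshold,
                                          std::mt19937_64& rng)
    {
        if (change <= threshold) return {change};
        std::uniform_int_distribution<CAmount> pick(1, change - 1);
        const CAmount first = pick(rng);
        return {first, change - first};
    }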
< sipa>
but for example: what if you have the choice between making a tiny change output (close to dust, unlikely to ever be used) and adding an extra input and creating two outputs (one of which is similar to the payment)
< sipa>
should you do that or not
< sipa>
eh, creating two change outputs
< murch>
also an interesting thought. I had been considering an approach where #inputs should be greater than #outputs, but you really don't want to tell the recipient your whole balance every time. Anyway, I'm assuming that I'll be investing considerable effort in creating a fitness function for evaluation
< sipa>
oh, there is a way in which segwit may be relevant: adding an extra input will be cheaper than adding an extra output
< gmaxwell>
well there will also be the added complication from aggregation in the not-too-distant future.
< murch>
sipa: I thought it would make the two about equally expensive?
< gmaxwell>
Where it's much cheaper to spend more inputs at once.
< gmaxwell>
sipa: it's about equal because of the witness costs of the additional input (id selection)
< * sipa>
opens spreadsheet
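Rough numbers behind the comparison being discussed, under segwit's weight accounting (vsize = base size + witness size / 4); the P2WPKH byte counts are approximate assumptions, illustration only:

    // Back-of-the-envelope comparison, not actual wallet or spreadsheet figures.
    #include <cstdio>

    int main()
    {
        // P2WPKH input: 36 outpoint + 1 empty scriptSig + 4 sequence = 41 base bytes,
        // plus roughly 107 witness bytes (signature, pubkey and counts).
        const double input_vbytes = 41 + 107 / 4.0;  // ~68 vbytes
        // P2WPKH output: 8 value + 1 script length + 22 script = 31 base bytes.
        const double output_vbytes = 31.0;
        std::printf("extra input ~%.0f vbytes, extra output ~%.0f vbytes\n",
                    input_vbytes, output_vbytes);
        return 0;
    }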
< murch>
another question: Will Schnorr signatures not make inputs vastly cheaper, as the signatures can be combined? Wouldn't that also help consolidate the UTXO set? When would we be expecting a switch to Schnorr?
< sipa>
murch: the effect of Schnorr isn't that large anymore after segwit
< murch>
I see, thanks.
< sipa>
(if you're talking about transaction-wide aggregation)
< murch>
I believe I was. :)
< gmaxwell>
sipa: that's now getting called Schnorr signatures due to that article. :(
< murch>
Thank you so much for the input, that'll help me later in my meeting with my (hopefully) future advisor convince him that it's a relevant topic. :D
< sipa>
murch: where are you studying?
< murch>
sipa: Karlsruhe Institute of Technology
< sipa>
ha, nice; i'm in stuttgart currently
< sipa>
that's pretty close i think
< murch>
Yeah, it is. I thought you were in Switzerland?
< sipa>
it's complicated :)
< sipa>
but i'll be here a lot the next month/months
< murch>
I'd love to grab a beer with you sometime. Perhaps throw around a few more ideas about my thesis once I'm further along. :)
< sipa>
sure!
< murch>
Do I remember correctly that I've read somewhere that you enjoy rock climbing? ^^
< sipa>
i believe that's petertodd
< gmaxwell>
actually petertodd does caves. rock climbing is too safe.
< murch>
ah, nevermind then. :D
< gmaxwell>
(maybe he also does rock climbing, but certainly more caves)
< sipa>
he's a miner, after all
< kinlo>
:p
< gmaxwell>
::groan::
< gmaxwell>
I think not anymore.
< murch>
sipa: We've actually met once (very briefly) at Bitcoin 2014. :) I said hi, and told you that I'm a moderator on Bitcoin.SE. and you said, you love our work. Heh.
< jtimon>
I'm sure I have done that stuff several times already myself, so I thought it would be more productive to help potential new devs do it whenever they want, rather than me constantly rebasing a branch that is just too disruptive to be merged at once; PRing many little commits separately is a good way for me to run out of rebase patience (while making more noise with more PRs open)
< helo>
+1 wonderful
< paveljanik>
MarcoFalke, do you have an issue for the wallet.py test?
< sipa>
cfields has been looking into it as well
< paveljanik>
I did a quick test yesterday and ended up with db->rename returning 2...
< paveljanik>
and I'm not able to reproduce the problem today 8)
< sipa>
morcos: if you negotiated in the version message to not send transactions, and then send a feefilter, it will start relaying txn
< morcos>
sipa: so this is just a way to have more flexibility, to allow turning sending of txs back on?
< morcos>
i don't think i have any issue with that concept, but i'm a bit hesitant on whether it's worth the complication
< morcos>
for instance, isn't there code to disconnect/ban peers that send you txs when you didn't ask for them?
< morcos>
is that corrected to account for this (haven't reviewed the whole pull)
< sipa>
well this is about sending invs, not getdata or tx
< sipa>
and it does make sense that if you want to use feefilter, you want to make sure that you first get to tell the peer what feefilter you want before he starts sending transactions
< sipa>
(same as with bip37 filters)
< morcos>
i'm confused, are you talking about other implementations sending this (such as lite clients)?
< sipa>
yes
< sipa>
not that any of them do send feefilter right now
< sipa>
i'm just trying to guess why greg's adding a seemingly unrelated change here
< morcos>
i guess i never thought of feefilter as a strict requirement. you aren't required to obey it, it's only an approximation of where you actually want the cutoff to be, and there are unknown delays in sending it
< morcos>
ha ha, thats what i'm trying to guess
< morcos>
anyway, i don't see a problem with it, but i don't see a specific advantage either; it seems to me that if we don't know of a use case where you want to start with no txs and then turn them back on later, there is no reason to add this.
< sipa>
it's not that you'd want to start with no txs
< sipa>
it's that you don't want to be bombed with a torrent of txs before you've had a chance to tell your peer what fee filter you want
< morcos>
but you never get a torrent of invs
< sipa>
but there are arguments against it: 1) what if you want both a bip37 filter and a feefilter? sending either will turn on invs before sending the second
< sipa>
i know :)
< morcos>
and if you are doing something like a filtered block or mempool command then you have the choice to send the feefilter message before you ask for those
< morcos>
to be clear, i like the idea of moving the filtering to the end piece. it'll result in many more mempool lookups (once per inv), but i think that's ok
< morcos>
it's just tying it into the fRelayTxes variable that i don't see the point of.
< sipa>
ok
< morcos>
yeah, in talking to sdaftuar, i don't really see the feefilter as analogous to the bip37 filter. feefilter seems unlikely to be used by lite clients; why would you not want to know about all txs relevant to you anyway? you can do your own logic if you don't think the fees are high enough. i see feefilter as a tool for full nodes to eliminate traffic that would be dropped at mempool acceptance anyway
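A minimal sketch of the per-inv filtering morcos describes (the feefilter message, BIP 133); the mempool lookup and the names here are placeholders, not the actual relay code:

    // Illustration only: consult the peer's announced fee filter once per inv.
    #include <cstdint>
    #include <map>
    #include <string>

    using CAmount = int64_t;   // satoshis
    using TxId = std::string;  // stand-in for uint256

    // Hypothetical view of the mempool: txid -> fee rate in satoshis per kB.
    std::map<TxId, CAmount> g_mempool_feerate;

    bool ShouldAnnounceTx(const TxId& txid, CAmount peer_min_feerate)
    {
        const auto it = g_mempool_feerate.find(txid);
        if (it == g_mempool_feerate.end()) return false; // no longer in mempool
        return it->second >= peer_min_feerate;           // honour the peer's feefilter
    }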
< jtimon>
ping #7728, all nits resolved, I think
< morcos>
we don't guard the printing of IP addresses for incoming connections with fLogIPs. is that intentional?
< kanzure>
i have completed my read-through of the segwit pull request, now on to organizing thoughts/notes/bugs...
< GitHub21>
[bitcoin] MarcoFalke opened pull request #7918: [qa] mininode: Use hexlify wrapper from util (master...Mf1604-qaMininodeHexlify) https://github.com/bitcoin/bitcoin/pull/7918
< gmaxwell>
sipa: the reason I was making it there was that I went to fix the lack of locking on fRelayTxes and then realized feefilter didn't trigger it, which surprised me.
< gmaxwell>
I've yanked out those changes.
< sipa>
thanks
< sipa>
sdaftuar, morcos: did you see BlueMatt's proposal to treat non-connecting headers as invs, and go fetch it?
< sdaftuar>
sipa: i did. i think that's a fine change to make, though i'd be wary of doing it too much, as part of the point of headers relay is to avoid the extra round trip to propagate a reorg
< BlueMatt>
sdaftuar: we can still be smart about what we send, but if another client doesn't want to implement that much and just wants a lazy protocol to announce, we should be willing to be smarter on the receiving end
< sdaftuar>
right, i agree with that
< sipa>
right, though i vaguely remember that there can be edge cases in which we can't prevent non-connecting headers
< sdaftuar>
oh, and you're suggesting that we just send the header anyway, rather than revert to inv?
< sdaftuar>
we could do that too
< sdaftuar>
there's not much benefit though i think?
< sipa>
i forget the whole discussion; i think it's more about us dealing well with others sending unconnecting headers
< sdaftuar>
AcceptBlockHeader will give a DoS (10) to a peer for sending a header with an unknown prev block
< gmaxwell>
My understanding of Matt's complaint is that this then forces you to do the rather complex tracking.
< BlueMatt>
it introduces a lot of required state in order to do sendheaders
< BlueMatt>
which seems kinda crazy to me since sendheaders is so conceptually simple
< sdaftuar>
i don't really follow why a peer would choose to implement sendheaders if they weren't going to do the tracking. how is it better than sending invs, which are smaller?
< sdaftuar>
i do agree that we could handle unconnecting headers more gracefully though
< BlueMatt>
sdaftuar: i mean, you know that sending headers to your peer is gonna make their download more efficient, so you'd prefer to; why not?
< BlueMatt>
having to track the current state of your peer's chain is so gross
< sdaftuar>
we already mostly have to track the state of their chain, to avoid re-announcing blocks to them that they know about
< sdaftuar>
and to know which blocks we can download from them
< BlueMatt>
no we don't
< BlueMatt>
you can announce any shit you want and your peer will figure out if they want to request it or not
< BlueMatt>
it says nowhere in the protocol that you have to be reasonable about what you announce
< BlueMatt>
it also says nowhere that you have to download from all your peers or be a downloading peer in order to sendheaders
< BlueMatt>
or, at least, you shouldnt be
< BlueMatt>
not to mention it's always been a general goal for the protocol to be as stateless as possible :/
< sipa>
BlueMatt: calm down
< sipa>
BlueMatt: sdaftuar was just saying that we are already tracking that anyway
< BlueMatt>
hmm? I'm not upset? anyway, the point was that we're requiring other protocol implementors to track it... we aren't the only ones who have to implement this protocol
< sipa>
yes, i'm aware of that
< sipa>
i don't think anyone is disagreeing with anyone else
< sipa>
so the changes required would be 1) not DoS when unconnecting headers are received 2) trigger a getheaders in that case?
< BlueMatt>
I thought that was sufficient, but went and did that and saw some cases where the resulting headers weren't leading to chain sync; I didn't investigate why
< BlueMatt>
might have been unrelated bugs causing it, however
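A sketch of the two changes sipa lists above (no DoS score for an unconnecting header, trigger a getheaders instead); the helper names are hypothetical, not the actual net_processing code:

    // Illustration only: tolerate a header whose parent we don't know and
    // request the connecting headers instead of penalizing the peer.
    #include <string>

    struct Peer;
    struct Header { std::string hash; std::string prev_hash; };

    bool HaveHeader(const std::string& hash);                      // do we already have this header?
    void SendGetHeaders(Peer& peer, const std::string& stop_hash); // ask the peer for headers up to stop_hash
    void AcceptHeader(Peer& peer, const Header& header);           // normal header processing

    void HandleAnnouncedHeader(Peer& peer, const Header& h)
    {
        if (!HaveHeader(h.prev_hash)) {
            // Previously a DoS(10); treat the announcement like an inv and fetch the gap.
            SendGetHeaders(peer, h.hash);
            return;
        }
        AcceptHeader(peer, h);
    }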
< kanzure>
yes i know it's the hash being signed...
< sipa>
sorry :)
< sipa>
may not be obvious to everyone
< kanzure>
*shrug* hard thing to name anyway. i don't have any good ideas for that.
< sipa>
ComputeTransactionHashForSigning
< kanzure>
"but this is a different transaction hash, we promise"
< kanzure>
in some old source code i wrote, i called txid "txhash" because "well txid is a silly name" but now we're about to get a "transaction hash" in some rpc outputs heh
< phantomcircuit>
cfields, i'd like to add something like the benchmarking framework for fuzzing
< phantomcircuit>
i tried copy/pasting the benchmark stuff but i can't seem to get it to work
< phantomcircuit>
any chance you have spare cycles for this?
< phantomcircuit>
essentially just want to have a separate set of binaries which are fenced off by --enable-fuzzing
< BlueMatt>
heh, I didn't realize petertodd had posted the bribe-miners-to-do-aml shit on reddit/his blog