< sipa>
BlueMatt: i was wrong about only needing a std::shared_ptr<const CBlock> and deserializing into it. You'll also need to add a deserialize_t constructor in CBlock (whose body can just be 'ss >> *this')
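(A minimal sketch of the constructor pattern described above, modeled on the deserialize_type tag that serialize.h already provides for CTransaction; the CBlock constructor and usage shown here are hypothetical:)

```cpp
// serialize.h already defines the tag used for CTransaction's stream constructor:
//   struct deserialize_type {};
//   constexpr deserialize_type deserialize {};

class CBlock : public CBlockHeader
{
public:
    // ... existing members ...

    // Hypothetical tag-dispatched constructor, per the suggestion above:
    // deserialize the block directly from the stream.
    template <typename Stream>
    CBlock(deserialize_type, Stream& ss)
    {
        ss >> *this;
    }
};

// Usage sketch: deserialize straight into a shared, immutable block.
//   std::shared_ptr<const CBlock> pblock =
//       std::make_shared<const CBlock>(deserialize, ss);
```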
< sipa>
petertodd: re twitter... i was certainly aware that something like segwit could be done in an extension block like way... what was new was that it was 1) doable 2) backward compatible with old wallets
< sipa>
gmaxwell: ?
< BlueMatt>
sipa: ahh...well if it's more of a patch I'm gonna skip it until you have it written
< BlueMatt>
and, maybe #9014 will be merged first :p
< sipa>
cfields: i don't know whether caching the result of a dereference actually has that much of an effect... there are no concurrency guarantees on shared_ptr dereferences, so i expect the compiler to just cache the result
< sipa>
through common subexpression elimination
< cfields>
sipa: hmm, I'd always assumed that dereferencing required an atomic operation, but a quick look says otherwise. Makes sense, since it's guaranteed to be in scope. TIL.
< sipa>
cfields: using multiple shared_ptr objects that refer to the same pointed-to object is threadsafe
< sipa>
cfields: using the same shared_ptr from multiple threads is not
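(A small generic C++ illustration of the distinction being drawn here, not Bitcoin Core code:)

```cpp
#include <memory>
#include <thread>

struct Widget { int value = 0; };

int main()
{
    auto p = std::make_shared<Widget>();

    // Safe: each thread works with its *own* shared_ptr. The control block's
    // reference count is atomic, so copying/destroying distinct shared_ptrs
    // that point to the same object is fine, and so is reading the object as
    // long as nothing mutates it concurrently.
    std::shared_ptr<Widget> q = p;                      // copy made up front
    std::thread t1([q] { int v = q->value; (void)v; });

    // NOT safe (left commented out): reassigning and dereferencing the *same*
    // shared_ptr object from two threads is a data race on p itself.
    //   std::thread t2([&p] { p = std::make_shared<Widget>(); });
    //   std::thread t3([&p] { int v = p->value; (void)v; });

    t1.join();
    return 0;
}
```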
< jtimon>
sipa: did you change 9125 or just rebase?
< jtimon>
sipa: cool, but you're the one who already looked at it :p it was for the others
< jtimon>
I mean, I'm definitely interested in knowing if the nits are solved
< roasbeef>
hmm, chjj and I seem to be encountering some odd behavior with segwit-enabled core nodes on testnet: they aren't including witness_tx invs in their getdatas when fetching transactions, so they fetch witness-output-spending transactions without the witness data (though the service bit is set). this should cause an immediate reject due to the clean-stack rules, but they keep accepting the txns, sending a delayed reject minutes afterwards
< sipa>
roasbeef: they send MSG_TX getdatas instead of MSG_WITNESS_TX ?
< roasbeef>
sipa: yeh
< chjj>
sipa: yes. we have witness service bits, so do they. they repeatedly request our witness txs as non-witness.
< roasbeef>
the delayed reject is nonstd-script-verify due to empty witness
< roasbeef>
this then repeats as they aren't added to the reject cache
< chjj>
sipa: seems to be both 0.13.0 and 0.13.1 nodes
< sipa>
chjj, roasbeef: thanks, looking
< roasbeef>
hypothesis: these nodes have been online since before segwit activated on testnet, and somehow the service bit they check isn't the most up-to-date version within GetFetchFlags
< sipa>
found bug: getdatas of transactions sent in response to received transactions with missing inputs are always MSG_TX
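(For context: the parents of a transaction with missing inputs were being requested with a bare MSG_TX inv, without OR-ing in the peer's witness fetch flag. A hedged sketch of the shape of the fix, with identifiers approximating the main.cpp of that era; the actual change is #9188 below:)

```cpp
// Before (sketch): orphan parents are always requested without witness data,
// even from witness-capable peers.
for (const CTxIn& txin : tx.vin) {
    CInv _inv(MSG_TX, txin.prevout.hash);
    pfrom->AddInventoryKnown(_inv);
    if (!AlreadyHave(_inv)) pfrom->AskFor(_inv);
}

// After (sketch): OR in the peer's fetch flags so witness-capable peers are
// asked with MSG_WITNESS_TX, matching how ordinary tx invs are requested.
uint32_t nFetchFlags = GetFetchFlags(pfrom, chainActive.Tip(), chainparams.GetConsensus());
for (const CTxIn& txin : tx.vin) {
    CInv _inv(MSG_TX | nFetchFlags, txin.prevout.hash);
    pfrom->AddInventoryKnown(_inv);
    if (!AlreadyHave(_inv)) pfrom->AskFor(_inv);
}
```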
< roasbeef>
ahh yeah he's generating a really long chain of multi-sigs
< chjj>
oh, haha
< chjj>
yeah, 3k nested transactions
< chjj>
yeah, that would do it. potential dos vector found i guess. i should spam the network more often. :)
< sipa>
thank you so much
< * roasbeef>
giggles
< gmaxwell>
yea, that's an issue.
< gmaxwell>
We just added the orphan fetching in 0.13, so it won't work with witness txn. Considering that we've managed to go this long without having that orphan fetching at all, the world won't end. But that'll be good to get fixed in 0.13.2.
< gmaxwell>
chjj, roasbeef thanks for finding that!!
< sipa>
what do you mean by 3k nested transactions? if that's the length of the chain, we won't ever accept that into the mempool
< gmaxwell>
chjj: I don't know what 3k nested txn means, but we won't take an unconfirmed chain longer than 25 in the mempool.
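(The 25 referred to here is the default in-mempool ancestor/descendant count limit, roughly as defined in main.h around 0.13 and overridable with -limitancestorcount / -limitdescendantcount:)

```cpp
/** Default for -limitancestorcount, max number of in-mempool ancestors */
static const unsigned int DEFAULT_ANCESTOR_LIMIT = 25;
/** Default for -limitdescendantcount, max number of in-mempool descendants */
static const unsigned int DEFAULT_DESCENDANT_LIMIT = 25;
```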
< chjj>
gmaxwell sipa: no problem. we were sitting here for the past hour dumbfounded. glad we figured it out here.
< roasbeef>
he means that he's generating a really long chain; they accept up to 25 (or reject them), but then the remainder are sent, and at that point they have orphan inputs from their PoV
< chjj>
sipa: yeah, but it's a bunch of orphans. too-long-mempool-chain isn't checked for those i guess.
< gmaxwell>
yea, doesn't really check any of the consistency, orphan pool is mostly a big bag of transactions.
< gmaxwell>
And the 'fetch missing parents' is really just treating the orphan transaction like an implicit inv.
< chjj>
yeah
< chjj>
i was just signing a bunch of nested txs because roasbeef wanted me to. he wanted some big witness blocks. :)
< roasbeef>
yeh, I put him up to it ;)
< chjj>
wrote a script to endlessly sign and broadcast them
< crescendo>
heh, glad I reconnected in time to see that one come across. great work, @BlueMatt
< sipa>
BlueMatt: yay
< dermoth>
Hi there... I'm maintaining an opensource block explorer that calculates stats such as total bitcoins in circulation and btc days destroyed... Those stats are based not only on created coins but also permanently destroyed ones.
< dermoth>
Obviously there is no way to know about *all* destroyed coins, but there are obvious ways to destroy them which are factored in...
< dermoth>
Is there an official place to look for what should count as destroyed coins?
< btcdrak>
dermoth: there are some known unspendable addresses, like the counterparty burn, and 1bitcoineater etc. Certain types of counterparty and other transactions which encode various messages are also known to be unspendable.
< midnightmagic>
dermoth: there are piles of provably unspendable outputs, including e.g. the genesis block coinbase tx, the duplicated coinbase tx, the OP_RETURN spam, etc. As for the addresses -- the 1eater address and similar are probably not easily detected algorithmically. How can you tell for certain which addresses are constructed and which are in fact legitimate addresses?
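(For reference, Bitcoin Core's own script-level notion of "provably unspendable" is CScript::IsUnspendable() in script/script.h, roughly the check below. It only catches OP_RETURN-prefixed and oversized scripts; address-level burn patterns like 1BitcoinEater cannot be detected this way:)

```cpp
/**
 * Returns whether the script is guaranteed to fail at execution,
 * regardless of the initial stack. This allows outputs to be pruned
 * instantly when entering the UTXO set.
 */
bool IsUnspendable() const
{
    return (size() > 0 && *begin() == OP_RETURN) || (size() > MAX_SCRIPT_SIZE);
}
```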
< midnightmagic>
dermoth: Meanwhile, how do you categorize e.g. those fees which were destroyed by miner underpays, and do you want to distinguish those from the fees+reward underpay that I did (which, as far as I can tell, I'm the only one to have done)?
< midnightmagic>
Is coin which never existed, actually destroyed?
< midnightmagic>
"Due to various bugs and miners experimenting with code, some blocks claim less than allowed." <-- That is very kind of you sipa. :)
< sipa>
Block 124724 tried to intentionally claim 0.00000001 BTC less than allowed, but accidentally also failed to claim the fees, losing 0.01000001 BTC.
< sipa>
^- that refers to you, i think
< midnightmagic>
sipa: In retrospect, if I knew then what I know now, I wouldn't have changed it--causing 0.00000001 to never have come into existence is a better way of doing it.
< midnightmagic>
BlueMatt: thanks for the idea!
< * midnightmagic>
tips hat
< dermoth>
Thanks... I'm aware of how many ways coins can be destroyed... and indeed I never thought of underpay; I assumed reward+fee always matched first tx inputs (it could even have been enforced...)
< dermoth>
I was just curious if there was an "official" list as far as stats calculations are concerned, so that everyone can compute stats the same way and more easily detect errors.
< sipa>
dermoth: i think that's hopeless
< sipa>
there may be 100000s or millions of BTC lost due to destroyed/forgotten/... wallets
< dermoth>
I know it's hopeless... between lost keys, valid addresses that were created without the private side, custom scripts we have no way to verify will ever be usable... That's why I think there should be an official list of ways to destroy coins... I think underpay, OP_RETURN, null address, genesis/dup coinbases for a start... maybe addresses that are provably unspendable (i.e. you can quickly and mathematically prove there is no private key).
< dermoth>
I don't think we should start going into more complex addresses even if we know some of them are impossible to create as a key pair, because there is no clear limit on where to stop
< dermoth>
BTW, I proposed a long time ago that all unspent coins after x blocks (many years) could be scavenged and redistributed to miners as a way to recycle destroyed coins (avoid unavoidable depletion) and keep miners' fees higher. For some reason ppl rejected that idea without much thought.
< dermoth>
I think after a certain number of bitcoin cycles (210k blocks) we could scavenge all unspent coins from the oldest 210k blocks, put them in a pool and let miners siphon 1/210000th of that pool every block. At that point the older part of the chain could be completely dropped as it would no longer have any use, and pooling the funds would ensure miners get a predictable reward.
< gmaxwell>
sounds like a great reason to refuse to mine peoples' transactions near that boundary.
< gmaxwell>
You might want to assume less about what people have given thought to and haven't.
< dermoth>
well that is definitively a good point
< rabidus>
you would kill the "store of value" property of bitcoin
< dermoth>
and it wasn't the reason given back then
< rabidus>
if i would need to move my coins from year to year just to ensure that i own them
< dermoth>
rabidus, what's wrong with that? but yeah, gmaxwell's point is quite an issue...
< rabidus>
if i might lose my coins just because i didn't move them, it is not very good store of value.
< dermoth>
right now these tx would be prioritized already... I guess the only way would be to add even further priority (e.g. not counting toward block size, aggressive p2p forwarding...) and a system to reject blocks that include less than a specific % amount of the pool of these tx'es... And now I could understand why we wouldn't want that
< gmaxwell>
"would be prioritized already" huh?
< dermoth>
if you know after how long you need to resend the coins, there's no excuse to lose them. I was thinking at least 10+ years before they would drop anyway
< dermoth>
tx that move very old coins are prioritized, aren't they?
< rabidus>
i love to put my bitcoins into a trezor, and forget it in a safe or somewhere, just to be found by my grandchildren after 20 years
< rabidus>
oops, it's empty.
< dermoth>
which - another thing - I think tx with fewer outputs than inputs should be prioritized too, since they help reduce the utxo size
< dermoth>
or to find that your trezor's no longer functional, and your paper backup's ink washed out
< sipa>
dermoth: there is only one rational prioritization rule, and that's fee per byte
< sipa>
in the past, a few policies have existed that prioritized old/larger coins, but they're no longer used by miners
< luke-jr>
some miners still use them
< gmaxwell>
jonasschnelli: the coin control gui sorts amounts lexicographically!
< sipa>
dermoth, rabidus: also, please take this to #bitcoin
< dermoth>
sipa, don't you think at equal size a tx that reduces the node's utxo size should have higher priority than one that increases it?
< * luke-jr>
thinks right now they sort by txid in that scenario
< dermoth>
yeah and my apologies; I thought I was in #bitcoin-dev (not -core-)
< sipa>
dermoth: yes, i think they should. but the only way to make that happen is by having consensus rules that make that strategy actually rational for miners to follow
< sipa>
dermoth: which segwit improves upon (but not enough, imo)
< luke-jr>
or miners just decide to do the better thing
< sipa>
luke-jr: we should build a system that incentivizes the right behaviour, not just hope for it
< jonasschnelli>
gmaxwell: my CoinControl sorts it by value by default
< jonasschnelli>
On which column do you have the "sort arrow"?
< luke-jr>
sipa: protocol changes need consensus. we can't force them on people. but it only takes good-willed miners to use pro-Bitcoin policies.
< jonasschnelli>
gmaxwell: Qt persists the last sort order and column.
< sipa>
luke-jr: if we're going to depend on miners being the judges of good and bad, i'd rather remove PoW, give them identities, and hold them accountable
< luke-jr>
that would also be a protocol change.
< luke-jr>
also, this isn't a matter of dependency
< luke-jr>
the system doesn't fail if miners choose bad policies. it just works better if they choose good ones.
< luke-jr>
and a better working system makes their bitcoins more valuable, so arguably it's already incentivised
< sipa>
in the same way that a central bank is incentivised to not destroy a country's economy
< sipa>
anyway, probably off topic here
< luke-jr>
all incentives come down to value anyway
< luke-jr>
eg, it would be pretty stupid for miners to fill blocks with 100% super-high-fee spam for months on end, if it meant their rewards devalued to zero at the same time
< gmaxwell>
jonasschnelli: if I click "amount" it swaps ascending and descending order, but the sort is not numeric.
< gmaxwell>
e.g. it goes 1 10 2.
< jonasschnelli>
gmaxwell: Ah.. I see your point...
< jonasschnelli>
gmaxwell: strange... works on my end.
< gmaxwell>
hm. qt difference perhaps?
< jonasschnelli>
1 2.4 9 10 25
< jonasschnelli>
I'm using Qt5.7
< jonasschnelli>
Testing now on 5.6
< jonasschnelli>
Also works with Qt5.6.1 (gitian)
< jonasschnelli>
(OSX)
< gmaxwell>
5.4.2 here. okay, well if it's just me then I won't worry about it!
< * jonasschnelli>
booting up Ubuntu
< gmaxwell>
Unrelated, I think the GUI interface that says "N hours behind" is sometimes misunderstood to mean that it will take N hours to catch up.
< jonasschnelli>
gmaxwell: Bug confirmed on Ubuntu with master built with gitian Qt5.6.1
< gmaxwell>
\O/
< jonasschnelli>
though... probably an upstream bug.
< wumpus>
don't call it upstream bug too soon, qt is pretty good at sorting :p
< jonasschnelli>
"probably"
< jonasschnelli>
But the fact that it's working on OSX and not on Ubuntu makes me think it's very likely upstream
< gmaxwell>
I should use the GUI more, it does a lot of things nicely.
< wumpus>
itemOutput->setText(COLUMN_AMOUNT_INT64, strPad(QString::number(out.tx->vout[out.i].nValue), 15, " ")); // padding so that sorting works correctly
< jonasschnelli>
Ah... there is a hack involved.
< wumpus>
eh ... :-(
< cfields>
strPad(QString::number(out.tx->vout[out.i].nValue), 15, " ")); // padding so that sorting works correctly
< cfields>
bah!
< cfields>
beat me to it!
< wumpus>
I somehow don't think that's the canonical way to do that
< jonasschnelli>
Hey! Where is @Cozz...
< wumpus>
you shouldn't need to define extra hidden columns to get sorting right, IIRC you can give the model a separate data item that will be used for sorting
< wumpus>
but yes I don't think any of us ever deeply looked at that code
< jonasschnelli>
Yes.
< jonasschnelli>
The hack was probably made because of the QTreeWidgetItem
< wumpus>
it was not the prettiest code but works pretty well, generally
< jonasschnelli>
Which maybe has some differences internally over a QTableView
< wumpus>
maybe some event handling difference, causing the hidden column not to be used for sorting?
< wumpus>
either that or strPad(QString::number(out.tx->vout[out.i].nValue)) returns something different on different OSes, but that sounds unlikely.
< * wumpus>
wonders why there's an inefficiently implemented strPad when Qt has a perfectly fine rightJustified() method on QString
< Victorsueca>
morning
< jonasschnelli>
hi
< Victorsueca>
how the hell does one remember so many Qt element calls?
< sipa>
Victorsueca: google is your friend
< Victorsueca>
yeah lol
< wumpus>
use the search/grep/google whatever you prefer
< wumpus>
you don't have to remember every detail, just where to find things when you need them
< sipa>
exactly, you remember what things are possible, and how to use those
< sipa>
you can always find the details of how to do them again later
< Victorsueca>
yeah that's what I do with linux commands
< wumpus>
jonasschnelli: most common suggestion seems indeed to be to override QTreeWidgetItem's operator<
< Victorsueca>
but when you're coding google doesn't always bring up the best way to do something for your specific case
< wumpus>
oh sure there's still some skill involved
< wumpus>
most important is that you're able to judge whether something is a good solution or not, so you can reject bad ones
< wumpus>
jonasschnelli: so probably our tree widget item subclass should have int64 fields for the amount and date, which are used for sorting instead of the text in the column, for those columns
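(A minimal sketch of the subclass being suggested; the class name and column constants here are hypothetical, not the actual coin-control code:)

```cpp
#include <QTreeWidget>
#include <QTreeWidgetItem>
#include <QVariant>

// Hypothetical column indices; the real dialog has its own enum.
enum { COLUMN_AMOUNT = 1, COLUMN_DATE = 2 };

// Keep the display text human-readable, but sort the numeric columns on
// int64 values stashed under Qt::UserRole instead of on the text.
class CCoinControlWidgetItem : public QTreeWidgetItem
{
public:
    using QTreeWidgetItem::QTreeWidgetItem;

    bool operator<(const QTreeWidgetItem& other) const override
    {
        const int column = treeWidget() ? treeWidget()->sortColumn() : 0;
        if (column == COLUMN_AMOUNT || column == COLUMN_DATE) {
            return data(column, Qt::UserRole).toLongLong() <
                   other.data(column, Qt::UserRole).toLongLong();
        }
        return QTreeWidgetItem::operator<(other);
    }
};

// When populating (sketch): set the text for display and the numeric value
// for sorting, instead of the strPad() hidden-column hack.
//   item->setText(COLUMN_AMOUNT, BitcoinUnits::format(unit, nValue));
//   item->setData(COLUMN_AMOUNT, Qt::UserRole, QVariant((qlonglong)nValue));
```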
< gmaxwell>
Lauda: yea, that was the projection at the time the text was written.
< Lauda>
Indeed. 8 blocks left
< jonasschnelli>
wumpus: yes. Working on a fix. Though, it's not the most important thing. :)
< btcdrak>
Launda: thanks, will update
< Lauda>
Good btcdrauk
< cfields>
BlueMatt: maybe we should discuss approaches? Seems we might have something different in mind for encryption
< cfields>
and sipa ^^
< BlueMatt>
cfields: probably, but bedtime for me rn
< cfields>
BlueMatt: ok, np. I just re-read my comments and realized they could've sounded a bit hostile. Not meant to be, for sure. I went back and forth on different approaches, very open to something other than what I settled on.
< cfields>
jl2012: those are likely ckpool, which will insert whenever it sees the rule
< sipa>
cfields: you did not misunderstand it.
< sipa>
but it's not what bitcoin core does
< sipa>
(which is irrelevant, as bitcoin core is not used for mining)
< cfields>
sipa: it inserts whenever it sees the rule. If it sees the rule and default_witness_commitment, it compares them to make sure that the computed value is sane
< sipa>
what is 'it'?
< sipa>
ah, ckpool
< cfields>
and cgminer, and the others I've done
< cfields>
For the simple stratum server, it only inserts default_witness_commitment if present
< BlueMatt>
cfields: huh? I didn't read any comments you made as hostile? I think you needed sleep :p
< cfields>
BlueMatt: ok good
< sipa>
cfields: i wonder if we could have a model where CNetMessage or whatever it is, is a virtual class that has an interface to produce a message on the fly
< sipa>
cfields: which could be used to delay loading a block from disk until it's actually going to be sent out through the network, for example
< sipa>
which could move the serialization of the checksum to the network thread as well
< sipa>
ah, but the checksum is in front, so it can only be done after fully serializing
< cfields>
sipa: the current approach just leaves padding upfront, so i don't think that breaks anything?
< cfields>
i'm not sure what you mean by "on-the-fly", though. You mean install a callback that triggers when a condition is met, then requests data to send out?
< cfields>
sipa: ah, i see. it just inserts a place-holder, and when that comes up for sending, it requests the actual data.
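(A rough sketch of the kind of interface being floated here; these names are hypothetical and nothing like this exists in the tree:)

```cpp
#include <string>
#include <vector>

// Hypothetical: an outbound message whose payload is not materialized until
// the network thread is actually about to send it.
class CLazyNetMessage
{
public:
    virtual ~CLazyNetMessage() = default;
    virtual std::string Command() const = 0;
    // Called on the network thread when the message reaches the front of the
    // send queue; only then is the payload built.
    virtual std::vector<unsigned char> Serialize() const = 0;
};

class CLazyBlockMessage : public CLazyNetMessage
{
public:
    explicit CLazyBlockMessage(/* CDiskBlockPos pos */) {}
    std::string Command() const override { return "block"; }
    std::vector<unsigned char> Serialize() const override
    {
        // Sketch: only here would the block be read from disk and serialized.
        // The checksum would then be computed over the result -- which is why
        // the checksum sitting in front of the payload is inconvenient for
        // this model (it forces full serialization before sending starts).
        return {};
    }
};
```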
< gmaxwell>
cfields: I wish we always inserted the commitment when the segwit rule is enabled. Without it, we may find the commitments fail after activation... not the end of the world, but more potential for delays.
< cfields>
gmaxwell: agreed
< cfields>
it'd be helpful to set up a patched bitcoind that validates the commitments as though they were required, on the off-chance that we might notice a busted one before activation
< cfields>
i suppose it wouldn't actually be checking that much, but better than nothing
< gmaxwell>
Well we're due for a 0.13.2, we could put the commitment in by default when the rule is enabled.
< bitcoin-git>
[bitcoin] gmaxwell opened pull request #9188: Make orphan parent fetching ask for witnesses. (master...witness_the_orphans) https://github.com/bitcoin/bitcoin/pull/9188
< sipa>
cfields: maybe in bip151 the checksum should be at the end
< sipa>
so it can be computed on the fly during sending
< cfields>
sipa: ack
< luke-jr>
gmaxwell: the commitment is inserted by the GBT client, not bitcoind
< gmaxwell>
luke-jr: we make a proposed commitment that the client can use in lieu of implementing consensus rules when they don't need to (and thus won't put in the attention to do it right)
< luke-jr>
oh, I guess I didn't realise that wasn't unconditional
< gmaxwell>
for some reason it seems to be undocumented by any BIP. :( :(
< gmaxwell>
But no, it only shows up when there are segwit txn right now. AFAIK.
< sipa>
right, default_witness_commitment is only added when there is a witness tx in the block
< sipa>
we can change that
< luke-jr>
yes, it's not in any BIP because it contradicts decentralised mining; it's a bitcoind-specific extension, not a GBT standard
< sipa>
it'll be a bitcoind-specific extension that the whole ecosystem depends on, because doing it in the client is pointless
< gmaxwell>
luke-jr: GBT as actually used contradicts decentralized mining. We should fix decentralized mining, not make a superficial stand that has a side effect of making the software more error-prone.
< gmaxwell>
Because if you ask someone who _really_ doesn't care about the extra flexibility there to handle computing the commitments themselves, they'll _hate it_ and be more likely to not implement it at all, or to do so and get it wrong.
< gmaxwell>
Since they're getting the cost, but no one benefits because the end result doesn't improve mining decentralization at all.
< luke-jr>
sipa: nobody uses it AFAIK
< sipa>
luke-jr: everyone uses it
< sipa>
cfields: right?
< luke-jr>
gmaxwell: it's not much more work than doing the txid merkle tree?
< cfields>
sipa: yea, it's upstreamed in several pools/miners now
< luke-jr>
cfields: ⁈
< sipa>
luke-jr: why redo the work when bitcoind already does it for you
< cfields>
luke-jr: the pools have enough things they could be breaking as-is. I don't see why we should pile more on
< gmaxwell>
luke-jr: It's similar to the merkle tree, but it is completely pointless when the software doesn't support changing the transaction set... and also not fast.
< luke-jr>
so why not just pass merkle branches in GBT too, if we're just going to throw in the towel?
< luke-jr>
then we could have a BIP standard for centralised GBT that's complete
< sipa>
luke-jr: i think that is what we should do
< sipa>
not because i'm in favor of more centralization of mining, but because none of this matters at all
< gmaxwell>
It isn't necessarily centralized though. e.g. phantomcircuit's unreleased protocol work does that and yet it's plausibly much better for decentralization than the deployment of GBT ever was in practice.
< morcos>
sipa: agreed.. there should at the very least be an option of doing it the easy simple way where bitcoind does all the hard work for you.
< sipa>
nothing is in a better position to do block composition than bitcoind... it has all the fee information, consensus rules, mempool data, ...
< luke-jr>
well, fee info is passed over GBT; but sure
< sipa>
if we want people to do their own composition, we should enable them to run bitcoind
< sipa>
not hope they'll do the same thing in a layer on top
< gmaxwell>
(phantomcircuit's work lets you combine consensus from one source (e.g. your own bitcoind) with coinbase requests from another...)
< morcos>
well to be honest, in an ideal world there would be nothing to differentiate transactions other than their fee/weight , so no reason at all to need your own tx selection
< luke-jr>
so should I spend some time writing up a BIP for this? or is it just a low priority "maybe someday" thing?
< sipa>
luke-jr: i think we should first have a functional implementation
< luke-jr>
morcos: disagree; but that may be all we end up having in practice perhaps
< gmaxwell>
We've had too much BIP writing without any code at all lately. IMO. Actually implementing something grants a lot of insight into the construction.
< luke-jr>
ok, but is there a desire to prioritise this?
< sipa>
luke-jr: in an ideal world, there exists no means for miners to even distinguish any properties of a transaction other than its size and fee... anything else is an avenue for censorship
< sipa>
luke-jr: i think the current situation with mining is pretty bad
< gmaxwell>
There are other serious problems with mining, the total lack of authentication is a looming serious risk to the network.
< luke-jr>
indeed
< gmaxwell>
Which, if exploited, we'd be saying in hindsight we were foolish to not resolve more aggressively.
< sipa>
"hope is not a strategy"
< cfields>
luke-jr: i think this arose from the idea of having 0.13.2 always injecting segwit commitments if the rule is active. Hence, default_witness_commitment would always be available and we'd want it documented
< sipa>
i guess we can output default_witness_commitment always whenever the client is segwit-ready?
< gmaxwell>
I think we should.
< luke-jr>
sipa: IMO that's a "just do it" thing; should be simple and uncontroversial
< luke-jr>
looks like maybe 2 lines of changed code
< gmaxwell>
Yes, would need to be release noted as there is some fringe risk of negative interaction with mining pool software.
< gmaxwell>
But otherwise, I think it's a no-brainer.
< luke-jr>
actually, this might not be as simple as I assumed
< luke-jr>
nm
< gmaxwell>
hah
< sipa>
luke-jr: i'm on it, it's easy
< luke-jr>
what's the purpose of GetWitnessCommitmentIndex in GenerateCoinbaseCommitment?
< luke-jr>
sipa: ok
< luke-jr>
(that GetWitnessCommitmentIndex still looks pointless though?)
< cfields>
luke-jr: making sure there's not one there already, i think?
< sipa>
yup
< luke-jr>
cfields: but there never would be, in the one place it's called, no?
< cfields>
luke-jr: miner could've added it in
< sipa>
indeed
< luke-jr>
cfields: it's only called from CreateNewBlock
< luke-jr>
which is before the miner gets it
< luke-jr>
(I had assumed it was being used for the miner's submitted block as well, which would pose a risk to invalidating submissions without a commitment, but that doesn't appear to be the case)
< cfields>
luke-jr: dunno, maybe it was added before the mining side was fleshed out
< luke-jr>
probably
< sipa>
i believe so, yes
< luke-jr>
maybe it should be removed if it's dead code (in master, not 0.13.2)
< bitcoin-git>
[bitcoin] sipa opened pull request #9189: Always add default_witness_commitment with GBT client support (master...alwayscommit) https://github.com/bitcoin/bitcoin/pull/9189
< sipa>
^ i ran the rpc tests and unit tests, but i haven't tested the behaviour
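(A hedged sketch of the GBT-side condition being discussed; the default_witness_commitment field name is real, but the surrounding identifiers are approximate and may not match #9189 exactly:)

```cpp
// In getblocktemplate (sketch): expose the precomputed commitment whenever
// the template carries one and the client has signalled segwit support,
// rather than only when the block happens to contain witness transactions.
const bool fSupportsSegwit = setClientRules.find("segwit") != setClientRules.end();
if (!pblocktemplate->vchCoinbaseCommitment.empty() && fSupportsSegwit) {
    result.push_back(Pair("default_witness_commitment",
                          HexStr(pblocktemplate->vchCoinbaseCommitment.begin(),
                                 pblocktemplate->vchCoinbaseCommitment.end())));
}
```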
< luke-jr>
sipa: I don't think you need the additional check in GBT itself
< luke-jr>
if segwit is active, we fail earlier on if the client doesn't support it
< sipa>
luke-jr: yes, but there's no point producing a default_witness_commitment when the client doesn't understand it
< sipa>
i guess it doesn't really matter
< luke-jr>
there shouldn't be a commitment if the network doesn't have segwit active either, right?
< sipa>
the intent here is that there would be, whenever the client and the server support it
< sipa>
as it's a very nice end-to-end consistency check
< sipa>
to see that their entire stack works correctly
< luke-jr>
IMO it would be broken to insert a commitment (regardless of whether it's calculated) in a block without segwit being active on the network
< sipa>
arguably, it's just a weird OP_RETURN output in that case, and not actually a commitment
< cfields>
luke-jr: it's scary to count on all miners working 100% after activation
< cfields>
at least this way we get to see if someone thinks they should be ready but is horribly broken
< luke-jr>
if they add a commitment without "segwit" in "rules", they are already not working.
< cfields>
(since the only way they'd insert is if they're signaling that they're ready)
< gmaxwell>
luke-jr: I disagree strongly. There is nothing wrong with a commitment that has no effect, and this means that the risk of a "hard cut" activation is reduced.
< gmaxwell>
Since if someone is producing broken commitments there is an opportunity to go nag them to fix them, before the rule is enforced.
< gmaxwell>
I do think that there shouldn't be a commitment unless they have segwit in the rules, however.
< sipa>
gmaxwell: well if they don't support bip145, they will just ignore default_witness_commitment
< luke-jr>
sipa: even if they DO support BIP 145, they will ignore it..
< gmaxwell>
yes, but you might do something with the commitment but fail to realize that you need to send the rules. I guess it wouldn't be that bad an outcome: you'd just fail to mine segwit txn.
< luke-jr>
because "segwit" won't be in "rules" until it activates
< sipa>
luke-jr: i'd say they are allowed to ignore it
< gmaxwell>
yea, they're allowed to ignore it.
< sipa>
but they don't have to... there is nothing wrong with adding a commitment ahead of time
< gmaxwell>
Though I think it would be a good risk reduction if they didn't.
< luke-jr>
I can't think of any downsides.
< luke-jr>
(to including it prematurely)
< cfields>
i suppose we have enough padding to ensure it doesn't push us over max size?
< luke-jr>
cfields: ?
< sipa>
cfields: if we have enough padding post activation, we certainly have enough before
< luke-jr>
block size padding is 1000 bytes. whether that's enough depends on the client
< luke-jr>
which is why Eloipool will modify the template in some circumstances I guess
< cfields>
sipa: i suppose that was a vague question aimed at both sides of activation
< cfields>
ok
< meownow_>
anyone seen julian assange?
< gmaxwell>
luke-jr: refresh my memory, can the GBT client request a smaller template (e.g. because it knows its coinbase will be bigger)?
< sipa>
gmaxwell: not afaik
< sipa>
meownow_: off topic
< gmaxwell>
that should really be added.
< luke-jr>
gmaxwell: no, I don't think so. (either way, bitcoind doesn't support it)
< luke-jr>
probably
< luke-jr>
maybe "sizelimit" as a request param, to override -blockmaxsize?
< luke-jr>
(and "weightlimit" the same)
< sipa>
or a parameter to communicate the coinbase size?
< sipa>
so the limit can still be set in bitcoind
< luke-jr>
hmm
< gmaxwell>
yea, coinbase size seems better.
< gmaxwell>
harder to screw up at least.
< luke-jr>
yes, the client in theory might not know the network max
< sipa>
the client in theory _shouldn't_ need to know the network max
< luke-jr>
"gentxsize" in the request then?
< sipa>
(though gbt pretty much forces the client to know consensus rules)
< luke-jr>
sipa: GBT provides the client with it, but only after the request
< sdaftuar>
weight, not size
< luke-jr>
sdaftuar: same thing basically in this case
< sdaftuar>
yes, but confusing
< luke-jr>
I'd find it less confusing to provide as size *shrug*
< gmaxwell>
cbmaxweight "construct a template assuming that the coinbase transaction will have a weight no greater than x"
< jtimon>
BlueMatt: so what's the next PR for the moveonly ?
< Lightsword>
what’s the best way to compile 0.13.1 on debian 7? apparently doesn’t have full c++11 support