< sdaftuar>
oddly, travis doesn't appear to have run #8295
< luke-jr>
fwiw, there appears to be no changes affecting bitcoin-cli between 0.12.0 and 0.12.1
< gmaxwell>
Can anyone think of why we'd get a "CommitTransaction(): Error: Transaction not valid" on a new transaction being created that paid twice the node's minrelay fee, ... and then managed to broadcast when the rebroadcast ran?
< gmaxwell>
it looks like it was spending an unconfirmed input.
< gmaxwell>
okay I think the issue was he managed to make a 25 deep unconfirmed chain, and got the last txn in it rejected.
< gmaxwell>
Doesn't look like the wallet handles that well.
< gmaxwell>
I think we should avoid using coins that are already at the maximum unconfirmed chain depth unless there is no other option.
< gmaxwell>
Also, opt-in RBF to a sendmany would have eliminated 25 transactions here.
< GitHub68>
bitcoin/master 0ce8e99 Wladimir J. van der Laan: windows: Add testnet link to installer
< GitHub68>
bitcoin/master 975a41d Wladimir J. van der Laan: windows: Add testnet icon for testnet link...
< GitHub68>
bitcoin/master da50997 Wladimir J. van der Laan: Merge #8285: windows: Add testnet link to installer...
< GitHub129>
[bitcoin] laanwj closed pull request #8285: windows: Add testnet link to installer (master...2016_06_testnet_link_windows) https://github.com/bitcoin/bitcoin/pull/8285
< instagibbs>
gmaxwell, yes the maximum ancestor/descendant stuff means that the transaction will be marked invalid and the funds deducted from your wallet until rescan :/
< instagibbs>
err reindex or something
< gmaxwell>
instagibbs: in this case the transaction went through after the parents confirmed.
< gmaxwell>
the wallet rebroadcast accepted it the next time around and tada.
< gmaxwell>
probably the coin selection process, when it considers 0-conf inputs, should only consider those with a low enough depth.
< gmaxwell>
(perhaps two below the maximum to leave room for CPFP), and only if it's unable to meet that should it fall back to considering all inputs.
< instagibbs>
gmaxwell, oh so even in failure it rebroadcasts?
< gmaxwell>
yea, it saves it in the wallet before it tries to mempool it.
< gmaxwell>
and since it's saved, it keeps trying to rebroadcast.
<@wumpus>
as long as the transaction is in the wallet it rebroadcasts
< gmaxwell>
I dunno if the depth of an unconfirmed coin is easily discernible from the wallet, though.
<@wumpus>
not as is, it would need another (ugly) mempool dependency
< gmaxwell>
bleh
<@wumpus>
then again we already do an InMempool() check in IsTrusted, which determines whether the outputs of a transaction are considered spendable, so...
<@wumpus>
if looking up the depth in the mempool is relatively cheap that could be added too
<@wumpus>
but yes I agree with the 'bleh' sentiment
<@wumpus>
ideally mempool implementation details shouldn't matter to the wallet
<@wumpus>
"if it cannot be submitted yet due to depth, try again later" seems a better approach
<@wumpus>
the wallet already tries to avoid generating chains of unconfirmed transactions
<@wumpus>
so if it does, it really needs to
< gmaxwell>
it doesn't really: so it tries to avoid spending unconfirmed coins, but if it must it treats short and long chains the same.
< gmaxwell>
So in the example with the user I was supporting today, they had plenty of unconfirmed coins that were only 1 deep.
<@wumpus>
yes I don't mean to imply that it looks at the mempool depth, it obviously doesn't
< gmaxwell>
and one chain of 25.
<@wumpus>
yes that would make sense - count *unconfirmed* depth
<@wumpus>
then prefer coins that are shallow
< gmaxwell>
it ended up that way because they had multiple unconfirmed few-bitcoin outputs, and then a 1 bitcoin output and were making many not huge payments... so it decided to keep reusing the change because it considered them all equal.
<@wumpus>
that wouldn't require asking the mempool at all
< gmaxwell>
yea, don't really care about the mempool; the IsMine tracing could count the depth until confirmed I guess.
<@wumpus>
coins with a smaller unconfirmed depth could get some advantage in selection
<@wumpus>
another thing in the long list of things that could be better in coin selection
< gmaxwell>
well the way to do the advantage is the same way it prefers confirmed coins.
< gmaxwell>
try the selection first with any too-deep coins excluded, and only if it fails attempt without that restriction.
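A minimal sketch of the two-pass idea gmaxwell describes, mirroring how the wallet already prefers confirmed coins. GetUnconfirmedDepth() and SelectCoinsFrom() are assumed helpers for illustration, not existing Bitcoin Core functions:

```cpp
// Pass 1: only coins whose unconfirmed ancestor chain sits comfortably below
// the policy limit (leaving headroom for CPFP). Pass 2: the unrestricted set,
// so funds never become unspendable merely because every available coin sits
// on a long unconfirmed chain.
static const int ANCESTOR_LIMIT = 25;  // default mempool chain limit at the time
static const int CPFP_HEADROOM = 2;

bool SelectCoinsPreferShallow(const std::vector<COutput>& vCoins,
                              CAmount nTargetValue,
                              std::vector<COutput>& vCoinsRet)
{
    std::vector<COutput> vShallow;
    for (const COutput& out : vCoins) {
        if (GetUnconfirmedDepth(out) <= ANCESTOR_LIMIT - CPFP_HEADROOM) {
            vShallow.push_back(out);
        }
    }
    if (SelectCoinsFrom(vShallow, nTargetValue, vCoinsRet)) return true;
    return SelectCoinsFrom(vCoins, nTargetValue, vCoinsRet);
}
```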
<@wumpus>
that's a possibility, yes
<@wumpus>
though at some point the 'exclude these coins and do the selection again' list would become very long, and is less suited for non-boolean properties like input size (https://github.com/bitcoin/bitcoin/issues/7664)
<@wumpus>
but sure, this could be bolted on too :)
< gmaxwell>
well, it fits here since the failure results in a (temporarily) invalid transaction. ... just leaving them out is the right call for max depth, at least.
<@wumpus>
I think at some point it would be nice to have a per-coin scoring system, then make the coin selection use that
< gmaxwell>
yes, for some things, but a score for this should be infinite unless there is no other choice... since it guarantees the txn will not relay until its ancestors confirm some. :)
<@wumpus>
but you make assumptions about other people's mempool depth
<@wumpus>
I think a better approach would just be to *prefer* shallower transactions
<@wumpus>
without an absolute threshold
< gmaxwell>
They're not exclusive, prefer shorter but absolute threshold at the point where you won't relay the thing yourself anymore.
< gmaxwell>
this usecase would actually have been better off with some kind of replacement, as they could have saved a good 20 transactions.
<@wumpus>
and if it does manage to generate a temporarily invalid transaction, handle that more gracefully, e.g. with a warning instead of an error
<@wumpus>
having the wallet have a hard dependency on a property of the mempool seems really brittle
<@wumpus>
and it's not a solution that wallets that don't have such a close coupling to a node could use
< gmaxwell>
I don't think this is really a mempool dependency.
<@wumpus>
in principle it's a matter of optimizing the speed at which the transaction can be submitted, longer chains result in slower confirmation times
<@wumpus>
very long chains even in temporary rejection
< gmaxwell>
If it's spending here, the transaction is IsMine, which means all unconfirmed ancestors are wallet txns.
< gmaxwell>
So it just needs to know the longest depth.
<@wumpus>
but long chains, even if they are accepted, are not a good thing in itself
< gmaxwell>
(er not IsMine, but IsFromMe)
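A rough sketch of counting that depth without touching the mempool, along the lines described above: every unconfirmed ancestor of a coin the wallet can spend is itself a wallet transaction, so mapWallet can be walked directly. Names and member access here are illustrative, not existing Bitcoin Core code:

```cpp
// Longest chain of unconfirmed wallet ancestors above (and including) txid.
// Confirmed transactions terminate the walk. Illustrative sketch only.
int UnconfirmedWalletDepth(const CWallet& wallet, const uint256& txid)
{
    const auto it = wallet.mapWallet.find(txid);
    if (it == wallet.mapWallet.end()) return 0;   // not ours: treat as confirmed
    const CWalletTx& wtx = it->second;
    if (wtx.GetDepthInMainChain() > 0) return 0;  // confirmed: chain ends here

    int nDeepest = 0;
    for (const CTxIn& txin : wtx.vin) {
        nDeepest = std::max(nDeepest,
                            UnconfirmedWalletDepth(wallet, txin.prevout.hash));
    }
    return nDeepest + 1;  // this unconfirmed tx adds one link to the chain
}
```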
< gmaxwell>
wumpus: they're not that bad, esp with ancestor feerate mining; and relay being not totally broken for them in 0.13.
<@wumpus>
but I hope you agree that they are worse than shallow transaction chains
<@wumpus>
'not that bad' yes there are always much worse things :)
< gmaxwell>
all things equal... but compared to a choice that is less private or pays more fees?
<@wumpus>
well the individual scoring weights would have to be determined, yes, or even depend on some setting
<@wumpus>
paying more fees may be ok if it means the transaction has a larger chance of being picked
< gmaxwell>
because we never split large change outputs, a lot of usage patterns result in no real choice in any case.
<@wumpus>
less private, well long transaction chains are a great way to say 'hey this is a wallet sending out transactions serially'
< gmaxwell>
sure, but often better to just respend change than join a dozen otherwise unrelated inputs.
<@wumpus>
if it requires joining them, yes I agree
< * wumpus>
wishes someone would do a serious study about what wallet behavior would make sense
< gmaxwell>
Would we do anything with it?... a while back someone proposed a change (to remove extraneous inputs), I suggested that it might result in wallets grinding down coins into small amounts more often. He made a simulator that showed it would, then we took the change. Then later users showed up complaining about the wallet grinding down inputs when they didn't use to...
<@wumpus>
well the problem is that we're too busy running from issue to issue to look at a higher level
<@wumpus>
well at least I am
< gmaxwell>
right. I agree with that.
< gmaxwell>
just collectively we are.
<@wumpus>
so now long chains are an issue, long chains are fixed by adding yet another special case, but without considering impact on other things
< gmaxwell>
You're right that someone looking at it more holistically would be good, part of the problem in that issue was that even there it was only asking a very narrow question.
<@wumpus>
at some point I just worry all those hacks make things worse, instead of picking some simple algorithm that does fine in most cases
< gmaxwell>
I don't really think there is much to consider on the subject of "avoid going over your own maximum chain depth if at all possible".
<@wumpus>
but I don't know, I don't have the overview
< gmaxwell>
since producing a txn even you won't relay when you could have avoided it seems clearly wrong.
< phantomcircuit>
gmaxwell: the wallet should probably have all of the unconfirmed dependencies of a transaction
<@wumpus>
so dropping of extraneous inputs was bad?
< phantomcircuit>
and then remove the dependencies after some high number of confirms which aren't relevant
< gmaxwell>
phantomcircuit: in these cases all the unconfirmed dependencies will be wallet transactions-- IsFromMe otherwise it won't spend the coin!
<@wumpus>
normally you'd say that choosing a minimum set that covers the value to be spent would be optimal
< phantomcircuit>
but yeah you're right it has to pass IsMine which means the wallet already has all the info you need to calculate depth with just coins view
<@wumpus>
but yes, I'm sure there are lots of other constraints and scores that should be taken into account
< phantomcircuit>
reading top down so my comments might be already worked out :P
< gmaxwell>
wumpus: the result ends up being the smallest possible change. so it breaks your wallet into lots of tiny little coins.
<@wumpus>
which was my point before
<@wumpus>
so why wasn't that reverted?
< gmaxwell>
I don't know/understand why it wasn't.
< gmaxwell>
I think when it went in there was some expectation that further improvements would come, and they didn't. Then it was released. Then people showed up and complained (opening an issue) and we figured out that the behavior change was due to removing extraneous inputs.. and then? I don't know
<@wumpus>
we just have too much on our plate
< gmaxwell>
then something else caught fire.
<@wumpus>
sometimes I really want to run around screaming with my hands on my head
< phantomcircuit>
wumpus: that sounds like fun
<@wumpus>
it's not possible to handle this all anymore
<@wumpus>
we really need someone that focuses on wallet improvements
< gmaxwell>
(privacy wants you to spend all payments to a single address at once, so that you don't get a rolling linkage that eventually cross-taints every address in your wallet--- so maybe just attempting to do this would counteract most of the grinding. that's the kind of thing you want the step back and overview to answer)
<@wumpus>
someone that has an overview of what the heck is happening in the wallet
< gmaxwell>
There are a number of positive wallet things going on, at least. But there is a lot of time spent twiddling things in the weeds rather than setting up priorities and identifying larger scale issues.
<@wumpus>
but how to sort 'bad' improvements from good ones?
<@wumpus>
I don't know anymore
< phantomcircuit>
i've been trying to improve the wallet
<@wumpus>
yes, thanks for that
< phantomcircuit>
unfortunately yeah it's a lot of running around in the weeds trying to fix things up... to make it easier to do bigger things
<@wumpus>
all in all the utxo/coin based approach does cause a lot of non-trivial difficulties, both at the node level (utxo set sprawl) and in the wallet
< gmaxwell>
in any case, I didn't bring any of this up here to complain-- I brought it up because initially I couldn't figure out the cause, and then updated when I did.
< gmaxwell>
it avoids a lot of replay problems however, which avoids other kinds of sprawl
<@wumpus>
no, I'm not trying to complain either, but sometimes I just don't know anymore
< gmaxwell>
well, from another perspective-- we have no deadline. These same issues existed in 2011 (actually, much worse), but it didn't matter because hardly anyone used the system. :)
<@wumpus>
but that's part of what worries me, yes these same issues existed in 2011
<@wumpus>
no, no deadline, but e.g. the utxo growth is worrying
<@wumpus>
we're running against (soft) walls
< gmaxwell>
yes, I'm concerned about utxo growth.
< gmaxwell>
I happened to chart data from all those reindexing tests I just did, and the time to verify a block is increasing (for the same amount of txn data).
<@wumpus>
and all the while joe user is complaining about lack of *scaling*, while the system is already seemingly bursting at its seams
< gmaxwell>
I was theorizing that this was from polylog behavior in the database and worrying, but phantomcircuit gave an alternative argument that the reduction in spammy transactions relative to non-spammy ones may be resulting in lower cache hitrates.
< gmaxwell>
so I'm going to try to figure out if thats the case.
< gmaxwell>
(by increasing, I mean increasing enough that it was clearly visible on a last-8064 block plot)
<@wumpus>
"the time to verify a block is increasing (for the same amount of txn data" yes, indeed, I also intended to plot time per block, but that much was clear just from looking at timestamps :(
< gmaxwell>
well the difference between 1MB and smaller blocks was expected, but the increase still existed when looking at only large blocks.
<@wumpus>
yes
<@wumpus>
recent blocks verify slower
<@wumpus>
even of the same size
<@wumpus>
so *let alone* what bigger blocks would do
< gmaxwell>
At least segwit doesn't increase the worst case amount of utxo growth or accessing...
<@wumpus>
yes I see: 96e8d120336cf4312cd5f42ba2f9aff17d4ad414
<@wumpus>
that was my own stupid idea probably
< MarcoFalke>
Up to now I have not seen proof that this made things considerably worse
< MarcoFalke>
Someone should come up with a framework that mimics common wallet use cases
<@wumpus>
ok
< MarcoFalke>
e.g 'exchange': something with a huge in-out volume/count
< MarcoFalke>
then a "normal user": always waits for confirmations, low volume
<@wumpus>
right, that would make sense, for a project taking the wallet part seriously
< MarcoFalke>
and then different behaviors like 'always send small coins' (pay for coffee each day)
< MarcoFalke>
and 'receive large coins' (get paid in btc)
< MarcoFalke>
etc
<@wumpus>
right.
<@wumpus>
gmaxwell mentioned above that he did see some reports of worse behavior after the change
<@wumpus>
now it's not clear whether to revert it or not
< gmaxwell>
We received an issue with a business with a large wallet reporting specifically the expected behavior there.
<@wumpus>
why do businesses with large wallets never help with the wallet development?
<@wumpus>
it seems it would be in their best interest
< gmaxwell>
because there are very few left, most are using fully hosted things that have custom software.
<@wumpus>
but I don't think this was even reported on github
< gmaxwell>
I thought it was.
<@wumpus>
it's just frustrating
< MarcoFalke>
it was
<@wumpus>
oh?
< gmaxwell>
And utxo growth coincided with it. This was previously discussed in meetings IIRC and then ... I'm not sure what happened.
<@wumpus>
that's not important. What is important is what to do now
<@wumpus>
revert it?
< MarcoFalke>
#7664 #7657
< gmaxwell>
maybe it was dropped for good reason. I just can't recall.
< gmaxwell>
At the end of the day, the pruning itself was correcting a bug. The potential issue is that the bug was covering up that the non-bugged behavior is bad (and the simulations showed that it, as I said, would grind down wallets into lots of tiny coins)
< MarcoFalke>
Imo it shouldn't matter if we revert it or not. I haven't yet seen simulations which prove that either option would confer an advantage/disadvantage
< gmaxwell>
MarcoFalke: IIRC there were simulations posted that showed that the behavior caused far more utxo.
< MarcoFalke>
No one verified that the simulations were implemented according to the current coin selection code
< MarcoFalke>
I have seen single satoshi outputs in those simulations
< MarcoFalke>
which does not happen with our code
< MarcoFalke>
Also the author found a bug in the implementation about two weeks ago or so
< gmaxwell>
The result is also intuitive to me. It results in making the smallest possible outputs. This is pessimal when you consider that keeping the utxo set small wants, for a given user's balance, the largest-value outputs.
< MarcoFalke>
Ideally, we'd implement the wallet benchmark framework in cpp and have it just use our wallet code?
<@wumpus>
if it would be modular enough for that :)
<@wumpus>
possibly separate the coin selection logic out to a separate file, which could also be used by the simulation framework
<@wumpus>
it could use an abstraction of a list of coins with some properties instead of a wallet as input
<@wumpus>
it would make some things a lot easier, like trying out what the algo does in certain circumstances, without actually having to fake a wallet
< gmaxwell>
An even better behavior would be to add all other inputs to the same addresses being spent, up to some limit to prevent very large transactions, to sweep them up. This is a privacy-improving strategy, as well as a fee-minimizing strategy under an assumption that fees will increase in the future. My guess is that I didn't NACK the pruning change because in and of itself it was right, and the grinding should be fixed by something like this.
< gmaxwell>
but it went off my radar, also changes to coinselection are a pain because a bunch of the tests freaking hard-code the behavior.
<@wumpus>
if that was the only thing making changes to coin selection a pain :)
<@wumpus>
it's just almost impossible to agree what is better behavior, which is what stalled #4906 for so long and resulted in it eventually being merged, probably wrongly
<@wumpus>
so a) it should be better behavior b) it needs to be implemented correctly, and indeed, we have no good tests or simulation framework
<@wumpus>
sweeping up as many inputs to the same address as possible would make sense
<@wumpus>
there is no privacy benefit from having more spends to one address
<@wumpus>
then again it assumes bad-case behavior from the user, reusing addresses
< gmaxwell>
I don't want to be hard on you, but Xekyo ran simulations which showed 25% to 400% increase in the UTXO set size under some simulation loads.
<@wumpus>
I think it would make sense to design a standard interface for wallet coin selection algorithms, separately from any specific wallet
< gmaxwell>
We had simulations, they might not have been ideal.
<@wumpus>
so that research and simulation can be done outside of the specific scope of bitcoin core
< gmaxwell>
They showed bad behavior from this change. From intuition I predicted specifically this bad behavior and asked for the simulations. Our issue is not that we need more simulations.
< gmaxwell>
(well we do, but that didn't help here!)
<@wumpus>
ok, never mind then
< gmaxwell>
and still MarcoFalke is not agreeing that there is a potential issue.
<@wumpus>
just trying to think of some things that might help, out of the blue
< gmaxwell>
yea.
<@wumpus>
if it's all hopeless I'll just shut up
< gmaxwell>
xekyo also identified a useful strategy, making the coinselection target double the value instead of the value. This results in change of useful sizes.
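A toy illustration of that "double the target" heuristic (numbers are made up for illustration):

```
payment          = 0.3 BTC
selection target = 2 * 0.3 = 0.6 BTC of inputs
change           ~ 0.3 BTC minus fees -- a usefully sized coin, rather than the
                   near-dust residue a minimal selection tends to leave behind
```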
<@wumpus>
*more* simulations may not be a solution, but better, correctly implemented ones would
<@wumpus>
if the answer of this is up to whether the simulation is implemented correctly, then we're not really helped a bit, I agree
< gmaxwell>
re: assumes bad-case behavior from the user, well, indeed, it has no effect if the user doesn't reuse. But no harm, and reuse is ubiquitous. The (vast?) majority of bitcoins in circulation are held in reused addresses.
<@wumpus>
the vast majority of those may not be using bitcoin core in the first place
<@wumpus>
many wallets encourage address reuse, or can only do address reuse
<@wumpus>
that rule may help them a lot :)
< gmaxwell>
pretty much every active user has some reuse though, consider that thing that sends tips for commits.
<@wumpus>
anyhow that speaks too of generalizing the coin selection algorithm beyond the specific software level
< gmaxwell>
means that all of us have reuse, even if we otherwise act perfectly ourselves. :)
<@wumpus>
yes, that is true, ideally that should use some BIP32 construction
<@wumpus>
to be honest I always use manual coin selection
<@wumpus>
(apart from testing)
< gmaxwell>
I do too. (and always spend all the coins connected to an address, when I spend any)
<@wumpus>
and that should ideally coincide with changing the donation address
< gmaxwell>
they've gotta get spent someday or the coins are lost, fees are likely lower now than in the future... and spending at once avoids privacy harm from constantly interlinking inside the wallet.
< gmaxwell>
yea, I do that with my bct address. change the one on the site, when I spend all the coins.
< gmaxwell>
for that tip commit thing, I just haven't been spending its payments, I think... as I dunno how to change it.
<@wumpus>
they have a little web interface where you can log in and change the address IIRC
< gmaxwell>
I wonder what the average coin value is in the utxo set and how thats evolved over time. (it's not quite as simple as the set size evolution over time, since more coins have also been introduced)
< gmaxwell>
"errors": "WARNING: abnormally high number of blocks generated, 4477 blocks received in the last 4 hours (24 expected)"
< gmaxwell>
lol
<@wumpus>
yes that would be an interesting utxo statistic
< murch>
I was actually the one that provided the simulation in 4096
<@wumpus>
I mean there are obvious concerns such as CPU usage and memory usage, fee minimization, but also more 'tragedy of the commons' issues such as privacy concerns, utxo growth concerns (though that also coincides with performance a bit, keeping the wallet's utxo set small keeps the global set also smaller)
< murch>
Actually, I think that the pruning should have had little effect, as there should only be anything to prune when a second pass is needed. Otherwise, since the last added UTXO would be the smallest, there should not be any UTXO prunable.
<@wumpus>
it has very little effect
<@wumpus>
(but apparently visible in some cases)
<@wumpus>
though of course it's not entirely certain whether the reported problems were due to this change, or another change, or a change in the general usage of bitcoin not directly related to a change in our wallet
<@wumpus>
or a combination of factors including this change
<@wumpus>
in any case, if it has 'little effect' that's also enough reason to revert, I think. If it achieves hardly anything good, it shouldn't have been changed.
< murch>
wumpus: Completely agree.
< murch>
Well, I hope that I'll be able to provide some real improvements in the following months. :)
< murch>
Although I was surprised how well the current implementation does. It's not trivial to improve on it, and not make it deterministic in some fashion. :)
< murch>
Oh, and providing some metrics to compare Coin Selection approaches is the focus of the work
<@wumpus>
I hope so too!
<@wumpus>
good to hear the current approach is fairly good, on the other hand, that is going to make it even harder to replace by something better :)
<@wumpus>
I think the most common complaints are that it does bad with very large wallets, or containing lots of small inputs
< murch>
By the way, wumpus, do you know of any other wallet usage data? I got the moneypot.com wallet from #4096, but more good testcases would be grand.
<@wumpus>
apart from that it's not something people usually stumble upon
<@wumpus>
no, unfortunately not
<@wumpus>
I don't have any realistic big wallets
< murch>
Okay, too bad. :)
<@wumpus>
(this is also an issue which has existed from 2011, which affects both coin selection and general wallet scaling work, people with big wallets are very reluctant to share them even with developers :-) )
< murch>
Yeah, I'd imagine. :-/
<@wumpus>
which is also why I always encourage companies which have to cope with them to get involved in development themselves, that avoids having to share anything with third parties
< murch>
The code in wallet.cpp is pretty hard to understand, too. I'd love to refactor it, maybe even to just understand it properly…
<@wumpus>
phantomcircuit is working on that too
< murch>
oh really?!
< murch>
mh, just refactoring it, or also improving?
<@wumpus>
in a way it's kind of a chicken and egg problem though, people rarely dare to change things *because* they're not sure how they work exactly, but evaluating refactors is also made hard by that because it's non-trivial to see that behavior stays the same
<@wumpus>
both, but not coin selection AFAIK
< murch>
Yeah, on the other hand it's in a state where it is really hard to add unittests to check whether the behavior remains unchanged.
< murch>
okay, excellent
<@wumpus>
yes, that too.
<@wumpus>
and years of proposals of doing things differently have made me think that it being hard to understand is not so much a result of the organization of wallet.cpp, but that the underlying subject matter is hard
<@wumpus>
people tend to read the code and then blame that the code is difficult, but it's not entirely clear whether it is *unnecessarily* difficult
<@wumpus>
it's clearly a complex problem too
<@wumpus>
maybe it could be divided up into more manageable sub-problems though
<@wumpus>
the thing is, before wallet.cpp was written, cryptocurrency wallets were a problem no one had ever considered before, so it was necessarily ad-hoc
<@wumpus>
this is very different from say, a web server, where everyone knows what is expected from it, how it usually should be structured, and so on
< murch>
wumpus: It's surely complicated, but there are some methods that could be extracted, and some things are just somewhat obfuscated by very brief variable names. E.g. I've been looking at how and when the fee gets decided for the transaction and it's still baffling me.
<@wumpus>
so the refactor is also a discovery process, learning how to best structure this novel application
<@wumpus>
well yes it's complicated - but does it need to be complicated, is there a less complex way that would be just as good ,or better? would that simpler way still satisfy the requirements? (and in some cases: what are the requirements, even?)
< murch>
wumpus: I think it could be more comprehensible if key management and coin selection were separated into different classes for example. They are pretty far apart for being part of one class.
<@wumpus>
things like variable names are trivialities, sure, on first reading of the code it helps to have better names, but once you learn what they are it doesn't really matter what they are anymore (e.g. mathematicians do fine with names such as a b c :)
< murch>
Most of Coin Selection could actually be completely static, as it basically is just reliant on the spending target, the UTXO set and the desired fee level.
<@wumpus>
yes coin selection is a clear unit
<@wumpus>
I think it would be nice to factor it out to a separate unit with a separate interface, that just gets the information that it needs and returns the selected list
<@wumpus>
this would also be useful for simulations
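A hypothetical shape such an interface could take: the selection unit sees only an abstract list of candidate coins plus the target and fee rate, so a simulator can drive it without faking a wallet. Types and names are illustrative, not an existing Bitcoin Core API:

```cpp
// Candidate coin as the selection algorithm would see it: just the properties
// that matter for selection, detached from CWallet/CWalletTx.
struct SelectableCoin {
    CAmount value;           // amount of the output
    int confirmations;       // 0 if unconfirmed
    int unconfirmed_depth;   // length of its unconfirmed ancestor chain
    size_t input_size;       // estimated serialized size when spent
};

struct SelectionResult {
    std::vector<size_t> selected;  // indices into the candidate list
    CAmount change;                // value left over after target and fee
};

// A pure function of its inputs, which is what makes it easy to exercise from
// a simulation framework or unit tests.
bool SelectCoins(const std::vector<SelectableCoin>& candidates,
                 CAmount target,
                 const CFeeRate& fee_rate,
                 SelectionResult& result);
```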
< murch>
well, wallet.cpp is just a bit of a moloch to delve into at 3500 LOC. ;)
< murch>
wumpus: exactly
<@wumpus>
I've seen so much worse at some companies I've worked at :-)
< murch>
Also, adding, watching and tracking UTXO could probably be separated off
< murch>
I am not nearly as familiar with the github repository as I'd like. :-/
<@wumpus>
the idea is quite nice, but it's *hard* to see what its interaction with different things would be, such as reorganizations and conflicts
< murch>
I should probably check out all issues tagged with "wallet" soon.
<@wumpus>
yes :)
<@wumpus>
in any case, coin selection in itself is enough of a subject to fill a master's thesis I think, I'd warn against scope creep, or trying to fix the world at once :)
< murch>
hehe
< murch>
yeah, it's a bit overwhelming at times. I've been reading about Subset Sum solvers the past days, and delving through wallet.cpp to understand how fees are handled.
< murch>
Anyway, back to work. ;) TTYL
<@wumpus>
later :) hope you manage to make head and tail of it
<@wumpus>
should this be a hidden/debug option? if not, it should be documented better
< jonasschnelli>
sipa, wumpus: Do I get this right: -reindex does re-create the block index and recreates the utxo set (including all signature validation) while -reindex-chainstate will only recreate the utxo set with the current block index?
< sdaftuar>
wumpus: -blockmaxcost is supposed to be the recommended way to configure the mining code, going forward
< sdaftuar>
wumpus: and -blockmaxsize is in the we-want-to-deprecate-in-the-future category
< sdaftuar>
(the mining code is optimizing for fee-per-block-cost, not fee-per-serialized-byte)
< sdaftuar>
but yes, we need to document all this before release
< sdaftuar>
heh, yeah that --help message text is not very informative
<@wumpus>
sdaftuar: a translator was wondering what kind of cost this was, I think they assume that it's something with fee
<@wumpus>
jonasschnelli: correct
< sdaftuar>
oops. yeah that is bad language
< sdaftuar>
what's the timeline for improving that text, is it too late for making changes now that affect translation?
< sdaftuar>
i guess we can just document in the release notes?
<@wumpus>
well given that translators are confused by it (and hence unable to translate it effectively) I wouldn't mind changing the message
<@wumpus>
there also needs to be mention in the release notes, but release notes aren't really documentation, just 'news'
<@wumpus>
e.g. you wouldn't assume that someone that is looking for documentation for an option to go through all previous release notes to find it documented
< sdaftuar>
a simple change might just be to reference the BIP where it's defined; that wouldn't necessarily impose an additional burden on translators if it's just "(BIP 141)" or something at the end of it
<@wumpus>
what needs to be made clear is that this is just an abstract cost
<@wumpus>
referencing the BIP would make sense, yes
<@wumpus>
in any case, even if it's too late for this to be translated, a better english documentation message would go further than a confusing translated one :)
< sdaftuar>
agreed!
< sdaftuar>
perhaps block cost shouldn't be translated at all actually, if we're referencing the BIP
< sdaftuar>
i updated #8294 so we don't forget this
< MarcoFalke>
gmaxwell: I don't think lack of progress in improving of coinselection is due to the test hardcoding the behavior.
< MarcoFalke>
Right now they save us from introducing accidental regressions, which is nice
< MarcoFalke>
If someone comes up with a new idea, the unit test need to go anyway and will be replaced by new ones, so not really an issue
< MarcoFalke>
Also the 25% to 400% performance loss may be flawed, as the coin generator was not adjusted to what happens in the real network
< MarcoFalke>
(re simulation)
< MarcoFalke>
I remember there was a site which grouped together every address that was ever in the same wallet. (Combining whenever two addresses happen to be used in the inputs of the same tx)
< MarcoFalke>
This could be useful to verify coin generators in simulations are doing their job properly
< morcos>
wumpus: gmaxwell: i don’t feel strongly enough to make a big argument about it, but if it was up to me i wouldn’t bother reverting 4906. I agree that we didn’t have sufficient justification to merge it in the first place, but we already crossed that bridge, and discussed it after the fact (more than once). I’m not sure we’re confident enough that it's clearly worse to risk making the same mistake in the other direction by reverting it
< morcos>
I suppose my point is I’d err on the side of being conservative with changes to coin selection and only making them when someone has put in the effort to really study them. Since 0.12 has been out for a while running with 4906, it now seems like a change to me to revert it.
< jonasschnelli>
sipa: A full IBD (against random peers) with current master took ~3h4min, a -reindex-chainstate took ~1h10min
<@wumpus>
morcos: I don't think 'we crossed that bridge' is a good argument, if this was no improvement, it should not have been merged, and it should be reverted (it should have been reverted a long time ago!)
<@wumpus>
morcos: I don't see any reason to keep the code if it is not an improvement
<@wumpus>
and if we should be conservative about changes to coin selection, which we should have been in the first place, then again we shouldn't have this change without understanding it
< morcos>
wumpus: ok like i said i won't argue, it just makes me nervous that somehow reverting could be worse (interaction with something else that changed that we're not considering), but i don't have any reason to believe that
<@wumpus>
I don't think having the mistake out in 0.12 is a reason to keep it now
<@wumpus>
don't get attached to your mistakes :)
< morcos>
i won't get attached to somebody's mistakes :)
<@wumpus>
then again I don't feel strongly about it either, but I'm tired of it coming up every time
<@wumpus>
let's just make a damn decision about it
<@wumpus>
I'm sure if we decide not to revert it now, then someone will bring it up again after a month or so
<@wumpus>
I don't want this following me around forever
<@wumpus>
I don't see how this could interact with anything else
< bsm1175321>
Woah. I just built github master and on testnet something has gone horribly wrong: I have a negative balance!
< morcos>
wumpus: ok 3rd topic. i thought i heard you guys mention that something has gotten slower recently. is it just reindex, or actually something with connecting blocks?
< bsm1175321>
Next question: I need to fund transactions and be sure that their txid is not malleable. I was hoping 'fundrawtransaction' had an option to take only segwit inputs, but it seems it does not. What's the best way to achieve this?
<@wumpus>
morcos: AFAIK the summary there was: gmaxwell accidentally -txindex
< sipa>
morcos: and nobody recently tested -reindex with default -dbcache
< morcos>
wumpus: oh i definitely missed that conclusion. :)
<@wumpus>
morcos: yes, more recent blocks are slower to validate, compared to older blocks of the same size - but this isn't a regression in the code
< morcos>
wumpus: wait, yes its that last thing i'm asking about
< morcos>
why is that?
<@wumpus>
it's likely a by-product of increasing utxo size
< morcos>
and that affecting cache hit rate?
<@wumpus>
and the utxo database is reaching the limit of what leveldb can handle w/ good performance, e.g. now lots of crazy seeking and reading all over your disk, which can't be good for performance
< morcos>
oh
<@wumpus>
yes that was another hypothesis
<@wumpus>
<gmaxwell> I was theorizing that this was from polylog behavior in the database and worrying, but phantomcircuit gave an alternative argument that the reduction in spammy transactions relative to non-spammy ones may be resulting in lower cache hitrates.
< morcos>
eyeballing the numbers i'm not seeing a noticeable slowdown since march
<@wumpus>
which probably means it's a combination of both
<@wumpus>
in any case the increased default dbcache should alleviate either problem, at least for a while
< morcos>
ok, yeah my numbers are with a big dbcache (2G)
<@wumpus>
there may be a better way to structure the utxo set on disk - but having looked into other databases it seems that leveldb has the best performance for our use, at least during sync
<@wumpus>
if you set a huge dbcache you won't notice much; as sipa says, that's probably why it was a surprise that reindexing was so slow with the default dbcache
< sipa>
for 0.14 we should prioritize working on chainstate backups
< sipa>
(and vigorously fight the potential push for servers with 'helpful' backups to download...)
< bsm1175321>
I need to fund transactions and be sure that their txid is not malleable. I was hoping 'fundrawtransaction' had an option to take only segwit inputs, but it seems it does not. What's the best way to achieve this? (I'm willing to write such an option if there isn't a better way) @sipa?
< sipa>
bsm1175321: patches welcome :)
< sipa>
seems like a very useful feature
< bsm1175321>
Ok so the answer is that this isn't possible currently?
< sipa>
for now, you'll need listunspent
< bsm1175321>
Cool, I'll see what I can do.
< sipa>
i believe i've heard someone else ask for the same feature
< bsm1175321>
One complication with such an option is that it's possible for your wallet to have enough funds to fund the transaction, but not enough segregated witness inputs. A solution would be to create an intermediate transaction whose outputs have segregated witness. Thoughts?
< sipa>
fundrawtransaction shouldn't do such a thing
< sipa>
imho
< sipa>
but it can fail with a nice error code, instructing you to clean up your wallet
< bsm1175321>
You'd rather it fail in such a case? That's fine with me too...
< sipa>
yes, insufficient funds :)
< sipa>
(or perhaps you should use separate wallets if dealing with that situation is hard)
< bsm1175321>
What would you expect a user to do in such a case? Is there a procedure to "clean up your wallet"? That people will know?
< sipa>
it's no different from the situation where you have a wallet with some watchonly coins, and the non-watchonly together are not enough to pay
< bsm1175321>
True, sort of, except you have the funds. I'll make a descriptive error message.
< sipa>
i think the real solution is that if you have a need for segwit-only inputs, you run with a segwit-only wallet
< bsm1175321>
In effect that's what I'll do. But have to start from a non-segwit wallet. Or is there a spent-to-myself-segwit-out wallet RPC call?
< sipa>
you can just use sendtoaddress to send from the old wallet to the new?
< sipa>
also, wallet integration of segwit is really preliminary at this point
< sipa>
for example, change outputs won't be segwit
< bsm1175321>
ooohhh...hmmm...
< sipa>
(which they should be when it becomes ready for production, but that's not done yet)
< bsm1175321>
Well if you 'fundrawtransaction ... segwitOnly' I can make it use a segwit change address.
< bsm1175321>
Or you can add an explicit change address
< sipa>
true
< bsm1175321>
If no objections, I'll make the 'segwitOnly' option take only segwit inputs, and generate segwit change.
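A hedged sketch of the kind of input filter such a 'segwitOnly' option might apply when gathering spendable coins: keep an output only if its scriptPubKey is a native witness program, or a P2SH output whose known redeemScript is one. This is illustrative only, written against 0.13-era interfaces; the option itself did not exist at the time of this discussion:

```cpp
// Would this wallet output, if spent, have a non-malleable (segwit) signature?
// Error handling and locking omitted; sketch only.
bool IsSegwitSpendable(const CWallet& wallet, const CScript& scriptPubKey)
{
    int version;
    std::vector<unsigned char> program;
    if (scriptPubKey.IsWitnessProgram(version, program)) {
        return true;  // native P2WPKH / P2WSH
    }
    // P2SH: look up the redeemScript we hold and check whether it is a
    // witness program (the P2SH-wrapped form addwitnessaddress produces).
    CTxDestination dest;
    if (ExtractDestination(scriptPubKey, dest)) {
        if (const CScriptID* scriptID = boost::get<CScriptID>(&dest)) {
            CScript redeemScript;
            if (wallet.GetCScript(*scriptID, redeemScript)) {
                return redeemScript.IsWitnessProgram(version, program);
            }
        }
    }
    return false;
}
```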
< instagibbs>
what is/is there a way to link to an external secp library during compilation?
< bsm1175321>
sipa: do you have a "wallet wishlist" for segwit? I'm seeing that a lot of infrastructure is missing here.
< sipa>
bsm1175321: things like default witness addresses
< sipa>
post softfork
< bsm1175321>
I'm confused regarding addresses.
< bsm1175321>
What is the output of 'addwitnessaddress'? (what kind of address is that?)
< sipa>
p2sh
< bsm1175321>
Ok, duh. (hadn't made a testnet p2sh before, was confused by the 2)
< bsm1175321>
Ok I think I understand. A wallet bitcoin.conf really should have 3 possible values then: non-segwit, segwit, and segwit-in-p2sh.
< bsm1175321>
*wallet setting
< bsm1175321>
Do I understand correctly: The point of P2SH nesting is that non-upgraded wallets will check the destinations (though not the signatures)?
< jl2012>
bsm1175321: I'd say the point of P2SH nesting is to allow non-upgraded wallets to send money to upgraded wallet
< bsm1175321>
Can't they always do that though? Do you mean the other way around?
< arubi>
it's made for upgraded nodes to be able to accept payment from unupgraded nodes to a segwit scriptpubkey, yes
< jl2012>
for upgraded -> not upgraded, you just use P2PKH address (1xyz......)
< arubi>
bsm1175321, I'm curious, what kind of malleability are you trying to avoid? are you the one signing the transaction? can't you use normal inputs too and just sign it properly?
< bsm1175321>
arubi: I'm going to be handing out txids, including txids that may be in the mempool, not mined yet.
< arubi>
signed by you and you only? you could still "maul"(?) if you wanted to, even with segwit
< jl2012>
arubi: without BIP62, that's not reliable
< bsm1175321>
The non-upgraded nodes that receive funds from an upgraded node doesn't verify signatures though. (correct?)
< arubi>
jl2012, talking about the anyonecanpay|single sighash, sign once, permute how many you'd like
< arubi>
bsm1175321, the script that they see doesn't talk about checksigs, so they don't check any signatures
< jl2012>
arubi: there are still other forms of third party malleability, e.g. non-canonical push
< bsm1175321>
arubi: signed by me and me only.
< arubi>
jl2012, sure, using standard scripts is a must here if you don't want 3rd parties doing things, but as a single signer, you could still do it to your own txs, even with segwit
< jl2012>
well, if you want to guarantee non-malleability to only yourself, segwit is the only way
< bsm1175321>
jl2012: can you elaborate? Malleation of the script (which is part of the witness data) doesn't change the txid, and I don't think I care about that.
< arubi>
jl2012, true. I am asking because bsm1175321 explains that he will hand out txids (not the transactions themselves, as I understand)
< arubi>
so as a single signer, he could still cause malleability, even if he manages to convince the other party all inputs are segwit
< arubi>
*and standard scripts, or even better, even only with p2wpkh
< jl2012>
arubi: that's double-spending
< jl2012>
not malleability
< arubi>
how so? payments are still paid, but it's the order of the inputs/outputs that's changed
< arubi>
you're using the same inputs
< bsm1175321>
arubi: Are you saying I could change the order of outputs, keeping the same segwit txid?
< arubi>
no, you're changing it
< bsm1175321>
Ok then someone looking for one txid I handed them won't see it at all, and there's no point in me giving the wrong txid to someone. (A txid which never gets mined because I malleated it is useless)
< arubi>
right
< arubi>
they should be checking their addresses for incoming transactions, and that's it. txid is something that's only relevant when spending
< arubi>
*should only be, I guess
< bsm1175321>
However if they act on a txid in the mempool, and then I broadcast a malleated version which gets mined. That's your usual double spend.
< arubi>
how is it a double spend? there is no spending from the malleated version
< pigeons>
well if you only malleate the signature its not a double spend
< bsm1175321>
Let's say I malleate the output order. This is a weird thing to do. I'll have to think on it...
< arubi>
same can be done for inputs
< pigeons>
if you have someone taking action on the txid of the unconfirmed transaction they will be affected, otherwise not, right
< arubi>
a spender still has to reference a txid as input, so even if the outputs order isn't changed, the input is invalid in case of malleability
< arubi>
oh you mean if the bad one wasn't mined, right
< bsm1175321>
In this case, I'm proving control of coins. Malleating my own txn is shooting myself in the foot.
< arubi>
I'm guessing, you're proving control by announcing the txid before the transaction is seen on the network?
< bsm1175321>
Not *before* but it could be simultaneous. And I'm wondering what the consequences are of fiddling around...
< bsm1175321>
But If I claim to prove something with a txid, and you look for it on the network, I'm only hurting myself by messing with it. I'm having trouble thinking of any negative consequences here.
< arubi>
why not use signed messages to prove ownership of addresses? as long as you commit to an address, which is kinda like committing to specific coins
< arubi>
hm. I guess if it's a p2sh that's not so simple, because you'd have to disclose the redeemscript
< adiabat>
hey, I'm spamming testnet and have some unexpected behavior
< adiabat>
I think I get what "size", "cost", "strippedsize", "Vsize" all mean
< adiabat>
"cost" is basically "Vsize" * 4
< adiabat>
so a block "cost" must be < 4M
< adiabat>
(<=, whatever)
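For reference, the BIP 141 definitions behind those numbers (what the 0.13-era options call "cost" was later renamed "weight"):

```
cost (weight) = 3 * strippedsize + totalsize     <= 4,000,000 per block
vsize         = ceil(cost / 4)                   <= 1,000,000 vbytes per block
```

For a transaction with no witness data, strippedsize equals totalsize, so its cost is exactly 4x its serialized size, which is why cost looks like "vsize * 4".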
< adiabat>
so in .conf, blockmaxsize should set the cost limit of created blocks
< adiabat>
but it seems to be targeting "size" instead
< bsm1175321>
arubi: I want to be able to commit to coins regardless of the address type, and as you say, I'd need the redeemScript.
< kvnn>
Hello everyone. I'm offering a 1BTC bounty for 3&4 here: https://github.com/drivechain-project/docs . Please let me know if you are interested. (if this is not okay in this channel I apologize & will discontinue posting about it)
< sdaftuar_>
adiabat: -blockmaxsize continues to refer to total serialized bytes of a block, counting witness and non-witness parts the same. -blockmaxcost is the option that specifies the limit you probably want here.
< adiabat>
hmmm ok
< adiabat>
so there's both
< sdaftuar_>
Yeah. The hope is that -blockmaxsize is deprecated in the future.
< sdaftuar_>
(My preference was to change the semantics of -blockmaxsize to refer to segwit's virtual size, but that's not what we ended up with)
< luke-jr>
well, depends what you want to limit
< roasbeef>
segwit miners on testnet seem to be limiting to ~750k cost
< luke-jr>
sdaftuar_: then miners would need to set blockmaxsize to 250k to avoid making blocks >1M
< gmaxwell>
06:43 < MarcoFalke> gmaxwell: I don't think lack of progress in improving of coinselection is due to the test hardcoding the behavior.
< luke-jr>
the goal of blockmaxsize is to avoid blocks larger than the size. cost/vsize doesn't matter here
< gmaxwell>
I can tell you that personally I've written improvements that I haven't PRed because they required throwing out all the existing tests.
< sdaftuar_>
My preference was to introduce a new option that would control serialized bytes
< luke-jr>
sdaftuar_: why a new oppppption when we have one for it already?
< sdaftuar_>
Because the mining code doesn't optimize for it
< sdaftuar_>
So it's misleading to suggest it's supported
< luke-jr>
it does what it's supposed to do.
< gmaxwell>
I don't think it makes sense to even have a serialized size setting, we don't have a setting to limit the size of the sighashed bytes, or a setting to limit the size of the block's CTransaction encoding. We don't have a limit to control the size of the compactblock form, or a limit to control the size of a zlib compressed block.
< gmaxwell>
But I don't bother complaining because luke was going pyric over this before, and it shouldn't hurt that much having it either.
< luke-jr>
gmaxwell: at this time, serialised size is a critical factor in practice
< sdaftuar_>
gmaxwell: +1
< gmaxwell>
luke-jr: critical for what? it doesn't reflect block propagation time especially well.
< luke-jr>
on the p2p network it sure does..
< gmaxwell>
validation time is more closely related to the number of utxo operations and how well relayed the txns in the block were.
< gmaxwell>
luke-jr: we have BIP152 now.
< luke-jr>
not deployed, unfortunately.
< luke-jr>
to be clear, I'm all for removing blockmaxsize once it doesn't matter and isn't used.
< luke-jr>
hence why I don't think optimising block creation for it is important
< gmaxwell>
(but even without it, serialized size is still not the ideal predictor of propagation time.)
< luke-jr>
I just worry that we'll end up with >1 MB blocks before the network can handle it sanely
< gmaxwell>
K. indeed 0.12 doesn't have BIP152. Perhaps that does suggest that it should default to four million in 0.13?
< gmaxwell>
luke-jr: thats inevitable...
< gmaxwell>
see also the utxo comments by wumpus last night.
< sturles>
Am I correct that the 0.13 HD wallet implementation will only support new wallets, and only support hardened keys?
< gmaxwell>
sturles: correct.
< sturles>
Will it support importing private keys from an old style wallet?
< gmaxwell>
I believe it does, I've not actually tested that!
< sturles>
A HD wallet won't work with older versions, right? Should use the opportunity to switch DB for the wallet to something > libdb 4.8?
< gmaxwell>
Do you want software that never makes progress? Thats how you get software that never makes progress.
< gmaxwell>
It can't use a later libdb or it will make all non-HD wallets also unreadable by other versions.
< helo>
i believe a hd wallet should work with older versions
< gmaxwell>
helo: hows that?
< sturles>
Yes, I know. I guess it is impossible to use different versions depending on wallet type, e.g. via dlopen?
< gmaxwell>
sturles: in theory possible but that would be a lot of work without any benefit.
< helo>
it will work to a limited extent... hd keys already added will be visible
< helo>
untested, ofc
< gmaxwell>
if so, thats a bug. it should get rejected due to having a too new version.
< Lightsword>
can we just migrate to sqlite?
< gmaxwell>
if it isn't then you could put yourself in a weird state.
< gmaxwell>
I will have to start stabbing people.
< helo>
nah, that's not necessary. the version check probably functions as intended :)
< gmaxwell>
hah
< sturles>
I'd say it is beneficial to switch to a newer, supported version of libdb
< gmaxwell>
the latest versions have incompatible licenses.
< sturles>
Oh.
< gmaxwell>
there is newer than 4.8 that is okay, but the latest stuff has a much stronger copyleft than we'd normally use.
< gmaxwell>
really the 'database' isn't actually used for much--- the wallet is held almost entirely in memory, and the database is only really used for persistence.
< luke-jr>
eh, -walletupgrade won't work? O.o
< Lightsword>
would working on migrating the wallet to sqlite be something worthwhile at this point?
< CubicEarth>
Block validation is, theoretically, very amenable to parallelization, correct? Is the main serial component the depth of the merkle tree?
< gmaxwell>
signature validation is, and bitcoin core runs that in parallel... but normally almost all of the signatures in a block are already validated before it shows up.
< gmaxwell>
the general database handling ends up taking much of the time, which isn't really parallelizable.
< CubicEarth>
gmaxwell: so thinking about the initial chain sync and validation, it could (with lots and lots of careful work) be made to harness GPU's, and spread the signature validation across *many* cores? I'm not familiar with the nature of the general database handling you are referring to, but for the initial sync is that mainly keeping track of the utxo's as it increments through the blocks?
< gmaxwell>
I've seen no reason to believe that a gpu would help at all, GPU cores aren't particularly good at the work of validating. (64 bit arithmetic helps a lot!)
< gmaxwell>
and nodes don't even verify the signatures far back in the chain.
< gmaxwell>
but sure more work could be done to extract more parallelism out of it.
< gmaxwell>
but even with signature validation turned off completely, with default settings a reindex takes almost 9 hours currently.
< gmaxwell>
increasing the db cache to a really huge size (such that the sync runs entirely out of ram) lets the whole sync _with_ signature checking run in about 3.5 hours.
< CubicEarth>
Is the default 100MB? I never knew that!! I'll be curious too see what kind of speedup I can achieve on an i5 - dual core. Are we talking 100GB of ram here, or would setting it to 4 or 8 GB make a substantial difference?
< sipa>
setting it to 2 GB is more than sufficient
< sipa>
higher numbers don't matter much
< CubicEarth>
Is it a rolling-window thing?
< sipa>
no
< sipa>
it's a cache
< sipa>
when the cache is full, we write it to disk
< sipa>
and start over
< sipa>
very silly
< gmaxwell>
I think you actually need more than 2GB now to get full performance, but 8GB is more than enough.
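For anyone following along, the setting being discussed is the dbcache option (in MiB); a minimal bitcoin.conf example, with the value chosen per the numbers above rather than anything authoritative:

```
# bitcoin.conf -- UTXO/database cache size in MiB (default was 100 at the time,
# being raised to 300 for 0.13). 2000-8000 is the range discussed above for
# keeping the initial sync mostly in RAM.
dbcache=4000
```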
< midnightmagic>
maybe an iovec scatter write might help some systems write faster
< midnightmagic>
or is it just a plain sequential write thing and the rest is leveldb
< CubicEarth>
Well it's easy enough to set it higher. I just never knew. A default of 100MB can make sense for older systems, but if there was place for 'performance tips', that would be a good place to let people know. Maybe everyone already (node operators) knows except for me.
< gmaxwell>
I think a lot of people don't realize what a big difference it makes.
< gmaxwell>
looks like we'll increase the default to 300MB.
< gmaxwell>
wumpus: on the leveldb stuff, without txindex, I'd suggest making that 4mb instead of 2. 4MB was something like 1% faster for me when testing leveldb in isolation.
< gmaxwell>
maybe 2 is a win at the 300mb limit, but with it bumped 4mb would be... and probably is more conservative overall.
< CubicEarth>
I did a fresh sync a week ago, 2013-era i5 dual core laptop. Ubuntu, 4GB of ram, spinning HD. 75Mbps cable internet. It took about 20 hours to finish. What was interesting though was how the last 20 weeks just crawled. Seemed worse than linear, even accounting for the effects of fuller blocks.
< sipa>
yeah, try dbcache 4000 or so
< CubicEarth>
after I order more ram :)
< CubicEarth>
gmaxwell: earlier you wrote "and nodes don't even verify the signatures far back in the chain.". That's due to checkpoints, right? They don't seem like a good thing to rely on. I understand their practicality, but yeah.
< CubicEarth>
I'm just trying to understand the theoretical performance limits... I'm not crusading against checkpoints
< gmaxwell>
well you'd be welcome here to do that, I want them gone and have for years.
< gmaxwell>
But even with them gone, one can skip signature validation during the initial sync for blocks buried by thousands of additional blocks with negligible risk-- consider, if miners went rogue enough to reorg thousands of blocks the system is already screwed... and if it's only during initial sync they'd only trick new nodes in any case.
< gmaxwell>
if it makes a difference between a user running a node or not-- the best choice is clear.
< gmaxwell>
Meanwhile bitcoinxt and classic have disabled signature checking for any block where the miner supplied timestamp is a day back or more... meaning they can be fooled without a reorg at all. And no one seems to care.
< CubicEarth>
Aren't miners only afforded the option to have their timestamp be off by at most 2 hours?
< CubicEarth>
(as an aside)
< gmaxwell>
no, not in the past direction. In the future direction yes.
< gmaxwell>
The past direction is limited by the median of the last 11 blocks-- but that median can be arbitrarily far back.
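A simplified restatement of the two consensus rules being described here (the real checks live in Bitcoin Core's contextual header validation; this is only a sketch):

```cpp
// A block's timestamp must be strictly greater than the median time of the
// previous 11 blocks (which itself may lag real time arbitrarily far), and at
// most 2 hours ahead of our network-adjusted clock.
bool CheckBlockTime(int64_t nBlockTime, int64_t nMedianTimePast, int64_t nAdjustedTime)
{
    if (nBlockTime <= nMedianTimePast) return false;            // too far in the past
    if (nBlockTime > nAdjustedTime + 2 * 60 * 60) return false; // too far in the future
    return true;
}
```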
< CubicEarth>
So practically speaking, the way to detect the issue is when the client said it was 'synced' and you noticed that is was reporting yesterdays date?
< midnightmagic>
.. did they actually commit that change? the >24hr non-t-verify change?
< gmaxwell>
CubicEarth: no, because miners can simply add a correctly timed block immediately after.
< CubicEarth>
That would appear as if the hashrate had fallen precipitously... no blocks for 24hr!
< gmaxwell>
midnightmagic: XT did and released, it's merged in classic, but I don't know if it made it into a release yet.
< gmaxwell>
These folks are extremely and dangerously incompetent.
< gmaxwell>
CubicEarth: no, just a wonky timestamp, miners provide the timestamps and sometimes they're pretty wonky.
< gmaxwell>
and while _syncing_ you have no idea that the timestamps were wonky.
< gmaxwell>
e.g. someone can produce blocks that are -24h, -23h, -22h .... normal.
< midnightmagic>
so.. the peer preference thing means some of the alt-branch software will soon start disconnecting the only nodes that are immune to the attack, making themselves *more* vulnerable to it?
< gmaxwell>
midnightmagic: I don't think that matters that much, but right now bitcoin "xt" is only not totally partitioned due to inbound connections from core to XT, but part of segwit is that segwit-capable nodes will strongly prefer to connect out only to other segwit-capable nodes (because they must in order to get blocks once segwit is activated)-- the result will be making that partition complete.
< midnightmagic>
i thought -classic was now disconnecting non-classic nodes
< gmaxwell>
no, Xt. I don't think classic is doing that yet.
< slackircbridge1>
<alp> let them lose money for people
< slackircbridge1>
<alp> its the only way theyll learn
< gmaxwell>
It's not that simple. See also: Ethereum and DAO.
< gmaxwell>
which adversely impacted the bitcoin price even though many of us have been pointing out the related risks and distinguishing bitcoin on the basis of them for some time.
< slackircbridge1>
<brg444> :O
< CubicEarth>
gmaxwell: well I'm pretty sure the rise of ETH was hurting the bitcoin price to some extent. I think this event was a decoupling, where Bitcoin shakes the monkey off its back.
< slackircbridge1>
<brg444> did he read your post :O
< slackircbridge1>
<alp> Short term pain, long term gain
< slackircbridge1>
<moli> guys, we can read your posts on IRC lol
<@btcdrak>
wait that's wrong. I'm trying to set slackircbridge1 +q