< petertodd> anyway, I'm arguing that what's important isn't the fraud proof, but rather the proof of correctness - take the fraud thing to its logical conclusion
< petertodd> given that the existence of fraud proofs can be countered by making them impossible to generate, we need *another* level of protection, and that level of protection makes fraud proofs irrelevant
< sipa> *sigh*
< petertodd> note how in my scheme, a fraud proof is actually the *challenge*, which unless met with valid data is proof of fraud
< sipa> it's trivial to bypass that weakness by requiring all data committed to is revealed
< sipa> which means that partial nodes don't get a bandwidth reduction
< petertodd> I get that...
< petertodd> you're missing my point: who's using partial nodes?
< sipa> does that matter?
< petertodd> yes!
< sipa> anyone who likes to
< petertodd> if I'm a fraudulent miner, I'll sybil the network with a bunch of partial nodes that never give up fraud proofs anyway
< petertodd> now, if my sybil isn't 100% successful, fraud proofs don't help, but validity challenges do
< sipa> if someone is able to sybil you, then yes, fraud proofs fail
< sipa> 00:53:14 < sipa> the assumption is that other nodes are either full, or do together validation for all txids whose hash starts with (not 0)
< sipa> 00:53:21 < petertodd> right
< kanzure> i am not sure that "reveal all the committed data" is the only way to solve this
< sipa> 00:53:25 < sipa> and that you're not censored from them
< sipa> oh, snarks can do it too :p
< petertodd> right, but in the non-100% successful sybil example, the best defence is the validity challenges, not pure fraud proofs, because the former is what lets me find out which nodes are lying to me
< petertodd> SNARKs can do anything; not interesting :)
< sipa> give me an example of a validity challenge?
< petertodd> so, in my simple merkletree example above, a validity challenge would be "I think leaf X in merkle tree Y is invalid, prove that it (locally) isn't"
< petertodd> if that validity challenge is unmet, it's a strong suggestion that there is fraud - if you have at least one peer that validated that part of the blockchain, then the challenge will be responded to with valid data
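petertodd's challenge/response over a merkle tree can be sketched roughly like this (a toy model; the hash function, tree layout, and branch format here are my own assumptions, not any deployed scheme):

```python
import hashlib

def h(b: bytes) -> bytes:
    """Double-SHA256, Bitcoin style."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_levels(leaves):
    """All levels of the tree, leaves first (odd levels padded by
    duplicating the last node, as Bitcoin's merkle trees do)."""
    levels = [leaves]
    while len(levels[-1]) > 1:
        lvl = levels[-1]
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        levels.append([h(lvl[i] + lvl[i + 1]) for i in range(0, len(lvl), 2)])
    return levels

def branch(levels, index):
    """Merkle branch (sibling hashes plus position bits) proving the
    leaf at `index` against the root: the response to a validity challenge."""
    proof = []
    for lvl in levels[:-1]:
        if len(lvl) % 2:
            lvl = lvl + [lvl[-1]]
        proof.append((lvl[index ^ 1], index & 1))
        index //= 2
    return proof

def verify(leaf, proof, root):
    """Check a challenge response: fold the branch up and compare to root."""
    acc = leaf
    for sibling, leaf_is_right in proof:
        acc = h(sibling + acc) if leaf_is_right else h(acc + sibling)
    return acc == root
```

A partial node that suspects leaf X asks its peers for `branch(levels, X)`; any peer that validated that part of the tree can answer, and silence is the unmet challenge petertodd describes.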
< kanzure> traditional fraud proof example is "here's two transactions that were committed to, and they are both spending the same inputs". the non-fraud proof version would be "show that this input is only used once" i guess?
< petertodd> kanzure: sure, but like I argued above, no-one would actually make a series of blocks where that can be proven and publish it, there's no point, resulting in the need for fraud challenges
< kanzure> well my message was an attempt to show a non-fraud proof variant of the same, but i think i failed :)
< petertodd> kanzure: yeah, the non-fraud proof would actually be for a variety of nodes to challenge + respond to parts of the block(s) involved in that potential fraud, until there's a challenge that isn't extinguished by valid data
< kanzure> "prove that there are no other inputs used" would require, as far as i can tell, something on the order of "send me all of your data"
< petertodd> kanzure: in that specific example, it'd be the part of the merkle tree leading to where the second spend of that output would be that'd fail to be met with valid data
< kanzure> why would you know where the second spend would be?
< petertodd> kanzure: only in a badly designed protocol :) a good protocol will have TXO commitments that let you prove validity of spending locally
< petertodd> kanzure: process of elimination
< kanzure> elimination sounds a lot like "send me all your data" except you might get lucky and bail early
< petertodd> kanzure: that should be telling... :)
< petertodd> kanzure: if you have a fraudulent peer, they're never going to send you a fraud proof, or the data that lets you generate one
< petertodd> kanzure: (which is why the whole idea is suspect anyway)
< kanzure> transaction commitments offer a proof of spending only once?
< kanzure> oh, transaction output commitments..
< petertodd> kanzure: note how my linearized tx history scheme redefines what fraud is to make double-spend fraud compactly provable (for a kinda terrible definition of 'compact')
< petertodd> kanzure: yeah
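kanzure's earlier example ("two transactions that were committed to, and they are both spending the same inputs") reduces to a tiny predicate once inclusion of both transactions in the commitment has been shown with merkle branches (omitted here); the `(txid, vout)` outpoint representation is a hypothetical shape I've chosen:

```python
def is_double_spend(tx_a: dict, tx_b: dict) -> bool:
    """True if two distinct committed transactions spend a common outpoint.
    Each tx is {"txid": str, "inputs": [(txid, vout), ...]} - a toy shape."""
    if tx_a["txid"] == tx_b["txid"]:
        return False  # a transaction is not a double spend of itself
    # any shared outpoint means the same coin was spent twice
    return bool(set(tx_a["inputs"]) & set(tx_b["inputs"]))
```

The compactness question in the discussion above is about the merkle branches this sketch omits, not the predicate itself.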
< kanzure> yes so i think that we can avoid the total bandwidth reduction that sipa was worried about above
< petertodd> what do you mean by "total bandwidth reduction"?
< sipa> anyway, i'm not planning on adding fraud proof support in the first version anyway; perhaps more discussion is needed - i'm just wary of changes that make compact fraud proofs less viable overall
< petertodd> sipa: sure, and the entire fraud proof vs. fraud challenge debate is orthogonal - an expensive fraud proof is an expensive validity proof
< petertodd> sipa: so convince me that a merkle tree of prev-block-contents is acceptable :)
< kanzure> petertodd: by "total bandwidth reduction" i meant "total bandwidth increase" heh. sipa was concerned about losing out on the bandwidth reduction benefits.
< sipa> kanzure: no
< sipa> kanzure: oh, you mean by committing to the prev block + witness?
< sipa> i will think more about that
< GitHub179> [bitcoin] dooglus opened pull request #7262: Reduce inefficiency of GetAccountAddress() (master...faster-getaccountaddress) https://github.com/bitcoin/bitcoin/pull/7262
< morcos> sipa: petertodd: sometimes i find following these conversations over IRC very difficult. But I have to say I mostly have many of the same thoughts as petertodd. It's not really clear to me what situations we envision fraud proofs being actually used in.
< morcos> I wouldn't go as far as he has. But I think it's worth really carefully writing up the scenarios where we think they might be useful, so we know what's worth worrying about making compact and what isn't.
< morcos> Also whether 4MB is compact or not is relevant to the situation in which these things might be used
< sipa> i gave one... whether you consider the additional assumptions (censorship resistance from other nodes that do validation of the parts you don't) to be a worthwhile outcome is something else :)
< morcos> sipa: so if you don't get the bandwidth reduction, then what exactly is a partial node saving?
< sipa> UTXO set maintenance, signature validation, ...
< morcos> so perhaps thats valuable during the bootstrapping phase?
< morcos> don't get me wrong, it's not that i don't think these tools are of value, but trying to sketch out how we envision them actually being used is important
< sipa> fair enough, there is a lot of work left there
< phantomcircuit> morcos, signature validation from tip down, which is expensive to do without the fraud proofs
< phantomcircuit> (but which is largely incompatible with pruning)
< jayd3e> so, I'm finding the bitcoin-core code pretty dense, it does a lot of things and is configured for a number of different platforms
< jayd3e> are all of the open threads for sending/receiving messages from other peers located in StartNode?
< jayd3e> in net.cpp
< sipa> there is 1 network thread (which sends/receives between the network and CNode buffers) and one message handling thread (which runs ProcessMessages and SendMessages in main.cpp)
< sipa> cfields is working on replacing a significant part of the network code with libevent
< jayd3e> sipa: gotcha thanks
< maaku> petertodd: I find any proposal that requires miners or worse hashing hardware to have full block data to be a dangerous regression over the segwit proposal
< petertodd> maaku: dangerous? I consider that to be a good thing
< maaku> right now transaction selection can be delegated to a third party, not so under your proposal I believe
< petertodd> maaku: yeah, you don't want tx selection to be delegatable
< brg444> maaku is transaction selection delegation desirable?
< maaku> petertodd: I strongly disagree. We don't live in an ideal world where every hashing hardware is running a full node.
< petertodd> maaku: I know, that's why I'm not proposing that yet
< maaku> having the hardware poll one source for coinbase rewards, and another for transaction selection would be an important incremental improvement over the current situation
< petertodd> maaku: I'm just proposing we ensure that mining pools have the data sufficient to validate
< maaku> something which is very much possible today but no one is doing
< petertodd> maaku: why would that be an improvement?
< maaku> petertodd: transaction selection would no longer be confined to the same centralization pressures as mining hardware (power availability, distance from manufacturing, etc.)
< petertodd> maaku: are you assuming DRM tech?
< maaku> petertodd: ideally, yes, but it is still an improvement without
< maaku> we could have 100% hashpower in Shenzhen, but if it is smart property miners tied to transaction sources all over the world, with hundreds of orgs providing that data
< petertodd> maaku: I think that's incredibly unrealistic - the guy with the power switch isn't going to delegate control like that
< petertodd> maaku: at best, the hashing power can be turned off by force
< petertodd> maaku: equally, I'm not very worried about that kind of centralization, as cheap power has diseconomies of scale
< maaku> petertodd: turning off the hashers is a risk under any scenario
< maaku> DRM just makes it the only thing he can do
< petertodd> maaku: DRM puts you in positions where mfg's are encouraged to produce back doorable hardware
< maaku> petertodd: eh, cheapest power has diseconomies, but it's not like the whole globe evens out when your power usage goes up
< petertodd> "whole globe evens out"<- what do you mean by that?
< sipa> i would also be opposed to a system that requires full block access to anything that does not do the transaction selection
< petertodd> again, I think you're crazy and optimising for the wrong things
< sipa> but it seems like that may not be needed to combat the validationless mining degradation
< petertodd> maaku: ^^^
< sipa> so stop arguing
< petertodd> heh
< sipa> whether you agree or not is not relevant
< maaku> i'm not convinced we need to combat validationless mining, that's the issue :\
< petertodd> maaku: with segwit validationless mining is a significantly worse problem, as non-validating miners both have an advantage, yet can still collect tx fees
< sipa> maaku: or validationless transaction selection, if you will
< sipa> petertodd: of course, if there is trust, mining can always get pooling advantages by doing block construction in one central place for multiple pools
< petertodd> sipa: which we don't want
< sipa> agree
< sipa> but it may be inevitable...
< petertodd> sipa: so? why hasten it?
< sipa> i'm not arguing either way
< sipa> just observing
< petertodd> sipa: the blocksize limit needs to be kept low enough to keep that from being a major problem; if the ecosystem wants to go elsewhere, I'm leaving bitcoin development, and so should the rest of you
< petertodd> keep in mind I'm designing for a system that's worth designing - other designs are possible, but solve "problems" that I'm not interested in solving (and frequently are simply bone headed stupid)
< sipa> no disagreement :)
< petertodd> if starting a new pool is ever difficult (assuming enough hashing power joins to keep variance reasonable) then we've failed and the system we have isn't bitcoin as we know it
< maaku> petertodd sipa: I'm aware that segwit makes non-validating mining easier to do. I'm not convinced that this is a problem, at least so much of a problem that we take actions which constrain the mining space
< petertodd> maaku: I'd judge my "nightmare scenario" in my dev list post to have >50% probability of happening - miners are pretty lazy in what they implement
< maaku> so far you guys are not arguing from first principles. rather i'm hearing a cached 'miners must have the data they need to validate!' without explaining why the cost is necessary
< petertodd> maaku: equally, I'd judge the probability of DRM hardware getting developed in the way you imagine as being <5%
< petertodd> maaku: currently the system assumes miners validate; if they stop validating we risk massive reorgs at best
< sipa> maaku: not just easier; it makes it more profitable and less observable too (by no longer being restricted to mining empty blocks without validation)
< petertodd> maaku: heck, even in treechains you still need to force miners to actually have blockchain data for the system to work - bitcoin is at its core a proof-of-publication system, and the only thing forcing miners to publish right now is validation
< petertodd> sipa: and validationless blocks with txs in them are much more dangerous
< sipa> petertodd: though, compared to downloading the full block right now and disabling script verification, there is not much difference
< sipa> petertodd: segwit just makes that use a constant factor less bandwidth
< petertodd> sipa: downloading the block is a huge bottleneck, and equally, the fact that everyone has to download it discourages the development of infrastructure where that isn't possible
< sipa> which may be an issue still
< petertodd> sipa: e.g. w/ segwit as described, we're guaranteed to get specialized relay networks that don't even propagate witness data at all
< dcousens> why does OP_CLTV care about the transactions lock time?
< dcousens> (when it just checks the stack item pushed just prior?)
< sipa> dcousens: ?
< petertodd> dcousens: CLTV checks the stack item against nLockTime
< sipa> dcousens: it compares that stack item with tx.nLocktime
< dcousens> Need to re-read that BIP then
< petertodd> dcousens: read the source
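The check sipa and petertodd describe can be modeled roughly like this (a simplified sketch of BIP65's semantics; real consensus code in the script interpreter also handles minimal encoding, empty-stack, and other edge cases):

```python
# Below this threshold a locktime is a block height; at/above it, a unix time.
LOCKTIME_THRESHOLD = 500_000_000
SEQUENCE_FINAL = 0xFFFFFFFF

def checklocktimeverify(stack_top: int, tx_locktime: int,
                        input_sequence: int) -> bool:
    """Sketch of OP_CHECKLOCKTIMEVERIFY: the stack item is compared
    against the transaction's nLockTime, as stated above."""
    if stack_top < 0:
        return False
    # both operands must be the same kind of locktime (height vs timestamp)
    if (stack_top < LOCKTIME_THRESHOLD) != (tx_locktime < LOCKTIME_THRESHOLD):
        return False
    # the tx's locktime must have reached the value demanded by the script
    if stack_top > tx_locktime:
        return False
    # a final sequence number disables nLockTime, so CLTV must fail
    if input_sequence == SEQUENCE_FINAL:
        return False
    return True
```

This is why CLTV "cares" about nLockTime: the opcode itself never inspects the chain, it only constrains a field that consensus rules elsewhere enforce against block height/time.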
< sipa> petertodd: that is risky though... a miner could create a block with invalid witness data but not reveal that witness data, and not build on top himself, while his buddies all waste time on it
< petertodd> sipa: creating such a block costs 25BTC - it's a good bet that they won't
< petertodd> sipa: exactly like today...
< sipa> yup
< sipa> but that means that relying on such a relay network means you are trusting your buddies to a pretty significant degree
< sipa> same as with a relay network today
< sipa> if you don't fully validate
< petertodd> sipa: why? there's PoW proof that they're risking 25BTC, so the block is very probably valid
< petertodd> sipa: how many deliberately invalid blocks have *ever* been created? I'm guessing zero in recent history
< dgenr8> petertodd: how low is "low enough"?
< petertodd> and remember, if we don't constrain that form of mining now, it'll be much more difficult politically to constrain it in the future
< petertodd> dgenr8: huh?
< dgenr8> [15-12-28 19:39:52] <petertodd> sipa: the blocksize limit needs to be kept low enough to keep that from being a major problem; if the ecosystem wants to go elsewhere, I'm leaving bitcoin development, and so should the rest of you
< petertodd> dgenr8: I'm working on a document actually to set design criteria - a major one is the hashing power in attendance at scaling bitcoin came to consensus that under attack conditions the orphan rates of largest and smallest pools should vary no more than +- 0.5%
< petertodd> dgenr8: that's a business requirement for there to be a level playing field
< dgenr8> do you know what that measurement is today?
< petertodd> dgenr8: no I don't, because we're not under attack conditions
< dgenr8> what are those? not just full blocks / spam?
< petertodd> dgenr8: miners who aren't cooperating with other miners is the big one
< dgenr8> not relaying others' blocks?
< petertodd> dgenr8: in fact, we're going to have to fix some exploits to be able to even meet that criteria with 1MB blocks, although at least fixing those exploits is easy
< petertodd> dgenr8: making blocks that contain not-previously-relayed (or recently relayed) transactions
< dgenr8> ah
< petertodd> dgenr8: basically, all the relay optimizations that work in the average case can't be taken into account for that +-0.5% figure
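The +-0.5% criterion petertodd mentions could be checked against measured data roughly like this (the stats shape, and reading the criterion as at most a 1% absolute spread between the largest and smallest pool, are my own assumptions):

```python
def orphan_rate(found: int, orphaned: int) -> float:
    """Fraction of a pool's blocks that ended up orphaned."""
    return orphaned / (found + orphaned)

def meets_criterion(largest: tuple, smallest: tuple,
                    max_spread: float = 0.01) -> bool:
    """largest/smallest: (blocks_in_best_chain, blocks_orphaned) for the
    biggest and smallest pools. +-0.5% around a common value is read here
    as at most a 1% absolute spread between the two rates."""
    spread = abs(orphan_rate(*largest) - orphan_rate(*smallest))
    return spread <= max_spread
```

The interesting part, per the discussion above, is that this has to hold under attack conditions, i.e. without crediting relay optimizations that only work in the average case.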
< dgenr8> maybe the relay network should be turned off
< petertodd> dgenr8: there's no way to force people to do that
< phantomcircuit> <petertodd> sipa: how many delibrately invalid blocks have *ever* been created? I'm guessing zero in recent history
< phantomcircuit> petertodd, that's only true when mining is largely centralized
< petertodd> phantomcircuit: huh?
< phantomcircuit> petertodd, there's effectively a reputation system at work with mining today
< phantomcircuit> the stratum mining stuff is whitelist based
< petertodd> phantomcircuit: huh?
< petertodd> phantomcircuit: well yeah, I want to stay *at* the status quo, and not make things *worse*
< maaku> sipa: (re: pm) My point is that this is a spectrum not a binary choice. Things we'd close off is a hasher using an external transaction validation source but injecting its own transactions from a white list
< phantomcircuit> im saying that comparing mining off blocks w/o witness data validated isn't exactly comparable to the stratum mining that's happening today
< petertodd> phantomcircuit: I'd rather make things better, but that's probably politically/technically impossible right now
< petertodd> phantomcircuit: the stratum stuff has a strong disincentive to lying: if the other pools connect to you anonymously, you can't feed them bad data without hurting your own hashers
< phantomcircuit> petertodd, the problem is self correcting if mining becomes less centralized
< petertodd> phantomcircuit: why?
< phantomcircuit> petertodd, at least the stratum one does
< petertodd> phantomcircuit: I'm still not seeing your point
< phantomcircuit> reputation management is much more difficult with lots of players
< petertodd> phantomcircuit: this has nothing to do with reputation
< petertodd> phantomcircuit: (I mean, obviously it can, but stratum-using validationless mining works even if you don't trust the other guy, so long as you can connect to their pool anonymously)
< phantomcircuit> petertodd, there's an asymmetry between smaller and larger pools though
< petertodd> phantomcircuit: how so?
< phantomcircuit> a smaller miner can signal a nonsensical block with cost x * 25 while the larger miner's cost is (x*2) * 25
< petertodd> phantomcircuit: sure, but you can weight that if you want, and it's still expensive
< petertodd> phantomcircuit: that also doesn't negate my segwit objections
< phantomcircuit> petertodd, maybe but the reasoning here isn't so simple
< petertodd> phantomcircuit: again, I'm simply preventing an ecosystem from developing where people regularly create blocks with txs in them w/o validating
< aj> phantomcircuit: hmm, back-of-the-envelope calculations seem to indicate it's never profitable for a miner to create fake blocks to trick SPV-miners to work on a bad chain (and it's only break-even if everyone else is SPV mining)
< aj> phantomcircuit: (assuming: no external profits, eg shorting bitcoin on a different exchange; and the cost of creating a hash of an invalid block is the same as for a valid block -- if PoW was changed so you could produce a hash that simultaneously attested to a good and a bad block, that'd change)
< dcousens> anyone know of an up to date reference for coinswap?
< dcousens> (or is https://bitcointalk.org/index.php?topic=321228.0 still considered the latest?)
< jcorgan> i don't think it ever made it to an implementation, and malleability would have broken it anyway. of course, now, all these things deserve a second look.
< dcousens> I didn't necessarily mean impl, but, just in terms of an algorithm
< dcousens> Only curious because I'm currently playing with an algo. that might be more optimal, and just wondering if I could get some review over it
< jcorgan> probably a good -wizards discussion
< dcousens> ok
< jayd3e> how can nThreadsServicingQueue be called as a function in scheduler.cpp
< jayd3e> on line 13
< jayd3e> it's defined as an int in scheduler.h
< jayd3e> nvm figured it out
< jayd3e> what does the "<>" indicate in this line: newTaskScheduled.wait_until<>(lock, taskQueue.begin()->first)
< jayd3e> from line 58 of scheduler.cpp
< jayd3e> instagibbs: nice, thanks. That makes sense