< phantomcircuit>
sipa, i want to (optionally) save the mempool and sigcache
< phantomcircuit>
on shutdown
< phantomcircuit>
where's the right place to put that?
< gmaxwell>
don't save the sigcache. the reload of the mempool from disk will repopulate it.
< gmaxwell>
the stuff that doesn't get repopulated shouldn't be there in any case.
< phantomcircuit>
gmaxwell, yes but repopulating the mempool will take forever without saving the sigcache
< gmaxwell>
phantomcircuit: 300mb of validation isn't ~that~ big a deal.
< gmaxwell>
at least so long as it reloads the mempool in the background.
< gmaxwell>
okay, perhaps worse than I'm thinking due to the fact that the signature validation isn't parallel
< gmaxwell>
but even if the 300mbytes is all txins, that's 145 bytes per signature, or about 2 million signatures, which even on a single moderately fast core should only be about 121 cpu seconds.
< gmaxwell>
well whatever, 30% slower since I'm assuming libsecp256k1 with performance features we're not using yet.
< phantomcircuit>
gmaxwell, it's quite a bit more than 300MiB on my system :)
< gmaxwell>
phantomcircuit: having a mempool much larger than typical shouldn't increase your hitrate much and if you're mining may cause you to mine poorly propagated crap.
< phantomcircuit>
it wont improve the hitrate for sigcache or compact blocks very much
< phantomcircuit>
i dont see how it can result in you mining poorly propagated things though?
< gmaxwell>
say someone advertises a txn with a very low feerate, it doesn't enter or quickly falls out of most nodes mempools.
< Chris_Stewart_5>
If I add a different version of boost to my /usr/include I end up getting errors with core not being able to find it, is there somewhere I need to reference a newer version?
< Chris_Stewart_5>
cfields:
< gmaxwell>
phantomcircuit: but not yours... later, the network runs low on transactions in the pool or CPFP ranks that straggler up. now you mine it. And it's a surprise to everyone.
< petertodd>
bsm117532: for consensus critical applications, you're going to need to use the raw iterator, and even worse, follow the byte-level tests in the segwit codebase exactly
< phantomcircuit>
gmaxwell, true
< phantomcircuit>
i think that is generally not easily solved though
< phantomcircuit>
indeed unless you have the exact same limit as the entire rest of the network
< phantomcircuit>
you're screwed there
< phantomcircuit>
otoh my brain is on fire so maybe im wrong
< jtimon>
so NicolasDorier and I are discussing libconsensus
< gmaxwell>
murch: ya, thanks.
< jtimon>
do we want locks inside libconsensus? my understanding was no, due to complexity reasons. NicolasDorier thinks it's fine with std C++11 but it wasn't fine with boost
< jtimon>
well, personally I think libconsensus should move towards plain C, but that's another discussion it seems I lost already
< sipa>
jtimon: i was talking about this with NicolasDorier earlier
< sipa>
jtimon: i expected that the point of contention would be the idea of it managing its own state
< sipa>
(which would imply using locks for that state)
< NicolasDorier>
ah yes you wanted to keep the libconsensus "stateless"
< sipa>
i don't know whether it should be stateless - having state, especially for caches, would hugely simplify things
< sipa>
but i think that should be the first thing to discuss
< sipa>
as it using locks or not follows naturally
< phantomcircuit>
sipa: that pr doesn't seem to have improved removeForBlock times by much
< gmaxwell>
phantomcircuit: that means it improved them at all?
< phantomcircuit>
im gonna need to do the mempool save/restore thing to know for sure unfortunately
< phantomcircuit>
if it did it certainly wasn't spectacular
< NicolasDorier>
I'm working right now on a verify_block for the consensus lib (like jtimon is doing). I think the question of state won't come immediately to me as I first need to finish that. Once I've done my verify_blocks, I'll just make several proposals for handling the hash cache of segwit
< NicolasDorier>
I have several idea, will just try to code them see if they make sense at all :p
< sipa>
NicolasDorier: how does your verifyblock get access to the chainstate and the block index?
< jtimon>
sipa: well, I think the interface for the caches can be really simple, and we can provide the implementations we have for those who don't want to implement their own, so I disagree on "hugely simplify things"
< NicolasDorier>
sipa: By cheating, I delegate to the client the calculation of the flags and contextual information (height, bestblock, mtp)
< NicolasDorier>
such that the consensus code does not depend on CBlockIndex at all
< NicolasDorier>
but just on 2 types
< phantomcircuit>
KwukDuck, hardware error
< NicolasDorier>
CConsensusContextInfo and CConsensusFlags
< jtimon>
but yeah, my plan has always been to have a C++ version of verifyBlock first, well, I mean, first VerifyHeader, verifyTx...but never got past the point of simply moving the consensus functions out of main...
< NicolasDorier>
both can be calculated by "main" with pIndex trivially. Then passed to consensus methods
< sipa>
jtimon: yes, but that means the api changes whenever a cache is added/changed
< jtimon>
yes, if you add a cache the api changes, not sure what you mean by changing the cache
< sipa>
if it ends up working differently somehow
< jtimon>
just as if the storage changes, the api for the storage would change as well
< jtimon>
assuming people want libconsensus to be storageless
< sipa>
jtimon: yeah, i understand the use case
< sipa>
it's just a lot harder to introduce abstractions for block indexes and chainstate, especially without degrading performance
< jtimon>
I was working on https://github.com/jtimon/bitcoin/commits/jt but I broke the branch when I backported bip9 and never cared to fix it (but review is still very welcome since I plan to rewrite/cherry-pick most of the stuff from there)
< NicolasDorier>
my main problem with the interface is that the callback will probably cross a language boundary (C++ calling C#) and that is a performance nightmare (but maybe it has gotten better now). I'm not against callbacks but they have to be coarse grained.
< NicolasDorier>
oh well, once I finish my PR I'll see when I hit that
< KwukDuck>
phantomcircuit: Hardware error? Like what..? I think my hardware is fine? Experiencing no other issues with the system, even tried running it on a different disk.
< jtimon>
at least for bitcoin core as a caller (assuming one day bitcoin core directly calls libconsensus instead of its internals) I don't see how performance would be affected very negatively, I worry more about serializing and deserializing the parameters (like the tx in verify) for example, since people have complained (that's the reason why libbitcoin copies the code instead of using libconsensus' API directly)
< gmaxwell>
KwukDuck: that kind of issue is usually caused by hardware error, loose or bad disk cables, or bad memory (run memtest86?)-- sometimes antivirus software can screw with the files bitcoin is using.
< jtimon>
NicolasDorier: as said privately, I think people should use the function pointer to call their DB directly, not calling their external language there, but you said that may be complicated too
< gmaxwell>
or running out of disk space potentially.
< NicolasDorier>
jtimon: yes, it really depends on the backend technology. Some might have poor support of C lib. :(
< KwukDuck>
gmaxwell: I considered a bad disk that's why i ran it on another one. Not sure about the memory though, but i'm not experiencing any other issues so... I guess i'll be running memtest later. Thanks :)
< NicolasDorier>
also I know I'm not a pro in C so it would take me 10 times longer than you to make use of a C lib even if it existed
< jtimon>
NicolasDorier: it just occurred to me that you can always implement my function pointers interface by pre-filling memory with whatever data you were counting on passing as a parameter. Therefore it will be trivial to offer the all-data-upfront interface you want while using my more generic API inside. The cost will be reading the function pointer and executing the function, but the function should be merely a map from
< jtimon>
your parameters to a pre-filled position in memory. Does that sound acceptable to you? no C# inside
< jtimon>
we can offer both interfaces with trivial development costs I think, let's see
< jtimon>
just force pushed 0.12.99-consensus
< jtimon>
but I will force push again soon
< bsm2357>
I expect a common error for people using python-bitcoinlib will be to call SignatureHash with the scriptPubKey of the corresponding segwit input, rather than an actual script. It is easy to detect this and insert the correct script for calculating the sighash, but that causes the SignatureHash function to depart from the behavior of the one in core.
< bsm2357>
What are your opinions? Is detecting this and inserting the correct script desirable or not?
< bsm2357>
Hmmm. Taking this question to #bitcoin-dev.
< sipa>
bsm2357: perhaps also good to discuss on the repository itself in an issue
< bsm2357>
Of course. I'll push my latest there soon with a working SignatureHash. Just looking for some quick feedback from people who might use it, before I go write code.
< Chris_Stewart_5>
Is it unusual for nodes to give back one ip address for a getaddr message?
< sipa>
we only respond to one getaddr per connection
< sipa>
after that, only normal addr relay
< Chris_Stewart_5>
Ok, but is it unusual to respond with only one ip address? The behavior I am seeing is connecting to a node, I send a getaddr message, then I only get ONE ip address back in the addr message
< Chris_Stewart_5>
and that ip address is the node's ip address. I guess I was under the impression that it would send back the nodes that it is connected to
< sipa>
it's not responding at all
< sipa>
the addr you see is likely just an independently relayed addr
< sipa>
hell no, nodes try to hide who they are connected to
< xinxi_>
jtimon: are you still there?
< Chris_Stewart_5>
so how do you connect with more than one node on the network then? It seems that you could only connect to dns seeds then.
< sipa>
i don't understand the question
< sipa>
normally, you get 1000 IP addresses back in response to getaddr
< sipa>
unless the peer does not know about many, of course
< Chris_Stewart_5>
I should preface this with it is testnet, perhaps the behavior is different?
< sipa>
it's very likely that on testnet the peer just does not know about many peers
< jtimon>
xinxi_: yes
< sipa>
also, if you send getaddr multiple times, the second and later times it is just ignored
< xinxi_>
jtimon: great. I want to discuss with you about the libconsensus that you are working on.
< xinxi_>
it's a great project.
< jtimon>
xinxi_: absolutely, always interested in discussing that
< Chris_Stewart_5>
sipa: I found that out the hard way :-). But it seems unlikely that a response to a getaddr message would have just ONE ip address even on testnet, right?
< xinxi_>
may I know when this will be merged into the master branch?
< jtimon>
xinxi_: uff, no idea myself, I wish I knew, I believe the idea is focusing on refactors like this and the ones going on in net.o and wallet right after forking 0.13's branch, which should be soon
< jtimon>
whether the PRs will be merged or not, I cannot know, depends on review
< xinxi_>
does libconsensus include the wallet and the network parts?
< jtimon>
no
< jtimon>
but there's other people doing refactors both in the net and wallet parts
< jtimon>
ie cfields and jonasschnelli respectively
< morcos>
sipa: i think on machines with a lot of cores, the sig cache actually slows down block validation. my guess is it's the contention on the lock since we erase every time
< morcos>
sipa: i think it might be better to batch erase, or randomly erase or something else
< cfields>
morcos: i've scratched my head at that one several times. seems we'd be better off nuking a bucket at a time or so
< morcos>
cfields: it defaults to nuking a bucket at a time if it gets full anyway (although i'm not sure thats super efficient either, but that tends to happen in ATMP, so who cares)
< jtimon>
morcos: I once tried with a huge sigcache just to see if the swap partition was working. didn't benchmark it but it certainly didn't seem to accelerate things
< cfields>
morcos: eh? i thought it just killed a single entry each time
< jtimon>
xinxi_: no, there's still some in main.cpp, but the idea is that it all should end up there (not counting the code in src/crypto and libsecp256k1)
< cfields>
morcos: looks to me like it just picks a random bucket and removes the first entry in it. am i misreading?
< morcos>
cfields: the problem i'm referring to is that on validating a block, you erase each stored signature as you read it
< morcos>
but yes, if it's gotten too big anyway, then it'll pick a random bucket and remove entries until it's below the limit, but isn't that what you said?
< jtimon>
I will keep updating jtimon/0.12.99-consensus moving more things there though
< xinxi_>
jtimon: I guess most of the work has already been done?
< Chris_Stewart_5>
sipa: also you said it can relay 1000 addresses at a time, doesn't that effectively give a history of the nodes they are connected to?
< morcos>
the second thing happens when trying to set a new entry, which is in ATMP, so not important to shave micros... as it is in block validation
< sipa>
Chris_Stewart_5: nodes typically know way more than 1000 addresses
< sipa>
Chris_Stewart_5: and typically know of many addresses they are not currently, or have never been, connected to
< Chris_Stewart_5>
sipa: So basically caching addresses from addr messages?
< sipa>
Chris_Stewart_5: yes, and from dns seed responses
< Chris_Stewart_5>
unsolicited ones
< sipa>
Chris_Stewart_5: that's addrman
< cfields>
morcos: ah yes, i was talking about the full case
< xinxi_>
jtimon: we plan to prove the correctness of the consensus part.
< sipa>
xinxi_: again, correctness compared to which specification?
< morcos>
cfields: yeah the problem is lock contention on validation since each of your script validating cores is contending on the sig cache lock since they each have to erase from it. if you eliminate or batch those erases, it speeds up validation significantly
< jtimon>
xinxi_: well, the part of making a complete verifyBlock, I would say it has been done several times. exposing it as a C API is another thing, especially since the APIs will probably be discussed extensively (plus I never decoupled Consensus::Params from uint256)
< sipa>
morcos: interesting
< xinxi_>
sipa: as discussed, we are going to derive a specification from the code.
< jtimon>
xinxi_: interesting
< sipa>
xinxi_: a specification derived from the code will by definition match the code, no? :)
< jtimon>
sipa: I guess the goal is that from then on you can't change the code without appropriately changing the specifications or the tests will fail
< sipa>
jtimon, xinxi_: yes, that would be awesome
< cfields>
morcos: hmm. ignoring larger fixes, if the contention is that high, i wonder if the shared lock is doing more harm than good
< xinxi_>
sipa: i would say it's going to match the code as much as possible unless we find bugs/inconsistencies in the specification.
< sipa>
xinxi_: then the specification is wrong
< jtimon>
hehe
< sipa>
(but it would still be very interesting to know)
< jtimon>
the specification is clearly my branch
< xinxi_>
sipa: yeah, as discussed, we don't want to have a specification with flaws or bugs. if we find that, we should upgrade.
< sipa>
xinxi_: i think you're dreaming if you think that's going to happen
< morcos>
sipa: on my 16 core machine, i think it speeds up the "Connect total" part by about 40% to just not erase.. But I didn't run that over a long enough period to model the lower hit rate that'll eventually result from not having smart erasing. If we can conveniently batch the erases from the block that'll be best
< xinxi_>
sipa: why is that?
< xinxi_>
a hard fork is not possible?
< sipa>
xinxi_: because i expect a hard fork to switch to something with only theoretical advantages to be very controversial
< xinxi_>
sipa: well, we will definitely test that first.
< cfields>
morcos: whoa
< jtimon>
xinxi_: yes, but a hardfork doesn't magically get deployed in all computers when you update the specification, whatever is deployed is the specification in practice
< sipa>
xinxi_: not trying to discourage you... i would be very exciting if you can contribute to the correctness of the system using formal proofs, at whatever level
< sipa>
xinxi_: but please... keep it about analysing the actual code
< sipa>
*excited
< xinxi_>
sipa: the actual code is in C++, which is almost impossible to prove.
< xinxi_>
if you rewrite it in C, we may be able to help.
< sipa>
even that has been controversial in the past
< jtimon>
xinxi_: in the end the new tests are going to fail when you change either the specification or the implementation, right?
< xinxi_>
jtimon: what do you mean by failure?
< sipa>
xinxi_: you could of course do a rewrite in C, and analyse that... we could learn very interesting things from that
< sipa>
xinxi_: but that still doesn't mean we need to actually switch to that C code
< jtimon>
if you changed the specification, sipa was right, the bug was there; if you change the implementation but you weren't expecting a change in functionality, then your changes are wrong
< xinxi_>
jtimon: how did you guys fix bugs in the past I am wondering.
< sipa>
xinxi_: soft forks
< xinxi_>
so you just leave the bugs there?
< jtimon>
let me search for a ccc talk about what I'm thinking you wanted to do
< sipa>
xinxi_: if they're not dangerous and merely annoying.. yes, we have no other choice
< jtimon>
xinxi_: no, sipa answered: softforks
< sipa>
xinxi_: we don't decide the rules of the network
< xinxi_>
OK. so can we just deploy the slightly different bug free version using a soft fork?
< sipa>
maybe
< sipa>
that depends on the kind of changes you want to make
< jtimon>
for example, the code in bip99 fixes a known bug which is not a priority
< jtimon>
but that fix is a hardfork
< xinxi_>
so can you explain to me what kind of changes require a soft fork and what kind of changes require a hard fork?
< xinxi_>
and what's the difference between the hard and soft forks?
< sipa>
xinxi_: everything that leaves things that were previously invalid invalid is a softfork
< xinxi_>
how do you define invalid?
< jtimon>
meh, can't find the ccc video about proving correctness, I thought you were coming from there
< sipa>
xinxi_: abstractly speaking, the consensus rules are black box which you feed a sequence of blocks
< sipa>
xinxi_: and it returns whether that sequence of blocks is valid or not
< jtimon>
their "specification" was basically another program for an abstract machine
< sipa>
it cannot depend on the order in which those blocks were received, or whatever earlier states the node went through
< sipa>
xinxi_: a softfork leaves all previously invalid inputs invalid
< xinxi_>
so a soft fork can only invalidate a previously valid input.
< sipa>
correct
< jtimon>
or in other words, it only adds restrictions, it doesn't remove them: everything that was invalid remains invalid
< Chris_Stewart_5>
xinxi_: You can think of the set of all consensus rules R with cardinality n; a soft fork increments the cardinality of R by one.
< xinxi_>
and a hard fork has no such restriction.
< Chris_Stewart_5>
xinxi_: Any removal from the set R is a hard fork though
< gmaxwell>
xinxi_: the whole purpose of the system it to come to a consistent worldwide agreement on the history of the ledger. This is the first and foremost definition of correctness in the system.
< sipa>
xinxi_: and it's very surprising how many things are actually possible with softforks
< xinxi_>
sipa: sure, i can imagine that.
< sipa>
xinxi_: one analogy is seeing the consensus rules as a block of marble from which pieces are cut away, which gives the complexity
< xinxi_>
so how do hard forks and soft forks differ in terms of deployment?
< jtimon>
some people theorize that everything that is possible via hardfork is also possible via softfork in some other way, but I'm not convinced that means we should always prefer softforks
< sipa>
a hard fork: convince the whole world to switch to different code; period
< sipa>
a soft fork: read BIP34 and BIP9
< sipa>
(BIP9 is newer)
< xinxi_>
has any hard fork happened?
< sipa>
very early on in the history of the system, yes
< sipa>
in the first 1.5 years
< jtimon>
xinxi_: for softforks miners need to coordinate for deployment, then other nodes can upgrade when they want/can. for hardforks everybody needs to upgrade before deployment, which makes them slower to deploy
< sipa>
xinxi_: there is also BIP50 which is a post-mortem of a consensus failure that occurred between old and new versions of the system
< xinxi_>
jtimon: how can you upgrade without deployment?
< xinxi_>
basically, you can automatically activate when a certain height is reached?
< sipa>
xinxi_: softforks only apply new rules to blocks after some changeover point
< sipa>
they use old rules for everything before
< xinxi_>
sipa: yeah, you've explained that well.
< sipa>
BIP30 used a fixed timestamp for the changeover point
< jtimon>
xinxi_: for softforks bip9 describes it, for hardforks there's different opinions on what would be the best way to signal activation
< xinxi_>
jtimon: got it.
< sipa>
BIP34 used block versions above a certain number to let miners indicate they are enforcing the new rules
< xinxi_>
so who makes the final decision on whether a fork should be deployed or not?
< sipa>
xinxi_: for a soft fork: miners
< sipa>
as it's really just miners deciding to start enforcing some additional rules in addition to what the network requires them to
< petertodd>
xinxi_: a hard fork is arguably the creation of a new currency for starters...
< sturles>
xinxi_: All bitcoin users.
< sipa>
xinxi_: some people believe developers or some committee should decide on hard forks, and then assume everyone will follow them
< sipa>
xinxi_: others think hard forks should be reserved for clearly uncontroversial things
< xinxi_>
is there any democratic decision making process?
< sipa>
democracy is pretty hard in a system designed to avoid fixed identities
< petertodd>
xinxi_: there's no agreement on what a "democratic decision making process" even looks like
< jtimon>
bip2 tries to clarify things, but it can only decide whether something was "accepted" or not by observing adoption a posteriori. BIP99 also tries to clarify things, but it's based on a vaguely defined concept of "uncontroversial"
< xinxi_>
sipa: sybil attack.
< petertodd>
xinxi_: whose votes do you count? miner-triggered soft-forks at least have a natural metric: hashing power
< sdaftuar>
15:43 < sipa> xinxi_: for a soft fork: miners
< xinxi_>
then, can we just let miners vote?
< sdaftuar>
^ this is not correct, users have to adopt the rules as well
< sdaftuar>
but please, this discussion should be elsewhere
< jtimon>
I wouldn't call that a vote, but just confirmation that they have upgraded
< jtimon>
for coordination
< sipa>
xinxi_: miners should absolutely not be allowed to decide on hardforks (they could for example decide to increase their own reward that way)
< jtimon>
the softfork is supposed to be uncontroversial in the first place
< xinxi_>
sipa: why not?
< petertodd>
xinxi_: "why not?" is a political question...
< sipa>
xinxi_: "who watches the watchmen?"
< jeremyrubin>
xinxi_: probably discuss this on #bitcoin
< jtimon>
sipa: not only shouldn't they be allowed, they have no power over hardforks in practice; "should" or "shouldn't" is irrelevant here
< sipa>
xinxi_: they're called consensus rules because every user in the system chooses to accept them
< xinxi_>
sipa: sure. that's a good reason.
< sipa>
agree, this is getting off topic
< sipa>
sorry
< jtimon>
yep, sorry, #bitcoin I guess
< xinxi_>
well, it's not off topic actually.
< jtimon>
well, it's not really development
< sipa>
it is off topic for the development of bitcoin core, which is what this channel is about
< xinxi_>
i need to make sure that a tremendous amount of effort that could make Bitcoin even greater will be deployed.
< petertodd>
xinxi_: I'd suggest you take that kind of discussion to #bitcoin-wizards at least
< sipa>
xinxi_: if your tremendous amount of effort requires a hard fork, the answer is that nobody can promise you it will happen
< xinxi_>
if there is no certainty but extremely high risk, we will just have to give up.
< sipa>
xinxi_: i think there are very interesting things you can do if you aim at something a bit less ambitious than proofs about the entire consensus system
< petertodd>
xinxi_: if you want certainty, then yes, I'd suggest you give up. but I'd prefer you took sipa's advice
< xinxi_>
petertodd: so you mean i should at least try no matter what the result is?
< sipa>
xinxi_: you can focus on for example just a scripting language... rewriting that part in C would not be very hard, and the result could be very interesting
< petertodd>
xinxi_: I'm just saying that for any work you do that may require a hard-fork to deploy, you should accept that you may need to create a new currency, economically distinct from bitcoin, to deploy your work in production
< sipa>
proofs about its complexity, memory usage, equivalence after certain changes
< petertodd>
xinxi_: whereas if you want high certainty of being able to deploy your work in production on bitcoin, you'll need to set your sights lower, to things that can be done in a soft-fork
< xinxi_>
sipa: complexity, memory usage, etc are certainly important but not the most critical thing.
< sipa>
xinxi_: also, we can always propose a softfork that effectively delegates validation under certain conditions to a completely new scripting language
< petertodd>
xinxi_: for example basically all my scalability research is stuff that I do knowing full well that it may never be able to be deployed on bitcoin
< Chris_Stewart_5>
sipa: Does it make any sense at all that a dns seed on testnet would only relay one addr from a getaddr message? Sorry if answered already but I'm still unclear on this
< sipa>
xinxi_: which can be implemented in a different language
< sipa>
xinxi_: BIP143 (which we're in the process of finishing up) makes that very easy
< sipa>
xinxi_: it's effectively designed to allow new scripting languages to be plugged in in a softfork-compatible way
< xinxi_>
petertodd: do you have a fulltime job?
< sipa>
xinxi_: there are people working on thinking about improvements that could be made in a new scripting language
< sipa>
xinxi_: and for that, all options are open
< xinxi_>
sipa: why does a scripting language make a difference?
< petertodd>
xinxi_: I'm a consultant, who works pretty much full time on contracts in this space (including Bitcoin Core development contracts)
< sipa>
xinxi_: by 'scripting' we mean bitcoin Script here, which is the language used inside transaction outputs to describe the condition under which they can be spent
< sipa>
xinxi_: it's not scripting as in bash or python
< xinxi_>
petertodd: So Bitcoin Core developers get paid?
< sipa>
it's a bytecode language inspired by Forth
< sipa>
xinxi_: and arguably, the implementation of that language interpreter is the most complicated piece of the consensus rules
< petertodd>
xinxi_: many do - there's no "bitcoin foundation" that pays core developers, but there's lots of ways to get paid
< xinxi_>
sipa: yeah, i know what you are talking about. but doesn't that require a hard fork?
< sipa>
xinxi_: nope!
< petertodd>
xinxi_: (there is a "bitcoin foundation", but they don't pay anyone, and are essentially bankrupt last I checked)
< sipa>
xinxi_: the basic idea is that we can redefine some of the remaining NOP opcodes in the language
< sipa>
xinxi_: it goes further than that, but it would lead me to far to explain it all here
< xinxi_>
sipa: i know what you mean.
< sipa>
and redefining NOPs can be done as a softfork
< sipa>
(as can the new script versions introduced by BIP143)
< xinxi_>
sipa: you invalidate more blocks syntactically, but you add more semantics.
< sipa>
xinxi_: we basically take a particular script whose meaning was "anyone can spend this", and replace it with a new meaning
< sipa>
aka moving to a new piece of marable
< sipa>
marble
< xinxi_>
now I also have a feeling that softforks can solve all problems.
< sipa>
to give an extreme example... we can switch to a different proof of work function using softforks
< petertodd>
sipa: explain :)
< sipa>
(in a hacky way that likely wouldn't actually work, as it would go against the interest of miners who decide on softforks...)
< sipa>
petertodd: define a height-dependent function that maps difficulty of the old PoW to difficulty of the new one
< sipa>
petertodd: now demand that every block satisfies both PoWs
< sipa>
and choose the function so that the difficulty of the old one becomes almost irrelevant over time
< petertodd>
sipa: eh... that's not "switching" to a different PoW, that's adding an additional PoW
< Chris_Stewart_5>
interesting
< sipa>
of course... in a softfork we can by definition only 'add' rules
< sipa>
but in practice it would mean switching
< sipa>
(this is a contrived example, and not something i'd ever actually propose)
< petertodd>
sipa: also, I'm not sure I'd want to call that a soft-fork, given that the cost for an attacker to attack old clients goes down to zero - I'd limit the use of the term "soft-fork" to things that retain lite-client level security for non-adopting clients indefinitely
< petertodd>
sipa: example: the switch to most-work-chain from bitcoin 0.1's longest-chain rule is something I'd definitely call a hard-fork
< sipa>
petertodd: fair point
< sipa>
interesting... i'm not sure i would see the chain selection rule as a part of consensus
< petertodd>
I certainly would!
< sipa>
it's necessary for convergence... but many things are necessary for convergence, like a working p2p system
< xinxi_>
about soft fork, can you explain to me how it gets deployed and why we can only be more restrictive?
< sipa>
xinxi_: read BIP9 about how to deploy it
< petertodd>
I could make the chain selection rule be "only headers whose merkle-root ends with the byte string 'peter todd'", and we'd quickly be in a situation where far less than 50% of hashing power is sufficient to attack non-adopting clients
< sipa>
xinxi_: why we can only be more restrictive... so that all rules demanded by old nodes remain satisfied by the majority of the hash rate
< petertodd>
sipa: _exactly_: your example fails the "majority of the hash rate" condition
< sipa>
petertodd: interesting
< sipa>
i agree
< sipa>
it is a validation-rule-more-restrictive change then, but not a soft forking change
< petertodd>
sipa: having said that, I think we need a new term for the case where a hard-fork is introduced gradually; maybe staged hard-fork?
< sipa>
we should name the different types of consensus rule changes after pokemon
< * sipa>
hides
< xinxi_>
Ha, I see the key is backward compatibility.
< * petertodd>
glares at sipa
< petertodd>
xinxi_: yup
< xinxi_>
nodes accepted by new nodes should be accepted by old nodes too.
< sipa>
xinxi_: otherwise, any old node will end up not accepting the majority chain
< xinxi_>
that's the easiest explanation to me.
< petertodd>
xinxi_: you mean, blocks accepted by...
< xinxi_>
petertodd: yep
< xinxi_>
that's a typo.
< sipa>
xinxi_: and the ledger will split in half, where each pre-existing coin becomes spendable on both sides of the fork
< xinxi_>
that's interesting.
< petertodd>
xinxi_: spendable on both sides of the fork, because the hard-fork _is_ the creation of a new currency
< xinxi_>
do you know COM+ invented by Microsoft? It's also backward compatible.
< xinxi_>
But they can keep releasing new versions of DirectX, which is based on COM+.
< sipa>
yup, just a currency that implicitly assigns the new coins exactly according to how they were distributed at the forking point in the old currency
< petertodd>
xinxi_: heck, I've been asked two or three times now by exchanges and the like to vet their plans for separating UTXOs cleanly onto both sides post-hard-fork, so they can trade/withdraw them separately (likely to be a legal requirement)
< xinxi_>
petertodd: that's funny.
< xinxi_>
so how do nodes check whether a block is valid or not?
< xinxi_>
do they just check the syntactics?
< petertodd>
xinxi_: what do you mean by "the syntactics"?
< xinxi_>
i mean the format of the block data.
< petertodd>
xinxi_: you mean, without the context of previous blocks?
< xinxi_>
petertodd: that's unlikely to be true. they need to verify transactions, which are dependent on history.
< petertodd>
xinxi_: indeed
< xinxi_>
petertodd: yeah, so how do they check?
< sipa>
xinxi_: 1) we receive headers and verify them syntactically, and check whether their PoW matches, then check whether they connect to previous headers, and their expected difficulty is correct, then store them
< xinxi_>
how about the block size? the op codes used?
< petertodd>
xinxi_: I'm not clear what you mean by "how" - are you thinking that full nodes check blocks differently than miners?
< sipa>
xinxi_: 2) we download blocks along the best headers from our peers, and when a block arrives, check it syntactically and that its transactions' hash matches what is in the claimed header we already know; and if so, then store them
< xinxi_>
petertodd: should miners and full nodes use the same set of rules to check blocks?
< sipa>
yes
< petertodd>
xinxi_: if they didn't, how would we ever get miners to enforce rules that we wanted enforced?
< xinxi_>
yeah, that's what I expected.
< xinxi_>
what if a node sees a newer version of blocks?
< sipa>
xinxi_: 3) once we have all blocks along a chain whose total work exceeds that of our previous best chain, we look up each transaction's inputs in a database of unspent transaction outputs, validate the scripts, check resource limits, and various other things, ... and if successful, remove the outputs it spent from the database, and add the outputs it created
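Step 3 can be sketched as a toy UTXO-set update (heavily simplified, hypothetical data shapes; real validation also checks scripts, amounts, and resource limits):

```python
# Toy sketch of connecting a block's transactions against a set of unspent
# outputs: each spent outpoint must exist, is removed, and new outputs are added.
def connect_block(transactions, utxo):
    """transactions: list of (txid, spent_outpoints, n_outputs)."""
    for txid, spends, n_out in transactions:
        for outpoint in spends:
            if outpoint not in utxo:
                raise ValueError(f"missing or already-spent input {outpoint}")
            utxo.remove(outpoint)       # remove the outputs it spent
        for i in range(n_out):
            utxo.add((txid, i))         # add the outputs it created

utxo = {("coinbase0", 0)}
connect_block([("tx1", [("coinbase0", 0)], 2)], utxo)
assert utxo == {("tx1", 0), ("tx1", 1)}
```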
< sipa>
xinxi_: what does that mean?
< petertodd>
xinxi_: then a soft-fork may have happened, adding rules to the existing set of rules, so the node should warn the user that the consensus rules enforced by it are no longer complete
< petertodd>
xinxi_: of course, changing the version number for soft-forks is just a nicety - we can't force miners to do that
< petertodd>
xinxi_: (modulo radical redesigns of how bitcoin works, like my client-side validation concepts)
< xinxi_>
i mean, if a Bitcoin client of a very low version sees a block generated by the newest version of Bitcoin client, what will happen?
< sipa>
xinxi_: it will just accept it
< sipa>
xinxi_: all software down to 0.2.10 should still work
< sipa>
(and that was because of a change in the p2p protocol)
< xinxi_>
so 0.2.10 is a hard fork.
< petertodd>
xinxi_: all that can happen is the node can warn the user that a soft-fork may have happened
< sipa>
xinxi_: nope, it was a p2p change
< petertodd>
xinxi_: not quite, that was a p2p change that can be easily worked around w/o changing the core consensus rules
< sipa>
xinxi_: you could create a bridge between 0.2.9 and the current network
< xinxi_>
based on your definition, soft forks should be backward compatible.
< sipa>
xinxi_: validation of the blocks should be backward compatible
< sipa>
and it is
< sipa>
the 0.2.9 software just doesn't know how to talk to newer clients anymore
< xinxi_>
OK. although the consensus protocol did not change, it's not backward compatible.
< xinxi_>
how did you guys manage to upgrade from 0.2.9 to 0.2.10?
< morcos>
is there really no better channel for this discussion?
< petertodd>
xinxi_: it'd be very easy for people to continue to run nodes that had backwards compatibility with 0.2.9
< petertodd>
morcos makes a good point...
< sipa>
ack, let's move to #bitcoin
< xinxi_>
so this is not even dev related?
< petertodd>
xinxi_: it's covering really basic material
< xinxi_>
but that's still dev. anyway, let's move.
< shatoshi>
INCREDIBLE! Send me some bitcoin and I can turn it into MUCH more, using special blockchain accelerating technology. Your bitcoin wallet will explode! Guaranteed to work & vouched by the OPS. PM me to begin!
< morcos>
ha ha, you've been replaced by something even worse
< sdaftuar>
thank you x 2
< MiraclePerson>
SUPER!!!! Want more bitcoin? Send me some Bitcoin and I'll instantly send you MORE back. I use special block-chain exploding skills. Totally safe & secure. Vouched by all the OPS! Pm me to begin!
< morcos>
sipa: how do you feel about boost::lockfree::queue?
< morcos>
as a proof of concept that solved the lock contention problem really well
< morcos>
jeremyrubin pointed me in this direction, but we weren't sure if this is something we wanted in the code
< BlueMatt>
actually lockless ring buffer, but close enough
< gmaxwell>
sipa: we could have a faster wide cryptographic hash, but last I looked a low hit rate seemed to be a bigger performance impediment than the time spent hashing.
< sipa>
gmaxwell: a sparse hash set would also be much more space efficient
< sipa>
(no malloc for each entry)
< sipa>
BlueMatt: interesting!
< sipa>
a ring buffer would suffice here, i guess... if it overflows you can always grab the lock and apply the erases regardless
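A minimal sketch of that batching idea (hypothetical; not actually lock-free - the lock here stands in for the cache's mutex, and the capacity is illustrative): erase requests accumulate in a small bounded buffer, and the lock is only taken to flush, including when the buffer would overflow.

```python
import threading

class DeferredEraser:
    """Batch cache-erase requests; take the cache lock only to flush."""
    def __init__(self, cache, capacity=4):
        self.cache = cache
        self.capacity = capacity
        self.pending = []
        self.lock = threading.Lock()   # stands in for the cache's big lock

    def erase(self, key):
        self.pending.append(key)
        if len(self.pending) >= self.capacity:
            self.flush()               # overflow: grab the lock and apply

    def flush(self):
        with self.lock:
            for key in self.pending:
                self.cache.discard(key)
            self.pending.clear()

cache = {1, 2, 3, 4, 5}
eraser = DeferredEraser(cache, capacity=2)
eraser.erase(1)            # buffered only, cache untouched
assert 1 in cache
eraser.erase(2)            # hits capacity: flushed under the lock
assert cache == {3, 4, 5}
```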
< shangzhou>
sipa: the bitcoin.sipa.be data looks like not up to date
< sipa>
shangzhou: oh?
< sipa>
oh, yes
< BlueMatt>
thoughts on decreasing transaction-serialization time?
< BlueMatt>
its currently slow as fuck, even when serializing into a static buffer
< BlueMatt>
caching it helps a lot, but eats a decent chunk of memory
< BlueMatt>
(and, mostly, caching tx serialization makes fibre diverge a decent chunk from core, which is annoying)
< cfields>
BlueMatt: working on a few optims to tx right now. not sure how applicable though...
< cfields>
BlueMatt: easy quick gain is lazy hashing
< BlueMatt>
cfields: I said serialization, not deserialization :p
< sipa>
BlueMatt: i think transaction data should be stored in a single malloced buffer, with internal pointers
< cfields>
BlueMatt: oh right, heh. i just assumed you were working on the other direction. nevermind then :)
< cfields>
lazy hashing would be a quick way to slow you down, then :p
< BlueMatt>
what we really should do is calculate the hash at the same time as we serialize/deserialize, since thats easy
< sipa>
BlueMatt: that does require encapsulating all fields though
< BlueMatt>
sipa: yes, that is a bit longer-term, though
< sipa>
agree
< BlueMatt>
but, yea, that would help a ton of things
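Hashing during serialization can be sketched like this, in the spirit of Core's CHashWriter (simplified; the field bytes below are illustrative): every byte written to the stream is also fed into SHA256, so the txid-style double-SHA256 is available the moment serialization finishes.

```python
import hashlib

class HashWriter:
    """Serialization sink that hashes every byte as it is written."""
    def __init__(self):
        self._hasher = hashlib.sha256()
        self.buffer = bytearray()

    def write(self, data: bytes):
        self._hasher.update(data)
        self.buffer.extend(data)

    def hash(self) -> bytes:
        """Double-SHA256 of everything written so far."""
        return hashlib.sha256(self._hasher.digest()).digest()

w = HashWriter()
w.write(b"\x01\x00\x00\x00")   # e.g. a little-endian version field
w.write(b"\x00")
expected = hashlib.sha256(hashlib.sha256(bytes(w.buffer)).digest()).digest()
assert w.hash() == expected
```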
< cfields>
sipa: speaking of which, after profiling and noticing how much time was spent in prevector, i've fixed up copying/moving to be much quicker. are you opposed to me dinking around in there?
< sipa>
cfields: please do!
< sipa>
cfields: actually, we should make it class safe...
< cfields>
sipa: better, i added static_asserts :p
< sipa>
and use std::move etc instead of realloc etc
< BlueMatt>
cfields: ironically, no, FIBRE is mostly tx copy/serialize - it copies most tx from mempool into a CBlock, and serializes those txn to build the data-FEC chunks from which to calculate the missing chunks
< BlueMatt>
it only ever has to deserialize txn that it missed from mempool (which is relatively few)
< sipa>
BlueMatt: we could also add a means to serialize a block from a list of tx shared_pointers
< cfields>
sipa: i started by special-casing it with enable_if<> and SFINAE, but I think we're better off just specializing for ~std::is_trivial<>
< gmaxwell>
What FIBRE FECs is not the block directly, but the block with padding to make transactions line up on packet boundaries.
< sipa>
BlueMatt: instead first needing a full deep copy of the tx into the block
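That padding scheme can be sketched as follows (the packet size is a made-up illustrative value): each serialized transaction is zero-padded so the next one starts on a packet boundary, letting a receiver slot in transactions it already has packet by packet.

```python
PACKET_SIZE = 1024  # illustrative; not FIBRE's actual packet size

def pad_transactions(serialized_txs):
    """Concatenate serialized txs, padding each to the next packet boundary."""
    out = bytearray()
    for tx in serialized_txs:
        out.extend(tx)
        out.extend(b"\x00" * (-len(out) % PACKET_SIZE))  # pad to boundary
    return bytes(out)

data = pad_transactions([b"a" * 100, b"b" * 2000])
assert len(data) % PACKET_SIZE == 0
# The second transaction begins exactly on a packet boundary:
assert data[PACKET_SIZE:PACKET_SIZE + 2000] == b"b" * 2000
```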
< BlueMatt>
sipa: who said anything about serializing?
< BlueMatt>
oh, what gmaxwell said
< BlueMatt>
sipa: no, it just deep-copies into the CBlock that it hands to ProcessNewBlock
< sipa>
"FIBRE is mostly tx copy/serialize" -- BlueMatt
< sipa>
BlueMatt: ok, maybe we should just make CBlock have shared_ptr to transactions...
< BlueMatt>
that would help a lot, but it doesnt solve my serialize-time problem