< sipa>
sdaftuar, luke-jr, gmaxwell, wumpus: i believe that downgrading from 0.13.1 to 0.13.0 will just work
< sipa>
sdaftuar: even in case of a reorg that takes you back into witness-enabled code
< sipa>
s/code/blocks/
< luke-jr>
well, that's good, I think
< sipa>
the test that no witness data is present in non-witness blocks is only done in AcceptBlock, not in ConnectBlock
< sipa>
so we enforce it when receiving the block from somewhere, but not afterwards
< sipa>
which i think is exactly right
< gmaxwell>
sipa: what happens if you downgrade then upgrade again?
< sipa>
gmaxwell: newly added blocks will be rewound
< morcos>
sipa: luke-jr: sorry, but i'm not sure i understand sipa's suggested compromise in 8459. sorry i don't remember exactly what the first version was, but i think it makes sense to explain that you don't NEED to specify both size and weight, otherwise it's just confusing as to how you are supposed to set things up
< morcos>
as to making a recommendation whether or not to set size... i'm not sure why you say it's not worse than 0.12, we don't know that. my guess is that it isn't significantly worse now, but it certainly could be post segwit, when you are trying to fit the last tx in a block and you keep trying EVERYTHING b/c the limit that's stopping you isn't the consensus limit
< morcos>
i do apologize for not testing at least the serialization effect (the other would be tricky to test), but i've been too caught up in trying to figure out why waking up threads is so abysmally slow
< luke-jr>
it doesn't serialise any more than without maxsize AFAIK.
< luke-jr>
GetSerializeSize doesn't serialize
< morcos>
ah so it's expected that GetSerializeSize is fast? well that explains why anecdotally it's not slower
< luke-jr>
morcos: I thought it was obvious the options were optional.. but if you disagree, I can add perhaps: "Both of these are optional, but at least blockmaxsize ought to be assigned by miners, to determine the largest block they are willing to create."
< morcos>
luke-jr: i assume you make suggestions like that on purpose
< luke-jr>
morcos: GetSerializeSize is literally just adding the sizes of each field
< luke-jr>
morcos: ?
< sipa>
i expect getserializesize time to be dominated by just memory access of iterating all vectors
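The point luke-jr and sipa make here — that computing a serialized size only walks the fields and never copies payload bytes — can be sketched with a byte-counting stream. This is a simplified illustration, not Bitcoin Core's actual CSizeComputer or transaction types:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// A stream that only counts bytes instead of writing them, in the
// spirit of Bitcoin Core's size computation (simplified sketch).
struct SizeComputer {
    size_t nSize = 0;
    void write(const void*, size_t n) { nSize += n; }
};

// Hypothetical stand-in for a transaction input.
struct FakeTxIn {
    std::vector<uint8_t> scriptSig;
    uint32_t nSequence = 0;
    void Serialize(SizeComputer& s) const {
        s.write(scriptSig.data(), scriptSig.size()); // variable-length payload
        s.write(&nSequence, sizeof(nSequence));      // fixed-width field
    }
};

// "Serializing" to a SizeComputer never copies payload bytes, so the
// cost is dominated by iterating the vectors, as sipa notes above.
size_t GetSerializeSize(const FakeTxIn& in) {
    SizeComputer s;
    in.Serialize(s);
    return s.nSize;
}
```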
< morcos>
luke-jr: you are against what you see as advice in the notes to use weight, but surely you understand that all of us are against advice to limit based on bytes? if any of us believed that was an issue, we'd be opposed to segwit
< luke-jr>
morcos: it is clearly an issue today. I am shocked that anyone would disagree.
< luke-jr>
0.13 will hopefully change that with CB, but it isn't deployed yet.
< sipa>
luke-jr: my view is that at worst it may make block propagation slightly worse, but at the same time it improves utxo storage incentives, which i'm much more concerned about long term
< morcos>
how about language along the lines of "If a miner wishes to limit the actual size of a block in bytes (as opposed to the consensus weight limit) then it is necessary to specify a blockmaxsize explicitly. If a miner does not wish to do that, it would be better to only specify a blockmaxweight"
< sipa>
a small constant factor was never something i opposed for block size. an expectation of it being increased under whatever perceived pressure is
< luke-jr>
morcos: that suggests miners ought not to limit size
< sipa>
i think miners ought not to limit size
< morcos>
something like that, not exactly "as opposed to" b/c that sounds like you're not enforcing the consensus limit
< sipa>
because it's unreasonable that all of them will
< morcos>
luke-jr: why does that language suggest they ought not limit size. it's just saying that if you don't care about a limit on actual size, then it's better to not set blockmaxsize.
< morcos>
which is TRUE
< luke-jr>
if they don't care, they should set it based on someone's recommendation, but not necessarily the big blocker recommendation
< luke-jr>
if they care about Bitcoin (but not per se enough to research their own opinion on block size limits), they should follow recommendations to set it lower.
< sipa>
luke-jr: as a collective, perhaps
< luke-jr>
brb
< sipa>
luke-jr: there could be a gentlemen's agreement not to set it to the maximum immediately
< sipa>
but i believe that's unrealistic at this point
< morcos>
"If a miner wishes to limit the actual size of a block in bytes then it is necessary to specify a blockmaxsize explicitly. If a miner wishes only to limit block creation by weight (BIP141) regardless of how large in bytes the block is they should only specify a blockmaxweight"
< luke-jr>
sipa: every time I brought this up before, I was asked to "wait for the segwit release notes"; now I'm told it's too late?
< morcos>
i'm trying to make it as neutral as possible, while explaining the difference
< luke-jr>
(still brb mode)
< morcos>
what i don't want is a bunch of miners trying to mine max blocks and having a blockmaxsize set (especially after segwit). and we have to explain in enough detail for that to happen. i have no objection if they actually want to limit bytes
< sipa>
luke-jr: i can't remember ever recommending to wait for release notes. i have told you many times that if you disagree that weight is the limiting factor on the network, you should oppose segwit
< morcos>
sipa: want to think about something different and arcane for a second? in those checkqueue graphs i presented you, the scriptthreads were busyspinning waiting for work
< sipa>
morcos: i heard
< morcos>
it slows down the whole thing quite a bit (about 5ms per block) but still a big improvement, if you wake up the scripthreads only when there is work
< morcos>
even though it takes only about 100us for them all to wake up, and you can make that happen as early as the start of ConnectTip so they are all awake and busy spinning well before work comes in
< morcos>
i'm assuming it's the fact that their caches have been blown away, but doesn't 5ms seem like a lot?
< morcos>
just wondering if you had any thoughts.. we're still working on it (mostly jeremy, i'm mostly just testing)
< sipa>
morcos: is the slowdown as significant when you have fewer threads?
< morcos>
sadly i didn't test that
< morcos>
yet
< sipa>
and this is 5ms clock time, not cpu time?
< morcos>
yes, 5ms clock time. it appears from debugging that the scriptthreads are just slower to run.
< sipa>
i guess if there are many memory locations to access one by one for script validation to run, the effect of not running could be many ram latencies
< sipa>
but i'm not sure whether that makes sense
< morcos>
ok, don't worry about it.. its mostly just bugging me. i realize that it doesn't make sense to micro optimize for one architecture anyway, but just trying to understand the effect
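The wakeup pattern under discussion — workers sleeping on a condition variable and being notified only when work arrives, instead of busy-spinning — can be sketched as below. Names are hypothetical; this is not the actual CCheckQueue code:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Workers sleep on a condition variable and are woken only when work
// is queued; the ~5ms cost morcos measures sits in this wakeup path
// plus the cold caches the woken threads start with.
class WorkQueue {
    std::mutex m;
    std::condition_variable cv;
    std::queue<int> jobs;
    bool done = false;
public:
    int processed = 0;
    void Push(int job) {
        { std::lock_guard<std::mutex> l(m); jobs.push(job); }
        cv.notify_one();
    }
    void Finish() {
        { std::lock_guard<std::mutex> l(m); done = true; }
        cv.notify_all();
    }
    void Worker() {
        std::unique_lock<std::mutex> l(m);
        for (;;) {
            cv.wait(l, [&] { return done || !jobs.empty(); });
            if (jobs.empty()) return;   // finished and drained
            jobs.pop();
            ++processed;                // stands in for a script check
        }
    }
};

// Run njobs through nthreads workers; returns the number processed.
int RunDemo(int nthreads, int njobs) {
    WorkQueue q;
    std::vector<std::thread> threads;
    for (int i = 0; i < nthreads; ++i) threads.emplace_back(&WorkQueue::Worker, &q);
    for (int j = 0; j < njobs; ++j) q.Push(j);
    q.Finish();
    for (auto& t : threads) t.join();
    return q.processed;
}
```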
< PatBoy>
thx dev for all ur hard work !
< luke-jr>
sipa: I do not believe that opposing segwit is a serious suggestion.
< luke-jr>
morcos: then in the segwit announcement, we can re-evaluate and make recommendations that make sense given the new network conditions
< luke-jr>
(which might or might not include removing blockmaxsize depending on CB effectiveness and real-world testing)
< sipa>
luke-jr: miners will set block and weight likely to the maximum... that may be suboptimal, but it's inevitable that this is what will happen if segwit goes through
< sipa>
people can suggest to not do that... but for centralization risk (which is a long term effect), the important question is now what people do, but what they could do
< sipa>
*not
< luke-jr>
sipa: we don't know that yet, and our recommendations should always be what is sane even if they get ignored.
< sipa>
luke-jr: that's a reasonable position... but the code is written from a viewpoint that we will get weight-limited block construction
< sipa>
luke-jr: and the release notes should describe the code
< luke-jr>
then the code is broken (sabotaged, it sounds like) and fixing it should be considered a blocker for any release.
< sipa>
if that is your viewpoint, then it is segwit that is sabotaged
< sipa>
i disagree strongly with that
< luke-jr>
sipa: do you disagree strongly with 29000050e575a26b7f7c853cbccdb6bc60ddfdd9 for 0.13?
< sipa>
i think it's pointless
< sipa>
we're doing busy work to avoid stating reality in release notes
< sipa>
the reality is that segwit will mean that blocks go over 1 MB
< luke-jr>
reality is that blockmaxsize should be used for today and the near future
< sipa>
i disagree with that
< gmaxwell>
luke-jr: I think you are not paying attention to reality. Virtually all miners make blocks as large as they can. If at any point they make a block smaller than maximum, angry mobs show up and demand them to make maximum size blocks. Not wanting to deal with such nonsense, they simply do it.
< sipa>
or rather, i would agree with that if there was a reasonable chance that the entire ecosystem will self limit in that way
< sipa>
it won't
< gmaxwell>
this is the existing behavior in the network, and has been for a long time-- more or less since we made it configurable at all.
< luke-jr>
sipa: the entire ecosystem doesn't need to; why is it all or nothing?
< luke-jr>
gmaxwell: that's with 1 MB blocks. who knows if it will work out the same with 3 MB
< gmaxwell>
it especially will work out that way, with CB and relay mitigating orphaning effects.
< luke-jr>
I know that if miners start doing >1 MB blocks before the network is ready, I will intentionally NOT use a segwit wallet, to do my small part in preventing them from bloating the blocks. I think a lot of the community could be convinced to do the same.
< gmaxwell>
luke-jr: and if you want to posit a change in how things work you should suggest how.
< gmaxwell>
luke-jr: nothing held back blocksizes from growing from 250k to 1mb... not even the defaults in bitcoin core-- or large increases in orphaning rates.
< sipa>
luke-jr: smaller blocks will always be better for propagation relay effects, and larger blocks will always be worse
< sipa>
luke-jr: there is no 'ready' here; just a compromise that is acceptable
< sipa>
segwit shifts the relay effects one way, but shifts other incentives the other way
< luke-jr>
ugh, even IF miners will end up doing terrible things, it doesn't mean we need to sabotage miners who WANT to do what is best for the network.
< gmaxwell>
its not a question of "want to do what is best"; most people don't want to decide and do not have the time to do so in any case.
< gmaxwell>
You're thinking in terms of pixie unicorn miners.
< sipa>
the only way miners can do something that is good for the network is by making themselves replacable
< gmaxwell>
Go look at the actual blocks. The miners you're expecting just don't exist at a meaningful level.
< sipa>
luke-jr: moreover, we're not removing the ability to limit the block size
< sipa>
reality is that the code is optimized for limiting by weight, and segwit is designed with the assumption that that is what will happen... your suggestions seem to be that we should hide that
< sipa>
block size limiting is supported, and we should explain how to use it for those who want to
< luke-jr>
gmaxwell: yes, pixie unicorn miners who actually exist, despite being ignored by Core
< luke-jr>
sipa: either blockmaxsize has a notable cost or it doesn't. if it doesn't, there is no reason to claim it does.
< sipa>
luke-jr: so, benchmark
< luke-jr>
if it does, then it's sabotaging miners who use it
< gmaxwell>
luke-jr: they don't exist in a meaningful sense. a miner that gets a block a day is not making a meaningful dent in the system's survivability.
< sipa>
luke-jr: limiting by block size is already taking a cut. fee income will be suboptimal if you limit by size
< sipa>
luke-jr: that's as much sabotaging
< luke-jr>
gmaxwell: so that justifies ignoring the requests of such miners, and sabotaging them with (allegedly) bad performance?
< sipa>
limiting by weight is a design decision which has costs
< gmaxwell>
'sabotaging' after I had to jump up and down and yell for over a month to get eligius to upgrade to 0.12 even though it made a _tens of fold_ increase in createnewblocktime?
< gmaxwell>
This is such bullshit.
< luke-jr>
ignoring that 0.12 broke numerous things Eligius provided that were left unmerged in Core since 0.6..
< sipa>
such as?
< luke-jr>
CPFP
< sipa>
i never trusted that code enough to merge
< sipa>
and i think i was not alone
< gmaxwell>
I am fed up with this.
< luke-jr>
same here.
< gmaxwell>
luke-jr: you are abusive towards me and the other contributors.
< gmaxwell>
you are obsessing over minutia on top of minutia.
< gmaxwell>
You are wasting countless hours exhausting all patience.
< gmaxwell>
Over matters which _do_ _not_ _matter_. The few obscure miners which will set non-defaults even though they get abusive and threatening contact from users (which drives away their hashpower), can _still_ do so. If it's slightly slower? so what--- the latest software is dozens of times faster to create blocks than older software and they hardly cared to upgrade.
< gmaxwell>
it literally makes no difference in the world, and yet you force people to spend hours and hours debating these things.
< gmaxwell>
and I get to spend my time asking others to not leave the project because they are exhausted by you; but it even exhausts me too.
< gmaxwell>
The last block from eligius was 64 hours ago. It contained _NO_ transactions. I would say that createnewblock being merely 29.5 times faster than the old code it was running until recently instead of 30x faster won't matter. ... except it won't even see that difference when it mines empty blocks with no transactions at all.
< wumpus>
jonasschnelli: you'd say so, as the type is used; but does any compiler complain about this in practice? it's in a template, so it could be that you only need it at instantiation time
< wumpus>
I'm a little rusty on the exact rules there though
< jonasschnelli>
wumpus: I just added some stats stuff and included memusage.h, compiler was then complaining about the missing type prevector...
< jonasschnelli>
memusage.h is directly using (in a template but directly) prevector
< jonasschnelli>
But anyways... far away from important. :)
< wumpus>
does the data structure that you're trying to measure include a prevector
< wumpus>
?
< jonasschnelli>
no. I just want to measure a vector of structs... but mempool.h refers to prevector
< jonasschnelli>
I guess all other places where mempool.h is included, prevector was previously already included
< wumpus>
there is no mempool.h? I guess you mean memusage.h?
< jonasschnelli>
argm. memusage.h
< wumpus>
indeed, if including memusage.h forces you to include prevector.h then it should be included there
< jonasschnelli>
It's a nitpick and I'll try to cover that fix in one of my stats related commits.
< wumpus>
sometimes the c/c++ dependency management is a bit unfortunate, very easy to accidentally have something creep into your scope so you forget that direct dependencies exist but have no include, then it turns up later when the header happens to be used independently
< wumpus>
right
< wumpus>
unlike missing prototypes in C, at least this cannot lead to ugly type-unsafety bugs, you either have to include it somewhere or it just won't compile :)
< btcdrak>
jeremyrubin: it's quite useful to use markdown task lists in the PR body, as those show up in lists and references on other tickets/PRs https://github.com/blog/1825-task-lists-in-all-markdown-documents and it also has the side benefit of making the WIP status clear
< jeremyrubin>
btcdrak: what are the tasks?
< btcdrak>
your todo list.
< jeremyrubin>
btcdrak: the only stuff really remaining is more testing.
< jeremyrubin>
btcdrak: actually I can think of two things
< sipa>
jeremyrubin: if all that remains is testing, i wouldn't call it WIP :)
< btcdrak>
sipa: exactly! if it's marked WIP no-one will test it.
< jeremyrubin>
sipa: oh ok... I'll un WIP it then, it's ready for testing
< jeremyrubin>
sorry, I was unclear at what point I should WIP vs non-WIP, thanks for the guidance
< jeremyrubin>
I'm going to be afk for a little bit, but will respond to more things later.
< jeremyrubin>
sipa: you may just want to view the whole diff rather than commit by commit
< sipa>
jeremyrubin: i tend to review per commit, as i think every commit should logically make sense... the combination into a pull request is there to see the bigger picture
< jeremyrubin>
yep I agree, each commit should compile and run on this PR but is maybe not fully documented ;)
< bsm117532>
wizkid057, the exploding bitcoin spammer, somehow got op permissions on #bitcoin-wizards and is banning people. Can someone here take care of him?
< jonasschnelli>
jeremyrubin: impressive PR... my review will take a while. :)
< pigeons>
he's not banning people, just kicked you only, and it's not the spammer, it really is wizkid. probably just a mistake
< GitHub185>
[bitcoin] paveljanik opened pull request #8466: [Trivial] Do not shadow variables in networking code (master...20160805_Wshadow_net) https://github.com/bitcoin/bitcoin/pull/8466
< sipa>
i feel that 'New mempool information RPC calls' could move to 'Low-level RPC changes' in the release notes
< jtimon>
this doesn't touch the script flags and doesn't move anything out of contextualCheckBlock like the previous one
< jtimon>
it could be 1 or a few commits, but I kept it extremely separated just in case
< GitHub153>
[bitcoin] paveljanik opened pull request #8467: [Trivial] Do not shadow members in dbwrapper (master...20160805_Wshadow_dbwrapper) https://github.com/bitcoin/bitcoin/pull/8467
< sdaftuar>
sipa: ack
< sdaftuar>
sipa: more generally i think if you're going to clean up further, perhaps reordering the Notable Changes section might be called for. e.g. hd wallet support, segwit, cpfp mining, and compact blocks strike me as more important to highlight than c++11 code modernizations
< sipa>
maybe a section that aggregates all non-user-visible code changes
< sipa>
that would even include segwit, i guess
< sdaftuar>
sure. fyi the mining stuff references segwit, so if you move those around relative to each other, the text might need some edits
< GitHub14>
[bitcoin] paveljanik opened pull request #8468: Do not shadow member variables in serialization (master...20160805_Wshadow_serialization) https://github.com/bitcoin/bitcoin/pull/8468
< Chris_Stewart_5>
Is this entire block of code inside of bloom.cpp basically so that we can match redeemScript/pubkey hashes in scriptPubKeys?
< jonasschnelli>
sipa: how do I get the DynamicUsage from a struct?
< jonasschnelli>
MallocUsage(size_t alloc) is inline...
< sipa>
jonasschnelli: sum the dynamic usages of its member fields
< jonasschnelli>
sipa: what about MallocUsage(sizeof(mystruct))?
< jonasschnelli>
+allow external calls to MallocUsage
< sipa>
jonasschnelli: are you mallocing your struct?
< jonasschnelli>
sipa: ... ah... no.
< sipa>
then you shouldn't be using either of them
< jonasschnelli>
I guess std::vector<mystruct> is on the stack
< sipa>
DynamicUsage is not how much memory a struct uses. It's how much _dynamic_ memory it uses. The static memory you find just using sizeof.
< sipa>
jonasschnelli: use DynamicUsage on the vector.
< sipa>
The vector is allocated on the stack. Its entries are allocated on the heap.
< sipa>
DynamicUsage(vector) will tell you how much memory that vector malloced.
< jonasschnelli>
sipa: yes. I'll do that. But i'd like to know how many elements i have to remove from the vector to match a usage target... therefore I need to calculate the size of a single element.
< sipa>
jonasschnelli: use sizeof.
< sipa>
also, removing elements from a vector does not reduce its memory usage.
< sipa>
you need to call shrink_to_fit
< jonasschnelli>
okay... but aren't the elements malloced?
< sipa>
yes?
< sipa>
but not individually
< sipa>
a vector has a single allocated block with all its elements in it
< jonasschnelli>
Okay. I'll use sizeof(struct) and good point about shrink_to_fit!
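sipa's points — a vector's elements live in one heap block, erasing elements never lowers capacity, and only shrink_to_fit (a non-binding request in the C++ standard) can release that memory — can be checked directly. Names here are hypothetical stand-ins, not the real memusage::DynamicUsage, which also rounds up to malloc bucket sizes:

```cpp
#include <cstddef>
#include <utility>
#include <vector>

struct Entry { int a; int b; };  // stand-in for jonasschnelli's struct

// Approximate dynamic usage of a vector: one heap block holding
// capacity() elements (ignoring allocator overhead, unlike the real
// memusage::DynamicUsage in Bitcoin Core).
size_t ApproxDynamicUsage(const std::vector<Entry>& v) {
    return v.capacity() * sizeof(Entry);
}

// Erase the second half, then shrink. erase() is guaranteed not to
// change capacity; shrink_to_fit() is the non-binding request that may
// actually release memory. Returns {capacity after erase, after shrink}.
std::pair<size_t, size_t> EraseThenShrink(std::vector<Entry>& v) {
    v.erase(v.begin() + v.size() / 2, v.end());
    size_t after_erase = v.capacity();   // unchanged by erase
    v.shrink_to_fit();                   // non-binding shrink request
    return {after_erase, v.capacity()};
}
```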
< jtimon>
sipa: so how many sets of flags do we want to expose in libconsensus?
< sipa>
jtimon: one for every layer
< jtimon>
is locktime a layer?
< sipa>
there will be a script validation API and a block validation API, right?
< sipa>
maybe a transaction validation API as well
< sipa>
i think it's wrong to make them share flags
< sipa>
what if there was a script change that only applied to transactions with nVersion>=5... you'd have a block level flag to enable the soft fork, but it wouldn't map one-to-one to the script flag
< jtimon>
maybe a header validation API too, I think that would be the easiest thing to expose and discuss abstractions from storage
< jtimon>
sipa: yeah, now they don't need to map, the non-script consensus flags don't need to be in the internal script flags
< jtimon>
and some internal script flags are not exposed
< jtimon>
I don't understand your txversion>5 example
< jtimon>
the flag would be used at the tx validation level either way, no?
< jtimon>
oh, I see what you mean
< sipa>
say we had a new OP_CHECKSIGAWESOME, but only enabled in UTXOs created by transactions with nVersion >= 5
< sipa>
there would be SCRIPT_VERIFY_CHECKSIGAWESOME, and a BIP9 deployment AWESOME
< sipa>
you'd pass AWESOME to the block validation functions
< jtimon>
yes, you're saying that you would do the consensus-to-internal-script conversion outside of the conversion function and inside, say verifyTx
< sipa>
right... there is not necessarily a clean mapping between the two
< sipa>
what flags the script code is called with is block level logic
< jtimon>
well, what we're exposing in libconsensus are the BIP9/older_sf flags, no?
< sipa>
for the script layer, sure
< sipa>
wait, no
< jtimon>
for all layers, no?
< sipa>
i'm confused :)
< sipa>
i'm assuming there will be an enum for block level validation flags
< jtimon>
the flags we expose in libconsensus right now are a subset of the script internal ones
< sipa>
and an enum (which already exists) for script level validation flags
< sipa>
for some softforks there will be a flag in both
< jtimon>
there's no flag for say bip68 and bip113, because those would be needed for verifyTx, but not for verifyScript
< sipa>
right
< jtimon>
well, the "block level" flags include all the "script level" flags, right?
< sipa>
not necessarily
< jtimon>
how do you tell verifyblock to verify p2sh otherwise?
< jtimon>
I mean, verifytx
< jtimon>
verifyblock could call getflags internally I guess, but that's not the point
< sipa>
i'd expect verifyblock to have something like BLOCK_VERIFY_CSV, and verifyscript to have something like SCRIPT_VERIFY_CHECKSEQUENCEVERIFY
< jtimon>
say I want to call verifytx specifying that I want p2sh and bip113 validated
< sipa>
sure, CSV implies that CHECKSEQUENCEVERIFY is passed to script
< sipa>
but CSV does much more than that
< sipa>
it also enables nsequence behaviour
< sipa>
which has nothing to do with script
< jtimon>
so BLOCK_VERIFY_CSV is mapped into bitcoinconsensus_SCRIPT_FLAGS_VERIFY_P2SH and then bitcoinconsensus_SCRIPT_FLAGS_VERIFY_P2SH into SCRIPT_VERIFY_P2SH
< jtimon>
do we need also a TX_VERIFY_P2SH for exposing verifyTx ?
< sipa>
i'd keep the tx-level flags and block-level flags the same
< sipa>
but script flags seem separate
< jtimon>
well, the flags for BIP68 and for BIP112 are separated in that branch
< sipa>
that's just my opinion, btw... maybe other people feel that script and block flags should be combined
< jtimon>
they are separated: there's "all consensus flags" and "all consensus flags related to script plus some more script specific flags"
< jtimon>
well, from previous talks I thought this was exactly what you wanted, still not sure what you would prefer
< sipa>
yes, i feel they should be completely separate
< sipa>
not one being an extension of the other
< jtimon>
like not repeating p2sh in both of them? I don't understand
< sipa>
but maybe others disagree... that's fine
< sipa>
yes, P2SH would be in both
< jtimon>
in this case one is not an extension of the other, what do you mean?
< jtimon>
I'm just adding new non-script flags there
< sipa>
bitcoinconsensus_BIP16, bitcoinconsensus_BIP66, bitcoinconsensus_BIP65, bitcoinconsensus_BIP68, ... for example
< jtimon>
the script ones were already exposed in libconsensus
< sipa>
and BIP16 would map to script verify P2SH
< jtimon>
they already do by hardcoding the same bit positions in both, I'm trying to solve that
< jtimon>
that's what ScriptFlagsFromConsensus() is supposed to solve
< sipa>
well why do you have both LOCKTIME_VERIFY_SEQUENCE and bitcoinconsensus_SCRIPT_FLAGS_VERIFY_CHECKSEQUENCEVERIFY in the same enum?
< sipa>
at the block level they're the same thing
< jtimon>
because I'm not creating a new enum for libconsensus, there's currently one for libconsensus and another for scripts, I'm keeping it that way, just decoupling each other from the bit positions having to match
< sipa>
to verifyblock you would just pass "CSV is enabled", and it would trigger all necessary things... including locktime nsequence checking and script checksequenceverify
< jtimon>
let's say in libconsensus the "block level" flags would be reused for verifyTx and verifyScript, does that make sense?
< sipa>
i don't know what you mean by that
< jtimon>
I see, you want the block level ones to correspond with deployments, ie bip68 and bip112 go together
< sipa>
well if there is a reason why they would be implemented separately, they can be separate
< sipa>
it's about abstraction... the caller of blockverify shouldn't need to know what every softfork corresponds to internally
< jtimon>
but you don't want verifyScript to receive a CSV_bip68_bip112 flag that it internally maps to SCRIPT_VERIFY_CHECKSEQUENCEVERIFY (ignoring bip68 because at the script level it doesn't care)
< sipa>
that enum you have there feels to me like you want to expose all internal choices to the caller... that isn't necessary
< sipa>
right... have block validation flags that correspond to deployments/softforks, and script validation flags that correspond to script rules
< jtimon>
yeah they can be separated, we could also have a bool param for every option
< sipa>
some block validation flags will correspond to one or more script flags... some may not
< sipa>
but the caller shouldn't need to know that
< sipa>
again... all of that is just my opinion
< sipa>
i'd like to hear some others too
< jtimon>
agree on what you said about the caller, disagree that I want to expose more details than necessary to the caller, I'm happy to merge bip68 and bip112 flags, that seems orthogonal
< sipa>
just the naming of your flags is exposing things :)
< jtimon>
the main question is how many enums do we want and what should it be in them
< jtimon>
oh, come on, I expect people to bike-shed
< jtimon>
don't tell me that's the main complaint
< sipa>
at the block/tx level, you'd have bip16, bip34, bip66, bip65, bip68_112_113, bip141
< cfields>
paveljanik: np. the serialization one will require more effort.
< sipa>
at the script level, you'd have what already exists
< jtimon>
assume you fully decide the names in the enum
< cfields>
paveljanik: imo changing the name for those would be helpful, the version serialization is quite confusing as-is
< sipa>
jtimon: i don't care about the name... i care about the fact that it's mixing several layers!
< jtimon>
why are bip68_112_113 together?
< jtimon>
bip113 was deployed separately
< sipa>
because someone calling verifyblock shouldn't need to know that part of it is a script rule and part of it is not
< jtimon>
for the libconsensus caller, all he cares about is deployments, no?
< sipa>
IMHO, there should not be any flags at all, and you just give it the block headers :)
< sipa>
but i understand we're not there yet
< jtimon>
with my approach all they need to know is that each flag is a deployment
< sipa>
ok, i don't like it, sorry
< sipa>
that's not a NACK
< sipa>
but i'm tired of discussing it
< jtimon>
well, we can expose consensus::getflags
< jtimon>
but are you saying that verifyTx should call getflags internally?
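The separation sipa argues for above — deployment-named flags at the block/tx layer, rule-named flags at the script layer, with the mapping hidden inside the library — might look like the sketch below. All names are hypothetical, not the actual libconsensus API or the ScriptFlagsFromConsensus() in jtimon's branch:

```cpp
#include <cstdint>

// Hypothetical block/tx-level flags, named after deployments: the
// caller only says which softforks are active.
enum BlockVerifyFlags : uint32_t {
    BLOCK_VERIFY_BIP16         = 1 << 0,  // P2SH
    BLOCK_VERIFY_BIP65         = 1 << 1,  // CHECKLOCKTIMEVERIFY
    BLOCK_VERIFY_BIP68_112_113 = 1 << 2,  // the CSV deployment bundle
};

// Hypothetical script-level flags, named after script rules.
enum ScriptVerifyFlags : uint32_t {
    SCRIPT_VERIFY_P2SH                = 1 << 0,
    SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY = 1 << 1,
    SCRIPT_VERIFY_CHECKSEQUENCEVERIFY = 1 << 2,
};

// The mapping lives inside the library, so the caller of verifyblock
// never learns which part of a deployment is a script rule. Note the
// CSV bundle maps only to CHECKSEQUENCEVERIFY: its BIP68 nSequence
// rules are enforced at the tx level, outside script, as sipa says.
uint32_t ScriptFlagsFromBlockFlags(uint32_t block_flags) {
    uint32_t script_flags = 0;
    if (block_flags & BLOCK_VERIFY_BIP16) script_flags |= SCRIPT_VERIFY_P2SH;
    if (block_flags & BLOCK_VERIFY_BIP65) script_flags |= SCRIPT_VERIFY_CHECKLOCKTIMEVERIFY;
    if (block_flags & BLOCK_VERIFY_BIP68_112_113) script_flags |= SCRIPT_VERIFY_CHECKSEQUENCEVERIFY;
    return script_flags;
}
```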
< paveljanik>
cfields, yes. It took me a lot of time to get it done. This was my third attempt to do so.
< cfields>
paveljanik: understood. it's much appreciated.
< paveljanik>
cfields, I have one more huge task... LOCK inside LOCK
< cfields>
paveljanik: one high-level request: if the param isn't used, please just leave the var unspecified
< paveljanik>
cfields, I thought about it too (esp. nVersion...) but this would make review more difficult.
< cfields>
paveljanik: otherwise we're not getting any closer to being able to enable -Wunused-parameter :)
< cfields>
paveljanik: how so?
< paveljanik>
every reviewer must read the whole function code...
< paveljanik>
without this change, it is almost enough to read the context
< cfields>
ok, fair enough
< paveljanik>
of course the compilation can find problems there...
< paveljanik>
BTW - LOCK inside LOCK problem: LOCK macro defines a local variable CCriticalBlock criticalblock. And thus LOCK inside LOCK shadows it.
< paveljanik>
I do not want to make the change in the syntax of LOCK, so I have to change the macro.
< gmaxwell>
thats presumably why there are LOCKN macros?
< paveljanik>
yes, but... LOCK2 at the beginning of the function grabs both at the beginning.
< paveljanik>
I do not think this is wanted.
< paveljanik>
I had an idea to change the name of the variable somehow - e.g. criticalblockLINENR or so.
< paveljanik>
but implementing this in the preprocessor's ## ## ## is currently above my knowledge ;-)
< paveljanik>
see e.g. ProcessGetData in main.cpp.
< sipa>
paveljanik: use the file number in the name of the variable :)
< sipa>
eh, line number
< paveljanik>
that was my plan :-) But not so easy to write it 8)
< sipa>
i'll try
< cfields>
can't you just use the incoming param's name?
< sipa>
cfields: what do you do if it's called pnode->buf[(int)(sin(x)*35)].cs?
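The plan paveljanik and sipa settle on — making the guard's name unique per line so LOCK inside LOCK stops shadowing — is the standard two-level token-pasting trick. A sketch using std::mutex rather than the real CCriticalBlock:

```cpp
#include <mutex>

// Two expansion levels so __LINE__ is expanded *before* pasting;
// pasting directly would name every guard "criticalblock__LINE__".
#define LOCK_CONCAT_INNER(a, b) a##b
#define LOCK_CONCAT(a, b) LOCK_CONCAT_INNER(a, b)

// Sketch of a shadow-free LOCK: the guard variable embeds the line
// number, so nested LOCKs declare distinct names.
#define LOCK(mtx) std::unique_lock<std::mutex> LOCK_CONCAT(criticalblock, __LINE__)(mtx)

// LOCK inside LOCK on different mutexes, without -Wshadow warnings.
int NestedLockDemo() {
    std::mutex a, b;
    int n = 0;
    {
        LOCK(a);
        {
            LOCK(b);  // distinct variable name: no shadowing
            ++n;
        }
        ++n;
    }
    return n;
}
```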