< TD-Linux>
gmaxwell, well 0.246 btc loss is directly proportional to their block size
< gmaxwell>
Indeed.
< Lightsword>
gmaxwell, do you have stats on kano’s ckpool and ck’s solopool?
< TD-Linux>
also I don't think there's really enough samples there to draw a conclusion. would be neat to automate this though.
< gmaxwell>
they didn't find a block in the union of my and btcdrak's observation windows. I have mempool data for 435976 to now.
< sipa>
gmaxwell: over how many blocks is this data?
< gmaxwell>
67
< gmaxwell>
here is the max from that data, BitClub -0.00042054 HaoBTC 0.03818631 BitFury 0.05457683 BTC.com 0.07372295 SlushPool 0.09595818 BTCC 0.09886828 ViaBTC 0.11170776 F2Pool 0.12080682 Bitcoin.com 0.12846755 AntPool 0.14341536 Unknown 0.16687057 BW.COM 0.18705609 GBMiners 0.24633568 Telco 0.709605 Eligius 1.03414
< gmaxwell>
I can also estimate mining process latency from this. I'm saving the fees for my gbt every 10 seconds.
< gmaxwell>
e.g. "you mined fees consistent with forming your block 30 seconds ago"
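The matching idea gmaxwell describes can be sketched roughly like this (a hypothetical illustration, not Bitcoin Core code: snapshot your own template's fee total every ~10 seconds, then find the snapshot whose fees best match the actual block's fees):

```python
# Hypothetical sketch: estimate how stale a miner's block template was by
# matching the block's total fees against timestamped fee snapshots taken
# from our own getblocktemplate every ~10 seconds. All names illustrative.

def estimate_template_age(block_time, block_fees, snapshots):
    """snapshots: list of (unix_time, template_fee_total) tuples.
    Returns the estimated seconds between when the miner formed its
    template and when the block appeared."""
    candidates = [s for s in snapshots if s[0] <= block_time]
    t, _ = min(candidates, key=lambda s: abs(s[1] - block_fees))
    return block_time - t
```

For example, with snapshots [(0, 0.50), (10, 0.55), (20, 0.61)] and a block at t=30 carrying 0.55 in fees, this reports ~20 seconds of template staleness, i.e. "you mined fees consistent with forming your block 20 seconds ago".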
< jeremyrubin>
gmaxwell: can you normalize by block size?
< midnightmagic>
I'm going to regen the entire build instead of modifying the .assert in place to be able to say I ran it plus gverify against the other two sigs in there, michagogo et al
< midnightmagic>
sorry for the mixup
< gmaxwell>
jeremyrubin: okay, I added two columns, one is my mempool fees scaled to the actual block size, the next is the difference.
< gmaxwell>
which now shows the small blocks as slightly negative, which makes sense, since they took the highest fee txn.
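The normalization jeremyrubin asked for might look something like this (illustrative names; a crude linear scaling, so it understates what an optimal small block could earn by taking the highest-fee transactions first, which matches the slightly negative figures gmaxwell mentions):

```python
def scaled_fee_delta(mempool_fees, mempool_size, block_fees, block_size):
    """Scale our mempool template's total fees down to the actual block's
    size and return (scaled_fees, scaled_fees - block_fees). A positive
    delta means the pool collected less than our scaled mempool estimate."""
    scaled = mempool_fees * (block_size / mempool_size)
    return scaled, scaled - block_fees
```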
< luke-jr>
gmaxwell: that looks like the fallback where Eloipool has to guess the template itself until GBT completes
< luke-jr>
it's supposed to be based on the previous valid template, not sure what's going wrong there
< gmaxwell>
luke-jr: fix.
< luke-jr>
gmaxwell: looks like it was in wizkid057's GBT proxy thing.. [03:34:24] <wizkid057> oh, I never commited that to the production server
< luke-jr>
>_<
< whphhg>
Sup blockstream
< GitHub7>
[bitcoin] laanwj closed pull request #9022: Update release notes to mention dropping OS X 10.7 support (0.13...0-13-1-osx-notes) https://github.com/bitcoin/bitcoin/pull/9022
< jonasschnelli>
wumpus: heh. Yes. Someone should turn this into unit-tests.
< jonasschnelli>
Maybe open an easy-to-implement issue?
< jonasschnelli>
though not sure how easy it is.
< wumpus>
it seems pretty straightforward to run the tests, if the files + results are available. Fixing the discovered issues is probably far from easy-to-implement :)
< jonasschnelli>
Indeed...
< wumpus>
but even without that it'd be interesting to see how it compares
< wumpus>
hopefully there's nothing in the "parser crashed" category, we've done quite a lot of fuzzing
< jonasschnelli>
I'm glad all JSON operations are hidden behind the HTTP Auth...
< jonasschnelli>
With rest it gets a bit more risky...
< wumpus>
I've purposely kept JSON parsing out of REST
< wumpus>
just simple query strings
< jonasschnelli>
Ah. Right. Only output.
< wumpus>
output is far from as much of a risk as parsing
< wumpus>
still possible for there to be bugs there, but much less scope for trickery
< btcdrak>
btw this is the issue I found with Univalue https://github.com/jgarzik/univalue/pull/29 - wasted quite a few hours trying to work out why some tests were failing because of this.
< btcdrak>
oh, I see wumpus found the PR already :-)
< wumpus>
btcdrak: if tests are failing due to a trailing space you're doing comparison in the wrong domain
< wumpus>
I agree with your pull request but not that it should cause (non-JSON-pedanticness) tests to fail :)
< wumpus>
but I'd say, to compare two json documents: parse them and compare the underlying data. Don't compare pretty-printed representations
< btcdrak>
wumpus: well we have tests that compare the json output of "./bitcoin-tx -json ..." with a json file. trailing white space can get trimmed by IDE/editor settings. Trailing white space has no place in a json file. If it wasn't for that nice "log errors as diff" patch to bitcoin-unit-test.py submitted yesterday I would have lost my mind.
< wumpus>
I understand, but there is no standard way to pretty-print JSON
< wumpus>
having the tests depend on how the JSON lib happens to do pretty printing is fragile
< wumpus>
ideally the tests should compare the data, not the text
< btcdrak>
yes, I agree.
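wumpus's suggestion, comparing the parsed data rather than the pretty-printed text, is a one-liner in most languages; a minimal Python sketch:

```python
import json

def json_equal(a: str, b: str) -> bool:
    """Compare two JSON documents by parsing both and comparing the
    resulting data, so formatting differences (indentation, object key
    order, trailing whitespace) can never cause a spurious mismatch."""
    return json.loads(a) == json.loads(b)
```

With this approach, `'{"a": 1, "b": [2, 3]}'` and `'{ "b":[2,3],  "a": 1 }  \n'` compare equal even though the strings differ byte-for-byte.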
< wumpus>
I think we have some similar problems in other places, which complicated switching JSON libraries last time
< wumpus>
not a huge priority to change of course
< btcdrak>
but while indentation may not have a standard, I think trailing whitespace has no place in any output.
< luke-jr>
but what if you want to embed a Whitespace program? :p
< wumpus>
as I said I agree with your PR, I don't think emitting trailing whitespace is desirable, but if it causes test failures that points at a deeper issue
< btcdrak>
yup
< wumpus>
next time the problem may be the other way around, someone accidentally adds trailing whitespace to the example and the test fails
< wumpus>
and spends hours debugging that problem instead of something that matters :)
< luke-jr>
wumpus: do you have any expectation of further merges before tagging?
< wumpus>
luke-jr: if there are further improvements to the release notes
< wumpus>
otherwise, no
< jonasschnelli>
Do we follow a code convention for instance variables? I guess we don't want this->var? Also, the prefixes (mapX, fX, etc.) are also used for non-instance vars. What about using _var for instance variables? Acceptable?
< luke-jr>
I'd rather 'var' than '_var' for public stuff at least
< luke-jr>
I don't see a problem with o->var or o->fVar
< jonasschnelli>
Luke-Jr: my problem is that the code is really not easily readable if you don't highlight instance variables in some form.
< jonasschnelli>
this-> clutters the code too much IMO, ... using _var seems acceptable to me.
< jonasschnelli>
Using fVar, etc. will not increase readability because we are also using this for non-instance vars (function parameters, etc.)
< luke-jr>
we're already using _var for local variables to avoid shadowing :/
< jonasschnelli>
argh... I thought we were using _var for instance vars to avoid shadowing... do we also use _var in local scope?!
< luke-jr>
I didn't look at all cases explicitly, but when I encountered merge conflicts due to the shadowing changes, _var was always the local scope
< wumpus>
no, we have no naming convention for instance variables, just use whatever makes sense in the context
< jonasschnelli>
I personally like this-> but I know most people don't like that
< jonasschnelli>
I'll try _
< wumpus>
at least the qt coding convention recommends against using m_ or _ or such
< jonasschnelli>
The m prefix would not allow us to use the fVar, etc. prefix.
< jonasschnelli>
mfBool would look strange. :)
< jonasschnelli>
i'd prefer _fBool
< wumpus>
m_fBool that would be, then
< jonasschnelli>
m_ yes... why not
< sipa>
wumpus: what does the qt coding convention suggest?
< wumpus>
sipa: no specific one, just use this->name where necessary
< wumpus>
in many cases there's no need to name instance variables any differently from local variables
< jonasschnelli>
wumpus: readability?
< sipa>
luke-jr: where do we use _var for local variables?
< wumpus>
jonasschnelli: I think usually it should be clear from the context what is a member variable and what is not, there's not much of a need to flag them
< sipa>
luke-jr: underscores are used in several places for formal parameters to avoid colliding with field variables
< wumpus>
but I don't know, I hate these kind of discussions
< jonasschnelli>
Reading through new code I often found myself checking whether a variable is local or instance-wide
< sipa>
haha
< jonasschnelli>
heh
< sipa>
jonasschnelli: if the function body is not too long, it's usually pretty easy to see if there is a local variable with that name
< jonasschnelli>
sipa: yes. If. now open main.cpp. :)
< wumpus>
jonasschnelli: that probably means the code itself is badly commented / structured
< wumpus>
a shallow 'fix' like renaming instance variables won't help much in that case except check a checkbox
< sipa>
jonasschnelli: so help refactoring those functions to be more readable :)
< wumpus>
the superlative of adding metadata into variable names is something crazy like Hungarian notation, and I don't think that makes code anything easier to read
< wumpus>
it's the typical pointy-haired boss solution to
< wumpus>
"code is unreadable"
< wumpus>
FORCE a coding style!
< wumpus>
now you have nicely formatted ununderstandable code :)
< sipa>
i realize that i know what pointy-haired boss means in the context of dilbert, but not in real life. Do bosses have pointy hair stereotypically?
< gmaxwell>
if style differences are making code much less readable for you, sounds like an opportunity to refine your reading skills. :) -- there are obviously extreme examples, codebases that mangle everything with macros and other insanity. :P But really, a casual approach is best.
< wumpus>
sipa: I don't think so, it's just the dilbert stereotype, it doesn't have anything to do with hair :-)
< dcousens>
gmaxwell: certainly syntax can get in the way, but, majority of the time, readability is more about a reduction in complexity than consistently spacing things.
< dcousens>
improving readability*
< wumpus>
sipa: I think the gist is doing something for the sake of it being easy to enforce/check, because the boss feels more in control that way and it superficially looks like progress
< gmaxwell>
I wondered if perhaps PHB predated dilbert and dilbert was riffing off it, ... but I'd forgotten how old dilbert is .. (1989-04)
< wumpus>
now we've done it, we're slacking off and discussing dilbert, we should come up with a business metric for IRC messages and employees should be rated on the number of on-topic IRC messages </s>
< wumpus>
time to tag 0.13.1 final?
< sipa>
i'm about to fall asleep
< wumpus>
I'll wait until you're asleep then
< dcousens>
ha
< * sipa>
goes into ACPI standby
< wumpus>
NN
< gmaxwell>
wumpus: all we need to do is train some machine learning to read IRC and correlate that with commits, assigning score to IRC messages that come shortly before commits.
< gmaxwell>
After we make the high scoreholder, Github151, in charge of the project I'm sure things will run much better.
< gmaxwell>
FWIW, my testing with RC3 all looks fine.
< wumpus>
hahahaa yes Github151 for president
< * luke-jr>
ponders writing-in Github151 on his ballot
< gmaxwell>
many states require a write-in candidate to register with them before being eligible to be counted. :(
< luke-jr>
I was joking anyway :p
< gmaxwell>
I think this is intended to help avoid "Which John Smith did we just elect?"
< luke-jr>
heh
< luke-jr>
of course, that wouldn't explain why real candidates are not allowed to register for write-in in some States (IIRC mainly NY and CA), but we're getting a bit too far off-topic I think
< wumpus>
maybe they should use a blockchain for registering candidates *ducks*
< luke-jr>
sadly, some people think that makes sense
< gmaxwell>
wumpus: so, final?
< wumpus>
yes, let's do it
< wumpus>
sipa's asleep
< wumpus>
* [new tag] v0.13.1 -> v0.13.1
< gmaxwell>
\O/
< luke-jr>
oh wow, rc3 just deleted my entire home directory …………….. jk :P
< gmaxwell>
cool "0.13.1 addresses user's concerns with excessive disk space consumption."
< wumpus>
hehe, always the positive side
< luke-jr>
lol
< jonasschnelli>
heh
< warren>
that sounds like one particular user had concerns
< wumpus>
huh, that looks like a bug in assertlockheld
< jonasschnelli>
maybe a different wallet instance...
< wumpus>
ah yes ofcourse
< wumpus>
maybe the lock naming should include instance pointer
< jonasschnelli>
Yes. My fault... different instances
< * jonasschnelli>
curses pwalletMain
< luke-jr>
hm, I didn't encounter such issues with multiwallet? O.o
< wumpus>
did you run with lock debugging on?
< wumpus>
(--enable-debug will do)
< luke-jr>
no
< luke-jr>
does the assertlockheld only work with that?
< wumpus>
yes
< wumpus>
it uses the same data structures as the lock order checks, there's a fair amount of overhead in tracking locks at run-time so it is not enabled in release builds
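A toy analogue of why AssertLockHeld only works with --enable-debug: the assertion can only consult holder metadata if the build actually records it. This is a Python sketch, not Core's C++ implementation; it also tags each lock name with the instance id, echoing wumpus's point that the lock naming should include the instance pointer:

```python
import threading

DEBUG_LOCKORDER = True  # stands in for building with --enable-debug

class TrackedLock:
    def __init__(self, name):
        # Include the instance id in the name so locks belonging to two
        # different wallet instances are distinguishable in diagnostics.
        self.name = f"{name}@{id(self):#x}"
        self._lock = threading.Lock()
        self._holder = None

    def acquire(self):
        self._lock.acquire()
        if DEBUG_LOCKORDER:
            self._holder = threading.get_ident()

    def release(self):
        if DEBUG_LOCKORDER:
            self._holder = None
        self._lock.release()

    def assert_held(self):
        # Without run-time tracking there is nothing to check, which is
        # why the assertion is a no-op in release builds.
        if DEBUG_LOCKORDER:
            assert self._holder == threading.get_ident(), self.name
```

The per-call bookkeeping on acquire/release is the "fair amount of overhead" that keeps this out of release builds.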
< michagogo>
🎉🎊
< * michagogo>
sends a message and requests that his computer be turned on
< GitHub174>
[bitcoin] rebroad opened pull request #9030: Don't process blocktxns when we have the block already. (master...BlocktxnExits) https://github.com/bitcoin/bitcoin/pull/9030
< GitHub66>
bitcoin/master ba26d41 Michael Ford: Update build notes for dropping osx 10.7 support...
< GitHub66>
bitcoin/master 83234d4 Wladimir J. van der Laan: Merge #9033: Update build notes for dropping osx 10.7 support (fanquake)...
<@wumpus>
compared to what?
< GitHub134>
[bitcoin] laanwj closed pull request #9033: Update build notes for dropping osx 10.7 support (fanquake) (master...Mf1610-docFanquake) https://github.com/bitcoin/bitcoin/pull/9033
< timothy>
0.13.0
<@wumpus>
yes there was at least a patch to boost
< timothy>
so can't I use vanilla boost?
<@wumpus>
sure, you always can
<@wumpus>
I thought you were talking about the gitian build; if you build using your OS's libraries there's no need to do anything special
< whphhg>
Hej, is there a Bitcoin Unlimited channel on freenode?
< timothy>
lol
< rabidus_>
lol
< timothy>
it's like entering an FBI channel and asking for drugs
< PatBoy>
hahahah
<@wumpus>
rofl
< whphhg>
Lol, I wasn't aware it was that bad. :o
< BlueMatt>
wumpus: so do i wait to update the ppa or just do it today?
<@wumpus>
BlueMatt: let me see, how many gitian sigs do we have now
<@wumpus>
three matching ones
< BlueMatt>
i said today, not now....still eating breakfast :p
<@wumpus>
but no code-signed ones yet. I guess it's somewhat strange to have the ppa built before the binaries are available
< btcdrak>
Better not do it until the release actual announcement when we have everything done.
< BlueMatt>
btcdrak: meh, i often do it early...otherwise i forget
< Lauda>
BlueMatt please ppa as soon as possible 0.13.0 took forever. :)
< btcdrak>
Lauda: good point :-p
< michagogo>
BlueMatt: is it all ready in terms of packaging, i.e. just a matter of pushing the button?
< michagogo>
(Also, how long on average does it take from the time you push the build up until the server farm actually builds and publishes it?)
< michagogo>
If it's done with a command, you could avoid forgetting by setting a cronjob (or just a screen/tmux with a `sleep &&`) to do it in 24 hours
< michagogo>
Or 48 or something
< michagogo>
(Also, it's unfortunate that only cfields_ can produce the detached sigs…)
< btcdrak>
wumpus: I uploaded my gitian sigs
< BlueMatt>
michagogo: naa, need to do a few things first, then its like within 20-30 minutes after upload that they're all built and available
< michagogo>
wumpus: re: #9028 (and in general), have you considered tagging some issues for Hacktoberfest?
< cfields_>
btcdrak: you can add ckpool to the mining list. and the cgminer PR hasn't been merged yet.
< btcdrak>
ok
< btcdrak>
seems like the binaries will be ready today?
< andytoshi>
kanzure: no, i have a rust json parsing library for bitcoin purposes, a low-priority TODO is for me to aggressively compare its behaviour to that of univalue
< cfields_>
btcdrak: technically just need 1 more match i think, which i'm sure will show up any minute
< michagogo>
cfields_: that match is probably going to be wumpus
< michagogo>
Who is the one that does the release anyway
< cfields_>
btcdrak: thanks
< sipa>
what does mf mean?
< sipa>
"0.13.1 signed mf"
< MarcoFalke>
my initials
< MarcoFalke>
:P
< sipa>
oh, of course
< cfields_>
I read it as Samuel L. Jackson.
< * sipa>
stupid
< sipa>
...?
< cfields_>
as in: I've had it with these MarcoFalke snakes, on this MarcoFalke plane!
< sipa>
i see.
< MarcoFalke>
Heh, I should change it to m4r(0f41k3 as there will be 1337 commits in the repo after it is merged.
<@wumpus>
hahaha
< achow101_>
are we so lucky that the time from tag to release will be less than 12 hours this time?
< btcdrak>
achow101_: looks like everything has been done barring release announcement and upload to bitcoin.org
< achow101_>
:D
< btcdrak>
meeting time? or is everyone down at the pub having a well deserved pint?
< achow101_>
I think you're an hour early
< btcdrak>
wait, did the clocks change?
< achow101_>
idk, depends on your country
< btcdrak>
automatic clock update so I would never know >_>
< btcdrak>
this explains a lot...
< achow101_>
dst ends for me next week
< sipa>
it's one hour from now
< sipa>
btcdrak: set it in your calendar as 7pm iceland time
< morcos>
jonasschnelli: i think apple gave us an idea. you should move the fee slider to the touch bar.
< btcdrak>
sipa: let's all just move to Iceland.
< sipa>
morcos: 'touch bar' ?
< morcos>
what they replaced function keys with on the new macbook pros
< jonasschnelli>
sipa: new MacBook Pro physical UX element
< jonasschnelli>
A screen replaces the F function keys
< sipa>
i don't understand
< BlueMatt>
wtf is a "physical UX element"
< jonasschnelli>
morcos: I need to watch the presentation
< BlueMatt>
sipa: they replace the top line of your keyboard with an ipad
< jonasschnelli>
finger print has no plausible deniability
< gmaxwell>
the lenovo x1s have a touchscreen at the top of the keyboard instead of fkeys, it's awful.
< BlueMatt>
jonasschnelli: and your machine is..uhhh...covered in your fingerprints
< btcdrak>
Did anyone see that presentation where someone lifted a fingerprint off a photo of someone and reproduced the print on a 3D printer... and managed to open their phone with it? I think it was a German politician's phone.
< sipa>
BlueMatt: i'm sure my keyboard is already covered with fingerprints :)
< jonasschnelli>
Right... adhesive tape is sufficient to unlock
< btcdrak>
seems like security theatre
< jonasschnelli>
Probably state sponsored move.. :)
< jeremyrubin>
gmaxwell: it's a different thing than that kind
< jonasschnelli>
Anyone can now force you to unlock your HDD/SSD
< achow101_>
btcdrak: mythbusters did an episode about fingerprint spoofing
< sipa>
fingerprint unlocking is so annoyingly convenient :(
< jonasschnelli>
heh
< jonasschnelli>
What I want is fingerprint & passphrase
< btcdrak>
I want to keep my fingers
< NicolasDorier>
while playing with my node in C#, I tried a way to speed up IBD by 50%: basically I prefetch the UTXOs and txids (for BIP30) of block N+1 while validating block N. Still a bit early to call victory, but might be an avenue to explore for core
< sipa>
NicolasDorier: interesting idea, though i'm not sure it's so useful - i expect we already have the majority of utxo entries cached
< sipa>
but i guess it could speed up looking for the ones that aren't
< NicolasDorier>
sipa: the thing that slows it down is BIP30
< NicolasDorier>
because we are checking for a negative
< NicolasDorier>
so it is not in the cache
< sipa>
we don't do that anymore, afaik
< sipa>
only before bip34 activation
< NicolasDorier>
oh checking that
< NicolasDorier>
ah yeah you are right. It's strange, I don't know why I get more speed on validation.... well I think I'll get a better idea once my node reaches blocks above 400 000
< NicolasDorier>
the commit to disk happens in the background on core, right?
< NicolasDorier>
except TxUndo if I remember
< NicolasDorier>
mmh... well, I'll wait until I reach later blocks, maybe it's not the case
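NicolasDorier's prefetch idea can be sketched with a simple pipeline (hypothetical helpers, not Core or NBitcoin APIs): while block N validates on the main thread, a worker fetches the inputs for block N+1.

```python
from concurrent.futures import ThreadPoolExecutor

def connect_blocks(blocks, fetch_inputs, validate):
    """Overlap I/O and CPU: while `validate` runs on block N, a worker
    thread prefetches the UTXO inputs for block N+1, so input lookups
    for the next block hide behind the current block's validation."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        pending = pool.submit(fetch_inputs, blocks[0])
        for i, block in enumerate(blocks):
            inputs = pending.result()  # usually ready by now
            if i + 1 < len(blocks):
                pending = pool.submit(fetch_inputs, blocks[i + 1])
            validate(block, inputs)
```

The win depends on how often inputs miss the cache; as sipa notes, with a warm UTXO cache most lookups are already cheap.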
< jtimon>
meeting...
<@wumpus>
congrats on 0.13.1 everyone!
< * btcdrak>
rings the gong
<@wumpus>
#startmeeting
< lightningbot>
Meeting started Thu Oct 27 19:01:23 2016 UTC. The chair is wumpus. Information about MeetBot at http://wiki.debian.org/MeetBot.
<@wumpus>
or magnet:?xt=urn:btih:dbe48c446b1113890644bbef03e361269f69c49a&dn=bitcoin-core-0.13.1&tr=udp%3A%2F%2Ftracker.openbittorrent.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.publicbt.com%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.ccc.de%3A80%2Fannounce&tr=udp%3A%2F%2Ftracker.coppersurfer.tk%3A6969&tr=udp%3A%2F%2Ftracker.leechers-paradise.org%3A6969&ws=https%3A%2F%2Fbitcoin.org%2Fbin%2F
< gmaxwell>
Just noticed wumpus hadn't done it. :)
< sipa>
maybe we can discuss signed blocks a bit
< gmaxwell>
So there are a number of things we want to do in a 0.13.2; so those should get in soon.
< morcos>
i'm interested in discussing that, because i want to understand whether this is meant to replace the existing testnet or just be another option
< morcos>
(signed blocks)
< gmaxwell>
(I guess some are in and just need to backport to 0.13 branch.
<@wumpus>
no, it's not meant to replace the current testnet
< kanzure>
re: testnet i also saw the suggestion of loading testnet params from json file
< jtimon>
fine with me, I still extremely dislike having to use a global, but don't see a way around it if we want to use the union
< gmaxwell>
morcos: my expectation was that it would just be another option. Obviously it would be useless for testing much of anything mining related.
< jtimon>
what I have implemented is from .conf file, not .json file
<@wumpus>
indeed there should at least be a PoW testnet
< morcos>
ok, i think it's still important that we have a well-used testnet that uses PoW as similarly to mainnet as possible.. i worry that there is kind of only going to be one "testnet" that people use for most purposes though
< morcos>
perhaps it would be possible for transactions to easily end up on both?
< kanzure>
jtimon: didn't mean to recommend a specific file format; i was just pulling a thing from memory.
< morcos>
but maybe that's asking for trouble
<@wumpus>
yes the file format is completely not important
< jtimon>
I'm still trying to test the blocksigning stuff, but the "custom chain" code that precedes it is pretty much ready I think (feel free to test it and give suggestions), see https://github.com/bitcoin/bitcoin/pull/8994
< sipa>
morcos: i think the issue is that 'testnet' can mean "a place where we test new network features, and subject it to huge reorgs, and other edge cases" or "a place where we expect companies to build a parallel infrastructure"
< cfields_>
adding to that, see the faux-mining mode added in the #9000 PR. That was crucial for me for real-world mining testing of segwit.
< sipa>
and those aren't reconcilable, i think
< jtimon>
that alone should be helpful for rapidly creating a new segwitnet (for the next thing) or whatever
<@wumpus>
one testnet is simply not enough for all testing scenarios
< gmaxwell>
morcos: alas, I don't think that's really possible. Right now the consensus instability of testnet causes some people to just not test on it.
<@wumpus>
btcdrak: awesome
< kanzure>
re: company testing, i have been (lightly) planning to use regtest for those purposes. e.g. company-to-company regtest instances. testnet is still important for testing of course.
<@wumpus>
kanzure: right - within a trusted group using a regtest is just as useful as signed blocks
< kanzure>
oh is that what the proposal is-- i'll have to go look. sorry.
<@wumpus>
it's only when exposing publicly that signing is necessary so people can't grief by generating e.g. tons of blocks
< gmaxwell>
morcos: the issue is that while not ideal, on mainnet a reasonable way of handling very large reorgs is to shut your site down and wait for the operator to manually do something about it. If you try that strategy on testnet, your service will just be down all the time.
< kanzure>
so for the company-to-company testing scenario, my assumption was you simply limit the number of participants to one other group, and then you know who is causing problems (either you or the other guys). still, i can see some advantages to public regtesting. sure.
< JackH>
when will ubuntu ppa's be updated?
< BlueMatt>
JackH: when i get to it (today)
< JackH>
ah sweet, you are fast this time then
< sipa>
btcdrak: nice, the timeline is cool
< luke-jr>
BlueMatt: btw, is it possible/easy to do a PPA with Knots as well? (is it something I can do reasonably myself perhaps?)
<@wumpus>
I think everyone can sign up to make PPAs
< * btcdrak>
is reading scrollback
< BlueMatt>
luke-jr: its not bad
< kanzure>
without signedblocks, if you had three companies trying to test an integration, you would need multiple different regtest links and to relay blocks from one network to the other with a different signature. i could see how that would be annoying to write. yeah..
< luke-jr>
wumpus: yes, it's just not very clear how one would actually make them, especially someone who doesn't use Ubuntu :p
< Frederic94500>
#bitcoin If segwit doesn't activate, he will be activate to the next 2016 blocks?
< sipa>
parse error
< jtimon>
one thing about #8994 related to wumpus' point about regtest among trusted peers... one can select -chain=custom -chainpetname=mysharedsecret and people without access to mysharedsecret won't be able to create the genesis block locally
< BlueMatt>
Frederic94500: we're in the middle of a meeting, please go to #bitcoin
< jtimon>
since the hash of the genesis block depends on -chainpetname
<@wumpus>
luke-jr: in a way it's similar to travis, you have to configure the environment and the building happens on their build servers
<@wumpus>
luke-jr: no need to run ubuntu yourself
< jonasschnelli>
Luke-Jr: there are also meta-generator that auto-generated deb/PPA and fedora, etc. out of one script/conf
< BlueMatt>
wumpus: only in theory....the upload tool stuff is really a bitch to get installed on non-debian systems
< luke-jr>
:x
<@wumpus>
BlueMatt: haha that's sad, I didn't know
< petertodd>
jtimon: I like the idea of a shared secret vs. pubkey based testnet, as it makes it clear that it's only for testing, not doing some kind of "bankchain" sillyness
< jtimon>
well, signed blocks have other advantages for testing, but it's definitely more disruptive
<@wumpus>
bitcoin.org change is merged
< petertodd>
jtimon: also, an HMAC-based thing may be easier to implement - can be done by just changing the most-work chain test to require XOR key == 0; doesn't require any data structures to change
< jtimon>
you can just share a chainparams.conf file, and if someone decides to load your testnet with too much work, s/mychainname/mychainname2/ and you start again I guess
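A rough illustration of jtimon's point (this is not Core's actual genesis construction; the byte layout and constants are made up): if the chain petname feeds into the bytes the genesis hash commits to, peers without the shared name derive a different genesis and can't join or grief the chain.

```python
import hashlib
import struct

def toy_genesis_hash(petname, n_time=1296688602, n_nonce=2):
    # Mix the petname into the data the genesis hash commits to, so a
    # different -chainpetname yields an entirely different chain.
    tag = hashlib.sha256(petname.encode()).digest()
    data = tag + struct.pack("<II", n_time, n_nonce)
    # Bitcoin-style double SHA-256 of the (toy) header bytes.
    return hashlib.sha256(hashlib.sha256(data).digest()).hexdigest()
```

Renaming the chain, as in the s/mychainname/mychainname2/ trick, then amounts to restarting from a fresh genesis.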
<@wumpus>
right, changing the block header structure is what makes it scary
< sipa>
petertodd: it's surprisingly little work, but it's hard to do in a way that is 1) clean 2) runtime selectable 3) reviewable
<@wumpus>
the implementation work is not so bad, review, sure
< sipa>
petertodd: pick 2
< petertodd>
fwiw, I use this same kind of hmac auth trick in OpenTimestamps so calendar servers can use clients as a last-ditch backup, without having the servers actually sign anything in a non-repudiable way
< jtimon>
we could make other chainparams count for the genesis block hash
<@wumpus>
I mean introducing some union into CBlockHeader would mean there'd be a risk of regression even in the non-testing case
< petertodd>
wumpus: ah, yes, good point
< jtimon>
petertodd: well, I find it more scary than painful too, at least the way I'm doing it with the union (there's also a less scary way that uses more memory in mainnet and another one that is simply way way way too disruptive)
< petertodd>
wumpus: I'm wrong - that is scary
< btcdrak>
sipa: you have to thank harding! he wrote it all.
< kanzure>
what is remaining re: final alert things?
< kanzure>
was the page on one of the .org sites merged
< jtimon>
topic suggestion: are we removing the use of checkpoints for progress estimation?
< gmaxwell>
kanzure: we're not on that topic now.
< gmaxwell>
topic suggestion: My work removing checkpoints _completely_.
<@wumpus>
#topic removing checkpoints
< gmaxwell>
I have a branch that is removing checkpoints. Haven't totally taken them out yet because I need to replace progress estimation.
< gmaxwell>
It's not hard to do that, just takes a little twiddling.
<@wumpus>
that's good news - progress estimation is probably the least interesting use of them
< gmaxwell>
There are three main components: Removal of checkpoints for the IBD test. This is a no-brainer. Removal of checkpoints for script checking-- this depends on benchmark results, as we discussed perhaps 4 meetings ago. and the third:
<@wumpus>
did you run into something difficult / uncertain?
< gmaxwell>
The last use is avoiding header flooding. I came up with a tidy way to do this, I think, but it requires an implicit consensus change but I think it is very trivial and obviously fine. But likely to delay things.
<@wumpus>
what about the DoS protection?
<@wumpus>
consensus change, as in a softfork?
< morcos>
do tell
< gmaxwell>
not a softfork. I'm telling.
< gmaxwell>
My changes introduce a constant in chain params which is the known amount of work in the best chain at ~release time. The IBD check uses this, we've talked about using that before for some checkpoint like things.
< gmaxwell>
So I propose that once we have any header chain that has at least that much work in it, we do not accept any more blocks with difficulty under 16 million-- which is roughly equal to about 10 commercially available mining devices.
< petertodd>
note that from the point of view of consensus this is technically speaking no different than making bitcoin core come with a set of blockchain data
< jtimon>
isn't the minimum difficulty check a softfork?
< gmaxwell>
This is a consensus change because the chain could never fall below difficulty 16 million in the future, but an unobservable one... as observing it would require the difficulty to fall below 16 million. :)
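The proposed rule reduces to a few lines (a sketch only; the 16 million floor and the known-work constant are the values discussed here, and all names are illustrative):

```python
MIN_DIFFICULTY_FLOOR = 16_000_000  # roughly ten commercially available miners

def accept_header_difficulty(header_difficulty, best_chain_work,
                             known_chain_work):
    """Once our best header chain has at least the known amount of work
    shipped in chainparams, refuse low-difficulty headers outright.
    The floor is unobservable on the real chain unless difficulty ever
    falls below it."""
    if best_chain_work >= known_chain_work:
        return header_difficulty >= MIN_DIFFICULTY_FLOOR
    return True  # still syncing toward the known-work point: no floor yet
```

Before the known-work point is reached the floor is inert, so initial sync behaves exactly as today.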
<@wumpus>
petertodd: well it wouldn't lock in specific blocks as the checkpoints do
< petertodd>
er, gregs #2 thing makes my statement invalid :)
< jtimon>
gmaxwell: yeah, it's a softfork in the pedantic sense
< petertodd>
wumpus: right, I mean, w/o the minimum diff thing, the effect would be no different than ensuring bitcoin core shipped with blockchain data
< jeremyrubin>
I don't think that's great...
< gmaxwell>
jtimon: in a sense, but an unobservable one. Yes.
< jeremyrubin>
Can't difficulty fall that low under a soft fork to a different PoW?
< jeremyrubin>
(not that that should happen)
< petertodd>
jeremyrubin: yes, and at that point your idea of what bitcoin is is so insecure as to be useless
< gmaxwell>
jeremyrubin: then you take out the rule.
< jtimon>
like really imposing the 21 M limit, that was a softfork too, but no need to use bip9 to deploy I guess
< petertodd>
jtimon: +1
< Chris_Stewart_5>
wouldn't that be a hard fork to remove it if it was enforced?
< gmaxwell>
the 16 million number was just a result of a tidy amount with bitmasking that also is really infeasible to attack but also trivial to mine... 10 devices.
< petertodd>
Chris_Stewart_5: yes, removing it is a hard fork, but remember we're talking about a situation where bitcoin as you know it is useless, so that's irrelevant IMO
< gmaxwell>
If someone worried that 16 million were too high, there is a pretty broad range the number could reasonably be set in.
< petertodd>
gmaxwell: honestly, I'd be inclined to go even higher - 10 machines is absolutely nothing
< gmaxwell>
Anything over 100k would pretty much halt any real risk headerflooding, with current hardware. 16M adds a good amount of headroom.
< Chris_Stewart_5>
but in jeremyrubin's example, if we are soft forking to a different PoW, that doesn't necessarily hold true, does it? Perhaps I don't understand the circumstances of forking to another PoW though..
< jeremyrubin>
petertodd: I disagree, but that's more of a wizards topic
< jtimon>
gmaxwell: are you sure you want to change CheckBlockHeader instead of CheckProofOfWork ?
< morcos>
gmaxwell: i'm not so sure about that.. isn't the ability to soft fork to a different PoW something we might want to preserve?
< petertodd>
Chris_Stewart_5: a "soft-fork" to a different PoW isn't really a soft-fork, because the old clients are now horribly insecure
< jeremyrubin>
petertodd: e.g., something like tadge's proof of idle
< gmaxwell>
Chris_Stewart_5: softforking to a new pow is not really a softfork. In any case: keeping it at least that high would require only 10 devices, and ... any old nodes in that world could have their chain redone by those same 10 devices.
< petertodd>
morcos: there is no such thing as a soft-fork to a different proof-of-work - doing that doesn't have the security characteristics of a soft-fork
< gmaxwell>
morcos: it is preserved.
< gmaxwell>
to the extent that it exists.
< morcos>
given how hard hard forks are.. imagine there was a contentious HF that took majority hash power.. might the minority not want to be able to softfork away without having to agree on a HF
< jtimon>
Chris_Stewart_5: yeah if you want a different pow just hardfork
< gmaxwell>
Imagine the diff floor is 1. okay, then the diff goes down to 1. okay.. now I start up a 2011 asic miner and immediately break all those un-upgraded nodes.
< morcos>
ok, i need to think about it more.. but i think we should analyze all those scenarios
< gmaxwell>
morcos: but thats also why my figure is ~10 devices and not 10,000 devices. :)
< gmaxwell>
In any case. I think it's fairly easy to understand. And I think the solution basically has all the properties that we want.
< petertodd>
morcos: again, this is a scenario where bitcoin as you know it is horribly insecure - anyone with >10 machines could attack your min-diff chain. I had a high enough credit limit as a student to buy more machines than that. :)
< gmaxwell>
But I expected thought and discussion on it.
< BlueMatt>
gmaxwell: ideally we would like to add the property that someone cant flood you during IBD, but to be fair we also suffer from DoS issues there now
< petertodd>
gmaxwell: if hardware improves, do we up the min diff again? IMO that'd be reasonable
< morcos>
petertodd: not if you've softforked in other PoW requirements that the attackers don't have the hashing or whatever to produce
< gmaxwell>
BlueMatt: So hold up there.
< gmaxwell>
BlueMatt: I think what I propose has _exactly_ as good protection for that as we currently have, if not somewhat better.
< Chris_Stewart_5>
And this solves header flooding because it requires the attacker to provide headers with AT LEAST that much difficulty, correct?
< BlueMatt>
gmaxwell: didnt disagree, only suggested that ideally we'd fix the issues we have now
< petertodd>
morcos: but again, because that's not really a soft-fork, might as well do a small hardfork at that point to drop the requirement for SHA2 PoW at some point well before just 10 machines are needed
< gmaxwell>
BlueMatt: right now we won't accept lower difficulty blocks after we've validated up to a particular checkpoint.
< gmaxwell>
(okay I'll still explain as other people might miss this)
< gmaxwell>
So you can consider two cases: one where the first peer you fetch from is an attacker, and one where the first peer is honest.
< morcos>
petertodd: i need to think about that.. but i imagine it might always be easier to soft fork, even under adverse scenario like that
< gmaxwell>
If the first peer is an attacker, you'll get header flooded now or under my proposal. (but at least it's just a one time initial install exposure)
< BlueMatt>
gmaxwell: well, not sure its better since the "first checkpoint" is "known amount of work in the best chain at ~release time" instead of a few along the way to 300k
< gmaxwell>
If the first peer is not an attacker, in my proposal you'll quickly have all the headers and be protected from any attacks. Also no less good than now.
< BlueMatt>
(under first-peer-is-evil attacks, but ok)
< gmaxwell>
BlueMatt: but my proposal needs only headers.
< gmaxwell>
oh under first peer is attacker
< petertodd>
morcos: anyway, good to do up some deployment scenarios regardless to explain how that'd work
< BlueMatt>
oh, i thought we applied checkpoints against headers now
< BlueMatt>
nvm
< sipa>
BlueMatt: we do; after passing a certain checkpoint, we don't accept headers that fork off before that checkpoint
< BlueMatt>
ok, lets take this offline
< BlueMatt>
suggested additional topics?
< gmaxwell>
Okay, thats the overview.
< gmaxwell>
I suggested the final alert. I suppose I should coordinate with achow and cobra to get the thing up and alert out. Any reasons to hold off?
< jtimon>
what about instead... block.nHeight < consensusParams.highPowLimitHeight ? consensusParams.powLimit : consensusParams.powLimitLater
<@wumpus>
#topic the final alert
<@wumpus>
no reason IMO
< btcdrak>
gmaxwell: please get it over with.
< gmaxwell>
Okay. will coordinate.
< gmaxwell>
jtimon: that would make it trivial for an attacker to capture you on a fake chain.
< gmaxwell>
jtimon: just feed you a chain of diff 1 blocks of that height.. and now you won't accept the low diff blocks on the real chain anymore.
< jtimon>
gmaxwell: how am I prevented from handling reorgs in the same way as you?
< sipa>
jtimon: creating many blocks is easy. creating much work is hard
< gmaxwell>
anything left in the meeting? (I'll continue this convo after)
< jtimon>
what I think is that it adds less risk, since consensusParams.highPowLimitHeight is fixed but nMinimumChainWork is expected to change with each release, no?
< jtimon>
I must be missing something, I don't see the vulnerability that my proposed change introduces
<@wumpus>
ok, that concludes the meeting I think
<@wumpus>
#endmeeting
< lightningbot>
Meeting ended Thu Oct 27 19:58:34 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
< BlueMatt>
gmaxwell: wait, so how is it better? the only practical difference i see is that you need to get a headers chain up to today before getting protection, instead of only up to checkpoints
< BlueMatt>
but that shouldnt matter much
< gmaxwell>
jtimon: if you start a node and connect to an evil node. The evil node can feed you 500000 blocks at diff 1 and then you will not reorg onto the mainchain anymore.
< jtimon>
yes, how my proposed change makes your branch more vulnerable to that attack is what I don't see
< jtimon>
why wouldn't I reorg to the most-work chain?
< gmaxwell>
Because you won't even process the first block in that chain.
< sipa>
jtimon: because you'll reject the low-difficulty headers from the real chain you get later
< jtimon>
just like your branch without my proposed change, I think
< jtimon>
mhmm, no, say highPowLimitHeight is the current height whatever it is
< gmaxwell>
No. My branch does not activate until you have enough work to be the real chain at the time of the release.
< gmaxwell>
jtimon: yes, 436182. Say it's that.
< gmaxwell>
Attacker computes 500000 diff 1 headers, and gives you that.
< jtimon>
right, and mine activates at a fixed height, say 436182
< gmaxwell>
Under my code you would still reorg to the best chain.
< jtimon>
ok, I accept that chain
< jtimon>
then when I see the real one I reorg, no?
< gmaxwell>
Under your code you would not reorg to the best chain.
< gmaxwell>
No.
< jtimon>
why not?
< sipa>
jtimon: no, you'll reject the low-difficulty headers once you pass the watermark
< gmaxwell>
You will reach 500,000 and now you will reject blocks with low difficulty. So when an honest node sends you block 1 of the real chain you will reject it.
< sipa>
jtimon: because this is a fix to the otherwise existing DoS of being able to feed someone low-difficulty headers
< jtimon>
oh, we have limits on reorg, right, sorry, I get it, thanks
< sipa>
no, we don't have limits on reorg
< gmaxwell>
We don't have limits on reorg.
< jtimon>
mhm, let me read again
< sipa>
we just reject headers that are too low difficulty once we know we're past that stage
< jtimon>
" So when an honest node sends you block 1 of the real chain you will reject it." not if the block is height < 436182
< gmaxwell>
if you don't reject low diff headers someone can exhaust your memory/disk with header flooding.
< gmaxwell>
which the code you were quoting protects against, but wouldn't if it were a height check.
< jtimon>
don't I reject them more than you? ie in your first version nMinimumChainWork will be total work at 436182, then in the next release, total work at a higher height, etc. I always reject low diff after 436182
< jtimon>
I don't get it but let's move on I will think more about it
< sipa>
jtimon: being past 436182 does not mean you're on the right chain
< sipa>
an attacker can very easily create such a long chain
< sipa>
creating as much work as the real 436182 chain is nearly impossible
< jtimon>
sipa: right it means the min diff is higher from now on
< jtimon>
right
< sipa>
jtimon: if the min difficulty is more than 1 you will reject the early part of the real chain!!!
< sipa>
because the real chain has diff 1 in the beginning
< jtimon>
and "my code" will always prefer the real chain because it's more work
< Chris_Stewart_5>
Not sure if this is a good question or not, but is this something deployed with BIP9?
< jtimon>
sipa: no, the early part of the real chain is height < 436182 !
< sipa>
jtimon: we DO NOT want to accept just any header below height 436182
< sipa>
jtimon: that is exactly the DoS attack this change is intended to fix
< sipa>
jtimon: maybe you're missing this: once you have *ANY* chain with chainwork above the limit, you reject *every* header below the new difficulty
< sipa>
even in an entirely unrelated chain
< BlueMatt>
oh, damn, something i should've brought up in the meeting - ProcessNewBlock's CValidationState& argument - its really fucking strange. So its used to communicate either a) Errors (ie out of disk, block pruned, etc) or b) AcceptBlock (ie CheckBlock, ContextualCheckBlock, etc) Invalids()...it is NOT used to return success for the current (or any) block, and even if ActivateBestChain finds an invalid block, it will not set the
< BlueMatt>
CValidationState argument as such. 1) a few places in the code get this wrong and 2) this means you have to duplicate logic between the call-site as well as to CValidationInterface's BlockChecked()
< BlueMatt>
does anyone object to me making it call BlockChecked for AcceptBlock failures?
< jtimon>
I don't see how pindexBestHeader->nChainWork < UintToArith256(consensusParams.nMinimumChainWork) ? consensusParams.powLimit : consensusParams.powLimitLater saves us from the attacker sending us 500k diff 1 blocks just like with my change, that line only saves you from accepting mindiff blocks afterwards
< BlueMatt>
so then ProcessNewBlock would only use its CValidationState argument (which would then just be optional) in case of failures, not invalid blocks
< sipa>
jtimon: it only protects us once we see the real chain
< sipa>
jtimon: your proposal can trigger even if we don't have the real chain
< jtimon>
right, and with my change it only protects us for blocks that have height > 436182, the change is not "globally activated forever" in this case, if a shorter chain with more work appears, you may go back below height 436182 and the min diff blocks would be accepted again
< sipa>
so you haven't solved the issue
< jtimon>
note I didn't say pindexBestHeader->nHeight but block.nHeight (that is, the header you are checking now)
< sipa>
you're really doing something completely different
< jtimon>
well, that line is supposed to save us from min diff blocks in the future, no?
< sipa>
your change does not prevent that
< sipa>
someone can keep spamming low-height headers in your proposal
< jtimon>
oh, and you won't ignore them if they're < 436182, sorry, I finally get it
< jtimon>
thanks
< instagibbs>
Congrats! Managed to sleep exactly through meeting time.
< BlueMatt>
ok, I'm removing CValidationState from ProcessNewBlock
< sipa>
BlueMatt: iirc the only reason for CVS in PNB is to return system failure conditions
< BlueMatt>
sipa: nope, its also used to return AcceptBlock errors
< BlueMatt>
sipa: also, its never checked for system failure conditions
< jtimon>
BlueMatt: not sure what you propose to do; CValidationState is usually used to return error details from functions that already return false when they fail most of the time (if we returned 0 for success and anything else for error codes we wouldn't need it)
< BlueMatt>
the gcc in precise does not support c++11
< luke-jr>
ugh
< BlueMatt>
the ppa currently has an empty dummy package for precise
< BlueMatt>
because fuck precise
< luke-jr>
uh
< luke-jr>
at least leave the old version?
< BlueMatt>
no
< luke-jr>
…
< luke-jr>
patch the code to #define size size_arg? >_<
< BlueMatt>
no
< BlueMatt>
feel free to create the debian/ folder and send it to me and I'll upload
< BlueMatt>
I'm not fighting with it to make precise work
< luke-jr>
XD
< luke-jr>
wait, to do the PPA you just upload the debian folder?
< BlueMatt>
and the original source archive
< BlueMatt>
(ie git archive)
< BlueMatt>
and two other strange metadata files
< luke-jr>
any reason we can't get gitian to produce the files we need to upload? <.<
< BlueMatt>
gitian? they're all in the source tree
< BlueMatt>
except signed by my pgp key
< BlueMatt>
git archive + contrib/debian (though i have some mods i make to contrib/debian....i keep forgetting to re-upstream those, i used to keep it synced)
< BlueMatt>
yes, we do do that, but building a source package results in a) the git archive tar itself b) a tar of the debian/ folder and c) two files which pretty much just list some metadata extracted from the debian folder and hashes of the other files, which is signed by my pgp key
< BlueMatt>
so, no, its really entirely useless to do anything in gitian for this
< gmaxwell>
when did we back off the checkblocks check? was that in 0.13.0 or 0.13.1?