< cfields>
well since we're all doing PRs for lots-of-changes-but-the-reason-isn't-obvious-until-the-next-PR, I suppose I'll do mine too.
< bitcoin-git>
[bitcoin] theuni opened pull request #10285: net: refactor the connection process. moving towards async connections. (master...connman-events6) https://github.com/bitcoin/bitcoin/pull/10285
< fanquake>
cfields Now just open two more, each with another 5 commits, all building on top of each other.
< cfields>
fanquake: haha
< BlueMatt>
cfields: to be fair, sipa objected to mine and so I'm gonna open the reason to get concept acks before merge of the dependent
< cfields>
BlueMatt: sorry, that wasn't meant to be an insult. I've just been staring at this code for a week trying to figure out how to PR it. Just funny timing.
< BlueMatt>
nor was mine, i was just pointing to the vague policy understanding i took from sipa's understanding of my own comments and figured I'd mention it as a possible suggestion
< sipa>
as it seems to be a developing practice the past minutes in the vicinity of this communication channel to speak in relatively lengthy sentences, i shall participate
< fanquake>
cfields If you'd rather stare at some different code for a while; I've been tracking down a Windows issue that's blocking us from moving to latest ZMQ #9254
< BlueMatt>
sipa: is your understanding of the developing practice of relatively lengthy sentences that it shall eventually move towards policy, or is it only a temporal disturbance in the sentence length requirement understanding of the participants of the vicinity of this channel?
< fanquake>
Also, noticed that libzmq is being relicensed.
< BlueMatt>
man i do not envy them in having to collect license grants from /everyone/
< cfields>
BlueMatt: they usually take the route of /everyone with a significant contribution/
< BlueMatt>
even still
< cfields>
yea: "from each individual contributor who wrote a major piece of code in the development process of libzmq"
< cfields>
no fun
< cfields>
but on the bright side, hooray for github centralization! Must make this much easier.
< BlueMatt>
f$cking internet...isp has peering presence but only uses it for IPv6...time to go annoy some noc operators
< BlueMatt>
s/peering/ix peering/
< cfields>
heh
< cfields>
fanquake: that error is very confusing. seems to be in a system header
< cfields>
my best guess is there's some define magic going on somewhere?
< sipa>
BlueMatt: i for one am of the opinion - which i shall hold until presented with sufficient evidence of the contrary - that this is merely a temporary phase, which will soon be considered inconvenient and unnecessary
< fanquake>
cfields Yea I'll have to look into it further.
< gmaxwell>
sipa: are you trying to match my length of lines on IRC? It will take you an awful long time to catch up to my average.
< fanquake>
cfields While you're here, any significant plans depends wise for 0.15.0 ? I had a bunch of updates lined up, but haven't PR'd anything because I wanted to know what you were doing.
< cfields>
fanquake: go for it! the qt overhaul is the big one
< cfields>
fanquake: an osx toolchain bump
< cfields>
fanquake: I'll be doing the toolchain builder as well, but that should just plug in to current depends without too much interruption
< sipa>
gmaxwell: that is an impossible task, i might venture to state, and surely one should not let mere statistics in the history of an ephemeral communication channel like this guide one's actions
< gmaxwell>
sipa: You are currently at an average length of 54 in this channel... compared to my 103.
< fanquake>
cfields qt overhaul in regard to 5.8? I think they have some new "lite" build system. Will we see any benefit from that, I would have thought we were already pretty optimised?
< achow101>
gmaxwell: while you're here, any update on the alert key stuff?
< BlueMatt>
lol
< cfields>
fanquake: split builds for host/target.
< BlueMatt>
saw that one coming
< cfields>
fanquake: and yea, we'll benefit from the lite thing. We just need to crank up a long list of -no-feature-X
< cfields>
(they're supposed to actually work now, rather than just breaking stuff)
< sipa>
perhaps i shall go construct a plugin module for my Internet Relay Chat client to implement nagle's algorithm for outgoing messages and make it combine multiple successive lines into one?
< fanquake>
cfields No worries, I'll have a look at that as well.
< cfields>
fanquake: have you tried to bisect the zmq change? There are a bunch of versions in between :(
< gmaxwell>
achow101: not yet. personally I'm hoping for more old stuff to age off the network, since the key is exploitable against that old stuff. (A fact we didn't really know until I did the final homework before releasing the key)
< fanquake>
Somewhat, I originally opened that PR with 4.2.0, then moved to 4.2.1, then 4.2.2, all seemed to have the same issue. So I'm assuming it's something that might have happened between 4.1.6 -> 4.2.0 . Which I wouldn't have tested yet. I'll post in that PR if I find anything.
< achow101>
gmaxwell: will those vulnerabilities be disclosed or not until old nodes drop off the network?
< cfields>
fanquake: so 4.1.6 works?
< gmaxwell>
achow101: well I'd rather not post a howto then later post the key. :)
< gmaxwell>
achow101: if you want to amuse yourself, why not go find the issues yourself. Make a write up. though I'd ask you to keep it private for now, though ultimately that would be up to you.
< achow101>
but I can't know if I actually found something without the key
< bitcoin-git>
[bitcoin] TheBlueMatt opened pull request #10286: Call wallet notify callbacks in scheduler thread (without cs_main) (master...2017-01-wallet-cache-inmempool-4) https://github.com/bitcoin/bitcoin/pull/10286
< gmaxwell>
achow101: well if you wanted to try it, you can change the key or rig the signature validation to return true for it.
< fanquake>
cfields I can't remember if I tested it or just jumped straight to 4.2.0. I'll go back and take a look.
< cfields>
fanquake: maybe try disabling the precompiled stuff?
< cfields>
fanquake: aha! that could definitely be it. we may need to force the WINNT version
< achow101>
gmaxwell: heh. social engineering failed :(. I will amuse myself and find those vulns
< sipa>
achow101: hahaha
< gmaxwell>
hehe.
< bitcoin-git>
[bitcoin] jimmysong opened pull request #10287: [tests] Update Unit Test for addrman.h/addrman.cpp (master...test_addrman) https://github.com/bitcoin/bitcoin/pull/10287
< jl2012>
on testnet I made a script to repeatedly send the same coin to myself every 15 seconds. When the tx chain becomes long enough, however, it stops working until confirmed. So there is a limit? Is it possible to bypass it?
< achow101>
jl2012: the limit is 25 unconfirmed transactions in a chain
< achow101>
you can raise it for your own node but not anyone else
< jl2012>
so other miners won't mine beyond 25 txs by default?
< achow101>
yes
< jl2012>
thanks
< sipa>
jl2012: there is also an option to make the wallet refuse to create transactions with coins in too long chains (though it avoids using them by default)
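A minimal sketch of the limits being discussed, assuming the policy constants and option names Bitcoin Core used around this time (-limitancestorcount / -limitdescendantcount for the mempool, and likely -walletrejectlongchains for the wallet-side refusal sipa mentions); the helper below is illustrative, not the actual mempool code:

    // Illustrative sketch, not Bitcoin Core's actual implementation.
    // Default mempool policy limits, overridable per node via
    // -limitancestorcount / -limitdescendantcount:
    static const unsigned int DEFAULT_ANCESTOR_LIMIT = 25;   // unconfirmed ancestors
    static const unsigned int DEFAULT_DESCENDANT_LIMIT = 25; // unconfirmed descendants

    // Each node applies the check to its own mempool only, so raising the
    // limit locally does not make other nodes (or miners) accept the chain.
    bool WithinChainLimits(unsigned int ancestors, unsigned int descendants)
    {
        return ancestors <= DEFAULT_ANCESTOR_LIMIT &&
               descendants <= DEFAULT_DESCENDANT_LIMIT;
    }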
< fanquake>
Is there a dev meeting tonight?
< jonasschnelli>
fanquake: Yes.
< jonasschnelli>
fanquake: in 6h and 7min.
< fanquake>
jonasschnelli 3am :|
< jonasschnelli>
fanquake: hah. Yeah. Hard for OZ.
<@wumpus>
yes the time is not exactly ideal for asia/OZ
< jtimon>
suggested topic: summary of BlueMatt's overall plan for libconsensus
< kanzure>
is this 10240?
< jtimon>
if BlueMatt wants of course
< BlueMatt>
jtimon: k, can share. jonasschnelli you have the floor :)
< jonasschnelli>
Re. HD restore. I'm not sure if we should always try to restore funds or if we should check for the bestblock and compare it to the chain tip and only then restore
< jonasschnelli>
But I think we should only restore if the wallet's bestblock lags behind
< instagibbs>
oh sorry, didnt see topic set already :)
< jonasschnelli>
Because...
< jonasschnelli>
Encrypted wallets may need to unlock
< jonasschnelli>
And also for performance / log reasons
< BlueMatt>
jonasschnelli: i assumed we'd always keep a buffer of X pubkeys around
< BlueMatt>
because you may have wallet "forks"
< BlueMatt>
not sure what you mean by "restore"?
< BlueMatt>
(feel free to tell me to shut up and go read the pr)
< jonasschnelli>
BlueMatt: By restore I mean always check the keypool keys and auto-extend (if only 50 [TBD] keys are left, topup to 100 [TBD])
< kanzure>
looks like it's re: finding relevant transactions
< jonasschnelli>
If we always restore... we would need to unlock encrypted wallet...
< jonasschnelli>
(more often)
< sipa>
jonasschnelli: my assumption was that we'd always mark seen keys as used (and we should do that independently)
< sipa>
jonasschnelli: we should also always extend the keypool when we can
< BlueMatt>
jonasschnelli: ah, you mean like "when do we extend keypool to watch buffer"?
< jonasschnelli>
sipa: Yes. But what if we can't?
< sipa>
jonasschnelli: and if the keypool runs out in a non-interactive setting, shutdown
< achow101>
If it needs to generate keys you could prompt the user right when the main gui pops up
< jonasschnelli>
And what's a safe gap limit? I would assume >100 keys.
< BlueMatt>
another option would be to stop updating best seen block
< BlueMatt>
and then kick off a background rescan-from-that-height when wallet next unlocks
< jonasschnelli>
If someone has handed out 101 keys and only position 101 has paid...
< BlueMatt>
if gap goes under some threshold
< kanzure>
yea, trigger on next unlock is better than achow101 popup
< BlueMatt>
achow101: needs to be cli-compatible, though
< jonasschnelli>
achow101: GUI is solvable..
< sipa>
jonasschnelli: if we fix the bdb flushing stupidity, generating new keys becomes very cheap
< jonasschnelli>
I don't know how to solve the non GUI way
< sipa>
jonasschnelli: shutdown. make sure it doesn't happen
< achow101>
jonasschnelli: how would you hand out 101 keys if the 101st wasn't generated yet?
< BlueMatt>
jonasschnelli: i mean keys are cheap, can do 250 or 500 or something crazy
< jonasschnelli>
sipa: But how to unlock during init in the first place?
< sipa>
jonasschnelli: you can't
< BlueMatt>
jonasschnelli: but cant we just use the keypool number now as the "buffer"?
< sipa>
ah, i see what you mean
< jonasschnelli>
But right after we rescan and sync
< BlueMatt>
and, like, the lower bound should be like keypool count / 2
< BlueMatt>
sipa: you cant just shutdown mid-sync
< jonasschnelli>
BlueMatt: Yes. But with the current 100 default, we would enforce a shutdown on startup for encrypted wallets
< sipa>
BlueMatt: why not?
< sipa>
it's an error condition that we cannot recover from
< * BlueMatt>
re-proposes that we stop updating wallet's best height if our keypool falls below keypool / 2
< BlueMatt>
and then rescan when keypool next gets filled
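Roughly, that proposal reads like the sketch below; the names and structure are invented for illustration and are not the eventual wallet implementation:

    // Illustrative only; these names do not exist in the wallet code.
    struct WalletSyncState {
        unsigned int keypool_size = 0;      // unused look-ahead keys left
        unsigned int keypool_target = 500;  // desired look-ahead size (number TBD above)
        int best_block_height = 0;          // last block the wallet recorded as processed
        bool rescan_pending = false;
    };

    // Called per connected block, after the wallet has scanned it for its own
    // keys and (if unlocked) tried to top the keypool back up.
    void UpdateBestBlock(WalletSyncState& w, int connected_height)
    {
        if (w.keypool_size < w.keypool_target / 2) {
            // Look-ahead buffer is thin (wallet probably locked): stop
            // advancing the best-block marker so we remember where a later
            // rescan has to start once the keypool can be refilled.
            w.rescan_pending = true;
            return;
        }
        if (w.rescan_pending) {
            // Keypool was refilled (e.g. after walletpassphrase): a rescan
            // from w.best_block_height would be kicked off here.
            w.rescan_pending = false;
        }
        w.best_block_height = connected_height;
    }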
< sipa>
hmm
< jonasschnelli>
IMO an explicit "restore-mode" with a "unlock during startup" (not sure how) would be preferable for encrypted wallets
< sipa>
BlueMatt: you should also stop pruning
< BlueMatt>
sipa: yes, that would be my major reservation
< BlueMatt>
jonasschnelli: not sure you realistically can in a daemon setting
< jonasschnelli>
is stdin a total nogo? *duck*
< sipa>
so i guess we need a special "stop syncing" mode that we go into when the keypool runs out
< sipa>
jonasschnelli: there is no stdin with -daemon
< BlueMatt>
sipa: i guess you can stop pruning and if disk fills it will do the shutdown part for you :p
< sipa>
BlueMatt: ugh
< BlueMatt>
yea, i know
< jonasschnelli>
sipa: Yes. But at least you could run in non-daemon headless
<@wumpus>
yes a blocking mode makes sense in that case
< BlueMatt>
ok, so blocking in pruning mode, rescan-later in non-pruning mode?
<@wumpus>
and no, stdin is not an option, there should be no expectation with bitcoind that there's anyone at the terminal
< jonasschnelli>
If you run with an encrypted wallet and the bestblock lags behind, shutdown if we can't unlock over stdin
< BlueMatt>
no stdin, just shutdown
< jonasschnelli>
wumpus: So we have only RPC to unlock?
<@wumpus>
everything should be scriptable
< BlueMatt>
jonasschnelli: but only in prune mode
<@wumpus>
jonasschnelli: yes
< jonasschnelli>
But how do we unlock/extend before we sync?
<@wumpus>
just wait until the wallet is unlocked to start
< jonasschnelli>
rpc starts after chain sync
< sipa>
jonasschnelli: you go into a blocking mode, and you continue after walletunlock
<@wumpus>
right
< sipa>
jonasschnelli: and no, no stdin ever
< jonasschnelli>
but can we block the sync and wait for RPC walletunlock?
< sipa>
jonasschnelli: why not?
<@wumpus>
sure
< jonasschnelli>
(without changing too much)?
< BlueMatt>
ProcessNewBlock { return false; }
< jonasschnelli>
okay... sounds good. Need to take a closer look.
< sipa>
add a function to validation.h to let the core know that validation cannot progress
< BlueMatt>
maybe stop net too under the current net-pause stuff
< sipa>
right
< jonasschnelli>
Good point.
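A tiny sketch of what such a hook could look like, assuming an atomic flag checked by block processing and by the network layer; none of these names exist in the codebase:

    #include <atomic>

    // Illustrative only. A component (e.g. an encrypted wallet with an
    // exhausted keypool) flips the flag; block processing returns early and
    // the net layer stops requesting blocks while it is set.
    static std::atomic<bool> g_validation_paused{false};

    void SetValidationPaused(bool paused) { g_validation_paused = paused; }
    bool CanProcessNewBlocks() { return !g_validation_paused; }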
< kanzure>
should it shutdown if wallet is not unlocked within a certain time period? if it's not shutdown users might expect it to still be syncing.
< jonasschnelli>
Next question: what's a sane gap limit?
<@wumpus>
the only precondition for getting out of Init() is that the genesis block has been processed, everything else can be delayed
< jonasschnelli>
100 seems way too low to me
< sipa>
jonasschnelli: fix bdb flushing insanity, and raise it to 1000 or 10000
< BlueMatt>
jonasschnelli: keypool / 2?
< jonasschnelli>
(risk of losing funds is involved)
< BlueMatt>
and we can bump keypool to 500
< achow101>
how would you know that it is blocking and you need to walletunlock?
< kanzure>
jonasschnelli: i think the answer will depend on performance. also, do you really want to encourage users to use gaps? the answer might be yes..
< kanzure>
achow101: yes that is why i suggested shutdown after a certain period of time. users might not realize that syncing is stopped otherwise.
< jonasschnelli>
there my next concern pops up... all users will always have to have 500+ keypools. In an explicit restore mode, only then would we need to have a large pool
< sipa>
jonasschnelli: who cares about 500 keys?
< sipa>
it's 16 kB of memory
< kanzure>
i thought derivation time was the bottleneck?
< sipa>
well, some small constant multiple of that
< jonasschnelli>
Hmm... yes.
< jonasschnelli>
If it would just be a pubkey and H160 only... but it's also the private key! hell
<@wumpus>
the memory usage of keys is not an issue, just generation time (and that's only due to bdb stupidity)
< sipa>
kanzure: we can do ~10000 derivation steps per second on a single thread on modern CPU
< kanzure>
is that with bdb madness? :)
< sipa>
and maybe 5 due to BDB flushing
<@wumpus>
calling fsync after every key is not a good idea, it should create the entire keypool refill in one transaction
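For illustration, the shape of a batched refill under that assumption; the interface below is invented, the real wallet would use its own BDB batch/transaction machinery:

    // Invented interface for illustration: the point is a single
    // begin/commit pair around the whole refill instead of a flush per key.
    struct KeyRecord { /* pubkey, (encrypted) privkey, metadata, ... */ };

    struct WalletBatchSketch {
        bool TxnBegin()  { return true; }                 // open one DB transaction
        bool WriteKey(const KeyRecord&) { return true; }  // buffered inside the txn
        bool TxnCommit() { return true; }                 // one flush for everything
    };

    bool TopUpKeyPool(WalletBatchSketch& batch, unsigned int missing)
    {
        if (!batch.TxnBegin()) return false;
        for (unsigned int i = 0; i < missing; ++i) {
            KeyRecord key{};                              // derive the next HD key here
            if (!batch.WriteKey(key)) return false;
        }
        return batch.TxnCommit();                         // single sync for the refill
    }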
< luke-jr>
IMO automatic pruning should probably have as a precondition that the wallet has updated to the block being pruned, if it doesn't already; then the wallet can just set its criteria for processing
< luke-jr>
and if auto-pruning is enabled, block validation (safely) when the size is hit, until it can prune further?
< sipa>
luke-jr: agree, but that's not a concern right now as the wallet updates synchronously... with BlueMatt's coming changes maybe that changes
< BlueMatt>
yes, that changes, but it still shouldnt be too slow
< jonasschnelli>
With HD, there would also be no need for the disk-keypool for unencrypted wallets,.. it's just legacy. We could always fill up in-mem
< BlueMatt>
if your wallet falls behind consensus, you have a very, very large wallet
< BlueMatt>
(and should pause sync anyway)
< sipa>
right, the wallet should have the ability to pause syncing or prevent pruning
< jonasschnelli>
Conclusion: a) always scan keypool and topup, b) extend keypool and gap-limit to 500+, c) block when encrypted until RPC unlocked.
< sipa>
sgtm
<@wumpus>
yes
< jonasschnelli>
thanks. That was effective
<@wumpus>
#topic libconsensus (BlueMatt)
< BlueMatt>
yes, so obviously this is all based on #771
< BlueMatt>
but pr #10279 creates a CChainState class which will hold things like mapBlockIndex, chainActive, etc, etc
< gribble>
https://github.com/bitcoin/bitcoin/issues/10279 | Add a CChainState class to validation.cpp to take another step towards clarifying internal interfaces by TheBlueMatt · Pull Request #10279 · bitcoin/bitcoin · GitHub
< BlueMatt>
and have ProcessNewBlock, Activate..., Connect, etc, etc, etc
< sipa>
yay
< BlueMatt>
long-term that class' public interface will be libbitcoinconsensus, but right now its really just to clean up internal interfaces within validation.cpp
<@wumpus>
sounds good to me
< BlueMatt>
that class would get a pcoinsTip and related stuff to write/read blocks from disk
< BlueMatt>
and then only be able to call that and pure functions (eg script validation)
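Inferring only from what is said here, the class shape is roughly the outline below; the real #10279 differs in detail and keeps everything inside validation.cpp:

    // Outline only, inferred from the discussion; not the code in the PR.
    class CChainStateSketch {
        // State that is currently global in validation.cpp:
        //   mapBlockIndex, chainActive, pcoinsTip, block/undo file access, ...
        // The class would call only its own members plus pure functions
        // (e.g. script validation), nothing else in the node.
    public:
        // Long term this public surface is what libbitcoinconsensus would
        // expose (probably behind a C API); short term it just clarifies the
        // internal interfaces of validation.cpp.
        bool ProcessNewBlock(/* const CBlock&, ... */)       { return false; } // elided
        bool ActivateBestChain(/* CValidationState&, ... */) { return false; } // elided
        bool ConnectBlock(/* const CBlock&, CCoinsViewCache&, ... */) { return false; } // elided
    };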
< jtimon>
BlueMatt: so what's the next thing we will be able to expose with these changes?
< cfields>
ooh, +1
< BlueMatt>
there is a bit of cleanup in the pr, but mostly its just moving into a class
< BlueMatt>
jtimon: expose-wise? probably nothing for like 2 more releases "until its ready"
< * BlueMatt>
is not a fan of libbitcoinconsensus being a grab-bag of random verification functions
< jtimon>
the class itself? mhmm
< BlueMatt>
i mean "the class" but I assume via a C API
< BlueMatt>
any other questions? or next topic?
< jtimon>
yes, I know, and I'm very open to seeing what you want to expose, even if I don't renounce the verifyWithoutChangingState x {block, header, tx, script} + getFlags() vision I had
< jtimon>
but that's helpful, I can just imagine the class being exposed as a c api
<@wumpus>
not directly, it's just another step toward being able to
< jonasschnelli>
I wanted to ask if a first step to announce pruned NODE_NETWORK would make sense.
< jonasschnelli>
Could be NODE_NETWORK_LIMITED
< sipa>
jonasschnelli: what would it entail?
< jonasschnelli>
The only requirement is relay, and serve the last 144 blocks
< petertodd>
jonasschnelli: ACK
<@wumpus>
we had this discussion recently, I think the conclusion was to use two service bits
<@wumpus>
(or one, at first)
< gmaxwell>
what wumpus said.
< jonasschnelli>
(which is almost always possible with the current auto-prune limit)
< sipa>
i would suggest something that guarantees 1 day and 1 week
<@wumpus>
one bit combination would be 144, one would be ~1000
< luke-jr>
jonasschnelli: so segwit prune=550 wouldn't be allowed?
< * BlueMatt>
resists the urge to bikeshed on the "1 week" number
< gmaxwell>
Which should be 2 days and 2 weeks so the boundary condition doesn't leave you right out.
< sipa>
BlueMatt: i have data!
< jonasschnelli>
luke-jr: We would have to bump there
< gmaxwell>
BlueMatt: sipa has data on request rates.
< BlueMatt>
oh, true, thats right
<@wumpus>
luke-jr: it's allowed, but it can't signal anything
< sipa>
BlueMatt: i'll analyse the numbers again if there is interest
< gmaxwell>
The only thing to bikeshed is how much higher do we need the cutoff than his data, it should be at least a couple blocks higher because of reorgs/boundary conditions.
< gmaxwell>
our existing minimum sizing for pruning is sized out for 288 blocks, so I think we should just do that, it will make ~144 pretty reliable.
< jonasschnelli>
Two service bits seem great. Did anyone start on a spec/BIP?
< sipa>
BlueMatt: i just have a log of which depths of blocks are being fetched from my node
< cfields>
how would NODE_NETWORK_LIMITED interact (if at all) with the remote peer's advertised height?
< sipa>
BlueMatt: since february
< gmaxwell>
cfields: I don't think it should?
< luke-jr>
IMO would be nicer to have the new service bit require *some* historical storage, but I guess we're not running out..
< jonasschnelli>
IMO the purpose is to signal "I have only a limited amount of blocks"
<@wumpus>
cfields: not at all, we ignore that value
< BlueMatt>
sipa: yes, i recall now
< cfields>
ok, good
< gmaxwell>
That advertised height shouldn't be used for almost anything.
<@wumpus>
(as it's easily spoofable)
< jonasschnelli>
The best-height in version doesn't matter IMO
<@wumpus>
it isn't used at all
< sipa>
i believe it is not used at all
< sipa>
(by bitcoin core)
< luke-jr>
I'm not sure why more than the last 1-2 blocks should be needed to indicate relaying
< jonasschnelli>
wumpus: I think it's used by SPV
< gmaxwell>
luke-jr: because of reorgs.
< gmaxwell>
if I can't serve you the parents of my tip, I can't help you reorg onto it, making my serving nearly useless.
<@wumpus>
jonasschnelli: I meant in bitcoin core; I don't know about other implementations
< luke-jr>
hmm
< jonasschnelli>
Is a min of 144 blocks too high?
< jonasschnelli>
No... nm
< petertodd>
luke-jr: and requiring nodes to have a GB or two of space for this is a trivial cost these days
< achow101>
is the point of NODE_NETWORK_LIMITED just to tell nodes that they can request the most recent blocks from said node?
< luke-jr>
assuming we only fetch blocks when reorging to their chain
< instagibbs>
It's the unbounded growth that gets people to shut off nodes
< achow101>
and right now you can't request any blocks from pruned nodes?
< gmaxwell>
In any case the bit must promise more than nodes count on.
< sipa>
achow101: pruned nodes don't even advertise they can relay blocks
< instagibbs>
achow101, NODE_NETWORK is a flag for that, and it's missing from pruned nodes currently
< jonasschnelli>
achow101: once you are in sync (>-144) you can pair with pruned peers and be fine
< achow101>
ok
< gmaxwell>
Say nodes frequently need to catch up a day. You only keep 144 blocks. Peer needs to catch up a day, connects to you... oops, you can't help them because a day turned out to be 150 blocks, they wasted their time connecting to you for nothing.
< gmaxwell>
So for this to be useful the requester has to be conservative and not try to talk to you unless it thinks you are _very_ likely to have what it needs, which means that you need a fair amount more than the target.
< gmaxwell>
So to serve a day of blocks, you'll need a day and a half or so. Round it up to 288.
< gmaxwell>
petertodd: oh hi. long time no see.
< petertodd>
gmaxwell: heh
< gmaxwell>
and as mentioned, our pruning limit is already there.
< jonasschnelli>
I just think we should allow the current auto-pruning 550 peers to signal relay and "limited amount of blocks around the tip".
< luke-jr>
so 137 blocks?
< jonasschnelli>
If we set NODE_NETWORK_LIMITED higher while allowing shorter pruning, this would waste potential peers
< petertodd>
luke-jr: 1337 blocks
< gmaxwell>
jonasschnelli: then that will never be used.
< jonasschnelli>
heh
< gmaxwell>
If we don't know how many blocks to expect we'll never connect to them.
< gmaxwell>
This impacts the connection logic, we'll need logic that changes the requested services based on an estimate of how far back we are.
< sipa>
when you're fully synced, why wouldn't you connect to a node that guarantees for example having the last 10 blocks?
< jonasschnelli>
gmaxwell: Well, if we are in sync, you could be friendly and make space for the ones who need to sync and re-connect to limited peers?
< jonasschnelli>
Yes. What sipa said
< jonasschnelli>
I would expect the larger the chain grows the more pruned peers we will see
< jonasschnelli>
(rough assumption)
< sipa>
not that we should support pruning that much, but for bandwidth reasons it may be reasonable that someone wants to relay new blocks, but not historical ones
< petertodd>
sipa: I think we can make a good argument that requiring nodes to have something like 1GB of storage for historical blocks isn't a big deal, and makes the logic around all this stuff simpler
< jonasschnelli>
signaling the amount of block you have is also not extremely effective because of the addr-man, seed/crawl delay
< jonasschnelli>
*blocks
< sipa>
petertodd: again, not talking about storage, but about bandwidth
< sipa>
it's an open question - i'm not convinced it's needed or useful
< jonasschnelli>
Yes. Agree with sipa. Main pain point in historical blocks is upstream bandwidth
< gmaxwell>
sipa sure that would also work: but (1) nodes that only keep ten blocks are a hazard to the network, and (2) there is no real reason to keep that little, and (3) we don't have signaling room to send out every tiny variation.
< petertodd>
sipa: how much more bandwidth do your stats say serving ~100 or whatever blocks is vs. 10?
< petertodd>
sipa: I mean, you can just turn off NODE_NETWORK_LIMITED entirely
< jonasschnelli>
gmaxwell: They would keep more but only willing to serve the last 10
< gmaxwell>
sipa: if you want to limit your bandwidth, limit it.
< sipa>
gmaxwell: well we have 3 possibilities
< jonasschnelli>
NODE_NETWORK_LIMITED would be a limit
< sipa>
fair enough, we have other mechanisms for limiting bandwidth
< edcba>
QoS on historical blocks :)
< sipa>
petertodd: i need to look again... it may not be that much difference
< gmaxwell>
sipa: and we have had reorgs longer than 10 in recent memory, what happens if all of your peers are like that?
< BlueMatt>
gmaxwell: we have?!
< sipa>
BlueMatt: bip50 was 30 deep, iirc
< BlueMatt>
oh, you mean the csv false-signaling reorgs?
< BlueMatt>
yea, ok
< sipa>
ok, i retract my suggestion for 10 deep
< jonasschnelli>
Would the two bit amount-of-blocks-available signaling be effective regarding the delay of address distribution?
< BlueMatt>
always need 2 * MAX_HUMAN_FIX_TIME_FACTOR for everything :p
< sipa>
but we do have 3 possibilities with 2 bits... perhaps we can have a 3rd limit
< jonasschnelli>
People tend to prune to MB rather than blocks (which could be a design mistake)
< gmaxwell>
jonasschnelli: Why do you think it has much to do with address distribution delay at all?
< gmaxwell>
if you keep the last 288 you keep the last 288.. you're not going to flicker that on and off.
< gmaxwell>
jonasschnelli: the design guarantees that you'll have 288 blocks.
< gmaxwell>
(of the software)
< jonasschnelli>
gmaxwell: Maybe I'm looking too much at our seeders... but the crawling till you serve IPs can be very delayed.
< gmaxwell>
so?
< jtimon>
jonasschnelli: I think I agree on prunning by height being more useful
< gmaxwell>
You'll signal you keep X if you're guaranteed to keep X.
< jtimon>
or relative height rather
< sipa>
s/height/depth/
< jonasschnelli>
Okay. But prune=550 is a MB target. Does it guarantee and amount of blocks?
< jtimon>
sipa: right, thanks
< jonasschnelli>
*an
< gmaxwell>
jonasschnelli: it guarantees we'll keep 288 blocks. The whole feature was designed to guarantee that for reorg reasons, but people thought offering a MB centric UI would be more useful to users.
< gmaxwell>
I think in the future we'll change it to a limited set of options.
< gmaxwell>
Maybe all of them named after words for big in different languages, like starbucks. :P
< jonasschnelli>
Okay. Fair enough...
< achow101>
gmaxwell: the MB option confuses people though since it includes the undo data. people see 550 and assume it means 550 blocks, since blocks are 1 MB
< luke-jr>
eh, 550 MB is only guaranteed 137 blocks with segwit
< luke-jr>
oh, forgot undo data
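Back-of-envelope arithmetic for why the 550 MB floor maps to 288 blocks; the exact margins in the source comments may differ, so treat the breakdown as an assumption:

    // Rough arithmetic only; the real constants and the precise breakdown
    // live in the source (validation.h / init.cpp) and may differ.
    static const unsigned int MIN_BLOCKS_TO_KEEP = 288;            // ~2 days of blocks
    // 288 blocks at up to 1 MB each               ~= 288 MB
    // + undo (rev*.dat) data and headroom for
    //   reorgs/orphans and block-file granularity  ~= 550 MB total
    static const unsigned long long MIN_DISK_SPACE_FOR_BLOCK_FILES = 550ULL * 1024 * 1024;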
< sipa>
gmaxwell: "For me a venti depruned node, please"
<@wumpus>
lol @ coffee names
< gmaxwell>
luke-jr: then that needs to get fixed.
< jonasschnelli>
lol
< gmaxwell>
sipa: with a double shot of xthin.
< jonasschnelli>
pfff
< gmaxwell>
luke-jr: easy fix.
< luke-jr>
controversial fix
< sipa>
gmaxwell: it'll break existing configs
< jonasschnelli>
Okay. I can start writing a draft spec for the two-bit (144/~1000) NODE_NETWORK_LIMITED... will announce once I have something
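A sketch of how the two-bit signaling and the matching connection logic could look; the bit positions, names, and thresholds below are placeholders, not the eventual BIP:

    #include <cstdint>

    // Placeholder flag values for illustration only.
    static const uint64_t NODE_NETWORK              = 1 << 0;  // full history (existing bit)
    static const uint64_t NODE_NETWORK_LIMITED_288  = 1 << 10; // roughly the last 288 blocks
    static const uint64_t NODE_NETWORK_LIMITED_1000 = 1 << 11; // roughly the last ~1000 blocks

    // Per the discussion: the requester has to be conservative, so it only
    // accepts limited peers when it is well inside their guaranteed window.
    // Any one of the returned flags would make a peer useful to us.
    uint64_t AcceptableServices(int blocks_behind)
    {
        if (blocks_behind > 700) return NODE_NETWORK;  // well past both limits
        if (blocks_behind > 100) return NODE_NETWORK | NODE_NETWORK_LIMITED_1000;
        return NODE_NETWORK | NODE_NETWORK_LIMITED_1000 | NODE_NETWORK_LIMITED_288;
    }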
< BlueMatt>
sipa: I'm sorry, I dont speak starbucks
< gmaxwell>
sipa: so?
< gmaxwell>
jonasschnelli: seriously, like why did I bother commenting today?
< sipa>
BlueMatt: venti is italian for 20. easy. that's obviously more than "grande" or "tall"
< gmaxwell>
first peak is at 144, it _must_ keep more than that to be useful.
< BlueMatt>
sipa: ehh, I'll stick with my *good* coffee, thanks
< BlueMatt>
anyway, next topic?
<@wumpus>
#topic high priority for review
< praxeology>
My 2 cents: the UI should stay in MB, but underneath, the variables stored by the software should be in block count... for the prune threshold.
< jtimon>
random thought: what about maintaining the MB option and adding an incompatible one (you can only set one) with depth? then the MB can be just an estimation that translates to depth on init, but you don't break old configs, only the expected guarantees about limits
< BlueMatt>
luke-jr: NACK
< luke-jr>
BlueMatt: NACK topic or NACK it altogether? :/
< achow101>
luke-jr: planned obsolescence is a bad name for it
< BlueMatt>
second
<@wumpus>
added 10285, swapped #10148 for #10195
< * luke-jr>
waits for topic change before going into discussion
< cfields>
luke-jr: maybe explain reasoning for doing so first?
< luke-jr>
re achow101's comment, I don't really think it matters what we call it
< luke-jr>
cfields: 10282 has a full explanation
< petertodd>
any timeframe short enough to really be useful will probably be short enough to raise political risks...
< luke-jr>
1) it's basically guaranteed to be unsafe by then; 2) hardforks become softforks with enough lead time
< jtimon>
I think if it's optional and disabled by default it kind of defeats the point, but I certainly don't want that for myself or the users I recommend to use bitcoin core
< sipa>
luke-jr: i don't see how this has anything to do with consensus changes
< petertodd>
also, is there any precedent for this kind of expiration in other software?
< BlueMatt>
luke-jr: 110% sends the wrong message. if i expected any reasonable person to see that and think "I need to think for myself about what consensus of the network is" I'd be happy with it, but realistically the only people reading that will think "oh, I have to switch to the latest thing from Bitcoin Core, for whatever Bitcoin Core is according to my local google server"
< luke-jr>
jtimon: what is the use case for running node software over 7 years after its release, without maintenance?
< gmaxwell>
petertodd: yes, but I'm not aware of any that can be overridden.
< sipa>
luke-jr: i think insecurity of the software is perhaps a good reason, but not consensus
< petertodd>
gmaxwell: got any examples?
< gmaxwell>
petertodd: see also the thing with debian and xscreensaver.
< luke-jr>
BlueMatt: that's a problem independent of this IMO
< petertodd>
gmaxwell: ah, yeah, that crazy situation...
< BlueMatt>
luke-jr: how is that independent of the thing which creates it? but, indeed, security may be a reasonable reason, not sure it's justified, though
< BlueMatt>
am i really not allowed to not upgrade the bitcoind I've got running behind my bitcoind/xyz firewall?
< luke-jr>
BlueMatt: people will mostly all update before this triggers; probably using the insecure method you describe
< gmaxwell>
I agree with petertodd's point about short enough to be useful is short enough to be problematic. :( But there are other not really useful features...
< BlueMatt>
oops, bitcoind crashed in production
< luke-jr>
BlueMatt: note this has an explicit override allowed
< petertodd>
gmaxwell: and there's a larger point too: chances are the surrounding software on your machine is also not getting updated anyway, so you've got other big problems
< luke-jr>
if you really don't want to upgrade, just add to your config file
< BlueMatt>
luke-jr: yes, and you can do that /after/ your bitcoind has crashed
< jtimon>
luke-jr: let's say my friend remembers what I told him about being up to date 6 years and 11 months after I helped him install bitcoin core
< BlueMatt>
which is kinda shit
< gmaxwell>
it would be nice to be able to say there are no nodes running older than X without the user deciding to keep them running.
< luke-jr>
BlueMatt: you could do it before as well, but IMO after 7 years it's okay to force the user to do something
< gmaxwell>
BlueMatt: yes, but the crash was an RCE and all your funds are now gone. :)
<@wumpus>
if you run nodes in production you'll have some system to monitor it
< BlueMatt>
gmaxwell: not if its the bitcoind that everything talks to on your network and it just sits behind sufficient layers of regularly-updated bitcoind firewalls
<@wumpus>
and summon an operator on crashes
< BlueMatt>
wumpus: lol, i meannnnn, maybe
< sipa>
wumpus: hahaha, yes, with a server farm at the end of the rainbow
< luke-jr>
BlueMatt: and what if it doesn't crash, but someone exploits your failure to enforce a softfork?
< jtimon>
or shouldn't I recommend bitcoin core for a wallet?
< petertodd>
wumpus: you should talk to some banking IT guys about how hard it is to get approval to update things :)
< luke-jr>
jtimon: I don't understand your argument.
<@wumpus>
petertodd: I'm not saying anything about updating
< instagibbs>
jtimon, you can over-ride the setting, I believe
< petertodd>
wumpus: literally touching a config option is an update by those standards
< jtimon>
instagibbs: oh, I missed that
<@wumpus>
only about crashes, if some software is important to your business and it crashes, you'll notice.
<@wumpus>
anyhow
<@wumpus>
#endmeeting
< lightningbot>
Meeting ended Thu Apr 27 20:01:43 2017 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
< jtimon>
luke-jr: sorry, I missed what instagibbs just said, should read the proposal I guess
< jnewbery>
"after 7 years it's okay to force the user to do something" - not sure I understand this. Who's forcing the user to do something?
<@wumpus>
heck my nodes do nothing important and even I have a one-liner script that sends me a mail on crash or unexpected exit
< luke-jr>
jnewbery: 7 years after it's released, bitcoind/-qt would exit and refuse to start until the user chose to either upgrade it, or override the expiration
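As a sketch, the check being described could be as small as the following, assuming a build-time release timestamp and an override setting; the option name is invented, and this describes a proposal, not merged behavior:

    #include <ctime>
    #include <cstdint>

    // Illustrative only. RELEASE_TIME would be baked in at release time; the
    // override corresponds to a hypothetical config option (e.g. something
    // like -ignoreexpiration, name invented here).
    static const int64_t RELEASE_TIME = 1493251200;               // example value
    static const int64_t SEVEN_YEARS  = 7LL * 365 * 24 * 60 * 60; // ignoring leap days

    bool MayStart(bool user_override)
    {
        const int64_t now = static_cast<int64_t>(std::time(nullptr));
        if (now - RELEASE_TIME < SEVEN_YEARS) return true;  // still within its lifetime
        return user_override;                               // expired: require explicit opt-in
    }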
< jtimon>
jnewbery: also, this is free software, you can't force users
< jnewbery>
jtimon - right that's my point
< sipa>
my node does something important, and i have a 0-line script that sends me an mail on crash (= people mail me that my website stopped updating)
< * sipa>
hides
< luke-jr>
jtimon: you can force users to *do something* :P
<@wumpus>
sipa: well that works too
< luke-jr>
sipa: haha
< * BlueMatt>
has a feeling sipa's approach is more common
< jtimon>
luke-jr: yes, but then I will disable the anti-feature for my friend the first time I help him install and tell him about updates
< luke-jr>
jtimon: why?
< luke-jr>
the "anti-feature" has no harm in any reasonable scenario
< * BlueMatt>
would do the same
< jnewbery>
I'd be happy for there to be a warning if running an old version, but I really don't like it automatically disabling a node after 7 years, particularly if the reason is "this enables a sort of certainty of old nodes ending by a deadline, turning any hardfork into a de facto softfork provided it is planned 8 years out."
<@wumpus>
another possibility would be to only refuse to start after 7 years, not stop if already running
<@wumpus>
this would give less guarantees, but still
< jnewbery>
In that case, we might as well have auto-update
< * instagibbs>
imagining someone running a node non-stop for 7 years
<@wumpus>
jnewbery: wow that's very black and white
< luke-jr>
jnewbery: not the same thing at all; the user can always opt to bypass the expiration
< petertodd>
wumpus: god help us if we ever release a version that gets the date wrong...
< BlueMatt>
wumpus: yea, something much more watered down may be reasonable, including huge fat warnings if the software is like 2 years old
< BlueMatt>
wumpus: the perception will be black and white as well
<@wumpus>
petertodd: well it could mention the flag to override I guess
< jtimon>
luke-jr: again, sorry, should read the proposal, but "programmed obsolescence" definitely sounds like an anti-feature; after reading the proposal, if I think it is a feature, I'll maybe just suggest renaming it
< BlueMatt>
refusing to start with an error message mentioning the flag to override would be reasonable, but also largely useless
<@wumpus>
anyhow, apparently the consensus is to not do it, that's fine
< BlueMatt>
though maybe not just to remind users
< petertodd>
wumpus: you still might have quite a bit of chaos of nodes being shut down all at once
<@wumpus>
petertodd: that's why I suggested "<wumpus> another possibility would be to only refuse to start after 7 years, not stop if already running"
<@wumpus>
in any case, seems there are too many drawbacks to this
< sipa>
wumpus: so all you need to do after 7 years is find a remote crashing bug, and use it on every remaining node (and finding a remote crash bug for 7 yo software doesn't sound hard...)
< sipa>
:)
< petertodd>
wumpus: I'd think nodes get restarted reasonably often, and often by automatic means
< BlueMatt>
wumpus: yes, your proposal i could get behind, mostly because it would inform users that they can add a conf option, making the "refuse to start" thing kinda moot, while still being really insistent on telling users to upgrade, which is fine
<@wumpus>
BlueMatt: I think that's acceptable after 7 years
< sipa>
i don't think that there is much difference between refusing to start vs stopping to work
<@wumpus>
come on, 7 years is an eternity on the internet, we shouldn't be too childish about this
< sipa>
especially at that timeframe
<@wumpus>
sipa: right
< BlueMatt>
sipa: I tend to disagree
< jnewbery>
wumpus: not helpful. I think people are raising legitimate concerns
<@wumpus>
a startup check is simpler to implement and less error prone though
< BlueMatt>
(re difference between startup and forced shutdown)
<@wumpus>
jnewbery: really, a legitimate concern, that people are running 7 year old software in production?
< BlueMatt>
wumpus: yes
< jnewbery>
a legitimate concern that devs don't have the right to "force" users to do something
<@wumpus>
sometimes it's just like people are just making up unlikely things just to argue
< instagibbs>
and can't be bothered to click through, or set a flag?
< BlueMatt>
wumpus: 7 years is, in fact, *not* an eternity on the internet
< instagibbs>
I understand it would need to be done right, but that's a little nuts.
<@wumpus>
it wouldn't be forcing to do anything, as there is an override
<@wumpus>
sigh
< BlueMatt>
instagibbs: click through fine, shutdown *while running*?
< luke-jr>
BlueMatt: it's only slightly shorter than Bitcoin's current lifetime
<@wumpus>
BlueMatt: okay, what ever
< instagibbs>
BlueMatt, sure, that's another dot on the matrix, I agree on that one
< jtimon>
it should still be relatively easy for users to get out of the stuck situation in case they can't upgrade in the same system or something, like maybe deleting a file named ~/.bitcoin/DELETE_ME_ONLY_IF_YOU_CANT_UPGRADE_IN_THIS_SYSTEM or something
< luke-jr>
jtimon: yes, it's already easy to override
< BlueMatt>
jtimon: conf flag seems neater, no one checks their bitcoin datadir
< luke-jr>
we can make it easier perhaps by taking a YYYY instead of POSIX timestamp
< petertodd>
gmaxwell: btw, re: your comment in the meeting, no-one's funding me to do any work on Bitcoin Core these days
< jtimon>
luke-jr: BlueMatt: great
< luke-jr>
jnewbery: nobody is forcing users to run Core at all.
< BlueMatt>
wumpus: on a less controversial note, #7729 is ripe for rebase-and-review, no? or are we now so bogged down on major features that need review you're waiting? :/
< jnewbery>
luke-jr indeed, and I don't think core devs should attempt to force users to upgrade
< luke-jr>
jnewbery: we're not. by running Core, they would be opting in to being forced to either upgrade OR tell the software they don't want to
<@wumpus>
BlueMatt: yes, I tried once, but so much moved around since that it's pretty much "re-do" instead of rebase
< BlueMatt>
grr, sorry about that :/
< luke-jr>
"forced" is really the wrong word here
< BlueMatt>
luke-jr: that's ridiculous, you're living in a world where people have the time to go read a ton of bitcoin core docs/code before running it, and do... neither of which is true
<@wumpus>
I don't think we're going to agree on this luke-jr
< jtimon>
yeah, as BlueMatt said, being annoying about upgrading is fine
< * BlueMatt>
goes back to trying to make a dent in the review pile
< luke-jr>
BlueMatt: not before, simply read a short notice when it tells them to make a decision
<@wumpus>
when it deteriorates to arguing about what words to use it's better to just stop
< luke-jr>
jtimon: well, this proposal comes down to being annoying at worst, and BlueMatt doesn't like it either
< BlueMatt>
luke-jr: which proposal now? refusing to startup or crashing while running?
<@wumpus>
BlueMatt: IIRC I don't think any of the wallet changes that complicate rebasing #7729 are your fault, just a lot happened there
< BlueMatt>
but, yea, I'm gonna go back to review, this has devolved
<@wumpus>
BlueMatt: yeah embrace and extinguish, round N
< michagogo>
Really? I didn't know that.
< michagogo>
I remember when WSL was pretty new I tried it and it worked perfectly
<@wumpus>
it worked at some point
< BlueMatt>
it worked when they made the press release for all the people to try it, after that they only cared that it appeared to work so that windows folks build software that doesnt really work on linux, but claims to :p
< michagogo>
It's times like this that I wish I had Windows 10 on my computer
< BlueMatt>
ewwwwwww
< BlueMatt>
thats worse than starbucks coffee!
< michagogo>
I mean, I'm on Win7 right now
< BlueMatt>
ouch
< michagogo>
Haven't upgraded for two reasons. First, I don't use my computer at home so much lately (= the last couple years) and don't have so much time to spend with it to try and fix it if it gets messed up, smooth out the kinks, etc.
< michagogo>
Second, at work I have 7, and I'm concerned that one of two things will happen
< edcba>
7yrs software in prod rotfl: in 2016 i was supporting xp at work...
< jtimon>
is there an issue in win7's github for the bug on their side? let's just go concept ack there
< michagogo>
Either it'll be too different and I won't adjust to it because I'll still be mostly using 7, and then my time with it will be annoying, or, it'll be amazing and I'll get used to having stuff from it, and then be sad when I go back to 7 at work
< michagogo>
"win7's github"?
< jtimon>
sorry, bad joke
< BlueMatt>
michagogo: i think it was a joke :p
< BlueMatt>
jtimon: i found it comical
< michagogo>
Wait, which bug?
< michagogo>
I first thought you meant Windows 10
< jtimon>
BlueMatt: oh, thanks, I read your "I think" as a confirmation that it was bad
< michagogo>
Because if Core builds on Ubuntu but not in WSL, that *is* a bug in WSL
<@wumpus>
after all we did a gitian build very recently, which uses the same version of ubuntu (14.04)
<@wumpus>
maybe it's some magic with line endings or such
< michagogo>
I think as of Creator Update, a couple weeks ago, new WSL installs are Xenial
<@wumpus>
ugh
< michagogo>
Ugh? Why?
<@wumpus>
the windows toolchain on xenial is kind of broken
< BlueMatt>
no wonder wsl is broken
<@wumpus>
yes, that explains it then
< michagogo>
Really? Have bugs been filed?
<@wumpus>
they can be happy they don't get executables
<@wumpus>
I don't think it was ever narrowed down
< michagogo>
Is there anything different besides gcc being replaced by mingw-w64?
<@wumpus>
it's a newer version of mingw-w64 that doesn't work so well
< michagogo>
I was thinking of upgrading my Ubuntu VM (that I use mostly for gitian building) from Trusty to Xenial - should I not do that?
<@wumpus>
fstack-protector produces broken executables, and there was some c++11 threading snafu
<@wumpus>
could be that these are solved by now, I have no idea, haven't tried it for months
<@wumpus>
you can't use xenial for gitian building without everyone else changing too
< michagogo>
I meant for the host
<@wumpus>
oh the host can be anything
< michagogo>
Not changing the target container
<@wumpus>
I use debian for the host
< jtimon>
wumpus: btw, it would be nice to put BlueMatt 's libconsensus-related PRs in https://github.com/bitcoin/bitcoin/projects/6 (and always tag anything there with "consensus")
< michagogo>
I wonder if I should delete my precise sandbox VM
<@wumpus>
michagogo: yes, why not
<@wumpus>
jtimon: ok
< jtimon>
wumpus: thanks, I mean, not a priority, just would be somewhat helpful
< jtimon>
btw BlueMatt thanks for https://github.com/bitcoin/bitcoin/pull/771 I shouldn't look at it much and look at the new things instead but like these things if I find the time
< BlueMatt>
jtimon: lol, that was so long ago even I dont remember how it was designed
< jtimon>
but you said it was based on it, so I was hoping some ideas remained
< jtimon>
that's what makes it more interesting IMO, it could be the same general idea with very different code
< jtimon>
or more or less the same code, which I would find more amusing
< sipa>
jtimon: i think the extent of the similarity is "encapsulate all the state related to blockchain consensus"
< achow101>
wumpus: michagogo windows cross compile on ubuntu xenial is still broken.
< achow101>
on wsl, there's a different problem with depends failing
<@wumpus>
achow101: yes, seems to be different issues - confusing
<@wumpus>
building on xenial, for xenial works fine in any case
<@wumpus>
it only breaks with windows in the picture (either as cross compile or WSL)
< michagogo>
Hm, if I have a VM with a snapshot and a bunch of changes since the snapshot, and I want to create a snapshot of the current state and delete the existing one
< michagogo>
Does it matter at all whether I delete the current one before or after taking the new one, in terms of final size?
< bitcoin-git>
[bitcoin] practicalswift closed pull request #9544: [trivial] Add end of namespace comments. Improve consistency. (master...consistent-use-of-end-of-namespace-comments) https://github.com/bitcoin/bitcoin/pull/9544
< bitcoin-git>
[bitcoin] practicalswift reopened pull request #9544: [trivial] Add end of namespace comments. Improve consistency. (master...consistent-use-of-end-of-namespace-comments) https://github.com/bitcoin/bitcoin/pull/9544