<@wumpus>
it's essential if you want to add just some changes in a file to a commit but not all. Only discovered this recently, don't ask what hacks I was doing to accomplish that before...
< sipa>
you can also use it to split existing commits
< sipa>
during a rebase, while editing a commit, use git reset HEAD~
< sipa>
then git add -p to select the changes to be in the first half
< sipa>
git commit -m first; git commit -am second
<@wumpus>
'git gui' can also be used for this (you can stage and unstage individual lines, even), but I usually work in the console so that was kind of annoying
<@wumpus>
ah great
< sipa>
and then git rebase --continue
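sipa's recipe above, collected into one self-contained script. All file names and commit messages here are invented for illustration; in real use you would run `git add -p` to pick hunks interactively, whereas this script stages a whole file so it needs no input.

```shell
#!/bin/sh
# Demo of splitting a commit during an interactive rebase.
# File names and messages are made up for illustration.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo

printf 'a\n' > a.txt
printf 'b\n' > b.txt
git add . && git commit -qm base

# One commit touching two tracked files -- the commit we want to split.
printf 'a2\n' > a.txt
printf 'b2\n' > b.txt
git commit -qam combined

# Start the rebase, marking the tip commit `edit` non-interactively.
GIT_SEQUENCE_EDITOR='sed -i s/^pick/edit/' git rebase -qi HEAD~1

git reset -q HEAD~      # undo the commit, keep its changes in the worktree
git add a.txt           # (interactively you would use: git add -p)
git commit -qm first    # first half
git commit -qam second  # second half: everything that remains
git rebase --continue   # resume the rebase

git log --format=%s     # newest first: second, first, base
```

Note that `git commit -am second` only picks up changes to already-tracked files; if the remaining half adds new files, `git add` them explicitly before committing.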
<@wumpus>
is there anything (almost) ready for merging?
<@wumpus>
thought #10143 was, but needs some minor changes still
< gribble>
https://github.com/bitcoin/bitcoin/issues/10226 | wallet: Use boost to more portably ensure -wallet specifies only a filename by luke-jr · Pull Request #10226 · bitcoin/bitcoin · GitHub
< bitcoin-git>
bitcoin/master a4186dd Luke Dashjr: wallet: Use boost to more portably ensure -wallet specifies only a filename
< bitcoin-git>
bitcoin/master 64c45aa Wladimir J. van der Laan: Merge #10226: wallet: Use boost to more portably ensure -wallet specifies only a filename...
< bitcoin-git>
[bitcoin] laanwj closed pull request #10226: wallet: Use boost to more portably ensure -wallet specifies only a filename (master...refactor_wallet_pathsep) https://github.com/bitcoin/bitcoin/pull/10226
<@wumpus>
jonasschnelli: and yes I think it can be a useful feature
< jonasschnelli>
wumpus: Thanks for the review.
< jonasschnelli>
Will fix your points soon
< instagibbs>
wumpus, git add -p also works line by line, in manual edit mode
< instagibbs>
's' to keep splitting chunks; if that's not granular enough, 'e' for the manual editor. A bit confusing at first but the description is at the end.
<@wumpus>
instagibbs: ah :) thanks
< BlueMatt>
sipa: re #10148: I'm still super unconvinced that multi-head is worth it. in the future optimization space of "flush in chunks in the background", there is relatively little harm in requiring that mid-flush-states be in only one direction - either you're during ibd, in which case I'd certainly hope this is already the case, or you're not, in which case it should be relatively rare that your disk can't keep up and enforcing a
< BlueMatt>
fully-flushed state prior to block disconnect seems perfectly reasonable
< BlueMatt>
sipa: not to mention multihead is super hard to review right now since we dont even have the write side of it implemented
< BlueMatt>
so the assumptions on the read side just seem arbitrary
< BlueMatt>
(and complicated)
< sipa>
BlueMatt: i don't think it's reasonable to require a full flush before a reorg
< sipa>
reorgs shouldn't lead to network-wide slowdowns
< sipa>
and the write side is partially implemented; i'm writing a test for it now
< BlueMatt>
sipa: why? if your node can't finish flushing its state before the next block comes in, 9 times out of 10 you have no hope of staying up to date anyway
< sipa>
BlueMatt: the idea is that you'd be flushing continuously
< sipa>
and never fully flush
< BlueMatt>
sipa: why?
< BlueMatt>
why would you want to never fully flush
< BlueMatt>
that'd mean startup would always be painfully slow
< sipa>
because wiping your cache is retarded
< BlueMatt>
well then dont wipe cache when you flush :)
< BlueMatt>
thats an orthogonal issue, i think
< sipa>
it isn't
< sipa>
if you don't wipe when you flush, you need to flush more frequently
< sipa>
and we've benchmarked that (before non-atomic flush)... it's slower
< BlueMatt>
what happened to always flushing in the background?
< sipa>
we're already flushing outside of normal block connection
< sipa>
it's still just more CPU to not wipe your cache
< morcos>
sipa: at the very least this is an unnecessarily complicated optimization for right now
< sipa>
(again, before non-atomic)
< morcos>
this code is difficult to reason about with high certainty and i think BlueMatt is right that multi-head really compounds that
< morcos>
If we decide we need it later, we can discuss it then...
< sipa>
morcos: i think both the test code i'm writing and the recovery code would be hardly any simpler without it
< sipa>
it still needs to be able to deal with reorgs, just not multi-headed ones
< morcos>
Ideally I think we would do #10195 first and not make that contingent on 10148 (w or w/o multihead)
< morcos>
the issue i see with 10148 is that it seems like a lot of complication to solve a relatively simple problem about the way leveldb flushes... it seems to me it would make more sense to review flush/don't erase strategies with 10195 as a first step
< BlueMatt>
sipa: I'm more than happy to revisit multihead after per-utxo, but right now I'm super worried about the complication in it
< sipa>
BlueMatt: if i put an assert(blockhead.size() == 2), would you be happy about it?
< morcos>
maybe we eventually want to do 10148, but i'm hesitant to rework cache writing in 2 big ways at the same time...
< sipa>
i think 10148 is much more urgent
< sipa>
it dramatically changes our memory usage
< morcos>
so does 10195?
< sipa>
how so?
< morcos>
for the same cache performance you can have a much smaller cache
< sipa>
eh, slightly
< sipa>
anyway, i'm happy to rebase 10195 without 10148
< sipa>
i just assumed we'd want 10148 sooner
< BlueMatt>
sipa: possibly I'd be ok with an assert like that, but then why have the much more complicated code in our consensus logic when we can push that review to when we actually remove the assert, right where it should be? otherwise you have dormant code and people won't sufficiently review it when it becomes active
< morcos>
really? only slightly? i haven't tested 10195 yet, but it was my memory that a lot of the cache mem usage was taken up with dead weight from txs you bring along
< BlueMatt>
I tend to agree that I like 10148, personally
< BlueMatt>
i mean both, dont really care tooo much about order, just dont think we need to worry about multi-head
< sipa>
morcos: but with 10195 you get duplication instead
< morcos>
in memory?
< sipa>
yes
< morcos>
oh i haven't reviewed 10195 yet, i was envisioning a structure without duplication
< sipa>
if multiple outputs for the same tx are in the cache, they become independent entries
< morcos>
like a multi level cache
< sipa>
we can think about that later, first switch the model to be per-txout
< morcos>
ok, well i'll shut up until i review more... but i have to say, the complication of 10148 turns me off from getting to 10195!
< sipa>
have you seen the commit i just added to 10148?
< sipa>
well, yesterday
< morcos>
Yes... And the part that is confusing is the "Multiple reorganizations" part
< morcos>
That's still not clear to me, exactly what scenario you guys are envisioning that results in that
< BlueMatt>
frankly, we only have 2.5 months before 0.15 feature freeze, and we're gonna get in half of what people are PRing these days
< sipa>
multiple reorgs, with partial flushing
< morcos>
the rest of that comment is very good though
< * BlueMatt>
is entirely unconvinced thats a thing we need to ever worry about
< BlueMatt>
unless we never fully flush, but if we never fully flush startup will /always/ be painfully slow
< BlueMatt>
which i also dont think is acceptable
< sipa>
how so?
< sipa>
it would just lag a few blocks behind
< sdaftuar>
how would it only be a few blocks behind?
< sipa>
at least it's configurable... during IBD it could lag behind more
< BlueMatt>
during ibd you only need single-head, though, I think
< BlueMatt>
it should be pretty much all serial
< sipa>
sure, but i don't want two separate recovery algorithms
< BlueMatt>
huh?
< morcos>
I just think it's too many steps at once... Matt and Suhas are trying to explain to me the series of events that happens here and I just don't understand
< BlueMatt>
sipa: I'm saying never support recovery of multiple reorgs at once
< sipa>
BlueMatt: it's maybe 5 lines less code!
< sipa>
instead of rolling back from one branch, you roll back from all of them
< morcos>
I don't understand what a partial flush is
< sipa>
morcos: write some of the entries in your cache to disk
< sipa>
instead of all
< morcos>
And then what?
< BlueMatt>
and then...thats it?
< sipa>
continue
< sipa>
you're running out of memory, pick a few dirty entries in the cache, and write those to disk
< sipa>
and perhaps delete a few non-dirty entries
< sipa>
and continue
< sipa>
there are tons of tweaks and analysis possible to find good strategies
< sipa>
but non-atomic flushing makes all of it safe
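The partial-flush strategy sipa describes (when memory runs low, write a few dirty cache entries to disk, optionally evict a few clean ones, and continue) can be sketched roughly as follows. This is NOT Bitcoin Core's cache code; every name here (`UtxoCache`, `CacheEntry`, the batch sizes) is invented purely to illustrate the idea.

```python
# Illustrative sketch of partial flushing: instead of flushing and wiping the
# whole cache atomically, write a few dirty entries and evict a few clean ones.
from dataclasses import dataclass

@dataclass
class CacheEntry:
    value: bytes
    dirty: bool = False   # True if not yet written to disk

class UtxoCache:
    def __init__(self, disk: dict, limit: int):
        self.disk = disk                            # stand-in for the database
        self.entries: dict[str, CacheEntry] = {}
        self.limit = limit                          # entries before we flush

    def write(self, key: str, value: bytes) -> None:
        self.entries[key] = CacheEntry(value, dirty=True)
        if len(self.entries) > self.limit:
            self.partial_flush(batch=2)

    def partial_flush(self, batch: int) -> None:
        # Write a few dirty entries to disk; they stay cached, but clean.
        for k in [k for k, e in self.entries.items() if e.dirty][:batch]:
            self.disk[k] = self.entries[k].value
            self.entries[k].dirty = False
        # Evict a few clean entries to actually free memory.
        for k in [k for k, e in self.entries.items() if not e.dirty][:batch]:
            del self.entries[k]
```

The point of the surrounding discussion is that each `partial_flush` leaves the on-disk state mid-way between two in-memory states, which is exactly why the non-atomic-flush recovery logic in #10148 is needed to make crashes safe.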
< BlueMatt>
sipa: but 20 more edge cases in consensus code, 30 more lines of comments to explain why it's ok. the previous stuff was very reviewable, now it's super tricky unless we /also/ implement the write side (I know you said you have that, but can we leave that for a separate PR?)
< morcos>
So while you are writing those entries, no one else is modifying the cache... but then you give up the lock before you've flushed the whole cache, so then when people modify the cache more.. ok i guess i get it.
< morcos>
But why can't we save that for a later "improvement" , why is it necessary now?
< sipa>
morcos: because the algorithm is the same
< sipa>
can we just put an assert in it, that forces it to be the simple case?
< sipa>
so you can review it assuming it only needs to deal with the simple case
< morcos>
What am I missing, why do you want to have the code there if we are not using it?
< sipa>
it's 5 lines...
< sipa>
yes, i can remove it
< sipa>
but it's nice to at least have the database format support multihead, so adding it later isn't yet another backward-incompatible upgrade that needs migration code
< BlueMatt>
hardly? just means you can't downgrade after an unclean shutdown?
< * BlueMatt>
is much less concerned about that
< sipa>
plus new code that needs to deal with the old case
< BlueMatt>
but maybe others are?
< sdaftuar>
you can still downgrade with a -reindex-chainstate!
< sipa>
ok...
< sipa>
would you be fine with a database format that just has a record saying "rollback block X, rollforward block Y, rollback block Z", and the recovery code literally follows those steps?
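The recovery record sipa floats here could be replayed literally, something like the sketch below. This is a hypothetical illustration of the proposed format, not Bitcoin Core code; the function name and journal shape are made up.

```python
# Sketch of replaying an explicit recovery journal of the form
# [("rollback", X), ("rollforward", Y), ...] against a chain of block ids.
def recover(chain: list[str], journal: list[tuple[str, str]]) -> list[str]:
    """Apply each journal step in order; the chain is a list of block ids,
    newest last."""
    for op, block in journal:
        if op == "rollback":
            # We can only roll back the current tip.
            assert chain and chain[-1] == block, "journal does not match tip"
            chain.pop()
        elif op == "rollforward":
            chain.append(block)
        else:
            raise ValueError(f"unknown step {op!r}")
    return chain
```

For example, replaying `[("rollback", "B"), ("rollforward", "B2")]` against a chain ending in `B` reorgs the tip to `B2`.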
< sipa>
actually, that's pretty complicated on the write side
< BlueMatt>
possibly, though the "might have had stuff from a future branch way down the line partially flushed" stuff just makes it harder to review consensus-critical crap
< sipa>
sigh, ok, i'll try to simplify the code as much as possible to only deal with a single reorg
< morcos>
sipa: i'm not sure that your approach is wrong.. i think its just a lot to hit someone all at once in thinking about it
< sipa>
morcos: fair enough
< BlueMatt>
sipa: thanks, now lets get this all merged for 0.14 =D
< BlueMatt>
ehh, 15
< morcos>
you might be right that it is a more elegant approach if you have this end goal in mind down the road
< morcos>
but we're stuck trying to recreate the logical progression you went through to end up there
< sipa>
understood
< sdaftuar>
sipa: did you ever test the performance of flush-without-wiping with a per-utxo model?
< morcos>
that said... i'm getting more comfortable with it after hashing it out a bit
< sipa>
sdaftuar: i haven't
< morcos>
sipa: another simple improvement would be, instead of having a 2.0x factor for cache memory usage, to just track usage = all cache usage + dirty coins usage
< sipa>
morcos: i really hope that the non-atomic flushing after simplifying will be simple enough to be reviewed
< sdaftuar>
wumpus: i think #9942 is ready for merge
< gribble>
https://github.com/bitcoin/bitcoin/issues/10179 | Give CValidationInterface Support for calling notifications on the CScheduler Thread by TheBlueMatt · Pull Request #10179 · bitcoin/bitcoin · GitHub
< jonasschnelli>
BlueMatt: Oh... I totally forgot that one.. thanks. Will try.
< BlueMatt>
jonasschnelli: that doesnt do anything by itself
< BlueMatt>
the real change is the pr after it
< BlueMatt>
but its blocking :)
< jonasschnelli>
Okay... thanks.
< jonasschnelli>
I think the freezes i face are caused by something different than 9583
< cfields>
jonasschnelli: 10228 just keeps it from happening in the future. Problem is that your bitcoin-config.h is busted atm
< cfields>
(i strongly suspect, anyway)
< jonasschnelli>
ah... I see. That seems reasonable...
< cfields>
jonasschnelli: specifically MSG_DONTWAIT. Are you seeing a warning about it when you build?
< jonasschnelli>
oh.. maybe. :|
< cfields>
if so, this is your problem, and autogen will fix you right up.
< bitcoin-git>
[bitcoin] jonasschnelli opened pull request #10231: [Qt] Reduce a significant cs_main lock freeze (master...2017/04/qt_freeze) https://github.com/bitcoin/bitcoin/pull/10231
< cfields>
jonasschnelli: why not subscribe to UpdatedBlockTip() and cache it there?
< jonasschnelli>
cfields: IMO we don't have a signal that covers the bestheader (not bestblock)
< jonasschnelli>
ermm... we have NotifyHeaderTip
< jonasschnelli>
cfields: Do you think it would be preferable to cache it in the GUI clientmodel instead of the core validation part (validation.cpp)?
< cfields>
jonasschnelli: heh, I just grepped to the same conclusion :)
< jonasschnelli>
cfields: Yes. I could cache it in "static void BlockTipChanged(ClientModel *clientmodel, bool initialSync, const CBlockIndex *pIndex, bool fHeader)"
< jonasschnelli>
when fHeader == true
< cfields>
jonasschnelli: i think it makes sense for each subsystem to maintain their own view of those things, yes
< jonasschnelli>
Yes. Let me change this... I don't think there are valid use cases outside the GUI
< cfields>
(not sure everyone would agree with that, but that's the direction we've been moving in)
< jonasschnelli>
cfields: indeed.
< cfields>
jonasschnelli: the exception obviously being if you need atomic access to a few locally cached things, in which case you have to be careful to keep everything in sync. But if it's just for the gui, I think caching it there makes sense.
< jonasschnelli>
Yes. Right... i'll change the PR
< cfields>
jonasschnelli: any reason we can't do the same thing for the other cs_main takers in there?
< jonasschnelli>
cfields: Yes. We should do similar things there... i'll have a look
< cfields>
(adding new ui events, i mean)
< cfields>
jonasschnelli: great :)
< morcos>
Here is my attempt at a high level description of the current fee estimation algorithm.
< bitcoin-git>
[bitcoin] jnewbery opened pull request #10234: [net] listbanned RPC and QT should show correct banned subnets (master...list_banned_correctly) https://github.com/bitcoin/bitcoin/pull/10234
< sipa>
BlueMatt, morcos, sdaftuar: i hope it's easier to review now
< bitcoin-git>
[bitcoin] TheBlueMatt opened pull request #10235: Track keypool entries as internal vs external in memory (master...2017-04-wallet-more-keypool-cache) https://github.com/bitcoin/bitcoin/pull/10235
< sipa>
i'll leave the WIP marker in until I have a test for the reorganization
< jonasschnelli>
though the argument of "little money" is dangerous..
< sipa>
sdaftuar, BlueMatt: thank you
< jonasschnelli>
Years back some of us probably had little money on VPSes which would now be worth a six-digit number. :)
< jonasschnelli>
And while you move your keys away from your VPS,... there is no guarantee they were not compromised..
< jonasschnelli>
the security requirements can change over time and most people won't sweep the funds to a new address
< * sipa>
remembers the linode hack
< sipa>
but agree that there is no argument why it shouldn't work
< jonasschnelli>
But I know... i also run nodes on external root servers and sometimes they have some test mainnet coins
< BlueMatt>
jonasschnelli: I'm not saying its a *good* idea, just that if people want to do it we shouldnt try to break it
< sipa>
new key creation performance has been horrible for ages
< BlueMatt>
(some people buy insurance, too :p)
< sipa>
we should improve that when we can, period
< jonasschnelli>
Yes. Sure... I just think in general it's good to show some critical response whenever someone mentions AWS or Azure. :)
< BlueMatt>
fair
< sipa>
btw
< * sipa>
had a bitcoind with wallet on a vps, and its money was stolen
< jonasschnelli>
see! :-)
< BlueMatt>
lol
< sipa>
though i do consider that my own damned fault
< jonasschnelli>
so it even happens to core devs. :)
< gmaxwell>
BlueMatt: I don't understand your complaint about multiple head. It's strictly safer than not having it.
< gmaxwell>
I agree it will be easier to reason about the effects of multi-head after we're per-txout, and I expect we won't make intentional use of it until then.
< sipa>
gmaxwell: multi-head is only needed once we do multiple partial flushes
< sipa>
gmaxwell: we *do* need reorg support immediately, but not necessarily support for multiple reorgs at once
< BlueMatt>
gmaxwell: I'm happy to re-review post-per-utxo
< BlueMatt>
but right now the review burden is high, and I'm not convinced of its usefulness unless we actually have a write side that we're gonna merge :)
< sipa>
BlueMatt: well the advantage would be not breaking backward compatibility
< sipa>
but per-txout will break that anyway
< BlueMatt>
yes, also not sold on that :)
< BlueMatt>
indeed
< gmaxwell>
the same code handles reorgs either way; the extra stuff that does things for multi-head is just a no-op otherwise.
< BlueMatt>
gmaxwell: can we not have no-ops in consensus-critical logic?
< gmaxwell>
so then if there is some corner case we should corrupt the state rather than just handling it?
< sipa>
gmaxwell: there is no way we can now end up in a situation that needs multi-head support
< sipa>
unless there is a bug in the code, and in that case it's unlikely the multi-head code actually does the right thing
< sipa>
my reason for having it in the first place was because i don't consider it much more complicated, so it's easier to get it in now than changing it again later
< sipa>
but it seems people disagree it's easy to understand
< luke-jr>
wumpus: might make sense to backport #10207
< gribble>
https://github.com/bitcoin/bitcoin/issues/10207 | Clarify importprivkey help text ... example of blank label without rescan by wtogami · Pull Request #10207 · bitcoin/bitcoin · GitHub
< luke-jr>
why is abortrescan not allowed in safe mode?
< sipa>
lol
< jonasschnelli>
luke-jr: heh... is that already merged?
< luke-jr>
seems so
< luke-jr>
Travis jobs are randomly cancelling?
< BlueMatt>
luke-jr: they do that automagically if you force push before they run now
< luke-jr>
BlueMatt: it doesn't *look* like jonasschnelli is force-pushing on #10231