< BlueMatt> cfields: I mean mine's like 3 LOC long, but, really, I dont care....I have nothing in 15 which is in any way related to that, and only a 16 stretch goal which needs a fix there
< cfields> BlueMatt: ok. I'm actually finishing it up now
< cfields> it's really big, but very worth it imo. It'll just be a big one to review
< BlueMatt> cfields: I'm down for a touch-everything sharedptr change for 16 here
< BlueMatt> yea, I think its worth it
< BlueMatt> I mean isnt it mostly mechanical?
< cfields> nah, there's lots to it :\
< cfields> well, the shared-ptr part is pretty mechanical
< BlueMatt> hmm, ok, will reserve judgement for now, but the idea sounds good
< cfields> but then you still have the refcounting on top
< BlueMatt> and I'm totally for a big overhaul of that shit for 16
< cfields> and once you get rid of that, tons of LOCK(cs_vNodes) go away
< BlueMatt> why bother refcounting on top? just rely on ~CNode and use the shared_ptr refcounting for it?
< cfields> have to control the thread it deletes from
< BlueMatt> ah
< BlueMatt> ok
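(A minimal sketch of the "control the thread it deletes from" idea, not the actual PR: the std::shared_ptr's custom deleter only queues the CNode, and a hypothetical NodeReaper thread performs the real delete, so the last reference can be dropped from any thread.)

    // Hypothetical sketch, not the real change: the shared_ptr deleter only
    // queues the node; one dedicated thread performs the actual delete.
    #include <condition_variable>
    #include <memory>
    #include <mutex>
    #include <queue>

    struct CNode { /* ... */ };

    class NodeReaper {
    public:
        std::shared_ptr<CNode> Wrap(CNode* node) {
            // The last shared_ptr owner may be any thread; deletion is deferred.
            return std::shared_ptr<CNode>(node, [this](CNode* p) { Enqueue(p); });
        }
        void ThreadReap() {  // run this on the one thread allowed to delete CNodes
            for (;;) {
                std::unique_lock<std::mutex> lock(m);
                cv.wait(lock, [this] { return !q.empty(); });
                CNode* p = q.front();
                q.pop();
                lock.unlock();
                delete p;  // destruction always happens here
            }
        }
    private:
        void Enqueue(CNode* p) {
            { std::lock_guard<std::mutex> lock(m); q.push(p); }
            cv.notify_one();
        }
        std::mutex m;
        std::condition_variable cv;
        std::queue<CNode*> q;
    };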
< cfields> anyway, it's nice and tidy imo. Very straightforward. Just large.
< BlueMatt> alright, well will reserve judgement, sounds reasonable in principle, then
< BlueMatt> yea, large is ok...for 16
< BlueMatt> gonna sit for 1.5 months first, then, though :/
< cfields> i don't quite understand that argument. We have a feature freeze for a reason. That's when stuff bakes. It's not a code freeze.
< BlueMatt> hmm? I mean i doubt this would land pre-15, and if it doesnt then we're all gonna be focused on getting lots of testing in on 15 and likely wont review/merge all that much before the final gets shipped
< cfields> sure, if there's no time, that's one thing. But I don't see any need for stuff to bake for a while before feature freeze just to let it bake
< BlueMatt> ohoh, you mean maybe this will land for 15? I mean id be surprised...lots to clean up and try to land before then that is probably higher prio, but, ok, fair :)
< BlueMatt> higher prio in the next 3 weeks with folks in the us on vacation for the next week, even :/
< cfields> well i'd like to do some benches before i push, but i'm thinking that cs_vNodes is blocking a significant percentage of the time
< BlueMatt> oh? alright, well may be worth pushing, then...still, two weeks to merge is an incredibly tight schedule
< cfields> and i have other stuff that relies on this (I was just putting it off until last). But yea, I agree that it's not really hight priority. If it doesn't make it, it doesn't make it.
< cfields> *high
< cfields> I'm mainly just happy that I finally worked out an approach I like. It'll wear off in a few hours :)
< BlueMatt> pr quickly, then
< BlueMatt> and dont close it this time when it wears off :p
< cfields> hehe
< cfields> ok, back to coding/testing. I haven't actually run the thing yet. Time to see how badly it crashes/burns :)
< BlueMatt> lol, sg
< wumpus> whoa, validation really became a lot quicker with recent changes
< sipa> wumpus: IBD or at tip
< sipa> ?
< wumpus> at tip
< wumpus> (a node catching up a few days)
< sipa> ah, i mean while in full sync processing newly mined blocks
< wumpus> it used to bring this particular system to a halt while slowly syncing blocks, now the blocks breeze by
< sipa> if you have slow I/O, pertxout likely helps a lot
< sipa> because it causes far fewer flushes
< wumpus> definitely slow i/o
< luke-jr> I need to kill my btrfs..
< wumpus> but that's great, a lot of users, esp windows users, tend to have slow i/o laptops
< wumpus> so that's a decent thing to optimize for
< wumpus> luke-jr: why?
< luke-jr> wumpus: slow I/O. very slow. :/
< luke-jr> even Chromium gets slow on btrfs home :/
< wumpus> which reminds me I should probably do some Windows testing again pre-0.15 *sigh*
< wumpus> luke-jr: so btrfs is better in everything except performance?
< luke-jr> wumpus: and reliability, I guess. had some kernel panics in btrfs code a few times several months ago
< luke-jr> and someone in #btrfs told me to add a mount option to avoid a very rare data corruption bug
< luke-jr> oh, and ioctls don't work in 32-bit mode, but I use 64-bit now
< wumpus> I've never tried btrfs yet, just sticking with extN on linux, I guess I'm extremely conservative with regard to file systems
< sipa> i tend to use ext4 as root fs, and zfs for all the rest
< sipa> (where "all the rest" is not much)
< luke-jr> hm, it probably doesn't help I/O that I have btrfs compression enabled
< luke-jr> although you'd think modern CPUs would be fast enough it didn't matter
< wumpus> sipa: doesn't zfs on linux require custom patches?
< sipa> not on ubuntu at least
< luke-jr> O.o
< wumpus> luke-jr: for encryption that's mostly true; for compression (especially when writing) I'd expect a noticeable perf loss, while for reading you might get a perf gain in some cases (but only if you store well-compressible files on a slow medium)
< luke-jr> tempting to hack btrfs to notice when files get modified often and not compress those.. but I don't really want to hack on my filesystem, so not happening ☺
< wumpus> also, even if throughput is higher, latency might be worse because of the extra decompression step
< luke-jr> oh, ZFS is the filesystem that's GPL-infringing XD
< luke-jr> prob just go back to ext4
< wumpus> disk compression seems a waste of time in the 201x's, most large files (video, images, music) are already compressed so I'd dare say it helps very little in practice
< wumpus> even if you store raw video/music then generic gzip compression is not very suited
< luke-jr> maybe I should try just turning it off before giving up on it
< wumpus> thinking about it, something it might help with is compiler object files, those tend to be well-compressible
< wumpus> and large (in case of c++)
 * luke-jr changes it to use LZO compression rather than gzip
< luke-jr> (supposedly it's faster)
< sipa> wumpus: i use lzo compression for the blockchain
< sipa> lzo is much faster
< gmaxwell> wumpus: until recently the gap between disk IO speed and the rest of the system was making it so that compression (at least fast compression like LZO) actually increased performance...
< luke-jr> recently = SSD?
< luke-jr> I also made the mistake of moving my home to 5400 RPM.. >_<
< gmaxwell> not just SSD but newer pcie ssds.
< wumpus> gmaxwell: also for writing?
< wumpus> I guess it depends on the kind of data
< wumpus> for reading performance it can be quite obviously true
< wumpus> sipa: custom patch, or at the file system level?
< bitcoin-git> [bitcoin] Mirobit opened pull request #10710: REST/RPC example update (master...docupt) https://github.com/bitcoin/bitcoin/pull/10710
< morcos> instagibbs: I'm trying to review #10333 and I was trying to understand the ReturnKey() behavior. How come it is not a bug (also in master) that we call ReturnKey() even in the case where CoinControl gave us a destination address (so we never reserved a new key)
< gribble> https://github.com/bitcoin/bitcoin/issues/10333 | [wallet] fee fixes: always create change, adjust value, and p… by instagibbs · Pull Request #10333 · bitcoin/bitcoin · GitHub
< instagibbs> calling return is a nop if you haven't marked a key as reserved IIRC
< instagibbs> oh i see what you mean, lemme continue reading...
< morcos> instagibbs: i just figured it out
< morcos> i didn't know how reservekeys work
< morcos> but yeah the reservekey object has nIndex -1 until you ask it to actually get a reserved key
< instagibbs> yep
< morcos> so calling return on it is a no-op
< instagibbs> so yeah i think the logic is right
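(Roughly the behaviour being described, paraphrased from memory of the wallet code of that era rather than quoted verbatim: CReserveKey's nIndex stays at -1 until a key is actually reserved, so ReturnKey() on an unused reservekey does nothing.)

    // Paraphrase of the behaviour, not a verbatim copy of wallet.cpp:
    void CReserveKey::ReturnKey()
    {
        if (nIndex != -1)
            pwallet->ReturnKey(nIndex);  // only if a pool key was actually reserved
        nIndex = -1;                     // otherwise this is a no-op
        vchPubKey = CPubKey();
    }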
< morcos> while i have you, aren't you missing a continue at the bottom of your loop in the subtractFeeFromAmount > 0 case
< instagibbs> it's safe to call returnkey unless you actively are saving a usage
< instagibbs> checking
< morcos> instagibbs: yes agreed on the returnkey
< morcos> after line 2378
< instagibbs> wallet.cpp? That's in SelectCoins on my end
< bitcoin-git> [bitcoin] jnewbery opened pull request #10711: [tests] Introduce TestNode (master...testnode2) https://github.com/bitcoin/bitcoin/pull/10711
< instagibbs> it's just going to loop, right?
< instagibbs> if subtractFeeFromAmount != 0 and you don't have enough fee, it just hits the end and loops
< morcos> but it'll sign first?
< morcos> also you seem to have a regression in the subtractFeeFromAmount > 0 case where you have isDust change. Previously that change was getting deleted and added to fees, now I think you're not doing that
< instagibbs> ah yeah i wrote this previous to that behavior, will fix
< instagibbs> not sure what you mean about signing first, I'm probably looking at the wrong line
< morcos> oh maybe i am, i was looking without whitespace, hold on
< morcos> yeah my fault on that, seems fine
< morcos> We ought to be able to avoid looping completely when subtractFeeFromAmount > 0 though...
< morcos> If we changed the logic when subtractFeeFromAmount > 0 to first check for dust change, and if that exists move it to fee, then reduce the recipient amounts as necessary to get to the fee. (unless the recipient amounts aren't big enough, which is a failure anyway)
< instagibbs> I mean it can still be considered at the end in the "Fee adjustment" stage right?
< morcos> I think it would be better to eliminate dust change to fee first, b/c then you can just subtract less from the recipients (only in the case where subtractFeeFromAmount > 0)
< instagibbs> mm right
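(A hedged sketch of that ordering for the nSubtractFeeFromAmount > 0 case; IsDustChange and ReduceRecipientsBy are placeholder names, not real wallet functions.)

    // Placeholder sketch of the ordering, inside CreateTransaction after coin selection:
    CAmount nChange = nValueIn - nValueToSelect;
    if (nSubtractFeeFromAmount > 0) {
        if (nChange > 0 && IsDustChange(nChange)) {  // IsDustChange: placeholder
            nFeeRet += nChange;  // fold dust change into the fee first...
            nChange = 0;
        }
        CAmount nShortfall = std::max(CAmount(0), nFeeNeeded - nFeeRet);
        ReduceRecipientsBy(nShortfall);  // ...so less is subtracted from recipients (placeholder)
    }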
< morcos> If you have a minute, indulge me for a second while we walk through exactly what scenario 10333 is trying to solve
< instagibbs> so it's the same situation as the previous fix you made, where it grabs a lot of inputs, fails, then next time overpays in fees. Your fix adds fees to existing change output
< instagibbs> This is to make it also cover the "no change output" case
< instagibbs> meaning we calculate what the change would look like every time, adjust as necessary
< morcos> In the no change output case though, doesn't that by definition mean we've tried via the knapsack algorithm to get inputs that add up as closely as possible to the desired target
< instagibbs> for that crazy fee level, yes
< instagibbs> nFeeRet in pathological case can be many times higher than required
< instagibbs> change is only calculated via valuein-valuetoselect, nFeeRet is part of the latter term
< morcos> Yeah maybe it's not made worse by your PR, but it seems like the algorithm that tries to find an exact match is always going to try to find an exact match assuming a change output, and then therefore will always overpay fees by the feerate*1 extra output in the case where we do find a match which doesn't require change
< morcos> but if that amount is > dust, which i think it always is, then we'll always revert to the second step of trying to find MIN_CHANGE?
< morcos> i don't know, i guess it just seems like in some ways its changing the coin selection algorithm
< morcos> maybe not for the worse, but its a weird way to change it
< instagibbs> the real fix is to do effective value selection
< morcos> Yes, that combined with not trying to find an EXACT match, but trying to find a match within some tolerance that you are willing to lose to extra fees
< morcos> EXACT match finding is stupid
< Murch> morcos: BranchAndBound assumes no change output.
< morcos> that tolerance already kind of happens via moving the dust output to fees, but we could make it explicit and do better
< instagibbs> Yeah imo the weakness of 10333 is that it's only needed to avoid stupidity, and will likely poorly interact with real fixes
< instagibbs> right now we're just so bad at estimating, we almost never exact match
< morcos> Murch: yeah i need to review that, but I think it should be aware of the amount it's willing to discard in excess fees, and return success if it finds a result under that tolerance, and if not, assume a change output and aim for TARGET_CHANGE (probably just MIN_CHANGE)
< Murch> instagibbs: If it is only a minuscule amount missing for the fees, wouldn't it perhaps be acceptable to take that from the change output?
< instagibbs> Murch, it already does
< instagibbs> it's the case of it not having a change output to make larger, which is currently just SFYL
< morcos> instagibbs: what would you think about another approach, that perhaps changed the logic less
< Murch> morcos: It allows for excess up to the cost of creating a change output. In my original proposal also the cost of creating an input (spending the output later), but with the high fee rate volatility, maybe only the change is a better measure.
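(A sketch of the acceptance window Murch describes; every name here is a placeholder, not Core code.)

    // All placeholder names; just the shape of the window, not BnB itself.
    CAmount target = nValueToPay + nFeeForTxWithoutChange;       // no change output assumed
    CAmount cost_of_change = FeeToCreateChangeOutput()           // cost of adding change now
                           + FeeToSpendChangeLater();            // (optionally) cost of spending it later
    bool acceptable = selected_value >= target &&
                      selected_value - target <= cost_of_change; // small excess just becomes fee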
< morcos> i'm wary of unintended consequences...
< instagibbs> morcos, very open to it, if you have the idea
< morcos> but suppose in the no-change case if we overpay the fee by too much, we'd just be willing to loop again (to some limit so it's not unbounded)
< Murch> morcos: BranchAndBound is only for the "no-change" case. If you're going to create a change output anyway, I don't understand why you're going to work so hard to minimize it to an arbitrary number. ;)
< morcos> Murch: ok, good that makes sense. Yes. Agreed with that too!
< instagibbs> morcos, maybe keep the caching logic, and just loop?
< instagibbs> with some anti-exhaustion param?
< Murch> I mean, you don't want to have all of your money in transit, but besides that, why is 0.01 BTC a great size for output?
< morcos> actually the caching logic isn't somehting i love
< morcos> it just seems to had logistical complexity
< morcos> s/had/add/
< instagibbs> so any looping change will have to guarantee efficient convergence to a valid tx
< morcos> instagibbs: well in the dumbest implementation... just have a bool, triedAgain, and if nFeeRet < nFeeNeeded OR (nFeeRet > nFeeNeeded + too_much_extra AND !tried_again) then you loop again
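(The same idea spelled out as a sketch; too_much_extra and tried_again are hypothetical, nFeeRet/nFeeNeeded are the existing CreateTransaction variables.)

    bool tried_again = false;
    while (true) {
        // ... coin selection and fee calculation set nFeeRet and nFeeNeeded ...
        if (nFeeRet >= nFeeNeeded) {
            if (nFeeRet > nFeeNeeded + too_much_extra && !tried_again) {
                tried_again = true;  // overpaid badly: allow exactly one more pass
                continue;
            }
            break;  // fee is sufficient and not excessive
        }
        // fee too low: loop again, as the current code already does
    }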
< Murch> morcos: BranchAndBound does an exhaustive search, so looping again will yield no other results. We should just do a stricter limit if we feel that we're overpaying.
< morcos> Murch: we're talking about for 0.15 before BAB
< Murch> ah, I'm sorry.
< instagibbs> morcos, heh, that's basically what I tried earlier
< morcos> instagibbs: do we have a sense for how rare these overpayments are? my gut is they are extremely rare, and just being willing to discard it one time will make them all but disappear
< instagibbs> well, with more complex logic on top
< morcos> instagibbs: and what happened?
< morcos> sorry btw, for engaging on this so late, was too caught up in the fee estimate stuff (so many edge cases to get right there too, but probably this should have been more important)
< instagibbs> appreciate the feedback
< instagibbs> so to happen twice you'd basically have to oscillate twice, and get exact matches twice
< Murch> Here's an idea: How about we just raise the target for the knapsack a little, and use that adjustment to buffer missing fees but remain over MIN_CHANGE?
< morcos> Murch: we allow reducing MIN_CHANGE to MIN_CHANGE/2 to achieve that effect, it's only in the case where we found an exact match but used fewer inputs that we have a problem
< Murch> ah okay
< Murch> sorry, I'm late to the conversation, maybe I'm not helping :-I
< morcos> instagibbs: ok here is an even more obviously correct idea i think
< instagibbs> yeah I'm not quite convincing myself on previous idea
< instagibbs> I'd like to say "no worse"
< morcos> what if we put an additional loop around the section that starts at: const CAmount nChange = nValueIn - nValueToSelect;
< morcos> where we only repeat that section if nFeeRet - nFeeNeeded is too high.. and if it was, then we change nValueToSelect to reflect the new fee, but we don't redo the coin selection
< morcos> This will result in just calculating that now we do have positive change, and creating the change output for us as expected
< morcos> I like avoiding the unintended consequences in either of the other two approaches of redoing coin selection if some criteria happens
< instagibbs> as long as that redefinition doesn't do anything weird, sounds reasonable
< morcos> In this case we're only running coin selection exactly as many times as we currently do, we're just adding a change output if we didn't have one and the fee is way too high
< morcos> you don't have to reuse the same variable
< morcos> structure that little loop to make sense
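(A rough sketch of that inner check as it might sit inside CreateTransaction's existing loop; max_excess_fee is a placeholder threshold and the variable names are approximate.)

    // Inside CreateTransaction's existing loop, where the change section starts:
    const CAmount nChange = nValueIn - nValueToSelect;
    if (nChangePosInOut == -1 && nFeeRet > nFeeNeeded &&
        nFeeRet - nFeeNeeded > max_excess_fee /* threshold, discussed below */) {
        pick_new_inputs = false;               // keep the same coin selection
        nValueToSelect = nValue + nFeeNeeded;  // only set aside the fee we actually need
        continue;                              // rerun change creation, not selection
    }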
< morcos> also i'm happy to write up that idea if you want to take a look
< instagibbs> let me take a look and give it a shot
< morcos> ok, i already started, i'll shoot you a link in a few mins, but i'm happy to let you do it in your PR, i just wanted to see if it would work out easily. it might still have issues
< instagibbs> actually go ahead and work on it, I've got to wrap up work stuff before 4 day weekend
< instagibbs> thanks
< morcos> heh, damn... there are a few details that need to be worked out that i was hoping you'd do.
< instagibbs> hah, just dont tell gmaxwell I'm working on it
< instagibbs> looking
< morcos> if that's right, i think we just need to smartly determine the threshold, and we need to convince ourselves or add logic so it can't get stuck in some infinite loop. i don't know if the pick_new_inputs bool needs to be reset, or if it's too confusing to just be confident it'll complete next time, or what
< instagibbs> smells right, leaning towards "no need to reset" but will need to think more
< morcos> i don't think resetting could hurt... just in an else to if (pick_new_inputs) you reset it to true... but not sure
< morcos> also not sure what to do about the threshold, but i think that problem is no different than 10333, it's just more explicit
< instagibbs> sure, I think reset would be more "default" behavior, easier to review
< morcos> I think something simplish like GetMinimumFee(Approx size of change output) + Min_Change_size (Dust Threshold of change output) is about right
< instagibbs> re: threshold, I think you should also be adding the current nChange in the calc
< morcos> it's maybe a bit hacky, but could be cleaned up once we have effective value machinery in the future
< morcos> huh?
< morcos> there is no current nChange
< morcos> that section is only it if change output position == -1
< morcos> only hit
< morcos> anyway, afk for a few
< instagibbs> right, but that is going to fees, and you may not need to after
< instagibbs> later
< morcos> wait what
< instagibbs> ill annotate on github...
< morcos> ok
< instagibbs> nevermind, was wrong
< sipa> wumpus: zfs
< bitcoin-git> [bitcoin] morcos opened pull request #10712: Add change output if necessary to reduce excess fee (master...addchangehwenoverpaying) https://github.com/bitcoin/bitcoin/pull/10712
< bitcoin-git> [bitcoin] jnewbery closed pull request #10015: Wallet should reject long chains by default (master...walletrejectlongchains) https://github.com/bitcoin/bitcoin/pull/10015
< luke-jr> wumpus: is your current key on bitcoin.org? https://bitcoin.org/laanwj-releases.asc
< bitcoin-git> [bitcoin] jnewbery closed pull request #9943: Make script.py wildcard importable. (master...rpctestsprimitives) https://github.com/bitcoin/bitcoin/pull/9943
< wumpus> luke-jr: that's the release signing key, not my personal key, that's laanwj.asc
< wumpus> but both should be up to date
< luke-jr> wumpus: hmm, someone's saying it doesn't match the UASF sigs
< wumpus> the release key certainly won't
< wumpus> I only use that for sha256sums.asc, which wasn't provided for uasf sigs
< luke-jr> ah
< luke-jr> so he should use the laanwj.asc?
< wumpus> yes
< wumpus> 0x1E4AED62986CD25D is the only subkey I use for signing
< wumpus> strange, I can't get one node to sync from another with master, which works with 0.14
< wumpus> (the node I'm syncing from is not up-to-date, but does that matter? it's sending the 'headers', just seems to get ignored)
< sipa> i believe nodes don't respond to getheaders requests until they're in sync
< wumpus> just trying to do a benchmark, why does this have to be so difficult every time :(
< wumpus> yes, I already patched the client node for this, it responds to all getheaders
< wumpus> I'm sure the headers is being sent, the 0.14 branch node synced fine from it
< sipa> so what is the setup exactly?
< sipa> you have a not-fully synced node A, and a new node B, connected to A?
< wumpus> https://github.com/bitcoin/bitcoin/issues/9923 is the problem you're talking about, but I know about that one :)
< wumpus> yes
< wumpus> A only listens, has a part of the block chain, B only connects, and should sync from A so that they end up with the same blocks
< wumpus> (and most important, same utxo database)
< wumpus> A has blocks up to 430241
< sipa> A sends getheaders, B responds with headers?
< sipa> eh, other way around
< gmaxwell> if you're too far back you'll stay stuck in IBD. In particular you have to at least meet the minimum chain work to get out of IBD.
< wumpus> 2017-06-30 19:22:11 received: getheaders (997 bytes) peer=5
< wumpus> 2017-06-30 19:22:11 getheaders 430241 to end from peer=5
< gmaxwell> and I believe 430241 will not.
< wumpus> 2017-06-30 19:22:11 sending headers (82 bytes) peer=5
< wumpus> (that's on A)
< wumpus> B gets a headers message with one entry: 2017-06-30 19:22:11 ProcessNewBlockHeaders: 000000000000000003f1410b194facc9092a2b76e99db8653ec1a32edfd2ab29
< sipa> that's a remarkably small headers packet
< gmaxwell> if you make IsInitialBlockDownload return true, you'll be good to go; my bet.
< wumpus> (that's the blockhash for block 430241 )
< gmaxwell> er return false.
< wumpus> on A or B?
< sipa> so it looks like B already has all the headers?
< sipa> (up to 430241)
< gmaxwell> on the thing with 430241 blocks (I believe that is A?)
< wumpus> sipa: seems so! I can delete B's state and retry if that helps
< sipa> wumpus: what does getblockchaininfo on B say?
< gmaxwell> (though if B has the headers this sounds like it might be something else!)
< sipa> or even getchaintips
< wumpus> "headers": 430241,
< wumpus> so it has all the headers, but only blocks up to the genesis block
< wumpus> so the problem is in B which is not requesting any blocks
< sipa> so while B is in IBD, there are some restrictions on where it downloads from
< wumpus> gmaxwell: yes I already patched out the code on A that would make it refuse to send headers when in IBD, I struggled with that one too many times to forget
< gmaxwell> or it did but A didn't reply? (I assume you have debug=net and so you can tell?)
< sipa> B only has one connection?
< sipa> wumpus: getpeerinfo on B does not list any blocks in flight?
< wumpus> sipa: yes, B can get only one connection, and nothing inflight
< wumpus> gmaxwell: no getblocks
< sipa> does getchaintips say anything about invalid chains?
< gmaxwell> sweet, sounds like a bug.
< wumpus> getchaintips for A and B (guess B is the relevant one) https://0bin.net/paste/6Kqmm+1VMAhkOTJy#T9erOX4pcLnHu0uW++cugcWEkwDSFdKPlrefkP1nL86
< sipa> to debug (if the behaviour remains after a restart of B), perhaps add LogPrintf's to all the return statements in FindNextBlocksToDownload on B
< sipa> there are many cases in which it decides not to return anything
< sipa> it'd be helpful to know if a) it is being invoked and b) if yes, why it does not return anything
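(For example, on one of those early returns; the log text is made up.)

    // Illustrative only: tag an early return so the -debug=net log explains
    // why FindNextBlocksToDownload requested nothing from this peer.
    if (state->pindexBestKnownBlock == nullptr) {
        LogPrintf("FindNextBlocksToDownload: peer has no known header tip\n");
        return;
    }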
< sipa> getchaintips looks totally normal
< gmaxwell> or just attach gdb breakpoint at FindNextBlocksToDownload and restart node A and step through it?
< wumpus> trying with a C (fresh B) first, to see if getting stuck after the headers is reproducible
< wumpus> yep, same problem second time
< wumpus> received all the headers but making no block requests
< sipa> wumpus: i'll give you a patch to debug
< sipa> ah, i think i found it
< sipa> if (state->pindexBestKnownBlock->nChainWork < UintToArith256(consensusParams.nMinimumChainWork)) { // This peer has nothing interesting
< gmaxwell> ah!
< wumpus> "This peer has nothing interesting."
< wumpus> yes, just narrowed it down to there
< wumpus> 0.14.2 thinks the peer does have something interesting
< gmaxwell> yea, thats it. e2652002b6011f793185d473f87f1730c625593b added that.
< gmaxwell> --
< gmaxwell> Delay parallel block download until chain has sufficient work
< gmaxwell>
< gmaxwell> nMinimumChainWork is an anti-DoS threshold; wait until we have a proposed
< gmaxwell> tip with more work than that before downloading blocks towards that tip.
< gmaxwell> --
< wumpus> just going to delete that code for now, see if it works then
< gmaxwell> basically it impedes a dos attack where I make a long fork of low difficulty blocks with an asic miner and run you out of diskspace fetching those blocks.
< wumpus> can we please have a mode for benchmarking that disables all these annoying checks :)
< gmaxwell> wumpus: IIRC sdaftuar wanted a hidden commandline option to let you set the nMinimumChainWork to another value.
< wumpus> now there's this one, as well as #9923, every time it becomes more difficult to sync nodes to each other in synthetic circumstances
< gribble> https://github.com/bitcoin/bitcoin/issues/9923 | Whitelisting outgoing connections · Issue #9923 · bitcoin/bitcoin · GitHub
< wumpus> ok
< sipa> #10357
< gribble> https://github.com/bitcoin/bitcoin/issues/10357 | Allow setting nMinimumChainWork on command line by sdaftuar · Pull Request #10357 · bitcoin/bitcoin · GitHub
< gmaxwell> hurray I remembered who was doing what for once.
< wumpus> lol apparently that was a bad idea, now it segfaults
< gmaxwell> you can't remove the null check. :P
< gmaxwell> git revert e2652002b6011f793185d473f87f1730c625593b
< wumpus> 478 state->pindexLastCommonBlock = chainActive[std::min(state->pindexBestKnownBlock->nHeight, chainActive.Height())];
< wumpus> right
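(What the null-check remark refers to, roughly, with the surrounding code recalled from memory rather than quoted: the nMinimumChainWork clause was added to the same early return as the nullptr guard, so deleting the whole block leaves the dereference at line 478 unprotected. Reverting the commit, or dropping only the new clause, keeps the guard.)

    // Recalled from memory of net_processing.cpp at the time; only the last
    // clause of this early return was new, the nullptr guard has to stay:
    if (state->pindexBestKnownBlock == nullptr ||
        state->pindexBestKnownBlock->nChainWork < chainActive.Tip()->nChainWork) {
        // This peer has nothing interesting.
        return;
    }
    // ... only now is it safe to dereference pindexBestKnownBlock:
    state->pindexLastCommonBlock = chainActive[std::min(state->pindexBestKnownBlock->nHeight, chainActive.Height())];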
< wumpus> that did it, it's syncing
< wumpus> thanks!