< bitcoin-git> [bitcoin] sipa pushed 6 new commits to master: https://github.com/bitcoin/bitcoin/compare/ad826b3df9f7...dc6dee41f7cf
< bitcoin-git> bitcoin/master 4a6b1f3 Matt Corallo: Expose AcceptBlockHeader through main.h
< bitcoin-git> bitcoin/master 63fd101 Matt Corallo: Split ::HEADERS processing into two separate cs_main locks...
< bitcoin-git> bitcoin/master a8b936d Matt Corallo: Use exposed ProcessNewBlockHeaders from ProcessMessages
< bitcoin-git> [bitcoin] sipa closed pull request #9183: Final Preparation for main.cpp Split (master...net_processing_5) https://github.com/bitcoin/bitcoin/pull/9183
< bitcoin-git> [bitcoin] TheBlueMatt opened pull request #9260: Mrs Peacock in The Library with The Candlestick (killed main.{h,cpp}) (master...net_processing_file) https://github.com/bitcoin/bitcoin/pull/9260
< sipa> BlueMatt: haha
< BlueMatt> did you see the pr text?
< sipa> yes
< bitcoin272> hey guys I'm curious, why was 25 chosen for the ancestor count?
< BlueMatt> ugh, git isn't smart enough to realize that renaming a file and then creating a dumb #include "newfile.h" stub is a move :(
< Eliel> BlueMatt: it should understand it if you do the rename with git mv
< sipa> BlueMatt: where does it not realize this?
< BlueMatt> main.h -> validation.h; main.h == #include "validation.h"
< BlueMatt> because i dont want to touch literally every file
< sipa> i don't understand what the problem is
< BlueMatt> will make rebase of ~everything impossible
< BlueMatt> git is smart when rebasing/merging across moves otherwise
< BlueMatt> Eliel: no, it doesnt cache that info
< Eliel> if it won't understand it in a single commit, try first renaming and then creating the new file.
< BlueMatt> it regenerates it itself, so there is no place for it to figure it out
< Eliel> in two commits
< BlueMatt> yea
< sipa> BlueMatt: does it still do this tracking after a merge commit is introduced around the two commits?
< sipa> maybe it treats it as one then
< BlueMatt> sipa: I have no idea...
< gmaxwell> bitcoin272: measurements on the actual network, combined with the compute time created for longer chains (They're more expensive to process).
< sipa> BlueMatt: can you check? if not, it's probably not worth deviating from the "every commit needs to compile and run tests" policy
< BlueMatt> willdo
< sipa> actually, i'll check myself - i want to know
< BlueMatt> heh, ok
< * BlueMatt> is currently checking which files still dont compile with 1GB of memory in kvm....
< sipa> good
< BlueMatt> lots of them, it looks like :(
< sipa> BlueMatt: doesn't seem to work
< BlueMatt> across a merge?
< BlueMatt> damn git
< BlueMatt> does rebase work at all across a rename? merge does, but rebase might not
< BlueMatt> not, actually
< Eliel> maybe you can make it a symlink instead?
< BlueMatt> eww
< BlueMatt> I think i might rather sed all the files
< sipa> i tried rebasing #8580 on top of a merged 9260 (with --no-ff)
< gribble> https://github.com/bitcoin/bitcoin/issues/8580 | Make CTransaction actually immutable by sipa · Pull Request #8580 · bitcoin/bitcoin · GitHub
< sipa> and it conflicts in main.h
< BlueMatt> damn
< sipa> like... the whole file is a conflict block
< BlueMatt> ok, well I guess its gonna be rebase-hell either way....
< sipa> we could choose to leave one of the two named main.h/.cpp
< BlueMatt> yea...
< sipa> which at least would make rebases that touch the not-moved-out part work
< BlueMatt> i mean could leave it as main.h and change the #define in main.h to VALIDATION and add a TODO thats like "move this shits"
< BlueMatt> then do it like when we close merge window so that its easier
< Eliel> would symlink cause more trouble than it'd solve?
< BlueMatt> alternatively, I could actually go sed the entire codebase
< BlueMatt> Eliel: I dont think it'd cause trouble since we have the #define guards
< BlueMatt> but its super ugly
< BlueMatt> (and, actually, git might not track that, either?)
< Eliel> right, true
< Eliel> but yes, I think sedding the whole codebase is best
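For reference, the forwarding-shim idea BlueMatt described earlier (main.h == #include "validation.h") would be only a few lines; the guard name and TODO wording here are illustrative:

    // main.h -- temporary forwarding shim; the real content now lives in validation.h
    #ifndef BITCOIN_MAIN_H
    #define BITCOIN_MAIN_H
    // TODO: sed the codebase and delete this file once the merge window closes
    #include "validation.h"
    #endif // BITCOIN_MAIN_H

The include guards make double inclusion harmless, but as sipa's experiment shows, git's rename detection stops helping rebases either way, which is why the whole-codebase sed wins.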
< sipa> validation.h is much larger than net_processing.h, so i'd suggest having main.h and net_processing.h, rather than validation.h and main.h
< BlueMatt> yea
< sipa> so just remove the last two commits?
< BlueMatt> sipa: I went ahead and did as you suggested and left main.cpp moved to validation.cpp, and just added a TODO to main.h to move it
< sipa> ah, i'd just have left validation.cpp to be main.cpp
< sipa> that move can easily be do at the same time as the .h rename
< BlueMatt> welllll....I mean sed won't cause (m)any rebase issues........
< sipa> yes, but it's a weird state to have main.h but validation.cpp
< sipa> and there is no need to do that move early
< BlueMatt> there is also no need to wait on the wholesale main/validation move/sed
< BlueMatt> :p
< sipa> well, so either do the whole thing now, or not at all
< BlueMatt> I'm doing it now :)
< phantomcircuit> sipa, after looking at 8831 again i can see why you wanted to not have the ReadKeyValue logic in CWallet
< phantomcircuit> im not sure a class with virtual functions is going to be enough either though
< phantomcircuit> sipa, i could just add a bunch of methods to CWallet like LoadName LoadPurpose etc
< morcos> sipa: for bitcoin272's problem, the wallet doesn't rebroadcast things that aren't in its own mempool correct? so is his node restarting (which would then maybe accept some of the later chain txs) or something else?
< sipa> morcos: uh, right
< morcos> gmaxwell: in the code that doesn't store an orphan if its parent was rejected... are we sure that's a good idea?
< morcos> if a parent had too low fee
< morcos> it is entirely possible that the parent + orphan would then stay in the mempool
< gmaxwell> morcos: well we don't have the parent now and won't request it so the orphan will not get resolved. We might want to keep it around for BIP152 but right now BIP152 makes NO use of the orphan pool..
< morcos> what do you mean we won't request it?
< gmaxwell> while it's in the reject filter we won't request it.
< gmaxwell> ('cause thats what the reject filter does)
< morcos> argh, that's just kind of an accident of the way AlreadyHave works though right?
< gmaxwell> Am I confused? Highly likely... I have a cold.
< gmaxwell> Well I thought that was the intent-- we don't want to request things we 'know' we will just reject.
< morcos> i mean in the orphan processing code we're specifically requesting the parents, but you're right we "AlreadyHave" things that are recentRejects
< gmaxwell> Yea, we could request it again if the orphan's fee was high for example and maybe then they both could be accepted.. would make ancestor feerate mining work better.
< morcos> yeah, but i guess i'm drawing a distinction between requesting txs in response to invs (in which case you don't want them if you recently rejected them)
< morcos> and requesting them as a parent, in which case, maybe you want to try again
< morcos> in fact, we could almost bypass the mempool minfee check altogether since most of our recent peers won't send us stuff that violates it anyway and then it would basically just work
< morcos> but i guess we can't quite do that
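For context, the transaction branch of AlreadyHave() in 0.13-era main.cpp looks roughly like this (paraphrased from memory, not a verbatim quote): a txid in the recent-rejects filter counts as "already have", so it is never fetched again while it sits in the filter, whether the getdata would come from inv relay or from orphan-parent fetching:

    case MSG_TX:
        return recentRejects->contains(inv.hash) ||   // 'cause that's what the reject filter does
               mempool.exists(inv.hash) ||
               mapOrphanTransactions.count(inv.hash) ||
               pcoinsTip->HaveCoinsInCache(inv.hash);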
< gmaxwell> I'd like rejection to work differently, putting rejected things in a datastructure that works like the new sigcache open-hashtable where they're taged for eager eviction but kept around until actually evicted... and then BIP152 can use that pool, and orphan resolution can use it and so on.
< morcos> does that work for non-POD
< gmaxwell> 'works like' I don't mean the lockless aspects of it either, but just the fact that there is a generation/deleted flag.
< morcos> ok.. i think i'll still PR something that takes non-stored orphans with rejected parents and adds them to recentRejects, but just comment that we might want to remove that if we fix up the rest. i think it's strictly better than the existing behavior
< gmaxwell> the principle is that we already took the cost of transferring the data, so we should keep as much of the most useful 'useless' stuff as we can afford... in case it turns up in a block or as a parent.
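A toy shape for the structure gmaxwell sketches, with the generation/deleted flag but none of the sigcache's lockless machinery; every name here is illustrative:

    #include <cstdint>
    #include <unordered_map>
    #include <vector>

    struct TxSlot {
        std::vector<unsigned char> txBytes; // data we already spent bandwidth receiving
        bool fDeleted = false;              // tagged for eager eviction, not yet reclaimed
    };

    class LazyRejectPool {
        std::unordered_map<uint64_t, TxSlot> slots; // keyed by a (shortened) txid
        size_t maxSlots;
    public:
        explicit LazyRejectPool(size_t max) : maxSlots(max) {}
        void Insert(uint64_t txid, std::vector<unsigned char> tx) {
            if (slots.size() >= maxSlots) Reclaim(); // make room lazily
            slots[txid] = TxSlot{std::move(tx), false};
        }
        void MarkRejected(uint64_t txid) { // "reject" tags the entry instead of erasing it
            auto it = slots.find(txid);
            if (it != slots.end()) it->second.fDeleted = true;
        }
        const TxSlot* Lookup(uint64_t txid) const { // BIP152 / orphan resolution still hit here
            auto it = slots.find(txid);
            return it == slots.end() ? nullptr : &it->second;
        }
        void Reclaim() { // tagged entries go first when space is actually needed
            for (auto it = slots.begin(); it != slots.end();) {
                if (it->second.fDeleted) it = slots.erase(it);
                else ++it;
            }
        }
    };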
< gmaxwell> But I dunno if it's better to spend time working on that or mempool sync.
< morcos> gmaxwell: arghh.. you guys and your mempool sync.. i tend to like the other methods better.. but BlueMatt was trying to convince me nothing can match the privacy of mempool sync
< gmaxwell> hah
< BlueMatt> I'm still a fan
< gmaxwell> well perhaps I'm also chasing it because in theory it's possible to get pretty close to optimal bandwidth efficiency and today we waste a lot of bandwidth on relay. (though it's gotten incrementally better in the last several releases)
< gmaxwell> but it's easy to venture into overdesign.
< gmaxwell> OTOH, Fibre exists and is pretty awesome.
< gmaxwell> and I could have also said that it was a crazy exercise in chasing optimality.
< morcos> we should set up some data-collecting nodes that see how much better our block reconstruction would be if we'd kept everything we were told about
< BlueMatt> lol, by adding #include <boost/thread.hpp> to fix windows build error, net_processing.cpp ticked over the 1GB-memory-usage mark :(
< BlueMatt> fucking boost
< BlueMatt> gmaxwell: I'm more excited by better relay privacy
< gmaxwell> morcos: I've been monitoring a bit of that on and off.
< sipa> BlueMatt: i think more recent gccs have lower memory usage? :0
< gmaxwell> actually I find a lot of the blocks that are almost perfectly reconstructed miss due to replacements / doublespends.
< morcos> gmaxwell: how much of the gap can you close?
< BlueMatt> sipa: I'm sure, this is what was in default-debian on digitalocean
< morcos> replacements/doublespends that you heard about?
< gmaxwell> Yes.
< morcos> but weren't RBF?
< BlueMatt> huh, can anyone reproduce travis' hangs on #9260
< morcos> why didnt you have them?
< BlueMatt> i cant...
< gribble> https://github.com/bitcoin/bitcoin/issues/9260 | Mrs Peacock in The Library with The Candlestick (killed main.{h,cpp}) by TheBlueMatt · Pull Request #9260 · bitcoin/bitcoin · GitHub
< gmaxwell> As in I replaced the transaction, and the block confirmed the other (presumably the miner didn't support replacement or it didn't make it to them yet)
< gmaxwell> morcos: some were RBF.
< gmaxwell> Though I did see a doublespend that wasn't at least once.
< BlueMatt> oh, maybe I can
< morcos> yeah, so thats a good use of your eager eviction
< gmaxwell> morcos: if you have debug=1 on, and the BIP152 missed by only a few txids it will log the missing txids and you can check your logs to see if you'd previously heard them.
< morcos> gmaxwell: ha, even easier than i thought
< BlueMatt> oops lol
< gmaxwell> during the period I last looked this was the overwhelming majority of blocks that missed only a couple. But there is a lot of variance since it depends on miner policy...
< gmaxwell> morcos: I think 'few' might only be <5 so you might want to turn that up.
< gmaxwell> The purpose of that logging was to explore the impacts of prefill policies, and we wouldn't ever prefill more than a couple.
< sipa> yup, 5
< sipa> <=5
< gmaxwell> (I don't think prefill is worth working on until we eliminate all the preventable misses)
< gmaxwell> (e.g. by using the orphan pool and by keeping at least replacements around in some kind of pool...)
< morcos> sdaftuar and i were saying that we might also want to have a super-HB peer among our 3 HB peers, as you maybe don't want 3 peers prefilling for you
< morcos> for instance one kind of obvious prefill policy is if the tx is non-standard (such as >100k) prefill, but then thats big
< gmaxwell> yes, though it's little enough data that it probably doesn't matter. The spec recommends you don't prefill more than 10kb in total.
< gmaxwell> morcos: of course, there is also the Fibre technique where you send FEC data... and then all the peers could usefully contribute. :P
< gmaxwell> We also don't have a way to NAK a HB request-- I reasoned that this was okay because even if every peer requests HB from you... you still end up not really any worse than relaying a full block to a single peer. Same kind of thinking applies for excessive prefill.
< morcos> yeah we should probably write a new mining api first. :)
< gmaxwell> Yea, no kidding.
< gmaxwell> I've also thought that we should probably not be using HB mode at all unless we have inbound connections or we're mining (or we've been asked by the user). but that kind of complexity also got answered with 'the overhead is irrelevant'.
< * jtimon> rebased #8855 again
< gribble> https://github.com/bitcoin/bitcoin/issues/8855 | Use a proper factory for creating chainparams by jtimon · Pull Request #8855 · bitcoin/bitcoin · GitHub
< BlueMatt> sipa: even with gcc 6.2 net_processing ticks over 1GB (incl host)
< gmaxwell> :(
< gmaxwell> BlueMatt: you have failed at main splitting! :)
< jtimon> interestingly enough, with +15-22 in main.cpp, #8498 hasn't needed rebase since Sep 1
< gribble> https://github.com/bitcoin/bitcoin/issues/8498 | Optimization: Minimize the number of times it is checked that no money... by jtimon · Pull Request #8498 · bitcoin/bitcoin · GitHub
< BlueMatt> clearly
< BlueMatt> to be fair, so does init
< jtimon> what did I miss?
< * jtimon> went to the logs
< bitcoin-git> [bitcoin] morcos opened pull request #9261: Add unstored orphans with rejected parents to recentRejects (master...orphanRejects) https://github.com/bitcoin/bitcoin/pull/9261
< jtimon> BlueMatt: not renaming main to validation in the same PR would make it simpler to review
< BlueMatt> jtimon: that pr should be easy to review
< BlueMatt> the main.cpp rename is a clean rename, so that especially
< jtimon> I guess I'm not enthusiastic about renaming main in general, I believe it still does a lot of things beyond consensus validation, plus it still includes most globals
< jtimon> but will keep looking
< jtimon> but yeah, review difficulty isn't a good argument
< BlueMatt> jtimon: see scrollback, there are ~no functions which are not block/tx processing left in validation.cpp
< BlueMatt> jtimon: except, yes, globals
< BlueMatt> jtimon: what /did/ you review?
< jtimon> commit by commit, if the moveonlys are moveonlys (ie https://github.com/bitcoin/bitcoin/pull/9260/commits/84922e4bf4c38227fbbbede50e09c87fe2a5c7f0 ) and what you say in https://github.com/bitcoin/bitcoin/pull/9260/commits/87c35f584397e2309970afdcca8e03731a86639e is true, everything seems fine or it shouldn't compile; to give an utACK I would need to grep mapOrphanTransactions and mapOrphanTransactionsByPrev and verify the moveonlys
< jtimon> re rename: right, git knows the file is renamed but you eat the include changes, which I agree is not hard to review
< wumpus> cfields: I thought about this global version/context parameters thing in RPC a bit, and there's several other potentially controversial solutions that are used for JSON API versioning we could consider as well: an alternative (say "v2") entry point, or using a HTTP header. An advantage would be that these are conceptually separate from normal arguments
< wumpus> cfields: I'm not specifically aiming 8811 for any release but I'd sure hope it can be merged before 0.14
< wumpus> cfields: as it changes all the RPC dispatch tables it's pretty annoying to keep up to date
< wumpus> cfields: I just wish people would test it a bit
< wumpus> cfields: regarding testing it: maybe it would make sense to port at least one of the RPC tests to fully using named arguments? currently it adds a test but that one is specifically for the named-arguments-parsing functionality
< jtimon> sipa: BlueMatt: if we're doing file renames, maybe we can move ahead with some in https://github.com/bitcoin/bitcoin/pull/8328 (those people can agree on)
< bitcoin-git> [bitcoin] instagibbs opened pull request #9262: Don't consider coins available if too many ancestors in mempool (master...toolong) https://github.com/bitcoin/bitcoin/pull/9262
< bitcoin-git> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/dc6dee41f7cf...c4d22f6eb216
< bitcoin-git> bitcoin/master 10ae7a7 Matt Corallo: Revert "Use async name resolving to improve net thread responsiveness"...
< bitcoin-git> bitcoin/master c4d22f6 Wladimir J. van der Laan: Merge #9229: Remove calls to getaddrinfo_a...
< bitcoin-git> [bitcoin] laanwj closed pull request #9229: Remove calls to getaddrinfo_a (master...2016-11-gai) https://github.com/bitcoin/bitcoin/pull/9229
< wumpus> btw flagging things as "needs backport" with "0.14" doesn't make much sense :-)
< bitcoin-git> [bitcoin] laanwj pushed 1 new commit to 0.13: https://github.com/bitcoin/bitcoin/commit/b172377857f9b5a0b2f43e0e57be9acf82a6c50a
< bitcoin-git> bitcoin/0.13 b172377 Matt Corallo: Revert "Use async name resolving to improve net thread responsiveness"...
< sipa> wumpus: i merged a few things that still need backporting
< wumpus> sipa: that's fine, we should probably do that in a pull grouping them together
< wumpus> sipa: I just have a really bad feeling about the async resolving thing so backported that immediately
< wumpus> if not they should be labeled "Needs backport" with milestone 0.13.2
< sipa> wumpus: yes
< wumpus> heh this is the code in libc where it crashes: https://0bin.net/paste/V1n0GkHdlDatZrnO#N5htO2+DbXw1EtNNKz4oB-3ykuixE4KGJTLiZ56/V9L to be specific: the if (--waitlist) line
< wumpus> mind the "This is tricky. See getaddrinfo_a.c for the reason why this works." comment :)
< sipa> that does not look thread safe at all
< Lightsword> wumpus, that's not going to be externally exploitable, is it, for things that use getaddrinfo?
< wumpus> Lightsword: I don't *think* so, but it depends on what kind of bug this turns out to be
< Lightsword> has upstream confirmed?
< wumpus> it's not triggered by any specific data the remote is sending, if that's what you're worried about
< wumpus> also it's not possible for P2P clients to make bitcoind do DNS lookups
< wumpus> so no, I don't think this is a security issue for us, but you can't be too careful right...
< wumpus> it's a confirmed use after free, not a NULL pointer dereference
< fanquake> When are we planning on releasing 0.13.2 ? Looks like 5 backports left.
< wumpus> it's doing "0x7ffff79b77f0 <__gai_notify+48> subl $0x1,(%rdx)" where rdx is, say, 0x441f0fc3c08944, then if you try to inspect that "0x441f0fc3c08944: Cannot access memory at address 0x441f0fc3c08944" -- too bad, already unmapped
< fanquake> *the first rc of 0.13.2
< wumpus> fanquake: well yesterday in the meeting there was agreement that all the 0.13.2 backports are labeled - so after these are backported rc1 can be released any time
< fanquake> Ok. I need to start attending the meetings, but it's difficult with the time difference though.
< wumpus> yes it's not a good time for australia/most of asia
< fanquake> Also, not sure why I said 5 PRs left, there is only 9239, 9194 and 9191 (which includes multiple, mostly test-related fixes).
< fanquake> It'll be right. Always have the logs, just need to make time for reading them. I think someone also posts a meeting summary to reddit or something.
< wumpus> not just to reddit :) https://bitcoincore.org/en/meetings/2016/10/20/
< wumpus> anyhow we could, say, alternate between two meeting times if there is a lot of demand for people from asia/australia to attend the meetings. But I've never really got that impression.
< sipa> the meeting is on SHA256(days_since_1970) % 24 UTC.
< gmaxwell> I was thinking of suggesting alternating but then thought that there probably isn't a good time for both asia and europe.
< wumpus> sipa: theoretically that would be fair, in practice if you don't take the distribution of people over timezones into account, you may end up with a time that's only good for people who live in the middle of the atlantic ocean :-)
< wumpus> gmaxwell: right I don't think there is
< sipa> gmaxwell: 8 am UTC would be relatively good for both asia and europe
< wumpus> 8 am UTC would be quite a good time for me
< sipa> (9am in western europe, 4pm in hong kong, midnight on west coast)
< sipa> 3am on east coast, however
< gmaxwell> that's too late for the east coast US, though I don't know if we currently have any eastcoasters.
< gmaxwell> oh instagibbs is duh.
< sipa> morcos, suhas, instagibbs
< gmaxwell> oh double duh.
< wumpus> in any case every time would be bad for someone - so to motivate a (even bi-weekly) time change there has to be enough demand
< gmaxwell> I just think of NY as existing in a parallel universe.
< wumpus> e.g. just rebroad asking for it is not enough :p
< gmaxwell> well we lose jl2012 due to this.
< wumpus> yes and fanquake
< BlueMatt> lol, google has a thing where they will hook up your project to run fuzzing on some machines 24/7, except to do it you have to agree that they will announce the bug to the world 7 days after you release a fix
< BlueMatt> who in their right mind would agree to that? thats like stupid high-risk for users
< wumpus> it could make sense for software that auto-updates quickly and automatically
< wumpus> but no, certainly not for bitcoin
< BlueMatt> I mean they're talking about it for "critical infrastructure" (ie common libraries)
< BlueMatt> like, sure, maybe google updates their libcs quickly, but the vast majority of folks do not at all
< sipa> 4pm UTC: 8am westcoast, 11am eastcoast, 5pm europe, midnight hong kong
< wumpus> BlueMatt: for libraries it's much harder to say, agreed, there will be tons of especially embedded platforms that never update them at all
< wumpus> then again that's not google's fault - finding the vulnerabilities before the 'bad people' do is still important
< BlueMatt> wumpus: sure, but that doesnt mean you announce them publicly 7 days after the first release with the fix
< wumpus> (or alternatively, after the "bad people" have already used them for years to get access anwyay...)
< BlueMatt> yes, you fix quickly, and then announce it much later
< gmaxwell> BlueMatt: it's fine for projects that are alread well fuzzed, I suppose.
< gmaxwell> NSA already found those vulns last week anyways.
< BlueMatt> it might be better than /not/ doing the fuzzing, but its still a super shitty policy
< wumpus> it's the old responsible disclosure discussion
< BlueMatt> wumpus: responsible disclosure (usually) is a discussion about what to do when upstream tells you to fuck off, not what to do when upstream is responsive
< wumpus> BlueMatt: sure, though part of it is how long to wait with public announcement
< BlueMatt> suresure, but who the fuck ever suggested 7 days?
< midnightmagic> responsible disclosure is using a timeline which is fair both to the public who is affected by the bug and the vendor, without unfairly prioritizing one over the other.
< midnightmagic> The timelines *were once* measured in weeks, since the public good of disclosure was more important than protecting the financial interests of a corporation who wasn't remunerating the researcher.
< BlueMatt> midnightmagic: yes, but one week? fuck, people who just blindly update and would be protected wouldn't even get protected if you only wait 7 days
< BlueMatt> midnightmagic: I mean I'm with you....fuck the vendor and their financial interests, users must be protected, but 7 days is short enough that you're harming users, too
< midnightmagic> 7 days is pretty short. At least Gobbles and/or that italian ultra-prolific guy isn't just dropping 0-days and letting the vendors scramble.
< * midnightmagic> tries to remember that italian guy again..
< midnightmagic> ah, here he is. I've come to appreciate the way he does things: http://aluigi.altervista.org/
< midnightmagic> (not even vendor notification.)
< bitcoin-git> [bitcoin] luke-jr opened pull request #9263: release notes: Only free transactions are being removed, not priority. (master...relnotes_freetxn) https://github.com/bitcoin/bitcoin/pull/9263
< bitcoin-git> [bitcoin] laanwj opened pull request #9264: 0.13.2 backports (0.13...2016_12_backports_0_13) https://github.com/bitcoin/bitcoin/pull/9264
< jonasschnelli> What is preferable: two keypools one for change, one for external OR a keypool with keys flagged for internal or external use?
< bitcoin-git> [bitcoin] laanwj closed pull request #9191: [qa] 0.13.2 Backports (0.13...Mf1611-q01302) https://github.com/bitcoin/bitcoin/pull/9191
< wumpus> possibly the flagging approach is easier to do in a backwards compatible way - old versions can ignore the flags
< jonasschnelli> wumpus: good point..
< jonasschnelli> for the deserialization (SerializationOp(Stream& s, ...)), if the stream is longer than the actual READWRITE, it will be ignored, right? (for backward comp.)
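On jonasschnelli's question: trailing bytes in a record are simply left unread by an old deserializer, so appending a flag is safe in that direction; it's the new code that must tolerate old records ending early. A hedged sketch of what a flagged CKeyPool could look like (assumes the tree's pubkey.h/serialize.h; the stream carries its own type/version; an illustration, not merged code):

    class CKeyPool
    {
    public:
        int64_t nTime;
        CPubKey vchPubKey;
        bool fInternal; // true => key reserved for the change (internal) chain

        ADD_SERIALIZE_METHODS;
        template <typename Stream, typename Operation>
        inline void SerializationOp(Stream& s, Operation ser_action) {
            int nVersion = s.GetVersion();
            if (!(s.GetType() & SER_GETHASH))
                READWRITE(nVersion);
            READWRITE(nTime);
            READWRITE(vchPubKey);
            if (ser_action.ForRead()) {
                try {
                    READWRITE(fInternal);
                } catch (const std::ios_base::failure&) {
                    fInternal = false; // record written before the flag existed
                }
            } else {
                READWRITE(fInternal);
            }
        }
    };

As jonasschnelli notes further down, this alone doesn't make the chain split backward compatible: an old version that ignores the flag would happily hand out internal-chain keys as receive addresses.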
< jtimon> python3 ./qa/pull-tester/rpc-tests.py mempool_packages takes 31s on my computer, would it be crazy to move it from the extended tests to the regular ones?
< bitcoin-git> [bitcoin] laanwj pushed 3 new commits to master: https://github.com/bitcoin/bitcoin/compare/c4d22f6eb216...3fbf07926240
< bitcoin-git> bitcoin/master d824ad0 Alex Morcos: Disable fee estimates for a confirm target of 1 block
< bitcoin-git> bitcoin/master e878689 Alex Morcos: Make GUI incapable of setting tx confirm target of 1
< bitcoin-git> bitcoin/master 3fbf079 Wladimir J. van der Laan: Merge #9239: Disable fee estimates for 1 block target...
< bitcoin-git> [bitcoin] laanwj closed pull request #9239: Disable fee estimates for 1 block target (master...blockstreamtil2blocks) https://github.com/bitcoin/bitcoin/pull/9239
< luke-jr> wumpus: doh, I was about to do that (more backports)
< dcousens> BlueMatt: don't miners use priority for their own transactions?
< fanquake> jtimon takes 51s here
< dcousens> (not the coinbase, just, "other" transactions)
< jonasschnelli> wumpus: I could do a BP of #8989
< gribble> https://github.com/bitcoin/bitcoin/issues/8989 | [Qt] overhaul smart-fee slider, adjust default confirmation target by jonasschnelli · Pull Request #8989 · bitcoin/bitcoin · GitHub
< wumpus> jonasschnelli: not sure that's what we want though?
< luke-jr> dcousens: fee delta works fine for that use case
< wumpus> jonasschnelli: I mean, this is a minor release, how much do we want the GUI to change?
< jonasschnelli> Yes. It would be a notable change.
< wumpus> maybe it's ok though in this case I'm not sure
< jonasschnelli> Can we BP #9239 without the GUI changes?
< gribble> https://github.com/bitcoin/bitcoin/issues/9239 | Disable fee estimates for 1 block target by morcos · Pull Request #9239 · bitcoin/bitcoin · GitHub
< jtimon> fanquake: thanks. still, I don't see consistency in times between extended and non-extended tests. I have a little commit in a long branch that I can cherry-pick, based only on what takes time on my computer, introducing -skipslow, which works with or without extended, but it looks a little bit too complicated
< wumpus> you could reply that in the #9239 topic, probably morcos has an opinion on it too :)
< gribble> https://github.com/bitcoin/bitcoin/issues/9239 | Disable fee estimates for 1 block target by morcos · Pull Request #9239 · bitcoin/bitcoin · GitHub
< jonasschnelli> wumpus: or just disabled (shorten the slider range) for target 1 in the backport
< * wumpus> wishes gribble had rate-limiting
< wumpus> jonasschnelli: yep, exactly
< wumpus> jonasschnelli: although that may turn out to be a riskier/less-tested solution than #8989, which at least has been tested in master. Difficult.
< gribble> https://github.com/bitcoin/bitcoin/issues/8989 | [Qt] overhaul smart-fee slider, adjust default confirmation target by jonasschnelli · Pull Request #8989 · bitcoin/bitcoin · GitHub
< jonasschnelli> Yes. IMO 8989 and 9239 is sort of one BP "package"
< wumpus> jonasschnelli: in that case we should just backport both I guess
< jonasschnelli> Agree
< jonasschnelli> I can do that next week...
< jonasschnelli> (if nobody did it in the meantime)
< wumpus> luke-jr: good that you hadn't started yet, then, would have been a waste of work as some had manual conflicts to resolve (though you could check if you resolved them in the same way :)
< luke-jr> wumpus: well, I had started.. but I can rebase :x
< luke-jr> I went through the PRs manually, not using tags, so I probably got a few you missed
< luke-jr> (well, *almost* manually.. XD)
< wumpus> well I strictly backport what is tagged, not more, if not it should have been tagged
< luke-jr> wumpus: erg, your backports aren't on top of Marco's? :x
< wumpus> luke-jr: no, I forgot about that one, will rebase
< wumpus> now they are (luckily no new conflicts)
< luke-jr> woo no conflicts here either it seems
< bitcoin-git> [bitcoin] laanwj opened pull request #9265: bitcoin-cli: Make error message less confusing (master...2016_12_rpccli_message) https://github.com/bitcoin/bitcoin/pull/9265
< luke-jr> wumpus: pushed backports-0.13 to my github; want to just pull it into yours?
< wumpus> luke-jr: will have a look in a moment
< BlueMatt> dcousens: and the "add fee" logic will continue to be maintained....but that isnt the "priority" code - this refers specifically to coin days-destroyed logic
< luke-jr> the priority code will be as well.
< sipa> no, it won't
< wumpus> luke-jr: looks ok to me - though I'm not entirely sure about the qt memory leak commits, they are all pretty minor one-time leaks, so users shouldn't notice it
< sipa> it serves no function anymore
< luke-jr> sipa: yes it does, the same function it always served: anti-spam
< wumpus> luke-jr: still makes sense to clean them up, but not sure whether the backport is risky
< luke-jr> wumpus: I don't care strongly if you want to skip those
< luke-jr> the backport may be risky, but if we have an rc or two, I'd put it in
< wumpus> luke-jr: also the menu reparenting may have subtle behavior issues I hadn't considered (though none reported yet)
< wumpus> yes, sure
< sipa> luke-jr: so blocks with 950 kB of spam are fine, and 50 kB of transactions from bitcoin old-timers that don't really pay miners a competitive price will save anything?
< sipa> the only scalable way to deal with this is economic incentives
< luke-jr> sipa: it would be better than 1000 kb of spam :P
< wumpus> luke-jr: anyhow I'll pull them into #9264
< sipa> and face it, that is how it works in practice now
< gribble> https://github.com/bitcoin/bitcoin/issues/9264 | 0.13.2 backports by laanwj · Pull Request #9264 · bitcoin/bitcoin · GitHub
< luke-jr> if we go to exclusively economic incentives, we'd need to remove ALL spam filtering..
< sipa> luke-jr: there are still local node dos protections
< sipa> which everyone has an incentive to maintain
< sipa> and don't require miners to be gatekeepers of good and bad transactions
< luke-jr> sipa: if we had a system that wasn't suffering from spam, removing priority might make sense. but we don't.
< sipa> luke-jr: removing priority will have 0 impact
< sipa> wallets don't use it anymore, (almost all) miners don't use it anymore - even if it was usable as a means to distinguish better from worse usage of block space, it isn't anymore
< luke-jr> has someone shown that to be true?
< luke-jr> last time I looked, a large % of transactions in blocks were in the priority area
< luke-jr> (not majority-large, but not <5% either)
< sipa> fair enough, i have no actual data on this
< sipa> but how do you measure that?
< luke-jr> I wrote a RPC call that analyzed the sort order
< * luke-jr> tries porting it to 0.13
< luke-jr> ugh this thing is slooooooow
< jtimon> for those interested in more configurable testchains: https://github.com/bitcoin/bitcoin/pull/8994#issuecomment-264406053
< luke-jr> oh, it'd also imply removing the soft blockmaxsize, which is essential since >1 MB blocks are not safe right now. which would therefore imply segwit should be rejected. not a good hole to go down IMO.
< luke-jr> sipa: CPFP and some other weirdness I don't recognise kinda made my analyzer useless :/
< luke-jr> jonasschnelli: is it just me, or is 0a261b63fd4f1b07431f8a65762ef9f1ef1c11c8 a bugfix that should be backported?
< gmaxwell> luke-jr: I think we should remove the softmax blocksize. I think it's harmful to the network to have some miners producing smaller blocks when there is a high fee backlog.
< luke-jr> so just ignore all the much bigger problems large blocks are causing? why is fee so important? it doesn't seem likely larger blocks are going to significantly change the fee rate either.. :/
< gmaxwell> luke-jr: having it set lower makes the larger block problem _worse_: virtually all miners just set it to the maximum, and then we pretend like we've done something useful there instead of actually doing something useful (like implementing a dynamic soft cap so that when demand goes down feerates don't drop absurdly low)
< luke-jr> not sure I follow; that sounds like the topic has changed to the *default* maxblocksize, but I'm talking about the option itself.
< luke-jr> wouldn't the latter just be the min fee setting?
< gmaxwell> luke-jr: sort of. minfee setting is really a relay setting, it gets in the way of CPFP. Should be intentionally low
< luke-jr> perhaps the gradual min-feerate would be a good thing to reintroduce
< gmaxwell> I think the problems of propagation are more or less solved for the moment, so the concerns are bulk blockchain growth and fee behavior predictability. Having a couple miners going around confirming stuff with absurdly low fees or failing to confirm stuff with decent fees doesn't help.
< luke-jr> hmm, would gradual min-feerate break prediction?
< gmaxwell> No. At least if constructed right it should improve it (esp if the prediction is aware of it).
< morcos> gmaxwell: i'd be strongly in favor of a separate min mining feerate
< morcos> do you actually want to remove blockmaxsize (and blockmaxweight?) as options?
< morcos> i don't feel strongly about that, but i do think we should avoid setting blockmaxsize as default. i tried to benchmark the behavior and didn't show it being a big performance hit
< morcos> but thats counterintuitive
< morcos> and i'd rather not risk it by default
< gmaxwell> I think we should default them to maximum at a minimum. Removing them-- I don't really care; removing them would probably irritate someone, so it's not worth doing. I don't think they're useful controls (beyond overriding our dumb maximum)
< morcos> right.. so default no setting for blockmaxsize in particular (to avoid size accounting unless you set it)
< gmaxwell> right.
< morcos> wumpus: you can backport just the first commit of #9239. i will separately test, but that was the intent. the only difference will be you will slide the slider to 1 but it will give you an answer for 2
< gribble> https://github.com/bitcoin/bitcoin/issues/9239 | Disable fee estimates for 1 block target by morcos · Pull Request #9239 · bitcoin/bitcoin · GitHub
< morcos> which is exactly what happens most of the time now anyway.. so will look like no change in behavior
< gmaxwell> Today, with BIP152 and fibre, propagation has very little to do with blockmaxsize. And blockmaxweight is the wrong dimension for applying backpressure on fees: it's blind to demand, so you'll still dumbly produce smaller blocks when there is a nice backlog and dumbly produce blocks that are too big when there is none (at least the minfee stops the bleeding there)
< * luke-jr> notes that >1 MB blocks are still a problem as-is; it's not like gradual mining min-fee is implemented yet
< gmaxwell> luke-jr: yes, but a pointless softcap that virtually everone overrides is still not helping.
< luke-jr> gmaxwell: nobody overrides it >1 MB yet AFAIK.
< morcos> I think I went through this before, but can't see where I wrote it up. I think we actually need 3 minimum rates and minrelaytxfee (a default minimum for the mempool) is not one of them
< morcos> 1) min mining feerate
< morcos> 2) rate used to define dust (should change rarely)
< morcos> 3) rate used as the minimum increment to pay for relay bandwidth (closest analog to existing minrelaytxfee)
< luke-jr> morcos: how would 3 be different from current minrelaytxfee?
< morcos> I don't think 3) actually needs to a have a floor other than 0, but i don't suppose it hurts
< gmaxwell> luke-jr: they all will as soon as it matters, we trained them to with an unreasonable default. Last non-empty block that was under 990k was 217 blocks ago.
< luke-jr> as soon as it matters, would still be better than immediately by default
< gmaxwell> morcos: having a floor makes the system behavior more stable in any case, no real reason to go forwarding around low fee stuff that won't get mined ever just because we have nothing better to relay. :)
< morcos> although i guess, having a user set minimum for your mempool might be something people want (and a good fail safe if we get something wrong) so maybe we do want all 4
< morcos> gmaxwell: sure, but it's inherently rate limited by the mempool min fee.. but i agree. i don't know why i'm arguing that point... and separating 3 and 4 is the least important concern.
< gmaxwell> luke-jr: yes it is worse because it puts everyone in the mode of just nailing it to the maximum, which would undermine doing something sensible and dynamic.
< luke-jr> but we don't have anything sensible and dynamic yet.
< morcos> i guess the only reason it matters is if someone wants to set their minimum higher, to say 10 sat/byte; then it's not clear that they really want their increment requirement to be 10 sat/byte
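morcos's split, condensed into a config sketch (struct and member names are invented; CFeeRate is the real feerate type; only minrelaytxfee exists as a single knob today):

    // Four separate knobs instead of overloading minrelaytxfee (illustrative):
    struct FeeRatePolicy {
        CFeeRate minMiningFeeRate;    // (1) floor for inclusion in our own block templates
        CFeeRate dustRelayFeeRate;    // (2) defines dust outputs; should change rarely
        CFeeRate incrementalRelayFee; // (3) minimum feerate increment paying for relay bandwidth
        CFeeRate minMempoolFeeRate;   // (4) optional user-set mempool floor / fail-safe
    };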
< gmaxwell> luke-jr: of course not, we have a nearly useless setting instead which you spend a lot of effort defending. This impedes doing something smarter.
< luke-jr> it doesn't impede anything..
< gmaxwell> I promise it does, touching any of this means arguing with you and few people have interest in that.
< gmaxwell> and there is the polite lie that something already is there, there is a blocksize setting, you can make it larger or smaller, problem solved.
< jonasschnelli> I think the HD chain splitting will not be backward compatible. Adding a flag to CKeyPool would result in possible external keys from the internal chain on older versions of core
< jonasschnelli> Also, CKeyPool has no version-int.
< luke-jr> [12:22:27] <luke-jr> jonasschnelli: is it just me, or is 0a261b63fd4f1b07431f8a65762ef9f1ef1c11c8 a bugfix that should be backported?
< gmaxwell> I said it would be incompatible a bunch of times.
< jonasschnelli> I thought we could make it compatible.. but yes. I guess it won't be.
< jonasschnelli> Then I'll try to add a new record type, maybe "hdkey", that stores the keypath (int / ext chain) as well as the masterkey and the corresponding pubkey.
< jonasschnelli> GetKey could derive the priv key on the fly
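What "derive the priv key on the fly" might look like, assuming a record that stores only the keypath plus a cached pubkey (HDKeyRecord and ParseKeypath are invented for the sketch; CExtKey::Derive is the real CKD function from key.h):

    struct HDKeyRecord {
        std::string keypath; // e.g. "m/0'/1'" -- internal/external encoded in the path
        CPubKey pubkey;      // cached so lookups don't require derivation
    };

    // Assumed helper: "m/0'/1'" -> {0x80000000, 0x80000001}
    std::vector<uint32_t> ParseKeypath(const std::string& path);

    // Derive the private key on demand from the wallet's HD master key.
    bool DeriveKeyOnTheFly(const CExtKey& master, const HDKeyRecord& rec, CKey& keyOut)
    {
        CExtKey cur = master;
        for (uint32_t child : ParseKeypath(rec.keypath)) { // hardened if bit 31 set
            CExtKey next;
            if (!cur.Derive(next, child)) return false;
            cur = next;
        }
        keyOut = cur.key;
        return true;
    }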
< jonasschnelli> luke-jr: we could bp... but i guess it's not an important fix
< gmaxwell> what I was commenting on before was that we probably want to do several incompatible changes at once, so we don't end up having to support dozens of old versions.
< jonasschnelli> gmaxwell: Yes. That would be preferable. I think the splitting should be combined with the on-the-fly derivation and the flexible keypath.
< jonasschnelli> Ideally +pub-CKD
< jcorgan> i haven't kept up recently, but are there any relevant BIPS or defacto practices from other wallets we should be paying attention to in this area?
< jonasschnelli> bip44/45 maybe. But I don't think its wise to force users to do pub key derivation.
< jonasschnelli> a flexible keypath would allow using bip44, and we could still stick to m/0'/k' by default
< jcorgan> reviewing the flexpath PR is on my list this morning
< jonasschnelli> thanks
< jcorgan> but, i'm mostly just wondering if there are any good practices from other wallets we might benefit from understanding
< jcorgan> just thinking out loud
< gmaxwell> Doing public derivation while the software supports private key export is really extremely inadvisable.
< gmaxwell> I also don't think it makes sense to spend time on advanced features when basic functionality is still kinda broken.
< gmaxwell> but I'm not the one working on it. :)
< jcorgan> agree, but it's important to define what unbroken basic functionality should be :)
< jcorgan> in other words, if there are to be breaking changes, and you only want to break it once, better get it right
< jcorgan> i didn't mean to jump into the middle of your thread, just noticed it and taking an interest. carry on.
< jonasschnelli> I agree with gmaxwell. We first need to solve the key-chain split.
< jonasschnelli> child pub key derivation is an interesting feature if you want to use core together with a hardware wallet.
< bitcoin-git> [bitcoin] luke-jr opened pull request #9266: Qt/RPCConsole: Put column enum in the right places (master...bugfix_datarole) https://github.com/bitcoin/bitcoin/pull/9266
< bitcoin-git> [bitcoin] MarcoFalke pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/3fbf07926240...31bcc667863f
< bitcoin-git> bitcoin/master fe37fbe Wladimir J. van der Laan: bitcoin-cli: Make error message less confusing...
< bitcoin-git> bitcoin/master 31bcc66 MarcoFalke: Merge #9265: bitcoin-cli: Make error message less confusing...
< bitcoin-git> [bitcoin] MarcoFalke closed pull request #9265: bitcoin-cli: Make error message less confusing (master...2016_12_rpccli_message) https://github.com/bitcoin/bitcoin/pull/9265
< bitcoin-git> [bitcoin] MarcoFalke pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/31bcc667863f...5412c08c3cf1
< bitcoin-git> bitcoin/master b7aa290 S. Matthew English: unification of Bloom filter representation...
< bitcoin-git> bitcoin/master 5412c08 MarcoFalke: Merge #9223: unification of Bloom filter representation...
< bitcoin-git> [bitcoin] MarcoFalke closed pull request #9223: unification of Bloom filter representation (master...patch-10) https://github.com/bitcoin/bitcoin/pull/9223
< bitcoin-git> [bitcoin] MarcoFalke pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/5412c08c3cf1...98514988a3d3
< bitcoin-git> bitcoin/master 08ed8c1 Gregory Maxwell: Developer docs about existing subtrees....
< bitcoin-git> bitcoin/master 9851498 MarcoFalke: Merge #9246: Developer docs about existing subtrees....
< bitcoin-git> [bitcoin] MarcoFalke closed pull request #9246: Developer docs about existing subtrees. (master...devdocs_for_subtrees) https://github.com/bitcoin/bitcoin/pull/9246
< bitcoin-git> [bitcoin] MarcoFalke pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/98514988a3d3...d7ba4a233bd5
< bitcoin-git> bitcoin/master 0828619 Suhas Daftuar: [qa] Dump debug logs on travis failures.
< bitcoin-git> bitcoin/master d7ba4a2 MarcoFalke: Merge #9257: [qa] Dump debug logs on travis failures....
< bitcoin-git> [bitcoin] MarcoFalke closed pull request #9257: [qa] Dump debug logs on travis failures. (master...travis-debug-logs) https://github.com/bitcoin/bitcoin/pull/9257
< bitcoin-git> [bitcoin] morcos opened pull request #9267: Disable fee estimates for a confirm target of 1 block (0.13...backport9239) https://github.com/bitcoin/bitcoin/pull/9267
< MarcoFalke> sipa: I think this is needed to get our subtree clean again
< instagibbs> can anyone explain why this isn't setting the argument correctly?:
< instagibbs> self.extra_args = [['-usehd={:d}'.format(i%2==0)] for i in range(4)]
< instagibbs> self.extra_args[0].append("-limitancestorcount=10")
< instagibbs> I'm trying to set a custom limit in test, and it's not being enforced
< instagibbs> ahhhh nevermind, the node is being spun down and up later in the test
< MarcoFalke> Yeah, we should use self.extra_args as argument where possible, instead of writing out the args every time.
< instagibbs> sdaftuar, how does one get the cached value?
< sdaftuar> instagibbs: oy, i'm thinking about it now. ancestor count is easy, but the descendant count is not...
< instagibbs> how often will that one trigger in wallet context though?
< sdaftuar> unfortunately the test we want to do is, what is the max number of descendants that any of pcoin's ancestors have?
< sdaftuar> and also, i think we should compare those values to some lower thresholds (say 10 instead of 25)
< instagibbs> I'm finding bugs with my implementation with non-standard values
< sdaftuar> well i think your implementation would never skip over any pcoins? except if the chain limits are violated in a reorg
< instagibbs> it always only fails at the 25 threshold, unsure why. If we have an ancestor count already cached, we might just use that
< instagibbs> seems less error-prone, and should catch the common case?
< instagibbs> im not sure what you mean?
< sdaftuar> so to answer your first question, each mempool entry has a value nCountWithAncestors
< instagibbs> how do I access that entry
< instagibbs> (looking)
< sdaftuar> instagibbs: might need to add some kind of accessor, but mempool.mapTx.find(txid) or something should do it.
< sdaftuar> since we're already checking to see that it's in the mempool, this seems okay to me
< sdaftuar> CalculateMemPoolAncestors() can be a bit expensive, fyi
< sdaftuar> but in order to do the descendant calculation properly, we'd need to call CMPA on the pcoins in question, and then check the descendant count of each ancestor returned
< sdaftuar> (which i guess is equivalent to calling CMPA with lower thresholds)
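Reading the cached counters sdaftuar mentions might look like this (mapTx and the entry accessors follow the real txmempool.h shape; locking shown for clarity, error handling omitted):

    LOCK(mempool.cs);
    CTxMemPool::txiter it = mempool.mapTx.find(txid);
    if (it != mempool.mapTx.end()) {
        uint64_t nAncestors   = it->GetCountWithAncestors();   // cached; no graph walk
        uint64_t nDescendants = it->GetCountWithDescendants(); // cached as well
    }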
< sdaftuar> i'm not sure though how reasonable it is to call CMPA inside AvailableCoins? if somehow your wallet has a lot of in-mempool stuff, this could be slow. not sure how to think about it.
< instagibbs> I don't understand. I likely don't understand what CMPA is actually doing. I thought it was already checking for limit violations
< sdaftuar> you're trying to see if a transaction that spends pcoins would be a limit violation. you're checking to see if pcoins itself would fail to get into the mempool based on the configured limits.
< sdaftuar> but you know it's already in the mempool
< sdaftuar> so almost by definition, those limits won't be violated
< sdaftuar> (turns out there is some weird edge case behavior where it could be, but i think that's beside the point)
< sdaftuar> oh, maybe i didn't make this clear: we call CMPA when we try to accept a new tx to the mempool. so if it's already gotten in -- a precondition of your code -- then calling it again on that tx, with the same limits, really shouldn't fail.
< instagibbs> ok now I'm confused why this works in the general case then.
< instagibbs> with default values
< sdaftuar> hmm. do you have a test i can look at?
< instagibbs> yes but its very easy to test the base case
< instagibbs> let me push my test im working on
< instagibbs> just send all your coins to yourself in a loop 26 times
< instagibbs> the last send will trigger ATMP failure without these lines; with them it triggers "Insufficient funds"
< sdaftuar> interesting, i'll take a look
< instagibbs> you're right though, this makes no sense in retrospect
< * sdaftuar> just learned about set_trace, whoa!
< instagibbs> wait... how do you write all those tests o_0
< sdaftuar> slowly :)
< instagibbs> i think it spawns zombies if you run in batch mode, be careful :P
< instagibbs> that test has "10" as the limit, but it successfully sends 25 times
< instagibbs> no matter what it's set to
< instagibbs> and the error message changes to mempool entry failure if you set a different value...
< sdaftuar> ah i think i see what's happening.
< sdaftuar> there might be some edge case bugs in CMPA we ought to fix
< sdaftuar> oh but it looks like for limit purposes, CMPA is trying to calculate whether adding the tx given would cause the limits to be exceeded.
< sdaftuar> ugh, this is kind of messy
< gmaxwell> sdaftuar: you're clear on the goal right? don't consider any coin we couldn't actually spend.
< sdaftuar> gmaxwell: yes, understood.
< gmaxwell> I wouldn't worry much about the coinselection being slow if you have many unspent inputs-- so long as slow isn't so slow as to cause rpc timeouts.
< gmaxwell> The code that figured out if all the parents were confirmed or ismine used to have factorial complexity... a nice natural rate limiter on building large unconfirmed chains. :)
< sdaftuar> gmaxwell: instagibbs: i have two approaches to suggest, not sure which is better:
< sdaftuar> 1) do basically what you do here, except using CMPA correctly (i'd need to figure out exactly how to do that, certainly you could pass in a tx that spends pcoins, or maybe there's a way to call CMPA with adjusted limit values that works, not sure)
< sdaftuar> 2) don't bother calling CMPA in AvailableCoins, and instead just check to see if pcoins has more than N ancestors (a cached value) for some N much less than the default ancestor limit (maybe 5 or 10).
< sdaftuar> and then find a later point in the wallet code to abort the send if the descendant count would be violated
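One concrete shape option (2) could take: a tiny mempool helper over the cached counters, called from AvailableCoins with an N well below the relay defaults. The helper name and the strict-less-than convention are illustrative:

    // Returns true if the tx is absent, or its cached chain counts are under the limit.
    bool CTxMemPool::TransactionWithinChainLimit(const uint256& txid, size_t chainLimit) const
    {
        LOCK(cs);
        auto it = mapTx.find(txid);
        return it == mapTx.end() ||
               (it->GetCountWithAncestors() < chainLimit &&
                it->GetCountWithDescendants() < chainLimit);
    }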
< instagibbs> I like (2) for now, just because I can't tell how CMPA is working sometimes :)
< gmaxwell> for the case that people hit, really it's just the ancestor limit that actually matters: people chaining change.
< instagibbs> yep
< sdaftuar> yeah the problem with 1) is that you don't know the size of the tx you're generating, so it's always going to be possible for the tx to ultimately fail
< sdaftuar> 2) is easy, but may result in some rare annoyances
< gmaxwell> sdaftuar: this is a sign we should reconsider that relay policy... the size is cornercase enough that it shouldn't really matter.
< sdaftuar> gmaxwell: we knew the descendant count/size was a theoretical problem from the start (someone else spending outputs of a tx that pays to you can prevent you from relaying new transactions until those other ones clear), but i believe it's the descendant count/size stuff that limits DoS attacks on the mempool the most
< instagibbs> Ok, I'm going to learn more about CMPA, then likely do (2) instead
< sdaftuar> "those other ones clear" <--- should be until the parent confirms
< gmaxwell> sdaftuar: why do you suggest above going 'much less' than the ancestor limit?
< sdaftuar> gmaxwell: i just think that we should steer well clear of the relay policy limits. partly just philosophy (we didn't think there were any common use cases close to the values we chose)
< sdaftuar> but also practically, being near the ancestor limit makes it more likely some peer will think you're violating, say, the descendant limit
< bitcoin837> hey guys, before I write my own, does anyone already have a bash script that interacts with bitcoin-cli to do a sendmany split of a large chunk to your own deposit addresses?
< gmaxwell> makes sense. well doing 5 less would probably provide plenty of safety there.
< sdaftuar> (breaking relay)
< instagibbs> bitcoin837, #bitcoin
< bitcoin837> sure
< instagibbs> np
< sdaftuar> instagibbs: gmaxwell: there's a problem with this approach
< sdaftuar> just talked to morcos, and one thing we realized is that you might be combining lots of inputs, each with many ancestors
< sdaftuar> so the resulting transaction would fail
< * BlueMatt> sooo doesnt want to have to rebase #9260...anyone wanna review?
< gribble> https://github.com/bitcoin/bitcoin/issues/9260 | Mrs Peacock in The Library with The Candlestick (killed main.{h,cpp}) by TheBlueMatt · Pull Request #9260 · bitcoin/bitcoin · GitHub
< sdaftuar> so the best thing we could do in AvailableCoins is eliminate any coins that are already at the ancestor limit, i guess. but we need additional logic in SelectCoins..() i think
< instagibbs> wait, why is this different than what we were going to do?
< instagibbs> i thought that was the definition of ancestor number
< instagibbs> oh i see, right
< instagibbs> I had this thought a couple minutes ago and somehow lost it. Yes, 2 inputs may have n-1 history, making 2n-2
< sdaftuar> yep. or making n-1, if they're different outputs of the same tx
< sdaftuar> we have to check later on
< instagibbs> Yep. glad someone else thought of it and actually communicated
< gmaxwell> In the case people have actually encountered, it's just a decendant chain limit AFAIK. Many of the other cases would be pretty hard to hit with the wallet behavior.
< instagibbs> does the wallet prefer shorter unconfirmed chains vs longer?
< instagibbs> I know it prefers confirmed change first, etc
< gmaxwell> It is completely blind to it right now.
< instagibbs> ok, so it might not be all that rare
< gmaxwell> To make it prefer shorter is easy: first make it so it can reject if it's too long, and try again with a higher limit if that fails.
< sdaftuar> i think one way we could do that is to make SelectCoinsMinConf do the checking against some passed in limit, and then call it twice or something
< sdaftuar> we already call SelectCoinsMinConf repeatedly
< gmaxwell> thats how we handled unconfirmed, yep.
< instagibbs> yeah its the same idea
< instagibbs> prefer deep confirmed coins from others, prefer confirmed coins, ok now try unconfirmed <-- I think today
< sdaftuar> the other approach would be if there's a way to do it directly in ApproximateBestSubset?
< gmaxwell> it does 6,1,0 and it could become 6,1,0+1i,0+10i,0+20i.
< sdaftuar> not sure if that's a good idea
< instagibbs> ABS assumes you have enough coins i believe
< instagibbs> which means if you run into too long chain, that will be different and fail
< gmaxwell> not a great idea to do it in ABS. Just making it a parameter to SelectCoinsMinConf would be straightforward.
< instagibbs> gmaxwell, why are we using complex numbers for the arg :P
< sdaftuar> ok, so if we do it in SCMC, then i think the approach would be to filter out of availableCoins the ones with >N ancestors, and testing whether the resulting set of inputs would pass CMPA before returning?
< sdaftuar> does that sound right?
< sdaftuar> hmm. this doesn't quite make sense -- once ABS returns coins with enough value, but that violate the chain limits, trying again isn't likely to help, is it?
< instagibbs> SCMC can say "no"
< instagibbs> pass failure to SC, returning Insufficient Funds
< sdaftuar> right, but what is it going to do differently when invoked with different limits?
< gmaxwell> sdaftuar: you start with the most restrictive limits first.
< gmaxwell> and it will only consider coins which will not violate.
< instagibbs> if it fails to get enough coins under each limit, raise it, if it can never get enough, fails
< gmaxwell> and if that can't find a solution, you relax the limits and try again. The first limit is very confirmed, then confirmed, then unconfirmed but 1 deep, then unconfirmed and deeper... The reason to try with multiple limits is so that it doesn't do something dumb like build a chain of 19 deep, then at the 20's also spend all your other unconfirmed coins... so that the next call has nothing to spend.
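Sketching gmaxwell's "6,1,0 could become 6,1,0+1i,0+10i,0+20i": the existing SelectCoins fallback chain, extended with a hypothetical maxAncestors argument bolted onto SelectCoinsMinConf for the unconfirmed tiers (signatures approximate):

    return SelectCoinsMinConf(nTargetValue, 1, 6, /*maxAncestors=*/0, vCoins, setCoinsRet, nValueRet) ||  // deep-confirmed
           SelectCoinsMinConf(nTargetValue, 1, 1, /*maxAncestors=*/0, vCoins, setCoinsRet, nValueRet) ||  // confirmed
           (bSpendZeroConfChange &&
            (SelectCoinsMinConf(nTargetValue, 0, 1, /*maxAncestors=*/1,  vCoins, setCoinsRet, nValueRet) ||   // 0-conf, 1 deep
             SelectCoinsMinConf(nTargetValue, 0, 1, /*maxAncestors=*/10, vCoins, setCoinsRet, nValueRet) ||   // 0-conf, deeper
             SelectCoinsMinConf(nTargetValue, 0, 1, /*maxAncestors=*/20, vCoins, setCoinsRet, nValueRet)));   // last resort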
< sdaftuar> gmaxwell: i'm trying to understand how ABS works now, but it's not obvious to me how effective it would be at randomly finding a set of inputs that don't violate the chain limits as the set of available coins it's given increases
< instagibbs> ABS wouldnt be in charge of it, would have to be higher
< instagibbs> ABS I believe just has a set of inputs, and tries to make "good" fits based on value
< sdaftuar> instagibbs: right. so let's say you have a set of inputs with at most 1 ancestor that has enough value to create a tx, but the resulting tx has too many ancestors
< sdaftuar> what are the chances that when you add some more inputs (where it is possible to find a suitable input set that passes the chain limits), that ABS will return it to you?
< instagibbs> hm the issue seems to continue all the way until you create the actual transaction
< gmaxwell> yes, there is nothing really we can do about the ancestor limit right now-- it even depends on future and non-local information, we can deal with the decendant limit now.
< sdaftuar> gmaxwell: did you swap ancestor and descendant in that last line?
< gmaxwell> yes.
< gmaxwell> Oh you're also saying for ancestors. Indeed, we can only be greedy, you're right.
< sdaftuar> if ABS were somehow aware of the constraint, we might get lucky and succeed
< sdaftuar> but if it's not...
< gmaxwell> it's not a framework that would likely do well with that kind of constraint in any case.
< gmaxwell> For some reason I thought the combination rule for ancestor limit was MAX (a depth) not sum.
< instagibbs> Yeah me too :/
< sdaftuar> sum reflects how much recordkeeping (and pointer hopping) we do
< instagibbs> So, we could still have the wallet greedily prefer shorter-chain inputs, and post-transaction finalization, have a CMPA check before returning?
< sdaftuar> yes i still think it'd be useful to do what i wrote above: in SCMC, filter out from AvailableCoins the ones that have more than N ancestors, for different values of N
< gmaxwell> sdaftuar: yes, while realizing that I was wrong from your comment it was clear enough to me why it's sum.
< sdaftuar> but there are then two failures from SCMC: not enough value, in which case it's worth trying again with a higher limit; or chain limit exceeded, in which case I'm not sure it's worth calling again with a higher limit
< sdaftuar> we might fail sometimes to generate a transaction when there are coins available that would work, but this seems like the best we can do right now
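The two failure modes being distinguished here could be captured with a result type along these lines (a hypothetical enum for illustration, not an actual Core type):

```cpp
// Hypothetical result type for the two SelectCoinsMinConf failure modes
// discussed above.
enum class SelectionResult {
    Success,
    InsufficientFunds,   // worth retrying with a relaxed ancestor limit
    ChainLimitExceeded,  // adding more coins is unlikely to help
};
```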
< sdaftuar> and certainly the most important thing is to not commit a transaction that then fails in ATMP
< sdaftuar> "This must not fail." :)
< instagibbs> so we'll still need a CMPA call, or something similar, to get the actual ancestor count before returning, right?
< instagibbs> otherwise "this can fail" :)
< sdaftuar> yes. once we've put the transaction together, we can call CMPA and know whether it'll pass.
< sipa> i think we should deal correctly with a failed ATMP call, like removing the tx from the wallet in that case
< sipa> (in addition to doing sanity checks ahead of time)
< sdaftuar> sipa: agreed
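A hedged sketch of the commit/accept flow being proposed: sanity-check the chain limits before committing, and remove the transaction again if ATMP still rejects it. Every helper name here is a simplified stand-in, not the wallet's actual API:

```cpp
#include <string>

struct CWalletTxStub { int dummy = 0; };  // placeholder for a built wallet tx

// Stand-in for a CalculateMemPoolAncestors-based pre-check of the
// ancestor/descendant package limits.
static bool CheckChainLimits(const CWalletTxStub&, std::string&) { return true; }
// Stand-ins for wallet commit, mempool acceptance, and wallet removal.
static bool CommitToWallet(CWalletTxStub&) { return true; }
static bool AcceptToMemoryPoolWrapper(const CWalletTxStub&, std::string&) { return true; }
static void ZapFromWallet(const CWalletTxStub&) {}

bool SendNewTransaction(CWalletTxStub& tx, std::string& err)
{
    // Sanity check ahead of time, so chain-limit failures are caught
    // before the transaction is committed ("this must not fail").
    if (!CheckChainLimits(tx, err)) return false;

    // Commit first, so a crash between commit and relay can't lose the tx.
    if (!CommitToWallet(tx)) { err = "wallet commit failed"; return false; }

    // If ATMP still rejects it, remove it again rather than keeping an
    // unrelayable transaction around -- but per gmaxwell's caveat below,
    // never remove something that could already have been relayed.
    if (!AcceptToMemoryPoolWrapper(tx, err)) {
        ZapFromWallet(tx);
        return false;
    }
    return true;
}
```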
< instagibbs> what's the barrier to that fix? I don't know the wallet well enough
< sdaftuar> i don't think we ever delete anything now, do we?
< instagibbs> removeprunedfunds :P
< instagibbs> but no, not normally
< sdaftuar> oh, neat. i didn't know that existed!
< instagibbs> it's meant to be for importing/removing proofs of payment without scanning
< morcos> is the reason we commit first in case the node crashes?
< sdaftuar> so we can just zap tx's if we create them but ATMP fails? that seems easy
< morcos> we could always have ATMP(fJustCheck)
< sdaftuar> morcos: i assumed that's why we commit first
< sdaftuar> morcos: ATMP(fJustCheck) doesn't work very well with RBF and mempool eviction
< instagibbs> morcos, hmm I thought there were issues with that, otherwise we'd have an easy way to check in rpc if policy is ok with it?
< instagibbs> ^ ok that
< morcos> ah, yes...
< sdaftuar> maybe RBF isn't a big deal, not sure
< gmaxwell> someone should just remove that comment, it's not true. "Must not fail permanently": it's fine if it fails until some transactions confirm.
< morcos> ok.. well we just want to be a bit careful.. b/c even things that do get rejected from ATMP first make it into the memory pool. so we'll want to make sure no one ever makes it so those could get relayed or remembered in some other way, if we're about to nuke them from the wallet
< gmaxwell> right, it's important to never remove something from the wallet that could have relayed.
< morcos> or remembered in the future opportunistic eviction rejection map
< sdaftuar> morcos: oh! now i see what you mean.
< gmaxwell> ::sigh::
< gmaxwell> The thing I was hoping this issue would fix is the wallet needlessly making excessively deep transactions when it could choose inputs and avoid it.
< instagibbs> we can still do that, yes?
< gmaxwell> Removing things from the wallet on ATMP failure might be fine (though if it'll relay after the next block ... it might have been better to just leave it), but it won't avoid needlessly creating unattractive transactions.
< gmaxwell> instagibbs: sure.
< morcos> gmaxwell: yeah i think we get it, we're just trying to fix up the edges too
< morcos> but as i think you mentioned the other day, as it is now it won't get relayed after the next block, b/c you don't rebroadcast whats not in your mempool and you don't try to reaccept whats in your wallet
< instagibbs> is this referring to ATMP failed transactions?
< morcos> but we all agree that first you try with things that in the simple case (single stranded chains) will stay well clear of the limits
< sdaftuar> instagibbs: yeah there's a bug now, where the code that tries to periodically relay unconfirmed wallet transactions skips over things that aren't in the mempool
< sdaftuar> instagibbs: i think it's an easy fix, just try to reaccept to the mempool first
< morcos> sdaftuar: i dont know if i 100% agree thats a bug, but certainly a separate issue
< gmaxwell> We should fix transaction rebroadcast to try re-mempooling things. :)
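A sketch of the rebroadcast fix being discussed: before relaying, try to re-accept to the mempool anything that fell out (e.g. after a reorg or eviction), while skipping abandoned and known-conflicted transactions, per morcos's concern below. Names are simplified stand-ins for the wallet's resend path:

```cpp
#include <vector>

struct WalletTxStub {
    bool fInMempool = false;
    bool fAbandoned = false;
    bool fConflicted = false;  // known double-spent; not worth retrying
};

// Stand-ins for an ATMP wrapper and the relay hook.
static bool TryAcceptToMempool(WalletTxStub&) { return true; }
static void RelayTx(const WalletTxStub&) {}

void ResendUnconfirmed(std::vector<WalletTxStub>& txs)
{
    for (auto& tx : txs) {
        // Skip abandoned/conflicted txs so this doesn't loop forever on
        // transactions that were long since double-spent.
        if (tx.fAbandoned || tx.fConflicted) continue;
        // The bug discussed above: the old code skipped non-mempool txs
        // entirely instead of trying to re-accept them first.
        if (!tx.fInMempool) tx.fInMempool = TryAcceptToMempool(tx);
        if (tx.fInMempool) RelayTx(tx);
    }
}
```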
< sdaftuar> morcos: right, i guess reaccepting makes it harder for you to abandon a too-low-fee tx?
< gmaxwell> thats a day one bug, considering reorgs/doublespends.
< morcos> gmaxwell: at the risk of tangents.. the danger with that is that i think we'll go into these endless work loops of trying things that have long since been double spent or whatever
< morcos> sdaftuar: no its easy enough to skip abandoned, we already skip reaccepting those on startup
< morcos> but yes, it makes the auto-defacto-abandoned state disappear, which is not necessarily a good thing
< instagibbs> ok, I need coffee bbl
< gmaxwell> morcos: well it's once per half an hour or whatever... and work per transaction. We also can tell when transactions are conflicted and could skip those.
< morcos> gmaxwell: i guess we could just ask some users with big wallets to see how long that takes them on startup (maybe we need to put in benching for it), since we already do it then
< gmaxwell> morcos: the non-mempool part of resend wallet txn wouldn't be needed if we had mempool sync. :P
< morcos> if we get it right, we can just skip blocks and PoW too. :P :P
< bitcoin-git> [bitcoin] MarcoFalke pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/d7ba4a233bd5...9e4bb312e695
< bitcoin-git> bitcoin/master facbfa5 MarcoFalke: [qa] Get rid of duplicate code
< bitcoin-git> bitcoin/master 9e4bb31 MarcoFalke: Merge #9221: [qa] Get rid of duplicate code...
< bitcoin-git> [bitcoin] MarcoFalke closed pull request #9221: [qa] Get rid of duplicate code (master...Mf1611-qaMaxUploadDuplCode) https://github.com/bitcoin/bitcoin/pull/9221
< bitcoin-git> [bitcoin] MarcoFalke pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/9e4bb312e695...c36229b0b2e9
< bitcoin-git> bitcoin/master 8a70a9d wodry: Improvement of documentation of command line parameter 'whitelist'
< bitcoin-git> bitcoin/master c36229b MarcoFalke: Merge #9251: Improvement of documentation of command line parameter 'whitelist'...
< bitcoin-git> [bitcoin] MarcoFalke closed pull request #9251: Improvement of documentation of command line parameter 'whitelist' (master...patch-3) https://github.com/bitcoin/bitcoin/pull/9251
< bitcoin-git> [bitcoin] MarcoFalke closed pull request #9064: unify capitalization of "bitcoin address" (master...changeCaps) https://github.com/bitcoin/bitcoin/pull/9064
< sdaftuar> BlueMatt: i was trying to verify that the second commit in #9260 is move only. first, do you have any clever ways to suggest that i could use to do that?
< gribble> https://github.com/bitcoin/bitcoin/issues/9260 | Mrs Peacock in The Library with The Candlestick (killed main.{h,cpp}) by TheBlueMatt · Pull Request #9260 · bitcoin/bitcoin · GitHub
< sdaftuar> second -- i tried to use git blame -C on the net_processing.cpp file, to verify that you didn't introduce any changes
< sdaftuar> that almost works, though for some reason i can't figure out, it assigns blame to you for a handful of lines around the Misbehaving() function
< sdaftuar> i can't see any change in that code, but i'm puzzled as to why git thinks a change was introduced
< sipa> sdaftuar: git diff --patience COMMIT~:oldfile COMMIT:newfile
< sdaftuar> sipa: thanks, i'll try that
< morcos> sdaftuar: its not ENTIRELY move only
< sdaftuar> yeah i know... just trying to figure out what the real diff is
< sipa> the only diff i saw was some header changes, and the snippet i pasted in a comment
< sdaftuar> sipa: i think there's a one-line change about feeFilterRounder too?
< sipa> sdaftuar: ah, he squashed in some changes
< sipa> i reviewed an earlier version
< sdaftuar> ah ok
< sipa> 84922e4
< morcos> btw.. that "almost bug" with saferModifyNewCoins, i forgot i had a node running with my new coinsviewcache patches. it lost consensus a couple days ago..
< morcos> investigating now, but i bet it hit that bug... whew. dodged a bullet
< sdaftuar> nice. lets not PR that one. :)
< instagibbs> sdaftuar, ok updated my PR, thanks for the help
< sdaftuar> instagibbs: cool, i'll take a look
< jonasschnelli> We fill the keypool with a new target size of 3
< jonasschnelli> then we manage to reserve 4 keys: https://github.com/bitcoin/bitcoin/blob/master/qa/rpc-tests/keypool.py#L45
< cfields> jonasschnelli: seems to always reserve target + 1
< cfields> ./bitcoin-cli -testnet keypoolrefill 104
< cfields> 2016-12-02 21:13:24 keypool added key 108, size=105
< sipa> i think that may be the vchDefaultKey...
< MarcoFalke> no, don't think this is vchDefaultKey
< MarcoFalke> the target+1 comes from a time when the function was only used by one caller (Maybe getnewaddress or something, where you first call keypoolrefill and then grab one key)
< MarcoFalke> Now if you call it from another place which does not grab the key, you end up with an off-by-one
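A simplified illustration of the off-by-one MarcoFalke describes (not the actual CWallet::TopUpKeyPool code): the refill tops up to target + 1 because its single original caller grabbed a key immediately afterwards.

```cpp
#include <set>

struct KeyPoolStub {
    std::set<int> pool;
    int next_id = 0;

    void TopUp(unsigned target)
    {
        // Refilling to target + 1 leaves exactly `target` keys once the
        // original caller reserves one -- but leaves target + 1 keys for
        // callers (like the keypoolrefill RPC) that don't grab a key.
        while (pool.size() < target + 1) pool.insert(next_id++);
    }
};
```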
< instagibbs> sdaftuar: hmm the in mempool check done in AvailableCoins doesn't cover that case?
< instagibbs> Oh I see confirmation plus not mempool
< BlueMatt> wumpus: re: compilation memusage, validation.cpp takes ~1.19G, init ~1.1G, net_processing ~0.95G, rpcrawtx ~0.95G, rpcdump ~0.99G, rpcwallet ~1.0G, wallet ~1.2G, walletdb ~0.82G...main took ~1.46G
< BlueMatt> so while splitting main didnt cut out worst-case memusage by a /ton/, it did potentially make it possible to compile with 1.5G again, whereas it previously wasnt once you include the host
< sipa> but validation + net_processing together now take 2.14G!
< sipa> bad job!
< BlueMatt> lol
< luke-jr> need to do -flto with under 1 GB RAM use ‼‼‼‼! :p
< BlueMatt> luke-jr: I cant get lto to build anymore
< gmaxwell> BlueMatt: build time is greater for me too.
< BlueMatt> greater like slower or faster?
< luke-jr> it would be nice if you could compile "split LTO data" for libraries, so one can just use non-LTO for their system but have the ability to do LTO on demand
< gmaxwell> BlueMatt: slower. takes more time.
< BlueMatt> gmaxwell: yea, not surprising given how much shit we have in headers (and how much boost has in headers!)
< gmaxwell> bad except when napping is required. presumably its faster on things other than my laptop.
< gmaxwell> yea, need to get rid of boost.
< sipa> gmaxwell: are you using parallel lto linking?
< gmaxwell> sipa: no, just the separate files require more time in total.
< gmaxwell> due to all the garbage in headers.
< BlueMatt> gmaxwell: well even if it is slower, at least you dont have to wait for net_processing if you only want to touch validation or vice versa :p
< gmaxwell> I'm not complaining, its the right thing to do.
< gmaxwell> though the recompile benefit is mixed, it isn't all that often that I don't end up editing a header that causes everything to get recompiled. :P
< BlueMatt> i know, but I am complaining about headers in bitcoin core (and boost)
< sipa> gmaxwell: which gcc version? earlier gccs compiled both to object code and gimple in lto mode; in later ones the default is generating only gimple, which is faster
< BlueMatt> sipa: I dont think we're talking about lto
< sipa> ah.
< BlueMatt> sdaftuar: ok with not fixing your comments in #9260?
< gribble> https://github.com/bitcoin/bitcoin/issues/9260 | Mrs Peacock in The Library with The Candlestick (killed main.{h,cpp}) by TheBlueMatt · Pull Request #9260 · bitcoin/bitcoin · GitHub
< gmaxwell> I'm not talking about LTO.
< BlueMatt> paveljanik: you got a minute to ack 9260, since you already looked over it?
< * BlueMatt> is still hoping it can be merged prior to any other main.cpp changes :p