< bitcoin-git> [bitcoin] kallewoof opened pull request #10386: [wallet] Optional '-avoidreuse' flag which defaults to not reusing addresses in sends (master...feature-white-black-address) https://github.com/bitcoin/bitcoin/pull/10386
< bitcoin-git> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/a26280bc1415...7f2b9e0868f5
< bitcoin-git> bitcoin/master f203ecc Pavel Janík: Shadowing is not enabled by default, update doc accordingly.
< bitcoin-git> bitcoin/master 7f2b9e0 Wladimir J. van der Laan: Merge #10381: Shadowing warnings are not enabled by default, update doc accordingly...
< bitcoin-git> [bitcoin] laanwj closed pull request #10381: Shadowing warnings are not enabled by default, update doc accordingly (master...20170510_Wshadow_not_enabled_by_default) https://github.com/bitcoin/bitcoin/pull/10381
< ckm> anybody know why bitcoin started on microsoft/windows?
< ckm> seems odd for an open-source project
< kallewoof> Probably because Satoshi was using Windows when he wrote it
< ckm> okay, yeah, that's a possibility. not sure though. maybe it was that the alternatives were less prevalent then, so it could have been strategic to get more miners. i kinda want a logical explanation. like it could have been a statement like hacking microsoft/windows in a phoenix type of way. hmm. your thoughts?
< sipa> my assumption is that satoshi or whoever wrote the first code was a windows programmer
< sipa> the naming style of variables etc is consistent with that
< ckm> okay going with that then is there any indication that this was a microsoft job? they do seem to be the most supportive big tech company, including support in visual basic, openly wanting a cashless money system, and with statements of public approval
< sipa> offtopic
< wumpus> there are likely more windows programmers than unix programmers in the world, so the chance of a software project starting as windows software is fairly high. Seems kind of a big leap to say it's a microsoft job then. If it had started as linux project, would you have personally implicated Linus?
< wumpus> but yes, offtopic, #bitcoin please
< ckm> i just looked through all the original code and it doesn't seem like a "microsoft job" from the comments et cetera. i wasn't very into computers, i thought linux was hard to use back then, so i was using windows/mac, but a programmer would i think have been able to use it. although it would be fairest to the public to make a windows version so everybody could in theory mine the genesis block.
< ckm> and sorry for offtopic i will switch over to there, thanks!
< jonasschnelli> Can anyone explain when a peer does relay its own CAddress to a connected peer? When I connect with mininode and inject a getaddr, I get no addr message
< wumpus> on incoming connections, yes
< jonasschnelli> wumpus: hmm... maybe I got fooled by the AddNewAddresses time penalty?
< wumpus> sure, or maybe there's a bug, I don't know
< jonasschnelli> Okay... i'll check.. maybe my local changes broke things...
< wumpus> it seems advertising works, but the issues about IP changes not propagating are kind of strange
< wumpus> sorry I meant on outgoing connections, incoming wouldn't make sense as the peer already found you :)
< jonasschnelli> wumpus: Ah... right..
< wumpus> so when your peer connects to someone else, it learns its own address (through the peer's version message), and it starts advertising that
< jonasschnelli> Though, if I start two peers (A,B) and connect_nodes_bi(A,B), then connect with a mininode (C) to B, I guess B should relay addr(A) when I send a getaddr
< wumpus> yes I think so, it should gossip about addresses it knows, though it's heavily randomized
< sipa> eventually, yes
< sipa> but that may take hours
< jonasschnelli> sipa: do you see a better/simpler way to test if a peer has relayed an addr (through our test/functional/ framework)?
< jonasschnelli> (examine peers.dat is probably a very bad idea)
< wumpus> parsing peers.dat from the test framework seems painful
< jonasschnelli> If there were a way to retrieve addrman's content from the outside..
< wumpus> also it's not really an interface to be tested, the peers.dat format should be flexible
< jonasschnelli> yeah.. indeed
< wumpus> agreed
< wumpus> some testing-only RPC?
< warren> why testing only?
< wumpus> that part of the code needs to be more testable
< wumpus> so we can assure that it works and track regressions
< wumpus> I don't see much point in having it as documented outside interface, but if you have use cases, sure...
< jonasschnelli> wumpus: Yes. Thought the same... I'll try to add a debug RPC to output addrman's content
< warren> I mean, it would be useful to be able to dump what it thinks it knows about peers during runtime of an ordinary node.
< sipa> we could add an rpc to report all of addrman's db through json?
< wumpus> yes, exactly
< kallewoof> Heh. I always thought getpeerinfo did just that.
< kallewoof> But I guess addrman can have entries for nodes you aren't currently connected to..
< wumpus> right, getpeerinfo shows the peers that the node is currently connected to, not the ones it knows about
< jonasschnelli> A simpler approach would be to ensure we directly answer a getaddr request when the debug GetArg("-getaddr_direct") is set
< jonasschnelli> The whole address relay is pretty hard to test on localhost. If 127.0.0.1 then IsRoutable() == false, GetLocalAddress() does not allow 127.0.0.1,..
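A rough sketch of the kind of functional test being discussed: attach a mininode to node B, send getaddr, and wait for any addr gossip. The helper names used here (P2PInterface, add_p2p_connection, wait_until) come from a later version of the test framework than the one under discussion and are assumptions; as noted above, addr relay is randomized and restricted on localhost, so treat this as an illustration of the approach rather than a working test.

```python
# Illustrative sketch only: helper names are assumed, not guaranteed to match
# the framework version being discussed in the log.
from test_framework.test_framework import BitcoinTestFramework
from test_framework.p2p import P2PInterface
from test_framework.messages import msg_getaddr


class AddrReceiver(P2PInterface):
    def __init__(self):
        super().__init__()
        self.received_addrs = []

    def on_addr(self, message):
        # Record every address the peer gossips to us.
        self.received_addrs.extend(message.addrs)


class AddrRelaySketch(BitcoinTestFramework):
    def set_test_params(self):
        self.num_nodes = 2

    def run_test(self):
        # The default test setup already connects node 0 and node 1 (A <-> B).
        # Attach mininode C to B and ask it for addresses.
        receiver = self.nodes[1].add_p2p_connection(AddrReceiver())
        receiver.send_message(msg_getaddr())
        # addr relay is heavily randomized, so poll instead of asserting at once.
        self.wait_until(lambda: len(receiver.received_addrs) > 0, timeout=60)


if __name__ == '__main__':
    AddrRelaySketch().main()
```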
< paveljanik> jonasschnelli, can you please rebase #9697 to make it easier for testing? Thank you.
< gribble> https://github.com/bitcoin/bitcoin/issues/9697 | [Qt] simple fee bumper with user verification by jonasschnelli · Pull Request #9697 · bitcoin/bitcoin · GitHub
< jonasschnelli> paveljanik: I did a fresh compile on Ubuntu 14.10 and didn't have any problems... There is already an ACK on the PR... but I can rebase if it's unavoidable.
< paveljanik> jonasschnelli, ok, np.
< paveljanik> there are three new commits anyway...
< jonasschnelli> Yeah.. right. Let me rebase then.
< jonasschnelli> Give me some minutes (working branch is different right now)
< paveljanik> It is an issue with my workflow... rebase will make it much easier for me (I'll be able to use the default/automated workflow).
< jonasschnelli> sure... will do
< jonasschnelli> paveljanik: #9697 is now rebased
< gribble> https://github.com/bitcoin/bitcoin/issues/9697 | [Qt] simple fee bumper with user verification by jonasschnelli · Pull Request #9697 · bitcoin/bitcoin · GitHub
< bitcoin-git> [bitcoin] jonasschnelli opened pull request #10387: [WIP] Define and signal NODE_NETWORK_LIMITED (pruned peers) (master...2017/05/node_network_limited) https://github.com/bitcoin/bitcoin/pull/10387
< bitcoin-git> [bitcoin] laanwj pushed 5 new commits to master: https://github.com/bitcoin/bitcoin/compare/7f2b9e0868f5...79aeff6e08f3
< bitcoin-git> bitcoin/master 9970219 Matt Corallo: Update contrib/debian to latest Ubuntu PPA upload....
< bitcoin-git> bitcoin/master a8e9286 Matt Corallo: Bump minimum boost version in contrib/debian
< bitcoin-git> bitcoin/master c5071e1 Matt Corallo: Build with QT5 on Debian-based systems using contrib/debian
< bitcoin-git> [bitcoin] laanwj closed pull request #10328: Update contrib/debian to latest Ubuntu PPA upload. (master...2017-05-update-debian) https://github.com/bitcoin/bitcoin/pull/10328
< bitcoin-git> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/79aeff6e08f3...18c9debe602d
< bitcoin-git> bitcoin/master a637734 Luke Dashjr: rpc/wallet: Workaround older UniValue which returns a std::string temporary for get_str
< bitcoin-git> bitcoin/master 18c9deb Wladimir J. van der Laan: Merge #10341: rpc/wallet: Workaround older UniValue which returns a std::string temporary for get_str...
< bitcoin-git> [bitcoin] laanwj closed pull request #10341: rpc/wallet: Workaround older UniValue which returns a std::string temporary for get_str (master...rpcwallet_uv_workaround) https://github.com/bitcoin/bitcoin/pull/10341
< bitcoin-git> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/18c9debe602d...eb8263bdc9d3
< bitcoin-git> bitcoin/master 0c60c63 practicalswift: Remove unused Python imports
< bitcoin-git> bitcoin/master eb8263b Wladimir J. van der Laan: Merge #10317: Remove unused Python imports...
< bitcoin-git> [bitcoin] laanwj closed pull request #10317: Remove unused Python imports (master...remove-unused-python-imports-ii) https://github.com/bitcoin/bitcoin/pull/10317
< bitcoin-git> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/eb8263bdc9d3...94e52273f30f
< bitcoin-git> bitcoin/master 6c914ac Thomas Snider: [wallet] Securely erase potentially sensitive keys/values
< bitcoin-git> bitcoin/master 94e5227 Wladimir J. van der Laan: Merge #10308: [wallet] Securely erase potentially sensitive keys/values...
< bitcoin-git> [bitcoin] laanwj closed pull request #10308: [wallet] Securely erase potentially sensitive keys/values (master...tjps_secure_erase) https://github.com/bitcoin/bitcoin/pull/10308
< bitcoin-git> [bitcoin] morcos opened pull request #10388: Output line to debug.log when IsInitialBlockDownload latches to false (master...printIBD) https://github.com/bitcoin/bitcoin/pull/10388
< jonasschnelli> gmaxwell: The idea behind signaling 7056 blocks in NODE_NETWORK_LIMITED was that SPV peers and other very irregular network participants can catch up on filtered (or non-filtered) blocks from about a month.
< jonasschnelli> Though the value itself is questionable (I thought NODE_NETWORK_LIMITED_HIGH^2 might be reasonable).
< gmaxwell> I'm pretty doubtful people are going to set pruning to anything but not at all or all the way right now.
< jonasschnelli> gmaxwell: I don't know. I personally would consider pruning to 10GB or something that doesn't hurt. But yeah.... I see the point with UTXO syncs and maybe reserve it then for future use cases (though we could also add another bit)
< morcos> gmaxwell: given the size requirements for other aspects of the data directory (such as chainstate), it seems to me it's not unreasonable to keep several gigs of blocks... I think if there was a 10GB option, it might be heavily used, especially if it was either recommended or somehow the default
< instagibbs> Defaults are obviously quite sticky. In my experience almost no one even knows about pruning.
< instagibbs> (so people just shut it off)
< jonasschnelli> Also consider the use case (that's getting more and more popular) where people link their SPV wallets with their full node. There a "month of blocks" could make sense...
< jonasschnelli> And I'm not saying this is a good use case... :)
< jonasschnelli> instagibbs: Some people told me that the price per GB of storage falls following Moore's law, but I personally feel that it was much simpler to run a full node back in 2013, not least because of the disk requirement.
< instagibbs> Most people buying computers aren't optimizing for 100GB+ chain sizes.
< wumpus> "Some people told me that the prices per GB storage falls after moors law", indeed, that hasn't been true for quite a while
< gmaxwell> The setting based on size thing really doesn't work that well, the two logical settings are unpruned and the minimum.
< gmaxwell> so I'm not saying that keeping X or Y is useless, but we need a different interface to make them ever used.
< gmaxwell> And the big difference is "can people sync from you" -- p2p spv nodes are almost non-existent these days.
< wumpus> there is currently no reason to select anything but the minimum if you set up pruning
< wumpus> if it did serve the latter blocks I'd gladly set the pruning to larger, and I've heard similar things from other people, but there's just no reason now
< gmaxwell> The context here though is having yet another level.
< murchandamus> Howdy
< gmaxwell> I don't think having three levels of just pruning makes a lot of sense (nor do I think it's supported by the fetching data we have), when also there will be proposals for start-from-utxo-assumevalid which will have its own levels of storage.
< jonasschnelli> Right, the question is if both NODE_NETWORK_LIMITED_(LOW|HIGH) are enabled, should we then use a third, larger value (bit 1: 144, bit 2: 1008, both bits: ?)
< sipa> we could leave it undefined for now
< wumpus> on the other hand if there are three possibilities with those bits, it makes sense to give the combination meaning too
< wumpus> or leave it explicitly undefined for future expansion, sure
< sipa> merely the presence of having server-but-not-full-archival nodes will likely change things already
< gmaxwell> What I suggested was that it's undefined for setting, but for receiving you know it means at least X blocks, but it may also require other things, so you can't set it yet.
< sipa> and we'll learn from that
< jonasschnelli> Yes. It felt like a waste if we don't define it. Well, undefined (for future use cases) is also a definition.
< wumpus> agree
< instagibbs> gmaxwell, s/setting/sending/?
< gmaxwell> instagibbs: both.
< wumpus> gmaxwell: yes "it means at least..." makes a lot of sense for backwards compatibility
< gmaxwell> If you see both bits set, then it means they have at least 2016*2 blocks (which is what I suggested). But you don't set it yet, because it may (will) later be defined to say "And I offer a UTXO syncpoint" or whatnot.
< wumpus> yes
< gmaxwell> to be extra conservative it could just be defined to mean at least _HIGH now. I don't have a strong opinion.
< gmaxwell> you can always increase the requirement later.
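A hedged sketch of reading the two proposed bits as discussed above. The bit positions are invented placeholders, and the "both bits" case follows gmaxwell's "treat it as at least 2016*2 when receiving, but don't set it yet" suggestion rather than any finalized specification.

```python
# Illustrative only: interpreting the proposed NODE_NETWORK_LIMITED_(LOW|HIGH)
# service bits on the receiving side. Bit positions are placeholder assumptions.
NODE_NETWORK_LIMITED_LOW = 1 << 10    # assumed position, not final
NODE_NETWORK_LIMITED_HIGH = 1 << 11   # assumed position, not final

def min_recent_blocks_served(services: int) -> int:
    """Lower bound on recent blocks a peer claims to serve, per the discussion."""
    low = bool(services & NODE_NETWORK_LIMITED_LOW)
    high = bool(services & NODE_NETWORK_LIMITED_HIGH)
    if low and high:
        # Receivers may treat both bits as "at least 2016*2 blocks", but senders
        # should not set this combination yet; it may gain extra meaning later.
        return 2016 * 2
    if high:
        return 1008
    if low:
        return 144
    return 0
```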
< wumpus> #startmeeting
< lightningbot> Meeting started Thu May 11 19:00:16 2017 UTC. The chair is wumpus. Information about MeetBot at http://wiki.debian.org/MeetBot.
< lightningbot> Useful Commands: #action #agreed #help #info #idea #link #topic.
< gmaxwell> #bitcoin-core-dev Meeting: wumpus sipa gmaxwell jonasschnelli morcos luke-jr btcdrak sdaftuar jtimon cfields petertodd kanzure bluematt instagibbs phantomcircuit codeshark michagogo marcofalke paveljanik NicolasDorier
< sipa> present
< murchandamus> present
< wumpus> topics?
< sipa> murchandamus == murch?
< cfields> hi, here
< instagibbs> yes sipa
< murchandamus> aye!
< morcos> gmaxwell: oh ok, i agree with not defining now.. maybe we should just make _HIGH larger then, though
< kanzure> hi.
< gmaxwell> morcos: yes, I was thinking HIGH would be targeted at hosts syncing 2016 blocks, but I forget where the breakpoints were exactly in sipa's data.
< wumpus> any proposed topics? (we can continue the pruning service bits topic if people want that)
< BlueMatt> dammit, can never remember topics i wanted to bring up come meeting time :(
< luke-jr> O.o?
< gmaxwell> wumpus: well we should talk about per-txo. I think it's ready for merge except for more testing/review.
< instagibbs> suggested topic: fee targeting/coin selection overhaul
< wumpus> #topic per-txo utxo database
< sipa> #10148
< gribble> https://github.com/bitcoin/bitcoin/issues/10148 | [WIP] Use non-atomic flushing with block replay by sipa · Pull Request #10148 · bitcoin/bitcoin · GitHub
< sipa> oops, no, #10195
< gribble> https://github.com/bitcoin/bitcoin/issues/10195 | Switch chainstate db and cache to per-txout model by sipa · Pull Request #10195 · bitcoin/bitcoin · GitHub
< morcos> sorry sipa, that has been on my list, but my list has been gathering dust for the last couple weeks
< BlueMatt> I'm still halfway through review
< gmaxwell> The graphs should be making all your mouths water.
< instagibbs> why is y-axis in block time :P
< wumpus> ah yes I was still testing #10148, should probably switch to #10195
< gribble> https://github.com/bitcoin/bitcoin/issues/10148 | [WIP] Use non-atomic flushing with block replay by sipa · Pull Request #10148 · bitcoin/bitcoin · GitHub
< instagibbs> clock*
< gribble> https://github.com/bitcoin/bitcoin/issues/10195 | Switch chainstate db and cache to per-txout model by sipa · Pull Request #10195 · bitcoin/bitcoin · GitHub
< gmaxwell> instagibbs: so that the flushing graph works out.
< sipa> instagibbs: so that the x axis of both graphs lines up
< cfields> i've made it through review, but I lack enough confidence to ACK the thing :\
< BlueMatt> gm2051: #10192
< gmaxwell> The most impressive thing about that chart isn't the ~33% speedup, it's the reduction in flushing frequency.
< gribble> https://github.com/bitcoin/bitcoin/issues/10192 | Cache full script execution results in addition to signatures by TheBlueMatt · Pull Request #10192 · bitcoin/bitcoin · GitHub
< BlueMatt> gmaxwell: ^
< cfields> gmaxwell: yes, that's great.
< sipa> i was very surprised by how much it reduces flushing
< wumpus> nice chart
< sipa> my guess is that the resulting speedup isn't that dramatic because it's running on a system with pretty fast I/O
< wumpus> heh, but most users have systems with slow i/o
< sipa> oh, one downside: chainstate db goes from 2.2G to 2.7G
< gmaxwell> right, testing it on a system with slow i/o would be interesting. But will take forever.
< BlueMatt> 2.7 seems fine
< luke-jr> hmm
< sipa> the TODOs that i know of are a few code cleanups (marked as TODO in the code), and better UI wrt the upgrade process
< sipa> there is a one-time upgrade of the old db to the new db at startup, which can be interrupted
< gmaxwell> The upgrade doesn't take long on a system with fast IO at least.
< BlueMatt> sipa: it needs way more review
< sipa> BlueMatt: of course
< BlueMatt> there are lots of things in that queue, sadly
< wumpus> upgrade should be fast if it just iterates through the db in sorted order
< wumpus> even on systems with fairly slow (seek) i/o
< sipa> wumpus: it does, it takes a couple of minutes on a system with fast I/O
< wumpus> I'll try it out this week
< gmaxwell> One thing to keep in mind is that keeping it unmerged reduces testing. Absolutely it needs more review before being merged, but we also should try to get it merged sooner rather than later.
< gmaxwell> So that we get more time baking on it in master.
< BlueMatt> gmaxwell: we're not even close to that point
< BlueMatt> but, yes, we should be aggressive about merging, as long as it's a ways before release
< morcos> agreed gmaxwell, same thing with #10199
< gribble> https://github.com/bitcoin/bitcoin/issues/10199 | Better fee estimates by morcos · Pull Request #10199 · bitcoin/bitcoin · GitHub
< sipa> morcos: i'll do another review pass on that today
< wumpus> yes, this shouldn't be something to merge last minute for 0.15
< morcos> good thing about that one is it's a lot less likely to cause a disaster
< * BlueMatt> finds 10199 to be wayyyy more important than per-utxo
< * morcos> disagrees
< gmaxwell> I strongly disagree.
< wumpus> they're both important for different reasons
< wumpus> it's comparing apples and oranges
< BlueMatt> anyway, do we have any real topics?
< sipa> i have one low-priority idea i'd like to talk about (running utxo commitments)
< * BlueMatt> wishes the ny weather were better...
< BlueMatt> sipa: shoot
< wumpus> #topic fee targeting/coin selection overhaul (instagibbs)
< gmaxwell> BlueMatt: it's beautiful here in mountain view today.
< sipa> instagibbs: topic
< murchandamus> gmaxwell: A bit cloudy here in Palo Alto though. ;)
< instagibbs> So I tried my hand at redoing fee targeting in #10360 without directly changing coin selection
< gribble> https://github.com/bitcoin/bitcoin/issues/10360 | [WIP] [Wallet] Target effective value during transaction creation by instagibbs · Pull Request #10360 · bitcoin/bitcoin · GitHub
< instagibbs> Was wondering if people were wary of that, versus a complete overhaul
< instagibbs> morcos has concerns
< morcos> wary is a good word for it
< morcos> not opposed
< gmaxwell> I feel a little uneasy about the changes to augment selection while we don't have a strategy to sweep dust. It worries me that we're potentially going to unintentionally create another UTXO count blowup event.
< sipa> instagibbs: i haven't looked into the details... is it just treating the net value of its output as amount - feerate*size_of_spend ?
< murchandamus> I've talked a bit with instagibbs about it and it seems to me that when that approach finds a direct match it may have a huge number of inputs
< instagibbs> sipa yes
< gmaxwell> murchandamus: that sounds great to me. :P
< murchandamus> I've tried a similar approach, but combining inputs by size, large to small
< instagibbs> sipa, so you may get more smaller inputs that directly match, for example
< murchandamus> gmaxwell: I think that slightly bigger average transactions with lower variance in transaction size are better than huge variance in transaction size
< morcos> i like the idea of being at least somewhat fee smart, and including more inputs when fees are lower
< sipa> but even if it spends many inputs, it always does so in a way that is economical
< instagibbs> sipa, it will always use "positive effective value" inputs
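A minimal sketch of the "effective value" idea just described (amount minus the fee to spend the input at the chosen feerate). The input-size constant is a rough assumption for a P2PKH-style input, not the wallet's actual size estimator.

```python
# Illustrative only: effective value = amount - feerate * size_of_spend.
INPUT_VBYTES = 148  # rough size of a P2PKH-style input; an assumption

def effective_value(amount_sat: int, feerate_sat_per_vb: float) -> float:
    return amount_sat - feerate_sat_per_vb * INPUT_VBYTES

def selectable(utxo_amounts_sat, feerate_sat_per_vb):
    # Only consider inputs with positive effective value at this feerate;
    # negative-effective-value inputs cost more in fees than they contribute.
    return [a for a in utxo_amounts_sat if effective_value(a, feerate_sat_per_vb) > 0]
```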
< sipa> so unless you assume that the feerate is going down in the future, there is little reason to postpone spending them
< murchandamus> morcos: Exactly. We have huge variance in fees over the week
< gmaxwell> murchandamus: high to low sounds like it would destroy privacy.
< morcos> the idea of wanting to do a quick transaction and paying 200 sat/byte to do it, and you include tons of little inputs that have little to no net value seems bad to me
< murchandamus> sipa: But adding a single input costs as much as four outputs, so you'd probably rather add a change output when fees are high than add another input, and later in the week do a few consolidation transactions
< * luke-jr> wonders how contentious it would be to move to a system where dust requires a larger proof so it can be pruned from the UTXO set
< murchandamus> I've been hearing that some people do this to save money
< gmaxwell> morcos: I though about that and I think a nice estimate would be the ratio between the current feerate set and the BIG_NUM target (e.g. 1008 block target). If this ratio is low, then you should be agressive in spending.
< morcos> admittedly its a difficult problem... how do you distinguish a user that only does 200 sat/byte txs vs one that has a range
< murchandamus> gmaxwell: I don't think it does, because it's only the selection for making exact matches
< morcos> gmaxwell: yes, i agree... something like that
< murchandamus> and you don't know if the first combination that matched was actually among the largest in your wallet
< morcos> anyway, clearly there are a lot of ideas here, and i kind of think that the amount of consideration that has to go into most changes is almost the same, and it might not be worth doing all of that sanity checking if we're only making a small change
< instagibbs> yes, so there are plenty of interesting strategies, the question is should we not be making semi-obvious fixes until we can agree on those :)
< murchandamus> gmaxwell: interesting approach
< instagibbs> related: can we kill minimum total fee?
< morcos> at the very least, i'd think this should be lower priority for 0.15 than the previously mentioned things right now... so we have to think about resources
< morcos> instagibbs: +1 on that for a first step
< sipa> instagibbs: ack
< instagibbs> morcos, yes that was the feeling I get
< murchandamus> instagibbs: Accounting for UTXO in selection by effective value does take a load of pain out of selection strategies
< gmaxwell> instagibbs: but if a 'semi obvious fix' blows up other considerations that's bad. Do you have a reason to believe your change won't cause a massive increase in utxo accumulation?
< morcos> also i have a PR #9343 that also removes edge case logic. cleaning these things up now will make future improvements easier to do and reason about
< gribble> https://github.com/bitcoin/bitcoin/issues/9343 | Dont create change at dust limit by morcos · Pull Request #9343 · bitcoin/bitcoin · GitHub
< murchandamus> instagibbs: I'm not sure that just this change with the current selection strategy of Core does much better in the selection, because of the selecting/deselecting pass
< instagibbs> gmaxwell, no, I wasn't assuming as much, sorry if that seemed implied
< murchandamus> gmaxwell: Pretty sure it would rather decrease UTXO footprints
< murchandamus> right now Core will often fail in the first few passes when it selects more inputs because it hadn't accounted for the fees in advance
< sipa> instagibbs: which feerate are you using?
< instagibbs> sipa, whatever the user has selected via settings
< gmaxwell> instagibbs: well, that's why I ask. basically, in any mature system you can have 'bugs' which you depend on. at first glance it seemed to me that your PR might fix a bug where we're overly aggressively spending TXOs that are negative value. But we may depend on that bug to help manage the UTXO size. At least that was my impression.
< murchandamus> with this change it would always succeed as soon as it finds an exact match
< sipa> instagibbs: oh, duh
< instagibbs> well, aside from total minimum fee, because that's stupid
< instagibbs> gmaxwell, could be. Maybe next step, aside from general refactor and cleaning, is to get better data
< gmaxwell> murchandamus: I'm not following your logic. Yes, the first few passes will fail, then it will target a higher amount, and be successful.
< murchandamus> gmaxwell: Yes, it should never select UTXOs that have negative effective value
< instagibbs> and stop throwing away money as fees egregiously: #10333
< gribble> https://github.com/bitcoin/bitcoin/issues/10333 | [wallet] fee fixes: always create change, adjust value, and p… by instagibbs · Pull Request #10333 · bitcoin/bitcoin · GitHub
< gmaxwell> murchandamus: By "should" do you mean the current behavior?
< instagibbs> current behavior almost certainly does
< gmaxwell> The current behavior absolutely will select txos with negative value.
< sipa> So how about we fix that first?
< gmaxwell> And "fixing" that may have severely deletarious effects on the network.
< murchandamus> gmaxwell: It only fails if it doesn't find a transaction input set within the number of estimated inputs from the previous tries
< morcos> sipa: yeah i was thinking that
< instagibbs> I could always just make it ignore negative effective value
< morcos> gmaxwell: disagree
< instagibbs> ...
< instagibbs> lol
< instagibbs> I mean either it's a feature, and we shouldnt fix it, or a bug and we should
< morcos> instagibbs: yeah exactly
< gmaxwell> It can be both! :)
< murchandamus> gmaxwell: I meant "should" as in when you calculate the effective value of a UTXO, if it is a negative effective value it shouldn't be selected
< instagibbs> aspirational should?
< sdaftuar> so there's a difference here between generally factoring in feerates in the coin selection, and throwing out inputs that have negative value. i assume that's what we're getting at?
< murchandamus> gmaxwell: actually yes
< gmaxwell> It's a feature when it's mild and happening during times of low feerates, and a bug when it's severe and happening during high feerates.
< murchandamus> current behavior would select them
< instagibbs> gmaxwell, fair enough...
< wumpus> yes, eating utxos with negative value cleans up the utxo set
< morcos> gmaxwell: right now when feerates vary from 10 to 200 sat/byte. something that is 0 at 200 sat/byte should be cleaned up at a lower fee rate, not thrown away
< gmaxwell> Fixing it unconditionally without doing something about dust cleanup may be quite harmful to the network.
< wumpus> of course it's better to not create them in the first place
< morcos> wumpus: yes! see #9343
< gribble> https://github.com/bitcoin/bitcoin/issues/9343 | Dont create change at dust limit by morcos · Pull Request #9343 · bitcoin/bitcoin · GitHub
< murchandamus> morcos: That's what I meant
< instagibbs> ^I should review that one
< morcos> i'm not sure that's the only way they are created, perhaps we could do more to avoid creating them also
< gmaxwell> morcos: well they're created by people being buttheads.
< morcos> i should say, i'm sure thats not the only way
< instagibbs> I noticed in current logic we create near-dust, but when we're modifying change, we have a much higher bar to clear
< gmaxwell> and no amount of fixing the wallet will prevent their creation.
< instagibbs> we should sync this
< wumpus> yes, we should get that one in
< wumpus> gmaxwell: well not creating them ourselves is not a total solution, but it helps
< gmaxwell> wumpus: absolutely, sorry if it sounded like I said otherwise.
< morcos> gmaxwell: yeah it's a good question how many are created unintentionally vs intentionally
< morcos> it wouldn't be unreasonable to raise the limit for intentionally creating them in Core, in addition to improving unintentional behavior
< gmaxwell> and at least as far as the "some anonymous third party sends you a few bitcents" I think it's fine to spend those at a slight loss, especially if it's in a privacy-preserving way, and double especially if it's at as low a fee rate as you expect to see.
< murchandamus> morcos: Core never creates change outputs smaller than 0.01 BTC unless the wallet is being almost depleted with the transaction, right?
< morcos> murchandamus: well.. that's the design goal, i don't think it's achieved
< instagibbs> it's not achieved at all
< instagibbs> sad!
< gmaxwell> what morcos said
< gmaxwell> well it's better since 0.13.whatever when we fixed some things.
< gmaxwell> with the target/2 checks.
< murchandamus> I'll have to check out the linked issue
< morcos> in particular, if you have nTotalLower < target + CENT, you aim for target, and almost definitely create some stupidly small change
< morcos> among other possibilities
< murchandamus> morcos: Ah right, I forgot about the pre-selection before knapsack
< morcos> the linked issue is even more edge case
< murchandamus> thanks for the reminder
< morcos> but like nTotalFee needs to go away before it's easy to clean up the general case
< morcos> easIER
< murchandamus> morcos: I think the way to go would be to dissect and modularize the coin selection out of wallet.cpp, if that's possible
< murchandamus> right now it is such a moloch
< instagibbs> I mean you can basically just throw away anything at or below SelectCoins, imo
< wumpus> murchandamus: yes please
< gmaxwell> Moloch whose mind is pure machinery!
< murchandamus> I think instagibbs and I might coordinate something there, and jnewberry was also interested AFAIK
< gmaxwell> can we new-subject? good discussion on this, lots of PRs for people to look at and discuss more. :)
< instagibbs> morcos is also interested
< instagibbs> yes I'm satisfied, we can continue offline
< murchandamus> thanks, me too
< wumpus> #topic running utxo commitments (sipa)
< sipa> ok
< gmaxwell> instagibbs: in any case, that one concern: that we might 'fix' what is dedusting the UTXO is the only reason I didn't ACK your patch. So we should try to get confidence there or add some other fix for that.
< instagibbs> gmaxwell, my first iteration didn't even "fix" it :P
< * jonasschnelli> think we should have a graphical 3D "real coins" (in the size of the BTC value) manual drag'n'drop coin selection
< sipa> so, gmaxwell and i have been thinking about the possibility of maintaining a UTXO commitment hash all the time
< sipa> this would be useful for making gettxoutsetinfo instantaneous, or for syncing from someone else's UTXO set, or as the basis for a softfork later
< wumpus> yes, that would be useful
< sipa> and it seems it would be possible to have an implementation that does this at a cost of a few microseconds per input and per output
< gmaxwell> A first requirement for any kind of UTXO assumevalid is having a continuous commitment value to simplify review.
< sipa> in a cryptographically secure way
< wumpus> jonasschnelli: that's pretty much just "a better GUI for coin control" right?
< gmaxwell> Effectively, we construct an incremental unordered hash for sets-- so you can add and remove entries one at a time... Because just one scheme isn't enough, we actually constructed several, and now have a fun tradeoffs challenge to decide between them.
< sipa> there are a few different possible implementations (one is based on multiplying big numbers mod a big prime, one is based on EC math, one is based on adding large hashes together)... with different performance and security tradeoffs, i'll send a mail about it to the ML soon
< BlueMatt> ah, ok, was gonna ask what the design was, cool
< instagibbs> i love the alternative explanations...
< BlueMatt> i mean we can also not use complicated solutions and be willing to do it in the background at a cost of however many milliseconds later
< BlueMatt> dunno how complicated your proposal is, of course
< sipa> BlueMatt: map every UTXO to either a 3072-bit integer and multiply those for all outputs, or to an EC point and add all those for all outputs
< sipa> the multiplication approach is faster, but harder to cache
< gmaxwell> The strength of our proposals is strictly better than the discrete log assumption. Neither is especially complex, though the multiplying one is probably simpler for joe-fool to implement as long as they don't care about performance.
< sipa> the EC approach could allow us to cache the effect of a single transaction, and then instantly apply it to the running commitment
< sipa> hahaha
< wumpus> interesting proposal
< sipa> so, even though 5us per input/output may not seem much, it's several hours of CPU time for a node syncing from scratch
< BlueMatt> sipa: hmm, i assume you have a pointer to a crypto accumulator somewhere?
< BlueMatt> paper*
< BlueMatt> ehh, I'll just wait for your ml post
< sipa> BlueMatt: it's a really uninteresting accumulator, as it can't be used to prove anything
< sipa> i'll include some references
< gmaxwell> BlueMatt: there have been several papers on related schemes-- however. Of course, the papers ignore the performance considerations. And especially in our case we have tradeoffs around block validation latency.
< instagibbs> I still want compact proofs, get back to work.
< sipa> also, the DL approach would mean we need an implementation of a fast multiplication mod a (fixed) prime, or a GMP dependency
< instagibbs> (ducks)
< sipa> instagibbs: so one advantage that these things have is that they're not incompatible with other UTXO/TXO commitment approaches
< gmaxwell> in any case, ML post will talk about the tradeoffs and schemes.
< cfields> is ordering relevant? any effects on parallel validation/caching?
< sipa> cfields: it's 100% parallellizable
< gmaxwell> cfields: they're totally independent of ordering.
< wumpus> if at least it can be done in parallel with some of the i/o (e.g. database lookups) it wouldn't have to add that much to the total validation time
< cfields> whew
< instagibbs> sipa, ack
< gmaxwell> And as sipa is pointing out, unlike other utxo commitments they do not break STXO-like schemes where you have nodes that don't store the whole utxo set.
< sipa> ordering is irrelevant... it's effectively a set commitment that is homomorphic wrt to set union and subtraction
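A toy sketch of the "multiply big numbers mod a big prime" variant described above: each UTXO hashes to a group element, adding a UTXO multiplies it into a running product, and removing it multiplies by the modular inverse, so the result is independent of insertion order. The modulus, serialization, and hash-to-element mapping below are simplified assumptions for illustration (a real scheme would use a much larger, e.g. 3072-bit, modulus), not the parameters of any actual proposal.

```python
# Illustrative only: an incremental, order-independent set hash in the spirit of
# the "multiply big numbers mod a big prime" scheme discussed above.
# The modulus and hash-to-element mapping are toy placeholders.
import hashlib

PRIME = 2**521 - 1  # a known Mersenne prime; a real scheme would use ~3072 bits

def utxo_to_element(outpoint: bytes, coin: bytes) -> int:
    """Map a serialized UTXO to a nonzero element of the group mod PRIME."""
    digest = hashlib.sha512(outpoint + coin).digest()
    return int.from_bytes(digest, 'big') % (PRIME - 1) + 1

class RunningUtxoHash:
    def __init__(self):
        self.acc = 1  # commitment to the empty set

    def add(self, outpoint: bytes, coin: bytes) -> None:
        self.acc = self.acc * utxo_to_element(outpoint, coin) % PRIME

    def remove(self, outpoint: bytes, coin: bytes) -> None:
        # Multiplying by the modular inverse undoes add(); order never matters.
        # (pow with a negative exponent requires Python 3.8+.)
        self.acc = self.acc * pow(utxo_to_element(outpoint, coin), -1, PRIME) % PRIME

    def digest(self) -> bytes:
        acc_bytes = self.acc.to_bytes((PRIME.bit_length() + 7) // 8, 'big')
        return hashlib.sha256(acc_bytes).digest()
```

Adding a block's new outputs and removing its spent inputs, in any order, yields the same digest, which is what makes an always-maintained commitment cheap to update incrementally.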
< sipa> but if we have this, we could for example log the UTXO commitment hash in the UpdateTip debug.log lines
< BlueMatt> gmaxwell: well the validation latency tradeoffs are mostly removed by committing to the previous block's utxo commitment, no?
< BlueMatt> is there some reason we should avoid doing so?
< gmaxwell> So the applications I see for this are: an instant gettxoutsetinfo, being able to have updatetip log the UTXO state (making checking nodes much easier), a start on an ability to do an ASSUMEVALID UTXO sync. ('start on' because there is a security/philosophy debate about whether the value must also be committed to the chain if we're to do an assumevalid-like sync)
< sipa> BlueMatt: well we're not even talking about committing to it in blocks (though that is an interesting possibility later)
< BlueMatt> sipa: great! so let's not do it inline and do it in the background later
< sipa> BlueMatt: and delayed commitments are more complicated if you actually want the latency reduction... you need a backlog and background processing
< BlueMatt> as long as its under 100ms rpc is still instant
< BlueMatt> ish
< gmaxwell> oh well none of this is anywhere near that slow in any case.
< sipa> which would interfere with CPU demand for signature validation
< sipa> gmaxwell: if we need an mod inverse in the RPC, it could be 10ms or so with a naive implementation
< gmaxwell> I think with suitable caching we're talking about worst case impact, if done inline, on the order of 10ms.
< sipa> anyway, not that much more to say about it
< BlueMatt> i think we can live with 10ms :p
< BlueMatt> anyway, looking forward to the ml post :)
< wumpus> #topic PRs high priority for review
< sipa> i just wanted to give a heads up
< BlueMatt> (in an rpc that would otherwise block cs_main...)
< sipa> BlueMatt: that 10ms can even run without cs_main
< instagibbs> a less-extreme wallet PR for consideration to 0.15: #10333
< gribble> https://github.com/bitcoin/bitcoin/issues/10333 | [wallet] fee fixes: always create change, adjust value, and p… by instagibbs · Pull Request #10333 · bitcoin/bitcoin · GitHub
< instagibbs> there are still n > 1 reports of extremely high feerates with large num inputs
< BlueMatt> sipa: thats my point :p
< instagibbs> they seem to match up with this case
< wumpus> instagibbs: added 0.15 tag
< instagibbs> thanks
< gmaxwell> Where are we with multiwallet?
< wumpus> there were plenty of review comments on luke-jr's pull, but he hasn't addressed them yet AFAIK
< gmaxwell> I haven't been following it closely because I've been more focused on per-txo/the above commitment stuff/etc.
< gmaxwell> luke-jr: ^ plz.
< jonasschnelli> #8694 still needs rebase
< gribble> https://github.com/bitcoin/bitcoin/issues/8694 | Basic multiwallet support by luke-jr · Pull Request #8694 · bitcoin/bitcoin · GitHub
< luke-jr> wumpus: it's pending on jtimon's PR
< wumpus> luke-jr: which one?
< wumpus> I merged a few jtimon PRs this week
< luke-jr> #9494
< gribble> https://github.com/bitcoin/bitcoin/issues/9494 | Introduce an ArgsManager class encapsulating cs_args, mapArgs and mapMultiArgs by jtimon · Pull Request #9494 · bitcoin/bitcoin · GitHub
< luke-jr> which actually looks mergable?
< wumpus> ok, seems that one is already in high priority for review
< sipa> wumpus: yeah, i marked it so last week
< luke-jr> reason being 8694 touches mapMultiArgs
< wumpus> luke-jr: good to know, will take a look at it soon and merge it
< luke-jr> rather than avoid that, it seems better to just fix the locking for it
< wumpus> yes
< wumpus> sipa: thanks
< jonasschnelli> Also, consider reviewing HD-Auto-Restore #10240 (it's currently not in high prio), we should have this in 0.15, otherwise users need to do loop10000(getnewaddress), rescan(genesis) to restore funds.
< gribble> https://github.com/bitcoin/bitcoin/issues/10240 | Add HD wallet auto-restore functionality by jonasschnelli · Pull Request #10240 · bitcoin/bitcoin · GitHub
< wumpus> jonasschnelli: added 0.15 milestone
< gmaxwell> jonasschnelli: I'd missed that you got that going. good to hear.
< jonasschnelli> I'll rebase and fix the points soon.
< gmaxwell> I'll take a look at it after I finish my code review on some other PRs that I'm currently working on.
< jonasschnelli> thanks gmaxwell
< instagibbs> 3 minutes
< sipa> my #1 is still #10195 :)
< gribble> https://github.com/bitcoin/bitcoin/issues/1 | JSON-RPC support for mobile devices ("ultra-lightweight" clients) · Issue #1 · bitcoin/bitcoin · GitHub
< gribble> https://github.com/bitcoin/bitcoin/issues/10195 | Switch chainstate db and cache to per-txout model by sipa · Pull Request #10195 · bitcoin/bitcoin · GitHub
< sipa> eh
< sipa> ah!
< wumpus> yep
< wumpus> #endmeeting
< lightningbot> Meeting ended Thu May 11 19:58:15 2017 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
< jonasschnelli> wumpus: The 3D graphical coin selection was a joke.. though, for educational use, it would be great
< wumpus> jonasschnelli: yes I like the idea, I'd probably use it, I currently always use manual coin selection already
< jonasschnelli> You have a wallet, then you 3d-pick some coins... and you get change back into your wallet
< jonasschnelli> Yes. I personally always do manual coin selection.
< jonasschnelli> But not sure about the majority of users. :)
< wumpus> jonasschnelli: I think it was in Milan that someone showed some wallet app that had 2d physics effects with coins
< instagibbs> I've never done coin selection personally
< luke-jr> 2D would be enough
< wumpus> don't remember which one, but it looked quite funny
< jonasschnelli> luke-jr: Today everything needs to be 3D,... even 2D. :)
< gmaxwell> 2d is probably better, 3d illustrations often confuse people about volumes. But if you show more valuable coins as bigger that will create a false impression that bigger coins cost more to spend, no?
< instagibbs> simply tossing current coin selection and adding random selection would probably be a huge plus sadly.
< luke-jr> jonasschnelli: ☹
< wumpus> I don't think it's *useful* for most people but those things can be fun to play with and people learn how it works
< jonasschnelli> 3D would be in a form you wouldn't recognize. The depth and shadows only slightly change when you drag the coins.
< wumpus> jonasschnelli: oh, so no tetris blocks? :-)
< jonasschnelli> heh
< jonasschnelli> Somehow I feel that if – from the beginning – coin selection had been a "manual process", people would understand Bitcoin transactions better.
< wumpus> plane tiling puzzle but with the goal to get as close as possible to the spend value :p
< jonasschnelli> Also the "feerate" problem.. that it's not absolute.
< wumpus> jonasschnelli: I tend to agree
< gmaxwell> A better visualization might be be the coins being sized based on how much weight they add to the txn, and different metals based on value. :P
< jonasschnelli> Automatic coin selection may be something large wallets / exchanges are using.
< jonasschnelli> gmaxwell: Oh. Different metals... I like this.
< instagibbs> "Silver, since when did I have Litecoin?"
< jonasschnelli> With a SW badge? :-)
< luke-jr> gmaxwell: yeah, I was thinking numbers rather than colours
< luke-jr> maybe both?
< wumpus> nice idea
< gmaxwell> luke-jr: well yea, both of course.
< luke-jr> gold for > 1 ᵐTBC, silver for > ᵗTBC, bronze for > 1 TBC, and "plastic" for smaller
< * luke-jr> hides
< gmaxwell> hah
< instagibbs> my p2p-fullblocks test is timing out on a couple of my machines... is this a known issue
< instagibbs> oh scratch that, this seems to be an assertion error
< gmaxwell> jonasschnelli: sipa: petertodd: luke-jr: Bitcoin-dev is as useful as we make it. There are certain reliable parties that will crap-post a reply to every useful idea posted to the list. When we respond directly point by point arguing their foolish positions, the list becomes useless for technical discussions.
< gmaxwell> I would strongly recommend that whenever you get a message from someone who has reliably made useless, progress-blocking responses, you do not respond to them point by point, but instead make sure their concerns are addressed in a message responding to a productive point raised by someone else.
< gmaxwell> Otherwise, the thread just becomes an endless argument over stupidity and people who are likely to make reasonable responses will ignore the thread.
< sipa> frequency of depths of blocks downloaded from my node over the past 3 months: http://bitcoin.sipa.be/depths.png