< jonasschnelli>
travis cancels jobs: " This job ran on our Trusty, sudo: required environment which will be updated on Wednesday, June 21st. Please add group: edge to your .travis.yml file to try the new images and check our blog for more details about this update."
< bitcoin-git>
[bitcoin] MarcoFalke opened pull request #10636: [qa] util: Check return code after closing bitcoind proc (master...Mf1706-qaTraceback) https://github.com/bitcoin/bitcoin/pull/10636
< sturles>
If I add a watchonly address to Bitcoin Core, it receives a transaction output, and I later add the private key for the same address, will the output become spendable without a rescan?
< ryanofsky_>
sturles, it should unless there is a bug
< sturles>
ryanofsky_: Cool! I'll try to use it with pull request #9728 in a semi-hot wallet: just add private keys as required to fill it up, avoiding a bitcoin transaction to refill the hot wallet.
< instagibbs>
sturles, if you're doing p2sh you may need to pass the redeemscript as well
< sipa_>
cfields: any idea how it is possible that passing libbitcoin_util before libbitcoin_wallet on the linking cmdline works?
< instagibbs>
achow101, so your PR doesn't touch the looping or target behavior, just coin selection itself?
< achow101>
yes
< sipa_>
cfields: wallet.cpp uses FastRandomContext's constructor, which is defined in libbitcoin_util...
< achow101>
instagibbs: It needs to use effective values and access to fee rate so I only borrowed a few parts of your effective value PR for those
< cfields>
sipa_: they're all tangled up. iirc last time i checked, there was no ordering that wasn't circular.
< cfields>
i worked out how it was working at that point, let me refresh my memory
< sipa_>
but we're not passing any grouping to the linker, so it shouldn't be able to resolve circular dependencies?
< sipa_>
i can swap util and wallet
< sipa_>
but there are other pairs that i can't swap
< instagibbs>
achow101, trying to decide how dangerous that is on its own. If it results in no change, but too high of a fee due to the looping, we'll dump all that value into fee :(
< instagibbs>
knapsack specializes in exact matches, so red flags are going up for me
< sipa_>
instagibbs: the idea is that the extra fee will not be more than what it would cost to create + spend change
< instagibbs>
sipa_, hold on, let me get the issue I'm speaking of
< achow101>
instagibbs: so since the new coin selection is supposed to get "exact matches" more frequently, we could be running into the high fees problem?
< instagibbs>
if it loops a few times, failing to "get enough", then when it finally does, it's more likely to get an exact match and then vastly overpay fees. This tends to happen on wallets with lots of utxos
< instagibbs>
the PR you got the commits from should also fix this issue more holistically, but larger review surface
< cfields>
sipa_: just finished what I was working on. looking and refreshing now.
< achow101>
instagibbs: I thought about taking the entire rewritten loop from the effective value PR but I wasn't sure if that would break anything or use stuff that you changed that I didn't grab
< cfields>
sipa_: ah, i believe it works because of the libs already added. eg. FastRandomContext was already emitted by util for server
< cfields>
(I recently discussed a strategy for nuking wallet's deps on server with jonasschnelli, i believe he's begun PRing some of the necessary changes)
< jonasschnelli>
cfields: But that PR is a very first beginning... it would require much more work
< cfields>
jonasschnelli: sure. I just wanted to point out that there's at least _some_ desire to work out the dependency mess :)
< jonasschnelli>
Yes. Let's clean this up
< cfields>
jonasschnelli: what's the PR you've already opened? I need to make sure to review/ack
< gmaxwell>
instagibbs: at least based on the design I understand, when achow's patch does something there will be no looping, because by design it picked a solution that had enough fee.
< jonasschnelli>
cfields: #10517
< gribble>
https://github.com/bitcoin/bitcoin/issues/10517 | Factor out CCoinsView based AreInputsStandard/IsWitnessStandard by jonasschnelli · Pull Request #10517 · bitcoin/bitcoin · GitHub
< jonasschnelli>
(needs rebase)
< cfields>
jonasschnelli: thanks
< gmaxwell>
instagibbs: it takes the original target feerate as an argument and does all its calculations with effective fees, and only allows overpayment to the extent that it costs equal to or less than the future change spend that it avoided.
< instagibbs>
gmaxwell, ok let me re-read, because that's what my PR is supposed to be doing, heh
< gmaxwell>
instagibbs: no, your PR does it on everything, not just in the exact-match case, which is why I raised the dust inflation concern above (which it seems no one is interested in doing anything about...)
< gmaxwell>
This doesn't have the dust inflation concern, since it only has an effect when there is no change (thus no increase in the number of outputs).
< sipa_>
cfields: FastRandomContext iirc is only in libbitcoin_util... how is wallet able to use it? wallet is passed later on the cmdline
< gmaxwell>
To reiterate: the concern I have with your effective rate change is that it will make wallets unlikely to select low effective rate coins and never select negative effective rate coins, but still create change, which seems almost certain to cause an increase in UTXO set bloat. And this effect is mostly tangential to the problem you're trying to solve, which is related to non-convexity of the fee search.
< instagibbs>
gmaxwell, I removed that a long time ago
< instagibbs>
anyways, let me do a fuller review, I'm getting myself turned around
< cfields>
sipa_: it's used by net_processing in server, which marks it as undefined. Then util resolves it. Then it's already resolved for wallet.
< sipa_>
cfields: ugh
< sipa_>
ok
< gmaxwell>
instagibbs: ah, I had lost track of the fact that you had PRed a change that addressed the overpayment without doing the effective rate thing.
< achow101>
I'm confused :(
< instagibbs>
achow101, wallet code makes people scared and afraid, don't worry
< gmaxwell>
instagibbs: this new PR creates dust change. :(
< instagibbs>
gmaxwell, it tries "relatively hard" to avoid making something less than MIN_FINAL_CHANGE
< instagibbs>
unfortunately without something smarter we just have to pick something and shoot for it
< instagibbs>
open to ideas though
< achow101>
gmaxwell: which PR creates dust change? Mine?
< gmaxwell>
It could use murch's heuristic to cut that down further. (Murch's heuristic is that you can throw away change if it's less than FeeRate*(34+4+1+34+74), the cost to create and spend that change output.)
< instagibbs>
so there's the "I will never make an output this small" level which could be that ^
< gmaxwell>
(with adjustments for different script types)
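A minimal sketch of the cost-of-change threshold quoted above, using exactly the byte counts from gmaxwell's message; the type alias, function names, and feerate parameter are illustrative, not Bitcoin Core's actual wallet code:

```cpp
// Sketch only: Murch's heuristic as quoted above. Change that is worth less
// than the cost of creating the output now plus spending it later is better
// thrown away as fee.
#include <cstdint>

using CAmount = int64_t; // satoshis (illustrative alias)

// Byte counts exactly as quoted in the chat; real code adjusts per script type.
constexpr int BYTES_TO_CREATE_AND_SPEND_CHANGE = 34 + 4 + 1 + 34 + 74;

CAmount CostOfChange(CAmount feerate_per_byte)
{
    return feerate_per_byte * BYTES_TO_CREATE_AND_SPEND_CHANGE;
}

bool ShouldDropChange(CAmount change, CAmount feerate_per_byte)
{
    return change < CostOfChange(feerate_per_byte);
}
```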
< instagibbs>
then there's also the "I'm willing to try again"
< instagibbs>
and get something bigger or an exact match
< instagibbs>
toss vs retry, I mean
< gmaxwell>
instagibbs: the patch achow submitted is fairly close to guaranteed to give an 'exact' (uses that heuristic) match if one is possible.
< instagibbs>
so if we simply fail to get an exact match, we should likely grab some extra buffer that we want as change, whatever that is
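A hypothetical sketch of the "exact match" search being discussed, not achow101's actual PR: look for a subset of UTXOs whose sum lands in the window [target, target + cost_of_change], so that no change output is needed at all. The unbounded recursion and plain amounts are simplifications; a real selector would also cap the branches explored and use effective values (value minus the fee to spend each input):

```cpp
// Illustrative only: depth-first search with pruning for a changeless selection.
#include <cstdint>
#include <vector>

using CAmount = int64_t; // satoshis

// Returns true and fills `chosen` (indices into `utxos`) if some subset of the
// remaining coins, plus `selected`, sums to a value in
// [target, target + cost_of_change].
bool SelectExactMatch(const std::vector<CAmount>& utxos, size_t index,
                      CAmount selected, CAmount target, CAmount cost_of_change,
                      std::vector<size_t>& chosen)
{
    if (selected >= target && selected <= target + cost_of_change) {
        return true; // change would cost more than it is worth: take this set
    }
    if (selected > target + cost_of_change || index == utxos.size()) {
        return false; // overshot the window or ran out of coins: prune
    }
    // Branch 1: include utxos[index]
    chosen.push_back(index);
    if (SelectExactMatch(utxos, index + 1, selected + utxos[index],
                         target, cost_of_change, chosen)) {
        return true;
    }
    chosen.pop_back();
    // Branch 2: skip utxos[index]
    return SelectExactMatch(utxos, index + 1, selected, target, cost_of_change, chosen);
}
```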
< gmaxwell>
achow101: what instagibbs is talking about at the start of this conversation is related to how fees are handled by selecting coins, finding we need to pay a larger fee than we selected, then going back with a bigger target. What can happen now is that on one iteration you select a zillion inputs, decide you need a lot of fee, then go back, but this time your higher target means you select only a couple inputs, but you got close enough to your fee-inflated target that you do not include a change output... and then you overpay fee.
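To make the failure mode concrete, here is a tiny worked example with entirely made-up numbers (nothing here is real wallet output):

```cpp
// Hypothetical numbers only, illustrating the overpayment described above.
#include <cstdint>
#include <iostream>

int main() {
    const int64_t amount    = 1'000'000; // sat to send
    const int64_t fee_pass1 = 150'000;   // fee estimated for a first pass that picked many small inputs
    // Second pass: the target is inflated to amount + fee_pass1 = 1,150,000 and
    // is met by just two large coins, which would only need ~20,000 sat of fee.
    const int64_t selected       = 1'151'000;
    const int64_t fee_needed_now = 20'000;
    const int64_t change   = selected - amount - fee_pass1; // 1,000 sat: dust, dropped to fee
    const int64_t fee_paid = selected - amount;             // 151,000 sat actually paid
    std::cout << "leftover change: " << change << " sat (too small to keep)\n";
    std::cout << "fee paid: " << fee_paid << " sat vs fee needed: " << fee_needed_now << " sat\n";
}
```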
< gmaxwell>
achow101: and I think what you are intending to accomplish is completely orthogonal to that defect. (perhaps the implementation isn't-- but the idea is)
< achow101>
gmaxwell: oh, ok.
< achow101>
gmaxwell: as the implementation is right now, I don't think it will exit the loop if a BnB algo passes though
< achow101>
also, something else to consider is if the BnB algo fails on one loop so it falls back to the original algo, but then on the next pass (with the fee retargeting thing) the BnB algo passes. I'm not sure what would happen in that case
< instagibbs>
lots of fee, I think
< gmaxwell>
achow101: that's the "perhaps the implementation isn't-- but the idea is" part. Probably the BnB should just be skipped after the first iteration.
< gmaxwell>
instagibbs: I think your 10333 is more attractive in light of achow's PR, assuming they were correctly combined. Since his PR will do a pretty good job of finding a changeless solution, if there is one, I think it's fine to make the rest of the behavior just assume there will be change. Though perhaps I gave achow bad advice on the integration. I suggested just replacing out the selectcoins part in order to avoid having to handle coincontrol and fee-from-amount.
< gmaxwell>
combined naively it will be pretty bad.
< cfields>
jonasschnelli: ping. still around?
< gaf_>
is there any reason to worry about segwit2x being at about 77%~, or does nobody care because it doesn't affect anything?
< instagibbs>
gmaxwell, if we limit the BnB stuff to the first step, this removes any need for 10333 to correct any additional bad behavior
< ProfMac>
If not here, where can I discuss this?
< ProfMac>
I have a set of directories, sandbox-1493974463, sandbox-1494226733, and the like. I have set up a git server and put the earliest sandbox under git control following, e.g., https://git-scm.com/book/en/v2/Git-on-the-Server-Setting-Up-the-Server. Now I want to move to the next sandbox. I assume I need to do git {init, remote add, ...} but I am a little wobbly about "pull". The goal is to add the changes in the 2nd sandb
< bitcoin-git>
[bitcoin] practicalswift opened pull request #10638: [rpc] Clarify pblock assumption via an assertion (master...nnonce) https://github.com/bitcoin/bitcoin/pull/10638
< gmaxwell>
instagibbs: hm? perhaps I don't get your comment. Are you saying 10333 is not needed post achow making his pr run only on the first step?
< instagibbs>
gmaxwell, I don't think it's needed for the first BnB run, but just as needed otherwise as before
< gmaxwell>
instagibbs: okay agreed.
< gmaxwell>
instagibbs: the question I have is, with all those changes, should the exact match attempt just be taken out of the loop? It will mean duplicating recipient-pays-fees logic.
< gmaxwell>
instagibbs: I was also thinking of another strategy for your loop logic that I think may be better and more clear: target=amount; 1. Assume change. Attempt selection. If it fails, is there a saved solution? If so, return that. Compute fees; if they're not covered, increase target to the current fees, goto 1. Can change be eliminated (< murch heuristic)? If so, do so and return the solution. If not, is the change above MIN_CHANGE? If so, then stop and take that one. If not, save the solution, set target=max(amount+fee+min_change, target*1.01), and goto 1.
< gmaxwell>
so basically it guarantees the target goes up with every iteration, and only gives up and uses the dusty solution if it's already tried raising the target beyond the wallet's capacity.
< gmaxwell>
it doesn't make any real attempt to avoid change, since I assume achow101's first step handles that.. though if it happens to find itself with an acceptably changeless solution then it will accept it.
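Below is a toy, self-contained sketch of the loop strategy just described. The selector, fee estimate, constants, and byte counts are all stand-ins (this is not Bitcoin Core's real CreateTransaction); only the control flow follows the outline: the target grows every iteration, a dusty-but-viable solution is saved as a fallback, and change below the cost-of-change heuristic is dropped to fee:

```cpp
#include <algorithm>
#include <cstdint>
#include <optional>
#include <vector>

using CAmount = int64_t; // satoshis

static const CAmount MIN_CHANGE = 100000; // illustrative floor for a "real" change output
static const CAmount FEERATE = 50;        // sat/byte, illustrative

struct Solution {
    std::vector<CAmount> inputs;
    CAmount change = 0; // 0 means no change output
};

static CAmount Sum(const std::vector<CAmount>& v) { CAmount s = 0; for (CAmount x : v) s += x; return s; }

static CAmount EstimateFee(size_t num_inputs) {
    return FEERATE * (num_inputs * 148 + 34 + 10); // rough P2PKH sizes, illustrative only
}

static CAmount CostOfChange() {
    return FEERATE * (34 + 4 + 1 + 34 + 74); // the figures quoted earlier in the chat
}

// Toy selector: largest-first until the target is met; fails if the wallet can't reach it.
static bool SelectLargestFirst(std::vector<CAmount> coins, CAmount target, std::vector<CAmount>& out) {
    std::sort(coins.rbegin(), coins.rend());
    out.clear();
    CAmount total = 0;
    for (CAmount c : coins) {
        out.push_back(c);
        total += c;
        if (total >= target) return true;
    }
    return false;
}

static std::optional<Solution> CreateTransactionLoop(const std::vector<CAmount>& wallet, CAmount amount) {
    std::optional<Solution> saved; // viable solution whose change was below MIN_CHANGE
    CAmount target = amount;
    while (true) {
        Solution sol;
        if (!SelectLargestFirst(wallet, target, sol.inputs)) {
            return saved; // wallet exhausted: fall back to the saved dusty solution, if any
        }
        const CAmount fee = EstimateFee(sol.inputs.size());
        const CAmount total = Sum(sol.inputs);
        if (total < amount + fee) { // fees not covered: raise the target and retry
            target = amount + fee;
            continue;
        }
        sol.change = total - amount - fee;
        if (sol.change < CostOfChange()) { // change can be eliminated: drop it to fee
            sol.change = 0;
            return sol;
        }
        if (sol.change >= MIN_CHANGE) return sol; // comfortably sized change: done
        saved = sol;                              // dusty but viable: remember it, push the target up
        target = std::max(amount + fee + MIN_CHANGE, target + target / 100); // mirrors target*1.01
    }
}
```

On the iteration where the wallet can no longer meet the ever-growing target, the selector fails and the saved, slightly dusty solution from the previous pass is returned, matching the fallback behaviour described in the next few messages.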
< instagibbs>
"should the exact match attempt just be taken out of the loop" had this exact thought. It's something to look at imo.
< instagibbs>
what would a "saved solution" be in this case?
< gmaxwell>
saved solution would just be the transaction, one where the solution was viable but had non-eliminatable change under the min_change target.
< gmaxwell>
so we keep looking (with ever higher targets) after it, hoping to get something that passes, but if we don't we can return the dusty one.
< instagibbs>
ah, so: if < dust, accept; if < min_change, save and try again higher; loop
< gmaxwell>
yea, so long as the target increases enough each iteration, the loop will terminate in acceptable time.
< gmaxwell>
so, e.g. when the wallet doesn't have enough in it to meet min_change you'll go for that the next iteration, find your selection fails, then take the slightly dusty one from the last go.