< wallet42> What are the counterarguments to having a switch where the miner can set aside some vbytes (10-50K) in a getblocktemplate that will include low-fee, oldest-first transactions instead of the highest paying?
< sipa> wallet42: complicates the block construction code significantly, would likely not be used by a significant portion of the hashrate
< sipa> further, having it available even if it was used wouldn't really benefit anyone; it's essentially introducing a lottery in the mining construction from the perspective of wallets
< Provoostenator> I was thinking about creating an RPC method to add a new destination + amount to an existing RBF transaction. Or to add an "append-tx" argument to sendtoaddress. Has anyone worked on that in the past?
< Provoostenator> (or explained why that's a terrible idea)
< sipa> Provoostenator: i've thought a bit about a more invasive idea that's related
< sipa> i think it would be nice to have in the wallet, independent from the sendtoaddress/sendmany/... RPCs, a list of "outputs i would like to pay to" (with address/amount)
< sipa> and any time that list is changed, or a transaction confirms, a new single transaction is constructed that pays out to all the entries in that list that haven't been paid to already
< Provoostenator> And then have the wallet figure out if a new transaction is cheaper than amending one?
< sipa> no, there is always just a single outstanding unconfirmed transaction
< sipa> for any change, that transaction is modified
< Provoostenator> That makes sense.
< sipa> the cost of bumping for RBF reasons is usually trivial compared to its actual fee
< sipa> (just add 1sat/vbyte)
< Provoostenator> Right, unless it's a very large transaction. But that would be trivial to check.
< sipa> there are complications to this... for example you need coin selection that guarantees conflicts with all previously attempted unconfirmed transactions
< sipa> or you may need to pre-sign multiple candidate follow-up transactions based on which of your earlier attempts confirms
< sipa> (as you can't automatically get access to encrypted keys later)
< sipa> and it likely conflicts with how people actually use bitcoin in the real world now - where they expect a txid to be something you can track
< Provoostenator> Oh, so in that case you'd use SIGHASH flags?
< sipa> just predict the possible scenarios, and sign a full follow-up transaction for each (or at least for the few most likely ones)
< aj> sipa, wallet42: can't you just use prioritisetransaction rpc on the 50 oldest/whatever transactions to get that behaviour anyway?
< Provoostenator> Although it sounds more elegant, I don't think I can pull that off, yet :-)
< sipa> aj: yes
< aj> sipa: predict the possible scenarios? should be called fund-xanatos-transaction...
< sipa> xanatos?
< sipa> Provoostenator: well i think this would be pretty much independent from all existing transaction logic
< sipa> it'd reuse coin selection, but that's it :)
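A minimal sketch of the "payment queue" idea sipa describes above, driven purely over RPC. Everything here is an assumption for illustration: python-bitcoinrpc for the connection, the PaymentQueue class and its method names, and the 0.15.x-era signrawtransaction call. A real version would also have to bump the feerate on each replacement (BIP 125) and deal with encrypted wallets by pre-signing follow-ups, as discussed above.

```python
# Sketch only: keep a list of "outputs I would like to pay to" and maintain a
# single outstanding unconfirmed transaction that pays all unpaid entries.
from decimal import Decimal
from bitcoinrpc.authproxy import AuthServiceProxy  # python-bitcoinrpc (assumed)

rpc = AuthServiceProxy("http://user:pass@127.0.0.1:8332")

class PaymentQueue:
    def __init__(self):
        self.pending = {}             # address -> amount not yet confirmed-paid
        self.outstanding_txid = None  # the single unconfirmed transaction, if any

    def add(self, address, amount):
        self.pending[address] = self.pending.get(address, Decimal(0)) + Decimal(amount)
        self.rebuild()

    def on_confirm(self, txid):
        # Called when a transaction confirms; everything it paid is now settled.
        if txid == self.outstanding_txid:
            self.pending.clear()
            self.outstanding_txid = None

    def rebuild(self):
        if not self.pending:
            return
        inputs = []
        if self.outstanding_txid is not None:
            # Reuse the previous transaction's inputs so the replacement conflicts
            # with it (BIP 125); a real implementation must also pay a strictly
            # higher fee than the transaction it replaces.
            prev = rpc.getrawtransaction(self.outstanding_txid, True)
            inputs = [{"txid": vin["txid"], "vout": vin["vout"]} for vin in prev["vin"]]
        raw = rpc.createrawtransaction(inputs, self.pending)
        funded = rpc.fundrawtransaction(raw, {"replaceable": True})
        signed = rpc.signrawtransaction(funded["hex"])   # 0.15.x-era RPC name
        self.outstanding_txid = rpc.sendrawtransaction(signed["hex"])
```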
< wallet42> sipa: are the wallet RPC capabilities currently enough to pull off something like that with an external python app?
< wallet42> As a prototype
< sipa> yes, just use prioritisetransaction
< sipa> and getrawmempool
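For the prototype wallet42 is asking about, aj's prioritisetransaction suggestion could look roughly like this. python-bitcoinrpc is assumed, the 5 sat/byte threshold and the 50-transaction cut-off are arbitrary, and the field names match the 0.15.x getrawmempool output.

```python
# Rough prototype of "include the oldest low-fee transactions": bump their
# apparent fee locally so this node's getblocktemplate selects them.
from bitcoinrpc.authproxy import AuthServiceProxy  # python-bitcoinrpc (assumed)

rpc = AuthServiceProxy("http://user:pass@127.0.0.1:8332")

mempool = rpc.getrawmempool(True)   # txid -> {"fee", "size", "time", ...}
low_fee = {txid: e for txid, e in mempool.items()
           if float(e["fee"]) * 1e8 / e["size"] < 5}   # under ~5 sat/byte
oldest = sorted(low_fee, key=lambda txid: low_fee[txid]["time"])[:50]

for txid in oldest:
    # Second argument is a compatibility dummy (must be 0); the third is a fee
    # delta in satoshis that only affects this node's mempool/block selection.
    rpc.prioritisetransaction(txid, 0, 10000000)
```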
< sipa> ugh, master allocates enormous amounts of memory for me while syncing from scratch from network
< sipa> 0.15.1 with the same configuration works fine
< sipa> (it made my system with 32G RAM OOM)
< Provoostenator> sipa: any way Travis could catch something like that?
< Provoostenator> Or does it take too long before memory usage gets out of hand?
< sipa> it OOMed when my debug.log said the dbcache was 1.7G
< sipa> which is substantial but not excessive
< sipa> and the overhead on top of what the dbcache predicts should only be a few hundred MB
< Provoostenator> sipa: the second log link in the channel topic seems a bit stale...
< aj> Provoostenator: yeah, the cron job broke and i haven't updated it since botbot seems better
< Provoostenator> Any idea why "test/functional/test_runner.py --loglevel=INFO bumpfee" doesn't result in "Mining blocks..." appearing in my console?
< aj> Provoostenator: okay cron job fixed, that wasn't so hard
< Provoostenator> aj: nice, that seems to work
< Provoostenator> Is anyone timestamping them?
< aj> Provoostenator: each line has a timestamp :) but if you mean cryptographically, no
< morcos> sipa: that sounds bad
< morcos> re: wallet replacement, i think privacy concerns would also be substantial
< morcos> you can assume chain analysis companies would start tracking unconfirmed txs, and use that to further group outputs as related
< Provoostenator> That's one reason I would make this RPC-only initially.
< Provoostenator> morcos: but does it really matter much privacy wise if you spend the next transaction from your change or by amending the original transaction?
< morcos> Provoostenator: well if you spend from your change, it MAY be unclear whether it was your change or the person you paid spending it (agreed that right now change is probably pretty easily identifiable though)
< morcos> also if you spend unconfirmed, then you have a good point
< Provoostenator> I can see a use case for low priority and smaller transactions, e.g. paying back friends, where you intentionally set the fee too low, so you can keep amending it. A poor man's payment channel. Obviously there's a privacy cost.
< morcos> or rich man's payment channel as the case may be. :)
< Provoostenator> "TestFramework (INFO): Mining blocks..." does show up in the test log file.
< Provoostenator> Even stranger given that the default is INFO.
< Provoostenator> Ah, so I need to call "test/functional/bumpfee.py --loglevel=INFO" for this to work.
< bitcoin-git> [bitcoin] joemphilips opened pull request #11770: [REST] add a rest endpoint for estimatesmartfee, docs, and test (master...rest_fee) https://github.com/bitcoin/bitcoin/pull/11770
< promag> archaeal_: if it was me, I would have the PR title: Add REST endpoint blockhash
< promag> then one commit for the implementation, one for the test and another for the doc
< promag> ^ 11765
< archaeal_> i see...yeah that makes sense
< Lauda> sipa I can confirm very high memory usage. I had a fresh sync (albeit a fork of 0.15.x) and it was using over 6 GB with 4GB dbcache and swap was 100% utilized.
< bitcoin-git> [bitcoin] jnewbery opened pull request #11771: [tests] Change invalidtxrequest to use BitcoinTestFramework (master...refactor_invalidtxrequest) https://github.com/bitcoin/bitcoin/pull/11771
< bitcoin-git> [bitcoin] jnewbery closed pull request #11047: [tests] rename functional tests (master...rename_functional_tests) https://github.com/bitcoin/bitcoin/pull/11047
< morcos> sipa: have a second to talk about SW wallet?
< morcos> i thought one of the cases we wanted to cover (which is mostly a version of 2 i think) is that you have a backup of a 0.15.0 wallet now, and you addwitnessaddress, but then later you have to restore from backup
< morcos> this is what the 2(a) case was meant to handle
< morcos> i dont think i particularly care if we don't handle that in 0.16, but it does seem like at some point we should handle that... with SW out, people are going to be doing that workflow.
< morcos> never mind, i didn't read carefully enough.. the 1(a) case handles that if you are running 0.16, its only the downgrade case that isn't handled
< morcos> i think that is good enough
< sipa> morcos: ok
< morcos> i'm fine with 1(a), 2(c), 3(a). But I wonder if there is some way we can warn the user about Problem 2, like could the backup have a higher wallet version (ick)?
< sipa> right, that would be one solution
< sipa> but it's hard to prevent backups made by just copying the file
< sipa> you could have some system where a wallet is 'loaded' into a node, and the wallet file itself becomes marked as a backup, but something else in the node's configuration contains knowledge to avoid treating it as a backup
< sipa> but... yuck
< morcos> well the good thing is there is a workaround right.. so maybe warnings are enough, saying that if you want to downgrade and you only have a backup, you first need to run 0.16 with the backup and make a fresh backup, and then you can downgrade
< morcos> and that is true regardless of whether your backup is from 0.15 or 0.16
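As a concrete illustration of that workaround (python-bitcoinrpc assumed, path is a placeholder): with the old backup loaded as the wallet on a node running 0.16, take a fresh backup before downgrading again.

```python
# Illustration only: refresh an old backup under a segwit-aware (0.16) node,
# per morcos' workaround above, so the new backup file can be used afterwards.
from bitcoinrpc.authproxy import AuthServiceProxy  # python-bitcoinrpc (assumed)

rpc = AuthServiceProxy("http://user:pass@127.0.0.1:8332")   # node running 0.16
rpc.backupwallet("/backups/wallet-refreshed-by-0.16.dat")   # placeholder path
```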
< BlueMatt> sipa: I really cant say I'm a fan of 2c
< sipa> the alternatives aren't great either though
< BlueMatt> sipa: what did you think of my suggestion of a segwit-wallet-bestblockhash field
< BlueMatt> in addition to wallet-bestblockhash
< BlueMatt> which allows you to rescan for segwit addresses if you downgraded to 15 for a while (and havent fully pruned those blocks)
< sipa> BlueMatt: sounds doable, but it's not really something we can continue
< sipa> it's a bit of an accident that segwit was introduced in 0.13.1, but will only be actually supported 3 major versions later
< BlueMatt> hmm?
< sipa> i don't think we should continue to expect that it's at all feasible that a new feature introduced will actually work at all after downgrading
< BlueMatt> I didnt claim it would/should?
< sipa> right, but the approach you suggest really only works for segwit
< sipa> not for anything later
< sipa> (likely)
< BlueMatt> yes, this is specifically a segwit thing in large part cause we're not doing a full -walletupgrade for it
< BlueMatt> if we were doing a -walletupgrade (which I still think is the "right" way to do this, but obviously with a ton more complication) then I dont think we'd need to do this
< BlueMatt> a non-default clear statement of "I'm moving my wallet to 0.16" should be sufficient, but absent that, we should be more careful to handle some downgrade cases
< gmaxwell> how is walletupgrade right when it leaves current wallets people are using today unrecoverable?
< BlueMatt> hmm?
< gmaxwell> people use addwitnessaddress, then their backups are instantly no good-- no lookahead, no nothing.
< gmaxwell> that screwup is fixed by 'upgrading' by default
< sipa> gmaxwell: i don't see how that has anything to do with explicit upgrade or not
< BlueMatt> I mean addwitnessaddress is barely supported and largely unsupportable, at least without something like upgradewallet
< BlueMatt> I mean you can auto-upgrade a user if they've previously used addwitnessaddress
< BlueMatt> cause an addwitnessaddress wallet is already a mess (at least wrt backups)
< sipa> explicit upgrade just guarantees that old versions will give you an error rather than silently missing things _for addresses generated in the future_
< sipa> obviously any addresses generated in the past should always continue to work
< BlueMatt> of course, but explicit upgrade also has a clear definition for the user: you cannot open this wallet anymore in old versions
< BlueMatt> which if we *dont* do an explicit upgrade, we end up in this quagmire of backward compatibility/downgrade hell
< gmaxwell> BlueMatt: if you don't autoupgrade someone then you won't make their backups good again.
< BlueMatt> "make their backups good again"?
< BlueMatt> I do not think we should *ever* auto-upgrade someone, with the possible exception of folks who've previously addwitnessaddress'd, cause they're kinda fucked if we dont, or at least we should nag them
< sipa> i think at some point we should do a required upgrade
< gmaxwell> Right now, joe blow believes our claims of supporting segwit, runs the software, follows instructions to use it... unknown to him, his backup will miss coins if recovered. Post upgrading to a release that autoupgrades, his backup will no longer miss coins when recovered.
< gmaxwell> _You cannot tell if they've used addwitnessaddress because they could be loading a backup which hasn't seen it_
< sipa> well 1a solves that
< gmaxwell> if you try, you get even more awesome effects like one backup works for address X and another doesn't for reasons entirely unrelated to address X.
< BlueMatt> yes, I think for that reason 1a is probably good
< sipa> the only case that is not covered is downgrading + restoring a backup simultaneously
< sipa> which is inherently impossible to solve completely
< BlueMatt> yes, and hence my suggestion to do a segwit-wallet-bestblockhash, cause then we at least fix it for non-pruned or only-briefly-run-as-downgraded nodes
< sipa> but some solutions are possible for the case where the backup was made after upgrading
< BlueMatt> sidestepping the version rabbithole
< * BlueMatt> would also like to further discuss 2a
< BlueMatt> whats the default keypool size? its not *that* bad
< sipa> 2000 keys
< BlueMatt> and, when we go to do things "the right way", we can clean those out, maybe
< sipa> + more logic and edge cases
< BlueMatt> with a -walletupgrade can clean out those entries
< sipa> = something that still only works as long as you don't actually use the wallet too much after downgrading
< BlueMatt> hmm? no?
< BlueMatt> your backups will still work cause they have the witnesses added
< BlueMatt> and the upgraded wallet obviously wont cause you set the version to be un-openable with the old version
< sipa> right
< sipa> but if the wallet was used a lot in the new version after the backup was made, you'll hit problems when the backup's keypool runs out
< BlueMatt> yes, but that is always true
< BlueMatt> if your keypool runs out you cant run from a backup or you lose funds
< BlueMatt> period.
< BlueMatt> (I suppose with the exception of HD, but....ugh)
< sipa> no, not period
< sipa> this is exactly the assumption we don't need to make with HD wallets anymore
< sipa> relying on the keypool to deal with downgraded backups reintroduces that assumption
< BlueMatt> yes, with the exception of HD
< sipa> that's not a small exception
< sipa> that's the norm
< BlueMatt> ok, so now we're back to segwit-wallet-bestblockhash
< BlueMatt> thats the only way to fix that issue
< wallet42> sipa: ah yes. prioritisetransaction is exactly what im looking for.
< sipa> BlueMatt: right, but only in the case the user actually upgrades again
< BlueMatt> well if they dont upgrade again there is literally nothing we can do
< sipa> that's my point
< wallet42> sipa: my python prototype question was in the context of the output-list / tx-amending wallet idea you talked about with provoost
< BlueMatt> (and I think thats reasonable, if you use segwit in a version with a headline of "SegWit Wallet" and then downgrade, you'd kinda expect to not have your segwit funds)
< sipa> BlueMatt: so why do we need to support the downgrade + restore case again?
< BlueMatt> well the current proposal does *not* fully work in the case you upgrade again, without a rescan
< sipa> it's without guarantees at best
< BlueMatt> if you are non-pruned its guaranteed
< sipa> in whatever case
< sipa> it's without guarantees at best as long as you don't upgrade
< BlueMatt> huh?
< sipa> don't upgrade after downgrading, i mean
< BlueMatt> yes, if you dont ever switch back to 0.16+, then you are maybe not safe in this case
< sipa> not maybe
< BlueMatt> but so what? we can still improve things
< sipa> in that case nothing can be guaranteed
< BlueMatt> doing a partial-rescan is a very, very big improvement for many of the cases we're talking about
< BlueMatt> even if its not a guarantee
< BlueMatt> assuming you may never upgrade to 0.16+ and trying to support that seems like nonsense
< BlueMatt> but thats not what I'm arguing here
< sipa> ok, so let's walk through this
< sipa> you upgrade to 0.16
< sipa> there is no segwitwallet record
< sipa> you do what? assume that the existing non-segwit wallet bestblock record is accurate?
< BlueMatt> a) ignore, b) rescan from segwit activation date if you have P2SH-P2WPKH entries, c) always rescan from segwit activation date
< sipa> b assumes the file is recent enough to have those records
< BlueMatt> d) rescan in case -findmissingsegwitoutputs=true is set
< BlueMatt> indeed, b kinda sucks
< sipa> i think the best solution is just telling people to run with 0.16 and -rescan in that case
< BlueMatt> do we have a rescan from date rpc?
< BlueMatt> i think we do now, right?
< sipa> unsure
< BlueMatt> I mean in that case we also need 2a, I think
< sipa> how so?
< sipa> 1a solves everything during a rescan
< BlueMatt> yea, sorry, 1a obviously fixes it for rescan, maybe I'm just more worried about problem 2
< gmaxwell> unrelated to this, through some combination of multiwallet, -rescan, restarting my node.. I ended up in a situation with a bunch of wallets that lost all their transactions but think they're in sync.
< BlueMatt> more like "oops, da server with wallet blew up, bring in the one we upgraded from last week, running 0.15.1, will have to do for now"
< BlueMatt> gmaxwell: thats....frightening....is it 0.15.1 with the copied-db issue?
< morcos> i basically agree with sipa
< sipa> my solution to problem 2 is tell people to avoid downgrading and restoring, and if in exceptional cases that misses something, rescan
< gmaxwell> w/ recent-ish master.
< BlueMatt> sipa: but that doesnt solve the problem of not upgrading right away? at least 2a will help significantly
< morcos> i think 1a, 3a and a guide for acceptable workflows and how to recover is the way to go.. we should not make things too difficult.
< morcos> however i think we will need to do something like 2a when we have the upgrade wallet option for 0.17. but then it will i think be fully optional as to whether you upgrade or not.
< gmaxwell> I _think_ what I did was start with two dozen -wallets and -rescan then realized it would sequentially rescan each, killed the process, then went on to try using them directly.
< sipa> BlueMatt: but even if we do 2a, there is no solution for 4a
< sipa> recovery options are great, but they exist
< BlueMatt> yea, issue 4 is unsolveable, I agree
< morcos> sorry, not actually 2a for 0.17, but something that takes all your keys in your keypool and stores them as things that implicitly might have any style address
< BlueMatt> with the exception of 1a + rescan
< sipa> 2a is essentially trying to avoid a recovery procedure in a situation which already is something that should never be part of normal workflow
< gmaxwell> if we make rescan faster we could be more liberal in expecting its use.
< gmaxwell> not tooo liberal, since pruned nodes can't rescan...
< BlueMatt> gmaxwell: do you have an intuition for why rescan is slow? I havent looked into it
< sipa> BlueMatt: we should do a benchmark for reindex with and without wallet
< sipa> if there is a significant difference between the two, there is a totally obvious improvement from not calling Solver on each output in order to match it with the wallet
< sipa> (which constructs a vector of vectors ffs)
< gmaxwell> well we fully read and deserialize each block and each transaction, then pass them through the wallet code, so that implies 1001 memory allocations plus sha256ing each transaction (and reserializing to do so)... looking at IO we also seem to create a lot of excess IO with opening the file, seeking to the block, closing it.
< sipa> then indeed there is the cost of just loading blocks, which could be improved by having a block deserializer that doesn't build/verify the merkle tree
< gmaxwell> at least from a rough look at it, we do both a bunch of dumb computation and IO, and all of it serialized.
< gmaxwell> it's worth noting that it's not remarkably faster with the blockchain all in cache.
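One way to run the benchmark sipa suggests above, as a sketch only: time a -reindex-chainstate run with and without -disablewallet and compare wall-clock time. The height cutoff and the use of subprocess timing are assumptions; nothing like this exists in the repo.

```python
# Sketch of the suggested reindex benchmark, with and without the wallet loaded.
import subprocess
import time

def timed_reindex(extra_args):
    start = time.time()
    # -stopatheight makes bitcoind shut down cleanly once it reaches the cutoff.
    subprocess.run(["bitcoind", "-reindex-chainstate", "-stopatheight=250000"] + extra_args,
                   check=True)
    return time.time() - start

with_wallet = timed_reindex([])
without_wallet = timed_reindex(["-disablewallet"])
print("reindex with wallet: %.0fs, without wallet: %.0fs" % (with_wallet, without_wallet))
```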
< BlueMatt> sipa: ok, so I guess I feel a lot more comfortable about things if we do #11281 and recommend rescanblockchain segwit_activation_height
< gribble> https://github.com/bitcoin/bitcoin/issues/11281 | Avoid permanent cs_main/cs_wallet lock during RescanFromTime by jonasschnelli · Pull Request #11281 · bitcoin/bitcoin · GitHub
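For reference, the recovery call being recommended here might look like this. python-bitcoinrpc is assumed, 481824 is the mainnet segwit activation height, and rescanblockchain is the wallet RPC proposed for 0.16.

```python
# Rescan only from segwit activation instead of from genesis after moving a
# wallet back onto a segwit-aware node.
from bitcoinrpc.authproxy import AuthServiceProxy  # python-bitcoinrpc (assumed)

rpc = AuthServiceProxy("http://user:pass@127.0.0.1:8332")

SEGWIT_ACTIVATION_HEIGHT = 481824                 # mainnet activation height
result = rpc.rescanblockchain(SEGWIT_ACTIVATION_HEIGHT)
print(result)   # e.g. {"start_height": 481824, "stop_height": <tip at the time>}
```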
< BlueMatt> sipa: I'd still be more comfortable with 2a...its far from a hard guarantee of anything, but I really could see someone ending up in the state where that'd save them
< BlueMatt> if for no other reason than someone restoring a backup may load in 0.15, realize they need 0.16, then open in 0.16 right away
< BlueMatt> and end up screwing themselves
< BlueMatt> the segwit-wallet-best-block-hash would also similarly solve the same issue even if you do my option a where you just ignore it if it isnt there
< BlueMatt> though I agree that kinda feels gross given its solving such a specific edge case without any hard guarantees
< BlueMatt> can we at least high-priority #11281 for 0.16?
< gribble> https://github.com/bitcoin/bitcoin/issues/11281 | Avoid permanent cs_main/cs_wallet lock during RescanFromTime by jonasschnelli · Pull Request #11281 · bitcoin/bitcoin · GitHub
< BlueMatt> so that rescan is at least slightly less painful from the qt command window
< sipa> sgtm
< BlueMatt> and add a "I lost my segwit coins" button in the debug window
< * BlueMatt> *ducks*
< gmaxwell> any kind of scanning uncertainty will cause untold amounts of prophylactic scanning.
< BlueMatt> yea, but I dont think we can really solve that......
< BlueMatt> in part cause 0.16 docs are going to have to have a bunch of "in case you did X, do this rescan"
< gmaxwell> thats partly why I was commenting about making rescanning faster.
< BlueMatt> yes, though dunno about making that a requirement for 0.16 :p
< gmaxwell> oh another point on speed -- I opened an issue for this -- right now if you load up a dozen wallets and -rescan... it rescans a dozen times, once for each wallet by itself...
< BlueMatt> yes, that should be fixed as well
< gmaxwell> my first idea about not spending the rest of my life rescanning was thwarted by that.
< bitcoin-git> [bitcoin] TheBlueMatt opened pull request #11775: Move fee estimator into validationinterface/cscheduler thread (master...2017-09-background-feeest) https://github.com/bitcoin/bitcoin/pull/11775
< jonasschnelli> BlueMatt: added 11281 to the project 7
< BlueMatt> jonasschnelli: thanks!
< cluelessperson> BlueMatt: are you updating the fee estimator?
< BlueMatt> cluelessperson: it doesnt change the fee estimator logic, except to handle a few edge cases better
< sipa> Lauda: using 6GB with 4GB dbcache sounds bad but not nearly the same thing i was seeing
< sipa> i was seeing 32GB usage with 1.7G dbcache
< Lauda> Well, add 4 gb of SWAP to that so probably ~10ish (I think I allocated 7 maximum for the VM). However, yes. That is much worse :<
< Lauda> Any ideas as to what is causing that?
< sipa> by "using" i mean RSS size, not physical usage
< sipa> it's pretty hard to map VM usage to physical usage, as processes can share memory etc
< sipa> (by VM i mean virtual memory, not virtual machine - it's probably confusing in this context)
< Lauda> Oh, then 6 GB yeah.
< Lauda> Your behavior is only in current master, not in 0.15.1?
< sipa> indeed
< sipa> in 0.15.1 memory usage stayed nicely within a few hundred MB of the cache size
< sipa> though i now fail to reproduce with master
< Lauda> That's strange.
< cluelessperson> BlueMatt: do you know who might be best to redo the logic?
< cluelessperson> reason being that fees seem to jump up faster than volume
< gribble> https://github.com/bitcoin/bitcoin/issues/2 | Long-term, safe, store-of-value · Issue #2 · bitcoin/bitcoin · GitHub
< BlueMatt> cluelessperson: if there were a clear and easy fix it would have been done already :/
< BlueMatt> sadly good fee estimation is super non-trivial
< BlueMatt> *especially* for the non-economical (eg doing non-rbf so want high confidence) case
< cluelessperson> BlueMatt: agreed. I could only recommend experimenting with "lower estimates slightly" and see how the mempool responds.
< cluelessperson> that would take >years to tune.
< BlueMatt> doing it with rbf in economical mode could probably get improvements and go down faster, etc
< BlueMatt> "lower estimates slightly"? what does that mean?
< BlueMatt> in steady-state the fee estimator does incredibly well, it also happens to do incredibly well for very-high-confidence-of-inclusion
< cluelessperson> BlueMatt: (existing_fee_estimate) - 1 sat, -5 sat, -10 sat
< * cluelessperson> chuckles at the absurdity
< BlueMatt> though fee estimation also ends up being a bit of a self-fulfilling prophecy
< BlueMatt> yea, I dont think that is a good approach :p
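For context, the two estimator modes being contrasted here are exposed through estimatesmartfee. python-bitcoinrpc is assumed and the 6-block target is arbitrary.

```python
# Compare the high-confidence CONSERVATIVE mode with the faster-reacting
# ECONOMICAL mode, which suits RBF transactions that can be bumped later.
from bitcoinrpc.authproxy import AuthServiceProxy  # python-bitcoinrpc (assumed)

rpc = AuthServiceProxy("http://user:pass@127.0.0.1:8332")

for mode in ("CONSERVATIVE", "ECONOMICAL"):
    est = rpc.estimatesmartfee(6, mode)            # target confirmation within 6 blocks
    print(mode, est.get("feerate"), "BTC/kB over", est.get("blocks"), "blocks")
```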
< cluelessperson> BlueMatt: I'm taking classes. Hopefully I can help in the future, but right now I'm a moron. :/
< mlz> lol
< bitcoin-git> [bitcoin] MarcoFalke opened pull request #11779: qa: Combine logs on travis by default (master...Mf1711-travisQaLogs) https://github.com/bitcoin/bitcoin/pull/11779