< cbits_> Yeah, and it automatically sets rbf checked when you have it set to visible
< gmaxwell> wumpus: would we perhaps want to consider removing zap? we added it because we had no way to abandon transactions, but we do now. I've seen it used in a way that created a lot of damage. (user ran into the unconfirmed depth limit and couldn't make transactions. Then zapped their wallet, then started paying again, double-spending the @#$@# out of themselves... then were stuck trying to reassemble
< gmaxwell> the pieces... figure out who they still owed, etc.
< tonebox> It seems like segwit is a temporary fix... 4M is going to be overwhelmed, just like 1mb now... Wouldn't a better solution be to change the time-base, so now it's 10 minutes per block... Soon, 5 min, then 2.5... It could revert to 10, and be automatically adjusted just like the difficulty.
< Lightsword> tonebox, no, that results in higher orphan rates due to latency
< tonebox> Ok... Thanks... Would there be any way to make segwit not fixed at 4mb so this won't be a problem that needs to be solved again in a few years?
< tonebox> Also, it would seem like a dynamic timebase and fixing the issue with orphans would be a better solution long term.
< CubicEarth> Lightsword: I always thought moving to a 5 minute block time would be perfectly fine
< gwillen> This isn't really a good channel for this kind of discussion -- better in #bitcoin probably.
< CubicEarth> a slightly higher orphan rate wouldn't hurt anything
< wumpus> gmaxwell: I'm fine with that...
< wumpus> gmaxwell: what I mostly don't like about it is that it requires a rescan, and is very non-selective (zap all unconfirmed)
< wumpus> gmaxwell: mostly it's useful to troubleshoot issues with wallet bugs and corruption, when a certain transaction behaves strangely. But a tool to remove a single transaction from the database would work much better for that and be less impactful. Back in the day, though, that was hard to do for some reason
< wumpus> I agree abandontransaction replaces all end-user serviceable reasons to use zap
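For reference, abandoning a single stuck transaction over RPC looks roughly like the sketch below. It assumes python-bitcoinrpc is installed and the node has RPC credentials configured; the URL and txid are placeholders.

```python
# Minimal sketch: abandon one stuck transaction instead of zapping the
# whole wallet. The RPC URL and txid below are placeholders.
from bitcoinrpc.authproxy import AuthServiceProxy

rpc = AuthServiceProxy("http://user:pass@127.0.0.1:8332")
txid = "replace-with-the-stuck-txid"

# abandontransaction only accepts transactions the node already considers
# stuck: not confirmed and no longer in the mempool.
rpc.abandontransaction(txid)

# The abandoned inputs become spendable again without a rescan, which is
# the selectivity that -zapwallettxes lacks.
print(rpc.gettransaction(txid)["details"])
```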
< gmaxwell> It might just make sense to rename it and hide it for that reason. If there is a reason to not do that, we should probably enhance abandon further.
< wumpus> my preference would be to hide it in some dangerous-sounding wallet salvage or editing tool, not have it in the main executable at least, or maybe not even in the main distribution
< wumpus> I mean there's a use for low-level-ish wallet editing, but it certainly shouldn't be easily available
< gmaxwell> Salvage is also pretty raw... I've said before it should be called "savage (verb) wallet"
< wumpus> hehe
< wumpus> salvage has been actually useful to a lot of people though, it tends to be the only thing available if something is corrupted
< luke-jr> doesn't zap recover from corrupt bdbs we can't open?
< wumpus> no, zap doesn't do that
< wumpus> it assumes that records are simply readable, what it does is remove all transactions
< wumpus> with a mode to keep metadata (by default) and another to trash it
< gmaxwell> salvage does. though at least historically it would also miss data in perfectly fine wallet.dats (though I think some of that was due to bugs which have been fixed)
< luke-jr> ah, mixed them up I guess
< wumpus> I think those issues have been fixed
< wumpus> though there are still some weird issues with salvagewallet, for example berkeleydb can return an error when salvaging an otherwise ok wallet (but it doesn't lose records anymore IIRC)
< wumpus> then again this is the kind of thing we're asking for with not updating the backend library for years. BDB should die.
< gmaxwell> so I did some testing and later BDB versions now appear to be bidirectionally compatible?!
< gmaxwell> but I couldn't find any announcement of it.
< gmaxwell> it just worked.
< wumpus> it works in some cases
< wumpus> that's been the case when I tried too. But I don't trust it.
< gmaxwell> ah, I wasn't aware that it ever worked before.
< wumpus> I think the backward incompatibility thing is more a matter of no one ever seriously researching this and its edge cases than a sure thing
< wumpus> it doesn't work in *all* cases that is clear
< wumpus> one thing that is not backwards compatible is the log files; so if there are still log files behind, the old version will error out
< wumpus> it may well be that a "clean" .dat file, like produced by backupwallet, is always backwards compatible. Though that doesn't help a user who crashes on first run with the new version and then tries to go back, the wallet being in an intermediate state.
< wumpus> of course it's fairly simple to work around this with a conversion tool and/or making an automatic backup at first run. But it was just never deemed worth the trouble
< wumpus> especially as newer BDBs have license issues
< wumpus> the plan was, and should still be, to move away from it
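The log-file hazard described above is easy to check for. A rough sketch follows; the `database/` subdirectory layout is the default Bitcoin Core one, and the `log.NNNNNNNNNN` naming is standard BDB, but treat both as assumptions.

```python
# Rough sketch of the downgrade hazard: BDB writes write-ahead log files
# (log.NNNNNNNNNN) into the "database" subdirectory of the datadir, and an
# older BDB will refuse to open an environment with newer-format logs left
# behind. The path layout assumed here is the default Bitcoin Core datadir.
import os

def stale_bdb_logs(datadir):
    logdir = os.path.join(datadir, "database")
    if not os.path.isdir(logdir):
        return []
    return [f for f in os.listdir(logdir) if f.startswith("log.")]

logs = stale_bdb_logs(os.path.expanduser("~/.bitcoin"))
if logs:
    print("BDB log files present; downgrading now will likely error out:", logs)
else:
    print("no BDB log files; a 'clean' wallet.dat may open on the old version")
```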
< wumpus> baahh.. this is the second time I have trouble with the tests due to a stale qa/cache directory
< wumpus> we should probably nix it when a change in bitcoind is detected
< wumpus> e.g. write the path and sha256sum of the bitcoind used into the cache directory, and if that changes, delete it
< sipa> unsure that's worth it... breaks of the cache directory are very infrequent, i think
< sipa> at least in my experience
< wumpus> any change to the wallet, at least
< wumpus> node0 keeps a wallet from mining the initial blocks, which is usually what causes the problems, if the test expects a certain newly introduced property of transactions
< wumpus> maybe it's infrequent but it is really frustrating and can lead to hours of misdirected search for bugs if it happens
< wumpus> conceptually it's also not *valid* to use an older cache, there is no guarantee that your tests passing is worth anything
< wumpus> the new bitcoind may completely mess up the initial steps and it'd still pass because it is cached
< wumpus> but ok I'll just add a message "Using cached node state in %s" to the test output, maybe that's enough to remind people to delete it if they run into weird issues
< sipa> no, i think you're right
< sipa> we shouldn't be using outdated caches
< jonasschnelli> This is strange... can it be a caching issue?
< jonasschnelli> I can't see a reason why the dump is different on your machine than on mine / travis.
< jonasschnelli> Maybe you have a chance to check the file "wallet.unencrypted.dump" (maybe pastebin it) when running with --nocleanup
* jonasschnelli starting Ubuntu 16.04
< jonasschnelli> sipa: maybe you have a chance to review #9965. It seems that you are most familiar with that code part. It also touches the segwit.py test (where I'm not sure if I did the right thing).
< wumpus> jonasschnelli: yes it was a caching issue
< jonasschnelli> Great! *relief*
< wumpus> jonasschnelli: that's why I wrote the posts above ^^
< wumpus> I think we should store a hash of the bitcoind executable in the cache directory and delete it when it mismatches
< jonasschnelli> wumpus: Ah. Thanks. I haven't read the scrollback. Yes. Good idea with the binary-hash mismatch detection.
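A minimal sketch of the binary-hash invalidation wumpus proposes; the stamp file name and directory handling are hypothetical, not the actual qa framework code.

```python
# Sketch of the cache-invalidation idea: record the sha256 of the bitcoind
# binary inside the qa cache directory, and delete the cache whenever the
# recorded digest no longer matches. File names here are hypothetical.
import hashlib, os, shutil

def bitcoind_hash(path):
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def check_cache(cache_dir, bitcoind_path):
    stamp = os.path.join(cache_dir, ".bitcoind_sha256")
    digest = bitcoind_hash(bitcoind_path)
    if os.path.exists(stamp) and open(stamp).read() != digest:
        shutil.rmtree(cache_dir)  # stale: built by a different bitcoind
    os.makedirs(cache_dir, exist_ok=True)
    with open(stamp, "w") as f:
        f.write(digest)
```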
< MarcoFalke> 1
< MarcoFalke> no, the salvagewallet issue is not yet fixed
< MarcoFalke> At least I am not aware that anyone fixed it and the tests are still disabled
< wumpus> MarcoFalke: from what I remember what causes the test to fail is that salvagewallet can return false when the answer should be true. I don't think any keys are still being lost
< wumpus> at least that was my experience from last time I tried to reproduce the issue
< MarcoFalke> ok, going to enable the test on the nightly builds and see what happens
< wumpus> good idea.
< wumpus> what I also found back then is that if you have a wallet that it fails on, it's fully reproducible with that. So it's something in the specific database that triggers it
< Victorsueca> be careful, maybe the code becomes self-aware and starts gathering private keys all around the world and then spends all UTXOs to 1Yoink.... :P
< wumpus> :p
< morcos> Re: fee estimation and RBF. My plan for Core wallet is as follows:
< morcos> Fee estimation allowed for targets of: 2, 4, 6, 12, 24, 48, 144, 1008
< morcos> There will be 2 types of estimate for each target, a conservative estimate (probably not too different from todays estimates, but still a bit less conservative) and an actual estimate
< morcos> I was imagining some kind of interaction between those and RBF, such that if you don't have RBF enabled then it prompts you to use the conservative estimate or something
< morcos> Via RPC, you'll be able to get all kinds of more specific information if you choose
< morcos> Turns out its a bit tricky to do the longer time horizon estimates, b/c if your node hasn't been up for that long, you can't really know.. And if you shut your node down, it doesn't currently record all the txs stuck in your mempool as failing.
< morcos> Fixing these issues is what's holding me up now... And then I think I may have a performance problem to fix.
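A toy sketch of the estimate/RBF interaction morcos describes: with RBF off a too-low fee cannot be bumped later, so fall back to the conservative estimate; with RBF on, the cheaper "actual" estimate is acceptable. The estimator interface here is hypothetical, not the real estimatesmartfee API.

```python
# Hypothetical sketch of choosing between the two estimate types per target.
TARGETS = (2, 4, 6, 12, 24, 48, 144, 1008)

def choose_feerate(estimator, target, rbf_enabled):
    assert target in TARGETS
    if rbf_enabled:
        # Underpaying is recoverable: the transaction can be replaced later.
        return estimator.actual(target)
    # No replacement possible, so pay for a high-confidence estimate.
    return estimator.conservative(target)
```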
< petertodd> morcos: whatever you do, I'd suggest you plan for far more opt-in usage in the future; with people apparently spamming the network to raise fees, the next round may make whatever estimating scheme we come up with unreliable; tx replacement otoh is much harder to game
< morcos> Yeah I think a separate but related project is to have an auto-replace mode...
< petertodd> (e.g. something that ignored opt-in txs entirely may fail if the % of them becomes much higher)
< petertodd> yeah. auto-replace mode is nice - gmaxwell has a neat way to do it where nlocktime is used to ensure replacements can be done prematurely
< morcos> As far as fee estimates dealing with opt-in txs.. I don't think thats much of a problem.. The only issue with that was to be cautious when they were new, since some miners might not accept them
< morcos> prematurely? oh you mean sign in advance?
< petertodd> morcos: exactly! he suggested signing n contradictory txs, each with a higher nlocktime and higher fee
< petertodd> morcos: particularly relevant for hw wallets like trezor
< morcos> interesting... and maybe you could have a differential relay too... :)
< petertodd> morcos: yeah, differential relay would be nice, although at least replacement BW just increases the average; peak BW is our actual bottleneck
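A conceptual sketch of the pre-signing scheme as relayed here: sign n mutually conflicting versions of one payment, each with a higher fee and a later nLockTime, so fees can be escalated without re-signing (useful for hardware wallets). `sign_tx` and the transaction structure are placeholders, not a real wallet API.

```python
# Conceptual sketch: pre-sign n conflicting fee bumps of the same payment.
# Version i only becomes broadcastable once its nLockTime height passes,
# so a watcher can escalate the fee over time with no further signing.
def presign_bumps(sign_tx, inputs, outputs, base_fee, bump, start_height, n):
    versions = []
    for i in range(n):
        tx = {
            "inputs": inputs,                   # same inputs -> all conflict
            "outputs": outputs,
            "fee": base_fee + i * bump,         # escalating fee
            "nLockTime": start_height + i * 6,  # e.g. valid ~1 hour apart
        }
        versions.append(sign_tx(tx))
    return versions
```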
< bsm1175322> I have a need to deliver witness data to SPV clients, which necessitates a one-line change: https://github.com/VidaID/bitcoin/commit/2a3052622596db9b1fe29cd357cfc58a831b050c
< bsm1175322> Would we make this an option or something?
< bsm1175322> *could
< BlueMatt> bsm1175322: you need to do this over p2p?
< bsm1175322> yes
< BlueMatt> bsm1175322: if you can do it over rpc via the gettxoutproof/verifytxoutproof stuff we could probably tweak the format to include proofs of witnesses as well, but p2p....ugh, i think everyone wants to completely remove that code sooner or later, its not good
< BlueMatt> bsm1175322: (need to replace it with bloom filter commitments in blocks or so)
< bsm1175322> BlueMatt: agreed on that.
< bsm1175322> Well giving every random client direct access to RPC is not a good idea for a number of reasons
< bsm1175322> Stage 2 of this project will be to redesign BIP37. I'm well aware of its flaws.
< bsm1175322> But, for the moment we're just using it, warts and all.
< BlueMatt> bsm1175322: well the other thing we can do is extend the rpc to support it and then your patch will be simpler :)
< BlueMatt> (right now your patch isnt providing any proof of the witnesses, only the tx data, and providing the witness itself just as extra)
< bsm1175322> Oh I hadn't thought of that...there's a Merkle proof that's possible for the witness data too...
< bsm1175322> But...who would care about that? The client can verify that the witness signature is correct...
< BlueMatt> bsm1175322: yea, would have to provide the merkle path to coinbase + merkle path to the witness in question
< BlueMatt> well assuming they have the transactions being spent, sure
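For context on the proof shape discussed above: per BIP141 the coinbase commits to the root of a second merkle tree over wtxids, so a full witness proof combines the coinbase's branch in the txid tree with the target's branch in the wtxid tree. Below is only the generic branch check that both parts would use; it is a sketch, not code from the referenced patch.

```python
# Generic Bitcoin merkle branch verification with double-SHA256 nodes.
import hashlib

def sha256d(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_branch(leaf, branch, index, expected_root):
    """Fold a merkle branch: at each level, the low bit of `index` says
    whether the running hash is the right (1) or left (0) sibling."""
    h = leaf
    for sibling in branch:
        if index & 1:
            h = sha256d(sibling + h)
        else:
            h = sha256d(h + sibling)
        index >>= 1
    return h == expected_root
```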
< bsm1175322> Hmmm since you're here in NYC...we should get together soon and have a brainstorming session about a BIP37 replacement...
< BlueMatt> sure, iirc there was some ml post not too long ago on it
< BlueMatt> something about committed bloom filters, I dont recall if they concluded that the filters were too big to be practical or if they were excited though
< BlueMatt> maybe it was a few months ago.....
< bsm1175322> Oh that one...and I calculated the size was unreasonable...
< BlueMatt> awww, damn
< BlueMatt> well, ok, brainstorming it is
< BlueMatt> oh wow that was months ago
< bsm1175322> UTXO set commitments in some form are #1 on my wishlist for improving light client security.
< gribble> https://github.com/bitcoin/bitcoin/issues/1 | JSON-RPC support for mobile devices ("ultra-lightweight" clients) · Issue #1 · bitcoin/bitcoin · GitHub
< bitcoin-git> [bitcoin] sdaftuar opened pull request #9970: Improve readability of segwit.py (master...2017-03-segwit-test-improvements) https://github.com/bitcoin/bitcoin/pull/9970
< BlueMatt> bsm1175322: hmm, yea, 12 GB total isnt trivial, though I dont think its insane...I mean its not like you download that whole dataset unless you dont know when your keys were created
< BlueMatt> bsm1175322: does have lots of challenges, though :(
< bsm1175322> There's probably a workable mutation of that idea...
< BlueMatt> bsm1175322: utxo commitments dont help you sync, though, but, yea, are a huge win
< BlueMatt> its unclear what the "right" solution is, I mean scanning the chain for your transactions makes less and less sense every day, especially given that folks are moving towards off-chain txn
< BlueMatt> cant scan for those.....
< bsm1175322> That's a different problem entirely ;-) SPV-lightning...
< bsm1175322> Another idea I'm a fan of on this topic is andytoshi's PoW skiplists...
< BlueMatt> yea, PoW skiplists are cool, though you have to be careful with them depending on your use-case
< BlueMatt> what use-case do you have for them? I mean 80 bytes * 400k blocks isnt that much, still, today?
< bsm1175322> Linear algorithms suck. ;-)
< bsm1175322> Initial sync in SPV mode still takes a non-trivial amount of time on phones.
< bsm1175322> It's just hashing those 400k blocks...
< BlueMatt> i mean we can improve that....we need a "give me all headers without the previous block hash in binary form" thing
< BlueMatt> maybe not p2p, just connect to blockchainheaders.com and get it
< BlueMatt> its 18MB
< BlueMatt> and hashing that cant take that long, no?
< bsm1175322> It does take that long, for whatever reason. Bcoin only verifies headers at a rate of ~20/s on the phone.
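A sketch of the "headers without the previous block hash" idea: an 80-byte header is version|prevhash|merkleroot|time|bits|nonce, and prevhash is redundant because it is just the hash of the prior header. Stripping it leaves 48 bytes per header (about 19 MB for 400k blocks, in line with the 18 MB figure), and the client re-derives the links while hashing, which verifies the chain as a side effect. The function below is a hypothetical illustration, not a proposed p2p message format.

```python
# Expand compact 48-byte headers (prevhash field removed) back to 80 bytes,
# re-deriving each prevhash from the prior header while hashing the chain.
import hashlib

def sha256d(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def expand_headers(compact, first_prev_hash):
    """compact: concatenated 48-byte headers; first_prev_hash: hash of the
    block preceding the first compact header (32 bytes)."""
    prev = first_prev_hash
    headers = []
    for i in range(0, len(compact), 48):
        c = compact[i:i + 48]
        full = c[:4] + prev + c[4:]   # version | prevhash | remaining fields
        headers.append(full)
        prev = sha256d(full)          # this header's hash links the next one
    return headers
```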
< dgenr8> bsm1175322: did you mean committed bloom filters?
< bsm1175322> @chjj is there any other reason you can think of besides sha256 speed which might be causing spv sync to be slower on the phone? Obviously it's quite fast in nodejs.
< bsm1175322> dgenr8: yes that's what we were discussing
< BlueMatt> bsm1175322: I mean I assume its also p2p latency, which isnt fun
< dgenr8> BlueMatt: size unreasonable with what fp rate
< BlueMatt> 20/s seems supperrr slow
< bsm1175322> dgenr8: see the referenced post with my calculation, I used fp rate= 1/height ~ 10^-6
< bsm1175322> came out to about 12GB of committed filters.
< bsm1175322> obviously this can be tuned...
< bsm1175322> At the cost of holding blocks you don't need.
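A back-of-the-envelope version of that size estimate, using the standard bloom filter bound m = -n·ln(p)/(ln 2)² bits. The per-block element count is an assumption (roughly outputs plus spent outpoints per block); the result only reproduces the order of magnitude of the 12 GB figure, not the exact number.

```python
# Order-of-magnitude check on the committed-filter size estimate.
import math

def filter_bytes(n_elements, fp_rate):
    # Standard optimal bloom filter size: m = -n * ln(p) / (ln 2)^2 bits.
    bits = -n_elements * math.log(fp_rate) / (math.log(2) ** 2)
    return bits / 8

per_block = filter_bytes(5000, 1e-6)   # assume ~5000 elements per block
total_gb = per_block * 450_000 / 1e9   # ~450k blocks at the time
print(f"{per_block / 1024:.0f} KiB per block, ~{total_gb:.0f} GB total")
```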
< BlueMatt> (though we may want a similar command to remove checkpoints entirely for ibd - much easier to connect to a few peers and ask them for 4MB of headers (~100k blocks) at a time and then get a header chain and ban anyone who tried to spam you
< BlueMatt> instead of the current getheaders stuff
< BlueMatt> bsm1175322: to be fair, you probably want something much higher than 10^-6
< bsm1175322> yes
< BlueMatt> bsm1175322: you definitely want to download some extra blocks
< bsm1175322> The desirable false positive rate is still (constant)/(height) though, or it's still a linear algorithm...
< BlueMatt> bsm1175322: welcome to the real world, low-cost linear is perfectly ok :P
< BlueMatt> as long as batteries and lte improve faster than your linear increases, at least for as long as you're not using 2nd layer stuff, you should be fine :)
< bsm1175322> Only in the bitcoin world...it makes for a horrible user experience. :-P
< BlueMatt> (well, and data caps...fuck data caps)
< bsm1175322> I'll keep looking for logarithmic solutions :-P
< BlueMatt> 10 seconds in a linear scan isnt all that much different from 10 seconds in a magical logarithmic scan, at least for users :P
< BlueMatt> I see your point, but I'm less worried
< BlueMatt> superlinear, well...lets not do that
< bsm1175322> Well...having been doing dev work on testnet for a few months...where it takes 30 minutes for bcoin to do an initial spv sync...
< BlueMatt> lol, ok, fair point
< BlueMatt> see previous comment about chunking header requests with new p2p messages
< BlueMatt> :)
< BlueMatt> its slow because we've been busy optimizing other things, should be easy to optimize, though
< bsm1175322> Yeah I'm going to have to look into that on bcoin. For now we're beta testing on regtest so are avoiding the problem.
< bsm1175322> BlueMatt: FYI another idea that's floating in my head for improving SPV is whether we could use some form of https://en.wikipedia.org/wiki/Oblivious_transfer
< BlueMatt> bsm1175322: hmm, possibly useful to receive blocks after a high-fp-rate filter commitment or something? Maybe too high overhead, though, gmaxwell might have more to say
< bsm1175322> To be clear, I haven't figured out any algorithm that works and I'm not making a proposal...but I want to find a way...
< BlueMatt> heh, ok
< BlueMatt> the old "that sounds cool, we should use it somewhere" approach :)
< bsm1175322> exactly
< gmaxwell> .... there have been many concrete proposals before. the performance is bad though.
< bsm1175322> Yes...in order to be oblivious about which of N bytes are sent, you have to read/process all N bytes, for *each* request...
< bsm1175322> I'm still hoping there's a way around that observation...
< BlueMatt> uhh, that would be kinda obvious if you didnt....
< BlueMatt> "hey, i dont know what I'm sending you, but its one of A or B, and I never read B from my hdd......"
< bsm1175322> Insert some preprocessing/Merkle tree magic...
< BlueMatt> heh
< bitcoin-git> [bitcoin] MarcoFalke opened pull request #9971: qa: Initialize log in TestManager (master...Mf1703-logFixup) https://github.com/bitcoin/bitcoin/pull/9971
< bitcoin-git> [bitcoin] jnewbery opened pull request #9972: Fix extended rpc tests broken by #9768 (master...test_logging_fixups) https://github.com/bitcoin/bitcoin/pull/9972
< midnightmagic> \o/
< bitcoin-git> [bitcoin] theuni opened pull request #9973: depends: fix zlib build on osx (master...fix-zlib-osx) https://github.com/bitcoin/bitcoin/pull/9973
< bitcoin-git> [bitcoin] ryanofsky opened pull request #9974: Add basic Qt wallet test (master...pr/qt-test) https://github.com/bitcoin/bitcoin/pull/9974
< bitcoin-git> [bitcoin] MarcoFalke pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/8910b4717e5b...21833f9456f6
< bitcoin-git> bitcoin/master d055bd6 John Newbery: Fix extended rpc tests broken by 8910b4717e5bb946ee6988f7fe9fd461f53a5935
< bitcoin-git> bitcoin/master 21833f9 MarcoFalke: Merge #9972: Fix extended rpc tests broken by #9768...
< bitcoin-git> [bitcoin] MarcoFalke closed pull request #9972: Fix extended rpc tests broken by #9768 (master...test_logging_fixups) https://github.com/bitcoin/bitcoin/pull/9972
< bitcoin-git> [bitcoin] MarcoFalke closed pull request #9971: qa: Initialize log in TestManager (master...Mf1703-logFixup) https://github.com/bitcoin/bitcoin/pull/9971
< Telmo> Hello