< bitcoin-git>
[bitcoin] luke-jr opened pull request #18902: Bugfix: Only use git for build info if the repository is actually the right one (master...fix_gitdir_again) https://github.com/bitcoin/bitcoin/pull/18902
< luke-jr>
wumpus: I'd agree, but in light of the regression to #7522, it looks like we either need to do #18902 (which builds on the re-tar-ing), or restore the build.h hack
< gribble>
https://github.com/bitcoin/bitcoin/issues/18902 | Bugfix: Only use git for build info if the repository is actually the right one by luke-jr · Pull Request #18902 · bitcoin/bitcoin · GitHub
< gribble>
https://github.com/bitcoin/bitcoin/issues/7522 | Bugfix: Only use git for build info if the repository is actually the right one by luke-jr · Pull Request #7522 · bitcoin/bitcoin · GitHub
< bitcoin-git>
bitcoin/master 1e94a2b Russell Yanofsky: depends: Add --sysroot option to mac os native compile flags
< bitcoin-git>
bitcoin/master 56611b0 fanquake: Merge #18743: depends: Add --sysroot option to mac os native compile flags...
< bitcoin-git>
[bitcoin] fanquake merged pull request #18743: depends: Add --sysroot option to mac os native compile flags (master...pr/sysroot) https://github.com/bitcoin/bitcoin/pull/18743
< hebasto>
luke-jr: what is wrong with "regression to #7522"? what builds fail?
< gribble>
https://github.com/bitcoin/bitcoin/issues/7522 | Bugfix: Only use git for build info if the repository is actually the right one by luke-jr · Pull Request #7522 · bitcoin/bitcoin · GitHub
< luke-jr>
hebasto: builds will get the wrong version/hash embedded, from unrelated git repos, and access unrelated git data outside the source code root
< jnewbery>
blocking the rest of BIP 157 implementation
< jnewbery>
thanks!
< lightlike>
#17037, which is on "chasing concept ACKs", was closed yesterday
< gribble>
https://github.com/bitcoin/bitcoin/issues/17037 | Testschains: Many regtests with different genesis and default datadir by jtimon · Pull Request #17037 · bitcoin/bitcoin · GitHub
< wumpus>
lightlike: thanks, removed
< wumpus>
anything else to change/add/remove?
< jonatack>
nice to see the blockers moving forward lately
< wumpus>
yes, two have been merged this week IIRC
< wumpus>
looks like #17994 is kind of close to merge too
< wumpus>
#topic Adding another scheduler thread (gleb)
< gleb>
I implemented #18421 which helps non-reachable nodes to be less visible to the upstream infrastructure (DNS servers, ASNs).
< gleb>
The idea is to have already-known reachable nodes query DNS periodically to update the caches, so that non-reachable nodes are served from the caches.
< gribble>
https://github.com/bitcoin/bitcoin/issues/18421 | Periodically update DNS caches for better privacy of non-reachable nodes by naumenkogs · Pull Request #18421 · bitcoin/bitcoin · GitHub
< gleb>
It requires that reachable nodes execute this query periodically, and that DNS request might potentially take several minutes. AFAIK, it is a part of the low-level stack and can't be easily solved at the application level. Because of this, we can't safely integrate this feature into existing threads: all of them sort of assume nothing would block them for so long.
< gleb>
So I was wondering what a good solution would be here? Give up on the idea because it's not worth adding a new thread? Or maybe add a new thread, keeping in mind it will be useful in the future for similar (non-restricted) tasks? Or maybe modify the scheduler to limit max exec time (not sure how to do that in practice…)
< wumpus>
can't this be done asynchronously?
< wumpus>
it seems the thread would spend most of its time waiting for the network anyhow
< gleb>
Yeah, it hangs on the network call.
< wumpus>
IIRC libevent has some async DNS functionality
< luke-jr>
oops, sorry I'm late
< gleb>
That might help actually! Will investigate this then. Thank you wumpus. Wasn't sure which tools we have available.
< wumpus>
in any case, on 32-bit systems we don't want to add another thread, on 64 bit systems it doesn't matter
< wumpus>
in any case if you can avoid adding a thread that'd be good
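For illustration, a minimal standalone sketch (not Bitcoin Core code; the hostname is just one of the existing DNS seeds) of the kind of non-blocking lookup libevent's evdns API allows, so a slow DNS response never ties up a thread:

    // Minimal evdns sketch: the lookup is queued and a callback fires when the
    // answer arrives, so no thread ever blocks on a slow DNS response.
    #include <event2/dns.h>
    #include <event2/event.h>
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <cstdio>

    static void OnDnsResolved(int result, char type, int count, int ttl,
                              void* addresses, void* /*arg*/)
    {
        if (result != DNS_ERR_NONE || type != DNS_IPv4_A) {
            std::printf("lookup failed: %s\n", evdns_err_to_string(result));
            return;
        }
        const auto* addrs = static_cast<const in_addr*>(addresses);
        for (int i = 0; i < count; ++i) {
            char buf[INET_ADDRSTRLEN];
            inet_ntop(AF_INET, &addrs[i], buf, sizeof(buf));
            std::printf("answer: %s (ttl %d)\n", buf, ttl);
        }
    }

    int main()
    {
        event_base* base = event_base_new();
        evdns_base* dns = evdns_base_new(base, EVDNS_BASE_INITIALIZE_NAMESERVERS);

        // Queued, not blocking: the callback runs later from the event loop.
        evdns_base_resolve_ipv4(dns, "seed.bitcoin.sipa.be", 0, OnDnsResolved, nullptr);

        event_base_dispatch(base); // runs until the pending request completes
        evdns_base_free(dns, 0);
        event_base_free(base);
    }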
< ariard>
do people have a bit of time to talk about bip157 or more broadly light clients?
< wumpus>
#topic Removing valgrind from travis (jnewbery)
< jnewbery>
thanks wumpus
< jnewbery>
like you say, this was mostly resolved this morning, but I thought I'd give some more context in general
< jnewbery>
In December, we added a travis job to run all the functional tests in valgrind for every PR.
< jnewbery>
That meant that ci runs were taking around 3 hours (and much longer in some cases due to backlog).
< jnewbery>
Thankfully, we're not doing that since this morning :)
< wumpus>
ariard: probably there's some time left, though it's preferred if you propose topics at the beginning of the meeting or between meetings with #proposedmeetingtopic
< jnewbery>
We are, however, still running ASan/LSan and UBSan jobs, which take about an hour.
< jnewbery>
I think that's too long for a PR ci job. Preferably travis runs should return in a few minutes to allow fast iteration on PRs. Longer running jobs can be done on a nightly travis build on master.
< jnewbery>
I did a bunch of work in 2017/18 to make ci jobs faster, so I was surprised to see how much slower they've become since then.
< ariard>
wumpus: ah yes I proposed yesterday but I should have used #proposedmeetingtopic right
< gleb>
Nice to hear travis no longer takes hours because of valgrind, it was painful last time I rebased my things on a busy day. Thanks jnewbery
< wumpus>
as I said in the PR I think it'd still make sense to run the unit tests and one functional test (spinning up and down bitcoind) in travis to test the init/shutdown sequence
< elichai2>
I have a suggestion, but I'm not sure how easy to implement is that
< jnewbery>
Really, just a plea to keep travis times down on PR jobs. It makes developers' lives much pleasanter!
< wumpus>
but running it on everything was always overkill
< wumpus>
I agree, long turnaround times for tests are bad for a project
< elichai2>
we can have a "fast" CI on PRs and a longer one after it was accepted to merge, so before the actual merge it will run in another CI
< wumpus>
please use testing time and resources efficiently
< luke-jr>
elichai2: does Travis support that?
< wumpus>
don't do silly or overkill things
< sipa>
i think that would introduce way more process overhead for maintainers
< jonasschnelli>
also... don't forget that bitcoinbuilds.org usually runs faster, but without the ASAN/TSAN and fuzzers
< elichai2>
luke-jr: good question, sadly I doubt it supports it natively. I know rust-lang does it via a bot.
< jonasschnelli>
(for a quick feedback on a PR)
< wumpus>
jonasschnelli: hah
< luke-jr>
jonasschnelli: can we get that to report to GitHub?
< jonasschnelli>
luke-jr: it is
< jnewbery>
I think it probably does support it. You just set it up to run on every push to master
< luke-jr>
jnewbery: well, it'd be nice to get them all run BEFORE merge
< jonatack>
jonasschnelli: yes, i always look at bitcoinbuilds for first feedback, then much later, travis on my own github branch and on bitcoin/bitcoin
< jnewbery>
but like sipa says, anything that causes things to not get caught pre-merge transfers work to the maintainers
< elichai2>
sipa: well we could delegate it to a bot, but it would require implementation work and a big change in how merges happen (more use of bots), which probably not everyone will like
< jonatack>
jonasschnelli: i'm grateful for bitcoinbuilds
< elichai2>
oh sipa was talking about nightly builds
< luke-jr>
I only see AppVeyor and Travis on the PR I just made..
< sipa>
we're talking about different things
< sipa>
i'm totally in favor of doing more work on master merges than on PRs
< sipa>
things can always be reverted if there is an unexpected problem soon after merging
< elichai2>
sipa: and then maintainers need to check the result on the nightly CI and revert if something broke it?
< wumpus>
rather not, of course, but the full valgrind run wasn't that effective in catching things anyway
< sipa>
i don't think adding separate CI between PRs before and after they're "accepted" is a good idea as it just pushes more work to maintainers (arguably a more scarce resource than CI infrastructure...)
< luke-jr>
does valgrind do anything the *Sans don't?
< sipa>
luke-jr: it can test actual production binaries
< elichai2>
luke-jr: I think gmaxwell showed me an example once but I don't remember
< luke-jr>
sipa: oh, true
< sipa>
the sans all require different builds that invasively change the output
< luke-jr>
well, Valgrind does it by runtime patching of stuff… not sure that's much different?
< luke-jr>
and emulation IIRC
< sipa>
(they can also test things that valgrind can't, because they have knowledge of the source code)
< wumpus>
yes
< luke-jr>
(I've seen Valgrind emulate an instruction *wrong* before)
< sipa>
luke-jr: sure
< wumpus>
both approaches have their advantages and disadvantages I think that's clear
< sipa>
but it is certainly possible that a bug in the source code exists that persists into production binaries (and can be caught by valgrind), but is compiled out in sanitizer builds
< wumpus>
true
< sipa>
because it's very optimizer dependent for example, and sanitizer builds prevent some optimizations (or at least interfere with it significantly)
< wumpus>
so yes it's good to test master under valgrind as well
< wumpus>
once in a while at least
< elichai2>
can the opposite also be true? (ie overflow that is optimized out because the read was never used etc)
< sipa>
sure
< sipa>
that's what sanitizers are for
< sipa>
they primarily test for discoverable bugs in the source code
< wumpus>
yes good point
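A contrived sketch of the case elichai2 asks about (not taken from the codebase): the out-of-bounds read is a real bug in the source, but because its result is unused an optimized production build may drop the read entirely, so valgrind on that binary sees nothing, while an ASan/UBSan build, typically compiled with little optimization, reports it directly:

    #include <cstdio>

    static int Peek()
    {
        int buf[4] = {1, 2, 3, 4};
        int x = buf[4];   // intentional bug: reads one element past the end
        (void)x;          // the result is unused, so -O2 may remove the read,
                          // hiding it from valgrind on the production binary
        return buf[0];
    }

    int main()
    {
        std::printf("%d\n", Peek());
    }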
< elichai2>
FWIW I think I have a mainnet node lying around running under valgrind constantly (although I haven't checked it since covid)
< jonasschnelli>
elichai2: social node distancing
< wumpus>
hehe
< luke-jr>
aka Tor?
< elichai2>
yeah lol, I don't want it to get infected with some UB :P
< wumpus>
#topic bip157 and light clients (ariard)
< jonatack>
the full valgrind run brought to light some issues for me recently that led to more robust code... #18691 was an example
< gribble>
https://github.com/bitcoin/bitcoin/issues/18691 | test: add wait_for_cookie_credentials() to framework for rpcwait tests by jonatack · Pull Request #18691 · bitcoin/bitcoin · GitHub
< ariard>
Yes so about light client I had really interesting discussion with people
< ariard>
and the constructive outcome of this was it would be better to have a more defined policy
< ariard>
when we know a solution isn't perfect, but at the same time not restrain the project from making steps forward
< ariard>
what I was worried about is that, by supporting bip157 in core, all the people building such nice LN wallets
< wumpus>
jonatack: hehe the cookie file race was detected just because valgrind makes things slow :)
< ariard>
consider the validation backend as a solved issue
< luke-jr>
BIP157 isn't just "not perfect", it's harmful/backward
< jonatack>
yep :p
< ariard>
instead of being well aware of that, they are free-riding on the p2p network for now
< jonasschnelli>
I think BIP157 support in core is a conceptual no brainer. The question is maybe more, if it should be open to non-whitelisted peers (random peers).
< ariard>
and having a better idea of whom bip157 support was aimed at: people using their mobile wallets with their own full nodes
< ariard>
or servicing random clients in the wild, which may be a bit insecure
< sipa>
there is nothing insecure about it; it's just a bad idea for them to trust random peers
< wumpus>
the same issue as with the bloom filters again
< sipa>
(but that's still better than BIP37...)
< luke-jr>
jonasschnelli: what is the use case for it?
< wumpus>
(though at least this doesn't have as much DoS potential)
< sipa>
wumpus: i don't think so;
< sipa>
exactly
< luke-jr>
bloom filters are strictly better I think
< sipa>
BIP157 support is very cheap for the server
< sipa>
luke-jr: how so?
< wumpus>
it's a kind of 'altruism' that might not be warranted
< luke-jr>
sipa: lower overhead
< sipa>
luke-jr: for whom?
< ariard>
on the security aspect, supporting bip157 in core encourages people to connect directly to random peers
< wumpus>
luke-jr: wait, how?
< luke-jr>
sipa: for everyone
< sipa>
luke-jr: wut
< ariard>
and almost all bip157 clients don't have strong addr management countermeasures
< wumpus>
ariard: but that's *their* problem
< sipa>
BIP157 is certainly harder on clients
< ariard>
*peer management protection
< luke-jr>
take the reasonable use case of a user using a light wallet with their own full node
< wumpus>
we care about the server side
< luke-jr>
bloom does this fine, with very little overhead
< ariard>
wumpus: but do you want to make it easy for people to build insecure solutions?
< luke-jr>
you scan the blockchain once on the server side
< sipa>
luke-jr: but you need to do it once per client
< sipa>
with BIP157 you do it once
< luke-jr>
sipa: how many people have multiple clients?
< luke-jr>
and even a few clients is still relatively low total overhead there
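A back-of-the-envelope sketch (hypothetical client count) of the server-side scaling point sipa is making; for luke-jr's case of a handful of trusted clients either approach is cheap, and the shapes only diverge as the client count grows:

    #include <cstdio>

    int main()
    {
        const long long blocks_per_day = 144;  // roughly one block every 10 minutes
        const long long clients = 100;         // hypothetical number of light clients

        // BIP 37: the server matches each block against every client's own bloom filter.
        const long long bip37_block_scans = blocks_per_day * clients;

        // BIP 157: one BASIC filter construction per block, shared by all clients
        // (and servable straight from the blockfilterindex on disk).
        const long long bip157_filter_builds = blocks_per_day;

        std::printf("BIP 37 per-client block scans/day: %lld\n", bip37_block_scans);
        std::printf("BIP 157 filter builds/day:         %lld\n", bip157_filter_builds);
    }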
< wumpus>
scanning on the server side was always the problem
< luke-jr>
wumpus: but that's exactly the ideal in this case
< luke-jr>
you don't want to burden your phone/battery
< wumpus>
if you allow random people on the internet to offload computation to you, you're infinitely generous
< luke-jr>
you don't. this isn't for random people, it's for trusted peers…
< luke-jr>
your own wallets
< ariard>
scanning on the server side isn't great, even worse with LN clients verifying channel opening
< wumpus>
for whitelisted peers it's okay
< wumpus>
sure
< luke-jr>
random people using it is harmful, and the very reason to avoid merging it
< luke-jr>
ariard: server side typically has ~unlimited power
< sipa>
luke-jr: i agree it's a bad idea; i'm not sure it is harmful
< luke-jr>
client has a battery to worry about
< luke-jr>
sipa: it encourages light wallets to use foreign nodes
< sipa>
and it would be far less of a bad idea if it was softforked in, so the filters are verifiable
< ariard>
maybe it should be part of the release node, to advise whitelisting
< ariard>
*notes
< sipa>
but that's not going to happen any time soon
< luke-jr>
sipa: that doesn't fix the problem of people not using their own node
< jnewbery>
if it's your own server, you don't need an spv protocol. Just upload your xpub
< sipa>
luke-jr: not everyone uses their own full node, period
< theStack>
so the rationale here from luke-jr is that in the end every person should have their own full node?
< sipa>
luke-jr: there are good and bad ways to deal with it
< ariard>
luke-jr: yes, but core's rescanning code isn't that performant: no parallelization, a lot of lock taking
< luke-jr>
jnewbery: yes, that direction seems a lot better IMO
< jonasschnelli>
I think ariard's concern is hypothetical but IMO boils down to limiting bandwidth... you can write a client today that downloads all blocks over and over again.
< jnewbery>
good, so your use case is solved
< luke-jr>
jnewbery: ⁇
< ariard>
jonasschnelli: are you thinking about intentional DoS?
< luke-jr>
my point is that there is no use case for neutrino
< jnewbery>
not everyone wants to use bitcoin in the same way as you, and that's ok
< jonasschnelli>
ariard: both... intentional or just because of the use cases
< luke-jr>
Bitcoin's security model depends on at least most people using their own full node
< luke-jr>
it's okay if there are exceptions, but there's no reason to cater to them, especially when the network's security is already at high risk
< sipa>
luke-jr: i strongly disagree; it depends on enough people independently verifying the blockchain
< jonasschnelli>
if there is the concern that there are too many BIP157 clients,... one might want to limit the bandwidth
< ariard>
jonasschnelli: okay my point was really about LN clients, for which bip157 was designed, not an application which needs to download blocks over and over
< luke-jr>
sipa: enough people = most
< sipa>
luke-jr: i strongly disagree
< luke-jr>
sipa: a minority verifying is useless if the majority imposes the invalid chain economically
< jonasschnelli>
ariard: Same for any SPV client,... right?
< ariard>
jonasschnelli: yes my concern isn't bip157 specific, I do think that's the best option available today
< luke-jr>
stratum > bloom > bip157
< luke-jr>
for private/trusted usage
< luke-jr>
which is the only usage we should support IMO
< ariard>
it's more about how you scale any light client protocol to avoid building centralized chain-access services when they hit a scaling ceiling
< gleb>
luke-jr: I assume you meant electrum?
< luke-jr>
ariard: there's no difference
< sipa>
luke-jr: bip157 has other advantages over bloom filters, such as being able to connect to two nodes and comparing the filters, permitting a "1 of 2 nodes is trusted" security model
< luke-jr>
gleb: Stratum is the protocol Electrum uses, yes
< jonasschnelli>
ariard: I would expect that wallet providers ship a recent pack of filters with the app
< ariard>
overall, bip157 is good for experimentation, while keeping in mind that there are still unsolved issues on security and scalability
< luke-jr>
sipa: but improving security of light wallets is a net loss of security for the network
< luke-jr>
sipa: because now fewer people will use a full node of their own
< sipa>
luke-jr: 99.99% of users don't even have SPV level verification
< jonasschnelli>
ariard: the beauty is also that filters can be retrieved from centralized sources and CDNs
< luke-jr>
sipa: if 99.99% don't have their own full node, Bitcoin has failed
< jonatack>
fwiw, i'm running a bip157 node on mainnet with -peercfilters=1 -blockfilterindex=1 to test for the first time, and /blockfilter/basic is 4 GB
< ariard>
jonasschnelli: yes but what's your trust model with such centralized sources and CDNs?
< jonasschnelli>
ariard: IMO the goal for compact block filters is to get a block commitment at some point
< ariard>
you can dissociate getting the filters from such CDN and getting filters-headers/headers from the p2p network
< jonasschnelli>
ariard: also, one can crosscheck the CDN filters against some p2p loaded bip157 filters
< ariard>
jonasschnelli: it would simplify SPV logic and improve their security, but even if committed you still need to download them
< sipa>
luke-jr: Ok.
< jonasschnelli>
ariard: what is the worry with downloading them?
< sipa>
i don't think this discussion will lead anywhere
< jonasschnelli>
better continue the ML discussion I think
< ariard>
jonasschnelli: bandwidth cost if you download them directly from the p2p network
< jonasschnelli>
(happy to continue outside of this meeting)
< ariard>
jonasschnelli: but yes I agree you can crosscheck the CDN filters against filter-headers provided from the p2p network
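For reference, a minimal sketch of that cross-check (using OpenSSL's SHA256 rather than Bitcoin Core's own hashers; function names are illustrative): a filter fetched from a CDN can be verified against filter headers learned from p2p peers, since BIP 157 defines filter_header = double-SHA256(double-SHA256(filter) || prev_filter_header):

    #include <openssl/sha.h>
    #include <array>
    #include <cstring>
    #include <vector>

    using Hash256 = std::array<unsigned char, SHA256_DIGEST_LENGTH>;

    static Hash256 DoubleSha256(const unsigned char* data, std::size_t len)
    {
        Hash256 once, twice;
        SHA256(data, len, once.data());
        SHA256(once.data(), once.size(), twice.data());
        return twice;
    }

    // True if `filter` (e.g. downloaded from a CDN) is consistent with the
    // expected filter header for this block, given the previous block's filter
    // header (both learned via cfheaders messages from p2p peers).
    bool FilterMatchesHeader(const std::vector<unsigned char>& filter,
                             const Hash256& prev_header,
                             const Hash256& expected_header)
    {
        const Hash256 filter_hash = DoubleSha256(filter.data(), filter.size());

        unsigned char buf[2 * SHA256_DIGEST_LENGTH];
        std::memcpy(buf, filter_hash.data(), SHA256_DIGEST_LENGTH);
        std::memcpy(buf + SHA256_DIGEST_LENGTH, prev_header.data(), SHA256_DIGEST_LENGTH);

        return DoubleSha256(buf, sizeof(buf)) == expected_header;
    }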
< kanzure>
is the contention that light clients should be doing IBD and validation?
< jonasschnelli>
heh
< sipa>
kanzure: i think luke-jr is contending that light clients shouldn't exist, and all wallets should be either a full node, or connected to the user's own trusted full node
< ariard>
kanzure: no my concern is assuming you have the bip157 light client paradigm, how do you make it scale ecosystem-wise
< sipa>
at least for a majority of users
< kanzure>
next question: how many times should someone have to do IBD? i think the correct answer should be only once ever....
< kanzure>
[if they can keep integrity of their download and state]
< sipa>
ariard: i don't understand how your concern is any different from nodes serving blocks *at all*
< jonasschnelli>
kanzure: next question: how many random peers have I misused by testing mainnet IBD
< kanzure>
these and other disturbing questions.
< luke-jr>
sipa: ideally
< luke-jr>
at least, as long as the situation is not good, anything that makes light clients better is harmful to Bitcoin and shouldn't be merged
< luke-jr>
because that can only result in fewer people using a full node
< ariard>
sipa: it's another issue but yes, also an unsolved problem; my assumption was you may have a disproportionate number of light clients compared to full nodes
< ariard>
and maybe faster than expected
< sipa>
luke-jr: my belief is that bitcoin offers a choice for financial autonomy, and choice is a good thing - not everyone will choose to make maximal use of that, but everyone who wants to should be able to
< jonasschnelli>
Yes. The only difference to block serving (which seems to cause much more traffic) is that blocks served to bip157 clients are pure consumption, while blocks served to full nodes should - ideally - be re-served to other peers.
< luke-jr>
sipa: you already have that choice with fiat: you can print monopoly money, and refuse to honour USD
< ariard>
jonasschnelli: yes, you may assume some reciprocity between full-node peers
< jnewbery>
ariard: imposing upload costs on peers is something that is caused by any activity on the p2p network. It doesn't make much sense to distinguish between application data types because there will always be some other data you can download. Peer upload resource cost can really only be handled at the net layer by deprioritizing nodes that are taking up resources.
< ariard>
at least I see incentives far more aligned
< sipa>
luke-jr: this is not productive
< jonasschnelli>
agree with jnewbery
< wumpus>
4 minutes to go
< luke-jr>
sipa: it's the same thing; if most people just trust miners, then the people who don't trust miners will simply get cut off when miners do something they don't like; the losers are the full nodes
< luke-jr>
light clients are a hardfork to "no rules at all"
< sipa>
perhaps - but far less easily than having money on coinbase is a hardfork to "whatever monetary policy coinbase likes"
< ariard>
jnewbery: but ideally you do want to increase security by increasing connectivity, like I prefer to offer my bandwidth to other full-nodes for censorship-resistance?
< jnewbery>
then don't enable serving cfilters :)
< luke-jr>
sipa: it's the same, but miner(s) instead of coinbase
< jonasschnelli>
ariard: there is no way you know if the blocks you serve are for other full nodes
< ariard>
jnewbery: sorry I don't get you on deprioritizing nodes that are taking up resources, can you be more precise?
< luke-jr>
jonasschnelli: technically true, but what non-full nodes download full blocks these days?
< jnewbery>
ariard: here's another example for you. If a peer asks for the same block twice, should you serve it again? You're clearly not helping block propagation
< jonasschnelli>
luke-jr: wasabi did for a while (full block SPV)
< jnewbery>
if your answer is 'no', then you need to keep internal book-keeping of which blocks you've served to whom
< jonasschnelli>
(maybe still does)
< sipa>
ariard: if a node asks too much of your resources (memory, cpu, bandwidth, i/o), deprioritize serving their incoming requests
< jnewbery>
if your answer is 'yes', then how is it any different from serving a cfilter?
< ariard>
jnewbery: maybe that's a fault-tolerance case and it makes sense to serve it again
< ariard>
sipa: yes but we don't do this AFAIK? and if everyone starts to deprioritize servicing bip157 clients you do have an issue
< sipa>
ariard: no, but we absolutely should
< jnewbery>
sipa: +1
< sipa>
(not BIP157 specifically, just in general - if you ask too much of us and we get overloaded, deprioritize)
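A minimal, hypothetical sketch of the general idea (not how Bitcoin Core's request handling is actually structured; all names are illustrative): track per-peer serving cost and answer the cheapest peer's pending request first, so heavy users get deprioritized rather than disconnected:

    #include <cstddef>
    #include <cstdint>
    #include <deque>
    #include <map>
    #include <utility>
    #include <vector>

    struct PendingRequest {
        int peer_id;
        std::vector<unsigned char> payload;  // e.g. a getdata-style request
    };

    class RequestScheduler {
    public:
        // Record that `bytes` of upload (or other cost) were spent on this peer.
        void AccountCost(int peer_id, uint64_t bytes) { m_cost[peer_id] += bytes; }

        void Enqueue(PendingRequest req) { m_queue.push_back(std::move(req)); }

        // Serve the request of the peer with the lowest accumulated cost;
        // expensive peers still get served eventually, just later.
        bool PopNext(PendingRequest& out)
        {
            if (m_queue.empty()) return false;
            std::size_t best = 0;
            for (std::size_t i = 1; i < m_queue.size(); ++i) {
                if (m_cost[m_queue[i].peer_id] < m_cost[m_queue[best].peer_id]) best = i;
            }
            out = std::move(m_queue[best]);
            m_queue.erase(m_queue.begin() + best);
            return true;
        }

    private:
        std::map<int, uint64_t> m_cost;      // accumulated serving cost per peer
        std::deque<PendingRequest> m_queue;  // pending requests, oldest first
    };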
< * luke-jr>
still hasn't heard a use case for merging BIP157 at all, aside from harming Bitcoin
< jonatack>
question: if bip157 is opt-in, and a full node can soon export a descriptor wallet xpub, why would a full node turn on serving cfilters?
< wumpus>
this should be the end of the meeting
< sipa>
i don't see what exporting and xpub has to do with that
< wumpus>
maybe we should continue next week
< * luke-jr>
either
< ariard>
jonasschnelli: yes you may not know what kind of clients you're servicing, but with all this stuff we make assumptions about what kind of clients are effectively deployed?
< ariard>
wumpus: yes we can end, but thanks for all your points it's really interesting
< wumpus>
#endmeeting
< lightningbot>
Meeting ended Thu May 7 20:04:44 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
< luke-jr>
maybe we can get NicolasDorier to join next week <.<
< luke-jr>
although it might be the middle of the night there
< sipa>
this discussion should really be on the ML
< luke-jr>
it is, I need to read recent replies
< ariard>
luke-jr: I've read your point on supermajority of the economy, but isn't this assuming you can see the economic traffic?
< ariard>
and with LN you may not see the real payment traffic because of channels
< luke-jr>
ariard: measuring it accurately would require that, but not understanding what we depend on
< luke-jr>
even today, we can't measure it accurately, but we can see it's not in a good situation
< ariard>
luke-jr: and what's your opinion on a fallback full node in case of fork detection? Like you can switch to an authoritative blockchain view in case of anomalies
< ariard>
but at least you don't have to download all blocks all the time
< luke-jr>
ariard: so every stale block, you IBD⁇
< luke-jr>
ariard: if you have the capability to run a full node all the time (necessary for any similar ideas), why wouldn't you just run it regularly anyway?
< theStack>
luke-jr: would you mind shortly explaining how your full node count script works? what are the criteria to identify a peer as "full node"?
< ariard>
luke-jr: no, after seeing something like a 6-block fork, you do connect to a full node, and rescan filters from the fork branch's common ancestor up to the fallback node's tip
< ariard>
you may not have the capability to run a full-node, but you may know someone around you that you can point your light client to in case of anomalies
< ariard>
also maybe you can do something like assumeutxo, in case of anomalies download a utxo set and IBD from then?
< luke-jr>
ariard: you can't connect to your full node unless you run one. connecting to *a* full node is what light clients normally do..
< luke-jr>
ariard: filters don't prove anything
< luke-jr>
if you're okay trusting someone around you, you can do that *normally*
< luke-jr>
assumeutxo does not reduce sync time
< luke-jr>
assumeutxo is only acceptable provided the full IBD from zero is performed still
< ariard>
luke-jr: assuming you do have authentication deployed at some point, you may not connect to *a* full node but actually Bob's full-node
< ariard>
and Bob may not be okay with offering you bandwidth, but still okay with providing you headers, and you somehow trust Bob
< ariard>
you can have a set of semi-trusted fallback nodes, like Alice, Bob, etc
< luke-jr>
ariard: you can do that with bloom already
< ariard>
luke-jr: right I'm not arguing bip157-vs-bip37 here, but more broadly on light-client model in case of forks
< luke-jr>
ariard: if you have a node you personally trust, that's not quite the same thing as the light-client model
< luke-jr>
even if you're only using that node to verify headers
< luke-jr>
perhaps you don't trust the person as much as yourself, but it's still much closer to "your own full node" than light wallet
< luke-jr>
(actually, checking your incoming transactions against full nodes run by *the people you care to pay* might even be more secure than your own full node? XD)
< achow101>
would it be reasonable to replace salvagewallet with a bdb deserializer and try to recover key-values ourselves instead?