< promag>
kallewoof: there is no need to prevent a loop iterating an empty array
< kallewoof>
promag: Yeah, I was confused. Fixed. :)
< promag>
cool
< promag>
I have to test -noincludeconf
< promag>
didn't think about that
< kallewoof>
Yeah, I didn't think about it either. I think a command line -noincludeconf will actually disable config includeconf. Which.. is good, I think?
< kallewoof>
No, it seems like -noincludeconf is ignored from command line right now.
< promag>
really?
< kallewoof>
It's ignored in bitcoin.conf too. `noincludeconf=1 \n includeconf=relative.conf` will still include relative.conf.
< kallewoof>
Ah, wait. `includeconf=relative.conf \n noincludeconf=1` will not include relative.conf.
< kallewoof>
Not sure this is a useful feature at all, to be honest. (-noincludeconf I mean)
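The ordering behaviour kallewoof describes, written out as a hedged bitcoin.conf sketch (relative.conf is just an example filename): the negation only clears includeconf values that appear before it, so a later includeconf still takes effect.

```
# noincludeconf first: the later includeconf wins, relative.conf IS included
noincludeconf=1
includeconf=relative.conf

# includeconf first: the negation clears it, relative.conf is NOT included
includeconf=relative.conf
noincludeconf=1
```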
< aj>
kallewoof: foo=bar nofoo=1 foo=baz --> nofoo clears out bar, but then foo=baz gets put in
< aj>
kallewoof: -nofoo on the commandline would clear out everything for other options
< kallewoof>
aj: probably because I am loading both kinds manually, but -noincludeconf from cli does not cancel `includeconf=relative.conf` from bitcoin.conf
< kallewoof>
both kinds = `includeconf` and `[chain].includeconf`
< aj>
kallewoof: yeah, other options have that taken care of by the ArgsManager::Get*Arg* functions, you'd have to do it yourself
< kallewoof>
aj: Gotcha. I think I'll require that `noincludeconf` is not set before doing it, so people can -noincludeconf from command line. Feels buggy otherwise.
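A minimal sketch of what "doing it yourself" could look like, assuming ArgsManager exposes GetArgs() and an IsArgNegated()-style helper; the function name and the ReadConfigFile() helper below are illustrative, not the actual patch.

```cpp
// Honour -noincludeconf (a negated -includeconf) from the command line before
// pulling in any files named by includeconf= entries in bitcoin.conf.
void ReadIncludedConfigs(ArgsManager& args)
{
    if (args.IsArgNegated("-includeconf")) return; // -noincludeconf wins
    for (const std::string& path : args.GetArgs("-includeconf")) {
        ReadConfigFile(path); // hypothetical helper that parses one extra config file
    }
}
```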
< kallewoof>
promag: I pushed a fix for -noincludeconf
< promag>
kk
< fanquake>
eklitzke Good to know you were just travelling. Thought you might have bailed on Core dev after all the slow review turnaround.
< kallewoof>
aj: I looked at the seen-txns file you sent me. In combination with block data I should be able to extract what I need I think. For starters I'll make the tool that can read current data, then I will nudge you for data, probably. :)
<@wumpus>
kallewoof: TX_CONF (0x03) has 8 bytes reserved for block height - it's good to plan ahead when designing formats, but 80000 years is maybe a bit much :)
<@wumpus>
kallewoof: also please add some header magic, and a format version
<@wumpus>
kallewoof: especially as the packet types have no framing information of their own, this means the format is not forward compatible, so readers need to be able to reject newer files
<@wumpus>
(e.g. no way to skip unknown records or fields)
<@wumpus>
(which is a valid decision with regard to storage, but maybe needs to be documented)
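A rough illustration of the kind of framing wumpus is asking for; every constant and field name below is hypothetical, not kallewoof's actual format.

```cpp
#include <cstdint>

// File-level header: lets a reader reject files it does not understand.
static const uint8_t FILE_MAGIC[4] = {'T', 'X', 'S', 'N'}; // arbitrary example magic
static const uint32_t FORMAT_VERSION = 1;

// Per-record header: an explicit payload length is what would allow a reader
// to skip record types (e.g. 0x03 = TX_CONF) it does not know about.
struct RecordHeader {
    uint8_t type;
    uint32_t length; // payload bytes following this header
};
```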
< bedotech>
Hi all, what kind of wallet derivation does Bitcoin Core currently implement? (for example BIP44 and so on...)
<@wumpus>
bip32 hierarchical deterministic wallet
<@wumpus>
not bip44, you can find the implemented bips in doc/bips.md
< bedotech>
wumpus: thanks a lot
< fanquake>
wumpus looks like #12855 is ready to go now that sipa's nit has been fixed
< bedotech>
i want to build an online service that generates addresses from an extended public key. Is it better to use backward-compatible P2SH-P2WPKH, or can I use P2WPKH directly?
< fanquake>
bedotech ask in #bitcoin
< bedotech>
fanquake: thanks!
< fanquake>
or #bitcoin-dev
< bitcoin-git>
[bitcoin] promag opened pull request #13026: Fix include comment in src/interfaces/wallet.h (master...2018-04-fixincludecomment) https://github.com/bitcoin/bitcoin/pull/13026
<@wumpus>
MarcoFalke: I don't get it, why are you removing perf.cpp/h?
<@wumpus>
MarcoFalke: is counting cpu cycles no longer relevant?
<@wumpus>
I really don't get this, without any discussion
< sipa>
wumpus: it seems the code is actually unused
< sipa>
only perf_init and perf_fini are invoked
< sipa>
or is your point that we expect to use them again for another purpose soon?
< sipa>
(i don't have an opinion either way, just want to make sure it's clear what is happening)
< MarcoFalke>
wumpus: The discussion was in the pull that removed the usage, IIRC
<@wumpus>
it used to be that the benchmarks reported number of cycles too
<@wumpus>
apparently that was changed, and now my code to measure cycles was removed too
<@wumpus>
no one ever asked me about any of this
<@wumpus>
"I have removed the cycles statistics because I personally don't think it is necessary, and it simplifies the code. I could add it back though if others think its a helpful statistic"
<@wumpus>
wtf asked him :/
<@wumpus>
apparently I even reviewed that and missed that
<@wumpus>
but you can't say there really was any discussion
< MarcoFalke>
fine
< MarcoFalke>
Is there any evidence that this actually is a statistic that is not redundant?
<@wumpus>
redundant compared to what?
< MarcoFalke>
std::chrono
<@wumpus>
well std::chrono counts time, not cycles
< MarcoFalke>
(the clock we use right now)
<@wumpus>
e.g. not when the cpu is actually idle
< MarcoFalke>
I assume the only difference would be in the deser block test?
<@wumpus>
I don't know. It's entirely valid to have a discussion about whether it is redundant or not, but I think how this went is absurd
< MarcoFalke>
It was unused for months now, and no one noticed. I don't think removing the code as it was unused is absurd.
< MarcoFalke>
I am not saying we can't add it back in
<@wumpus>
right I haven't been using the benchmarks the last months
< MarcoFalke>
When added back, it should be an optional switch (clock/cycles), so the format doesn't change again
< sipa>
FWIW, if we continue work on platform-specific SHA256 implementations, it may be necessary to do a mini benchmark at startup to determine which is fastest... generally cpu cycles are the most accurate way of doing that
<@wumpus>
well I"m not going to bother that's for sure
<@wumpus>
if I need this again I'll just patch it in locally
< MarcoFalke>
sipa: That means the function should be moved to util.cpp?
< sipa>
wumpus: i think you're overreacting
< sipa>
people who reviewed this missed this, and acted in good faith
< sipa>
we can trivially bring the code back
<@wumpus>
yes it was also my own dumb fault for not noticing it, on the other hand that PR changed so much in the format it was easy to miss
< sipa>
is counting cpu cycles actually more reliable than counting time in benchmarks? i generally lock my cpu to a single frequency to do benchmarks, as noise complicates things otherwise, but i never bothered trying to look at cpu cycles
< MarcoFalke>
sipa: That was my thought. Only difference would be when the cpu waits on io?
< sipa>
MarcoFalke: waiting on IO does not directly reduce clock rate
< sipa>
it could cause a switch to a different process though (but then cpu time is not counted anymore, while cycles are!)
< MarcoFalke>
oh, cycles are still counted?
< MarcoFalke>
In which case it would be identical to clock
< sipa>
yes, the cycle counter is per cpu thread
< sipa>
which clock are we using?
< sipa>
cpu time or wall time?
< MarcoFalke>
Yeah, I think BlueMatt told me something that the clock might be using cycles internally
< MarcoFalke>
std::chrono::steady_clock mostly
< MarcoFalke>
fallback to std::chrono::high_resolution_clock
< MarcoFalke>
other way round, sorry
< sipa>
yup, sorry, those also keep ticking if a process is not executing
< sipa>
though it's unclear what those clocks really do
<@wumpus>
it's one big stack of abstractions
< sipa>
they may use system calls
< sipa>
which are completely inappropriate if you want to measure very short running pieces of code (sub microsecond)
<@wumpus>
at least cpu cycles is a clear, transparent metric, the only problem is that it's not available on all architectures, and on e.g. ARM it needs system calls to measure :-/
< MarcoFalke>
sipa: I think that is why we loop a bit before taking the clock
< MarcoFalke>
in fact the iterations are hardcoded
< sipa>
that may be historical
< sipa>
the benchmarks used to be entirely self-measuring (aiming to run for 1s)
< BlueMatt>
sipa: the reason the iteration count was hardcoded is cause the time measured changed a shitload depending on iteration counts
< BlueMatt>
so hardcoding was easier to get more stable results than running it 20 times until you had a bunch of results for the iteration count you wanted
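For reference, a self-contained sketch contrasting the two measurements discussed above: wall-clock time via std::chrono::steady_clock and raw cycle counts via the x86 TSC. This is x86/GCC/Clang only; other architectures (e.g. ARM, as wumpus notes above) need different counters.

```cpp
#include <chrono>
#include <cstdint>
#include <cstdio>
#include <x86intrin.h> // __rdtsc()

template <typename F>
void bench(F&& fn, int iterations)
{
    const auto t0 = std::chrono::steady_clock::now();
    const uint64_t c0 = __rdtsc();
    for (int i = 0; i < iterations; ++i) fn();
    const uint64_t c1 = __rdtsc();
    const auto t1 = std::chrono::steady_clock::now();
    const auto ns = std::chrono::duration_cast<std::chrono::nanoseconds>(t1 - t0).count();
    std::printf("%lld ns, %llu cycles\n", (long long)ns, (unsigned long long)(c1 - c0));
}
```

As sipa points out, the TSC keeps ticking across a context switch, so the two numbers can diverge when the process is scheduled out.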
< bitcoin-git>
[bitcoin] Sjors opened pull request #13029: Interpret absense of prune= as prune=1 if there are pruned blocks (master...2018/04/implicit_prune) https://github.com/bitcoin/bitcoin/pull/13029
< jnewbery>
wumpus: I think #13024 can go on the high priority for review. It's blocking walletload, which a few people said they wanted for V0.17 in the last weekly meeting.
< BlueMatt>
I'll probably miss the meeting, but, hey, I got through 3/4 of the high-priority PRs, even acked 2, and left blocking comments on the 3rd....how did *you* do this week?
< BlueMatt>
someone should repeat that when meeting time comes ^ :p
< gribble>
gmaxwell was last seen in #bitcoin-core-dev 5 weeks, 0 days, 1 hour, and 59 seconds ago: <gmaxwell> it's not like you pay lower fees due to it.
< meshcollider>
hi
<@wumpus>
any proposed topics?
< jonasschnelli>
I'd like to touch the light client mode design
<@wumpus>
ok
< promag>
check high priority review first?
< kanzure>
11:10 < BlueMatt> I'll probably miss the meeting, but, hey, I got through 3/4 of the high-priority PRs, even acked 2, and left blocking comments on the 3rd....how did *you* do this week?
< jonasschnelli>
ack
<@wumpus>
promag: yes, that's always the first topic
< jonasschnelli>
we can't view jimpo's txindex PR
< jonasschnelli>
github issue... can we reopen the PR in a new #?
< sipa>
Do we at some point just close the PR and open a new one, to flush all historical discussion?
< jonasschnelli>
sipa: yes
< sipa>
(assuming that's the reason for the sad unicorn)
< jonasschnelli>
jimpo should do it though...
< sipa>
yes
< kanzure>
has someone reported the unicorn to support@github.com
< jonasschnelli>
I could not get hold of him in the last few days
< achow101>
maybe telling github would help...
< jonasschnelli>
kanzure: please do
< sipa>
probably better if a repo owner does so
< sipa>
i'll send a mail
< jonasschnelli>
I doubt it would help in a reasonable timeframe
< meshcollider>
I don't get the unicorn, I can see it fine
< jonasschnelli>
it's weird
< promag>
me too, but in incognito
< instagibbs>
kanzure, i did
<@wumpus>
I see the unicorn
< jonasschnelli>
I can't. But i can in mobile view
< instagibbs>
incognito works reliably for me
< jnewbery>
instagibbs reported it
< kanzure>
instagibbs: thanks. i'll refrain.
< sipa>
instagibbs: when?
< instagibbs>
couple days ago
< sipa>
any response?
< instagibbs>
no
< achow101>
I've reported it just now
< achow101>
also works in private browsing (firefox)
< jonasschnelli>
logged out state works
< jonasschnelli>
however, jimpo should just open a new PR
< jamesob>
I can view it in lynx just fine
< sipa>
suggested tiny topic: cyclic dependency
< jonasschnelli>
hah jamesob
< jamesob>
;)
<@wumpus>
#topic cyclic dependency
< sipa>
i was wondering if we should have a policy against the type of cyclic dependency where the .cpp files include each other's .h files (but the .h files do not include each other)
<@wumpus>
do we have that problem?
< sipa>
that's not a cyclic dependency for the compiler, but it does mean those two modules can't really be used independently and is generally a sign of bad separation
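The pattern sipa describes, in miniature; this compiles, so the compiler never complains, but neither module can be built, linked, or tested without the other (the file names are of course made up):

```cpp
// a.h
void a_do();
int  a_helper();

// b.h
void b_do();

// a.cpp
#include "a.h"
#include "b.h"   // a's implementation reaches into b ...
void a_do() { b_do(); }
int  a_helper() { return 42; }

// b.cpp
#include "b.h"
#include "a.h"   // ... and b's implementation reaches back into a
void b_do() { (void)a_helper(); }
```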
< kanzure>
do we have many of those?
< cfields>
sipa: example?
< sipa>
there are a few open PRs that introduce them
<@wumpus>
I do agree cyclic dependency should be avoided in general
< sipa>
so i wanted to bring it up here to see if that should be a PR merging blocker
< sipa>
or just a "try to fix it up afterwards if introduced"
< sipa>
i'm fine with either
< cfields>
indeed sounds like likely bad design that should at least be justified in the PR
< sdaftuar>
cfields: +1
< jonasschnelli>
Yes. And maybe mention it in the developer guidelines?
<@wumpus>
right, might make sense to discuss this in the PR
< meshcollider>
Sounds like it'd be a painful thing to fix up afterwards in some cases
< aj>
seems good to fix it before merge, but not sure it should be added as a lint check or similar
< sdaftuar>
i can imagine that in some contexts we'd merge anyway
<@wumpus>
for a refactor it should definitely be avoided
<@wumpus>
refactoring is supposed to make the code better
<@wumpus>
not introduce further bad design
< sdaftuar>
that might be an example of an improvement that just doesn't go far enough though?
< sipa>
well one way of looking at it is that the current util.cpp there has two subcomponents that already have a cyclic dependency
< sipa>
within the same .cpp file
< sipa>
and forcing people to fix it before they can separate it is maybe counterproductive
< sipa>
or not ;)
<@wumpus>
maybe...
< sipa>
enough on the topic i guess
< jcohen>
one "quick" solution to avoiding that circular dep would be to jam it all into a single file - which i think would be even less desireable
<@wumpus>
it is already in a single file
< jcohen>
the alternative would be to split util up even further, which would lengthen the diff
< sipa>
this is just an example, it's relatively easy to fix in this case
< cfields>
sipa: right. Since the point of this one is to break up a big blob anyway, requiring it to solve the circular issue in one go would be pretty burdensome. But if it's moving in the right direction, imo maintaining the status quo is ok.
< cfields>
we could go back to main.cpp :p
< sipa>
cat *.cpp | gcc -
<@wumpus>
cfields: true...
< achow101>
"just put everything into one big file and call it main.cpp"
<@wumpus>
I don't think I feel strong enough about this one to put it in the guidelines
< jtimon>
well, perhaps breaking util as needed first makes sense
<@wumpus>
though commenting on it where relevant is a good idea
< sipa>
sgtm
<@wumpus>
if there's an obvious solution to avoid it that's better, but we cannot enumerate every single software design compromise
< jonasschnelli>
if that is something we should follow or drop
< jonasschnelli>
I know the use-cases with 10794 are limited, but a stepping stone towards hybrid and BIP158 light validation
< sipa>
jonasschnelli: will review
< jonasschnelli>
maybe first check for a concept ACK/NACK
< jtimon>
aj: yeah, and it seems moveonly, so perhaps just rebasing 12954 on top of 13021 solves the issue in this case? sipa
< sipa>
jonasschnelli: yes, makes sense - it's a bit too much to look at right now in the meeting though, i think
< jonasschnelli>
sure...
< jonasschnelli>
I'm only interested to know if the concept makes sense for you guys
< jonasschnelli>
(of having a light client mode)
<@wumpus>
they have been open for a long time, probably should get around to at least concept-ACKing them
< sipa>
yes
< jonasschnelli>
Great. Thanks
< sipa>
jonasschnelli: oh, the idea of having a light client mode - that absolutely makes sense to me
< sipa>
but it heavily depends on how and what :)
< meshcollider>
Having a light client mode is definitely a concept ACK for me
< LukeJr>
jonasschnelli: only as a temporary stage
< sipa>
LukeJr: how so?
< jonasschnelli>
right... I wasn't sure if the PR is the right place to discuss that or if we want to do it in a meeting
< LukeJr>
sipa: it should always be building up to a full node in the bg
< sipa>
LukeJr: i disagree - it's a perfectly valid usecase to have one full node you run yourself, and then have multiple client nodes connect exclusively to it
< LukeJr>
(even if that bg process is paused for a time)
< LukeJr>
sipa: hmm
< cfields>
jonasschnelli: I realize this is really unhelpful feedback, but I find the current download logic nearly impossible to follow as-is. I remember giving up on review of this last time for that reason. Imo it's due for a bit of a cleanup/encapsulation before piling more on :(
< sipa>
LukeJr: but lightweight upgrading to full in the background is also an very good usecase, imho
< cfields>
(feel free to ignore that, maybe it's just my issue reading the code)
< sdaftuar>
cfields: you are not the only person who finds the download logic confusing :)
< sipa>
i believe those who (helped) write it agree :)
< jonasschnelli>
heh. Yes. The open PR is not the first try to do this with as small an impact on the code as possible...
< sipa>
jonasschnelli: thanks for stickin with it, though
< cfields>
yes, I was hesitant to mention that because I didn't want to de-motivate at all.
< LukeJr>
sipa: as soon as the wallet is split, your use case just works
< sipa>
LukeJr: this is how i imagine the wallet being split :)
< jonasschnelli>
It's still unclear though how fee estimations and mempool checks would be done "in that way"
< sipa>
jamesob: that's modularizing the code better, which is independently useful
< sipa>
i don't think the goal should be separating the wallet from the node into different processes, and then inventing a protocol between the two... instead of just making the wallet run as a light client
< jamesob>
the two sound very similar to me
< LukeJr>
I don't agree. There's no reason the wallet and node should be in the same process.
< jonasschnelli>
sipa: I agree. For me its just unclear how to transport fee estimations and mempool checks between light client and the fullnode.
<@wumpus>
LukeJr: that's not what sipa is saying, in his case the node and wallet would also be in different processes, but without custom protocol
< sipa>
jamesob: the advantage of using P2P as communication between node and wallet (which is what you get if you see wallets as just lightweight nodes) is that it actually modularizes things: you can run _any_ wallet software or _any_ node software
< jamesob>
wumpus sipa: but then using what protocol? a more fleshed out rpc interface?
< sipa>
jamesob: P2P
< instagibbs>
p2p messages
< instagibbs>
with whitelisted fee estimation type messages, i assume
<@wumpus>
right
< jonasschnelli>
instagibbs: but not before we have BIP150/151
<@wumpus>
I think for localhost it's fine without?
< jonasschnelli>
I don't want MITMled fees
< jonasschnelli>
yes. sure
< sipa>
oh, short topic: update on private authentication protocols
<@wumpus>
but yes, in general that's correct
< jnewbery>
I don't think doing process separation with IPC precludes later doing a p2p-based wallet
< jamesob>
sipa: thanks for the explanation; will noodle on that
< jnewbery>
p2p-based wallet seems like a very large project
< sipa>
jnewbery: i agree, but i think it's a bit overkill
< sipa>
jnewbery: however, i don't think that's a choice that needs to be made
< jonasschnelli>
jnewbery: we are not too far away from a p2p based wallet IMO
< sipa>
encapsulating the communication between node and wallet is an objective improvement to the code
< sipa>
even if it does not lead to also a process separation along that interface
< jnewbery>
jonasschnelli: really? Maybe I overestimate the difficulty, but it seems like we'd need a lot of new logic in the wallet to understand p2p
< sipa>
jnewbery: you misunderstand!
< sipa>
jnewbery: it would reuse all the existing full node code
< sipa>
and p2p implementation
< sipa>
just don't do validation
< instagibbs>
turn off full validation, ofc
< ryanofsky>
just catching up, but yeah, i think two approaches are not exclusive, and work done to support ipc will be useful anyway for making wallet more independent and better able to do async p2p stuff
< jonasschnelli>
jnewbery: it needs BIP158 light client mode (or fullblock), fee and mempool checks. Done
< sipa>
ryanofsky: yes, fully agree
< jonasschnelli>
You just start two instances of Core. One as a full node, one with disabled validation pointing to the full node
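A hedged sketch of that setup as command lines. -connect and -datadir are real bitcoind options; -disablevalidation is hypothetical here, standing in for whatever option #10794 actually introduces.

```sh
bitcoind -datadir=/data/fullnode                      # ordinary, fully validating node
bitcoind -datadir=/data/light -disablevalidation \
         -connect=127.0.0.1:8333                      # light instance, trusting only the node above
```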
< LukeJr>
does it simplify or complicate the internal code? (ignoring the present level of complexity in itself)
< jonasschnelli>
--> <sipa>oh, short topic: update on private authentication protocols
< jnewbery>
ok, I understand. How much work is it to make core work in --disablevalidation mode?
< sipa>
jnewbery: that's the current topic :)
< jonasschnelli>
jnewbery: 10794
< jonasschnelli>
(does it)
< jnewbery>
ok, I'll shut up and read the PR
< jonasschnelli>
although 10794 leads out the wallet
< jonasschnelli>
(to make it reviewable)
< jonasschnelli>
*leaves out
<@wumpus>
#topic update on private authentication protocols (sipa)
< sipa>
so, as some know gmaxwell and i have been thinking about authentication protocols that have better privacy than bip150
< jonasschnelli>
privacy in the sense of fingerprinting?
< sipa>
yes
< sipa>
the goal is to have a design where one node has 1 or more private keys, and the other node has 1 or more public keys
< sipa>
and the second node learns whether one of the other node's private keys matches one of your public keys
< sipa>
but _nothing_ else
< sipa>
the node with the private keys does not even learn if authentication was successful
< sipa>
or doesn't learn which keys it was being queried for
< jonasschnelli>
the use case is then a widely deployed authentication scheme rather than the more client-server-ish approach?
< sipa>
it's still client-server
< sipa>
the cool thing about this is that you can always run the authentication protocol
< LukeJr>
sounds hard to give an "authentication failure" error?
< sipa>
LukeJr: the idea is that most of our connections are unauthenticated anyway (and should be)... so whatever privileges you give to authenticated nodes, you just don't give if auth fails
< sipa>
this has a very cool property
< sipa>
you can _always_ run this authentication protocol
< sipa>
even if you don't care who the other party is
< jonasschnelli>
sipa: but, if you have node A and node B's pubkeys and you want to connect to A, how can you be sure you're not connecting to B?
< LukeJr>
sipa: but if the authenticating node doesn't know if it failed, then it doesn't know if it has the authenticated connection it expects
< sipa>
LukeJr: it can run the same protocol in the other direction to find out
< LukeJr>
hmm
< sipa>
jonasschnelli: you just query for who you want the other party to be
< phantomcircuit>
or just ask for something that requires authentication
< sipa>
the cool thing is that if you always run the authentication protocol, but if you're not interested in authentication do it with just a randomly generated pubkey, a MitM can't find out what you're doing
< sipa>
they have to assume you're trying to authenticate
< sipa>
anyway, turns out... this is difficult
< cfields>
sipa: i'm not sure if this has evolved from when we discussed last... does your peer learn how many pubkeys you'd accept?
< sipa>
cfields: yes, but you can stuff your request with a number of random ones
< cfields>
right, ok
< sipa>
just pad to 256 keys or whatever, always
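A small sketch of the padding idea, using Bitcoin Core's CKey/CPubKey types (ECC must already be initialised, e.g. via ECC_Start()); the target size of 256 and the idea of handing the padded list to the authentication round come from the discussion above, everything else is illustrative.

```cpp
#include <key.h>
#include <pubkey.h>
#include <vector>

// Pad the list of pubkeys we query for with freshly generated throwaway keys,
// so an observer cannot tell how many keys (if any) we actually care about.
std::vector<CPubKey> PadAuthQuery(std::vector<CPubKey> query, size_t target = 256)
{
    while (query.size() < target) {
        CKey dummy;
        dummy.MakeNewKey(/*fCompressed=*/true); // random key, never reused
        query.push_back(dummy.GetPubKey());
    }
    return query; // hand to the (hypothetical) authentication round
}
```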
< instagibbs>
so, did you fix it? :)
< jonasschnelli>
sounds interesting.. are there written specs already?
< sipa>
we have a protocol that works with 1 privkey and 1 pubkey - which means you sometimes need to run it many times, which doesn't lead to great fingerprinting properties
< sipa>
and i'm talking to some people to extend it :)
< sipa>
note that the protocol linked to is broken
< sipa>
but it does explain the rationale pretty well, i think
< sipa>
end topic
< jonasschnelli>
I guess this protocol would require more cryptanalysis than the existing BIP150
< sipa>
jonasschnelli: that's ok, i'm talking to dan boneh about it
< jonasschnelli>
Good!
< meshcollider>
Dan is the solution to all crypto problems
< jonasschnelli>
heh
<@wumpus>
it'd be good as a future successor to BIP150 - though I guess this research shouldn't discourage anyone from implementing BIP150 and having something working on more short term
< sipa>
right
<@wumpus>
(maybe that's obvious, but just to be clear)
< jonasschnelli>
Implementing BIP151/150 is still held back due to the network refactor, right?
< jonasschnelli>
If not, I can continue on BIP150
< jonasschnelli>
sry. 1541
< jonasschnelli>
sry. 151
< cfields>
sipa: I'm still a bit confused as to the use-case. Is the intent to be able to explicitly find a known peer, or more generally to build up a list of tofu-ish nodes that aren't known to misbehave, and look for them first when creating outbound connections?
< jnewbery>
cfields: what's the state of network refactor? Any PRs awaiting review?
< sipa>
cfields: you can't do TOFU, you don't learn who you're connecting to
< sipa>
cfields: the whole point is avoiding having discoverable identities for things that should be identityless
< sipa>
but sometimes you have a node you trust already (due to external reasons, for example you run it yourself)
< sipa>
in which case you'd configure an addnode with a known pubkey or so
< cfields>
sipa: got it, thanks.
< cfields>
jnewbery: #11457 still, looks like it needs rebase again
< Murch>
cfields: In the case that you for example want to connect with a thin client to your own node, the only valid key you query for is your home node's. If you want to defend against Sybil, you may query a list of known friends and accept any of them. If you just want to scare off a MITM, you query for random keys.
< sipa>
this is just a replacement for the authentication part; it needs an existing encrypted connection to run over anyway
< * jonasschnelli>
will continue with 151
< LukeJr>
how does one authenticate the encryption key?
< sipa>
you don't
< jonasschnelli>
bip151 does not protect from MITM
< sipa>
encryption is done with ephemeral keys, and is completely unauthenticated
< sipa>
it does not protect against MitM
< jonasschnelli>
Only from passible observing and undetectable interception
< jonasschnelli>
*passive
< sipa>
but then you run an authentication protocol which can determine if the party you are talking to (possibly the MitM) is who you think it is
< sipa>
anyway, enough on the topic
< sipa>
just wanted to give an update that there are some cool ideas
<@wumpus>
yes, thanks!
< * sipa>
lunch
< jonasschnelli>
thanks sipa for working on this!
<@wumpus>
unless someone has a very quick last-minute topic I think that was the last topic for today
<@wumpus>
clear :)
<@wumpus>
#endmeeting
< lightningbot>
Meeting ended Thu Apr 19 19:56:03 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
< phantomcircuit>
sipa, separating encryption and authentication like that has the potential for an obvious mistake
< phantomcircuit>
mitm encryption and then fail to authenticate the shared secret used for encryption
< achow101>
question about wallet hd upgrade path (#12560): is it preferable to warn that a new backup is needed for -upgradewallet or just use a new option entirely?