< sipa>
achow101: after batching the writes, it should be almost entirely cpu bound
< sipa>
so you're now at 8ms of CPU per generated key?
< supay>
i lost access to my bitcoin and scrubbed my system for any data i could find. found some bip32/44 paths. trying to import private keys into bitcoin core using importprivkey, but that isn't working. there aren't any errors either. any help?
< sipa>
oh, if we'd cache intermediary results in the key derivation we'd save a bunch
< achow101>
sipa: I'm pretty sure it's still spending most of the time on writing to the wallet, as the wallet file starts growing in size almost immediately and that should only happen after all keys are derived
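The derivation-caching idea sipa raises above can be sketched in miniature. This is a hedged toy model, not Bitcoin Core's code: real BIP32 derivation uses HMAC-SHA512 and elliptic-curve math, while here each step is a plain SHA-256, just to show how caching the shared path prefix collapses the per-key cost.

```python
# Toy model of caching intermediate derivation results. NOT real BIP32:
# each "derivation step" is modeled as SHA-256 of parent material plus
# the child index, purely to count the work saved by a prefix cache.
import hashlib

calls = 0

def step(parent: bytes, index: int) -> bytes:
    global calls
    calls += 1
    return hashlib.sha256(parent + index.to_bytes(4, "big")).digest()

def derive_uncached(seed: bytes, path: list[int]) -> bytes:
    key = seed
    for i in path:
        key = step(key, i)
    return key

cache: dict[tuple[int, ...], bytes] = {}

def derive_cached(seed: bytes, path: list[int]) -> bytes:
    # Reuse the deepest already-computed prefix of the path.
    key, depth = seed, 0
    for d in range(len(path), 0, -1):
        if tuple(path[:d]) in cache:
            key, depth = cache[tuple(path[:d])], d
            break
    for d in range(depth, len(path)):
        key = step(key, path[d])
        cache[tuple(path[:d + 1])] = key
    return key

seed = b"\x00" * 32
H = 0x80000000  # hardened-index flag

# Derive m/44'/0'/0'/0/i for i in 0..99, both ways.
calls = 0
for i in range(100):
    derive_uncached(seed, [44 | H, 0 | H, 0 | H, 0, i])
uncached_calls = calls  # 5 steps per key -> 500 total

calls = 0
for i in range(100):
    derive_cached(seed, [44 | H, 0 | H, 0 | H, 0, i])
cached_calls = calls    # prefix computed once -> 4 + 100 = 104 total
```

With 100 leaf keys under a shared 4-level prefix, the cached version does 104 steps against 500 without the cache; the deeper the shared prefix, the bigger the win.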
< gmaxwell>
supay: what exactly does "isn't working" mean?
< supay>
gmaxwell: i've done: bitcoin-cli importprivkey the_key "" true -- and i wait about 25 mins or so for the rescan, and when i run bitcoin-cli getwalletinfo, the balances are still 0
< sipa>
supay: are you sure the private key you're importing corresponds to the address?
< gwillen>
hey supay, I assume you are 0x23212f from github?
< sipa>
could it be you're importing a root key instead of the derived key?
< sipa>
or gotten the compression flag wrong? (depending on how you obtained the private key)
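sipa's compression-flag point can be illustrated directly from the WIF encoding. A minimal sketch, assuming standard mainnet WIF rules (0x80 version byte, a trailing 0x01 byte for compressed keys); the helper names `b58check_encode`, `b58check_decode`, and `wif_is_compressed` are invented here, and the key value is a toy:

```python
# Sketch of how a WIF string signals the compression flag. Uncompressed
# mainnet WIFs start with '5'; compressed ones ('K'/'L') carry an extra
# 0x01 byte after the 32-byte key. Pure stdlib, illustration only.
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58check_encode(payload: bytes) -> str:
    raw = payload + hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    n = int.from_bytes(raw, "big")
    s = ""
    while n:
        n, r = divmod(n, 58)
        s = B58[r] + s
    # Leading zero bytes encode as leading '1' characters.
    return "1" * (len(raw) - len(raw.lstrip(b"\x00"))) + s

def b58check_decode(s: str) -> bytes:
    n = 0
    for ch in s:
        n = n * 58 + B58.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    raw = b"\x00" * (len(s) - len(s.lstrip("1"))) + raw
    payload, checksum = raw[:-4], raw[-4:]
    assert hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4] == checksum
    return payload

def wif_is_compressed(wif: str) -> bool:
    payload = b58check_decode(wif)
    assert payload[0] == 0x80            # mainnet private-key version byte
    return len(payload) == 34 and payload[-1] == 0x01

key = (1).to_bytes(32, "big")            # toy private key, NOT for real use
wif_c = b58check_encode(b"\x80" + key + b"\x01")  # compressed form
wif_u = b58check_encode(b"\x80" + key)            # uncompressed form
assert wif_is_compressed(wif_c) and not wif_is_compressed(wif_u)
```

The same 32-byte secret produces two different WIF strings, and each decodes to a different address type, which is exactly how a wrong compression flag makes an import "silently" fail to match.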
< supay>
sipa: to which address? i gather importing the private key suffices, as it has the address?
< supay>
gwillen: yes, that is correct! :)
< sipa>
supay: well i don't know where you got the key
< gwillen>
supay: you said the path is m/44'/0'/0'/0/427 -- are you trying to import the master privkey, or the privkey derived from that path?
< supay>
gwillen: thank you for your reply there. i wasn't aware this channel exists.
< gwillen>
np, I didn't want to encourage you to come here since it's not really a tech support channel, but the people here are trustworthy :-)
< supay>
gwillen: i have a bunch of lines like this one (multiple paths). i see them as 4 comma separated values. the 2nd column is address, the 4th is private key. i'm not sure if it's the master or otherwise
< gwillen>
if you are trying to import the master privkey, that is your problem -- bitcoin core does not know about bip 44, and won't try to derive the child keys automatically
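gwillen's point about the path can be made concrete: a BIP44 path is just a list of child indexes, with hardened steps (the `'` marks) offset by 2^31, and Bitcoin Core's `importprivkey` never performs that derivation itself. A small sketch (the `parse_path` helper is hypothetical, not a Core RPC):

```python
# What m/44'/0'/0'/0/427 means numerically: each path component is a
# child index, and hardened components add 2^31 to the index.
HARDENED = 1 << 31

def parse_path(path: str) -> list[int]:
    indexes = []
    for part in path.split("/")[1:]:        # skip the leading "m"
        if part.endswith("'") or part.endswith("h"):
            indexes.append(int(part[:-1]) + HARDENED)
        else:
            indexes.append(int(part))
    return indexes

assert parse_path("m/44'/0'/0'/0/427") == [
    0x8000002C,  # 44'  (purpose: BIP44)
    0x80000000,  # 0'   (coin type: bitcoin)
    0x80000000,  # 0'   (account 0)
    0,           # external (receive) chain
    427,         # address index
]
```

Importing the master key skips all five of these steps, which is why the funds at the derived address never show up.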
< gwillen>
ahh, if each line has a different key, then I expect it is the child key, which I would expect to have the funds
< supay>
that is correct, each line has a different key
< supay>
wait, let me share some with 0 balance
< supay>
is that safe?
< achow101>
no, it is not safe
< gwillen>
supay: if you "getaddressinfo" the address after importing the privkey, what does it say about ismine and solvable?
< gwillen>
supay: sharing private keys is, to a first approximation, never ever safe
< supay>
haha, makes sense
< supay>
let me check getaddressinfo, one moment
< gwillen>
and in fact, sharing private keys which are derived from a path related to other private keys that have funds is _specifically_ unsafe
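The reason this is *specifically* unsafe can be sketched with the arithmetic behind non-hardened BIP32 derivation. This is a deliberately simplified model (the tweak is a plain hash here, where real BIP32 uses HMAC-SHA512 over the parent public key and chain code, and the keys below are arbitrary stand-ins), but the algebra that enables the leak is the same:

```python
# Why leaking a non-hardened child key is dangerous: in BIP32,
# child_priv = (parent_priv + tweak) mod n, where the tweak is computable
# by anyone holding the parent *public* key (the xpub). So one leaked
# child private key plus the xpub yields the parent private key.
import hashlib

# Order of the secp256k1 group.
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def tweak(parent_pub: bytes, index: int) -> int:
    # Stand-in for the HMAC-SHA512 tweak; publicly computable from the xpub.
    h = hashlib.sha512(parent_pub + index.to_bytes(4, "big")).digest()
    return int.from_bytes(h[:32], "big") % N

parent_priv = 0x1234567890ABCDEF          # toy private key
parent_pub = b"fake-xpub-material"        # stands in for the real public key

child_priv = (parent_priv + tweak(parent_pub, 427)) % N

# An attacker holding the xpub and the leaked child key inverts the sum:
recovered = (child_priv - tweak(parent_pub, 427)) % N
assert recovered == parent_priv
```

Hardened derivation blocks this by mixing the parent *private* key into the tweak, so the tweak cannot be computed from the xpub alone.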
< supay>
gwillen: ismine for that address is currently false
< gmaxwell>
supay: rescan can take hours depending on your system, but the call blocks while it's in progress so you'll know when it's done.
< gwillen>
and the fact that this is not obvious results in some serious arguments over usability
< gwillen>
gmaxwell: he said in his github issue that the rescan completed
< supay>
gmaxwell: i've been observing debug.log. i'm sure rescan is complete
< gmaxwell>
supay: good!
< supay>
gmaxwell: wait, is ismine: false a good thing? :o
< gmaxwell>
aside from the above exchange with supay asking if it was safe to share zero-balance keys with us in here, ... ahem, non-hardened derivation, ... how many users would even ask if it was safe? [/obrant]
< gwillen>
gmaxwell: yes, indeed, the very fact that he asked the question is strong evidence of your rightness in this argument :-)
< supay>
i'm so confused. i shouldn't have asked?
< gwillen>
haha, no, not at all
< supay>
:D
< gwillen>
just that most people would go ahead and share without asking
< gwillen>
which would be bad because it would compromise their wallet
< achow101>
supay: ismine false means that address is not part of your wallet
< supay>
gwillen: ah yes, but you cautioned me in the github reply!
< gwillen>
and there is an ongoing argument over the fact that this happens
< gmaxwell>
supay: it's good you asked, ignore my comment there-- I'm just pointing out that it's relevant to an unrelated longer-term debate.
< achow101>
supay: so either importprivkey is not working (we have tests, but it's possible) or the address you have is not an address corresponding to that private key
< supay>
achow101: that is correct. all the material i have read indicates directly using importprivkey. do i need to first add the address to the wallet?
< supay>
gmaxwell: ah hehe
< gwillen>
supay: if you run dumpprivkey address, it will show you what it thinks the privkey is for the address
< gwillen>
and you can compare it to the one you have in your own dump
< supay>
achow101: right, i really hope it is not the latter
< gmaxwell>
I'm guessing that the privkey he is importing doesn't match the address he expects. Maybe due to a compressed flag?
< gwillen>
if it's different, don't panic yet, there are lots of ways that the UX can be confusing
< gmaxwell>
supay: what software wrote this list of addresses / paths/ keys?
< supay>
gmaxwell: can't be very sure! :(
< achow101>
supay: can you try `bitcoin-cli listaddressgroupings` and see if your address is in there?
< supay>
i just searched for these patterns on a backup of my computer
< supay>
achow101: sure, one moment
< gmaxwell>
I believe supay said above he was getting ismine false on an address he believed he imported a private key for
< gmaxwell>
If so, I think that proves that he did not successfully import a matching private key.
< supay>
achow101: hm, interesting. another one that i tried some time ago is on listaddressgroupings. this one, i ran importaddress but didn't import the private key yet
< achow101>
supay: that's expected. importaddress imports the address to be watched.
< supay>
gmaxwell: is that the only conclusion? :(
< gwillen>
supay: ahhhh sorry, did you run 'getaddressinfo' on one where you just imported the address? I meant to suggest you do it on one where you imported the key.
< supay>
achow101: ah, alright
< achow101>
do you see any unfamiliar addresses there? if so, try dumpprivkey on those addresses and see if you can find the one that matches the private key you imported
< supay>
gwillen: ah no, i ran it on the one where i imported the key
< gwillen>
supay: don't panic yet, there are many many ways that the user interface can go wrong, and most of them do not involve you losing coins once we figure out what's going on
< supay>
achow101: i do in fact, this was a clean install. there were definitely some new addresses
< supay>
gwillen: oh! that's a relief
< gwillen>
honestly all this stuff is very arcane
< gmaxwell>
My leading hypothesis is that the dump he has shows the wrong addresses with the keys. E.g. because it mishandles the compressed flag.
< gmaxwell>
Or because of some off by one, like each line is the address for the private key on the prior/next line.
< gmaxwell>
either case is easily fixable if they're the issue.
< achow101>
gmaxwell: i'm trying to figure out if he's got an off by one or something like that. once he can find the address that was imported for the private key, he can check if that's in his list
< gmaxwell>
supay: have any of the addresses with balances that you're trying to import been spent from before?
< achow101>
gmaxwell: i would be surprised if it was a compressed flag problem. the key is compressed (he says it begins with K) and it's BIP 32 derived, which basically requires compressed keys
< supay>
gmaxwell: not sure, but pm'd you one such address
< gmaxwell>
achow101: darn, I missed the fact that we'd already asked for the leading character.
< supay>
achow101: yep, starts with a K. another with a K. and there's one with an L
< achow101>
gmaxwell: not so much asked but provided in the issue :) #15736
< gmaxwell>
supay: does the file you're taking these from have any kind of header that I could use to determine the originating software?
< gmaxwell>
achow101: still possible some broken app was using bip32 with uncompressed keys. But I agree that's a lot less likely.
< supay>
it had something indicating the columns. that's all i remember. unfortunately, got rid of the header!
< supay>
that's how i know the last column was private key and second column was address
< gmaxwell>
in the future when you're trying to recover data, take care to not throw anything out. :P
< supay>
yess :(
< gmaxwell>
do we really have no rpc that will take a private key and return addresses?
< achow101>
gmaxwell: nope
< supay>
gmaxwell: interestingly, i think electrum does that
< supay>
when you import a private key, it says "so and so addresses were added"
< gmaxwell>
probably the easiest thing to do would be to import a single additional key from the middle of the list with rescan false, and then look in dumpwallet output to see what new address got added.
< supay>
oh, i see, let me try that
< gwillen>
(in general I recommend always using rescan false, and then call rescanblockchain when you're ready to rescan -- you can give it a height to start scanning so it doesn't need to start in 2009)
< supay>
oh, i didn't know you could do that! interesting :D
< supay>
gmaxwell: my dumpwallet output is huge. so many keys, addresses etc. no idea where all of these came from
< gmaxwell>
supay: by default the wallet starts off with 2000 keys generated in it.
< achow101>
that's expected. just open the file in a text editor and ctrl+f your private key
< supay>
oh, wow. i see
< supay>
alright, the addresses are definitely not what i expected. both have 0 balance
< gmaxwell>
are the addresses alongside them present anywhere in your list?
< supay>
nope, they aren't! :O
< achow101>
supay: one thing that I think we forgot to mention was the address type. dumpwallet probably gives you addresses beginning with 3, yes? what do the addresses in your list begin with?
< achow101>
(rescan will pick up both address types, just that exporting addresses from core will use the default type)
< supay>
dumpwallet gave me a 3-address corresponding to the private key. the addresses on the list begin with 1
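The address-type distinction achow101 describes comes down to the version byte in base58check. A sketch with an arbitrary placeholder hash (not derived from any real key); version 0x00 yields a legacy "1..." P2PKH address, 0x05 a "3..." P2SH address (the default p2sh-segwit type also encodes with 0x05, hence dumpwallet's 3-addresses):

```python
# Base58check with different version bytes produces the different address
# prefixes discussed above. The hash160 value is a placeholder.
import hashlib

B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58check(payload: bytes) -> str:
    raw = payload + hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    n = int.from_bytes(raw, "big")
    s = ""
    while n:
        n, r = divmod(n, 58)
        s = B58[r] + s
    # Leading zero bytes encode as leading '1' characters.
    return "1" * (len(raw) - len(raw.lstrip(b"\x00"))) + s

hash160 = bytes(range(20))            # placeholder 20-byte hash

legacy = b58check(b"\x00" + hash160)  # P2PKH: starts with '1'
p2sh   = b58check(b"\x05" + hash160)  # P2SH: starts with '3'

assert legacy.startswith("1")
assert p2sh.startswith("3")
```

Same hash, two different strings; a rescan will detect payments to either form, but an exported address list only shows one of them.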
< achow101>
restart bitcoind with -addresstype=legacy and try the dumpwallet thing again
< supay>
alright, let me give that a go
< gwillen>
btw supay, are you only using bitcoin core for rescuing this old wallet? I.e. you have no other funds in it?
< supay>
yes, that is correct!
< gwillen>
(if so, it's possible to set it up without the 2000 keys already in it, just to reduce confusion)
< achow101>
gwillen: that's not in a release yet
< gwillen>
o :-(
< gwillen>
heh, nevermind
< achow101>
would require installing 0.18.0rc2
< gmaxwell>
huh? you can still set the keypool size to 1.
< achow101>
oh right. I assumed gwillen was talking about blank wallets
< supay>
achow101: restarted bitcoind with that flag. however, same addresses in dumpwallet
< gwillen>
ohhh, I was, but that's clever
< gwillen>
you probably need to do the import again now that the flag is set
< supay>
ah, alright
< gwillen>
the addresses are remembered from before, unless you make a new wallet
< gwillen>
BTW, let me take this opportunity to ask: you have multiple safe copies of this list of private keys, right? Stored in at least a few places, at least one of which is not "electronically on the computer you're using right now"?
< gwillen>
since you have hopefully learned your lesson about single points of failure? :-)
< supay>
i don't have more copies ;((((
< gwillen>
well you should make them now
< supay>
hahaha, alright
< gwillen>
before you do any more fiddling with the list, like, very carefully copy it to a USB stick
< gwillen>
and do not even dream of getting rid of the old hard drive / backup / whatever you got it from
< gwillen>
never do any more backups to that drive or touch it in any way
< gwillen>
store it safely as an extra precaution
< supay>
hehe, that makes sense
< supay>
wait, let me retrieve the list as i had it before
< supay>
i think that will shed more light to this issue
< gwillen>
(for example, if there is somehow something wrong with your dump, or it is incomplete, you might have to go back to it and try to dig out more information)
< gwillen>
in fact, consider buying another drive to make an identical copy of whatever drive that is
< achow101>
supay: in your dumpwallet output, you should see multiple addresses for each private key
< gwillen>
like, do the math on how much your bitcoins are worth, and consider that the bits we are helping you recover right now are effectively a pile of hundred-dollar bills of the equivalent value
< supay>
gwillen: i have a few copies of the backup, on cloud. but i'll make other physical copies as well
< gwillen>
and you should treat them with equivalent care
< supay>
achow101: yes, that is correct. i see 3
< gwillen>
okay, great
< gwillen>
(hopefully encrypted in the cloud...)
< supay>
yep, it is encrypted! :)
< achow101>
supay: one of those should begin with a 1, that doesn't match what you have in your list?
< supay>
achow101: right. there's one that begins with 1. it doesn't match :( which is why i want to get a clean dump of the files and try again. i think i messed up somewhere along the way
< achow101>
do you see that address anywhere else in your list?
< supay>
the one in dumpwallet? nope, that isn't present in my list
< achow101>
I doubt that you've done anything wrong that's made this output an incorrect address. I think it's far more likely that the private keys you have simply don't match
< gwillen>
oops, we lost him
< gwillen>
hmm, I just imported a descriptor with no private keys, and got "All private keys are provided, outputs will be considered spendable. If this is intentional, do not specify the watchonly flag."
< gwillen>
it's a sh(wsh(multi(2,A,B,C))) with fingerprints and paths given
< gwillen>
and tpubs
< sipa>
are the privkeys already imported?
< gwillen>
although getdescriptorinfo on it returns "hasprivatekeys": false
< gwillen>
shouldn't be
< sipa>
in that case it sounds like there's a bug...
< gwillen>
*nods*
< sipa>
what does gai on one of the resulting addresses say?
< gwillen>
I'm actually not sure how to get an address from it
< gwillen>
that was going to be the next thing to figure out :-)
< sipa>
deriveaddresses
< gwillen>
oho
< gwillen>
gai says ismine: false
< gwillen>
gai also displays the desc in a kind of broken way, uhoh
< gwillen>
it's supposed to step all three *'s together, right?
< sipa>
yes
< gwillen>
ahh, it looks like maybe it is, but it's forgotten the origin info for all but the first one
< gwillen>
so they display without it, but they're still changing
< gwillen>
this is a peak bug-filing day for me I guess :-)
< gwillen>
I suppose not a lot of people are executing these codepaths yet
< sipa>
yeah
< gwillen>
fanquake: how do you tag those so fast :-P
< gwillen>
are you a robot
< sipa>
gwillen: i have met a humanoid-looking individual who claims to be fanquake
< * luke-jr>
wonders if the security danger to assumeutxo could be mitigated by only producing hashes encrypted to a key that controls UTXOs over a certain value
< gmaxwell>
luke-jr: -ECOMPREHENSIONFAIL
< gmaxwell>
forget the mechanism, what property are you trying to achieve?
< luke-jr>
gmaxwell: making it infeasible to trust someone else's UTXO snapshot
< luke-jr>
ie, you can generate your own on PC A, and move it to PC B, but that's it
< gmaxwell>
well you can just copy your chainstate now...
< luke-jr>
gmaxwell: context is a ML thread suggesting assumeutxo can only be modified in code, which not only creates a trust-the-devs problem, it also makes it useless for this legitimate use case
< gmaxwell>
Personally I'm fine with "a trivially validated (root) hash is embedded in the software you run, and subject to the same review process that governs any other update to it" but I know that some people (morcos, and I think sipa) felt that was placing too much responsibility on the developers/reviewers. I still feel that they could be convinced otherwise, since I still haven't seen a counter to my
< gmaxwell>
primary assertion that essentially every change to the software has the same/worse security risk, but most are much harder to review.
< gmaxwell>
luke-jr: which legitimate use case?
< luke-jr>
gmaxwell: an easy way for users to bootstrap new nodes of their own, from an older node they already synced
< luke-jr>
and/or effectively backup their chainstate
< luke-jr>
(if it gets corrupted, having a commitment to a UTXO set would help recover quicker)
< gmaxwell>
luke-jr: I think that use case is essentially irrelevant: consider, we must support users syncing up without that, because that will always be the vast majority of the syncups, without too much pain. If we do, then the 'use yourself' case is not much of an improvement-- you just go from acceptable to somewhat better. Relatedly, we must be able to catch up several months of activity in a short
< gmaxwell>
amount of time, otherwise nodes that have been offline for a couple months will not be recoverable except through a trusted process... (e.g. the case with ethereum now).
< luke-jr>
I don't have a magic solution to make IBD instant.
< gmaxwell>
Also, I think the problem with the kind of thinking you're offering there is that the overlap in users that want instant magic now, can't just rsync an existing node, and will actually go along with some complicated utxo encryption scheme is essentially zero.
< gmaxwell>
luke-jr: no, but instant I think isn't really important (and for anyone who wants faster, rsyncing a node is probably never going to be beat)
< luke-jr>
it doesn't need to be complicated
< gmaxwell>
luke-jr: so you have some always on AWS node .... oh now you have to take one of your valuable private keys, manually handle it, put it on it... essentially no one will ever do that.
< gmaxwell>
just rsync. rsync works fine. We can make rsync work better by having an RPC to flush and quiesce the chainstate.
< luke-jr>
it's good if it doesn't work with untrustworthy AWS nodes..
< luke-jr>
rsync doesn't work for backups
< gmaxwell>
What you're describing effectively doesn't work for backups either.
< luke-jr>
why not?
< luke-jr>
actually, it might not even need encryption for that case: just update the wallet file with a UTXO snapshot regularly, and restore from that if it's there
< gmaxwell>
Right. But all that is just begging for some 'helpful person' to fork the software and make the one-line patch to remove the agreement check. Relying on that, instead of making sure there is little to no need for it, just makes it more likely that in practice all users end up looking like ethereum does now, pure blind trust.
< gmaxwell>
like if you worry about corruption (A good worry), then just a chainstate recovery without any external loading would be fine.
< gmaxwell>
(I believe wumpus began working on this before)
< luke-jr>
for that, we need a way to freeze the chainstate db files during the backup, right?
< gmaxwell>
I believe wumpus worked on a thing previously to backup the chainstate using ldb snapshots.
< gmaxwell>
luke-jr: if we implement an AV utxo sync we'll get chainstate backups as a side effect most likely, since we'll need to store periodic old utxo states for people to sync from.
< gmaxwell>
(and so presumably the same mechanism could optionally be run more frequently for local-only recovery at the expense of more disk space)
< gmaxwell>
Though I think if people really need better than what is offered to all users then we have a problem.
< gmaxwell>
I think that last point basically summarizes my view: We need to offer acceptable performance in an acceptably secure way, if we do not people will do insecure things. If we do-- then we don't have a strong reason to implement conditionally secure local-only behavior, it wouldn't be worth the effort-- because the generally available mechanism is good enough. We also don't have some massive
< gmaxwell>
surplus of development resources (review, QA, design, development...), and we've also seen from the past that insecure half solutions (like BIP37) starve more secure approaches of resources.
< luke-jr>
gmaxwell: AV utxo sync is not acceptably secure
< gmaxwell>
luke-jr: then no software is secure enough, because it is strictly more secure than the rest of the software.
< gmaxwell>
no compromise that can make AV fail couldn't also make the software fail, it is only protected against by review... and yet review is radically harder for software in general than it is for AV.
< gmaxwell>
there are a million and one subtle changes that would silently turn off validation in production, many of them would be hard for review to detect.
< gmaxwell>
So how does AV make any of that worse?
< luke-jr>
AV utxo sync is not simple AV..
< gmaxwell>
It isn't clear to me what you're saying.
< gmaxwell>
What is the attack model that you are concerned with?
< luke-jr>
as to your argument, it can't be strictly more secure, since it relies on the software being secure as a prerequisite to verify it
< gmaxwell>
luke-jr: It's more secure than the other vectors, not including them. Consider a nearly unbreakable lock but which can be defeated by drilling ... in a cardboard door. We need not debate too much about the limitation of the lock, since anyone that can drill can even more easily just drill right through the rest of the door.
< gmaxwell>
if someone can sneak a false AV hash through review and get people to run it, assuming that the software has been updated to compute AV hashes on all nodes (which is the point of pieter's rolling utxo set hash work) so that catching a false one is as trivial as just looking at the output on a node that didn't depend on the update, ... then that person could just as well sneak through
< gmaxwell>
something to trigger bypassing scriptchecks or similar.
< gmaxwell>
because it would be a lot easier to sneak a backdoor like that in than to put in a false hash that everyone could trivially check. This is the AV security argument.
< gmaxwell>
It applies no less to utxoav than it does to av in general.
< luke-jr>
this does assume people *can* verify it; but it is creating a situation where it's likely people won't be able to anymore (because block sizes will get increased since it "doesn't matter" anymore)
< gmaxwell>
People validate a new version with the existing versions that they have. If you have a valid version, then you can check that a new release is invalid. So even if no one is starting from scratch, you still can't get a compromise through unless you 'break the chain' of review, e.g. by having everyone who reviews leave and get replaced with new people that didn't benefit from the review of others.
< gmaxwell>
and I don't think it changes the situation on block sizes much, already block sizes are well beyond what results in a sustainable initial sync, and have been for years.
< gmaxwell>
sustainability of the initial sync isn't a concern there.
< gmaxwell>
it's already horked.
< luke-jr>
not entirely yet
< luke-jr>
today, a new user *can* do the IBD
< gmaxwell>
even developers of lite clients don't want to run full nodes now, come on.
< gmaxwell>
luke-jr: yes, well people _can_ do an archive sync of ethereum too, which is several TB of data and months of processing on a very high end system.
< gmaxwell>
luke-jr: it isn't like most users can review updates except in the same kind of sense that they could buy a really high end computer system.
< gmaxwell>
(in fact, _I_ can't reliably review updates, I am absolutely confident that you could slip a 'stops actually validating' backdoor past me-- but it's much less likely that you could get one past everyone who looks)
< gmaxwell>
(and I assume vice versa)
< gmaxwell>
luke-jr: Thanks for explaining more, though. I at least agree "utxoav would be an excuse to radically crank load" is something to consider.
< gmaxwell>
luke-jr: but I would suggest that the absence of this is already resulting in an outcome where very few users (even businesses) run full nodes, and many that do run one do so via highly insecure opaque snapshot installs.... and that is a lot worse.
< luke-jr>
gmaxwell: the ability to review code is a boolean; the ability to review a UTXO AV grows with IBD time
< luke-jr>
although I suppose the "an excuse to radically crank load" problem exists with *any* solution in this area
< gmaxwell>
luke-jr: not quite. AV review effort is a constant if you've already got a node running.
< luke-jr>
gmaxwell: new users shouldn't be forced to trust existing users in this way
< gmaxwell>
but that's also what happens with software review; reviewing the whole of the software is well beyond the capabilities of any single person.
< gmaxwell>
at best we can hope people review updates.
< luke-jr>
reviewing code is something new users *can* do, even if not as a single person
< gmaxwell>
luke-jr: and 'forced' is too strong, you can sync history, even if it's costly to the point of being obnoxious: it already is, and the result is people not running full nodes, in vast numbers.
< luke-jr>
that can change if we get 1 TB blocks
< gmaxwell>
no, because "1 tb blocks" (or similar jokes) isn't remotely viable for continued operation.
< gmaxwell>
luke-jr: even if capacity were cranked up 'new users' as a group could do a sync from nothing without AV... even if it was so expensive as to make it unrealistic for just a single user to do alone.
< gmaxwell>
and any load level that was viable for single users to keep _running_ would remain viable for a group of users to history validate for a long time.
< gmaxwell>
as an aside, I really don't think that handicapping the software is a viable way to avoid load increases that compromise security.
< gmaxwell>
All that approach would do in the long run is demand someone forks the software, fixes the shortcoming, and then pushes you out of the way-- killing your influence.
< gmaxwell>
it would be like if you had applied this same logic to compact blocks and managed to block their introduction... BU would have eventually gotten 'xthin' working right, and then essentially all miners would use it, and many nodes (due to the bandwidth savings).
< luke-jr>
compact blocks doesn't have this problem, though?
< luke-jr>
hmm, I guess in a sense it could be argued to
< gmaxwell>
sure it could easily be argued to
< gmaxwell>
(and in fact many people argued that blocksize was no longer an issue because of compact blocks, but most of those people were confused about what compact blocks did)
< gmaxwell>
there have been a bunch of threads saying essentially "compact blocks made blocks 99% smaller so blocks can now be 100x larger!"
< gmaxwell>
but my point is that to the extent people thinking that is a risk, you can't solve it by just not making the improvement. Someone else will make it, failing to make it would only just diminish your relevance for future changes.
< gmaxwell>
(and not only that, it would make true some of the absurd allegations of unethical behavior, e.g. intentionally degrading the system)
< gmaxwell>
(or at least arguably make them true)
< luke-jr>
could have different priorities until someone else starts working on such features
< luke-jr>
not that I'm saying it's necessarily a solution, just a possible alternative to "don't do it and lose users"
< gmaxwell>
Right, but even with that it still points out that 'don't implement a performance improvement' isn't really a way to effectively avoid security loss in the long run.
< gmaxwell>
and 'people will switch software' is only one risk factor. For something like initial sync, what happens is that many people don't run full nodes at all: and we've seen that happen, and those folks are overwhelmingly more likely to think "wtf is capacity limited at all!".
< gmaxwell>
the analog for compact blocks is e.g. miners doing 'headers only mining' then not caring about propagation/validation speed.
< gmaxwell>
which, indeed, had become very common before compact blocks
< gmaxwell>
(and which was a material driver for the development and deployment of compact blocks; alternative software would also have been a driver had your approach been taken there)
< gmaxwell>
In any case, point is that ignoring a need isn't very effective-- it'll be routed around in some way, and the routing around may have much worse outcomes.
< gmaxwell>
(of course the degree depends on how serious the 'need' is...)
< gmaxwell>
(and how easy the bypasses are, unfortunately "trust someone else" is almost always a very easy bypass that a very large number of economically important users are happy to make)
< araspitzu>
It's my understanding that the fees attached to this tx wouldn't be enough to cover the "minRelayFee" and thus the transaction isn't even worth making
< sipa>
gwillen: back to your problem yesterday... you say gai on the derived addresses says ismine:false (which is expected for a watchonly address), but what does it say about watchonly and solvable?
< gwillen>
sipa: solvable true, watchonly true
< gwillen>
the only problem I see is the lack of key origin info
< sipa>
ok!
< sipa>
that's easy to fix
< gwillen>
and the spurious warning
< sipa>
the import logic only looks at origin info for pubkeys that are being imported as well
< gwillen>
makes sense
< gwillen>
but the wallet is capable of holding origin info for pubkeys it doesn't have?
< sipa>
it should import all known origin info, even for pubkeys that aren't being imported
< sipa>
hmm
< sipa>
i'm mostly confused why deriveaddresses doesn't maintain the origin info
< sipa>
there is no wallet involved there
< sipa>
yeah i think we can import origin info for other keys
< gwillen>
sipa: yeah ProcessImport is just throwing it away if it doesn't have the key
< gwillen>
it calls pwallet->AddKeyOrigin for keys it has
< gwillen>
and throws away the rest
< gwillen>
but AddKeyOrigin wants a pubkey, which we don't have...
< sipa_>
it doesn't actually use a pubkey
< sipa>
only a pubkeyhash
< sipa_>
hi sipa_
< sipa>
why am i here twice in the same client?
< MarcoFalke>
sipa: Have you tried turning it off an on again?
< sipa>
actually, just off
< sipa>
feels much better
< sipa>
gwillen: oh i see now
< MarcoFalke>
\o/ on rc3, hopefully the last
< sipa>
you're reporting the gai output, and notice there that the origin info is missing
< sipa>
yeah, i think we should try importing just all origin info, even for non-importable pubkeys
< sipa>
but it's probably not enough to delay 0.18 for
< sipa>
(trying to avoid killing MarcoFalke's mood)
< dongcarl>
meeting?
< achow101>
meeting
< sipa>
meeting
< jonasschnelli>
yes
< jonasschnelli>
meeting
< wumpus>
#startmeeting
< lightningbot>
Meeting started Thu Apr 4 19:01:17 2019 UTC. The chair is wumpus. Information about MeetBot at http://wiki.debian.org/MeetBot.
< wumpus>
so are any of these regressions for 0.18?
< achow101>
I don't think so. they're all new stuff added with descriptors
< gwillen>
I don't have opinions about 0.18
< gwillen>
none of them are correctness issues
< wumpus>
achow101: ok, thanks
< wumpus>
doesn't seem that needs to block the release
< wumpus>
#topic release notes TODOs 0.18.0
< gwillen>
well, "manual coin control broken with multiple wallets" is not merely superficial, but I don't think you can successfully do anything broken with it, as far as I can tell
< sipa>
i'd welcome help to fill in my TODOs :)
< MarcoFalke>
working on it rn
< wumpus>
gwillen: right, I just think the criterion at this point is "worked in 0.17 and broken in 0.18"
< gwillen>
:+1:
< wumpus>
unless it's really dangerous ofc, but in that case we could also opt for disabling the feature that introduces it
< gwillen>
it is not totally out of the question that a clever user could accidentally send coins from the wrong wallet with it
< gwillen>
but when I tried to do so, I got an error
< gwillen>
and it only triggers if you are performing coin control on multiple wallets at once, which would be a weird thing to be doing
< wumpus>
gwillen: we could add a known issue
< gwillen>
that would make sense, although who reads those
< gwillen>
but I think it is definitely a known thing that one should avoid triggering
< wumpus>
sigh, who ever reads anything right ...
< wumpus>
if you start asking that, why spend time writing release notes at all!
< wumpus>
anyhow, the TODOs are:
< wumpus>
(TODO pieter: it feels like this section can be merged with the earlier RPC changes section) (under low-level changes)
< wumpus>
Descriptors with key origin information imported through importmulti will have their key origin information stored in the wallet for use with creating PSBTs. (TODO pieter: this should probably be merged with the text on importmulti gaining descriptor support)
< wumpus>
(TODO pieter: mention getdata randomization from #14897 and perhaps orphan tx handling from #14626) under "network"
< gribble>
https://github.com/bitcoin/bitcoin/issues/14897 | randomize GETDATA(tx) request order and introduce bias toward outbound by naumenkogs · Pull Request #14897 · bitcoin/bitcoin · GitHub
< wumpus>
I've already added it, feel free to edit it further
< gwillen>
thanks
< gwillen>
I tried to think of a narrower recommendation, but I think probably very few people are using these features together anyway
< gwillen>
and I'm afraid of wrongly implying that some usage pattern is safe, given that I don't really know the implications of the issue
< wumpus>
yes, it's good like this
< gwillen>
:+1:
< sipa>
gwillen, achow101: it seems deriveaddresses has somewhat exponentially-looking performance for increasing number of keys
< sipa>
this is weird
< achow101>
i would expect it to be linear
< sipa>
it should be
< achow101>
maybe it's because you use the same provider as the input and output of Expand? More and more stuff gets put into it, so it becomes large and slower to search?
< achow101>
I don't think there is a need to use the same provider since it isn't using any of the resulting solving data
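achow101's accumulating-provider hypothesis predicts quadratic total work rather than linear. A toy model of that shape (a list-backed "provider" with linear scans standing in for the real signing provider; the counts are purely illustrative, not measurements of Core):

```python
# Toy model of the deriveaddresses slowdown hypothesis: if each Expand
# call both searches and appends to the same ever-growing provider, the
# total lookup work grows quadratically with the number of derived keys;
# with a fresh provider per call it stays flat.

def expand(index: int, provider: list, lookups: list) -> None:
    # Model of Expand(): scan the provider for prior entries, then append.
    lookups[0] += len(provider)
    provider.append(index)

def derive_shared(n: int) -> int:
    lookups = [0]
    provider = []
    for i in range(n):
        expand(i, provider, lookups)   # same provider reused every call
    return lookups[0]

def derive_fresh(n: int) -> int:
    lookups = [0]
    for i in range(n):
        expand(i, [], lookups)         # fresh provider each call
    return lookups[0]

assert derive_shared(100) == 100 * 99 // 2   # 4950: sum 0+1+...+99
assert derive_fresh(100) == 0                # nothing accumulates
```

The 0+1+...+(n-1) scan cost is the classic quadratic signature, consistent with the "more and more stuff gets put into it, so it becomes slower to search" observation above.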
< bitcoin-git>
[bitcoin] sipa opened pull request #15749: Fix: importmulti only imports origin info for PKH outputs (master...201904_importallorigins) https://github.com/bitcoin/bitcoin/pull/15749
< sipa>
gwillen: ^
< bitcoin-git>
[bitcoin] jnewbery opened pull request #15750: [rpc] Remove the addresses field from the getaddressinfo return object (master...2019_04_remove_address_from_getaddressinfo) https://github.com/bitcoin/bitcoin/pull/15750
< bitcoin-git>
[bitcoin] sipa opened pull request #15751: Speed up deriveaddresses for large ranges (master...201904_fasterderiveaddresses) https://github.com/bitcoin/bitcoin/pull/15751