< phantomcircuit>
gmaxwell, wait they cancel an invoice if the transaction reaches them over the network first?
< gmaxwell>
Yes, they did to me and other people at the time were reporting it too.
< gmaxwell>
Maybe they since stopped? if so then I'm out of date.
< phantomcircuit>
that's crazy
< echeveria>
phantomcircuit: they want to be able to selectively abort client payments.
< echeveria>
this is doable if they really wanted to within another protocol, but it would need the client doing partial signatures, with them completing validity by adding a signature for an input of their own to the payment.
< gmaxwell>
BIP70 could have been defined that way, various people advocated for it.
< gmaxwell>
But doing that correctly is harder for the wallet, since it needs to track UTXO as 'I signed for this, but it hasn't been broadcast yet'.
< gmaxwell>
At the time BIP70 was written, the only 'used' metric bitcoin core really had was "spent by the mempool"
< luke-jr>
maybe someone should get this in the Bustapay stuff
< phantomcircuit>
echeveria, oooh
< phantomcircuit>
they want to be able to blacklist utxo entries
< phantomcircuit>
got it
< echeveria>
gmaxwell: I don't think it's very user friendly to have that as a system though. you end up with ambiguity: I've paid someone with this output, and they may actually spend it sometime in the future, or not.
< luke-jr>
if you're not careful, it locks your change too
< echeveria>
I guess have a timeout in the script that means your partial signature is only valid for x hours or something, but even so. with most wallets tending towards having absolutely no logic in them this seems like a non-starter.
< luke-jr>
echeveria: Bitcoin intentionally doesn't support such timeouts
< gmaxwell>
you 'time it out' by double spending it.
< gmaxwell>
What I think the wallet should do is track pending spends, and eventually intentionally double spend them if they 'timeout'
< gmaxwell>
probably by just including them in a future transaction post timeout.
< gmaxwell>
and also show a 'pending balance', so it's clear that a wallet with pending spends isn't 'empty'
< gmaxwell>
But... yea, it's a non-zero amount of work.
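The pending-spend bookkeeping gmaxwell sketches above could look roughly like this (a minimal illustration with hypothetical types and names, not Bitcoin Core's actual wallet code): remember which outputs you have signed away but not yet seen broadcast, surface them as a pending balance, and deliberately respend anything that times out.

    // Sketch only: hypothetical structures, not Bitcoin Core's wallet code.
    #include <cstdint>
    #include <ctime>
    #include <map>
    #include <string>
    #include <tuple>
    #include <vector>

    struct OutPoint {
        std::string txid;
        uint32_t vout;
        bool operator<(const OutPoint& o) const { return std::tie(txid, vout) < std::tie(o.txid, o.vout); }
    };

    struct PendingSpend {
        std::string signed_txid;  // txn handed to the merchant but never broadcast by us
        int64_t amount;
        std::time_t signed_at;
    };

    class PendingTracker {
        std::map<OutPoint, PendingSpend> pending_;  // UTXOs signed for but not yet seen on the network
    public:
        void MarkSigned(const OutPoint& utxo, PendingSpend p) { pending_[utxo] = std::move(p); }
        void MarkBroadcast(const OutPoint& utxo) { pending_.erase(utxo); }  // seen in mempool or a block

        // Balance shown as "pending" so a wallet with pending spends never looks empty.
        int64_t PendingBalance() const {
            int64_t sum = 0;
            for (const auto& [utxo, p] : pending_) sum += p.amount;
            return sum;
        }

        // UTXOs whose pending spend has timed out; the wallet should include these
        // inputs in a future transaction, deliberately conflicting the stale payment.
        std::vector<OutPoint> TimedOut(std::time_t now, std::time_t timeout) const {
            std::vector<OutPoint> out;
            for (const auto& [utxo, p] : pending_)
                if (now - p.signed_at > timeout) out.push_back(utxo);
            return out;
        }
    };

    int main()
    {
        PendingTracker tracker;
        tracker.MarkSigned({"aabb...", 0}, {"ccdd...", 50000, std::time(nullptr) - 7200});
        // Anything signed two hours ago and still unseen gets respent.
        for (const OutPoint& utxo : tracker.TimedOut(std::time(nullptr), 3600)) {
            (void)utxo;  // ... add as an input of the wallet's next transaction ...
        }
    }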
< luke-jr>
especially when you consider Lightning, I bet :p
< gmaxwell>
But what bitpay wants people to do is just busted.
< gmaxwell>
like they want to be able to say 'payment rejected, try again with more fee' but then no effort at all to doublespend the prior payments.
< gmaxwell>
and no indication in the wallet that you've potentially double paid them.
< luke-jr>
those seem like wallet-side decisions
< gmaxwell>
so days, months, or even years later, you might have a whole bunch of bitpay payments go through twice..
< gmaxwell>
They are but they need to be part of any BIP70 alternative that doesn't immediately broadcast txn.
< gmaxwell>
As in the spec needs to advise sensible behavior, and wallets _must_ implement something sensible or they create security vulnerabilities.
< gmaxwell>
at a minimum if a payment is rejected and needs to be reissued, the wallet must doublespend the original one.
< gmaxwell>
(and since there is bidi communication, there is really no reason that a request couldn't be rejected before a signed copy is even sent... but thats another matter)
< echeveria>
and handle when someone refuses to do the double spend with their hardware wallet.
< gmaxwell>
the double spend can at least be avoided altogether in the common case.
< gmaxwell>
echeveria: right or with a third party anti-doublespend signer.
< echeveria>
ends up being a rather unfortunate amount of logic in the wallet.
< gmaxwell>
right and a bunch of it is tricky, which is why I think it's important that a spec that makes you do this tricky thing at least warn you about the stuff you need to consider.
< gmaxwell>
otherwise it would be like an ECDSA spec that just said nothing about where you're supposed to get this nonce value. "Nonce? that means used once. I'll just use a counter."
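For context on the nonce analogy: an ECDSA signature is (r, s) with s = k^-1 * (z + r*d) mod n, where z is the message hash, d the private key, and k the per-signature nonce. If two signatures ever share the same k (which a naive counter will do after a backup restore or across devices), anyone can recover the key with standard algebra:

    k = (z1 - z2) / (s1 - s2)  mod n
    d = (s1*k - z1) / r        mod n

which is exactly the class of footgun a spec has to spell out rather than leave implicit.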
< ap4lmtree->
any of you take free online classes, such as through coursera ?
< ap4lmtree->
they have some cs and every other field courses
< gwillen>
ap4lmtree-: not sure how on-topic this is here, but fwiw the Coursera Crypto I class by Dan Boneh is very good
< phantomcircuit>
wumpus, btw i dont have fuzzing.tar.xz anywhere so that can definitely be removed
< phantomcircuit>
gmaxwell, the idea that the remote end is going to reject a payment is crazy town
< gmaxwell>
phantomcircuit: I don't think it is? like at least it's not unreasonable for the remote end to ignore your unconfirmed payment unless it has some minimum feerate.
< phantomcircuit>
gmaxwell, it's not unreasonable to ignore you until it confirms
< phantomcircuit>
then what
< gmaxwell>
right, sure, but assuming you wanted it to not be ignored until then, you would have appreciated the opportunity to make a different txn.
< phantomcircuit>
gmaxwell, i guess what im saying is that rejecting an unsigned transaction makes sense, but rejecting one that's signed and broadcast?
< phantomcircuit>
once it's confirmed it's confirmed
< gmaxwell>
well thats why they want you to submit the transaction directly instead of broadcasting.
< phantomcircuit>
but it's signed, why should you trust them not to broadcast it themselves?
< gmaxwell>
You shouldn't. They might. Which is why you need to handle it correctly.
< gmaxwell>
E.g. if they reject it and your response is to resign e.g. with more fee, you need to make sure you doublespend it... e.g. same as if you'd feebumped in the UI.
< gmaxwell>
which sounds obnoxious, but its not bad considering that feebumping is already an existing and pretty desirable feature.
< phantomcircuit>
gmaxwell, it seems more like you want a list of "payments im trying to make" and have the wallet ensure any new transaction invalidates anything that doesn't pay to that set
< gmaxwell>
Maybe, though you don't really want to invalidate a payment that the recipient is already trying to CPFP unless asked to do so.
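The invariant being described, in a minimal sketch (hypothetical helper, not existing wallet code): a reissued or fee-bumped payment must conflict with the payment it replaces, i.e. spend at least one of the same outpoints, so at most one of the two can ever confirm.

    #include <algorithm>
    #include <cstdint>
    #include <set>
    #include <string>
    #include <tuple>

    struct OutPoint {
        std::string txid;
        uint32_t n;
        bool operator<(const OutPoint& o) const { return std::tie(txid, n) < std::tie(o.txid, o.n); }
    };

    struct TxSketch {
        std::set<OutPoint> inputs;  // outpoints this transaction spends
    };

    // True if `replacement` spends at least one of the same outpoints as
    // `original`; if not, both payments could eventually confirm.
    bool ConflictsWith(const TxSketch& replacement, const TxSketch& original)
    {
        return std::any_of(original.inputs.begin(), original.inputs.end(),
                           [&](const OutPoint& in) { return replacement.inputs.count(in) > 0; });
    }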
< wumpus>
phantomcircuit: no problem, that whole section can be removed, the fuzzing corpus is going to move to a separate repository
< wumpus>
gkrizek: there's not that much configuration; server is chat.freenode.net port is 6697, room is "#bitcoin-commits,#bitcoin-core-dev", nick is "bitcoin-git", branches is "master,0.11,0.12,0.13,0.14,0.15,0.16,0.17"
< wumpus>
gkrizek: password and nickserv password is not filled in, Ssl is checked, Long url is checked, the other checkboxes are not
< ap4lmtree->
gmaxwell, okay ill check it out
< gkrizek>
wumpus: Ok, thanks! Are we cool with keeping that the same? Send a message when pushes happen to those branches and when PRs are open/closed/merged/reopened? I can add or remove whatever we want
< newbie111>
hi ALL
< meshcollider>
gkrizek: we dont need branches 0.11, 0.12 or 0.13 anymore btw, only 14+
< wumpus>
good point, I cleaned up those branches to keep the number of branches from getting out of hand, as there's no active development expected on them anymore
< benthecarman>
Is there an easy way to get a CTransaction from a CWalletTx
< wumpus>
benthecarman: the 'tx' field contains a pointer to the underlying CTransaction
< benthecarman>
Oh okay thnx
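For reference, a short sketch of what wumpus means, assuming the Bitcoin Core wallet code of this era where CWalletTx carries a CTransactionRef member named tx:

    // Inside Bitcoin Core wallet code (sketch; member names as of ~0.17):
    #include <wallet/wallet.h>            // CWalletTx
    #include <primitives/transaction.h>   // CTransaction, CTransactionRef

    void Example(const CWalletTx& wtx)
    {
        // wtx.tx is a CTransactionRef, i.e. std::shared_ptr<const CTransaction>.
        const CTransaction& tx = *wtx.tx;
        const uint256 txid = tx.GetHash();
        (void)txid;
    }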
< hebasto>
promag: hi, should I close #14798 in favor of #15136?
< hebasto>
promag: I would rather not work on it if it seems useless to me.
< promag>
hebasto: then I think you should close the other one with some reasoning
< promag>
see you in the meeting
< hebasto>
promag: sure
< gkrizek>
meshcollider: wumpus: Ok, great. Thank you. I'm going to send messages when Tags are created as well. They are part of the same event type from GitHub and seem important info to message about
< achow101>
gkrizek: that's a good idea. currently wumpus has to send that message himself
< sipa>
gkrizek: thank you for working on this
< wumpus>
gkrizek: thanks, seems useful!
< gkrizek>
Great, I think it would be useful too, and since they are already part of the event it's almost no extra work.
< gkrizek>
sipa: no problem! Happy to help. This will probably be a pretty minimal implementation at first because Services are officially EOL on the 31st. Just trying to get something to replace it for now, then can make it better later on.
< achow101>
meeting?
< instagibbs>
meeting.
< wumpus>
#startmeeting
< lightningbot>
Meeting started Thu Jan 10 19:00:32 2019 UTC. The chair is wumpus. Information about MeetBot at http://wiki.debian.org/MeetBot.
< wumpus>
current ones: #11082 #14491 #14711 #14941 #14938
< gribble>
https://github.com/bitcoin/bitcoin/issues/11082 | Add new bitcoin_rw.conf file that is used for settings modified by this software itself by luke-jr · Pull Request #11082 · bitcoin/bitcoin · GitHub
< jonasschnelli>
Maybe we can discuss #11082 since there was the discussion of JSON vs INI?
< gribble>
https://github.com/bitcoin/bitcoin/issues/11082 | Add new bitcoin_rw.conf file that is used for settings modified by this software itself by luke-jr · Pull Request #11082 · bitcoin/bitcoin · GitHub
< meshcollider>
Hi
< achow101>
so hwi is currently under my account. but if we are going to use it for hardware wallet support in core, then perhaps it would be better to have it under a proper github org. the obvious one that comes to mind is bitcoin-core
< wumpus>
yes, that makes sense
< achow101>
instagibbs: might have some thoughts on this?
< instagibbs>
nothing against achow101 but I'm also actively using it and it would be nice for it to be under an established org
< sipa>
so the first question is between 1/2 and 3, i think; whether the filters are their own index semantically, or part of the main data structures
< jonasschnelli>
What speaks against being part of the main data structure?
< jimpo>
I prefer 1/2 since I can imagine alternate filter types
< jonasschnelli>
(like option 3)
< sipa>
and i guess that depends on how they'll be used; if they're mostly optional things that some people want to enable, a separate index is the obvious way to go
< sipa>
but if we envision it may become a consensus rule, having it separate is quite annoying
< jimpo>
metadata is like 76 bytes (2 hashes + disk loc)
< jonasschnelli>
From what I see, there are plans to commit the filters at some point, so they'll be part of consensus, right?
< jimpo>
it's more convenient to have it in CBlockIndex if part of consensus rules, yes, but I'm not sure how much worse the separate index is there
< sipa>
i guess it isn't really
< jimpo>
would have to do some synchronization or build the filters twice (but that's really fast)
< jimpo>
like you can validate the consensus rule w/o storing the filter
< sipa>
it can go from an asynchronously maintained one to a synchronous one perhaps at that time, but still remain a separate db/directory
< jimpo>
if you don't want to
< sipa>
right
< jimpo>
which is another strong (IMO) argument in favor of 1/2: not everyone needs to or should be forced to store filters
< jimpo>
and no need to bloat CBlockIndex if people might disable it
< sipa>
agree; i forgot about the possibility that even in case of it being consensus, there is no requirement to actually store the filters
< sipa>
about the difference between 1 and 2... i prefer the flat files approach slightly (the filters aren't that big that using leveldb is insurmountable, but they're still append-only data like blocks/undo)
< sipa>
and i saw you also have a PR to abstract out the flat files stuff, which seems useful in any case
< jimpo>
I do agree that 2 is somewhat conceptually cleaner
< jimpo>
but it does make the code more complex and is slower right now unless we add a caching layer to the flatfile stuff
< jamesob>
we don't know how much slower though, right? might be negligible
< gmaxwell>
I feel sketchy about adding a requirement to store blobs in leveldb, I think in the long run we won't end up using leveldb -- we now no longer need basically any interesting properties of it, it's just a MAP on disk for us now.
< gmaxwell>
I'm confused that its slower or at least why it would be under actual use though.
< jamesob>
gmaxwell: we might use snapshotting to alleviate lock contention at some point...
< gmaxwell>
jamesob: you wouldn't say that if you'd actually tested snapshotting.
< jimpo>
just two separate reads instead of 1 for each filter I think
< sipa>
gmaxwell: but do you think that choosing to use leveldb right now will complicate a hypothetical move to using something else?
< jimpo>
but there are certainly ways to reduce that overhead which I haven't experimented with
< gmaxwell>
jimpo: why two separate reads?
< jamesob>
1 for leveldb, then 1 for flatfile
< jimpo>
first read from the leveldb index to get the disk pos, then read the filter from flatfile
< sipa>
well the leveldb part can be in memory, no?
< sipa>
like CBlockIndex is now
< gmaxwell>
Ah so it's directly reading the leveldb instead of having the index in memory like the block index is.
< jimpo>
true, we could keep the whole index in memory
< jimpo>
but shouldn't that be pretty much equivalent to setting the cache on the leveldb high enough?
< gmaxwell>
I would be saying these pointers should just be in the blockindex. But there was some talk about switching this on and off above.
< gmaxwell>
jimpo: not really, there are a bunch of overheads in the leveldb caching and it's not particularly intelligent.
< sipa>
also a leveldb _read_ is not a single disk access
< sipa>
(as opposed to something that hits a cache)
< jimpo>
ok
< gmaxwell>
it's quasi log(n) plus a bunch of rather expensive bloom filter checks, which is why I was surprised the flatfile was slower..
< sipa>
so with index in memory + flatfile you're guaranteed one disk seek always
< gmaxwell>
but if your flatfile is a leveldb access plus the file now I see why it couldn't be faster.
< gmaxwell>
since you take the ldb overheads in both cases.
< jimpo>
gmaxwell yes it was leveldb + flatfile
< jimpo>
OK, so what if we have a leveldb index + flatfiles
< jimpo>
then if that's slow we can add logic to pull the whole leveldb index into mem like the CChain structure
< jimpo>
but as a separate structure
< sipa>
is that easier than doing it immediately? (i'm thinking about dealing with the possibility of leveldb read failure outside of startup etc)
< jimpo>
yes, it definitely breaks up the size of the code change
< gmaxwell>
sipa: I think right now our leveldb usage can be replaced with something that very simply serializes the things like blockindexes, and an open-hash table for the utxo set. e.g. like the one the ripple people did... and it would likely be a pretty massive speedup. I suppose you could just convert this to a flatfile /then/, so it's not the end of the world. We also take on some increased QA/dev overhead if we start storing blobs in ldb because thats just a whole other set of access patterns that we need to worry about that might result in misbehaviors (like, what does ldb's peak memory usage look like when storing 40kb blobs in it?)
< jamesob>
sipa: if we have leveldb read failures we've got bigger problems than serving block filters, no?
< jimpo>
is a leveldb read failure significantly more likely than a flatfile read failure?
< sipa>
gmaxwell: ok i see
< sipa>
jimpo: it needs exception handling etc, which i remember was a pain to deal with in the UTXO code
< wumpus>
doesn't all i/o need error handling?
< jimpo>
I guess flatfile accesses just return error codes or null vs throwing exceptions? is that what you're saying sipa?
< sipa>
sure, but an "if (!read) return false" in the startup code is easier than a wrapper database object that catches exceptions etc
< wumpus>
oh sure the way in which errors are reported will differ
< sipa>
anyway, i'm fine with leveldb index + flatfiles, and leaving the load into RAM for a later change
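To make the shape of option 2 concrete, a rough sketch with hypothetical names (not the actual index code): each block gets a small metadata entry, kept in leveldb and optionally mirrored in memory, while the filter bytes themselves sit in append-only flat files, so fetching a filter is one metadata lookup plus one file seek.

    // Sketch only: hypothetical types and file naming, not the real index.
    #include <cstdint>
    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    struct FlatFilePos {
        int file = -1;    // which flat file
        long offset = 0;  // byte offset within it
    };

    // Roughly the "76 bytes (2 hashes + disk loc)" of per-block metadata jimpo
    // mentions: filter hash, chained filter header, and where the serialized
    // filter lives (a size field added here for simplicity).
    struct FilterIndexEntry {
        std::string filter_hash;
        std::string filter_header;
        FlatFilePos pos;
        unsigned int size = 0;
    };

    // Keyed by block hash. On disk this would live in leveldb; it can also be
    // loaded fully into memory (like CBlockIndex) so a lookup is one disk seek.
    std::map<std::string, FilterIndexEntry> g_filter_index;

    std::vector<unsigned char> ReadFilter(const FilterIndexEntry& entry)
    {
        char name[32];
        std::snprintf(name, sizeof(name), "fltr%05d.dat", entry.pos.file);  // hypothetical naming
        std::vector<unsigned char> data(entry.size);
        if (std::FILE* f = std::fopen(name, "rb")) {
            std::fseek(f, entry.pos.offset, SEEK_SET);
            std::fread(data.data(), 1, data.size(), f);
            std::fclose(f);
        }
        return data;
    }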
< wumpus>
one thing at least, for the first iteration it's useful to keep the amount of new code minimal
< sipa>
performance at this point doesn't matter that much for this application, but it's nice that in the future it wouldn't be a compatibility break
< gmaxwell>
is the only reason to not store it in the blockindex is to just avoid the size of the blockindex objects increasing when the filter isn't used?
< jimpo>
potentially if core supports multiple filter types (dunno how likely that is?)
< sipa>
jimpo: also to simplify building it in the background, i guess?
< wumpus>
so if using leveldb results in much less code than implementing some manual file handling, that might be handier for review, as said it can be optimized later
< gmaxwell>
it won't be.
< jimpo>
it's a few hundred more lines
< jimpo>
*after* the flatfile refactor
< wumpus>
as long as you don't end up rolling your own k/v store
< jimpo>
but I think the refactor is good to do anyway
< jamesob>
agree with that
< sipa>
agree, yes
< sipa>
gmaxwell: what makes you say so?
< gmaxwell>
The whole process of making these things super optional seems to add a lot of complexity. If instead we didn't think of it that way the filters could just be handled right along side, say, undo data.
< jimpo>
eh, depends how you think about complexity
< jimpo>
I consider adding more stuff to CBlockIndex to be more complex than a separate, asynchronous system
< sipa>
gmaxwell: agree, it adds complexity to make it optional and separate, but that's a different question than whether or not everything should be in leveldb or partially in flat files
< jimpo>
otherwise it involves messing around with validation code before there's even a consensus change
< gmaxwell>
If I drop out support for upgrades and what not, I believe I could add support for including indexes with two dozen lines of code. Obviously not a fair comparison.
< wumpus>
right, with decoupling at least changes can be localized and contained, instead of tying everything together
< gmaxwell>
But building a pile of logical abstractions that add layers and layers is not actually a reduction in complexity.
< wumpus>
exactly jimpo
< gmaxwell>
It's decomplexity theater.
< sipa>
please
< wumpus>
is that what he's doing?
< jimpo>
rich hickey would disagree, I think
< gmaxwell>
and then we just spend our entire lives on pointless refactors shifting around the deckchairs on this fractal of layers, while seldom actually advancing visible functionality, which is exactly what the project has been doing for the last year.
< wumpus>
I don't think this is constructive anymore
< wumpus>
any other topics?
< jonasschnelli>
rw_config INI vs UniValue
< jamesob>
operation of cblockindex is consensus critical; atm serving filter blocks is not => decoupling failure of the optional feature from the critical one is safer
< wumpus>
#topic rw_config INI vs UniValue (jonasschnelli)
< jonasschnelli>
The current rw_config PR from luke-jr uses .INI file structure (own parser)...
< jonasschnelli>
The question is, if a simple JSON file wouldn't be simpler and more stable
< wumpus>
jamesob: FWIW I agree, as long as it is not consensus critical, it's better to have it separate from the consensus critical code, this was the same idea with separating out the other indexes
< wumpus>
but apparently all of that was pointless —
< jimpo>
JSON is somewhat less human-readable IMO
< jonasschnelli>
Yes. But should its main focus be human editable / readable?
< jamesob>
also more annoying to write (e.g. trailing commas on last key causes errors)
< jonasschnelli>
I think using a JSON file is more straightforward since we have the parser/writer logic in place (rather than adding another file format)
< jonasschnelli>
Also, the rw_config's focus AFAIK is "edit through the application layer" rather than manually with a text editor
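As a sense of scale for "we have the parser/writer logic in place", a sketch of reading and writing a JSON settings file with UniValue (hypothetical file path and keys, not the actual PR):

    // Sketch: round-tripping a JSON settings file with UniValue.
    #include <univalue.h>
    #include <fstream>
    #include <sstream>
    #include <string>

    bool WriteRwSettings(const std::string& path)
    {
        UniValue settings(UniValue::VOBJ);
        settings.pushKV("prune", 550);     // hypothetical keys
        settings.pushKV("dbcache", 450);
        std::ofstream out(path);
        out << settings.write(/* prettyIndent= */ 2) << "\n";
        return out.good();
    }

    bool ReadRwSettings(const std::string& path, UniValue& settings)
    {
        std::ifstream in(path);
        std::stringstream ss;
        ss << in.rdbuf();
        return settings.read(ss.str());  // false on a parse error
    }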
< ryanofsky>
the question isn't really "do you like ini or json better?" the question is how do you want to replace QSettings? with a new ini parser, or existing ini parser, or with existing univalue parser?
< jamesob>
I'm in favor of whatever is less code
< jonasschnelli>
UniValue is very likely less code
< jimpo>
Yeah, I'm on board with JSON
< jonasschnelli>
But maybe we pick up the discussion when luke-jr is back or – if you care – comment on the PR
< jonasschnelli>
other topics?
< jamesob>
so can we proceed on jimpo's option (2), i.e. indexing in ldb and storing filters in flatfiles?
< wumpus>
jamesob | I'm in favor of whatever is less code <- same
< wumpus>
no other topics?
< jnewbery>
reminder: wallet IRC meeting tomorrow
< wumpus>
#endmeeting
< lightningbot>
Meeting ended Thu Jan 10 19:52:52 2019 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
< dongcarl>
I would like people to take a look at #12255 when they can, I'm not sure if there is precedent for how this is usually done but I think we should contact distro package maintainers if we decide it should be merged
< dongcarl>
I do not want careless distro maintainers clogging up people's hard drives and bringing down nodes just because the default datadir has changed
< wumpus>
to be honest I have no idea who is actually using those files, whether they're used in any distro packaging
< wumpus>
I do know it tends to be hard to find reviewers for init scripts and such
< wumpus>
I think they're meant as examples in the first place, to copy over and tweak, not for anything to be automatically installing them
< wumpus>
but it's a good point...
< dongcarl>
wumpus: I know that at least Arch Linux uses the systemd file
< gwillen>
I would appreciate if people would glance at #14978, it has a couple of utACKs but I don't really know what it needs before someone is comfortable merging it
< gribble>
https://github.com/bitcoin/bitcoin/issues/14978 | Factor out PSBT utilities from RPCs for use in GUI code; related refactoring. by gwillen · Pull Request #14978 · bitcoin/bitcoin · GitHub
< phantomcircuit>
sipa, what does openssl's rng do to gather entropy? anything that you're not already doing?
< gmaxwell>
phantomcircuit: I believe the code in sipa's PR does more or less strictly more than openssl does by default on linux at least.
< gmaxwell>
phantomcircuit: it reads /dev/urandom and also combines in e.g. the stack pointer and a time.
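A rough illustration of the kind of gathering described here (illustrative only, not sipa's actual PR): pull from the OS RNG and fold in cheap environmental extras such as a high-resolution timestamp and the current stack address, with everything ultimately hashed into the RNG state.

    // Sketch: collect OS randomness plus cheap environmental noise.
    #include <chrono>
    #include <cstddef>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    static void Collect(std::vector<unsigned char>& buf, const void* p, std::size_t len)
    {
        const unsigned char* b = static_cast<const unsigned char*>(p);
        buf.insert(buf.end(), b, b + len);
    }

    std::vector<unsigned char> GatherEntropy()
    {
        std::vector<unsigned char> buf;

        // 1. The OS RNG is the real entropy source.
        unsigned char os_rand[32] = {0};
        if (std::FILE* f = std::fopen("/dev/urandom", "rb")) {
            std::fread(os_rand, 1, sizeof(os_rand), f);
            std::fclose(f);
        }
        Collect(buf, os_rand, sizeof(os_rand));

        // 2. Cheap extras that cost nothing and can only add: a timestamp and
        //    the current stack address.
        const int64_t t = std::chrono::high_resolution_clock::now().time_since_epoch().count();
        Collect(buf, &t, sizeof(t));
        const void* stack_ptr = &t;
        Collect(buf, &stack_ptr, sizeof(stack_ptr));

        // In real code everything collected is hashed into the RNG state
        // (e.g. with SHA-512) rather than returned raw.
        return buf;
    }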
< phantomcircuit>
also, why remove the perfmon data from the GetStrongRand* functions?
< phantomcircuit>
oh i see you moved it to the scheduler, makes sense
< gmaxwell>
phantomcircuit: the perfmon data is already only gathered once in a while (it's slow), in the PR it gets moved to the idle thread.
< hidden>
i have been trying for years now any help will be hugely appreciated
< gwillen>
hidden: is there a writeup somewhere of what happened, what the failure looks like, and what you've tried?
< gwillen>
the easiest way to get help will start with writing those things down
< gwillen>
(also, I'm not sure this channel is the right place to seek help, but in any case it's not the right place to have a long discussion about it)
< hidden>
gwillen: basically i ran salvagewallet and it said it failed and the wallet is corrupt. if i open the client again i can keep generating new addresses to use up the reserved ones in the keypool, until i hit the "unknown key in key pool" error. i can dump the wallet (which was renamed ending .bak december 2013) using pywallet. in the dump i see the address and the corresponding "public_key_hex" under the list of keypool entries, but not the private key.
< hidden>
I also downloaded berkeley DB and did a db_dump, but didn't get anywhere much yet. i hear doing recovery without the aggressive option might help, but i'm still clueless and learning about berkeley db.
< hidden>
maybe even a solution was provided there to retrieve keys but i couldn't follow. if anyone has idea on what to do, patiently just idling and searching solutions thanks.
< gwillen>
hm, if pywallet --dumpwallet shows private keys for other public keys, but not the one in question, presumably that is one of the corrupted records :-\
< gwillen>
that doesn't necessarily mean there are zero useful bytes to recover, but it does mean some bytes you want have probably been destroyed, and implies that fixing the salvagewallet issue probably wouldn't recover that key
< gwillen>
but again I don't want to do too much discussion in this channel -- it would be really helpful if you would write down somewhere, e.g. even just a google doc or something, everything you've tried so far and exactly what you see (e.g. what output do you get from pywallet? obviously don't paste any actual private keys.)