< wumpus>
would be possible to detect this in configure, I guess, and then not enable the sse-reliant stuff
< bitcoin-git>
[bitcoin] practicalswift opened pull request #10701: Remove the virtual specifier for functions with the override specifier (master...virtual-override) https://github.com/bitcoin/bitcoin/pull/10701
< bitcoin-git>
bitcoin/master 4c72cc3 Wladimir J. van der Laan: Merge #10673: [qt] Avoid potential null pointer dereference in TransactionView::exportClicked()...
< bitcoin-git>
[bitcoin] jnewbery opened pull request #10703: [tests] Allow tests to pass when stderr is non-empty (master...test_stderr) https://github.com/bitcoin/bitcoin/pull/10703
< jonasschnelli>
Should we create a "trivial" label?
< jonasschnelli>
or something along those lines that covers "typo only fixes / comments / etc."?
< wumpus>
we had a trivial label in the past, I removed that at some point because it didn't add anything that 'docs/output' or 'refactoring' doesn't do
< wumpus>
it made people have the idea that labeling something 'trivial' would make it accepted sooner, prompting the creation of tons of trivial changes
< wumpus>
in any case, better to label specifically. 'trivial' doesn't really tell anything about the change
< wumpus>
e.g. a comment change would be a documentation change
< bitcoin-git>
bitcoin/master 37065d2 John Newbery: [tests] remove unused imports from utils.py
< bitcoin-git>
bitcoin/master f1fe536 John Newbery: [tests] fix flake8 warnings in test_framework.py and util.py
< bitcoin-git>
bitcoin/master cad967a John Newbery: [tests] Move stop_node and start_node methods to BitcoinTestFramework...
< bitcoin-git>
[bitcoin] MarcoFalke closed pull request #10556: Move stop/start functions from utils.py into BitcoinTestFramework (master...testframeworkstopstart) https://github.com/bitcoin/bitcoin/pull/10556
< jonasschnelli>
wumpus: I see, good point about the "trivial" label.
< sdaftuar>
wumpus: thanks for the heads up, i'll investigate the dbcrash issue
< wumpus>
sdaftuar: they look like different problems; 248398016 seems a race issue (sending command to terminated process), 248398018 is a timeout while running `generate`. It's a tad strange that both problems happen to be in dbcrash.
< sdaftuar>
wumpus: it looks like the issue in 248398016 may just be that i got the exception name wrong somehow
< sdaftuar>
we can bump the rpctimeout for the test to fix that second problem
< wumpus>
sdaftuar: oops, good catch, didn't notice that between all the errors
< wumpus>
yes that seems just a case of 'travis VM too slow for timeout'
< cfields>
wumpus: hmm
< cfields>
wumpus: re the openbsd assembler, i think we could patch it to use the hex, same as rdrand? :(
< cfields>
(i don't know enough about assemblers, but i figured that was done that way for exactly this reason)
< cfields>
but yes, we could also check during configure
< sipa>
cfields: yeah, i tried using rdrand as a mnemonic, but the osx assembler didn't accept it on travis
< sipa>
hex always works, but the downside is that you must hardcode the registers
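A minimal sketch of the hex-encoding trick sipa describes, assuming GCC/Clang-style x86 inline asm (illustrative only, not a verbatim copy of the rdrand code in random.cpp): because the opcode bytes are emitted directly, the destination register has to be fixed in the output constraint.

```
#include <cstdint>

// Emit the RDRAND opcode directly so an assembler that does not know the
// mnemonic still accepts it. The bytes 0x0f 0xc7 0xf0 encode "rdrand %eax",
// which is why the output constraint hardcodes eax; setc captures the carry
// flag that signals whether a random value was actually returned.
static bool rdrand32(uint32_t& out)
{
    uint32_t r;
    uint8_t ok;
    __asm__ volatile(".byte 0x0f, 0xc7, 0xf0; setc %1"
                     : "=a"(r), "=q"(ok)
                     :
                     : "cc");
    out = r;
    return ok != 0;
}
```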
< wumpus>
cfields: I don't particularly care if the sse-accelerated code is not used on openbsd
< wumpus>
cfields: they have to use openssl -no-asm as well
< cfields>
heh, really? That's a big hit
< wumpus>
we definitely shouldn't dumb down the code just for this case, there's a large chance that openbsd will switch to clang, or a different as, at some point
< wumpus>
yes, really
< cfields>
yikes, ok
< wumpus>
I don't want to get involved in the politics here, they chose to use an old as that doesn't support modern instructions, they get slower code. It does need to compile, though.
< cfields>
ok. let's just add an inline-compile check rather than trying to hunt down the assembler though. some compilers (clang, at least) let you choose between an internal and an external assembler
< sipa>
using hex asm also has the advantage of not needing separately compiled objects just to have access to one asm instruction
< wumpus>
their gcc is also - by choice - an ancient version
< wumpus>
ok, just don't do this for openbsd, it's likely a temporary issue
< bitcoin-git>
bitcoin/master 6d22b2b Matt Corallo: Pull script verify flags calculation out of ConnectBlock
< bitcoin-git>
bitcoin/master b5fea8d Matt Corallo: Cache full script execution results in addition to signatures...
< bitcoin-git>
bitcoin/master eada04e Matt Corallo: Do not print soft-fork-script warning with -promiscuousmempool
< bitcoin-git>
[bitcoin] laanwj closed pull request #10192: Cache full script execution results in addition to signatures (master...2017-04-cache-scriptchecks) https://github.com/bitcoin/bitcoin/pull/10192
< sipa>
w00t
< BlueMatt>
hey, neat
< BlueMatt>
0.15 is coming together :)
< Dizzle>
Definitely. I'm sad unix sockets aren't in. Looking forward to that for my electrum server.
< wumpus>
Dizzle: good to hear at least someone was waiting for the UNIX sockets stuff, did you review/test the PR?
< Dizzle>
wumpus: I will be this weekend, am getting an electrumx patch ready for it.
< wumpus>
awesome
< wumpus>
Dizzle: are you going to use P2P or RPC over unix sockets?
< wumpus>
(or both)
< Dizzle>
RPC
< Dizzle>
Electrum servers tend to maintain their own utxo DB. Syncing with your node's db over RPC is a bit of a bottleneck. Using asynchronous i/o speeds things along but there is plenty of overhead using the TCP loopback when both pieces of software are in the same operating environment.
< wumpus>
oh interesting, I had mostly seen the UNIX sockets as a security improvement, not so much for performance, but yes avoiding local TCP might save some overhead
< gmaxwell>
wumpus: well for p2p it would be a performance improvement if we changed the protocol and dropped the 'crc', but I don't think we were planning on doing that.
< jcorgan>
i have a non-bitcoin related application that communicates between two processes using a unix socket at a continuous 5Gbps data rate
< jcorgan>
uses zmq over unix socket
< gmaxwell>
jcorgan: yea, right now though for us the fact that every p2p message gets sha2ed is way more overhead than TCP.
< jcorgan>
oh, sure, i was just commenting that unix sockets are pretty fast
< wumpus>
so there's really a performance gain in both latency and throughput because no local network needs to be emulated, I never realized that
< wumpus>
makes sense though
< gmaxwell>
perhaps an input to the BIP151 stuff... that it should be possible to run it in a mode that turns off the auth for use over a domain socket at least.
< wumpus>
well one of the uses for UNIX sockets would be for tor; in that case we certainly don't want to disable the crcing, or auth. But agree it'd be nice to have it as a possibility.
< gmaxwell>
yea. and one always needs to worry about downgrade attacks.
< jonasschnelli>
gmaxwell: great idea
< wumpus>
it'd be another kind of whitebind 'nocrcbind' 'noauthbind'
< jonasschnelli>
gmaxwell: But then there would be no checksum... if you disable the poly1305 mac?
< wumpus>
bindflags extension
< gmaxwell>
jonasschnelli: right but there isn't any need for one with a purely local unix domain socket.
< wumpus>
jonasschnelli: over a local socket, when communicating with local software, there's no point
< jonasschnelli>
Indeed.
< jonasschnelli>
Corruption through domain socket comms is not possible, I guess?
< wumpus>
and the permissions of the socket itself are used for authentication
< gmaxwell>
jonasschnelli: no, not any more than any random memory anywhere.
< wumpus>
no, unless memory/cpu corruption, in which case there's other issues
< wumpus>
this is a SOCK_STREAM AF_UNIX socket, so neither reordering nor corruption nor packet loss should happen
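For illustration, the kind of socket being talked about here (a generic sketch, not code from the PR under discussion; the socket path is hypothetical):

```
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>
#include <cstring>

// Minimal AF_UNIX SOCK_STREAM client: a reliable, ordered local byte stream
// with no TCP/IP framing or loopback emulation in between. The filesystem
// permissions on the socket path act as the access control.
int connect_unix(const char* path) // e.g. a hypothetical "/var/run/bitcoind.sock"
{
    int fd = socket(AF_UNIX, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    sockaddr_un addr;
    std::memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    std::strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);

    if (connect(fd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr)) != 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```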
< jonasschnelli>
I see... yes. That mode would be awesome... especially for wallets (stuff I'm writing into libbtc) that download everything from the local peer
< wumpus>
it's theoretically possible for SOCK_DGRAM AF_UNIX sockets to deliver packets out of order, though I've never heard of an OS that does that (but anyhow not an issue for us)
< jcorgan>
one nice thing about ZMQ is that with a parameter change I can do the same thing between network endpoints on different hosts or two processes on the same host over a unix socket
< jcorgan>
no code changes
< jcorgan>
(or even two threads using zmq over shared memory)
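A small sketch of what jcorgan means, using the libzmq C API; the endpoint strings are made up for the example.

```
#include <zmq.h>
#include <cassert>

int main()
{
    void* ctx = zmq_ctx_new();
    void* sub = zmq_socket(ctx, ZMQ_SUB);
    zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "", 0); // subscribe to everything

    // Only the endpoint string changes between transports; the rest of the
    // program is identical. (Endpoints here are illustrative.)
    int rc = zmq_connect(sub, "ipc:///tmp/feed.sock");    // unix domain socket
    // int rc = zmq_connect(sub, "tcp://192.0.2.1:5556"); // across hosts
    // int rc = zmq_connect(sub, "inproc://feed");        // between threads
    assert(rc == 0);

    zmq_close(sub);
    zmq_ctx_term(ctx);
    return 0;
}
```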
< jonasschnelli>
suggesting: again multiwallet endpoint vs json parameter
< wumpus>
BlueMatt: instead of #10179?
< gribble>
https://github.com/bitcoin/bitcoin/issues/10179 | Give CValidationInterface Support for calling notifications on the CScheduler Thread by TheBlueMatt · Pull Request #10179 · bitcoin/bitcoin · GitHub
< BlueMatt>
correct
< kanzure>
hi.
< BlueMatt>
well, actually, its built on
< jonasschnelli>
Replaced BlueMatt's 10179 with 10652
< BlueMatt>
i mean 10179 is like one ack away, just want cfields to confirm i addressed his feedback sufficiently
< morcos>
So I don't think I've had any there for a couple weeks, if I could add two? It would be the first two of the fee changes, both have been open a little while, #10543 and #10589
< morcos>
I apologize I have not been around to do more reviewing recently
< wumpus>
BlueMatt: yes, as we discussed: it should still be merged, but it's no longer high-priority because you don't expect the dependent PR to get in in time to be safe for 0.15
< jonasschnelli>
morcos: which one do you want to add to the high-prio list?
< wumpus>
both
< morcos>
both! :) but i suppose 10589, if i can only have one
< jonasschnelli>
Good
< BlueMatt>
wumpus: well I want some glances at 10652 pre-15 to see if its too much or if it can go ahead...if its small enough for 15 I do want it for 15
< cfields>
BlueMatt: yes, good enough. Will ACK it.
< jonasschnelli>
We need both for 0.15
< BlueMatt>
(since it fixes the kinda-not-a-big-deal provide-invalid-block attack thing)
< wumpus>
ok - any other suggestions?
< wumpus>
enough other topics otherwise
< wumpus>
#topic short update on signature aggregation
< sipa>
hi
< wumpus>
(sipa)
< praxeology>
Whats the status on the mempool data structure change?
< praxeology>
woops not mempool
< sipa>
this is just a status update of what gmaxwell, apoelstra and I have been working on lately
< praxeology>
utxo
< wumpus>
praxeology: you're interrupting a meeting
< sipa>
i presented on this in milan, and later we wrote a paper for bitcoin17
< gmaxwell>
praxeology: long since done.
< sipa>
the paper was rejected with the very valuable feedback that a solution already existed
< sipa>
namely a paper by Bellare & Neven from 2006
< sipa>
it only solves one of the problems we were trying to solve (signature aggregation, not key aggregation)... but that's the only consensus-critical part if we'd want aggregation in bitcoin transactions
< gmaxwell>
(which irritatingly never turned up in eons of searching for us)
< wumpus>
so that solution is usable for bitcoin?
< sipa>
yes
< sipa>
the advantage is that this is a peer-reviewed scheme with a strong security proof under very wide assumptions
< wumpus>
nice!
< gmaxwell>
Their solution is almost equivalent to ours (or is equivalent with the right kind of squinting about hash function definitions).
< gmaxwell>
jonasschnelli: doesn't look like the right paper (though maybe its one they published to another venue)
< BlueMatt>
cool!
< sipa>
so what this scheme gives us is a way for transactions to have a single signature (as long as all signers cooperate, so even in the case of coinjoin) overall... regardless of the number of inputs or multisig
< sipa>
what it does not do is an ability to turn multisig into single sig (but that could be added on top later, as it's purely a wallet interaction thing)
< sipa>
it also supports batch validation
< cfields>
ooh
< sipa>
meaning that a whole block (or even multiple blocks) could be validated at once
< sipa>
the speedup depends on the size of the batch, but may go as high as 5x (for 4000 signatures)
< gmaxwell>
Unfortunately our paper isn't available because we need to update it to reflect that work, but it is much more targeted for the Bitcoin application (and would probably be much more clear for people here).
< sipa>
in the batch validation case (without aggregated signatures) the speedup would likely be restricted to 3.5x or so
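Roughly what batch validation means here, in my own notation (a sketch, not the paper's formulation): each Schnorr-style signature (R_i, s_i) on message m_i under key P_i verifies individually, and a whole batch can be checked with one random linear combination.

```latex
% Individual verification, with challenge e_i = H(R_i, P_i, m_i):
s_i G = R_i + e_i P_i .
% Batch verification of n signatures with random weights a_i:
\Big(\sum_{i=1}^{n} a_i s_i\Big) G \;=\; \sum_{i=1}^{n} a_i R_i + \sum_{i=1}^{n} a_i e_i P_i ,
% i.e. one large multi-exponentiation instead of n independent ones, which is
% where the batch-size-dependent speedup comes from.
```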
< morcos>
gmaxwell: is that something that'll happen? can we just wait to read yours?
< sipa>
yes, we'll definitely finish up the paper
< sipa>
and discuss the change more widely
< sipa>
just wanted to give a heads up here
< wumpus>
yes, thanks for the update!
< morcos>
if i could have next topic, i have to leave early
< cfields>
sipa: what about that per-block aggregation that was briefly discussed? does this get us any closer to that?
< cfields>
nm, will follow-up after meeting
< wumpus>
morcos: what was your topic?
< gmaxwell>
~2.3x speedup for 32 signatures in the aggregate, fwiw.
< morcos>
Fee changes for 0.15
< wumpus>
#topic fee changes needed for 0.15
< wumpus>
morcos: sorry, missed that one
< wumpus>
morcos: you were actually first to propose a topic :)
< morcos>
I'll be relatively quick for my part, I think I've got all the PRs out now that need to go in for 0.15, but I want to encourage people to think about a bunch of the RPC API changes so they are good in their first release
< morcos>
But the other thing is there is one piece of missing functionality which I think is needed
< morcos>
Given how volatile fee estimates are and how much they change between short targets and long, I think it's important to give the GUI access to longer fee estimates
< morcos>
But someone more familiar with QT can probably whip that up a lot quicker than me
< morcos>
Might be best to build it on top of all my other changes, #10707 should have everything in one
< instagibbs>
it randomly doesn't work which is disappointing UX
< morcos>
gmaxwell: the ability to add other inputs? isn't it pretty rare to not have change?
< jonasschnelli>
But can happen...
< wumpus>
no, we don't persist RBF state, it has to be selected per transaction
< jonasschnelli>
wumpus: maybe the GUI should remember it
< instagibbs>
morcos, we are going to target more exact matches in future, fwiw
< wumpus>
the only way to make it persist is the command line option
< morcos>
wumpus: the gui initializes with the command line argument, and then persists during the session
< wumpus>
jonasschnelli: meh, better to have it as "option" then
< gmaxwell>
FWIW, I believe electrum defaults to replaceable now and pushes pretty hard in that direction, though users can flip it off on a per tx basis.
< morcos>
via checkbox
< wumpus>
jonasschnelli: persisting non-option settings between restarts would be unexpected
< jonasschnelli>
Yes, I guess you're right..
< gmaxwell>
In any case, I think the default is kind of moot until bumping is sufficiently mature.
< wumpus>
between transactions in the same session makes sense I guess
< morcos>
I suppose I have one more question on that
< jonasschnelli>
Yes. If the bump won't work because it can't add another input the default should remain at the current state
< wumpus>
yes
< jonasschnelli>
It can happen quickly when fees are rising
< achow101>
hi. I'm late
< morcos>
Right now there are no options to the "Increase transaction fee" option in the GUI and it uses the default tx confirm target. Should it instead use whatever the slider is set to?
< jonasschnelli>
Yes
< morcos>
If the slider is not in use and custom fee is set, should it use that?
< wumpus>
morcos: the slider is on another tab
< jonasschnelli>
I'd like to work on the replacability in the GUI for 0.16
< morcos>
Those would be easy changes to make after my PR
< BlueMatt>
the slider is in another tab, thats strange
< wumpus>
morcos: not sure that would be intuitive, people assume the slider is for new transactions, the bump option should probably have its own choice dialog
< jonasschnelli>
First I thought of bringing the tx back to the original send-tx screen (you could even add recipients...) but meh
< morcos>
wumpus: that seems maybe too much optionality
< jonasschnelli>
The bump window should just be larger and have the slider
< wumpus>
jonasschnelli: yes
< morcos>
ok, thats fine.. so leave it as the wallet default confirm target for now?
< wumpus>
yes
< BlueMatt>
yea, sucks, but its easy and reasonable
< jonasschnelli>
And also we have never really discussed the pre-signed bumps.. but that we should probably do in another meeting
< BlueMatt>
yea, that sounds like a 16
< instagibbs>
jonasschnelli, that will involve new strategy
< instagibbs>
:)
< instagibbs>
reasonably different from an after-the-fact fix imo
< jonasschnelli>
I'd say focus on fee opt. in 0.15, rbf in 0.16
< wumpus>
agreed
< wumpus>
#topic the need for the watchonly rpc flag after multiwallet (sipa)
< sipa>
hi!
< wumpus>
(we need to move forward a bit, lots of topics)
< sipa>
currently many RPCs have an optional flag "include watchonly"
< jonasschnelli>
is that similar to the -disablehot?
< * jonasschnelli>
is listening
< sipa>
at the time the need for that flag existed because of a desire to keep your "hot" wallet separated from your "watch only" wallet
< wumpus>
sipa: yes, on the long term I agree with you
< jonasschnelli>
sipa: you think with multiwallet the wallet should either be watch or hot?
< sipa>
jonasschnelli: no
< wumpus>
sipa: makes more sense to have a wallet either full-watchonly or has-keys
< sipa>
wumpus: perhaps, but that's orthogonal
< wumpus>
sipa: I don't understand you then
< instagibbs>
ok get to the point :)
< BlueMatt>
why is that a mistake?
< jonasschnelli>
Let sipa explain...
< sipa>
what i'm trying to get at is that the within-a-wallet separation is no longer needed
< wumpus>
how is that different from what I said?
< wumpus>
instead of watchonly within a wallet you'd have a watchonly wallet and a normal wallet
< sipa>
i'm not arguing to remove the ability to have both keys and watchonly in one wallet
< gmaxwell>
because if you want to have a mixed thing that's fine too, then you just have a mixed thing. No need for a flag; if you want separation, use two wallets.
< jonasschnelli>
but I fail to see the difference then between only allowing watch-only or hot
< sipa>
just that there is no need to just select coins that affect one part
< gmaxwell>
you're suggesting an extra restriction.
< sipa>
or see a 'balance' of just one part
< sipa>
a wallet is a wallet, and has a single balance
< sipa>
some of the keys may require decrypting your wallet
< wumpus>
oh, right
< sipa>
some of the keys may require a hardware wallet
< jonasschnelli>
I see... yes.
< sipa>
some of the keys may be just watchonly and you need to use raw transactions to interact with them
< BlueMatt>
fair, this sounds like an 0.17 or 0.18 thing, though
< gmaxwell>
Now, logically you probably will separate or something, for convenience, but I don't see a particular reason to require that right now.
< BlueMatt>
are you asking if we should deprecate?
< sipa>
i was hoping 0.15
< wumpus>
BlueMatt: agree, long term
< sipa>
just make the watchonly flag ignored and always set it to true
< wumpus>
this is not something we're going to change in the RPC interface pre-0.15
< sipa>
ok
< wumpus>
people rely on this
< wumpus>
we could document it as deprecated
< BlueMatt>
we'd need to mark it deprecated
< morcos>
sipa: that seems reasonable except what about identifying which things you have keys for and which you dont..
< BlueMatt>
probably deprecate after we have working multiwallet that is stable
< wumpus>
then remove the flag for 0.16 or 0.17, but this seems over-hurried
< BlueMatt>
so maybe deprecate in 0.16...
< morcos>
that seems a useful distinction to keep to me
< gmaxwell>
with 0.15 and multiwallet we can start deprecation at least-- e.g. advise that this will happen in the future, suggest people use separate wallets. The one problem with that however is that your separate watchonly wallet still needs the stupid flag everywhere. :(
< BlueMatt>
remove in 17 or 18
< wumpus>
let's focus on actually getting multiwallet into 0.15
< jonasschnelli>
I somehow think mixed wallets can be a footgun source... but right, it's orthogonal
< instagibbs>
related topic: some way to signal that the funds are "safe" when you expect a hardware wallet to have the privkey
< instagibbs>
post-0.15 ofc
< sipa>
maybe i haven't made this clear, but how do you deal with hardware wallets, for example?
< wumpus>
hardware wallets in bitcoin core is a different topic
< BlueMatt>
we dont need to add a flag for hw wallets
< sipa>
BlueMatt: then why do we need a flag for watchonly?
< wumpus>
important, but certainly not one that's going to make it into 0.15
< BlueMatt>
we can say "hw wallets are always included in balance, flag for watchonly is deprecated" starting in the version that supports hw wallets
< gmaxwell>
sipa is pointing out that the model of 'watch only' when applied to also having hardware wallets starts adding combinatorial blowup.
< sipa>
BlueMatt: fair enough
< jonasschnelli>
If a wallet has no clear cut between hot and cold (watch-only), as a code-level guarantee, I would not use it for hot funds...
< BlueMatt>
yes, agreed, we should not make it worse, but we dont need to worry about this until at least 16, I think
< wumpus>
agree on not making it worse
< BlueMatt>
need useable working good multiwallet first, which likely wont be 15
< gmaxwell>
BlueMatt: thats a point. now just give me a flag for importmulti that gives me a watching key imported that way and it's good to go. :P
< sipa>
jonasschnelli: again, orthogonal
< instagibbs>
I have a working Core+Ledger system, and have a couple thoughts, but this is a different topic yep
< gmaxwell>
BlueMatt: uhh, it's like done.
< jonasschnelli>
sipa: but why not just separate pure watch-only wallets from hot wallets? Why would that be orthogonal?
< BlueMatt>
gmaxwell: I know, but we need a cycle of finding more use-cases and making sure we've got it all covered, was my point
< wumpus>
yes multiwallet is almost done, but in 0.15 it will at least be experimental
< BlueMatt>
eg createwallet flows within rpc, disconnectwallet, etc
< sipa>
jonasschnelli: "orthogonal" means you can still do that
< sipa>
jonasschnelli: it has nothing to do with this issue
< wumpus>
it's the first release it is in, after all
< gmaxwell>
jonasschnelli: because that is an additional restriction that AFAIK isn't needed. maybe later it's needed to not support mixed but it seems like a separate issue to me.
< jonasschnelli>
Okay
< BlueMatt>
ok, so we all agree, eventually push people towards multiwallet away from watchonly :)
< BlueMatt>
next topic? :p
< sipa>
what i want to get at is that a wallet is just a collection of keys it considers "mine" - independent of its ability to actually fully sign
< sipa>
BlueMatt: yes, agree
< wumpus>
#topic rolling utxo hashes
< wumpus>
(sipa again)
< sipa>
hi!
< instagibbs>
sipa, ISMINE_* tho :)
< instagibbs>
ok next topic
< sipa>
with pertxout we changed the serialized_hash because the new format no longer maintains the tx version of the utxo
< sipa>
i posted about rolling utxo hashes a while ago on the ML
< sipa>
i'm not proposing actually implementing that, but would it be worthwhile to immediately switch to a scheme that is compatible with it?
< sipa>
so that there is no need to break the API again
< gmaxwell>
sipa: as in don't do the rolling thing, but have the oneshot thing compute the same hash?
< sipa>
yes
< sipa>
downside: makes gettxoutsetinfo slower
< wumpus>
how much slower?
< sipa>
upside: allows us to make gettxoutsetinfo super fast in the future
< gmaxwell>
lots slower.
< sipa>
several times
< wumpus>
could add a new RPC for it
< gmaxwell>
sipa: Well a challenge there is that I'm not sure that we've settled on the field. So that isn't a guarantee of compatibility.
< wumpus>
instead of gettxoutsetinfo
< sipa>
interesting, i hadn't considered that
< sipa>
gmaxwell: yeah, i know
< gmaxwell>
actually if we drop the hash from gettxoutsetinfo i think thats the only thing now that requires scanning the whole thing.
< sipa>
no, everything does
< sipa>
(txout count etc)
< wumpus>
yes it's all aggregate statistics
< gmaxwell>
yes but it wouldn't have to with rather trivial changes.
< sipa>
though those things can be maintained on the fly
< gmaxwell>
which would be robust and wouldn't change.
< sdaftuar>
i think we will want an RPC that can scan the disk to calculate the answer, even if we are able to calculate everything on the fly
< sdaftuar>
so that we know our on-disk data is correct
< sipa>
sdaftuar: good point
< gmaxwell>
sdaftuar: restart your node. :P
< sipa>
an advantage of the fast hash is that you can compare it with a recompute-the-whole-thing
< gmaxwell>
okay interesting points.
< wumpus>
that'd be very nice
< wumpus>
a utxo hash that would be quick to compute for every block would be very nice to have
< gmaxwell>
(I was momentarily overestimating how easy it would be to switch to summary statistics, I forgot that they have to be saved and loaded across restart... or otherwise every startup needs the equivalent of a stats call)
< gmaxwell>
wumpus: right that's the goal of pieter's work. It's just a bit immature now, and if we implement it at the moment we may want to switch to an incompatible version later.
< wumpus>
I like to check that all my nodes have the same utxo hash, but calling getutxosetinfo for every block takes too much time, I've tried and given up :)
< gmaxwell>
Assuming we stay with the multiplicative group hash, we need to pick a prime where multiplication mod that prime is as fast as possible. Sipa has done some work there, but it's a research project that can sink as much time as we want to put into it.
< sipa>
or we could just use the elliptic curve version, which can probably be made only ~2x slower than the GMP-based MuHash
< sipa>
which is just a few lines of code
< wumpus>
now doing it intermittently, but that means that when it fails we don't know exactly where it started to diverge
< gmaxwell>
right, I want to have UpdateTip log the value.
< sipa>
^ that
< sdaftuar>
wumpus: it's actually not clear to me how much the fast utxo hash calculation helps in comparing running nodes
< sipa>
well the fast utxo hash lets you do a consistency check on just a single node
< sdaftuar>
but what is exactly being compared as consistent?
< sipa>
by having a fast incrementally-updated version, and a slow recompute-from-scratch one
< gmaxwell>
sdaftuar: because you can log the utxo hash at each point, and so if they diverge in a way that the hash sees (e.g. not underlying disk corruption) you'll learn. Also you could run a command that checks the disk against the running value to catch that disk corruption.
< gmaxwell>
so your disk <> your running <> my running <> my disk
< sdaftuar>
yes, i agree if you do the comparison with disk, then you get something valuable
< sdaftuar>
but just comparing the fast calculation between nodes doesn't seem like it does much, does it?
< wumpus>
hm yes good point
< gmaxwell>
right now it is a PITA to compare you and I at disk because we have to do it at the same time (and hope there isn't a block at that instant. :P )
< sdaftuar>
gmaxwell: agreed
< gmaxwell>
sdaftuar: it depends on where the errors you're concerned about are happening.
< wumpus>
gmaxwell: yes, even when you time the RPC command on blocknotify, it sometimes misses the block :)
< gmaxwell>
if they're below the layer where the running hash runs you only gain if you also do periodic checks between it and the disk. Above it, however, you have constant checking.
< gmaxwell>
but the nice thing is that disk and running can be async checked... You and I don't need to do our disk comparisons at the same time.
< wumpus>
indeed
< gmaxwell>
sdaftuar: this is all also machinery we almost certainly need for a reasonable UTXO-assume-valid kind of sync in the future.
< wumpus>
all in all a rolling utxo hash is an improvement, it creates more options, but you can still do the same as now if you want
< sdaftuar>
gmaxwell: yeah i agree and that's the use case i'm most excited about :)
< sdaftuar>
i was just trying to figure out exactly how i'd use it to compare my own nodes, and wasn't sure of the utility
< gmaxwell>
wumpus: the challenge though is that it isn't free. muhash on the whole utxo set takes CPU minutes.
< wumpus>
gmaxwell: yes I'm not sure it should replace the faster hash
< wumpus>
maybe it should just be an additional thing
< gmaxwell>
well once it's a running hash its very fast. :P
< sdaftuar>
hash_serialized_3? :P
< wumpus>
OTOH we're already breaking the hash for 0.15
< wumpus>
(which is kind of sad, as it makes it impossible to compare against older versions)
< gmaxwell>
sipa backported the new hash to the old system for development testing, FWIW.
< gmaxwell>
(it's a pretty trivial change, IIRC, just drop the version from it)
< wumpus>
cool, that'd be useful, especially with the 0.14 to 0.15 database change it's important to be able to check synchronization
< gmaxwell>
This patch existed at one point already, dunno if sipa still has it.
< sipa>
the problem is that #10434 is quite a bit of intricate code
< sipa>
the EC version would be many times less code (given that we already have secp256k1), but be a few times slower
< wumpus>
I don't have a strong opinion on it
< sipa>
on the other hand, MuHash is very simple to implement in anything that already has big integers (it's a few lines in python)
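The rolling-hash idea in a nutshell, notation mine rather than from the ML post: hash each UTXO into a multiplicative group mod a prime and keep the running product, so updates cost a few modular multiplications per block while a from-scratch recomputation stays available as a cross-check.

```latex
% Set hash over the current UTXO set \{u_1, \dots, u_n\}, with h() hashing
% into the multiplicative group modulo a prime p:
M = \prod_{i=1}^{n} h(u_i) \bmod p .
% Incremental update when a block creates outputs A and spends outputs S:
M' = M \cdot \prod_{a \in A} h(a) \cdot \prod_{s \in S} h(s)^{-1} \bmod p ,
% which costs O(|A| + |S|) multiplications regardless of the UTXO set size.
```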
< sipa>
ok
< wumpus>
though in general I'd say higher performance seems preferable to the ability to re-use code
< sipa>
in that case, some review would be welcome :)
< wumpus>
but I haven't seen the code
< wumpus>
yeah, hope to get around to it
< sipa>
i can drop the asm optimized version from the first PR if wanted
< praxeology>
Couldn't you put a delay on insert/remove from the rolling hash... say only for utxos that are 1 day of blocks old? isn't a hash for N blocks ago just about as good as the current hash?
< sipa>
praxeology: totally irrelevant
< sipa>
that would mean you need to keep those utxos around for processing later
< sipa>
we have an approach that can combine them into a running hash in _microseconds_
< gmaxwell>
All doing that does is perhaps save you 1% of computation for blocks that are reorged out, but at the expense of complexifying everything because the data is inconsistently available.
< praxeology>
What percent of utxos are spent within a day?
< instagibbs>
2 minutes left
< instagibbs>
if anyone has microtopic
< wumpus>
that seems irrelevant to this discussion
< wumpus>
(although it's interesting to know in its own right)
< praxeology>
Sounds like you guys are concerned about performance on the rolling hash
< wumpus>
#endmeeting
< lightningbot>
Meeting ended Thu Jun 29 19:59:24 2017 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
< gmaxwell>
praxeology: delaying it doesn't change that.
< instagibbs>
sipa, you still around?
< sipa>
instagibbs: sure
< instagibbs>
ok let me toss a wallet q to you
< gmaxwell>
praxeology: its the same amount of computation if you do it now or 100 blocks ago.
< midnightmagic>
Man I wish people would end meetings elsewhere with that kind of precision..
< wumpus>
yes it'd just move the computation in time
< * midnightmagic>
shudders
< wumpus>
midnightmagic: I guess they should hire me as meeting chair :p
< praxeology>
gmaxwell: delaying will reduce the number of things that are xored (err whatever math op you're doing), since utxos that were spent before the delay window are never added
< gmaxwell>
praxeology: if you imagine something which is never consistent with any actual state of the system, then that really isn't all that useful.
< BlueMatt>
I was gonna say something about the swiss leading the meeting, but its wumpus, not jonasschnelli
< instagibbs>
sipa, so for hw wallets, would it be reasonable to have a new ISMINE type that means the wallet expects the output to be spendable by the wallet even though privkeys aren't physically present in the wallet.dat
< sipa>
instagibbs: in my opinion, ISMINE should become a bool
< sipa>
it's yours or not
< gmaxwell>
praxeology: that isn't delayed.
< instagibbs>
ah ok, so no
< midnightmagic>
wumpus: :-) Your nickname shall be The Guillotine
< instagibbs>
well in that case at least some of the logic could be moved elsewhere
< sipa>
instagibbs: the ability to spend should be independent from what is considered yours
< sipa>
instagibbs: (note, that's my personal opinion)
< gmaxwell>
praxeology: I believe what you are suggesting doing is computing it sparsely so that there isn't a value computed at every block.
< wumpus>
midnightmagic: :-)
< instagibbs>
no it's fair, I'm just trying to get my wallet to understand what I consider mine
< instagibbs>
and right now it's janky mess
< sipa>
instagibbs: perhaps the solvable distinction is still useful
< instagibbs>
so a new "consider yours" function would fix my current issue in a cleaner way
< sipa>
cfields: still around?
< achow101>
instagibbs: isn't there some combination of ISMINE_ types that indicate "no privkey in wallet but I can still spend it"
< gmaxwell>
praxeology: this would reduce computation somewhat, but at the expense of creating coordination points... and then you only perhaps get a 2x speedup, but you also add bursts of latency rather than being able to compute it continually.
< cfields>
sipa: yes
< praxeology>
gmaxwell: I'm suggesting only adding a utxo to the hash on the 144th confirmation
< instagibbs>
achow101, there is solvable+watchonly
< instagibbs>
but that doesn't mean you can spend it
< gmaxwell>
praxeology: that would make something which is _never_ consistent with the utxo set at any point, I think this would be completely useless to us.
< instagibbs>
so if you have a hw wallet, you expect to be able to sign for it, but there's no enum for it
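For context, the enum being referred to, roughly as it stood in src/script/ismine.h at the time (reproduced from memory, so treat as approximate): "solvable + watchonly" is ISMINE_WATCH_SOLVABLE, which says the wallet can construct the scriptSig but says nothing about where, or whether, the private keys exist.

```
// Approximate reproduction of the isminetype flags under discussion.
enum isminetype {
    ISMINE_NO               = 0,
    ISMINE_WATCH_UNSOLVABLE = 1, // watched, but the wallet can't build the scriptSig
    ISMINE_WATCH_SOLVABLE   = 2, // watched, scriptSig constructible, keys elsewhere
    ISMINE_WATCH_ONLY       = ISMINE_WATCH_UNSOLVABLE | ISMINE_WATCH_SOLVABLE,
    ISMINE_SPENDABLE        = 4, // private keys are in this wallet
    ISMINE_ALL              = ISMINE_WATCH_ONLY | ISMINE_SPENDABLE
};
```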
< gmaxwell>
praxeology: it couldn't be used to check database consistency, it couldn't be used to perform a sync from utxo.
< sipa>
praxeology: that sort of approach may be useful for UTXO/TXO commitment like approaches, where updating the commitment is very expensive and cheaper when batched
< instagibbs>
so in corner cases you consider that output untrusted, and get "insufficient amount"
< sipa>
praxeology: but the rolling UTXO idea was specifically intended to not need that, because it's so fast
< praxeology>
gmaxwell: if you have the last 144 blocks then you can do the remainder of the utxos from those blocks to finish the hash at a particular point
< sipa>
praxeology: just looking up a utxo spent from disk is more work than updating the hash
< cfields>
neha__: eh? :)
< instagibbs>
achow101, I coded a fix for this corner issue, but only for p2sh multisig... current methodology kind of forces me to do a janky fix or make another ismine enum :/
< praxeology>
sipa: yes, well not sure the use case of when such would be needed
< achow101>
just make ismine a bool :p
< praxeology>
earlier you guys were talking about a tx just having one signature, even for multiple things that need to be signed. You talked about computation performance. What about impact on tx size?
< praxeology>
particularly since... i hear that network bandwidth is the main bottleneck
< instagibbs>
N signatures in 1 signature space possible, across inputs. Still need the pubkeys
< praxeology>
sure, still need public keys. but what about the signature size?
< neha>
cfields: BlueMatt gave me an issue to fix!
< sipa>
64 bytes instead of N*72
< sipa>
praxeology: ^
< gmaxwell>
praxeology: instagibbs said: one signature.
< sipa>
praxeology: and bandwidth isn't all that much of a factor anymore since compact blocks etc
< cfields>
neha: good to see! that's a rabbit hole. I'd be nervous if you didn't find more things to fix while you're down there :)
< instagibbs>
achow101, sounds simple, but let's see all the interaction with current wallet stuff
< gmaxwell>
(the signatures in bitcoin today are 72 bytes instead of 64.125 just because they use a dumb encoding.)
< praxeology>
sipa: do you have a layman link for compact blocks? or anyone?
< sipa>
praxeology: bip 152
< sipa>
i'm sure there are better explanations online than the bip, though
< gmaxwell>
I dunno, the bip is pretty good.
< instagibbs>
gmaxwell, speaking of which how is 0.5RTT going these days, any change?
< sipa>
cfields: so the block-wide aggregation that adiabat proposed a while ago on the ML still applies to Bellare-Neven... allowing to have only 32 bytes of the signature in every tx, and another 32 byte block wide
< instagibbs>
(if you're monitoring)
< sipa>
cfields: the downside is that it doesn't play nicely with cached signature validation
< gmaxwell>
instagibbs: meh, we need the skip-recent-txn things in mining. It's gone up and down (in particular utility of the extrapool has gone up and down)
< sipa>
cfields: as wtxids would change after inclusion in a block
< gmaxwell>
instagibbs: during periods of long backlogs the extrapool is too small-- I see misses that my node had seen before.
< cfields>
sipa: hmm. Does parallel validation still apply as well?
< praxeology>
sipa: oh, that is just where txs are not re-relayed with blocks. Something like a 1/2 bandwidth used improvement.
< cfields>
*the parallel validation improvements
< praxeology>
or is there something else I missed when skimming? something on the order of mimblewimble improvement?
< sipa>
praxeology: yes, but the bandwidth needed to propagate a block quickly is massively reduced
< sipa>
praxeology: overall data volume is reduced by 2
< gmaxwell>
praxeology: typically at the tip blocks are transmitted with about 16kbytes.
< sipa>
cfields: yes
< sipa>
cfields: you can just shard and do the computation for a number of groups
< sipa>
and then do a simple cheap combine operation
< gmaxwell>
you lose some of the asymptotic gains, though we've been experimenting with parallel versions of the aggregate validation operation.
< gmaxwell>
instagibbs: in the last 144 blocks I see 23% requiring a round trip. 13% were saved from needing one by the extra pool. A week ago the extrapool saves were about half that I think.
< cfields>
sipa: if we cached as much as possible otherwise (hashing mainly, i suppose) and completely dropped the pre-validated cache, do you have a sense of how it'd compare? I realize it'd be worth keeping the cache as it'd still have a good hit rate, i'm just curious.
< sipa>
cfields: i don't understand
< gmaxwell>
cfields: I'm unclear about what you're asking.
< cfields>
trying to weigh the benefits of parallel validation against losing some pre-cached hits
< sipa>
well, do both
< gmaxwell>
yea, there isn't any conflict. You parallel validate the things that miss the cache.
< sipa>
oh, you mean with the block-wide aggregation
< gmaxwell>
do you mean batch validation instead of parallel btw?
< sipa>
my concern there is mostly the layering violation
< gmaxwell>
the block wide aggregation stuff is ugh
< cfields>
yes, sorry. i meant batch.
< BlueMatt>
instagibbs: I think we'd actually see a bigger improvement by responding to getblocktxn requests in the background while connecting a block than making 0.5rtt more common
< BlueMatt>
network-wide that is
< BlueMatt>
though 0.15 may speed up block connection enough.....anyway
< instagibbs>
BlueMatt, i was hoping for lazy improvements, like "it got better" :P
< instagibbs>
yeah I reviewed a PR for that a long while ago
< sipa>
cfields: so for batch validation... batch validate the txn in a block you haven't seen yet, and ignore the rest
< sipa>
(assuming there is no block-wide anything going on)
< gmaxwell>
BlueMatt: if you want to make that even faster: create a ReadBlockFromDisk that returns a blockblob (don't deserialize the transactions in it).
< gmaxwell>
and use that for all getblocky kinds of requests.
< gmaxwell>
(the cases not covered by the cached getblocktxn are whole block requests...)
< gmaxwell>
(and we waste time deserializing then reserializing the blocks...)
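A hypothetical sketch of what gmaxwell is suggesting (not code from the repo; the function name and the assumption that block-file entries are prefixed by a 4-byte magic plus 4-byte length, with the index pointing at the serialized block itself, are mine):

```
#include <cstdio>
#include <cstdint>
#include <vector>

// Read the serialized block straight into a byte buffer so it can be handed to
// "getblocky" requests without deserializing and reserializing the transactions.
bool ReadRawBlockFromDisk(std::vector<uint8_t>& blob, std::FILE* file, long offset)
{
    uint8_t header[8]; // assumed layout: 4-byte network magic + 4-byte LE length
    if (std::fseek(file, offset - 8, SEEK_SET) != 0) return false;
    if (std::fread(header, 1, sizeof(header), file) != sizeof(header)) return false;

    const uint32_t size = uint32_t(header[4]) | (uint32_t(header[5]) << 8) |
                          (uint32_t(header[6]) << 16) | (uint32_t(header[7]) << 24);

    blob.resize(size);
    return std::fread(blob.data(), 1, size, file) == size;
}
```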
< cfields>
sipa: ah, makes sense
< gmaxwell>
cfields: there is a bit of a conflict right now because batch and parallel are in competition with each other... bigger batches get more speedup, but you want to use all your cores....
< cfields>
is the batch operation itself not parallelizable?
< BlueMatt>
gmaxwell: I'm less concerned about that w/ parallel ProcessMessages - if you ask for an old block it's gonna be slow (though I agree that we should fix that, just less interesting for the latency improvements)
< BlueMatt>
instagibbs: well that one got dropped in favor of "doing it cleanly"
< cfields>
i suspect I'm misunderstanding the conflict
< BlueMatt>
instagibbs: now it's 10652 + the two other PRs that make up that branch, then an actual parallelization PR
< BlueMatt>
so....16? maybe 17
< instagibbs>
BlueMatt, ah k
< instagibbs>
clearly behind the times
< sipa>
cfields: a batch of 4000 signatures takes less than 8 times as much CPU as a batch of 500 signatures
< sipa>
cfields: if you split the batch up in 8, and run those 8 on separate CPUs, you're going to do 8*batch(500) work, not 1*batch(4000) work
< gmaxwell>
cfields: the algorithm is not naturally parallelizable, though with lots of synchronization traffic it can be made parallel.
< gmaxwell>
how much of the batch(4000) speed we can get out of something pushed to be made N-way parallel is an open question.
< gmaxwell>
If synchronization between threads is free the answer is "almost all of it"
< cfields>
ok that's what i was missing, thanks
< gmaxwell>
If it is very expensive, the answer appears to be "almost none of it".
< cfields>
i'll read the paper before discussing further
< gmaxwell>
None of this is in the paper.
< cfields>
oh. in that case, I already read the paper :p
< sipa>
cfields: the 'hard' part of the computation is doing a huge n1*P1 + n2*P2 + n3*P3 + ... (where the n's are integers and the Ps are EC points)
< sipa>
turns out, there are very neat algorithms (multiple, in fact) that do this whole computation many times faster than just multiplying individually and adding
< gmaxwell>
Same as for a normal signature validation except there it's just n1 * P + n2 * G ... so only two EC points. in batch and aggregate validation there are pubkeys + 1.
< gmaxwell>
done simply, an n1*P requires 256 EC additions (technically 256 doublings and 128 additions, but doublings are about twice as fast as an addition)-- using grade-school long multiplication (in base 2). The batch computation of n1*P1 + n2*P2 + n3*P3 + ... can do the job in about 26 additions per point for 4096 inputs.
< cfields>
whoa
< cfields>
oh, misread :)
< gmaxwell>
yes per point, but still that's almost a 10x speedup over a dumb algorithm.
< cfields>
are there further speedups if the result is known ahead of time and you're just attempting to verify correctness?
< gmaxwell>
What we do in secp256k1 for validation (which is two points) is far from simplistic and takes much less than 256 adds worth of work per each. I believe it's equal to about 84 per point.
< gmaxwell>
cfields: the aggregate and batch both count on a property like that.
< gmaxwell>
the R value in the signature is the result of this calculation.
< cfields>
ah, so that's the 32bytes-per-block
< sipa>
cfields: eh, i think you're confused
< sipa>
cfields: perhaps we should move to #secp256k1
< cfields>
sure
< gmaxwell>
To be clear: Batch validation exploits ' the result is known ahead of time and you're just attempting to verify correctness' --- because the signature (or each signature) has an R value that comes with it, and the signature validation is trying to verify that an R value it computes is the same as the provided one.
< gmaxwell>
You can also encode signatures another way, like the wikipedia article on schnorr signatures does-- using "e,s" which is a hash and a scalar; and this form cannot be batch verified because you don't know the result of that multiexp equation in advance.
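In equations (notation mine), the distinction gmaxwell is drawing: with the (R, s) encoding the claimed result R is part of the signature, so verification is a linear relation that folds into the batch equation above; with the (e, s) encoding the point has to be recomputed and re-hashed, which does not batch.

```latex
% (R, s) encoding, with e = H(R, P, m): check the linear relation
s G \stackrel{?}{=} R + e P .
% (e, s) encoding: recompute the point and re-hash it,
R' = s G - e P , \qquad e \stackrel{?}{=} H(R', P, m) ,
% here R' only appears inside the hash, so many verifications cannot be
% combined into a single multi-exponentiation.
```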
< cfields>
BlueMatt: ok, i have the shared_ptr change hacked in, and it's pretty huge. Lots of stuff has to change at the same time... it's kinda hard to avoid a giant commit.
< cfields>
BlueMatt: i have no problem doing your refcount change first, then undoing it with this big change next if you'd prefer.