< * gmaxwell>
checks to see if he created that article. no, surprising!
< * luke-jr>
assigns it to sipa's next proposal. jk
< adiabat>
Hey, wondering if anyone is working on the idea of the "Bloom Filter Digest", like if there's a BIP in the works or anything
< adiabat>
there was a post in the mailing list about a month ago
< Chris_Stewart_5>
If I specify a -datadir when launching an instance of bitcoind, does my bitcoind instance look for a bitcoin.conf inside of my datadir, or ~/.bitcoin/bitcoin.conf?
< achow101>
Chris_Stewart_5: it looks for it in your specified datadir
< Chris_Stewart_5>
achow101: Thanks
< phantomcircuit>
cfields_, travis appears to be borked
< phantomcircuit>
the no-wallet test fails and that pr is only touching wallet code
< phantomcircuit>
so it's definitely a travis failure
< cfields_>
phantomcircuit: see 8164. master's currently borked, so that will need to be fixed first
< phantomcircuit>
jonasschnelli, also i can swap the order of the commits in 8152 such that there's never a performance regression even within the pr
< phantomcircuit>
ah ok
< cfields_>
wumpus: for backlog, ^^ please check out 8164 when you're around if jonasschnelli doesn't beat you to it :)
< GitHub141>
[bitcoin] theuni opened pull request #8167: gitian: Ship debug tarballs/zips with debug symbols (master...split-debug) https://github.com/bitcoin/bitcoin/pull/8167
< jonasschnelli>
wumpus: ParseInt does use int32_t and not uint32_t... isn't that a problem?
< wumpus>
do you need the full range of uint32_t?
< jonasschnelli>
nSequence is uint32_t i guess.
< wumpus>
if so, we need a ParseUint32
< wumpus>
I'll make one.
< jonasschnelli>
Yes. I just saw that there are atoi calls in bitcoin-tx (before my change) and I was just continuing that way. But I agree, re-using the ParseInt* stuff is better
< wumpus>
do you need that atoi change in #8164 to 'unstuck' travis?
< jonasschnelli>
yes.
< wumpus>
ok
< wumpus>
I'll just merge #8164 then
< jonasschnelli>
yes.
< jonasschnelli>
We can change it bitcoin-tx wide later
< wumpus>
but we should move away from using low-level number parsing functions and use the ones in util where available
< wumpus>
right
< jonasschnelli>
There are several atoi calls
< jonasschnelli>
and even atoi64
< wumpus>
the problem is that those functions don't have any error reporting, or range checking, etc
< wumpus>
let's see if this one passes travis, what surprises windows has in store for us...
< gmaxwell>
Great mysteries of the window--- who knows what lies beyond its curtain-rimmed borders.
< sipa>
the Window kingdom has been in decline for years
< wumpus>
a perilous landscape full of traps and mines
< wumpus>
"In locales other than the 'C' locale, other strings may be accepted. (For example, the thousands separator of the current locale may be supported.)" ... oh crap, I thought strto* avoided the locale madness, I was wrong
< wumpus>
why is it so difficult to have a number parsing function that does strict parsing and the set of inputs that it accepts is independent of geographical conditions
< wumpus>
it's almost easier to just write one from scratch than use the C-provided functions
< wumpus>
in any case using our own utility function makes it easier to swap it out later...
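A strict, locale-independent parser of the kind wumpus describes can avoid strto* entirely by scanning digits by hand. The following is a minimal sketch under that assumption; the name ParseUInt32Strict and the exact rules (no sign, no whitespace, no separators) are illustrative, not the actual utility function:

```cpp
#include <cstdint>
#include <string>

// Sketch of a strict, locale-independent unsigned 32-bit parser.
// Accepts only ASCII decimal digits; rejects empty input, signs,
// whitespace, thousands separators, and out-of-range values.
bool ParseUInt32Strict(const std::string& str, uint32_t* out)
{
    if (str.empty()) return false;
    uint64_t val = 0;
    for (char c : str) {
        if (c < '0' || c > '9') return false; // no locale surprises
        val = val * 10 + static_cast<uint64_t>(c - '0');
        if (val > 0xFFFFFFFFull) return false; // range check as we go
    }
    if (out) *out = static_cast<uint32_t>(val);
    return true;
}
```

Because the accepted input set is fixed by the code itself, the behavior cannot vary with geographical conditions, and the bool return gives the error reporting that atoi lacks.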
< gmaxwell>
yea, there are actually a bunch of file formats that are befuxored by this... internationalization was bolted onto the standard library... so there aren't even normal locale-independent functions, and you can't change the locale without potentially screwing up other threads.. (well there is the locale_t stuff, but not really portable yet AFAIK)
< wumpus>
I can imagine - you need to be such a language lawyer to get these things right
< wumpus>
and yes it was bolted on in retrospect
< wumpus>
I'm entirely for handling locales where it makes sense, but there's a place for simple deterministic parsing functions (network protocols, file formats) and a place for internationalized handling (such as GUIs), those don't really overlap
< gmaxwell>
My general principle is to avoid strings in network protocols and file formats. :) but really, textual file formats are often fairly handy.
< wumpus>
sure, but that's a completely orthogonal discussion, usually you don't get to choose
< wumpus>
though passing binary numbers on the command line would be interesting :-)
< gmaxwell>
just don't type anything with any zero bytes...
< wumpus>
well wouldn't be the first time, passing addresses and code is very popular when trying to get setuid'ed executables to do eh non-standard things
< wumpus>
yes, those are pesky
< gmaxwell>
you mean, to do especially awesome things. :)
< wumpus>
exactly
< sipa>
wumpus: binary numbers... as in head -n 10011101 file
< wumpus>
binary, at the speed of one byte per bit!
< GitHub196>
bitcoin/master 2e49448 Wladimir J. van der Laan: tor: Change auth order to only use HASHEDPASSWORD if -torpassword...
< GitHub196>
bitcoin/master 761cddb Wladimir J. van der Laan: Merge #7703: tor: Change auth order to only use password auth if -torpassword...
< GitHub18>
[bitcoin] laanwj closed pull request #7703: tor: Change auth order to only use password auth if -torpassword (master...2016_03_auth_order) https://github.com/bitcoin/bitcoin/pull/7703
< GitHub172>
bitcoin/master 1b9e6d3 Pieter Wuille: Add support for unique_ptr and shared_ptr to memusage
< GitHub172>
bitcoin/master 8d39d7a Pieter Wuille: Switch CTransaction storage in mempool to std::shared_ptr
< GitHub172>
bitcoin/master dbfb426 Pieter Wuille: Optimize the relay map to use shared_ptr's...
< GitHub0>
[bitcoin] laanwj closed pull request #8126: std::shared_ptr based CTransaction storage in mempool (master...sharedmempool) https://github.com/bitcoin/bitcoin/pull/8126
< GitHub153>
[bitcoin] laanwj closed pull request #7530: autogen.sh: check for libtool before automake fails to find it (master...libtool-check) https://github.com/bitcoin/bitcoin/pull/7530
< jonasschnelli>
another cleanup required for one of my pulls..
< jonasschnelli>
*sight*
< sipa>
sight is good
< jonasschnelli>
*sigh* :-)
< jonasschnelli>
wumpus: can you extend your ParseUInt32 to univalue?
< wumpus>
why would univalue need to parse unsigned 32 bit integers?
< jonasschnelli>
createrawtransaction's new sequence number input per vin does not support unsigned
< wumpus>
we treat all integers as 64 bit signed
< jonasschnelli>
So > 0x7FFFFFFF will be rejected.. :(
< wumpus>
which should be enough to support the full 32 bit unsigned range
< jonasschnelli>
It calls int UniValue::get_int() const
< jonasschnelli>
which does a `if (!ParseInt32(getValStr(), &retval))`
< jonasschnelli>
and throws for values > 0x7FFFFFFF
< wumpus>
oh, just use the 64-bit one then
< sipa>
use get_int64(), and rangecheck the result
< wumpus>
I don't think we should be adding more types of integers to JSON, that just complicates things
< jonasschnelli>
Right... let me try
< wumpus>
our previous JSON parsing library didn't even have a 32-bit signed integer get
< jonasschnelli>
should we allow -1 as sequence numbers? Pretty convenient.
< wumpus>
wouldn't be very consistent if we do strict 32-bit unsigned int parsing for the -tx
< jonasschnelli>
yes. Let me do a <0 check
< wumpus>
I'd say sequence number is a positive value, and that should be enforced in the API; though -1 is convenient, negative numbers are slightly ambiguous
< wumpus>
we also print sequence numbers as unsigned int I hope?
< jonasschnelli>
wumpus: at least decoderawtransaction does this correctly.
< jonasschnelli>
I guess UniValue has no uint32 pair?
< jonasschnelli>
but (uint64_t)txin.nSequence is fine IMO
< wumpus>
yea, please just use 64-bit signed integers with JSON
< wumpus>
there's no need to support other integer types - certainly not for output, for input a specific range checked get function could be mildly useful, but that probably belongs at the application (argument checking) side, not in univalue itself
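sipa's suggestion (get_int64() plus a range check on the application side) amounts to something like this minimal sketch; the function name and the error type here are illustrative, not the code that ultimately landed:

```cpp
#include <cstdint>
#include <stdexcept>

// Hypothetical application-side check: take the JSON value as a
// 64-bit signed integer (what UniValue's get_int64() returns), then
// reject anything outside the unsigned 32-bit nSequence range
// before narrowing.
uint32_t SequenceFromInt64(int64_t seqNr64)
{
    if (seqNr64 < 0 || seqNr64 > 0xFFFFFFFFLL) {
        throw std::runtime_error("sequence number out of range");
    }
    return static_cast<uint32_t>(seqNr64);
}
```

This keeps JSON handling to a single signed 64-bit integer type, as wumpus argues, while still accepting the full unsigned 32-bit range.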
< GitHub34>
[bitcoin] jonasschnelli opened pull request #8171: [RPC] Fix createrawtx sequence number unsigned int parsing (master...2016/06/fix_crt) https://github.com/bitcoin/bitcoin/pull/8171
< GitHub68>
[bitcoin] sipa opened pull request #8172: Fix two warnings for comparison between signed and unsigned (master...fixunsigned) https://github.com/bitcoin/bitcoin/pull/8172
< GitHub12>
bitcoin/master 2d83013 Jonas Schnelli: Add support for dnsseeds with option to filter by servicebits
< GitHub12>
bitcoin/master cd0c513 Pieter Wuille: Merge #8083: Add support for dnsseeds with option to filter by servicebits...
< GitHub53>
[bitcoin] sipa closed pull request #8083: Add support for dnsseeds with option to filter by servicebits (master...2016/05/dnsfilter) https://github.com/bitcoin/bitcoin/pull/8083
< cfields_>
jonasschnelli: great work on ^^
< sipa>
petertodd, luke-jr: subtle ping to update your dns seeds to support service bits filtering
< jonasschnelli>
cfields_: the bitcoin part was easy, the part on the bitcoin-seeder was more complex. :)
< jonasschnelli>
Also code I'm not used to playing around with.
< jonasschnelli>
But as always, sipa made the important last changes/fixes. :)
< cfields_>
jonasschnelli: yea, i took a quick look. steep learning curve there
< sipa>
i wrote it in a week while i was ill
< sipa>
certainly not the best code i've written :)
< cfields_>
heh, well it's apparently pretty bullet-proof. can't argue with that :)
< cfields_>
ok, I was trying to avoid this because i know everyone's busy with a dozen other things, but I'll be away for a while after Friday, and I'm hoping to get as much net refactor stuff as possible knocked out first...
< cfields_>
so... review begs for #8128 and #8085
< cfields_>
(and taking requests for anything I should prioritize before leaving)
< jonasschnelli>
Is there a way within CWallet / CWalletTx to detect which of the tx.vout's is the change output if the wtx is a spend-to-myself?
< jonasschnelli>
AddressBook lookup?
< luke-jr>
yes
< jonasschnelli>
So I need to solve the pubkey and check if its P2PKH (or different), get the CKeyID and do an addressbook lookup...
< jonasschnelli>
hmm... not good. :)
< jonasschnelli>
s/solve the pubkey/use the solver on the scriptPubKey to retrieve the address
< sipa>
jonasschnelli: no, you convert to a CTxDestination and then check whether that's in the address book
< jonasschnelli>
sipa: with ExtractDestination()?
< sipa>
yes
< jonasschnelli>
Okay.. that sounds feasible.
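The heuristic sipa describes can be modeled with a toy structure (names here are illustrative, not Core's): in a spend-to-self transaction, an output counts as change when its destination belongs to the wallet but has no address-book entry. In Core this would use ExtractDestination() on the scriptPubKey plus a mapAddressBook lookup.

```cpp
#include <map>
#include <set>
#include <string>

// Toy model of change detection via address-book lookup.
// Destinations are plain strings standing in for CTxDestination.
struct ToyWallet {
    std::set<std::string> myDestinations;           // destinations we can spend
    std::map<std::string, std::string> addressBook; // labelled destinations

    // Change = ours, but never given a label (i.e. not in the book).
    bool IsChange(const std::string& dest) const {
        return myDestinations.count(dest) > 0 && addressBook.count(dest) == 0;
    }
};
```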
< cfields_>
sipa: wrt #7749, do we want to maybe filter the addresses we send in response to getaddr based on the current connection's common services? or maybe prioritize those somehow?
< jl2012>
is it possible to use signrawtransaction to sign a multisig tx with flags other than SIGHASH_ALL?
< gmaxwell>
Yes.
< gmaxwell>
it takes an argument to set the flags it will sign with.
< NicolasDorier>
sipa: long story short bytespersigop enforced in AcceptToMempool broke some use case on upper layer (https://github.com/bitcoin/bitcoin/issues/8079) we have a simple way to fix it, but it would require to count signatures for a transaction a second time accurately. Do you think it would be a problem performance wise? I don't think so but maybe I am missing
< NicolasDorier>
something.
< jl2012>
gmaxwell: the sighash flag is the 4th argument. 1st argument is the unsigned tx. However, when I use [] [] as the 2nd and 3rd arguments, it fails to sign
< jl2012>
without any optional arguments, it signs normally
< jl2012>
btw i'm signing a P2SH-P2WSH
< NicolasDorier>
oh whatever, I'm pretty sure it can't be a perf problem, as the second count is done only on a very specific case. nevermind
< sipa>
NicolasDorier: i don't understand the solution you suggest
< NicolasDorier>
sipa: Well basically bytespersigop uses GetLegacySigOpCount, which overshoots the real multisig signature count. Because of that, it broke an upper-layer protocol (Counterparty)
< NicolasDorier>
the solution
< NicolasDorier>
would be "if(nSigOps > MAX_STANDARD_TX_SIGOPS)", then you count signatures again accurately, and use that count for calculating whether bytespersigop is reached or not
< sipa>
heh, does counterparty still exist
< sipa>
i see
< sipa>
that would work
< NicolasDorier>
well, I'm not using it, but yeah it seems very much alive
< sipa>
:(
< sipa>
anyway, sounds like a good solution
< NicolasDorier>
ok cool
< rubensayshi>
counterparty uses it as fallback when > 80 bytes, which is 0.0x% or something
< gmaxwell>
People involved with counterparty have told me they intend for counterparty to replace the bitcoin currency because the distribution of bitcoins is "unfair" and counterparty is "more equitable"-- I don't think it's accurate to describe it as an "upper layer" system, it's a system that explicitly is in competition with the bitcoin currency.
< sipa>
NicolasDorier: your solution does not work, because it reintroduces the problem that the original PR was intended to fix
< sipa>
NicolasDorier: the consensus rules count those as 20 sigops; if for mining purposes we do not count them as 20, the attack reappears
< sdaftuar>
sipa: doesn't the code as merged still allow for an attack, as you could fill up 400kb of block space with 20k sigops?
< sipa>
sdaftuar: hmm?
< sipa>
20k sigops should be counted as 1 MB, no?
< sdaftuar>
sorry i think i phrased poorly. at 20bytes/sigop, you could hit the 20k sigops limit with only 400kb of transactions
< sipa>
it should be 50 bytes/sigop
< sdaftuar>
yes
< sdaftuar>
unless there are valid use cases which we'd preclude, that is?
< sdaftuar>
but regardless we should have had this discussion when that PR was merged. i have no idea where 20 comes from
< sdaftuar>
or what the valid use cases are...
< sipa>
i think i assumed that number was the correct translation factor when it was merged
< sipa>
but dropping transactions that go over the limit is wrong, i think
< sipa>
we should just count them as if they were the corresponding size
< sdaftuar>
you mean for feerate purposes?
< gmaxwell>
that's what I argued for forever, but there was some reason people didn't like it... convert each limit to a feerate, and take the worst... under the approximation that whatever is worst will be the limiting factor.
< sdaftuar>
that approximation doesn't hold in CreateNewBlock, most of the time, i'd think
< sdaftuar>
most of the time you're nowhere near the sigops limit
< gmaxwell>
it doesn't, but the size is the worst most of the time, so it doesn't matter except for exceptional transactions.
< sipa>
yes, this is the nonlinear optimization problem again
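One way to read gmaxwell's "convert each limit to a feerate, and take the worst" is to charge each transaction the larger of its byte size and its sigop-implied size. This is a sketch of that idea under stated assumptions (the function name is illustrative, and the 50-bytes-per-sigop figure is taken from sipa's comment above, not from actual defaults):

```cpp
#include <algorithm>
#include <cstddef>

// Instead of rejecting high-sigop transactions outright, count them
// as if they occupied the block space their sigops imply. With a
// 1,000,000-byte block and 20,000 sigops, each sigop "costs" 50 bytes.
size_t EffectiveTxSize(size_t nBytes, unsigned int nSigOps,
                       size_t bytesPerSigOp /* e.g. 50 */)
{
    return std::max(nBytes, static_cast<size_t>(nSigOps) * bytesPerSigOp);
}
```

Using the same effective size for fee-rate sorting in both relay and mining means a sigop-bloated transaction must pay as much fee as the full block space it consumes, which removes the incentive for the attack without dropping the transaction.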
< paveljanik>
sipa, thanks for fixing the warnings!
< sipa>
but if you use the same count for mining and for relay/priority, there is no problem i think
< sipa>
at least you wouldn't lose money over it as a miner
< sdaftuar>
well miners wouldn't be doing the optimal thing?
< helo>
they'll intend to do the optimal thing, at least
< sipa>
sdaftuar: right, suboptimal, but not susceptible to losing a majority of your available block space
< gmaxwell>
Suboptimal though only in the presence of transactions whose sigops cost would dominate their size cost.
< sdaftuar>
yeah maybe that's good enough, and better than the status quo
< gmaxwell>
It's my belief (and hope) that the other limits are set so high that they should never come into effect in practice.
< gmaxwell>
(though this isn't true if people use bare multisig, but they mostly don't)
< sipa>
the fact that this problem only got detected so many months after 0.12 shows that probably not many people use bare multisig...
< GitHub47>
[bitcoin] laanwj opened pull request #8175: gitian: Add --disable-bench to config flags for windows (master...2016_06_disable_bench_windows) https://github.com/bitcoin/bitcoin/pull/8175
< rubensayshi>
offtopic; gmaxwell, I've never heard anyone say counterparty competes with bitcoin; its focus is tokens (and soon EVM), it would be insane to think it could compete with bitcoin (considering the reduced efficiency)
< rubensayshi>
sipa, I wouldn't find it odd if you guys decided to block bare multisig from isstandard, but this check wasn't intended that way, even though that's effectively what it did
< rubensayshi>
though there might be people still using it, and I don't see it being isstandard as such a big problem
< sipa>
rubensayshi: agree, i think there are good reasons to make it nonstandard, but it should happen intentionally and after communication, not as an unintended side effect
< rubensayshi>
the consensus rules count bare multisig as 20 sigops, and considering it's part of consensus should continue to do so
< rubensayshi>
I guess the reason why we don't properly count the sigops to begin with is because it's been part of consensus since day 1
< gmaxwell>
it wasn't a part of consensus since day 1, </pedantic>
< rubensayshi>
oh?
< sipa>
i assume it was introduced somewhere in 2010
< sipa>
whether they were part of the consensus rules on day 1 is also irrelevant; what matters is whether they are part of consensus now :)
< rubensayshi>
ok, but changing the `bytespersigop` check in AcceptToMempool to use `fAccurate=True` shouldn't be a problem right?
< gmaxwell>
that would defeat the fix against the bloat attack.
< gmaxwell>
The counting has to work exactly as the consensus rule does.
< rubensayshi>
hmm, the consensus prevents a block being larger than 1mb or 20k sigops, so you don't want to accept any txs that would tip over the balance to reaching the 20k sigops before you'd reach 1mb?
< rubensayshi>
as in; to optimize fees?
< gmaxwell>
rubensayshi: that's right.. there was some attacker a while back flooding the network with transactions that used huge amounts of sigops, which would cause miners to needlessly produce small blocks.
< gmaxwell>
there are multiple ways to address that.
< rubensayshi>
ok so I guess I get sipa's point, because we rely on fee/size and not on sigops/size when that's higher
< rubensayshi>
so there's no way to bring back bare multisig other than miners choosing to run with a lower `bytespersigop` (but you just said the default should be 50, not the current 20 to begin with) or changing the consensus rule where bare multisig is counted as 20 sigops?
< gmaxwell>
the broken counting was a softfork added in sept 2010, in ~0.3.12.
< rubensayshi>
so the only valid option would be to improve selecting TXs for blocks in a way that it won't use TXs with high sigops/bytes if it would result in not having a full block so that the check doesn't have to be in the mempool policy
< rubensayshi>
which is ...
< rubensayshi>
way too much complexity and too big of a task
< gmaxwell>
20 is more permissive than 50, fwiw.
< gmaxwell>
there was a discussion on IRC about setting it, and 20 seemed to be the lowest that it could be set without outright enabling that attack.
< sdaftuar>
gmaxwell: pointer to the IRC conversation? i looked and never found any discussion
< gmaxwell>
rubensayshi: right, and generally we consider bare multisig undesirable for unrelated reasons too, and there is longstanding discussion toward moving to make it non-standard... so doesn't really justify a bunch of complexity to try to work around it.
< rubensayshi>
I guess it should just be made non standard then
< rubensayshi>
which it essentially is now
< rubensayshi>
how about some extra bytes for opreturn then :P ?
< gmaxwell>
sdaftuar: turns out searching for the number 20 is really hard.
< luke-jr>
regardless of what we do to fix bytespersigop, I think we should disable bare multisig by default; with that in mind, *which* solution we go with seems less important
< luke-jr>
(but IMO the better fix is to simply count CHECKMULTISIG correctly for this purpose, since the goal is spam prevention, and higher fees don't matter in that case)
< luke-jr>
rubensayshi: what happened to OP_RETURN counterparty?
< gmaxwell>
it isn't about charging more fees, the whole attack was causing miners to produce needlessly small blocks because they thought sigopbloat txn were more attractive to produce than they were.
< gmaxwell>
if an attacker had to pay as much to 'fill' a block that way as they would with ordinary transactions, then it's no longer an interesting attack vector.
< rubensayshi>
luke-jr, literally 99.988% of CP txs are opreturn
< btcdrak>
luke-jr: they use OP_RETURN for messages < 80 bytes which is most of them.
< luke-jr>
rubensayshi: why not 100%?
< luke-jr>
rubensayshi: is there a good way we could teach Core to identify CP OP_RETURN separate from spam OP_RETURN, so we can allow longer lengths for CP only?
< rubensayshi>
is there a reason not to change isstandard to allow opreturn with more data?
< rubensayshi>
they don't pollute the utxo set and are prunable
< rubensayshi>
and fee is paid for them
< gmaxwell>
rubensayshi: Bitcoin is a currency not a public shared database.
< sipa>
you're storing data on my disk, without benefitting me or the bitcoin ecosystem
< btcdrak>
gmaxwell: what concerns me is if systems resort to bloating the UTXO with unspendable transactions as a way to encode >80 bytes.
< gmaxwell>
btcdrak: they're not unspendable AFAIK.
< luke-jr>
btcdrak: it seems they currently use spendable CHECKMULTISIGs
< rubensayshi>
the bare multisig are spendable btw
< luke-jr>
1-of-2 with the 2nd key up to 500 or so bytes
< gmaxwell>
though if they were we should simply filter them out generally.
< rubensayshi>
that's the reason to use bare multisig, they're 1-of-3 and the 3rd key is a real key
< rubensayshi>
the last resort is pubkeyhash encoding ...
< luke-jr>
we could probably enforce a pubkey format for bare multisig even when they're enabled, but nobody afaik is legitimately using it, so might as well just disable it by default
< btcdrak>
gmaxwell: pubkeyhash encoding is unspendable afaik
< luke-jr>
pubkeyhash encoding can't do >80 bytes anyway?
< rubensayshi>
yea
< rubensayshi>
100s of outputs ...
< luke-jr>
-.-
< rubensayshi>
all unspendable
< luke-jr>
sigh, maybe we really do need p2sh^2 sooner rather than later
< rubensayshi>
the bare multisig was perfect tbh, because we could clean the outputs
< btcdrak>
it really is about the lesser of the evils. I would say a slightly larger OP_RETURN is preferable than unspendable junk
< rubensayshi>
as a fallback to opreturn that is
< luke-jr>
[19:18:46] <luke-jr> rubensayshi: is there a good way we could teach Core to identify CP OP_RETURN separate from spam OP_RETURN, so we can allow longer lengths for CP only?
< gmaxwell>
rubensayshi: you should have your own network, and stop storing data unrelated to bitcoin in the bitcoin network.
< gmaxwell>
The OP_RETURN as standard facility was intended to store _commitments_ not data.
< rubensayshi>
luke-jr, is there a way you won't change that to drop all of them the next release xD?
< rubensayshi>
gmaxwell, I'm just a script kiddie who dropped by a project that needed some work and sounded fun to do
< rubensayshi>
I didn't come up with this stuff xD
< gmaxwell>
rubensayshi: :)
< btcdrak>
:D
< rubensayshi>
I just get the bug reports
< gmaxwell>
rubensayshi: but you could help rescue it. :)
< gmaxwell>
Dare to dream.
< btcdrak>
sidechains...
< rubensayshi>
hehe
< gmaxwell>
it's not even a 'sidechain'-- it's a separate currency/asset tracking network. :)
< luke-jr>
rubensayshi: believe it or not, the 0.12 thing was an accident in affecting CP
< gmaxwell>
indeed, that wasn't intended. if we had intended to block CP then it would be blocked completely.
< rubensayshi>
I gave you the benefit of doubt luke-jr, but not having any tests for it makes it look funky (I'll write some tests 2morrow for it!)
< adiabat>
is there a recommended / reliable way to put this data in the witness stack?
< adiabat>
it'd be nicer to have it there instead of in an OP_RETURN
< rubensayshi>
would that be better?
< luke-jr>
adiabat: it's probably better in OP_RETURN tbh
< gmaxwell>
adiabat: it's not under a signature there, so it can be stripped in relay, unfortunately.
< adiabat>
yeah...
< gmaxwell>
There is no good place to store arbitrary data, unfortunately.
< adiabat>
you'd have to put like a hash of it in the output script, then put your 520 byte preimage in the witness
< rubensayshi>
op_drop?
< luke-jr>
gmaxwell: Factom! :P
< adiabat>
heh
< gmaxwell>
except via a _commitment_ in op_return and the data elsewhere, which is what op_return as standard was already supposed to be.
< adiabat>
op_2drop is even more efficient :)
< rubensayshi>
if all my addresses would be P2SH I could put it in the scriptSig with an op_drop no?
< gmaxwell>
The problem with op_return is that the data still ends up in pruned no-witness sync and in SPV scans. The problem with putting it in the witness is that it's not signed.
< gmaxwell>
rubensayshi: sure you can, but any joker can strip it out of the transactions as they go past.
< adiabat>
at some point someone should make like a p2pool backed merkle-branch-service
< rubensayshi>
ah right
< adiabat>
~most of the data on the segnet4 blockchain is op_2drop witness items
< adiabat>
nobody modified any of them :)
< gmaxwell>
I think we should add some 'notes' facility, where people can publish data into a DHT(spit) with access rate limited by ownership of stationary txouts. Then if they want commitments in op-returns great. We'd hoped people would build this for themselves, but building complex infrastructure isn't something many people do when there is a speculative asset they could be pumping instead.
< gmaxwell>
adiabat: thats been done, it was called chronobit.
< gmaxwell>
adiabat: yea, sure lots of things work for a while. :)
< adiabat>
huh! look at that. and people use op_return instead...
< luke-jr>
gmaxwell: the hard part there is proving ownership
< gmaxwell>
luke-jr: you just write a non-minable transaction to perform your insert.
< luke-jr>
gmaxwell: that involves key reuse :<
< gmaxwell>
adiabat: utter refusal to use anything except the simplest possible thing. "This is why we can't have nice things".
< gmaxwell>
luke-jr: so? not in a way that matters.
< gmaxwell>
Key reuse isn't inherently bad. Reusing in dumb ways is.
< gmaxwell>
If your alternative was to transact, then signmessaging is strictly superior.
< adiabat>
oh, semi-unrelated but, gmaxwell: remember the "bloom filter digest" post about a month ago on the mailing list
< luke-jr>
there is zero risk of QC ever?
< adiabat>
and you said, since you're not updating, there's better structures than bloom filters
< gmaxwell>
luke-jr: it doesn't change anything wrt QC when your alternative was just transacting!
< luke-jr>
transacting doesn't leave coins on the key ;)
< gmaxwell>
adiabat: yes.
< adiabat>
gmaxwell: Do you know if anyone's working on that, or a BIP or anything? It looked like it was from a troll account...
< gmaxwell>
I don't think anyone is working on it right now.
< adiabat>
I don't really want to commit to like "I'll work on that" because I might not have time but...
< adiabat>
it seems so much nicer than the current merkle block stuff
< gmaxwell>
adiabat: well would be good for you to, and if other people show up I can point them at you.
< luke-jr>
a better way IMO would be to use sign-to-contract to commit to another key, and use that key to sign the DHT publication
< gmaxwell>
TBH, I'm a bit afraid to work on technology that I really like right now, because it'll just get attacked because I like it. :(
< AaronvanW>
"Do you guys know what the latest up to date spec for stealth addresses is?" (asking for someone... who is asking for me.)
< gmaxwell>
luke-jr: but then access to publish is people who don't have bitcoins anymore which would be kinda odd. :)
< adiabat>
gmaxwell: heh yeah... do you have any links to the more efficient filters, like the binomial codec you linked to?
< luke-jr>
gmaxwell: contracthash then? :P
< gmaxwell>
adiabat: well the more efficient filter is just like a bloom but with a single hash function and large number of candidate positions, and then you compress the result, using your choice of optimal compressor for cases where the probability of a 1 is very low
< gmaxwell>
luke-jr: doesn't achieve your goal.
< luke-jr>
hm
< gmaxwell>
I wish I never pointed out that the use of hashed keys might harden a little against QC; its benefit is vastly overestimated. If we think that is at all a threat we need to be urgently migrating to secure schemes.
< luke-jr>
MAST with a branch for payment vs message vs DHT-message :P
< gmaxwell>
adiabat: So, for example, a simple range coder would work. rice coding might be reasonably efficient, or things like the binomial codec I linked to.
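The single-hash-plus-compression scheme gmaxwell describes can be sketched concretely: hash every item to one of N candidate positions, sort the positions, and Rice-code the gaps between them. The toy encoder below is a sketch under stated assumptions; the bit ordering, the unary convention, and the class names are illustrative, not a specified digest format:

```cpp
#include <cstdint>
#include <vector>

// Minimal MSB-first bit writer.
class BitWriter {
public:
    std::vector<uint8_t> bytes;
    int nbits = 0;
    void WriteBit(bool b) {
        if (nbits % 8 == 0) bytes.push_back(0);
        if (b) bytes.back() |= 1 << (7 - nbits % 8);
        ++nbits;
    }
    void WriteBits(uint64_t v, int count) { // most-significant bit first
        for (int i = count - 1; i >= 0; --i) WriteBit((v >> i) & 1);
    }
};

// Rice-code one gap: quotient in unary (1-bits, 0-terminated),
// then the P low-order bits of the remainder.
void RiceEncode(BitWriter& bw, uint64_t gap, int P)
{
    uint64_t q = gap >> P;
    for (uint64_t i = 0; i < q; ++i) bw.WriteBit(true);
    bw.WriteBit(false);
    bw.WriteBits(gap & ((1ull << P) - 1), P);
}

// Encode a sorted set of hash positions as Rice-coded gaps.
std::vector<uint8_t> EncodeSet(const std::vector<uint64_t>& sorted, int P)
{
    BitWriter bw;
    uint64_t last = 0;
    for (uint64_t v : sorted) {
        RiceEncode(bw, v - last, P);
        last = v;
    }
    return bw.bytes;
}
```

Because gaps between uniformly distributed positions are geometrically distributed, Rice coding with a well-chosen P gets close to the entropy limit, which is the sense in which this beats a plain Bloom filter of equal false-positive rate.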
< gmaxwell>
luke-jr: cute.
< adiabat>
gmaxwell: so maybe start with the current murmur hash, and look at different compression encodings?
< luke-jr>
gmaxwell: I consider it a useful property in that it doesn't cause QCs to take all old coins not being spent immediately. It does that, at least, no?
< gmaxwell>
adiabat: or siphash 1-3. Yes, that's possible.
< gmaxwell>
adiabat: another newly trendy kind of data structure for this is a cuckoo filter, though for ideal use here you'd also need to compress it, though the compression could be simpler.
< gmaxwell>
it might work better simply because it will be smaller in memory when matching.
< adiabat>
gmaxwell: OK thanks, will look into it. Probably can't work on it much, but it doesn't *feel* that hard... in fact feels simpler than the current merkle-blocks and then send txs stuff
< gmaxwell>
Yes, it's simpler. I think fully elaborated out you could go very deep down a rabbit hole, but the basic idea is simple.
< adiabat>
gmaxwell: the main design tradeoff seems to be the size of the block digest. Too small and lots of false positives and people have to download lots of blocks. Too big and you're spending a lot of space on the digest.
< gmaxwell>
A cool thing about it is that it could be used before being committed, so people could try many different designs and explore the solution space pretty far.
< adiabat>
yeah, could implement it on the p2p layer without any consensus changes
< gmaxwell>
well you can also do things like multiple tiers, like a digest covering groups of 8 blocks, and a digest covering single blocks.. even digests covering parts of blocks.
< adiabat>
and the node could lie to you, but they can do that right now with bloom filters anyway
< adiabat>
also feels like maybe a perverse incentive would be that you'd not want to make as many new addresses, as your false positive rate would go up and you'd have to download more...
< gmaxwell>
the current system has that issue, these committed schemes have less of it.
< adiabat>
yeah, basically nothing about it seems any *worse* than the current filter system
< gmaxwell>
There are other totally different alternatives though, like PIR scan services, which I think we almost have enough in bitcoin core to support as a purely external add on.
< adiabat>
PIR scans would be better but also seems more complex...
< gmaxwell>
much more complex, though a lot of the heavy lifting has been done by other people (percy++)
< adiabat>
not that people shouldn't do it, but I feel like I could reasonably get a block digest to work, but getting a PIR system seems more .. innovative :)
< gmaxwell>
hah
< gmaxwell>
I only bring it up in the hope that someone will be foolish enough to think it easy.
< gmaxwell>
Though actually I think they're more similar in difficulty than you think... in both cases the details are what get you.
< adiabat>
yeah, I guess with the digest, you can get something working even if the details are wrong and it works sub-optimally
< adiabat>
with PIR, well... I guess you wouldn't really know if it was horribly broken and revealed everything that you were requesting
< gmaxwell>
PIR is straight forward: there is existing software that lets you have a {key, value} database and query it privately. Take the existing utxo set, order by address, for every txout for each address, generate a txout proof (gettxoutproof rpc) for it. ... now that's your database. People query it with each of their addresses, and import the results into the wallet after verifying the proofs. Tada.
< gmaxwell>
The reality is less simple, because different addresses have (vastly) different numbers of txouts connected with them.. so you have to have some way of handling it.
< gmaxwell>
Probably the thing to do is take all the txouts for each address and make the keys key, key_2, key_3, key_4... for each of them, and have the first key tell you how many txouts there are in total.
< gmaxwell>
Though that still will have some inefficiency since the txoutproof has the whole transaction paying you in it, so they'd all need to be padded up to a constant (large size).
< gmaxwell>
and maybe the inefficiency of all that makes it unreasonable to use.
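The per-address key layout gmaxwell sketches can be modeled like this. The code is a toy under one possible reading of "key, key_2, key_3...": the bare address key holds the txout count, and each padded txout proof lives under addr_N starting at N=2; all names and the padding scheme are illustrative:

```cpp
#include <cstddef>
#include <map>
#include <string>
#include <vector>

// Build a toy {key, value} PIR database: for each address, store the
// proof count under the address itself and each proof, padded to a
// constant record size, under "addr_2", "addr_3", ...
std::map<std::string, std::string>
BuildPirDb(const std::map<std::string, std::vector<std::string>>& proofsByAddr,
           size_t padTo)
{
    std::map<std::string, std::string> db;
    for (const auto& entry : proofsByAddr) {
        const std::string& addr = entry.first;
        const std::vector<std::string>& proofs = entry.second;
        db[addr] = std::to_string(proofs.size()); // first key: txout count
        for (size_t i = 0; i < proofs.size(); ++i) {
            std::string rec = proofs[i];
            rec.resize(padTo, '\0'); // pad to a constant record size
            db[addr + "_" + std::to_string(i + 2)] = rec;
        }
    }
    return db;
}
```

The constant-size padding is exactly the inefficiency noted above: every record must be padded up to the largest possible txout proof for the queries to stay uniform.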
< adiabat>
yeah... also percy++ looks a little scary to work with in that I don't think there's a lot of real world implementations using it right now
< adiabat>
whereas the block digest idea just jumps out as like "Oh yeah that'll work!"
< gmaxwell>
Right. Well most people have no desire, "keep user data private? but then how will we sell it?" :) a positive point is that the academics working on it are actually competent programmers too (not always the case)... so I think it's likely to not be a software engineering disaster (and it's always worked right when I've messed around with it)
< adiabat>
yeah I looked at percy++ a year or two ago and it does seem to be well made
< gmaxwell>
adiabat: indeed, it will, though less private! it's also not exclusive with using PIR.. a simpler way to use PIR would be to use it to fetch the whole blocks you're going to fetch.
< adiabat>
I guess I'm also not just looking at it from a privacy perspective, though that's a big part of it
< gmaxwell>
::nods::
< adiabat>
with LN channels, a false negative can be a big problem
< gmaxwell>
the existing bloom stuff is just broken on a bunch of levels.
< adiabat>
yeah, if the node you're asking to filter for you omits the tx where someone closes a channel incorrectly, you might not know and lose coins
< gmaxwell>
it's vulnerable to attack both from data hiding and from denial of service, it's resource intensive, .. strongly discourages single use addresses.. quite non-private...
< adiabat>
oh and the merkle-block data structure is... weird...
< adiabat>
and then it sends the txs, unrequested
< _anthony_>
bitcoin satellites are definitely the way to go
< gmaxwell>
yea, a side effect of the bitcoin protocol having no mechanism to just fetch txn already in blocks, which has a good reason for it (among other things, it helps keep the network from being abused as a file trading DHT)
< gmaxwell>
BIP152 actually adds a suitable mechanism-- getblocktxn, which fetches txn in a block by index; it only works for recent blocks.