< wumpus>
seems like a nice feature, if it can list the failed checks in the PR instead of having to dig into the travis log every time
< wumpus>
although having to look for 'apps' in a 'marketplace' seems kind of silly
< fanquake>
heh
< wumpus>
"Organization owners and users with push access to a repository can create checks and statuses with GitHub's API. For more information, see "Checks" and "Statuses" in the GitHub Developer documentation."
< wumpus>
so we could push *our own* statuses, funny. Though agree with promag it would be useful if this was integrated into, say, travis, to not have to run parallel checking infrastructure.
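A minimal sketch of what pushing a commit status via GitHub's Statuses API could look like, per the documentation quoted above. The repository slug, commit SHA, token variable, and check name are placeholders, not values from this discussion:

    # Hedged sketch: create a commit status via GitHub's Statuses API.
    # Repo slug, SHA, token and context below are placeholders.
    import os
    import requests

    def push_status(repo, sha, state, context, description, target_url=None):
        """POST a status for a commit; state is error/failure/pending/success."""
        resp = requests.post(
            f"https://api.github.com/repos/{repo}/statuses/{sha}",
            headers={
                "Authorization": f"token {os.environ['GITHUB_TOKEN']}",
                "Accept": "application/vnd.github.v3+json",
            },
            json={
                "state": state,
                "context": context,
                "description": description,
                "target_url": target_url,
            },
        )
        resp.raise_for_status()
        return resp.json()

    # e.g. a CI job reporting a lint result against a commit:
    # push_status("owner/repo", "<commit sha>", "success",
    #             "ci/lint", "linter passed", "https://example.org/log")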
< wumpus>
I think we should try to keep high-priority discussion in the meetings
< wumpus>
that makes sure at least everyone has an idea of what is added. I'm okay with adding one in between when there is really a hurry, but this even has a [WIP] tag still
< wumpus>
also I think high priority == 0.16.1 now
< bitcoin-git>
bitcoin/master 97c112d Ben Woosley: Declare TorReply parsing functions in torcontrol_tests...
< bitcoin-git>
bitcoin/master 536120e MarcoFalke: Merge #13291: test: Don't include torcontrol.cpp into the test file...
< bitcoin-git>
[bitcoin] MarcoFalke closed pull request #13291: test: Don't include torcontrol.cpp into the test file (master...tor-reply) https://github.com/bitcoin/bitcoin/pull/13291
< jnewbery>
promag: Of course I will, in good time. I've just reviewed #13100, and I don't think there's a huge rush to review all the load/unload wallet PRs in parallel
< promag>
jnewbery: what should come first? menu entries or unload?
< jnewbery>
I don't think it matters. I'm more concerned about not hogging reviewer/maintainer time.
< jnewbery>
I got another update from Ben at Github: One more quick update, the performance improvement I mentioned also made it into production moments ago, so hopefully the URL argument workaround should be less-and-less necessary as well.
< jamesob>
potentially incorrect behavior in waitforblockheight (and anything else relying on rpc/blockchain.cpp:latestblock)
< jtimon>
MarcoFalke: ok, no hurry, good to know someone tried it, now I'm curious about the issues in the tests and I'll play with travis and the backport
< MarcoFalke>
jamesob: waitforblockheight is a hidden tests-only rpc
< jonasschnelli>
Looks good,.. maybe orthogonal, but the prune settings should be in the intro...
< provoostenator>
There's some confusion around whether using QT settings is appropriate for this, and I see three ways out.
< jonasschnelli>
That's where it is probably most valuable
< provoostenator>
1. ignore the problem
< wumpus>
I'm ok with most solutions, except writing to bitcoin.conf
< provoostenator>
2. go the writable config file route
< jonasschnelli>
what wumpus says
< kanzure>
hi.
< provoostenator>
3. interpret a lack of prune= setting differently
< jonasschnelli>
Can't the GUI settings just override what is already set?
< instagibbs>
(3) is interesting
< achow101>
there currently are a few settings that are saved to qt settings that are shared between qt and bitcoind
< achow101>
but bitcoind can't access
< provoostenator>
If we go for (2) then I'd like to nominate #11082 for priority review
< gribble>
https://github.com/bitcoin/bitcoin/issues/11082 | Add new bitcoin_rw.conf file that is used for settings modified by this software itself by luke-jr · Pull Request #11082 · bitcoin/bitcoin · GitHub
< jonasschnelli>
If prune is set via conf/startup, disallow access in the GUI settings (only display it)
< achow101>
so if we follow what has been done previously, then we ignore it
< wumpus>
yes an additional writable config file would be fine
< jonasschnelli>
Do we want four(!) levels of configuration?
< jimpo>
What about augmenting the GUI to help you generate a default config file when none already exists?
< jonasschnelli>
And eventually importing conf-files?
< wumpus>
there have also been plans to add RPCs to change configuration settings, that'd require a similar thing, just don't write to the -conf file, it's just as likely to be in a read-only directory
< jonasschnelli>
Would the rw_conf file be replacing the QSettings layer?
< wumpus>
jonasschnelli: probably
< provoostenator>
Right, I'd also like to get rid of QT settings completely and use a read-write (separate) config file.
< wumpus>
jonasschnelli: long term, at least
< jonasschnelli>
That would be acceptable
< provoostenator>
I wrote a migration away from QTSettings here: #12833
< jonasschnelli>
But please not conf<->startup-cli-params<->QTSettings<->level4_rw_conf
< wumpus>
nice
< jonasschnelli>
Thanks provoostenator for working on that!
< achow101>
since this is a problem that affects multiple options, can we just ignore the problem for now and deal with them all at the same time with a better solution?
< wumpus>
yes storing some of the settings in a different place has been problematic
< wumpus>
(at least in the QSettings - because bitcoind can't get there)
< instagibbs>
users could by and large be migrated over to the rw unless they have a need for read-only
< instagibbs>
for simplicity
< wumpus>
the only thing that would ideally be stored in the QSettings would be the data directory
< wumpus>
well the bitcoin.conf is for human editing
< provoostenator>
This migration would also work if it's done _after_ we add prune stuff to QTSettings.
< wumpus>
the _rw is machine writable, all comments will be discarded etc
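A small sketch of the machine-writable settings file being discussed; #11082 was still under review, so the file name, location, and key=value format here are assumptions rather than the PR's final behaviour:

    # Hedged sketch: a settings file the program itself rewrites.
    # File name/location are hypothetical (cf. #11082).
    from pathlib import Path

    RW_CONF = Path.home() / ".bitcoin" / "bitcoin_rw.conf"

    def read_rw_settings(path=RW_CONF):
        settings = {}
        if path.exists():
            for line in path.read_text().splitlines():
                line = line.strip()
                if line and not line.startswith("#"):
                    key, _, value = line.partition("=")
                    settings[key.strip()] = value.strip()
        return settings

    def write_rw_settings(settings, path=RW_CONF):
        # Rewritten wholesale by the program: comments and ordering are not
        # preserved, matching "all comments will be discarded" above.
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text("".join(f"{k}={v}\n" for k, v in sorted(settings.items())))

    s = read_rw_settings()
    s["prune"] = "550"      # e.g. the GUI persisting a prune target
    write_rw_settings(s)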
< instagibbs>
ah hm
< jonasschnelli>
yes
< provoostenator>
But if we can get the proper solution over with, that might be better.
< jonasschnelli>
For 12833 I'm also unsure about the term "Limit".
< wumpus>
on the other hand, having things be dependent on each other is usually a bad idea, just draws out things
< jonasschnelli>
Since this is not true
< provoostenator>
jonasschnelli: what "Limit"?
< jonasschnelli>
provoostenator: 12833 currently tells user "Limit block storage to: " which is not true... though the tooltip hints towards the right handling.
< provoostenator>
Which I fixed, but screenshot is outdated.
< provoostenator>
It's now "Prune &block storage to"
< jonasschnelli>
wumpus: AFAIK the 550MB assumption (if one sets 550) is still on 1MB blocks
< jonasschnelli>
But a minimum of 288 blocks & undos is enforced
< wumpus>
jonasschnelli: right, that's an issue for the command line help too though, I think?
< jonasschnelli>
but maybe we should fix that (if it turns out to be a problem) rather than changing the word "limit" :)
< provoostenator>
I'm not too worried about details of the text and minimum size. It's what we need to do about saving settings that seems to cause things to get stuck.
< jonasschnelli>
Indeed
< wumpus>
other topics?
< jonasschnelli>
I also hoped we could educate the user within the GUI a bit more about pruning... but that can be done later by extending intro.cpp
< wumpus>
right, let's focus on getting the functionality in first. I think educating the user is a good thing, but having the PR depend on all those things being worked out is going to put it way past 0.17 probably.
< jonasschnelli>
I have a short topic: sipa raised concerns about the scantxoutset RPC command
< jonasschnelli>
(if that is something we want to discuss here)
< wumpus>
so the feature freeze for 0.17 is 2018-07-16, less than two months away
< provoostenator>
Also note that I don't think Mac shows the intro dialog.
< wumpus>
you need to delete -wherever it stores the qsettings on Mac-
< wumpus>
on UNIX you can pass the flag -resetguisettings but that's not easy on Mac I think
< jonasschnelli>
(works on mac as well)
< wumpus>
good :)
< provoostenator>
Resetting gui settings didn't do it for me, but will debug some other time.
< achow101>
jonasschnelli: what is the command supposed to do?
< jonasschnelli>
The scan functionality allows utxo sweeping (rawsweeptransaction) with no block scanning
< jonasschnelli>
You can pass in n pubkeys/addresses or even xpubs with a lookup window and it gives you back all unspents
< sipa>
yeah i just mentioned that we preferably don't commit to having functionality that's hard to maintain in the future
< jonasschnelli>
And even a rawsweeptransaction to a single address
< sipa>
fun.
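A hedged sketch of the kind of call jonasschnelli describes. The RPC was still a WIP PR at this point, so the method name, argument shape, and descriptor syntax below are assumptions; the credentials, port, and xpub are placeholders:

    # Hedged sketch: scanning the UTXO set for outputs derived from an xpub
    # via JSON-RPC. Interface details are assumptions (the feature was WIP).
    import requests

    def rpc(method, *params):
        resp = requests.post(
            "http://127.0.0.1:8332/",
            auth=("rpcuser", "rpcpassword"),          # placeholders
            json={"jsonrpc": "1.0", "id": "scan",
                  "method": method, "params": list(params)},
        )
        resp.raise_for_status()
        return resp.json()["result"]

    # The xpub is a placeholder; the 1000-key range mirrors the lookup
    # window discussed here.
    result = rpc("scantxoutset", "start",
                 [{"desc": "combo(xpub6.../0/*)", "range": 1000}])
    print(result)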
< instagibbs>
err what
< wumpus>
services massacre
< cfields>
irc unicorns...
< cfields>
let's move to slack!
< cfields>
(/s)
< wumpus>
:-(
< * jonasschnelli>
stabs cfields
< sipa>
back to topic?
< sipa>
yeah i just mentioned that we preferably don't commit to having functionality that's hard to maintain in the future
< jonasschnelli>
Yes. I think sipa's point is valid. Do we want to maintain something that may be incompatible with future utxo handling (or a new model)?
< provoostenator>
Such functionality seems quite useful for watch-only stuff
< provoostenator>
Without actually having to import things into a wallet.
< sipa>
which isn't a problem if it were implemented on top of an optional index
< jimpo>
sipa: is the optional index a full scriptPubKey index?
< provoostenator>
But without caching of some sort (or an index?) I'm guessing it'd be very slow.
< jimpo>
or something less than that?
< jonasschnelli>
sipa: what if we allow it for now and mention in the RN it may later require an optional index?
< sipa>
yes, perhaps
< jonasschnelli>
provoostenator: 30 seconds for the whole index with an xpub & 1000 keys lookup window on an SSD/fast CPU machine
< jimpo>
Or just have an explicit flag enabling the RPC now, even if it requires no additional index at present?
< jonasschnelli>
*whole set
< sipa>
yeah, that seems fine
< sipa>
i do see the usefulness of scanning the UTXO set
< provoostenator>
All the way back to the genesis block?
< jonasschnelli>
jimpo: yes. But it feels a bit like artificially holding back functions due to possible future work (which may never happen)
< sipa>
provoostenator: it scans the UTXO set, not the blockchain
< wumpus>
yes, scanning the utxo set would be incredibly useful
< wumpus>
I've wanted this functionality since forever really
< provoostenator>
Ah OK, it just gets unspents, not transaction history. Well that's still quite useful indeed.
< jonasschnelli>
I wanted that in master before the fork-coins happened. :)
< wumpus>
even if slow it's so much faster than scanning the entire chain
< wumpus>
(without index)
< jonasschnelli>
it would also allow importing wallets (no rescan required) if one doesn't care about the tx history
< wumpus>
using importprunedfunds I guess?
< jonasschnelli>
Yes..
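A hedged sketch of the import flow being described: find unspents with a UTXO-set scan, then bring them into a wallet without a rescan via importaddress, gettxoutproof and importprunedfunds. It reuses the rpc() helper from the sketch above; the address is a placeholder and the scan-result fields (txid, height) are assumptions about the WIP interface:

    # Hedged sketch: import scanned unspents without a rescan.
    # Reuses rpc() from the scantxoutset sketch above; addr is a placeholder.
    addr = "bc1q..."                                   # placeholder address
    rpc("importaddress", addr, "", False)              # watch-only, no rescan
    scan = rpc("scantxoutset", "start", [f"addr({addr})"])
    for u in scan["unspents"]:                         # assumed result shape
        block_hash = rpc("getblockhash", u["height"])
        rawtx = rpc("getrawtransaction", u["txid"], False, block_hash)
        proof = rpc("gettxoutproof", [u["txid"]], block_hash)
        rpc("importprunedfunds", rawtx, proof)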
< jonasschnelli>
conclusion? Hide it behind an artificial blocking setting, or risk that it will not be maintainable over time?
< jimpo>
I think it's probably fine to risk it not being maintainable over time without an explicit index
< jonasschnelli>
Yes. I would feel okay with that...
< jonasschnelli>
Let me mention that risk in the RN and fix the PR in general /topic
< wumpus>
so it will just become slower due to the linear scan?
< provoostenator>
By "not being maintainable over time" do you mean if the UTXO set gets really large or is it a code maintenance things?
< jonasschnelli>
from sipa:
< jonasschnelli>
Overall, I'm unsure about this. This is functionality that is more easily provided by software that maintains a UTXO index by script, and is not possible in general if we'd move to a design like UHF (see mailinglist) or other UTXO avoidance techniques. Those are far away of course, and features like this can be made optional (like txindex is) if needed. I'm just generally unconvinced a full node is the best place to put this.
< wumpus>
or is this 'unmaintainable' as in 'will give wumpus more headaches'?
< jonasschnelli>
no.. changes of the general model
< sipa>
wumpus: in a UHF model, without indexes, implementing a scan of the UTXO set requires going through the blockchain
< sipa>
(where we store hashes of UTXOs rather than the UTXOs themselves)
< instagibbs>
sipa, a searchable index has been tried and sadly dropped, something like 3 times
< wumpus>
sipa: so that's a far future thing right?
< instagibbs>
not saying it's not the right way
< sipa>
instagibbs: yes, my preference is that's implemented in other software
< jcorgan>
i bet there's a dozen private implementations of tx/addr external indexing for xpub related things
< wumpus>
I'd really like a way to scan UTXOs, my own approach was to stream the UTXO set over HTTP and do it client-side, but that ran into problems with the libevent http server :(
< jonasschnelli>
Yes. But there is no fast access to the UTXO set from outside of our code-base IMO
< jcorgan>
i do it with zmq notifications and the REST interface
< * jonasschnelli>
curses zmq
< jcorgan>
you're welcome :-)
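A hedged sketch of the pattern jcorgan mentions: an external indexer fed by bitcoind's ZMQ block notifications plus the REST interface. It assumes a recent bitcoind started with -rest and -zmqpubhashblock=tcp://127.0.0.1:28332; the "index" here is just an in-memory dict:

    # Hedged sketch: external indexer driven by ZMQ + REST.
    # Assumes -rest and -zmqpubhashblock=tcp://127.0.0.1:28332 on the node.
    import requests
    import zmq

    REST = "http://127.0.0.1:8332"
    index = {}   # scriptPubKey hex -> list of (txid, vout)

    ctx = zmq.Context()
    sock = ctx.socket(zmq.SUB)
    sock.connect("tcp://127.0.0.1:28332")
    sock.setsockopt(zmq.SUBSCRIBE, b"hashblock")

    while True:
        _topic, body, _seq = sock.recv_multipart()
        block_hash = body.hex()
        # Fetch the validated block over REST and index its outputs.
        block = requests.get(f"{REST}/rest/block/{block_hash}.json").json()
        for tx in block["tx"]:
            for vout in tx["vout"]:
                spk = vout["scriptPubKey"]["hex"]
                index.setdefault(spk, []).append((tx["txid"], vout["n"]))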
< * wumpus>
still wants to resurrect #7759 some day
< achow101>
I've taken to modifying gettxoutsetinfo whenever I need to scan the utxo set. takes like 20 minutes though to scan the whole thing
< jonasschnelli>
interesting... almost forgotten
< provoostenator>
In this future where "we store hashes of UTXOs rather than the UTXOs themselves", wouldn't there simply be an optional "index" with the UTXO, which this RPC method could then move to?
< jonasschnelli>
achow101: only if you are in debug mode? right... takes 30secs here
< wumpus>
provoostenator: yes... exactly... I say it's a problem/option for then
< provoostenator>
We'd have to make it clear that in the future the method would / might require an index.
< achow101>
jonasschnelli: I think it was the operations I was doing, or my slow hdd
< wumpus>
achow101: on ARM it's pretty slow (but still faster than scanning the whole chain!)
< sipa>
it's the same issue as we had with txindex
< sipa>
people built solutions that assume txindex is always there... then we moved to a UTXO model
< sipa>
and txindex became an inefficient optional thing
< wumpus>
we can even deprecate the RPC then
< achow101>
sipa: then it will be the same situation as getrawtransaction
< jimpo>
with #13243, it should become less costly to build future indexes in the background...
< wumpus>
I don't think we should reject useful, optional, functionality just because of some future data structure change
< sipa>
achow101: my concern is not the incompatibility
< sipa>
achow101: my concern is people building an ecosystem that assumes it's always possible and cheap to do
< sipa>
but okay, i agree with the points here
< wumpus>
sipa: I agree with that in general, but I'm not sure here
< jonasschnelli>
jimpo: great work!
< provoostenator>
The fact that it takes 30 seconds is helpful in that case. :-)
< jcorgan>
if we want to encourage people to treat bitcoind as the "ground truth", instead of baking up their own stuff, giving them easier access to the "database" would help
< sipa>
jcorgan: yes... except that the ground truth in the future may not be the UTXO set
< provoostenator>
jcorgan: that too, it would be nice to be able to get easy-to-export dumps of useful things, though not sure what format.
< jcorgan>
sure, but anything can happen in the future
< wumpus>
sure, but anything can happen in the future <- that
< jonasschnelli>
jimpo: the question is whether we want to build a base for an external indexing daemon (outside of the Core project)
< wumpus>
in any case this is hard to do out-of-process right now!
< wumpus>
even harder than indexing
< jcorgan>
i'd be happy with that, too, instead of making everyone else recreate it
< jonasschnelli>
wumpus: Indeed
< sipa>
it's also less a concern to add optional indexes now with jimpo's background index work
< sipa>
before, new indexes always required ugly hacks all over the validation code
< wumpus>
nice!
< jimpo>
My guess is there will be ongoing tension between adding RPC functionality and keeping the node requirements small unless there are more options for users
< jcorgan>
yes
< echeveria>
it's a really bad idea to add an address index.
< provoostenator>
Not to mention that you couldn't turn on/off txindex without reindexing everything.
< instagibbs>
jimpo, an rpc call in every garage!
< jcorgan>
i used to maintain the addrindex patch set and it got uglier and uglier over time
< echeveria>
it means people will willfully build insane systems.
< jimpo>
haha
< sipa>
echeveria: yes :(
< jonasschnelli>
echeveria: agree
< sipa>
but i fear they'll do so anyway
< echeveria>
rather than sane, scalable things that don't require an ever growing index.
< jimpo>
I tend to agree a better solution is to have a separate indexing service that doesn't do consensus but maintains the full chain state
< jonasschnelli>
The question is, is it faster to index everything or to have Core running with 10k wallets
< jcorgan>
+1 jimpo
< jimpo>
and gets blocks via zmq
< sipa>
decent wallet software shouldn't ever need to scan
< sipa>
except for recovery
< jcorgan>
i want to let bitcoind do the hard stuff (validation)
< jcorgan>
but then i want to easily get at all the validated data
< wumpus>
maybe it never *needs* to, but there are legit situations in which it's useful as a tool
< sipa>
sure!
< jonasschnelli>
jimpo: fork Core, rip out the validation stuff and you have an indexing daemon you can connect to your core node over p2p
< sipa>
having a way to disable validation would also help with that :)
< sipa>
anyway, this is turning into a philosophical discussion
< wumpus>
jonasschnelli: why not use your indexing daemon?
< wumpus>
yes, I think we're through the meeting
< jonasschnelli>
wumpus: maybe. Not sure if we want another p2p library introduced
< wumpus>
jonasschnelli: not into bitcoin core, I mean as external thing
< wumpus>
#endmeeting
< lightningbot>
Meeting ended Thu May 24 19:59:27 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
< jonasschnelli>
wumpus: yes. But even there we may want to reuse the primitives...
< jonasschnelli>
or maybe it's good to use another library to find things we would not find using the same core code
< jimpo>
yeah, I think diversity at the P2P layer is healthy, using the same validation engine
< jimpo>
and some can have features requiring additional indexes
< jimpo>
speaking of which, what were you saying about the BIP 158 compression ratios, sipa?
< sipa>
jimpo: sec
< wumpus>
jimpo: rust P2P layer? *ducks*
< jimpo>
of course
< jimpo>
:-)
< sipa>
jimpo: so say you have a set of N elements
< sipa>
(after deduplication)
< sipa>
this means you have a list of N entries, each in range 0..2^20*N-1, whose order does not matter
< sipa>
the number of combinations for that is (2^20*N)^N / N!
< sipa>
(approximately; this formula ignores that there may be duplicates among those N, but that chance is low)
< sipa>
this means that information theoretically, you need at least log2((2^20*N)^N / N!) bits in total
< sipa>
otherwise you couldn't express every combination
< sipa>
for N=10000, that number is 214419 bits, or 21.4419 bits per element
< provoostenator>
When I do "bitcoin-cli -config=/the/usual/place -datadir=/some/other/place getblockheight" it creates a folder /some/other/place/wallets
< sipa>
jimpo: a GCS implementation will use 21.5819 bits on average
< jimpo>
oh, interesting
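The bound sipa derives can be checked numerically; a small sketch, using lgamma for log2(N!):

    # Check of the information-theoretic bound above: N elements, each
    # uniform in [0, 2^20*N), order irrelevant, needs at least
    # log2((2^20*N)^N / N!) bits.
    from math import lgamma, log, log2

    def min_bits(n, p=20):
        log2_fact = lgamma(n + 1) / log(2)     # log2(n!)
        return n * log2(2**p * n) - log2_fact

    n = 10000
    total = min_bits(n)
    print(round(total), total / n)   # ~214419 bits, ~21.44 bits per element
    # The GCS figure quoted above (~21.58 bits/element) is under 1% over this.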
< provoostenator>
Even if the RPC connection fails.
< jimpo>
(damn it small graphs on Wolfram Alpha)
< sipa>
jimpo: what i don't know is if there isn't another probabilistic data structure that has a 1/2^P false positive rate which needs less information
< sipa>
but any construction that follows from compressing a list of N elements in range 0...2^20*N will need at least 21.4419 bits per element
< jimpo>
thanks for the explanation
< sipa>
but this is *really* good
< jimpo>
yeah, seems GCS is doing very well
< sipa>
less than 1% overhead above the theoretical minimum
< jimpo>
I've been thinking a bit about tuning the P value. Kind of unfortunate that it's static.
< sipa>
even at just 100 elements, GCS has less than 1% overhead
< sipa>
at a false positive rate of 1/2^10 it's close to 1.5% overhead
< sipa>
gmaxwell and i were looking at whether a better custom entropy coder could do better than GCS, but then he pointed out to me that the limit isn't 20 bits per element, and that your numbers looked suspiciously close to the limit already
< jimpo>
what is a custom entropy coder?
< sipa>
not using golomb-rice coding but something custom that could get us closer to the theoretical limit
< sipa>
it's certainly possible with a range coder
< sipa>
though that would be complex and computationally expensive
< sipa>
and now that it looks like golomb coding is so close to optimal already, it doesn't look worth looking into
< jimpo>
out of curiosity, have you looked at PFOR/FastPFOR for integer sequence compression?
< sipa>
i have never heard about those
< jimpo>
I was experimenting with it yesterday for compressing header values
< jimpo>
and it gets pretty amazing results
< jimpo>
if you take all of the versions in a 3,000 header range in a sequence, and do the same for timestamps, bits, and nonces, it compresses all of them together by ~90%
< jimpo>
can even compress sequences of nonces 75% on average
< echeveria>
that seems kind of unlikely.
< jimpo>
yeah, that's what I thought. but I tried decompressing some of the ranges and it seems to work?
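PFOR itself is not sketched here; as a loose stand-in for the kind of integer-sequence preprocessing such coders rely on, the following delta-encodes a run of header timestamps and hands the small residuals to a general-purpose compressor. The example values are made up, and no claim is made about matching the ratios quoted above:

    # Loose stand-in for PFOR-style integer coding (not PFOR itself):
    # delta-encode a sequence, then compress the residuals.
    import struct
    import zlib

    def compress_deltas(values):
        deltas = [values[0]] + [b - a for a, b in zip(values, values[1:])]
        raw = struct.pack(f"<{len(deltas)}i", *deltas)   # 4-byte signed ints
        return zlib.compress(raw, 9)

    def decompress_deltas(blob, count):
        deltas = struct.unpack(f"<{count}i", zlib.decompress(blob))
        out = [deltas[0]]
        for d in deltas[1:]:
            out.append(out[-1] + d)
        return out

    timestamps = [1527100000, 1527100600, 1527101190, 1527101800]  # made-up
    blob = compress_deltas(timestamps)
    assert decompress_deltas(blob, len(timestamps)) == timestamps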