< sipa>
it's not counted exactly according to consensus
< * luke-jr>
cowers in fear of the new Bitcoin Core configure that yells at you :P
<@wumpus>
bitcoin: where even the configure scripts yell at you
<@wumpus>
you think the build system is aggressive? We'll introduce you to some of our community members :p
< sipa>
we have this guy called Travis who continuously yells at very active developers
<@wumpus>
which reminds me, there is this guy called 'Coveralls' in some projects, maybe we should invite him too
< luke-jr>
lol
<@wumpus>
I do think he's a bit on the spammy side tho, like our old pulltester
< * jonasschnelli>
likes Coveralls
< jonasschnelli>
It somehow encourages contributors to write more tests (even if coverage does not prove correctness)
<@wumpus>
yeah coverage is especially good in languages such as python where many typos and such are only found by executing some code
< gmaxwell>
speaking of yelling, would people be opposed to me adding an ifdef check so that the compiler errors out if daemon is neither defined as detected nor as failed-to-be-detected? I keep running into not-supported-on-your-OS when switching between master and 0.13. :) The fact that it completes the build and only fails at runtime is annoying (esp because the build takes a long time on my laptop)
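A minimal sketch of the check gmaxwell proposes, assuming the relevant macro is HAVE_DECL_DAEMON (the AC_CHECK_DECLS result that configure writes into the generated config header, defined to 1 when daemon() is found and 0 when the probe ran but failed); the include path is also an assumption:

    #if defined(HAVE_CONFIG_H)
    #include "config/bitcoin-config.h"
    #endif

    // If the macro is absent entirely, configure never ran against this
    // tree (or the generated header is stale), so fail the build here
    // instead of discovering it at runtime.
    #if !defined(HAVE_DECL_DAEMON)
    #error "HAVE_DECL_DAEMON is not defined; re-run ./autogen.sh && ./configure"
    #endif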
<@wumpus>
though it helps for C too I guess, having coverage in the first place is essential for complete tests, just not sufficient
< gmaxwell>
C++ makes smart coverage harder because all the boilerplate stuff often adds unreachable code. :(
<@wumpus>
gmaxwell: no problem
<@wumpus>
though you *really* should make a habit of re-running autogen when changing branches
< * wumpus>
has different repo checkouts for different major versions to avoid problems like that
<@wumpus>
also saved me from committing a patch against the wrong branch at least once :)
< gmaxwell>
libsecp256k1's tests are something like 99.2% line coverage, 95.3% branch coverage. ... a few branches can't be reached without solving intractable crypto problems. alas. :(
< gmaxwell>
wumpus: I figure if it keeps hitting me it will hit random users who find it more surprising. :)
<@wumpus>
I wish autoconf would just yell if your generated files are out of date with the build system files
< jonasschnelli>
For C: high coverage together with a CI valgrind memleak check is desirable IMO
<@wumpus>
and fuzzing, of course
<@wumpus>
gmaxwell: anyhow, no problem with adding that, it'd also catch problems with people forgetting to include configure.h
< gmaxwell>
memleak? seldom have memleaks in interesting code. Valgrind's undefined behavior checking, OTOH, is super great. :)
< gmaxwell>
wumpus: thanks okay, I'll do so.
< gmaxwell>
for bitcoin I do think we need DRD run on it more often, it's just a bit frustrating because it's so slow.
<@wumpus>
at least make sure you run valgrind against the optimized executable not the -O0 debugging one :-)
<@wumpus>
but yes valgrind is slow in any case, though it's surprisingly fast if you realize what it does in the background, converting executed code blocks to VEX and back to machine instructions on the fly
< gmaxwell>
yes, it's amazing.
< sipa>
(-ever, -50k, -2k also exist)
<@wumpus>
sipa: nice!
< TD-Linux>
sipa, having them on the index page would also be nice :)
< gmaxwell>
there is an index page?
< sipa>
TD-Linux: that would involve loading my html knowledge back from swap space
< jonasschnelli>
Yeah. Migrate the non-PNG HTML stuff to bitcoincore.org
<@wumpus>
not me though, my html knowledge is in the 2000-2004 archive in deep storage
< jonasschnelli>
Yeah. Perl. How did I miss this.
< sipa>
also C
< sipa>
and bash
< sipa>
and bash that generates gnuplot
< sipa>
;;nethash
< gribble>
2049301157.56
< CubicEarth>
13.1, Ubuntu.... my client is passing transaction data, synced several dozen blocks on startup, but then stalled for almost 10 minutes before downloading the most recent blocks. Is it just that none of the connected peers were providing the blocks it needed? Or is it something else?
< sipa>
how many blocks do you have in total?
< CubicEarth>
I synced it this morning, so it was only about 10 hours behind... then it stalled at about 6 hours behind
< CubicEarth>
chewed through the first 4 or 5 hours in 30 seconds
< sipa>
if you had an abnormal shutdown before, that's normal
< sipa>
well, not normal, but known
< CubicEarth>
interesting. Would just one abnormal shutdown ever be enough? The most recent one was normal (so far as I could tell)
< sipa>
just the last one
< sipa>
it was busy processing the blocks it had on disk but not applied to the database, and does that in the background while connecting to other peers, ignoring their block announcements
< sipa>
s/was/would be/
< gmaxwell>
an unclean shutdown will delay the initial headers fetch?
< sipa>
no, it will cause it to be skipped
< sipa>
we should fix that
< CubicEarth>
I wasn't looking at cpu usage... but in the debug window the current number of blocks was not advancing either. I thought the 'current number of blocks' displayed the total that had been validated and added to the chain. Does it actually count them before they are fully processed and appended?
< sipa>
CubicEarth: my theory is that your node downloaded a number of blocks, and stored them on disk, but didn't apply them to the main chain before crashing
< sipa>
after restarting, it noticed those blocks, and applied them immediately in the background
< bitcoin-git>
bitcoin/master 079142b Jonas Schnelli: fNetworkActive is not protected by a lock, use an atomic
< bitcoin-git>
bitcoin/master 62af164 Wladimir J. van der Laan: Merge #9131: fNetworkActive is not protected by a lock, use an atomic...
< bitcoin-git>
[bitcoin] laanwj closed pull request #9131: fNetworkActive is not protected by a lock, use an atomic (master...2016/11/net_toggle) https://github.com/bitcoin/bitcoin/pull/9131
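The pattern behind the merge above, as a minimal sketch (the flag name comes from the PR title; the surrounding class and accessors are assumptions): a plain bool written by one thread and read by another is a data race, while std::atomic<bool> makes the accesses well-defined without taking a lock.

    #include <atomic>

    class CConnman {
    public:
        void SetNetworkActive(bool active) { fNetworkActive = active; }
        bool GetNetworkActive() const { return fNetworkActive; }
    private:
        // atomic: toggled from RPC/GUI threads, read by the network thread
        std::atomic<bool> fNetworkActive{true};
    };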
< CubicEarth>
separate question: does anyone know about a standard for multi-sig interoperability? Something so that wallets from different providers could talk to each other?
<@wumpus>
CubicEarth: this is not a channel for general bitcoin related questions, please use #bitcoin
<@wumpus>
jonasschnelli: what is 'out of band' about out-of-band block requests? don't you mean something like 'asynchronous'?
< jonasschnelli>
wumpus: not sure if it's the right term. But with out-of-band, I mean not-in-the-IBD-sequence...
<@wumpus>
jonasschnelli: (just a question about terminology, I haven't seen 'out of band' used a lot except for some weird network protocols)
< gmaxwell>
I had the same thought, fwiw.
< jonasschnelli>
I see. Any better wordings?
< jonasschnelli>
Asynchronous is probably also not ideal.
< jonasschnelli>
Independent-block-requests?
< jonasschnelli>
prioritized-block-downloads?
< gmaxwell>
"Third distinct block downloading mechenism" :P I would have called it 'unordered block fetching' perhaps.
< jonasschnelli>
Though, it's not always a download
< jonasschnelli>
I like "unordered block fetching"
< gmaxwell>
even that is confusing since the normal fetch isn't in strict order. :)
< jonasschnelli>
Indeed...
< jonasschnelli>
Seems to be hard to nail it in two or three words.
<@wumpus>
so this provides a separate interface to block downloading
< jonasschnelli>
wumpus: not really... it still uses the internal block download mechanism
< jonasschnelli>
It just prioritizes individual blocks.
<@wumpus>
yes it uses the same mechanism, but provides an interface that other parts of the software (not directly associated with validation) can use
< jonasschnelli>
And you have certainty that all these requested blocks get passed through the signals in the correct order
< gmaxwell>
Block on Demand.
< jonasschnelli>
Blocks on Demand?
< jonasschnelli>
Otherwise people think this blocks something. :P
< gmaxwell>
My node has hot and cold running blocks.
< jonasschnelli>
But not going to do this. It's just not worth...
<@wumpus>
I understand. This is better handled on a per-case basis probably
< luke-jr>
wumpus: wxBitcoin didn't support the stable wx, and required unicode XD
<@wumpus>
luke-jr: ok. No clue how he managed then :)
< luke-jr>
did the RPC really prohibit it?
<@wumpus>
no, but no one uses JSON without unicode, the sane languages prohibit it
< luke-jr>
CLI?
<@wumpus>
isn't cli usually unicode too?
< luke-jr>
not always
<@wumpus>
no, not always, so it's possible, it'll just be very rare on all accounts
< luke-jr>
I think it's more commonly non-Unicode outside the English-speaking area
< luke-jr>
or was until some years ago
< luke-jr>
but nobody else has reported a problem.. so maybe not
<@wumpus>
we're not talking about the 90's here
<@wumpus>
or original VT-100 terminals or whatnot :) bitcoin isn't that old
<@wumpus>
but yes it'll obviously be more common outside English-speaking areas
< jonasschnelli>
I located the "issue" in the code.
< jonasschnelli>
It's in univalue_read.cpp
< jonasschnelli>
Univalue can't handle non-UTF-8 input
< jonasschnelli>
(even if it's allowed by the JSON RFC)
<@wumpus>
that's fine, we don't want it to
< jonasschnelli>
the read() function will return false
< jonasschnelli>
Yes. We don't want this
<@wumpus>
univalue, as well as bitcoind, will acquire lots of cruft if you want full character set handling. We don't need that as no one uses JSON that way. E.g. the "JSON is a minefield" tests assume the JSON parser strictly checks UTF-8
<@wumpus>
(neither did any of the previous JSON libraries that we used, but they didn't do any input validation, which is quite a bad thing)
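A minimal sketch of the behaviour described above, using UniValue's bool-returning read() (the example strings are illustrative):

    #include <univalue.h>
    #include <cstdio>

    int main() {
        UniValue v;
        printf("%d\n", v.read("{\"key\": \"plain ascii\"}"));   // 1: valid UTF-8 parses
        printf("%d\n", v.read("{\"key\": \"\xff\xfe junk\"}")); // 0: invalid UTF-8 is rejected
    }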
< bitcoin-git>
[bitcoin] laanwj reopened pull request #8747: [rpc] Fix transaction size comments and RPC help text. (master...rpc_comments) https://github.com/bitcoin/bitcoin/pull/8747
< morcos>
cfields: sorry, will have to wait a bit longer. my initial results seem to show that maybe it's a little bit of an improvement, but that the previous branch was a more significant improvement
< morcos>
so i'd like to run longer tests to get rid of the noise...
< cfields>
morcos: ok, np. thanks for testing
< cfields>
morcos: i suppose it's possible that all of the changes just slowly added up to a noticeable speedup. I figured it was more likely that there was a single silver bullet.
< gmaxwell>
btcdrak: any interest in trying to get that spurious AV warning fixed?
< Victorsueca>
theymos: Baidu lol
< Victorsueca>
isn't that a Chinese page?
< Victorsueca>
would bet Chinese government is behind that
< morcos>
cfields: yeah i'll keep running tests, but actually when i tried your old branch, i accidentally did it without the removal of the CScript copy constructor (b/c i didn't want to use the Transaction stuff in that second commit)
< morcos>
once i re-removed the CScript copy constructor from the copy-move branch, it's clearly much faster than the prevector-move branch
< cfields>
morcos: interesting, that points to the speedup coming from less time copying
< btcdrak>
gmaxwell: a job for a native Chinese speaker I think.
< cfields>
which is strange, because last i looked, there were very few of those copies
< cfields>
morcos: hmm, though i suppose it could also come from construction/destruction. prevector as-is is pretty heavy on those
< cfields>
grr, i should just bite the bullet and do a quick specialization of prevector for unsigned char. It should be simple enough.
< sipa>
cfields: can't you use a my_is_trivially_copyable predicate class instead, and define it manually just for char on old compilers, and use std::is_trivially_copyable for new ones?
< cfields>
sipa: yes, but i'm pretty sure it could be sped up substantially if written specifically for packed bytes
< cfields>
sipa: for ex, no alignment concerns
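A minimal sketch of the predicate sipa suggests (the name my_is_trivially_copyable is his; the compiler-version cutoff is an assumption, since libstdc++ only gained std::is_trivially_copyable around GCC 5):

    #include <type_traits>

    #if defined(__GNUC__) && !defined(__clang__) && __GNUC__ < 5
    // Old libstdc++: no std::is_trivially_copyable, so whitelist manually.
    template <typename T> struct my_is_trivially_copyable : std::false_type {};
    template <> struct my_is_trivially_copyable<unsigned char> : std::true_type {};
    #else
    template <typename T>
    struct my_is_trivially_copyable : std::is_trivially_copyable<T> {};
    #endif

    // prevector could then branch on the predicate, e.g. use memcpy/memmove
    // for trivially copyable element types instead of element-wise copies.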
< gmaxwell>
maybe time would be better spent working on flat transactions (which requires finishing making them immutable first)?
< cfields>
flat transactions?
< sipa>
cfields: one malloc
< cfields>
oh, accessing serialized fields on the fly, or so?
< gmaxwell>
doesn't have to be (and probably shouldn't be) in the serialized form we use on the wire.
< cfields>
right, you mentioned this the other day
< cfields>
gmaxwell: you mentioned you'd doodled some notes about a possible format. Happen to have those around?
< gmaxwell>
Well what I was working on is on the other side, improved serialization for disk and network that is more compact.
< gmaxwell>
For use in memory, one would have no varints, for example, just character arrays and primitive types, and a set of pointers to allow random access to every field even though some parts are variable length.
< sipa>
doesn't sound hard
< sipa>
concatenate all scripts into a single byte array, and have (begin_ptr, size) tuples to refer to them
< sipa>
we'd need to modify the script interpreter to no longer use CScript (which seems trivial, it doesn't need any of CScript's representation - even iteration is done byte by byte)
< gmaxwell>
no, but it needs transactions to be immutable, since you can't make changes that would change the size or count of any scripts.
< sipa>
of course
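A minimal sketch of the layout sipa describes, with every script concatenated into one allocation and (offset, size) slices referring into it (all names hypothetical); this only works because the transaction, once built, is immutable:

    #include <cstdint>
    #include <vector>

    struct ScriptRef {
        uint32_t offset; // start of this script within the shared buffer
        uint32_t size;   // script length in bytes
    };

    struct FlatTx {
        std::vector<unsigned char> script_data; // one allocation for all scripts
        std::vector<ScriptRef> scripts;         // one slice per input/output script

        // Random access to any script despite the variable-length parts.
        const unsigned char* ScriptBegin(size_t i) const {
            return script_data.data() + scripts[i].offset;
        }
    };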
< gmaxwell>
for the more efficient disk/wire format what I'd written up was https://people.xiph.org/~greg/compacted_txn.txt though one thing it doesn't achieve, which would be nice though I don't know if it's realistic, is a very fast procedure to decide how much memory would be needed for a flat in-memory rep... which would facilitate some later optimizations.
< cfields>
yes, i suppose in any scheme where all scripts are cat'd in memory, there'd be no need for something like prevector
< sipa>
cfields: sure there is. prevector's primary benefit is in CCoins
< gmaxwell>
but that's why I suggested that flattening transactions may be a better time investment than expanding prevector's use in transactions. I think with transaction const there is no reason to not flatten it all the way except that a lot of code may need to be adjusted to twiddle how accesses work. (Though perhaps enough C++ magic could keep the interface almost identical-- beyond my pay grade). Not
< gmaxwell>
that prevector isn't useful, but just for const transactions it's a half-step.
< cfields>
sipa: i figured CCoins would use some similar monster allocation structure and indicies. But I suppose you'd run into crazy fragmentation issues quickly
< cfields>
*indices
< sipa>
cfields: for CCoins i want to move to a per-output model instead of per-tx
< sipa>
Chris_Stewart_5: because 6e340b9cffb37a989ca544e6bb780a2c78901d3fb33738768511a30617afa01d is the hash of OP_0 (0x00) and not of OP_TRUE (0x51)
< sipa>
Chris_Stewart_5: so it's a perfectly valid p2wsh output, for the OP_FALSE witness script (which is unspendable)
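sipa's hash is easy to reproduce; a quick sketch using OpenSSL's one-shot SHA256() (any SHA-256 implementation would do):

    #include <openssl/sha.h>
    #include <cstdio>

    int main() {
        unsigned char witness_script = 0x00; // OP_0, a single-byte script
        unsigned char digest[SHA256_DIGEST_LENGTH];
        SHA256(&witness_script, 1, digest);
        for (int i = 0; i < SHA256_DIGEST_LENGTH; i++) printf("%02x", digest[i]);
        printf("\n"); // prints 6e340b9c...617afa01d
    }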
< cfields>
makes sense, though admittedly i'm not very familiar with that code.
< sipa>
cfields: it needs design, implementation, and benchmarking
< sipa>
it's not trivial to do, but the current system has an ugly O(n^2) behaviour, where a transaction with n outputs needs O(n) database writes (one for each spend) each of size O(n) (because all unspent outputs are rewritten every time)
< cfields>
ah, i see
< sipa>
the easiest solution is storing height/coinbaseness in each output
< sipa>
and txid
< sipa>
but that's a lot of duplication
< sipa>
though leveldb does deduplication of identical prefixes in the database
< sipa>
an alternative is a txid->{height,coinbase,id} map, with id a short sequentially-assigned local id
< sipa>
and then use (id,index)->{utxo} map for the utxos
< sipa>
but that's an extra indirection
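A minimal sketch of the two-map alternative sipa outlines (all names and types hypothetical, with std::map standing in for the leveldb-backed stores):

    #include <array>
    #include <cstdint>
    #include <map>
    #include <utility>
    #include <vector>

    using Txid = std::array<unsigned char, 32>;

    struct TxMeta {
        int32_t height;    // height of the creating block
        bool coinbase;     // whether the creating tx is a coinbase
        uint64_t local_id; // short, sequentially-assigned local id
    };

    struct Utxo {
        int64_t amount;
        std::vector<unsigned char> script_pubkey;
    };

    std::map<Txid, TxMeta> tx_index;                        // txid -> {height, coinbase, id}
    std::map<std::pair<uint64_t, uint32_t>, Utxo> utxo_set; // (id, index) -> utxo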
< cfields>
well the first potentially brings a nice read speedup, no? Since the outputs are then somewhat sorted by hotness
< sipa>
leveldb sorts by key
< sipa>
and the way we're using leveldb i think hardly results in actual levels
< sipa>
the batches are so large we always trigger compactions
< cfields>
heh
< Chris_Stewart_5>
sipa: Thanks for the help, mistakenly assumed it was still HASH160(SHA256()), is there a reason it is only 1 SHA256()?
< sipa>
Chris_Stewart_5: yes, 160 bits is too little
< sipa>
(it's too little for P2SH as well, but we weren't aware of the collision attack against it at the time)
< Chris_Stewart_5>
sipa: Is it a HF to change P2SH to a SHA256?
< sipa>
Chris_Stewart_5: of course
< sipa>
when done naively, at least
< sipa>
just defining something new P2SH-like could be done
< sipa>
... but that's essentially what P2WSH is
< Chris_Stewart_5>
and with a versioning system now we can totally redefine the semantics of any witness scripts between versions, correct?
< sipa>
yes
< sipa>
Chris_Stewart_5: my answer was wrong. any interpretation for "changing P2SH to SHA256" i can come up with would be a soft fork
< sipa>
it would however also break any existing p2sh users
< Chris_Stewart_5>
sipa: Isn't the P2SH flag a required flag at this point in script? Would it have to do with redefining the IsScriptHash function?
< sipa>
that's an implementation detail
< sipa>
you can just add an IsScriptHash2 function, and add a new flag that assigns sha256-p2sh behaviour to IsScriptHash2, and outlaws any older IsScriptHash results
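A minimal sketch of what such an IsScriptHash2 template check could look like, assuming a SHA256-based pattern of OP_SHA256 <32-byte hash> OP_EQUAL in place of P2SH's OP_HASH160 <20-byte hash> OP_EQUAL (the opcode values are the real Script constants; the function and template choice are illustrative only):

    #include <vector>

    static const unsigned char OP_SHA256 = 0xa8;
    static const unsigned char OP_EQUAL  = 0x87;

    bool IsScriptHash2(const std::vector<unsigned char>& script) {
        // 1 opcode + 1 push-length byte + 32 hash bytes + 1 opcode = 35 bytes
        return script.size() == 35 &&
               script[0] == OP_SHA256 &&
               script[1] == 0x20 &&   // direct push of 32 bytes
               script[34] == OP_EQUAL;
    }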
< Chris_Stewart_5>
Why would you need to outlaw?
< sipa>
because that's how i interpret 'moving to sha256' :)
< Chris_Stewart_5>
ahhh ok
< sipa>
if you would have said 'adding p2sh-sha256 feature', yes :)
< sipa>
but that's pretty much what p2wsh is... in a better way
< Chris_Stewart_5>
... but I want to make p2sh great again. I'll see myself out.