< bitcoin-git>
[bitcoin] TheBlueMatt opened pull request #11824: Block ActivateBestChain to empty validationinterface queue (master...2017-12-11822-debug) https://github.com/bitcoin/bitcoin/pull/11824
< BlueMatt>
seems to be working for me
< achow101>
pulling now
< achow101>
why's it so big?
< BlueMatt>
because you can't call the callbacks themselves with cs_main held... I mean the first commit is just a nice-to-have that makes the queue drain faster, the third commit adds DEBUG_LOCKORDER annotations and removes spurious locks in tests, and the last two should probably be squashed
< achow101>
BlueMatt: build error ---> validation.cpp:2480:18: error: ‘promise’ is not a member of ‘std’
< BlueMatt>
it's not as bad as it looks
< BlueMatt>
achow101: oops, try now, forgot to commit that line
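(The build error achow101 hit is the classic missing-include failure: std::promise lives in `<future>`. Below is a minimal sketch of the blocking pattern under discussion: a caller queues a marker callback and waits on a promise until the background thread has executed everything queued ahead of it. All names here are illustrative, not Bitcoin Core's actual CValidationInterface/CScheduler code.)

```cpp
#include <condition_variable>
#include <functional>
#include <future>   // std::promise -- omitting this causes the error above
#include <mutex>
#include <queue>
#include <thread>

// Minimal single-worker callback queue (illustrative sketch).
class CallbackQueue {
    std::mutex m;
    std::condition_variable cv;
    std::queue<std::function<void()>> q;
    bool stopped = false;
    std::thread worker;

public:
    CallbackQueue() : worker([this] { Run(); }) {}
    ~CallbackQueue() {
        { std::lock_guard<std::mutex> l(m); stopped = true; }
        cv.notify_all();
        worker.join();
    }
    void Add(std::function<void()> fn) {
        { std::lock_guard<std::mutex> l(m); q.push(std::move(fn)); }
        cv.notify_one();
    }
    // Queue a marker callback and wait for it: when it runs, every
    // callback added before it has already been executed.
    void Drain() {
        std::promise<void> done;
        Add([&done] { done.set_value(); });
        done.get_future().wait();
    }

private:
    void Run() {
        std::unique_lock<std::mutex> l(m);
        while (true) {
            cv.wait(l, [this] { return stopped || !q.empty(); });
            if (q.empty() && stopped) return;
            if (!q.empty()) {
                auto fn = std::move(q.front());
                q.pop();
                l.unlock();
                fn();   // run with no locks held (cf. the cs_main rule above)
                l.lock();
            }
        }
    }
};
```

The key point matching the discussion: the callback runs with the queue lock released, just as the real callbacks must run without cs_main held.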
< achow101>
what's the queue depth set to?
< BlueMatt>
max is 10
< BlueMatt>
feel free to play with it
< achow101>
well it compiled and it's running
< sipa>
ship it.
< BlueMatt>
someone should tag 11822 as 0.16
< achow101>
I'll let you know how it goes, so far so good.
< BlueMatt>
I mean it definitely fixes the obvious infinite-queue issue, but there could be other gremlins hiding
< BlueMatt>
I doubt it
< BlueMatt>
my bigger worry with that patch is future lockorder violations, but not a ton to be done about it, honestly
< sipa>
tagged
< cfields>
BlueMatt: i think it'd make for a smoother transition if a fence was inserted after each block's callbacks had been queued, rather than stalling at an arbitrary depth
< cfields>
that way we could still work with the same assumptions as before, while we work out any remaining out-of-sync issues
< BlueMatt>
cfields: ugh, we started the fence-removal process two releases ago, I'd rather not add it back if we can avoid it :(
< cfields>
BlueMatt: the PR is the same thing, I'm just suggesting using a block-callback-depth (depth == 1 initially) rather than an individual callback depth.
< BlueMatt>
hmm, that patch may not be sufficient :/ achow101 you see it staying low?
< achow101>
BlueMatt: it's reasonable given my current dbcache of 8000
< BlueMatt>
cfields: well if you want to do that we should move the blocking to the end and not the beginning and set it to 1 right before release/on the backport branch, not on master, please
< BlueMatt>
cfields: 90% of the reason to merge this stuff sooner rather than later is to get testing cycles on it
< BlueMatt>
otherwise its never gonna happen
< achow101>
BlueMatt: I'd like this to be merged so I can run my fucking node :p
< BlueMatt>
achow101: lol, you dont carry patches on your node?
< achow101>
BlueMatt: my node is also what I test and develop on and I would like to be able to test the other stuff I'm working on and PR it without extra patches
< BlueMatt>
heh, well feel free to review
< cfields>
BlueMatt: to be clear, i was suggesting that we let it run a full block ahead (depth of single block's callbacks). I wasn't suggesting that we use an actual depth of 1.
< BlueMatt>
well res size seems to have stabilized on my reindex-chainstate node
< BlueMatt>
cfields: I dont know how much that'd do aside from hide bugs?
< BlueMatt>
if we want to neuter it cause we're worried, we should make ProcessNewBlock/ActivateBestChain block until callbacks finish for the latest action
< BlueMatt>
letting it run ahead but heavily limiting how much it can run ahead just sounds like it'll hide more bugs than anything, no?
< cfields>
well yes, but it'd ensure that we're only introducing one class of bugs at a time
< achow101>
for each extra item in the queue, how much extra memory does that use
< sipa>
a few MB, i expect
< BlueMatt>
achow101: (or not reindex)
< BlueMatt>
achow101: its (during IBD, mostly just) a shared_ptr<const CBlock>
< achow101>
BlueMatt: I'm not sure if the problem I saw was limited to reindex only though
< BlueMatt>
cfields: I'm not sure which class of bugs is resolved by your suggestion, tbh?
< BlueMatt>
achow101: it should almost entirely be
< BlueMatt>
your wallet shouldnt take 10 minutes to process a new block....
< achow101>
BlueMatt: the node crashed at some point before I had to reindex, and I think that was because of an OOM too (had to do the whole reboot-computer stuff, as I did when it crashed during reindex)
< BlueMatt>
cfields: blocking before return may fix some stuff, but....
< achow101>
that crash caused me to need to reindex it now
< BlueMatt>
achow101: thats.....weird
< achow101>
it's possible that that was unrelated, but I'm not sure
< BlueMatt>
was that like right after a restart where you needed to catch up a few days' blocks?
< BlueMatt>
though even that shouldnt use all that much memory....
< BlueMatt>
i mean this has been on master a while...
< achow101>
no, it had been up for a few days at that point, and I had it running for several weeks with a build of master prior to that
< achow101>
although it is possible that the build I was running before did not have your PR merged
< achow101>
I can grep through my log file to see what was running before I started on this build, although it may take a while since that log file is 42 GB because I forgot to turn off debug=net
< BlueMatt>
hmm, I mean I did a read-through of the queue logic and didnt see any obvious races
< BlueMatt>
ls -lh ~/.bitcoin/debug.log
< BlueMatt>
-rw------- 1 matt matt 6.5T Dec 4 19:45 /home/matt/.bitcoin/debug.log
< BlueMatt>
errr, s/races/leaks/
< achow101>
BlueMatt: do you have your node patched to not shrink the debug.log at each restart?
< BlueMatt>
i believe it does not shrink by default if -debug? or something like that
< * BlueMatt>
-> out, if bug is still there, do a heap profile
< kallewoof>
I've been asked about address index on top of bitcoin core (to replace the unmaintained fork that exists somewhere else). What are people's opinions on this? I know some people prefer a minimal code base, but having an address index option seems like it would alleviate a lot of problems in a lot of places.
< gmaxwell>
a history address index is unsustainable; anyone who uses it is eventually going to be forced to trust a third-party service.
< kallewoof>
Why?
< kallewoof>
Too much space, or..?
< gmaxwell>
I don't see any problems with having hooks to support such things, but directly incorporating an index of history wouldn't just be a waste of our review and maintenance resources, it would be actively harmful to the ecosystem.
< gmaxwell>
Yes too much space, too much seek load, and incompatible with pruning; or any kind of pruned sync.
< * kallewoof>
nods
< gmaxwell>
we've also already seen that behavior: people using such indexes, constructed via abe or other tools, fell back to blockchain.info and similar.
< gmaxwell>
They're also not required except for unusual cases (like making an explorer, which also requires spent-position indexes). An index of unspent outputs would be far more reasonable.
< kallewoof>
Yeah, this would be used by explorers, I would expect. I think some (lots?) of them use the unmaintained fork I mentioned earlier.
< gmaxwell>
I sure hope not.
< kallewoof>
Rumor says it is so. Part of why I was asked to look into it.
< gmaxwell>
I know with absolute certainty that that old index patch could lose data, I just don't know _how_.
< kallewoof>
I think having hooks in bitcoin core that let such a system exist separately is a good plan. It would only be exposed to API breakage, which bitcoin core is very careful about.
< gmaxwell>
As in, I had a node with it that was failing to return actual spendable outputs connected to an address, and I tested that it was incorrect quite thoroughly. I assumed the bad data was some gotcha with reorgs or likewise, but no one investigated.
< gmaxwell>
Yes, hooks for such things seem fine to me.
< kallewoof>
Ah.. yeah, reorgs is definitely a challenge.
< sipa>
if i had infinite time i'd write a separate efficient indexing service, which just speaks the p2p protocol
< sipa>
which you then can connect to bitcoind
< kallewoof>
sipa: I guess that would have the obvious benefit of not actually needing a node at all if it could just sync from the network?
< sipa>
right
< sipa>
it would be nice to reuse some of the headers sync logic in core, but if you assume you're connected to a trusted peer anyway, i don't think it needs to be all that complicated
< sipa>
just deal with reorgs
< gmaxwell>
from an arch perspective you'd even want it to just keep a SQL database in sync; unfortunately the performance would be horrible except on gold-plated systems (massive NVMe RAID 1 arrays and gobs of RAM, or whatever)
< kallewoof>
SQL db sounds like it would be kinda slow yeah.
< gmaxwell>
thats how abe worked.
< kallewoof>
I was thinking of using redis, but I think I'm gonna have to do profiling to find a good solution.
< gmaxwell>
last I heard of anyone successfully running it, it required well over 1TB state and would take a month to sync or something.
< kallewoof>
Wow..
< sipa>
i think it's been unusably slow for the better part of 5 years now
< gmaxwell>
this was years ago, it would presumably be some multiple of that now.
< kallewoof>
It would be address -> tx list, tx -> address list, block -> tx list.. tx'es could be compressed as block height + index (so ~8b). Not sure what else neat stuff you could do. sipa's work on tx format should be inspiring.
< esotericnonsense>
blockheight + index doesn't work well at all, what about reorgs
< kallewoof>
Maybe keep track of last 1000 entries and undo all based on shared fork point height?
< gmaxwell>
works fine, so long as you're only tracking a single chain at a time.
< gmaxwell>
you have to back out the index and reapply.
< kallewoof>
Could just look at block again and undo, yeah.
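(A sketch of the ~8-byte reference kallewoof describes above: pack block height and intra-block tx index into one 64-bit key. As gmaxwell notes, this is only valid while tracking a single chain; on a reorg, index entries at or above the fork height must be backed out and re-applied. The helper names are illustrative, not an existing API.)

```cpp
#include <cstdint>

// Pack (height, index) into a single 64-bit key: height in the high
// 32 bits so keys sort by height first, then by position in the block.
inline uint64_t EncodeTxPos(uint32_t height, uint32_t index) {
    return (static_cast<uint64_t>(height) << 32) | index;
}
inline uint32_t PosHeight(uint64_t pos) { return static_cast<uint32_t>(pos >> 32); }
inline uint32_t PosIndex(uint64_t pos)  { return static_cast<uint32_t>(pos & 0xffffffffULL); }
```

On a reorg, everything with `PosHeight(pos) >= fork_height` is invalid and must be rebuilt from the new chain, which is the "back out and reapply" step.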
< esotericnonsense>
eh, I suppose that's right. SQL is a bit abstracted for me; i'm not sure how it stores repeated data internally. f.ex. if you have a ton of tables with foreign keys, do they store the keys repeatedly or use a more space-efficient method?
< esotericnonsense>
for example on chain you might have a 256bit txid used hundreds of times if it has a bunch of outputs but you can probably just store that as a 32bit(?) incrementing integer on every tx you've seen since genesis.
< kallewoof>
I really doubt SQL would work well for this, but I could be wrong.
< Randolf>
esotericnonsense: It depends on the SQL server software being used. As far as I know, whatever you choose to store in the columns is what's stored there.
< Randolf>
esotericnonsense: Usually it's up to the DBA to choose to set up smaller IDs that reference more data in a separate row.
< kallewoof>
We've used up 6.5% of a uint32 (278236762/4294967295), but it could work for a while at least.
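(A sketch of the interning idea esotericnonsense raised: store each 32-byte txid once and refer to it everywhere else by a small incrementing integer; per the count above, a uint32 has plenty of headroom for now. This is a hypothetical helper, not an existing API.)

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Map each txid to a compact incrementing id, so a txid referenced by
// many outputs is stored in full exactly once.
class TxidInterner {
    std::map<std::string, uint32_t> ids;   // txid -> compact id
    std::vector<std::string> txids;        // compact id -> txid
public:
    uint32_t Intern(const std::string& txid) {
        auto it = ids.find(txid);
        if (it != ids.end()) return it->second;   // already interned
        uint32_t id = static_cast<uint32_t>(txids.size());
        ids.emplace(txid, id);
        txids.push_back(txid);
        return id;
    }
    const std::string& Lookup(uint32_t id) const { return txids.at(id); }
};
```

The space win is the point: every later reference costs 4 bytes instead of 32.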
< Randolf>
esotericnonsense: For Bitcoin stuff, I'd create custom datatypes that save the data in binary format.
< esotericnonsense>
basically what i'm thinking is that these indexes can easily multiply the blockchain size by five and possibly more depending on how you do it if you store the full data.
< Randolf>
It's possible.
< esotericnonsense>
address -> txlist, block -> txlist, and tx (unless you don't store tx but pull it out of a stored blockchain) is 3x txid
< Randolf>
An index normally refers to an internal reference that's usually smaller than the column data it's indexing.
< Randolf>
Can arrays help? In PostgreSQL a column type can be an array.
< Randolf>
These can also be indexed efficiently.
< Randolf>
While you'd still be storing TX data, you could cut down on the number of rows.
< gmaxwell>
this sort of speculation isn't so useful.
< gmaxwell>
IIRC the abe schema wasn't too offensively inefficient, but there are overheads and billions of records add up... but nothing is really learned from this discussion, just "I think this would not be efficient" and "I think it would be, maybe" :P
< esotericnonsense>
yeah agree.
< kallewoof>
I'll just try a bunch of stuff I guess.
< esotericnonsense>
it sounds to me like what might be useful is to have a separate service that only maintains the set of things core doesn't
< esotericnonsense>
so basically addrindex but as an external thing. all it needs is addr->[txids]. if you want the transactions just ask core rpc with txindex on
< esotericnonsense>
that'd only need the core rpc api to remain (relatively) static and not have to be maintained as a thing on an ongoing basis if internals shuffle around.
< kallewoof>
If txs are stored as height+index you don't even need txindex, at least if #10275 is merged.
< gribble>
https://github.com/bitcoin/bitcoin/issues/10275 | [rpc] Allow fetching tx directly from specified block in getrawtransaction by kallewoof · Pull Request #10275 · bitcoin/bitcoin · GitHub
< kallewoof>
And yeah, we agreed this should be an external thing that either uses hooks in bitcoin or implements the p2p protocol and talks to the network directly.
< esotericnonsense>
i guess i was commenting on "address -> tx list, tx -> address list, block -> tx list" and how i don't think you need the latter two because bitcoind does it
< * kallewoof>
nods
< kallewoof>
I wasn't assuming the person running had a local synced node. Maybe I should.
< esotericnonsense>
eh? addrindex implies that no? (well that or that you have rpc access anyway).
< kallewoof>
sipa had the idea of an efficient indexing service which speaks the p2p protocol.
< gmaxwell>
that assumption saves you basically one copy of the blockchain at most, but then means you can't run the indexer on separate hardware from the node, or fail over between multiple nodes, etc.
< gmaxwell>
or upgrade the node independently from the indexer
< kallewoof>
Makes sense
< kallewoof>
(to not assume local node)
< esotericnonsense>
i'm confused. you can run the indexer on separate hardware just fine. failover works the same way anyway (you can have multiple bitcoinds if you want)
< esotericnonsense>
height+index storing of transactions becomes complicated in that case sure.
< kallewoof>
Ultimately it doesn't matter to the indexer if you have a local node or not. It won't use it directly, it will use the p2p network. The only time it needs the RPC is when it wants to convert e.g. block height+index into a tx, but it might just as well give you the height+index pairs and let you deal yourself.
< gmaxwell>
also if the node the indexer is behind is pruned then the overhead of a redundant copy mostly goes away.
< kallewoof>
If the node is pruned, the indexer will not be able to use it for any lookups of block height <= pruned height
< * esotericnonsense>
is in a state of befuddlement
< esotericnonsense>
if it has p2p logic (basically it's a node) then what is the seperate bitcoind node being used for at all
< esotericnonsense>
trusted verifier i guess?
< kallewoof>
The indexer doesn't store anything but the indexing parts, so it needs a node to resolve the 8 byte (block height + tx index) into actual transactions. It could store the entire block chain while at it, but that seems a bit redundant.
< sipa>
kallewoof: that seems inefficient
< sipa>
relying on a node for storage won't give the kind of performance a block explorer needs
< kallewoof>
I thought you were suggesting a p2p indexer that doesn't depend on a local node. Why not use RPC then?
< sipa>
rpc is slow
< kallewoof>
Or are you saying store the entire transaction in the indexer db?
< sipa>
yes
< kallewoof>
Ah
< kallewoof>
Hum. I could avoid a lot of reinventing wheels by simply requiring libbitcoin_[x].a in the indexer to link p2p stuff in. Potential minefield, but would be super easy to keep up to date with bitcoin core if it worked.
< kallewoof>
Rewriting a minimal p2p codebase could be useful for lots of cases, but would need to be maintained to a greater degree.
< gmaxwell>
also less important if its out of date... p2p protocol intentionally does not change frequently.
< gmaxwell>
an indexer written against 0.2.10 would likely still work (though it wouldn't index p2sh/segwit, but that may not matter if you're not using those things)
< kallewoof>
That's a good point.
< kallewoof>
I guess I'll try linking directly and see how far that gets me. I suspect there will be a lot of issues where the bitcoin code expects a fully fledged bitcoin instance running.
< kallewoof>
If not, the lack of regular maintenance would not be a critical issue, as you noted.
< bitcoin-git>
[bitcoin] MeshCollider opened pull request #11829: Test datadir specified in conf file exists (master...201712_datadir_crash) https://github.com/bitcoin/bitcoin/pull/11829
< bitcoin-git>
[bitcoin] TheBlueMatt opened pull request #11831: Always return true if AppInitMain got to the end (master...2017-12-startup-exit-return-code-race) https://github.com/bitcoin/bitcoin/pull/11831
< promag>
BlueMatt: I didn't realise main thread was sitting there in WaitForShutdown
< BlueMatt>
hmm?
< BlueMatt>
ohoh, yea, we have a global thread that sits there and does nothing :(
< promag>
actually it does.. are we there yet? are we there yet? are we there yet? ..
< BlueMatt>
heh
< BlueMatt>
yea...that
< promag>
so zapwallettxes causes the process to exit with non-zero
< BlueMatt>
Ive never seen that failure before, but I think its the race with ReacceptWalletTransactions
< BlueMatt>
if nothing else that pr should be an improvement
< BlueMatt>
its crazy to have that race
< sipa>
off topic for at least 5 years: c++17 is now official
< spudowiar>
sipa: With these new C++ standards I'll soon playing fast and loose with the word "know" when I say that I know C++
< spudowiar>
*soon be
< sipa>
haha
< phantomcircuit>
BlueMatt, wait doesn't WaitForShutdown wait in the kernel?
< BlueMatt>
phantomcircuit: because signal handlers can't do "things"
< BlueMatt>
so e.g. we can't do a cv notify in the SIGTERM signal handler
< BlueMatt>
which is what we'd prefer to do
< BlueMatt>
so *something* has to check for fShutdownRequested regularly
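(A minimal sketch of the pattern BlueMatt is describing: the signal handler only sets a flag, about the only thing that is async-signal-safe, and a WaitForShutdown-style loop polls it. Illustrative, not the exact bitcoind code.)

```cpp
#include <atomic>
#include <chrono>
#include <csignal>
#include <thread>

// The only thing the handler does is set this flag; notifying a
// condition variable from a signal handler is not allowed.
static std::atomic<bool> fShutdownRequested{false};

extern "C" void HandleSigterm(int) {
    fShutdownRequested.store(true);
}

// Something has to poll the flag regularly; this is the thread that
// "sits there and does nothing" in the conversation above.
void WaitForShutdown() {
    while (!fShutdownRequested.load()) {
        std::this_thread::sleep_for(std::chrono::milliseconds(200));
    }
}
```

Install the handler with `std::signal(SIGTERM, HandleSigterm)`; the polling interval is the latency you pay for not being able to notify from the handler.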
< sipa>
BlueMatt: did i do something wrong when merging #10773? All travis master builds since report 'HEAD was not signed with a trusted key!'
< bitcoin-git>
[bitcoin] TheBlueMatt opened pull request #11834: [verify-commits] Fix gpg.sh's echoing for commits with '\n' (master...2017-12-verify-commits-fix) https://github.com/bitcoin/bitcoin/pull/11834
< BlueMatt>
spudowiar: lots of folks seem to like the "bitcoin has a p2p network, let's shove more features in it so that *everything* can use it!" approach, 'cause it's easy, but it also doesn't scale... ultimately the bitcoin p2p network is full of garbage evil nodes anyway, it's not magic...
< spudowiar>
BlueMatt: Awww, but I was trying to run bitcoind in the browser though :P
< BlueMatt>
spudowiar: please build a second network of servers which provide blockchain data; it'd be useful to have redundancy for bitcoind to sync from, even
< BlueMatt>
(or...whats the wallet for that again? electrum?)
< spudowiar>
huh?
< spudowiar>
Does --with-sysroot= work with Bitcoin Core?
< achow101>
spudowiar: why the hell would you want to run bitcoind in the browser
< spudowiar>
achow101: Because I can, I guess?
< spudowiar>
I'm a weird guy
< sipa>
do you mean... compile with emscripten?
< spudowiar>
sipa: Yeah, I'm using Emscripten and I have Bitcoin-Qt mostly working (except networking is disabled)
< BlueMatt>
the service bit seems to imply that you can/will serve the top 288 blocks, but if you request a block, and then they get a new block before they respond to you, you get disconnected
< BlueMatt>
even though you didnt violate the bip
< jonasschnelli>
I guess if a client detects that he will request at the very bottom of NODE_NETWORK_LIMITED_MIN_BLOCKS he needs to expect a disconnect
< BlueMatt>
I'd call the current implementation a violation of the BIP
< jonasschnelli>
BlueMatt: what you propose would essentially increase the 144 block buffer leading to a similar event just two blocks higher?
< BlueMatt>
"If signaled, the peer MUST be capable of serving at least the last 288 blocks "
< BlueMatt>
the current implementation is clearly not capable of doing so
< jonasschnelli>
BlueMatt: hmm? why? I can serve the last 288 blocks?
< BlueMatt>
if I request block at 288 deep, I do not expect to get disconnected for it, even if there is a race and you get another block at that same time
< jonasschnelli>
hmm..
< BlueMatt>
while we're on the bip (cause I just pulled it up): "A safety buffer of additional 144 blocks to handle chain reorganizations SHOULD be taken into account when connecting to a peer signaling the NODE_NETWORK_LIMITED service bit." <-- I have absolutely no idea what in the hell that means
< jonasschnelli>
Okay... you're right. Got your point!
< BlueMatt>
I mean you're not *really* serving deeper, maybe one or two, but its certainly not a fingerprinting attack if everyone does it
< jonasschnelli>
Though...
< jonasschnelli>
But what if we use two blocks and other implementation would use a single block as additional buffer?
< jonasschnelli>
But... meh,.. doesn't matter
< BlueMatt>
then they'd already be fingerprintable because they have a different subver :p
< jonasschnelli>
let me just add two blocks
< jonasschnelli>
BlueMatt: indeed
< BlueMatt>
I'm still confused as to what you meant by that sentence in the BIP
< BlueMatt>
"A safety buffer of additional 144 blocks to handle chain reorganizations SHOULD be taken into account when connecting to a peer signaling the NODE_NETWORK_LIMITED service bit."
< BlueMatt>
I have no idea what that means
< gmaxwell>
144? I thought that was supposed to advise 12.
< sipa>
i think it means you should only connect out to a LIMITED peer if you don't think you need more than 144 blocks (rather than 288)
< jonasschnelli>
BlueMatt: assume you are a light client and want to fetch a block within the last 144
< jonasschnelli>
what sipa said
< BlueMatt>
sipa: elaborate? I'm still lost...
< BlueMatt>
gmaxwell: wat?
< gmaxwell>
oh jesus people.
< sipa>
BlueMatt: i think it means that you should only signal LIMITED if you keep the last 288 blocks, but only fetch from a peer if you think you don't need more than 144
< BlueMatt>
sipa: yea, sorry, that half I got, the *why* part I'm still missing...
< sipa>
i don't know!
< BlueMatt>
gmaxwell: what'd we blow up?
< jonasschnelli>
BlueMatt: It's a kindly reminder for a reorg buffer...
< jonasschnelli>
BlueMatt: if you have a light client app, you need tip-150, don't connect to LIMITED
< gmaxwell>
BlueMatt: please read the prior discussions.
< jonasschnelli>
BlueMatt: but do if you need block TIP-144
< jonasschnelli>
(it's pure informative)
< sipa>
it shouldn't be just informative
< jonasschnelli>
Technically, it's always 288
< gmaxwell>
gah
< sipa>
gmaxwell: i vaguely remember discussions about shorter windows, but i wouldn't know where to find them
< gmaxwell>
look, the BIP requires you keep your last 288. That's it; the rest is advice.
< jonasschnelli>
"A safety buffer of additional 144 blocks to handle chain reorganizations SHOULD be taken into account when connecting to a peer signaling the NODE_NETWORK_LIMITED service bit."
< jonasschnelli>
Is purely advice... you can't set a MUST there IMO
< sipa>
it can be a SHOULD
< gmaxwell>
If a client _guesses_ it's going to need blocks from 285 ago, then a LIMITED peer is quite possibly useless to it: one, its estimate of how many it will need will not be accurate (there could have been more blocks), and even if it is, they could be on a fork from where the peer is.
< BlueMatt>
yes, that much I get, I'm just really confused as to where the N/2 came from there
< BlueMatt>
why does the buffer need to be 2x what you want?
< gmaxwell>
The actual correct behavior is complicated; if you're estimating your need based on nothing but time, you should be pretty conservative (n/2, for example). If you have headers and know what you need, that problem doesn't exist.
< BlueMatt>
ok, it just seemed to be weird to be so specific...
< gmaxwell>
I think the 144 number comes more out of "if we want to be able to reliably serve 144, then our target will need to be higher, e.g. 288"
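(One hedged reading of the numbers being debated, as a sketch: a LIMITED peer MUST keep the last 288 blocks, and the SHOULD-level advice is to fetch from it only when your estimated need stays inside 288 minus the 144-block reorg/staleness safety buffer. The constants match the discussion; the function name is illustrative, not the BIP's.)

```cpp
#include <cstdint>

// Numbers as quoted in the discussion above.
static const int NODE_NETWORK_LIMITED_MIN_BLOCKS = 288;  // peer MUST serve these
static const int SAFETY_BUFFER = 144;                    // SHOULD-level headroom

// Only fetch from a LIMITED peer when the estimated number of blocks we
// need leaves the safety buffer intact, since the estimate may be stale
// or we may be on a fork relative to the peer.
bool CanUseLimitedPeer(int our_height, int peer_tip_height) {
    int blocks_needed = peer_tip_height - our_height;
    return blocks_needed <= NODE_NETWORK_LIMITED_MIN_BLOCKS - SAFETY_BUFFER;
}
```

E.g. a client 100 blocks behind is fine; a client 200 blocks behind should find a full NODE_NETWORK peer instead, even though 200 < 288.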
< BlueMatt>
ok, fair
< BlueMatt>
it just reads funny, was my point :p
< BlueMatt>
jonasschnelli: wanna just squash and I'll ack
< jonasschnelli>
BlueMatt: Yeah. Just squashing (had to rebase anyways)
< jonasschnelli>
Give me a min
< BlueMatt>
suresure
< jonasschnelli>
BlueMatt: done
< BlueMatt>
lol, how's this for an idea: try to load libsensors and, if it loads, write cpu temperature to debug.log on a regular basis
< bitcoin-git>
bitcoin/master a38686c Matt Corallo: [verify-commits] Fix gpg.sh's echoing for commits with '\n'
< bitcoin-git>
bitcoin/master 5bea05b Pieter Wuille: Merge #11834: [verify-commits] Fix gpg.sh's echoing for commits with '\n'...
< bitcoin-git>
[bitcoin] sipa closed pull request #11834: [verify-commits] Fix gpg.sh's echoing for commits with '\n' (master...2017-12-verify-commits-fix) https://github.com/bitcoin/bitcoin/pull/11834
< gmaxwell>
BlueMatt: oh no, moar dependencies.
< jimpo>
Is there a preferred implementation of union/variant types? I know about boost::variant, but I've heard Core is trying to use boost less.
< BlueMatt>
gmaxwell: well you could do a if-you-cant-find-dep-dont-print-temp
< BlueMatt>
gmaxwell: but it'd be so nice in debugging people's rejecting-chain issues to see debug.log entries that say "WARNING: My hardware is overheating garbage, dont trust output"
< sipa>
jimpo: std::variant in c++17 ? :)
< jimpo>
Oh, is C++17 allowed/enabled?
< BlueMatt>
lol no
< jimpo>
derp
< sipa>
no
< sipa>
it was just officially published... today
< BlueMatt>
is it even in debian-testing?
< BlueMatt>
i think clang supports it
< sipa>
needs GCC 7
< sipa>
and Clang 5
< jimpo>
Ah, that explains why std::variant came up in searches today and I didn't see it a month ago
< sipa>
to be fair, the last month or so was just formally for ISO to publish it
< sipa>
it was already known exactly what the spec would be
< jimpo>
So is a ghetto struct with an enum + union preferred to boost::variant?
< sipa>
i think you can use boost::variant, as we're already using it anyway
< jimpo>
k
< BlueMatt>
if you can keep it in a cpp, use boost::variant; if it's in an infectious header... ugh, but maybe ok
< luke-jr>
jimpo: boost is fine outside of libconsensus I think
< GAit>
patches to get core to build with the NDK (working toward an android shell with no deps or glibc/LD_LIBRARY_PATH hacks) can be reviewed at https://github.com/greenaddress/bitcoin_ndk if anyone is interested - warning: just like abcore, these patches are very alpha, and currently I don't think there's anything worth making an upstream PR for. and fwiw, later NDK releases seem to fail.
< GAit>
mostly deals with setting up the toolchain and patching some different headers or things missing in android (ifaddrs for example)
< TD-Linux>
BlueMatt, I think a built-in memtest would be a better use of time :)
< BlueMatt>
TD-Linux: that is almost certainly true
< phantomcircuit>
iirc there are user space memtest utilities but i dont see how they can actually be effective unless they use all available memory
< phantomcircuit>
BlueMatt, ^
< BlueMatt>
I think, or at least have a suspicion, that half our user-memory-corruption errors are due to cpu overheating and writing garbage
< BlueMatt>
but that isnt based on much and may be very bogus
< phantomcircuit>
hmm
< phantomcircuit>
indeed that would be caught with a pretty simple test
< phantomcircuit>
just load some test vectors and check siphash results
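(A sketch of phantomcircuit's suggestion: recompute a hash over known test vectors and compare against the expected value, so a CPU writing garbage under thermal stress gets caught. The SipHash-2-4 below is written from the public specification and checked against the reference empty-input test vector; the CpuSanityCheck harness around it is illustrative.)

```cpp
#include <cstddef>
#include <cstdint>

static inline uint64_t Rotl(uint64_t x, int b) { return (x << b) | (x >> (64 - b)); }

// From-scratch SipHash-2-4 (64-bit output), per the published spec.
uint64_t SipHash24(uint64_t k0, uint64_t k1, const unsigned char* data, size_t len) {
    uint64_t v0 = 0x736f6d6570736575ULL ^ k0;
    uint64_t v1 = 0x646f72616e646f6dULL ^ k1;
    uint64_t v2 = 0x6c7967656e657261ULL ^ k0;
    uint64_t v3 = 0x7465646279746573ULL ^ k1;
    auto round = [&] {
        v0 += v1; v1 = Rotl(v1, 13); v1 ^= v0; v0 = Rotl(v0, 32);
        v2 += v3; v3 = Rotl(v3, 16); v3 ^= v2;
        v0 += v3; v3 = Rotl(v3, 21); v3 ^= v0;
        v2 += v1; v1 = Rotl(v1, 17); v1 ^= v2; v2 = Rotl(v2, 32);
    };
    size_t i = 0;
    for (; i + 8 <= len; i += 8) {            // full 8-byte little-endian words
        uint64_t m = 0;
        for (int j = 0; j < 8; ++j) m |= static_cast<uint64_t>(data[i + j]) << (8 * j);
        v3 ^= m; round(); round(); v0 ^= m;
    }
    uint64_t b = static_cast<uint64_t>(len) << 56;  // length byte in the top byte
    for (int j = 0; i + j < len; ++j) b |= static_cast<uint64_t>(data[i + j]) << (8 * j);
    v3 ^= b; round(); round(); v0 ^= b;
    v2 ^= 0xff; round(); round(); round(); round();
    return v0 ^ v1 ^ v2 ^ v3;
}

// True iff the CPU reproduces the official empty-input test vector
// (key 00 01 ... 0f). A periodic caller could log & crash on mismatch.
bool CpuSanityCheck() {
    return SipHash24(0x0706050403020100ULL, 0x0f0e0d0c0b0a0908ULL, nullptr, 0)
           == 0x726fdb47dd0e0e31ULL;
}
```

Running this in a loop on an otherwise-idle checker thread is cheap, and a single mismatch is unambiguous evidence of hardware misbehavior.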
< BlueMatt>
yea, I mean it's a shame we have assumevalid now... otherwise I'd suggest just busy-waiting one or two of the script-check threads with something like that
< gmaxwell>
we could make block validation failures sleep for a second and then try again... and log & crash if it succeeds the second time.