< wumpus>
luke-jr: jeremyrubin was also talking about that, it's pretty hard to do in practice, though
< wumpus>
jonasschnelli: my mac just arrived, so I'll hopefully be able to help with macosx releases and development now, too
< warren>
brg444: it just occurred to me that optimizations beyond validation performance have helped Core deal with larger block sizes up to 1MB. Do you recall the few cases where the performance of `getblocktemplate` was dramatically improved in the context of a huge mempool? It became slow at some point, and fear of orphans from it being slow also contributed to centralization pressure.
< bitcoin-git>
bitcoin/master a327e8e Wladimir J. van der Laan: devtools: Make github-merge compute SHA512 from git, instead of worktree...
< bitcoin-git>
bitcoin/master cce056d Wladimir J. van der Laan: Merge #9984: devtools: Make github-merge compute SHA512 from git, instead of worktree...
< bitcoin-git>
[bitcoin] laanwj closed pull request #9984: devtools: Make github-merge compute SHA512 from git, instead of worktree (master...2017_03_merge_hash_git) https://github.com/bitcoin/bitcoin/pull/9984
< jeremyrubin>
luke-jr: there are some nice ways to effectively recover, the key is to make the bad_alloc handler reentrant-safe (it's not right now) and then try to flush some caches, or even just wait a split second in hopes that another thread frees whatever large allocation it made.
< jeremyrubin>
I have some notes somewhere on some of the various things that could be done. if you're interested, email me
< luke-jr>
I suppose the bigger problem will be overcommitments
< gmaxwell>
luke-jr: you can do virtually nothing without more allocations happening; you certainly can't flush the dbcache.
< gmaxwell>
maybe you could drop the mempool completely.
< gmaxwell>
but its unlikely that you caught the exception at a point you could continue from.
< jeremyrubin>
so one idea i had
< jeremyrubin>
is to have 10MB as a "break in case of emergency" memory
< jeremyrubin>
so when you OOM
< jeremyrubin>
you free that
< jeremyrubin>
and then try to flush
< midnightmagic>
That seems to be a single entity, by the way, joining and leaving with all those usernames and IP addresses.
< wumpus>
gmaxwell: after an exception it is hard to recover, the idea of set_new_handler is to handle out-of-memory errors before an exception happens; "make more memory available" is an explicit goal of it
< wumpus>
so it will call that on allocation, if it notices no memory is available
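The 10MB "break in case of emergency" reserve jeremyrubin describes, combined with the `set_new_handler` mechanism wumpus explains, could be sketched roughly like this. All names and details here are illustrative, not Bitcoin Core code:

```cpp
#include <cassert> // for the usage check
#include <new>
#include <vector>

// Sketch: a reserve allocated up front and released by the new-handler,
// which operator new invokes *before* throwing, so the failed allocation
// can be retried with the freed memory.
static std::vector<char>* g_emergency_reserve = nullptr;

static void EmergencyNewHandler()
{
    if (g_emergency_reserve != nullptr) {
        // Release the reserve; operator new then retries the allocation
        // that invoked this handler.
        delete g_emergency_reserve;
        g_emergency_reserve = nullptr;
        // A real handler would also try to schedule cache flushes here.
    } else {
        // Nothing left to give back: uninstall ourselves so the failed
        // allocation throws std::bad_alloc instead of looping forever.
        std::set_new_handler(nullptr);
    }
}

void InitEmergencyReserve()
{
    g_emergency_reserve = new std::vector<char>(10 * 1024 * 1024); // 10 MB
    std::set_new_handler(EmergencyNewHandler);
}
```

As wumpus notes, the advantage over catching `std::bad_alloc` is that the handler runs before the exception unwinds anything, while the program state is still consistent (locking aside).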
< sipa>
wumpus: so, it seems part of our OOM problems are due to the fact that the WriteBatch in leveldb is allocated as one contiguous region
< wumpus>
it's exactly designed to be able to flush caches and such, though getting the locking etc. straight is a challenge
< wumpus>
sipa: yes the batches are huge in memory
< sipa>
but from a casual look, it's just an std::string in memory that gets appended to and iterated over
< sipa>
i.e., nothing a linked list of blobs can't do
< wumpus>
right, a deque or such
< wumpus>
dividing it up is a good idea; it would also avoid copying while resizing. The blobs shouldn't be too small, though, or this would just add more overhead.
< sipa>
i was thinking 1 MB as blob size
< wumpus>
yes something of that order
< wumpus>
a serious number but not enough to run into address space fragmentation issues on 32 bit
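The "linked list of blobs" idea sipa and wumpus sketch above could look something like this: a batch buffer built from fixed-size 1 MB chunks in a `std::deque`, so appends never reallocate or copy the whole batch the way a single growing `std::string` does. `ChunkedBatch` and its members are illustrative names, not leveldb code:

```cpp
#include <algorithm>
#include <cassert> // for the usage check
#include <cstddef>
#include <cstring>
#include <deque>
#include <vector>

class ChunkedBatch
{
public:
    static constexpr std::size_t BLOB_SIZE = 1 << 20; // 1 MB per blob

    void Append(const char* data, std::size_t len)
    {
        while (len > 0) {
            if (blobs.empty() || used_in_last == BLOB_SIZE) {
                blobs.emplace_back(BLOB_SIZE); // add a fresh 1 MB blob
                used_in_last = 0;
            }
            const std::size_t take = std::min(len, BLOB_SIZE - used_in_last);
            std::memcpy(blobs.back().data() + used_in_last, data, take);
            used_in_last += take;
            data += take;
            len -= take;
        }
    }

    std::size_t BlobCount() const { return blobs.size(); }

private:
    std::deque<std::vector<char>> blobs;   // fixed-size chunks, never moved
    std::size_t used_in_last = 0;          // bytes used in blobs.back()
};
```

Besides avoiding the copy on every resize, this also sidesteps the 32-bit concern wumpus raises: the allocator only ever has to find 1 MB of contiguous address space at a time, not one region the size of the whole batch.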
< jonasschnelli>
wumpus: A Mac?! Will it be your main workstation? :-)
< wumpus>
it'd be a great improvement from using strings... that always bothered me too
< wumpus>
jonasschnelli: hah, probably not, I have to chase my gf away from it first :p
< jonasschnelli>
hehe...
< wumpus>
sipa: I actually have very little problem with improving leveldb to be a better database for our specific use case; it seems upstream development is kind of dormant anyway, though you could always try to upstream things of course
< wumpus>
but the more this UTXO set grows, the less likely it is that an off-the-shelf database library just provides what we need. And leveldb seems a good starting point in any case; it worked better for our use case than anything else I tried
< sipa>
wumpus: right
< sipa>
utxo size growth is an unsolvable problem except using txo/mmr trees
< sipa>
though consensus rules that punish growth may go a long way
< wumpus>
that would be solving the problem at the source, yes
< wumpus>
in any case we could do a 0.14.1 that reduces default memory requirements
< wumpus>
then in 0.15 try to improve memory handling at large
< sipa>
how fast do we want a 0.14.1?
< gmaxwell>
wumpus: ah, indeed if you can catch it before the exception, thats at least somewhat interesting.
< warren>
I personally have always been sad that we have been unable to bend down the growth curve of UTXO expansion by realigning the relative cost of UTXO creation. I am aware and appreciative of how segwit mitigates this problem a bit by reducing the cost to spend a UTXO. But sadly that is only effective for legitimate UTXOs ... there are plenty of questionable other UTXOs that may never be spent. Runaway growth of UTXO expansion for data storage
< warren>
is among the least efficient uses of Bitcoin. I think a more effective guard against too-cheap UTXO creation, which leads to a persistent negative externality where people use a precious resource inefficiently, would be to fix the original design problem. Maybe change it so fees to create a UTXO are front-loaded more toward the time of creation instead of the time it is spent?
< wumpus>
sipa: I don't know. All the OOM issues reported seem to point at this being quasi urgent
< gmaxwell>
I think for 0.14.1 it would be sufficient to twiddle the mempool sharing enough that a 2GB aarch64 device syncs.
< wumpus>
gmaxwell: right - it does it before getting back to the application
< gmaxwell>
warren: that is exactly what segwit does, increases cost to create while decreasing cost to consume; assuming constant relative load.
< warren>
gmaxwell: which mostly works, except what of the cheap-to-create utxo that didn't have the goal of being spent?
< sipa>
gmaxwell: i'm not sure that making dbcache borrow only 50% of the free mempool space is enough to make the odroid-c2 sync out of the box
< sipa>
but i haven't tried
< gmaxwell>
warren: it was made more expensive.
< warren>
I might have missed something ... how?
< gmaxwell>
...
< gmaxwell>
Because it is exclusively non-witness data, and under constant relative load non-witness data is several times more expensive than witness data.
< jeremyrubin>
I have a feeling that there's a bunch of small things that could be refactored to save a MB here or there. I know there are a couple places where queues are pushed/popped without actually freeing the space in the vector later. I think if 0.14.1 is going to have a memory goal, a lot of headway can be made with a bunch of small things that require less review.
< gmaxwell>
no.
< gmaxwell>
0.14.1 is a bugfix release; it will not get a pile of little MB-shaving changes.
< wumpus>
that would be for 0.15
< gmaxwell>
ya.
< gmaxwell>
it might pick up a single blunt fix to prevent IBD from failing on 2GB hosts... sure. but I think that's all that's interesting for a backport.
< jeremyrubin>
ah ok I misunderstood what "in any case we could do a 0.14.1 that reduces default memory requirements" meant
< jeremyrubin>
you just mean the parameters?
< gmaxwell>
jeremyrubin: because of fragmentation shrinking those vectors often doesn't reduce memory usage-- though fine enough to do so if we're actually done with them.
< jeremyrubin>
gmaxwell: yeah these are ones that are done with, freeing them won't worsen fragmentation
< gmaxwell>
yea, it won't worsen fragmentation, but because of it, it may not reduce usage either. Still good to do. Don't get your hopes up too much; we suffer badly from fragmentation.
< wumpus>
parameters, or small tweaking, anything that is sure to fix the "IBD crashes on 1GB/2GB ARM boards by default" issue
< jeremyrubin>
There's at least one buffer of up to 4MB that is held permanently and never resized or freed, which could be released when not in use.
< jeremyrubin>
There are a couple things I think might be similar, but I need to actually measure what their max loads are
< gmaxwell>
well if you can add up to the several hundred megs of overage that we have, that would be nice. :P
< wumpus>
even someone with 4GB is reporting OOM crash issues
< wumpus>
windows 32 bit even, err yea, the amount of memory can be safely ignored I guess..
< rabidus_>
:D
< gmaxwell>
likely virt more of an issue than actual usage...
< wumpus>
IIRC windows 32 bit virtual memory use is terrible, even worse than linux 32-bit usage
< wumpus>
exactly
< wumpus>
ah yes: split is 2GB/2GB between kernel/user on windows by default, 3GB/1GB on linux (most distros), although on both OSes it is configurable in some way
< bitcoin-git>
[bitcoin] NicolasDorier opened pull request #9989: [Doc] Removing references to Windows 32 bit from README (master...patch-2) https://github.com/bitcoin/bitcoin/pull/9989
< bitcoin-git>
bitcoin/master cc44c8f NicolasDorier: ContextualCheckBlockHeader should never have pindexPrev to NULL
< bitcoin-git>
bitcoin/master 972714c Daniel Cousens: pow: GetNextWorkRequired never called with NULL pindexLast
< bitcoin-git>
bitcoin/master 4d51e9b NicolasDorier: Assert ConnectBlock block and pIndex are the same block
< bitcoin-git>
[bitcoin] NicolasDorier closed pull request #9989: [Doc] Removing references to Windows 32 bit from README (master...patch-2) https://github.com/bitcoin/bitcoin/pull/9989
< luke-jr>
#define MSG_DONTWAIT 0
+typedef unsigned int SOCKET;
< wumpus>
if I were to design something like this I'd just use int for fds, as that's what all the UNIXes have, but someone coming from windows would use unsigned types
< wumpus>
SOCKET is a standard type on win32
< wumpus>
which is an unsigned int
< luke-jr>
yes, but this is ifNdef WIN32
< * luke-jr>
peers at the paste adding MSG_DONTWAIT in
< wumpus>
I know, but defining it as signed on unix would be confusing
< wumpus>
as all the socket handling code is written to handle unsigned integers
< luke-jr>
weird.
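The kind of shim under discussion looks roughly like this; the exact guards and values are a sketch modeled on the conventions described above, not the actual paste:

```cpp
#include <cassert> // for the usage check

// On Windows, SOCKET is an unsigned integer type provided by winsock2.h.
// On POSIX, sockets are plain ints and -1 signals failure, but typedef'ing
// SOCKET as unsigned there too lets the shared networking code use one
// type and one set of conventions (wumpus's point about the handling code
// being written for unsigned integers).
#ifdef WIN32
#include <winsock2.h> // provides SOCKET, INVALID_SOCKET
#else
typedef unsigned int SOCKET;
#define INVALID_SOCKET (SOCKET)(~0)
#ifndef MSG_DONTWAIT
#define MSG_DONTWAIT 0 // no-op flag on platforms that lack it
#endif
#endif
```

The unsigned typedef means a POSIX `-1` return, once stored in a `SOCKET`, becomes the all-ones pattern that `INVALID_SOCKET` names, so the same comparison works on both platforms.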
< wumpus>
I hope most of this can go away once the P2P code is switched to libevent
< luke-jr>
is that going to finally be in 0.15? or maybe I should reopen the libevent-optional PR :p
< wumpus>
I hope so.
< wumpus>
anyhow if "we're going to nuke all of this and replace it with libevent soon" is the reason to not merge #9921 I'd be happy, but if that's going to take a while it's somewhat useful to me
< bitcoin-git>
bitcoin/master abe7b3d Suhas Daftuar: Don't require segwit in getblocktemplate for segwit signalling or mining...
< bitcoin-git>
bitcoin/master c85ffe6 Suhas Daftuar: Test transaction selection when gbt called without segwit support
< bitcoin-git>
bitcoin/master 416809c Wladimir J. van der Laan: Merge #9955: Don't require segwit in getblocktemplate for segwit signalling or mining...
< bitcoin-git>
[bitcoin] laanwj closed pull request #9955: Don't require segwit in getblocktemplate for segwit signalling or mining (master...2017-03-mining-segwit-changes) https://github.com/bitcoin/bitcoin/pull/9955
< bitcoin-git>
bitcoin/master c5adf8f Jonas Schnelli: [Qt] Show more significant warning if we fall back to the default fee
< bitcoin-git>
bitcoin/master 3e4d7bf Luke Dashjr: Qt/Send: Figure a decent warning colour from theme
< bitcoin-git>
bitcoin/master 7abe7bb Luke Dashjr: Qt/Send: Give fallback fee a reasonable indent
< bitcoin-git>
[bitcoin] laanwj closed pull request #9481: [Qt] Show more significant warning if we fall back to the default fee (master...2017/01/fee_warning) https://github.com/bitcoin/bitcoin/pull/9481
< afk11_>
could someone get me the raw block hex for 000000000066757b6b59f9a18b1021f160e48f0f75211800961c4fe2535acd7f - pm please
< afk11_>
(on testnet)
< nemgun>
one minut
< afk11_>
thanks. your node version would interest me as well
< nemgun>
i use an api
< nemgun>
webbtc.com says the block doesn't exist
< sipa>
afk11_: i only have the header
< Victorsueca>
afk11_: still need it?
< afk11_>
my segwit node has something different for that height. it's currently on 1093617, but v0.12 explorers are on 1093623. the last block I have in common with them is 00000000000000ebf174a2ccaaf2024baadba5cef04862d2ce261097c574f712
< afk11_>
which is 1093555
< Victorsueca>
yeah, I think testnet got hard-forked at some point around that
< nemgun1>
Victorsueca, haven't heard of a testnet hard fork
< afk11_>
looking for a reject message or something
< Victorsueca>
I don't remember well but I think bitcoin classic got to hard-fork on testnet
< nemgun1>
Victorsueca, bitcoin classic is another coin, no?
< Victorsueca>
it's a hard-fork that replaces the rule that says blocks can't be more than 1MB for one that says blocks can be up to 2MB
< nemgun1>
ah
< tb302>
Hello, is there a way in Electrum to receive a notification when a payment is received?
< achow101>
tb302: this channel is for bitcoin core, not electrum
< luke-jr>
a number of decent BIPs have negative comments; I suggest perhaps people may wish to provide positive feedback to counter them. https://github.com/bitcoin/bips/pull/500
< gmaxwell>
luke-jr: really? that's what you're going to do?
< gmaxwell>
I can't even see what the comments are!
< gmaxwell>
oh it's fucking Voskuil shitting on everything.
< gmaxwell>
no, I think I'll just recommend no one use BIPs for anything. Process has failed.
< luke-jr>
…
< luke-jr>
why can't you see what the comments are? why not leave positive comments?
< luke-jr>
processes may fail if people just give up rather than using them, but we're not quite there yet.
< gmaxwell>
luke-jr: the editor on github is uselessly bad.
< gmaxwell>
luke-jr: because someone who does practically nothing but shit over everything has a fundamental advantage in this process.
< gmaxwell>
from a position of low reputation they can fling poo all day.