< sipa>
building with -O2 -g0 -flto here with make -j1 has max res memusage 850MiB here
< sipa>
and that's during compiling of main.o, not linking
< sipa>
linking has max 375MiB
< sipa>
gcc 5.3.1
<@wumpus>
ok, nice, so we can start recommending lto to people who want to reduce compile memory usage and have a sufficiently new compiler
< sipa>
with debug enabled it's likely more
< sipa>
though debugging lto binaries is harder in any case
< sipa>
trying the same without lto
< sipa>
i can't imagine it using more memory
<@wumpus>
it's exactly what I expected, that compiling is faster and uses less memory with lto
< sipa>
lto is purely additive at the compilation stage
< sipa>
it builds normal single-object-optimized assembly output, and internal representation as well
<@wumpus>
it emits intermediate code instead of processor code
<@wumpus>
so can skip a step
< sipa>
ah, since 4.9:
< sipa>
When using a linker plugin, compiling with the -flto option now generates slim object files (.o) which only contain intermediate language representation for LTO. Use -ffat-lto-objects to create files which contain additionally the object code. To generate static libraries suitable for LTO processing, use gcc-ar and gcc-ranlib; to list symbols from a slim object file use gcc-nm. (This requires that ar, ranlib and nm have been compiled with plugin support.)
<@wumpus>
that's how it is for clang at least, maybe gcc is different in that regard
<@wumpus>
clang -flto .o files are actually llvm bitcode
<@wumpus>
ok, so gcc started doing the same
< sipa>
max 995 MiB res when compiling without lto
<@wumpus>
which joker is signalling TESTDUMMY on testnet? "errors": "Warning: unknown new rules activated (versionbit 28)"
< sipa>
trying with gcc 4.8 now
< sipa>
no clue
< sipa>
hmm, has anyone tried using the various sanitizers that gcc now has?
< sipa>
-fsanitize=address -fsanitize=thread -fsanitize=leak -fsanitize=undefined
< sipa>
i hadn't even heard about -fsanitize=undefined
< sipa>
compiling with g++-4.8 fails here:
< sipa>
libbitcoin_util.a(libbitcoin_util_a-util.o): In function `boost::program_options::detail::basic_config_file_iterator<char>::getline(std::string&)':
< sipa>
util.cpp:(.text._ZN5boost15program_options6detail26basic_config_file_iteratorIcE7getlineERSs[_ZN5boost15program_options6detail26basic_config_file_iteratorIcE7getlineERSs]+0x8e): undefined reference to `boost::program_options::to_internal(std::string const&)'
< sipa>
i guess there is something deficient about my g++-4.8 setup
<@wumpus>
my guess: ABI conflict between c++ libraries compiled with 5.3 and 4.8
<@wumpus>
I'm sure compiling with 4.8 works, as it compiles fine on trusty
< sipa>
yes, indeed
<@wumpus>
a depends build with g++4.8 will probably also work on your system
<@wumpus>
(as it has no external c++ dependencies)
<@wumpus>
(and as long as you skip the GUI, building the depends is very fast)
<@wumpus>
doing a depends build with -flto would be interesting as well; the final link could optimize across all the dependencies too, although with qt that will probably be a memory explosion
< sipa>
wumpus: jinx
< sdaftuar>
wumpus: i assume the bit 28 activation on TESTNET is a bip109 thing (nothing to do with TESTDUMMY, which inadvertently reused the same bit)
<@wumpus>
okay
< btcdrak>
sdaftuar: correct.
< gmaxwell>
oh I thought that was TESTDUMMY.
< gmaxwell>
if it were 109 I would have expected it to not trigger BIP9 due to the longer activation window and higher threshold.
< sdaftuar>
it wasn't totally clear to me from the bip, but i assume the semantics specified there called for the bit to continue to be set after activation:
< sdaftuar>
Miners that support this BIP should set bit 0x10000000 in the block version until 1-Jan-2018. After that date, that bit can be safely re-used for future consensus rule upgrades.
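The bit arithmetic behind that "versionbit 28" warning can be sketched in Python. The top-bits constants follow BIP9; the KNOWN_BITS set below is hypothetical, standing in for whatever deployments a node actually knows about:

```python
# Hedged sketch of how a BIP9-style node could flag an unknown version bit
# such as bit 28 (0x10000000, the bit quoted from BIP109 above).

VERSIONBITS_TOP_MASK = 0xE0000000
VERSIONBITS_TOP_BITS = 0x20000000  # top three bits must be 001 per BIP9

KNOWN_BITS = {0, 1}  # hypothetical: bits with deployments the node knows

def unknown_bits(nversion):
    """Return the set of signalled bits the node has no deployment for."""
    if (nversion & VERSIONBITS_TOP_MASK) != VERSIONBITS_TOP_BITS:
        return set()  # not BIP9-style signalling at all
    return {b for b in range(29)
            if nversion & (1 << b) and b not in KNOWN_BITS}

# A miner setting bit 28 per BIP109 (0x10000000) on top of the BIP9 base:
print(unknown_bits(0x20000000 | 0x10000000))  # {28}
```

Seeing that set non-empty across enough recent blocks is what drives the "unknown new rules activated" warning quoted earlier.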
< gmaxwell>
Just so.
< btcdrak>
we should prolly have a switch to mute warnings we specifically want to ignore
< dgenr8>
would it not be a good idea to reserve bit 28 for sizeforks?
< paveljanik>
e.g. reserve it for the softfork to 0.5M?
< paveljanik>
or maybe 0.75M so we finally close these PRs to bump the default block size?
< sipa>
for a hardfork you don't need bip9
< sipa>
you can use bip 31 if you really want a means to indicate that the rules changed
< sipa>
it's also not very relevant here: currently bit 28 is unusable for anything
< dgenr8>
sipa: to be clear, are you saying you believe block version is unsuitable for signaling sizefork support?
< sipa>
dgenr8: i believe bip9 is unsuitable for hardforks
< sipa>
because it measures miner support, which is not relevant
< sipa>
a hardfork is the ecosystem deciding to switch to new rules; if miners don't follow, that's their own problem
< dgenr8>
so "yes" then? (since not only bip9 but anything in the block header is showing miner support)
< sipa>
if for you a sizefork implies a hardfork, yes
< dgenr8>
I think I detect another rabbit-hole softfork on the horizon
< btcdrak>
dgenr8: what does that mean?
< dgenr8>
I would imagine something like the old adam3us expansion blocks proposal
< dgenr8>
sipa: the ecosystem would follow miners to a larger max block size, just as they are dragged along with all the softforks
< dgenr8>
With either a soft or hard fork, the effect of not following is the same: total inability to validate part of the block
< dgenr8>
the best thing for bitcoin is for you guys to adopt bip109
<@wumpus>
bit 28 is the taboo bit
<@wumpus>
this is not the place for hard/soft fork discussion, and certainly not block size discussions
<@wumpus>
with the current escalating reindex times and utxo set size, there's no maneuvering room, either inside or in expansion blocks, to increase transaction space
< gmaxwell>
and a reindex with no txindex, and no signature checking went from 31021.760436 seconds before to 31360.383818000002 seconds after.
< gmaxwell>
(dbcache settings default)
< gmaxwell>
so I think this suggests the leveldb cache is doing almost nothing.
< gmaxwell>
it's a 1% difference to go from the defaults to 1MB.
< gmaxwell>
going to try 2mb now and see if I can turn that into almost nothing.
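The "1%" figure is just the relative slowdown between the two reindex wall times quoted above, assuming the second run is the reduced leveldb cache configuration:

```python
# Arithmetic behind the ~1% claim: reindex wall times from the log above
# (no txindex, no signature checking, default dbcache).
before = 31021.760436   # default leveldb cache settings
after = 31360.383818    # assumed to be the reduced (1 MB) leveldb cache run
slowdown = (after - before) / before
print(f"{slowdown:.2%}")  # prints 1.09%
```

A ~1% regression for freeing the whole leveldb cache is what motivates handing that memory to the coincache instead.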
< sipa>
gmaxwell: with what dbcache size was your benchmark?
< sipa>
my guess is that the leveldb cache becomes relatively less important as the chainstate cache grows
< gmaxwell>
sipa: the default.
< gmaxwell>
my plan was to determine if leveldb cache could be radically reduced without harming performance. Then give over all that memory to the coincache.
< morcos>
wumpus: RE: #8273: FWIW, I think I was the one who advocated for 300MB for the mempool, and I now think that is more than is really necessary.
< morcos>
I haven't checked whether the ratio of wire size to memory size has changed substantially with all the recent mempool changes, but previously 300MB of mempool was roughly 100MB of wire bytes. I think that's a larger backlog than is necessary to handle normal operation.
< morcos>
Although lowering that is something that may warrant a bit of testing. There can be these weird feedback loops with eviction and tx time outs, and I wouldn't be sure that say 100MB mempool wouldn't somehow lead to worse network behavior.
< gmaxwell>
morcos: it changed substantially.
< gmaxwell>
it's about 150mb now.
< morcos>
oh, it's that much more efficient? that's nice. anyway, i'm just mentioning this b/c i thought i saw discussion about potentially lowering maxmempool in conjunction with raising dbcache in order to not overly affect the total mem footprint. i'd agree with doing that.
< gmaxwell>
"bytes": 149293989,
< gmaxwell>
"usage": 274484752,
< gmaxwell>
for CPFP to work though it does need to be a fair bit bigger than it would otherwise.
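The memory-overhead ratio implied by those two getmempoolinfo numbers can be checked directly (the ~1.84x figure is just this division, not a number from the log):

```python
# In-memory usage vs serialized (wire) bytes, from gmaxwell's numbers above.
wire_bytes = 149293989   # "bytes" from getmempoolinfo
mem_usage = 274484752    # "usage" from getmempoolinfo
ratio = mem_usage / wire_bytes
print(f"{ratio:.2f}x overhead")  # prints 1.84x overhead
```

At that ratio a 300MB maxmempool holds roughly 163MB of wire bytes, consistent with the "about 150mb now" estimate earlier, versus the older ~3x overhead implied by 300MB holding ~100MB of wire bytes.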
< morcos>
gmaxwell: well i should look again, but last time i looked, i never saw it go over 100M of usage for things that weren't spam (defined as those 14kB tx that paid 15000 satoshis in fee)
< morcos>
of course there was occasionally legitimate free or min-fee-rate txs mixed in with the spam. but no size mempool will be big enough to reliably include those if they aren't paying a higher price than the spam
< gmaxwell>
A useful question is: what is the probability that a txn confirms during a measurement window, as a function of the worst 'depth' it experienced in the mempool.
< morcos>
I guess what I'm saying put another way is that any tx paying <= 1 sat per byte had an extremely variable chance of ever being confirmed at all
< morcos>
Tx's paying > 1 sat per byte may be interesting to study more, but they seem to have always fit in a 100MB (usage) mempool.