< GitHub137>
bitcoin/master 4d2a926 dexX7: Ignore coverage data related and temporary test files
< GitHub137>
bitcoin/master d425877 dexX7: Remove coverage and test related files, when cleaning up...
< GitHub137>
bitcoin/master 8e3a27b dexX7: Require Python for RPC tests, when using lcov...
< GitHub173>
[bitcoin] laanwj closed pull request #6813: Support gathering code coverage data for RPC tests with lcov (master...btc-test-lcov-rpc) https://github.com/bitcoin/bitcoin/pull/6813
< jonasschnelli>
anyone getting the same build errors (osx) for current master: boost: mutex lock failed in pthread_mutex_lock: Invalid argument?
< * jonasschnelli>
is doing a fresh checkout / build
< jonasschnelli>
hmm.. same issue when doing a fresh checkout / build...
< wumpus>
is that a *build* error?
< jonasschnelli>
libc++abi.dylib: terminating with uncaught exception of type boost::exception_detail::clone_impl<boost::exception_detail::error_info_injector<boost::lock_error> >: boost: mutex lock failed in pthread_mutex_lock: Invalid argument
< wumpus>
looks like a runtime error to me, something abusing boost?
< jonasschnelli>
i'll try now to upgrade from 1.57 to 1.58...
< wumpus>
let's first try to debug it
< wumpus>
when does it happen? can you get a traceback?
< jonasschnelli>
stacktrace stops when program tries to call the first LOCK
< wumpus>
ok
< jonasschnelli>
EnterCritical() / push_lock()
< wumpus>
when was the last time it worked? maybe try a git bisect
< jonasschnelli>
that is strange: *pthread_mutex_lock: Invalid argument*
< jonasschnelli>
wumpus: okay... i'll try to find out if it's related to a source change or something on my local machine.
< wumpus>
can be a use-after-free of a lock
< wumpus>
if it happens at startup it may be an unexpected error path that doesn't wind down properly
< GitHub136>
bitcoin/master 10e2eae Wladimir J. van der Laan: rpc: Add maxmempool and effective min fee to getmempoolinfo
< GitHub136>
bitcoin/master 26f5b34 Wladimir J. van der Laan: Merge pull request #6877...
< GitHub170>
[bitcoin] laanwj closed pull request #6877: rpc: Add maxmempool and effective min fee to getmempoolinfo (master...2015_10_mempool_effective_fee) https://github.com/bitcoin/bitcoin/pull/6877
< dcousens>
gmaxwell: with the address index comment, you mention it might drive away users, but, this is an opt-in patch
< dcousens>
I'd much rather use this than my own self-maintained version of the patch, hell, several block explorers I know have written their own implementations eating up much more than 50GB of space etc
< btcdrak>
dcousens: FYI, my .bitcoin folder is 67GB with addrindex turned on. What is it without?
< dcousens>
btcdrak: moment, just checking
< dcousens>
56G
< dcousens>
with -txindex
< btcdrak>
dcousens: but as a maintainer of an addrindex fork, I think this particular implementation is absolutely not the right way to go about it. It's ugly. I'd much prefer to see something like #5048 which maintains a separate index entirely.
< dcousens>
Oh no doubt
< dcousens>
I meant in concept only
< dcousens>
Hence, only my concept ACK
< dcousens>
There might be a way to do this as gmaxwell pointed out, via fast re-scan or something
< dcousens>
It would just have to be super fast in the case of being feasible for block explorers etc, but, I guess you could just throw some caches/LBs in front of it
< btcdrak>
dcousens: yeah, I'm not sure how addrindex would scale in the long term.
< dcousens>
yeah, but I mean, we have that same concern for the blockchain itself haha
< dcousens>
I just figured, in terms of worrying about users losing resources, in the end, they're opting in, so they'd be aware, no?
< btcdrak>
dcousens: I'm no longer as passionate because I provide a fork for those that want it. It's easy enough to maintain.
< wumpus>
indexing the whole block chain is generally the wrong thing to do
< wumpus>
and I understand there are forensic or research reasons to do so, but if you end up doing that it's generally an indication you're doing something wrong in your design
< dcousens>
wumpus: that really depends on your trust model though?
< wumpus>
no, it depends on your scaling model
< wumpus>
it's best to regard the block chain as transitory data used to update the utxo state
< wumpus>
(as well as other things, such as wallets)
< dcousens>
agreed, but the UTXO may not be all you care about. But yes, you're right, in that sense it's your scaling model
< wumpus>
that's why I added (as well as other things) - it's data you want to use to apply changes to your running state, then throw away
< dcousens>
wumpus: but that lacks the information in the case of a user wanting to see the 'history' of a certain address
< wumpus>
bitcoind's wallet, as one example, does this correctly. It only requires the whole beast for rescans. It would be nice to have an external example though that isn't embedded into bitcoind.
< dcousens>
the only way to generate that history would be to view all the transitory data
< wumpus>
the way to generate that history would be to store it as it comes along, as metadata
< dcousens>
that assumes you know the address from the start
< wumpus>
the same way bitcoind's wallet does - store the transactions, or whatever you need to list it later
< dcousens>
for example for a wallet I maintain, we very often have users that come along with BIP39 mnemonics with 100's of used addresses, forcing a rescan for each time they introduce a new mnemonic/wallet would be a huge rescan burden for us
< wumpus>
history (or the part of history that you want to store for accounting purposes) is part of your running state
< dcousens>
Which again, relates to our scaling model, but, that too relates directly to this patch (as a concept)
< wumpus>
yes, if you essentially offer rescanning-as-a-service it makes sense to keep an index. I still think it should be something external though, not part of bitcoind. There are tons of different kinds of indexes one could want, for specific purposes
< wumpus>
ideally some external indexing tool would source block data from bitcoind then keep an index, as running state
< wumpus>
so my points are: do not encourage people to keep that index, and do not clutter the codebase with all kinds of extra indexes not necessary for node functionality. It is not "never create an index"; sure, there may be reasons to do so, but it's very custom
< dcousens>
wumpus: hmmm, but, maintaining this index is not a simple task
< btcdrak>
wumpus: I think I finally see your point
< wumpus>
I haven't used the word 'simple'
< dcousens>
It requires a lot of complex logic in regards to re-orgs etc
< dcousens>
wumpus: I know, just putting it out there as to why this is probably a persistent feature request
< wumpus>
yes, it requires logic with regard to re-orgs. To be precise, you need to be able to roll back updates from a block.
< wumpus>
bitcoind does this by keeping undo files, but your database may have features to do so as well
< wumpus>
and once this code is written people can just use it, it's no more complex than using your patch
< dcousens>
wumpus: in my mind, it seems more like its just a way to offload the maintenance burden haha
< btcdrak>
why dont we use zmq to notify external apps?
< wumpus>
there's --enable-zmq?
< dcousens>
wumpus: is there a re-org notification?
< dcousens>
That would make the roll backs super simple
< btcdrak>
it could store whatever it wants, bitcoind would notify about reorgs etc. that would allow building a sort of indexd. The indexd could always query bitcoind for things like blocks as a passthrough
< wumpus>
not sure - there is a notification when there is a new block, you can use that to determine if there has been a reorg. But a more direct notification of what blocks have to be un-done/applied might be useful too. Though be careful, zmq notifications can be lossy, so you shouldn't rely on their content.
< wumpus>
btcdrak: right
< wumpus>
indexd would pull the data it requires from bitcoind
< btcdrak>
wumpus: if we're missing notifications I'm sure they are easy to add.
< dcousens>
wumpus: really? Are they consistently lossy?
< wumpus>
dcousens: yes, if the queue is full new messages are discarded
< dcousens>
wumpus: right, so its just a matter of consumption
< wumpus>
not sure if there are any guarantees for delivery - that's why I have always been kind of careful there, it's easy to use it in a wrong way. Notifications are "hey, wake up, there is new information". If those get lost there's no big issue, it will pick them up next time.
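A minimal libzmq subscriber along these lines, treating the notification purely as a wake-up and re-querying bitcoind for the actual chain state, so a dropped or truncated message costs nothing. The endpoint assumes bitcoind was started with -zmqpubhashblock=tcp://127.0.0.1:28332; error handling is omitted and names are illustrative.

    #include <zmq.h>
    #include <cstddef>
    #include <cstdio>

    int main()
    {
        void* ctx = zmq_ctx_new();
        void* sub = zmq_socket(ctx, ZMQ_SUB);
        zmq_setsockopt(sub, ZMQ_SUBSCRIBE, "hashblock", 9);   // topic filter
        zmq_connect(sub, "tcp://127.0.0.1:28332");

        while (true) {
            // Each notification is a multipart message (topic frame, then
            // payload); drain all frames, we only use it as a wake-up.
            int more = 1;
            while (more) {
                char buf[128];
                zmq_recv(sub, buf, sizeof(buf), 0);
                size_t moreSize = sizeof(more);
                zmq_getsockopt(sub, ZMQ_RCVMORE, &more, &moreSize);
            }
            // Don't rely on the payload: just ask bitcoind (e.g.
            // getbestblockhash / getblockheader) what the chain looks like
            // now and resync the index from there.
            std::printf("block notification received; resyncing index\n");
        }

        zmq_close(sub);
        zmq_ctx_destroy(ctx);
        return 0;
    }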
< dcousens>
wumpus: mmm, I guess, in the end, it's as btcdrak suggested. The underlying problem is that the maintenance of the address index becomes difficult due to re-org detection. If we can have bitcoind provide us with that information in a clear way, most of this becomes trivial
< wumpus>
I'm not sure reorg detection is so difficult? keep the list of hashes up to the current block, then if a new best block comes in, look at its parents, where it intersects with the list of your blocks is the fork/reorg point
< dcousens>
wumpus: requires maintaining another list, another source of data
< wumpus>
yes it does
< wumpus>
that's life, you need to maintain the data structures that you need to do your work. only by keeping this state yourself can you be sure your state is consistent.
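A minimal sketch (illustrative names, not Bitcoin Core code) of the fork-point detection wumpus describes: the indexer keeps its own list of block hashes and, given a new tip, walks that tip's ancestors (in practice fetched from bitcoind, e.g. via getblockheader) until it intersects the local chain; everything above the intersection is rolled back, everything on the new branch is applied.

    #include <algorithm>
    #include <cstddef>
    #include <string>
    #include <unordered_map>
    #include <utility>
    #include <vector>

    struct ForkPoint {
        std::size_t localHeight;            // last local block still on the best chain
        std::vector<std::string> toApply;   // hashes of new-branch blocks, oldest first
    };

    // 'localChain' is the indexer's own list of block hashes, genesis first.
    // 'parentOf' stands in for a parent-hash lookup (an RPC in practice).
    ForkPoint FindForkPoint(const std::vector<std::string>& localChain,
                            std::string newTip,
                            const std::unordered_map<std::string, std::string>& parentOf)
    {
        // Index local hashes for O(1) membership tests.
        std::unordered_map<std::string, std::size_t> localIndex;
        for (std::size_t i = 0; i < localChain.size(); ++i) localIndex[localChain[i]] = i;

        std::vector<std::string> toApply;
        std::string hash = newTip;
        // Walk back from the new tip until we hit a block we already have;
        // the genesis block is always shared, so this terminates.
        while (localIndex.find(hash) == localIndex.end()) {
            toApply.push_back(hash);
            hash = parentOf.at(hash);
        }

        ForkPoint fp;
        fp.localHeight = localIndex[hash];  // everything above this gets rolled back
        std::reverse(toApply.begin(), toApply.end());
        fp.toApply = std::move(toApply);
        return fp;
    }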
< btcdrak>
indexd could even use bitcoind for SPV lookups
< wumpus>
would be great to have some library or existing software to handle this for you, I'm just talking concepts here not implementation
< dcousens>
btcdrak: I was thinking about that
< dcousens>
you don't even need to verify since you'd already trust your own node
< wumpus>
right - you outsource trust to your node, verification, that's what it's for
< dcousens>
just as wumpus said, probably still need to maintain the hash-chain if there's no notification for re-orgs
< wumpus>
well you need to know what hash-chain your state is based on
< dcousens>
well, not really, all you actually care about is what blocks got reverted, such that you can roll back the relevant transactions in the index
< wumpus>
otherwise the coupling is dangerously tight, say, your indexd was disconnected from bitcoind for a while and you reconnect and have to catch up
< wumpus>
yes, but you need to detect that yourself. You can't rely on bitcoind to keep state for you
< dcousens>
wumpus: it would know what blocks get reverted in a re-org, no?
< wumpus>
but only in realtime - ideally you don't want that kind of tight coupling. What if indexd wasn't connected at the time the reorg happens. Do you want to restart all the way from the beginning?
< dcousens>
true
< wumpus>
you have a) your own hash chain b) bitcoind's hash chain , you always want to go from a to b
< wumpus>
(given that you trust bitcoind to give you the best chain)
< GitHub168>
bitcoin/master 9d55050 Mark Friedenbach: Add rules--presently disabled--for using GetMedianTimePast as endpoint for lock-time calculations...
< GitHub168>
bitcoin/master dea8d21 Mark Friedenbach: Enable policy enforcing GetMedianTimePast as the end point of lock-time constraints...
< GitHub168>
bitcoin/master ff057f4 Wladimir J. van der Laan: Merge pull request #6566...
< GitHub161>
[bitcoin] laanwj closed pull request #6566: BIP-113: Mempool-only median time-past as endpoint for lock-time calculations (master...medianpasttimelock) https://github.com/bitcoin/bitcoin/pull/6566
< btcdrak>
dcousens: sure.
< dcousens>
wumpus: would it be better to maintain indexd in repo or as a separate application?
< wumpus>
well - better is always to have it completely detached, but I suppose developing it inside the repository is easier for now if you need serialization and certain data structures
< wumpus>
we don't have a way to export those as a library yet
< jonasschnelli>
clear() calls LOCK() in cxx_global_var_init()
< jonasschnelli>
i think this should be changed.
< wumpus>
a common, straightforward way to do this is to separate it out into a private function, say clear_(), and a public function clear() that gets the lock and calls clear_()
< wumpus>
this way, functions that already have the lock, or don't need it otherwise, can call clear_ instead
< wumpus>
in this case it is the constructor so obviously it doesn't need the lock
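A minimal sketch of that clear_()/clear() pattern, using a hypothetical TxPool class rather than the actual mempool code:

    #include <cstddef>
    #include <map>
    #include <mutex>

    class TxPool {
    public:
        TxPool() { clear_(); }             // constructor: no lock needed yet
        void clear() {                     // public entry point: takes the lock
            std::lock_guard<std::mutex> lock(cs);
            clear_();
        }
    private:
        void clear_() {                    // assumes cs is held, or not required
            mapTx.clear();
            nTotalSize = 0;
        }
        std::mutex cs;
        std::map<int, int> mapTx;          // stand-in for the real members
        std::size_t nTotalSize = 0;
    };

Functions that already hold cs (or, as in the constructor, don't need it) call clear_() directly, avoiding the recursive/global-init lock problem.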
< jonasschnelli>
okay... can try a patch.
< wumpus>
not that it should normally hurt to lock a lock that is part of your object in the constructor... but this looks like some obscure global initialization order thing because the mempool object is global :(
< wumpus>
(probably it would be better to use an explicit lifetime for objects such as these, but fixing that is more than you bargained for)
< Luke-Jr>
huh, the wallet rescan % shown in the splash screen is much higher than the debug.log Progress - block count vs tx count? :/
< wumpus>
looks like the IP parsing (using getaddrinfo) always returns 0:0:0:0:0:0:0:0/128, IPv4 parsing as well as IPv6 parsing fails
< GitHub175>
[bitcoin] jonasschnelli opened pull request #6889: fix locking issue with new mempool limiting (master...2015/10/fix_mempool_lock) https://github.com/bitcoin/bitcoin/pull/6889
< morcos>
Does anybody have any thoughts on storing the number of sigOps in a tx in the CTxMempoolEntry?
< morcos>
We calculate this on ATMP, and we need it in CreateNewBlock, but it is on the expensive'ish side to calculate.
< sipa>
sounds good
< morcos>
Wow, I was prepared to have to fight for that... :)
< morcos>
I was naively hoping you could just assume every scriptSig was satisfying a P2SH and you wouldn't over count by too much, but turns out that is NOT the case..
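A hypothetical illustration of what morcos proposes (not the actual CTxMemPoolEntry layout): compute the sigop count once at acceptance time and carry it in the mempool entry so CreateNewBlock can read it back cheaply.

    #include <cstddef>
    #include <cstdint>

    class CTxMemPoolEntrySketch {
    public:
        CTxMemPoolEntrySketch(uint64_t feeIn, std::size_t sizeIn, unsigned int sigOpsIn)
            : nFee(feeIn), nTxSize(sizeIn), nSigOpCount(sigOpsIn) {}
        // Read back by block assembly instead of recomputing from the scripts.
        unsigned int GetSigOpCount() const { return nSigOpCount; }
    private:
        uint64_t nFee;
        std::size_t nTxSize;
        unsigned int nSigOpCount;   // computed once in AcceptToMemoryPool
    };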
< dgenr8>
morcos: any plans to cache ancestor pkg sums the way descendants are?
< morcos>
dgenr8: sdaftuar has written something which he might publish soon. but i don't think there is any point in merging it until we have mining code that can take advantage of it.
< morcos>
i'm starting by just trying to make the existing mining code much faster
< gmaxwell>
02:58 < gmaxwell> Is there a reason that for fee purposes we are not using max(size, sigops*BLOCK_MAX_BYTES/BLOCK_MAX_SIGOPS) as the size of a transaction?
< gmaxwell>
as in "the size" that we use for mempool/etc. purposes.
< gmaxwell>
dcousens: Opt-in isn't sufficient to prevent it from driving users out, I do explain this...
< morcos>
gmaxwell: i saw that. on my list somewhere is to research algorithms for satisfying multiple constraints. presumably if we know that actual size is usually the limiting factor, then it doesn't seem right to sort something lower b/c it has a lot of sigops even though you're not going to hit that limit
< morcos>
i was thinking about it in terms of some future consensus requirement on utxo deltas as well
< morcos>
the discussion we were having the other day about a separate thread for template generation seems very difficult to me. its very hard to extract code from needing to lock at least the mempool.
< morcos>
i think to do that, we might need 2 mempools, or to just allow the mempool to queue tx's while the mining code is running
< morcos>
but it's an easy win to make the existing createnewblock significantly faster just by adding an index on individual tx feerate, so i figured i'd start with that.
< gmaxwell>
morcos: I've generally assumed the multivariable optimization would be much harder and not really worth it-- the sigops limit is high enough that it doesn't normally have an effect. What I suggested at least prevents the case where some silly dos attack transaction with a high per byte fee kills a feerate greedy selection.
< gmaxwell>
morcos: hm. So I didn't expect the separate template thread would avoid locking the mempool. I assumed it would. But having it would allow createnewblock to always run without taking any long-held locks.
< gmaxwell>
E.g. block creation thread would lock the mempool and build a block. Then take a blocktemplate lock for just an instant to replace the last template.
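A sketch of that two-lock scheme with illustrative names: the background thread holds the mempool lock only while building the template, and the template lock is held only for the pointer swap, so a GBT handler never waits on the mempool.

    #include <memory>
    #include <mutex>

    struct CBlockTemplateSketch { /* transactions, fees, ... */ };

    class TemplateCache {
    public:
        // Called by the background block-creation thread after it has built
        // a template (doing that work under the mempool lock, not this one).
        void Update(std::shared_ptr<CBlockTemplateSketch> fresh) {
            std::lock_guard<std::mutex> lock(cs_template);   // held only for the swap
            latest = std::move(fresh);
        }
        // Called by the GBT RPC handler; never blocks on the mempool lock.
        std::shared_ptr<CBlockTemplateSketch> GetLatest() {
            std::lock_guard<std::mutex> lock(cs_template);
            return latest;
        }
    private:
        std::mutex cs_template;
        std::shared_ptr<CBlockTemplateSketch> latest;
    };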
< morcos>
gmaxwell: ah, interesting, so trick the mining code into producing small blocks unnecessarily.. maybe that is a good idea.
< morcos>
gmaxwell: i see. i guess i need to understand how getblocktemplate is used in practice. i was assuming the threading was so we could be optimizing all the time. i didn't realize it was because you wanted to return a previously generated template to someone calling gbt. i assumed they had already gotten the previous template
< gmaxwell>
and if the cached template is out of date with the chain when gbt runs, you just generate an empty template, which you can do without touching the mempool.
< morcos>
yeah, so i did write a threaded version that does what you suggested, but it was bad, because it was basically just busy grabbing cs_main and mempool, and i thought well ok we'll have to make it run only every so often, but then i didn't see why that was better than the existing usage
< morcos>
but you want to optimize for the case where the caller currently has nothing valid to work on?
< gmaxwell>
When a new block is accepted by the node the miner process learns this (either via p2p inspection or via longpolling) and calls GBT. You want the lowest latency response possible; because until it returns the mining gear is working on a fork.
< gmaxwell>
Then the mining clients will periodically poll to update the transaction list. In this polling you don't care if the transaction data returned is seconds stale.
< morcos>
yes, understood that the new block case needs to be optimized, that was my next step, and that seems pretty easy
< gmaxwell>
since all that does is deny you the little incremental fee rate from new transactions that came in.
< gmaxwell>
We already do createnewblock caching. but it only helps with the second and later requests.
< morcos>
so what i was assuming is that when you periodically poll, which is the common use case, you already have valid work, so the existing longpoll functionality is fine, and there is no need for background process
< morcos>
yes right
< morcos>
so i think what you're saying is that for instance a new block could come in while you happen to not be longpolling, so then it won't be until you call getblocktemplate that it even tries to do anything, and it would be better if it started immediately?
< gmaxwell>
Ultimately it would be good to avoid the case where if you poll just before a new block is accepted, you don't delay getting work on the new block. Though, indeed, then I think you run into locking issues with the mempool.
< gmaxwell>
morcos: yes, when a new block comes in and you have been mining it should immediately compute a new block template, and until thats done, return empty ones so you'll at least be working on extending the longest valid chain.
< gmaxwell>
(a failure to make this optimization is part of what contributes to miners mining on other miners' work without validating-- they go and 'optimize' at a layer higher than Bitcoin, and as a result complex details like validation go out the window. :) )
< morcos>
it seems like you'd want to be notified of 2 things, when there is a new block template (empty) and when there is the first template with txs based on the new block. how do you get that notification if you're not longpolling constantly?
< morcos>
i was just going to change it so the longpoll returns immediately with an empty block if a new block comes in, via a flag to create new block or something. and then your next long polling call to getblocktemplate also calls CNB immediately which will return with txs.
< gmaxwell>
We could trigger the GBT LPs every N seconds or when the fees change by X, I don't actually know what (if anything) triggers the GBT LP now except a new block. I also don't know how widely GBT LP is used at the moment: RPC concurrency has been kinda broken for a long time (fixed in git master), and GBT LP was a late addition, so I think most things decide when to poll GBT based on monitoring P2P and then otherwise run on a timer.
< morcos>
ok... well doing it the threaded way shouldn't be hard either...
< gmaxwell>
morcos: we do want to get people mining transactions quickly too-- as otherwise you're missing out on fees (and the public tends to get really angry when they're waiting for transactions and an empty block shows up. :) )
< morcos>
gmaxwell: yes, will be much much faster, so as long as you poll again immediately, shouldn't be more than 100ms
< morcos>
i'll probably have a lot more questions when our mining hardware shows up.. :)
< gmaxwell>
There is some ... potential for "can't win" here, as there is some hardware that really dislikes being longpolled often. E.g. for some dumb reason (usually a complete lack of understanding about all the reasons you might LP) they flush the pipeline completely. Now, GBT LPing doesn't necessarily mean the hardware gets disrupted, as some higher layer thing can ignore an early long poll and pick up the non-empty template on a later timed run.
< gmaxwell>
So it might be the case that we expose breakage if we start LPing 100ms after the last time we LPed, just because we now have transactions. But if so, we can probably add a flag to delay that second LP event.
< gmaxwell>
(that same hardware is the stuff that cannot be used with P2Pool)
< GitHub52>
bitcoin/master 143d173 Eric Lombrozo: Use BOOST_CHECK_MESSAGE() rather than BOOST_CHECK() in alerts_tests.cpp and initialize strMiscWarning before calling PartitionCheck()."
< GitHub52>
bitcoin/master c8322ff Wladimir J. van der Laan: Merge pull request #6888...
< GitHub11>
bitcoin/master 872fee3 Jonas Schnelli: Introduce -maxuploadtarget...
< GitHub11>
bitcoin/master 17a073a Suhas Daftuar: Add RPC test for -maxuploadtarget
< GitHub11>
bitcoin/master 7939164 Wladimir J. van der Laan: Merge pull request #6622...
< sipa>
morcos, sdaftuar, BlueMatt: perhaps we need to talk a bit about what the expectation of the coincache + mempool size limiting is
< sipa>
do we want a "all of the mempool's dependencies are guaranteed to always fit in memory" ?
< morcos>
sipa: hmm, i don't think i was trying to argue for that. in fact i don't think it would be at all reasonable to do that.
< morcos>
i think we should set the default sizes so that is true in general though
< dgenr8>
Was reversing the feerate sort just an expressiveness change, or did it fix something? 78b82f4
< sdaftuar>
dgenr8: expressiveness mostly, avoids an issue with using project
< sipa>
morcos: one way to accomplish that is to just have a combined coincache+mempool memory limit size, and depending on whether the dependencies are large or small, effectively have a smaller resp. larger mempool limit as a result
< sipa>
morcos: but if we don't have that, we _must_ enforce the coincache limit
< dgenr8>
sdaftuar: thx!
< sdaftuar>
i think that might be a little dangerous though because that has an effect on relay policy
< sipa>
morcos: if the coincache exceeds its size, it will be wiped anyway when the next block comes
< sipa>
calling flush in the tx code just makes it happen a bit earlier to avoid an OOM
< sdaftuar>
sipa: i think we should revisit that behavior. ideally the coinscache would have in it the set of things likely to be needed when the next block arrives
< morcos>
sipa: you realize that we can't always enforce the coincache size? when we're connecting a block it could temporarily exceed for example
< sipa>
morcos: yeah...
< morcos>
i agree that we should _eventually_ put in something that enforces limitation of the coincache size between blocks
< morcos>
but that might as well wait for something smarter than just flushing
< sipa>
i think the only real solution is to switch to a per-txout cache instead of a per-tx cache
< sipa>
so the ratio is more or less fixed
< sdaftuar>
ah i was wondering how feasible that would be
< morcos>
and in the meantime we set the ratio of the defaults to be reasonable so that we don't usually blow through the cache limit
< morcos>
given that you have to pay for relay to increase the cache size there is some cost to the attack after all
< morcos>
and how big are we worried the cache size could get?
< sipa>
define usually
< morcos>
i guess i'm thinking that if we set mempool to 300M then it'd be rare that the coinsview cache was over 600M or something
< morcos>
thats just a lot bigger than the current default
< morcos>
i just think flushing after txs should be rare, if we want to set some higher limit to protect against OOM and flush after txs there, then that's fine, let's just set all these defaults so that what happens usually is the cache is big enough to contain a good chunk of the mempool. it doesn't really have to contain the whole thing, because you won't get to 100% mempool size in between blocks
< phantomcircuit>
indeed, why are we keeping the already accepted mempool dependencies in the cache? they're already validated, isn't that going to reduce the cache hit %?
< sipa>
phantomcircuit: they're validated again when a block is received
< sipa>
which we want to make fast
< phantomcircuit>
sipa, ah that's right
< sipa>
an approach to avoid that is caching the success, but if you store that cache value in the mempool, you're making the mempool code consensus critical
< phantomcircuit>
yeah lets not do that :)
< sipa>
eh, and that doesn't even help
< sipa>
signature validation is already cached anyway
< phantomcircuit>
oh that reminds me
< sipa>
and you need to be able to update the spends in the chainstate anyway after the block is validated
< phantomcircuit>
the size of the sig cache isn't in the help menu
< sipa>
good point
< phantomcircuit>
it significantly reduces worst case AcceptNewBlock() latency to increase that
< phantomcircuit>
miners should basically all have that be 10-100x the default
< gmaxwell>
what?
< gmaxwell>
the entries are _tiny_, we should be able to have a "big enough for ~100% hitrate" cache for everyone.
< sipa>
i actually have no idea what size the sigcache is
< phantomcircuit>
huh it is in the help
< gmaxwell>
phantomcircuit: for block acceptance speed it may make sense to separately cache success and failure.
< wumpus>
you do have commit 5ce43da03dff3ee655949cd53d952dace555e268?
< phantomcircuit>
wumpus, yes
< wumpus>
if so, this shouldn't happen - unless gdb happens to reenable the signal
< phantomcircuit>
it shouldn't
< wumpus>
yeah it does, it reports all signals by default. Try: "handle SIGPIPE nostop noprint pass"
< phantomcircuit>
but who knows... gmaxwell do you know :)
< phantomcircuit>
wumpus, sigh
< phantomcircuit>
ok yes that's good
< phantomcircuit>
bleh why does it do that?
< phantomcircuit>
wumpus, can verify that MALLOC_ARENA_MAX=1 works btw
< sipa>
phantomcircuit: what about 2 instead of 1?
< phantomcircuit>
sipa, not sure; it takes ages to load enough transactions into the mempool for the result to be valid
< wumpus>
phantomcircuit: awesome
< sipa>
too bad we can't set MALLOC_ARENA_MAX from the program itself, i suppose?
< phantomcircuit>
sipa, well we can but...
< sipa>
how so?
< wumpus>
I think you can
< sipa>
we can change the env variable, but i expect that the arenas are created at library load time
< sipa>
before we can change it
< phantomcircuit>
sipa, the easiest would be to set the env and execve
< phantomcircuit>
ie h4x
< wumpus>
let me see glibc source...
< wumpus>
it indeed makes no sense to set the env var, as it parses it at startup as expected, however you should be able to do mallopt(M_ARENA_MAX, 1)
< wumpus>
/* mallopt options that actually do something */ in /usr/include/malloc.h :-)
< sipa>
man mallopt does not list M_ARENA_MAX here
< wumpus>
it's an undocumented option
< wumpus>
probably should call it before any call to malloc to be effective
< wumpus>
or at least before calling it in a thread
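A sketch of the mallopt() approach wumpus suggests: restrict glibc malloc to a single arena before any allocation happens and before any threads start. M_ARENA_MAX is a glibc-specific (and, per the discussion, undocumented at the time) option, so it is guarded; this illustrates the idea rather than what bitcoind actually does.

    #include <malloc.h>   // glibc mallopt()

    int main()
    {
    #ifdef M_ARENA_MAX
        mallopt(M_ARENA_MAX, 1);   // must run before other threads call malloc
    #endif
        // ... normal startup continues here ...
        return 0;
    }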
< sipa>
the execve approach works...
< sipa>
it's ugly because it needs to know the name of the binary you just started
< wumpus>
that's really ugly, and makes debugging harder
< sipa>
agree
< phantomcircuit>
ha it's not even in the libc manual
< wumpus>
and that (though you should always have that in argv[0] on unix)
< phantomcircuit>
it looks like we could set M_MMAP_THRESHOLD which would be slower but seems to guarantee memory is actually free'd
< wumpus>
but gdb will show something like "process 28686 is executing new program: /bin/dash" when a program execve's, and it loses all its state
< phantomcircuit>
wumpus, yeah i know thus "h4x"
< phantomcircuit>
it would probably also break anybody using start-stop-daemon
< phantomcircuit>
gtg
< wumpus>
but the mallopt should work too (can't check it right now though)
< wumpus>
ok later
< phantomcircuit>
41k transactions in the mempool and counting
< sipa>
yeah, 50k sigcache entries won't be good in that case...
< phantomcircuit>
yeah i changed mine to maxsigcachesize=1000000
< gmaxwell>
sipa: sigcache is currently uint256,vector<char>,cpubkey.... bloat bloat bloat.
< gmaxwell>
Could be compressed with H(the above).
< gmaxwell>
H() need not even be very collision resistant if there is a per node secret nonce.
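A sketch of the compression gmaxwell suggests: instead of storing the full (sighash, signature, pubkey) tuple, store only a salted hash of it, so each cache entry is a fixed small size. A keyed FNV-1a stands in for H(); per the discussion it needn't be strongly collision resistant as long as the salt is a per-node secret. Types and names are illustrative, not the actual CSignatureCache.

    #include <cstdint>
    #include <random>
    #include <set>
    #include <vector>

    class CompressedSigCache {
    public:
        CompressedSigCache() {
            std::random_device rd;                     // per-node secret salt
            salt = (uint64_t(rd()) << 32) | rd();
        }
        void Add(const std::vector<unsigned char>& sighash,
                 const std::vector<unsigned char>& sig,
                 const std::vector<unsigned char>& pubkey) {
            entries.insert(Key(sighash, sig, pubkey));
        }
        bool Contains(const std::vector<unsigned char>& sighash,
                      const std::vector<unsigned char>& sig,
                      const std::vector<unsigned char>& pubkey) const {
            return entries.count(Key(sighash, sig, pubkey)) != 0;
        }
    private:
        uint64_t Key(const std::vector<unsigned char>& a,
                     const std::vector<unsigned char>& b,
                     const std::vector<unsigned char>& c) const {
            uint64_t h = salt ^ 0xcbf29ce484222325ULL; // FNV-1a offset basis, salted
            for (const auto* v : {&a, &b, &c})
                for (unsigned char byte : *v) { h ^= byte; h *= 0x100000001b3ULL; }
            return h;
        }
        uint64_t salt;
        std::set<uint64_t> entries;                    // 8 bytes per entry (+ set overhead)
    };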
< phantomcircuit>
gmaxwell, yeah that's what i was thinking
< sipa>
it's around 220 bytes per entry now
< phantomcircuit>
sipa, depends on the size of the pubkey script right?
< phantomcircuit>
it would be nice if it didn't
< phantomcircuit>
up to 44.5k now
< sipa>
phantomcircuit: it's a pubkey; not a script
< sipa>
pubkeys are 65 bytes
< sipa>
in memory
< sipa>
+ alignment
< phantomcircuit>
oh
< gmaxwell>
sipa: plus whatever overhead there is from the std::set.
< sipa>
gmaxwell: around 40 bytes, i included that
< gmaxwell>
sipa: would it be really ugly if accepting the block carried a flag to the checker that told it to remove the entry? with that alone the hitrate would no longer be low absent attacks because the cache would not be full all the time.
< morcos>
sipa: phantomcircuit: i don't think we need to worry about M_ARENA_MAX. we can drastically reduce the size of the problem. first we rewrite CreateNewBlock to only pull into the cache 1 block's worth of txs. second we make it run in only 1 thread.
< morcos>
but yeah that maxsigcachesize will bite you. i couldn't figure out why my new code was slower and it was that its sigcache was getting blown away