< morcos>
jtimon: surely you don't need to make estimateSmartFee take a mempoolsize argument? as in that branch
< jtimon>
morcos: getMinFee doesn't need to be called inside policy/fee yet, but nGlobalMempoolSizeLimit can be already in policy/fee instead of main if you want (I mean, no complaint on my part)
< morcos>
one sec and i'll show you what i had in mind
< jtimon>
yep, I will update the PR with the minimal thing and a bunch of stuff for you to discard soon as well
< jtimon>
just got distracted with something else
< jtimon>
morcos: updated #7115; focus on the first commit, and then nack the other two if you want
< morcos>
and then if you want i'd be happy for you to add a commit that has a global mempool limit variable which TrimToSize calls
< morcos>
and per our recent agreement i'll stop objecting if you move the GetMinFee call to inside CTxMemPool, although I will silently still disagree
< jtimon>
ok, so you agree with hiding the param in getminfee, I thought that was what you disliked more for some reason and removed it
< jtimon>
morcos: do you have plans for dynamic trimming beyond testing?
< jtimon>
I really don't understand that or what it even has to do with our discussion
< morcos>
jtimon: no i don't, we talked about using a different trim limit during IBD, but to me its really about encapsulating properly what GetMinFee cares about
< jtimon>
IBD?
< morcos>
i looked at your pull, i don't like adding a global argument to estimateSmartFee
< morcos>
yes, during IBD you could allocate less size to the mempool and more to your cache
< morcos>
or during initial sync if you're not caught up or whatever
< morcos>
doesn't have to technically be IBD
< jtimon>
in the future that can be fixed if getminfee moves to the estimator
< jtimon>
it could be an attribute in the estimator
< morcos>
what is it you don't like about the commit i just showed you. i realize it still doesn't solve your problem, but i'm ok with you solving your problem on top of that.
< jtimon>
what is IDB?
< morcos>
initial block download
< jtimon>
thanks
< jtimon>
so, wait
< jtimon>
you want to have a different limit for IBD but a constant one for the rest right?
< morcos>
i don't want to do any of that right now
< morcos>
i would like to fix it so GetMinFee doesn't need an argument
< morcos>
but i don't think the proper value for it to use is your command line argument
< morcos>
i think the proper value for it to use is the last size the mempool was trimmed to
< jtimon>
I want to understand your long term vision, and I would like you to understand mine
< morcos>
so yes, if GetMinFee moves to the estimator, then that attribute could also move to the estimator (or i mean policy code), perhaps a bit tricky since trim to size needs to set it, but perhaps trim to size should be a policy function anyway
< jtimon>
morcos: ok GetMinFee doesn't need an argument, a global (a different one from argsMap I mean) can fix that problem, where do you want the global?
< morcos>
jtimon: a global what?
< morcos>
this is where we are miscommunicating
< jtimon>
main, mempool or policy/fees?
< morcos>
the value GetMinFee needs to know about isn't the command line argument, it is literally the last argument that was given to TrimToSize
< jtimon>
argsMap is the global we're currently using for this command line param
< jtimon>
we should use an extern global initialized in init.cpp like everybody else
< morcos>
but i think you're missing my point
< morcos>
if TrimToSize still takes an argument, which I think it should
< morcos>
then GetMinFee has to be aware of that argument, not the global just because we happen to call TrimToSize with the global
< jtimon>
should TrimToSize use a different value at 2 different times for a given command-line invocation?
< morcos>
lets step back for a second, what is your objection to doing it the way i'm proposing?
< jtimon>
why does TrimToSize need to be "reset" every time, and why must everyone else depend on "whatever TrimToSize was called with last time" instead of just -maxmempool?
< morcos>
because thats literally what the logic is, what was the mempool trimmed to
< morcos>
and now how far below that number are we
< jtimon>
morcos: it doesn't make any sense to me, I think you should start with why is this command line option different from all the rest
< morcos>
i don't really think i have anything else to say thats not repeating myself
< jtimon>
morcos: I just made the logic to be "never change once the user sets it, TrimToSize will eat whatever was set in init.cpp like anybody else" in 3 different ways
< jtimon>
that can be changed as shown, why do you want to maintain it as a parameter in TrimToSize but not in getminfee? what's the fundamental difference you see but I don't?
< morcos>
well i think one evidence of what i'm saying is the fact that you had to create two TrimToSize functions
< jtimon>
I could make them one if you prefer that
< jtimon>
trivially, I must add
< morcos>
i think it would be hard to get the unit tests to work with no argument
< jtimon>
the same goes for getminfee, again, what's the difference?
< morcos>
I think TrimToSize should take an argument because its logic that should exist outside the mempool as to what size it should be trimmed to
< jtimon>
morcos we're using other globals in the tests
< jtimon>
for example argsMap
< morcos>
I think GetMinFee should not take an argument, because it is constrained to have to use the value that TrimToSize was called with
< jtimon>
like we're currently using
< morcos>
the unit tests call TrimToSize with different arguments within the same tests or GetMinFee or whatever
< morcos>
both probably
< jtimon>
calling GetArg in 11 places instead of one is just a style choice
< jtimon>
would you even be opposed to a macro?
< morcos>
ok, well at this point we're cluttering up IRC with a ridiculous conversation. you are the one who is trying to get something merged.
< morcos>
i've tried to give you some reasonable compromises that i would support
< jtimon>
"I think GetMinFee should not take an argument, because it is constrained to have to use the value that TrimToSize was called with" this doesn't make any sense, if you call TrimToSize with a value, you want GetMinFee to use that value
< morcos>
if it gets merged without my support, that's also fine... but its just a bit annoying since i'm the one spending the most time coding in this area
< phantomcircuit>
jtimon, theoretically we could fix GetArg but we can't if nobody uses it
< morcos>
jtimon: huh? yes, this is correct: "if you call TrimToSize with a value, you want GetMinFee to use that value"
< jtimon>
morcos: whatever, I can leave it at the minimal fix-circular-dependency commit and focus on what we agree instead of what we disagree
< morcos>
i have not seen any commits that i can agree with
< jtimon>
phantomcircuit: ?? init.cpp uses it plenty
< phantomcircuit>
jtimon, GetArg isn't thread safe, so we cant change settings outside of init.cpp
< phantomcircuit>
if we aren't using GetArg it becomes a hassle to find and change settings all over the place
< morcos>
jtimon: the part i don't like about that is passing nMempoolSizeLimit through estimateSmartFee. that just doesn't make sense to me. callers want to just say estimateSmartFee, why do they need to be aware of certain global variables. Furthermore it limits the ability to call TrimToSize with different values for no good reason. but talk about circular dependencies, you and i have a good one going now...
< morcos>
so let's just let it go.
< jtimon>
"the part i don't like about that is passing through nMempoolSizeLimit through estimateSmartFee. that just doesnt make sense to me" do you have another simple alternative beyond creating an unnecessary circular dependency?
< morcos>
i pasted it to you above!!!
< jtimon>
callers should call the estimator directly without calling the mempool as a facade, but that's another topic entirely
< morcos>
then you can fix the circular dependency on top of that.
< jtimon>
"Furthermore it limits the ability to call TrimToSize with different values for no good reason" do you expect to need to call TrimToSize with values different from -maxmempool beyond testing?
< morcos>
jtimon: sorry, i have to stop discussing this. no i don't expect to right now except for the edge case of doing so in IBD as mentioned above.
< jtimon>
btw, I asked many times without an answer, do you agree that getminfee belongs in policy/fees ?
< jtimon>
morcos: ok, whenever you have time to answer questions come back to me
< GitHub24>
bitcoin/master fad3035 MarcoFalke: [doc] Minor markdown fixes
< GitHub24>
bitcoin/master fa22a10 MarcoFalke: contrib: Del. gitian downloader config and update gitian README
< GitHub24>
bitcoin/master 9999cb0 MarcoFalke: Fix url in .travis.yml
< GitHub11>
[bitcoin] laanwj closed pull request #7136: [trivial] Fix markdown, links and help messages (master...MarcoFalke-2015-trivial5) https://github.com/bitcoin/bitcoin/pull/7136
< gmaxwell>
I am imagining wumpus all PullBo, a muscled, tanned warrior, with ammo belts around his shoulders, firing automatic weapons bursts into the forest of pull requests in front of him.
< BlueMatt>
wumpus: it seems a bit late to be pushing 6898 in?
< BlueMatt>
though I would really very much like to see it
< Luke-Jr>
wumpus: I'm still mid-code review on that one, and also NACK it without 6357 :/
< wumpus>
BlueMatt: that's also my opinion
< wumpus>
BlueMatt: one idea was to add it as a 'getblocktemplatebeta' call, so that it is already as beta in 0.12 but if it turns out to have bugs, people can still use the old version
< Luke-Jr>
might make more sense as a "beta" boolean parameter to getblocktemplate?
< Luke-Jr>
no need to duplicate the RPC side
< * wumpus>
hates adding parameters
< Luke-Jr>
wumpus: GBT is explicitly designed to be nice about parameters
< Luke-Jr>
there's an Object parameter for every GBT call, that can have any variety of keys
< wumpus>
Luke-Jr: true, you're passing it as an object? right
< gmaxwell>
wumpus: make it a config option not another RPC. The reason is that it's burdensome to have to change your other software.
< gmaxwell>
and I don't see much use to switching them on the fly.
< phantomcircuit>
huh what
< phantomcircuit>
oh yeah
< wumpus>
"Reduce latency of switching to new block. I'm not really sure this is a good idea." hehe, that sounds even more unconvincing than "not tested on mainnet at all"
< Luke-Jr>
gmaxwell: switching on the fly can be automatic degrading
< wumpus>
gmaxwell: right...
< phantomcircuit>
wumpus, i've actually tested it and it does what it's supposed to do
< BlueMatt>
wumpus: well i dunno if it'll break existing pool software, maybe it should be optional
< Luke-Jr>
phantomcircuit: does it, with multiple poolservers connected?
< gmaxwell>
Luke-Jr: yea it could but I think it's not worth the other costs.
< Luke-Jr>
phantomcircuit: seems like it might be better to just do the empty block on longpoll connections
< phantomcircuit>
Luke-Jr, gbt takes 500-3000ms
< Luke-Jr>
gmaxwell: a GBT parameter with a config default would be pretty trivial
< phantomcircuit>
returning an empty block reduces that significantly
< wumpus>
gmaxwell: yes an option makes more sense, only for e.g. comparison testing you'd want to be able to call both, and in that case you could run two bitcoinds
< BlueMatt>
wumpus: but its definitely the smallest thing we can do that'll fix a bunch of the latency issues for gbt
< gmaxwell>
wumpus: heh. I laughed at that too, but we shouldn't punish patches for being super frank. :)
< Luke-Jr>
phantomcircuit: obviously; I don't see how that's related to my suggestion :p
< phantomcircuit>
Luke-Jr, oh with multiple pool servers no it doesn't
< gmaxwell>
Luke-Jr: would still require updating software. Come on, for BIP66 I recall having to pull teeth to get a new libblkmaker from you. :)
< BlueMatt>
just a gbt option that says "I want no txn, DO AS I SAY" would probably be sufficient if we got pool software updated
< phantomcircuit>
Luke-Jr, i couldn't figure out any way to support that without adding a full round trip though
< phantomcircuit>
BlueMatt, no it's not because that would only work if you already knew there was a new block
< Luke-Jr>
gmaxwell: it's strictly better to have param-with-config-default rather than config-only :p
< phantomcircuit>
and most often you dont
< BlueMatt>
phantomcircuit: huh? dont most pools know this because p2p told them?
< Luke-Jr>
BlueMatt: that does sound ideal actually.
< gmaxwell>
Luke-Jr: I specifically decided to not do that because the date could be simply adjusted at any time.
< gmaxwell>
Though it should probably be rebased and adjusted to feburary.
< Luke-Jr>
if he makes it the upcoming subsidy halving point, I'd probably concept-ACK it. <.<
< gmaxwell>
Not that we're going to merge it, but for people who are confused and think there is likely to be some grave emergency, it's not unreasonable to be able to _show_ that we have an emergency thing.
< phantomcircuit>
Luke-Jr, ok i'll switch it such that only long poll responses will have empty blocks
< phantomcircuit>
and all other responses will be normal
< wumpus>
Luke-Jr: I had to laugh at "BaseParams(CBaseChainParams::MAIN).RPCPort(), BaseParams(CBaseChainParams::TESTNET).RPCPort())" - but yes, it makes sense
< wumpus>
maybe break off the line for somewhat easier human parsing
< gmaxwell>
FWIW, github lies, I think everything needs rebasing now, but github reports that for nothing.
< Luke-Jr>
:/
< Luke-Jr>
oh. master doesn't build. :|
< gmaxwell>
oh maybe my rebasing kung fu is less bad than I thought.
< BlueMatt>
LOL
< wumpus>
let's first get master into a working/building state again then
< wumpus>
hm it builds fine here
< BlueMatt>
yea, builds fine here, too
< BlueMatt>
tests also pass fine here...running build-in-crazy-confs-with-all-tests script
< BlueMatt>
wumpus: re: networking in gitian builder doc: I'd REALLY, REALLY prefer to remove all mention of how to get networking to work in the vm and just have a description of how to do it with no networking
< wumpus>
BlueMatt: I agree but have you seen the section about running with no networking?
< BlueMatt>
wumpus: well we shouldnt tell people to use apt-cacher and should just have instructions to disable the auto-update
< Luke-Jr>
this emits a signal for a different object?
< wumpus>
with networking it just works, that's enough to build executables, maybe it's somehow cleaner or whatever to not require it, but I don't want to get it to work
< wumpus>
Luke-Jr: eh huh
< BlueMatt>
wumpus: hmm? if you just remove the gitian auto-update-apt step it should just work (tm) ?
< Luke-Jr>
apparently Qt5 allows this, but not Qt4
< wumpus>
if you remove apt-cacher then it will download and redownload the packages time after time from the mirror, not very nice. Remember that it's not just the base system, the descriptors themselves also have package requirements
< wumpus>
BlueMatt: it's too late to redesign gitian for 0.12
< BlueMatt>
argh, ok
< phantomcircuit>
Luke-Jr, would you be opposed to making longpoll return only when there's a new block?
< BlueMatt>
we should do this for 0.13, though
< wumpus>
Luke-Jr: still very uncivil to raise a signal on another object :)
< Luke-Jr>
phantomcircuit: yes, that would result in only ever mining empty blocks :x
< wumpus>
BlueMatt: for the further future I'd prefer deterministic building without any VMs at all, just a 'blessed' deterministic toolchain
< phantomcircuit>
Luke-Jr, well no you would be calling gbt in parallel to the long poll but at a much reduced frequency from my current "lol infinite loop" version
< BlueMatt>
wumpus: oh, we need #7022, too
< Luke-Jr>
wumpus: so how shall we fix this? a dummy "raise the signal please" method? :p
< wumpus>
Luke-Jr: I'll take a look at it..
< Luke-Jr>
phantomcircuit: there is literally no reason to ever do a non-longpoll GBT
< Luke-Jr>
phantomcircuit: and BFGMiner will in fact not do so, I think
< gmaxwell>
builds fine for me on a guiless system.
< wumpus>
gmaxwell: yes, only qt4 builds fail
< Luke-Jr>
gmaxwell: lol, of course it builds guiless :P
< wumpus>
qt5 or guiless works fine
< wumpus>
that's why I don't notice it either, I haven't used qt4 for more than a year
< * Luke-Jr>
waits for a usable Qt5 DE :/
< midnightmagic>
gonna be waiting a while.
< wumpus>
Luke-Jr: qt creator is not qt5?
< phantomcircuit>
Luke-Jr, well there is though
< wumpus>
oh, what's DE?
< phantomcircuit>
Luke-Jr, even with this you need at least two bitcoind instances to get absolute best latency
< Luke-Jr>
wumpus: Desktop Environment
< phantomcircuit>
one that's long polling and another that's calling gbt in a loop
< phantomcircuit>
i.e. one that returns transactions
< phantomcircuit>
otherwise the cs_main lock will add on average 500ms+ of latency to the long poll result
< Luke-Jr>
phantomcircuit: when longpollid is a previous block, return an empty one with a longpollid that then gets returned with a full block immediately.
< wumpus>
oof he's using Q_EMIT cross-thread
< Luke-Jr>
wumpus: that sounds scary. :|
< wumpus>
I had no idea that even worked, used QMetaObject::invokeMethod to manually queue messages
< wumpus>
which apparently works in qt4, so going to do that here too
< phantomcircuit>
Luke-Jr, that's actually the same thing thanks to random distribution of blocks
< phantomcircuit>
i guess really it doesn't much matter since you need cs_main for both empty blocks and filling the template
< phantomcircuit>
so either way there will be some latency from it
< Luke-Jr>
wumpus: in the meantime, I'll work on getting Travis to test Qt4
< wumpus>
Luke-Jr: makes sense for as long we still support it
< GitHub181>
[bitcoin] laanwj opened pull request #7143: qt: use QMetaObject::invokeMethod for cross-thread signaling in clientmodel (master...2015_12_qt4_build) https://github.com/bitcoin/bitcoin/pull/7143
< Luke-Jr>
hm, I guess I should rebase mine off that XD
< wumpus>
please test it too, I haven't yet :)
< Luke-Jr>
yes, of course
< Luke-Jr>
phantomcircuit: you active mining testnet? ;)
< * Luke-Jr>
wonders what he needs to look for to change in the UI to test this properly
< phantomcircuit>
Luke-Jr, i believe that im currently 100% of testnet actually
< Luke-Jr>
phantomcircuit: as long as there's new blocks..
< Luke-Jr>
wumpus: qt/clientmodel.cpp:257:56: error: macro "Q_ARG" requires 2 arguments, but only 1 given
< GitHub167>
bitcoin/master cfdc662 Suhas Daftuar: Explicitly set chain limits in replace-by-fee test
< GitHub167>
bitcoin/master 16f4a6e Wladimir J. van der Laan: Merge pull request #7137...
< GitHub145>
[bitcoin] laanwj closed pull request #7137: Tests: Explicitly set chain limits in replace-by-fee test (master...fix-rbf-test) https://github.com/bitcoin/bitcoin/pull/7137
< wumpus>
#6898 and #6872 need rebasing again, sorry for that, but this is what is bound to happen if so many merges wait until the last day
< MarcoFalke>
wumpus try $ for i in {1..100}; do src/test/test_bitcoin --log_level=ALL --run_test=scheduler_tests;done
< MarcoFalke>
Does this run fine?
< wumpus>
MarcoFalke: no time to check at the moment, but I'm ok with disabling the scheduler tests if people are experiencing issues with them
< jonasschnelli>
would it make sense to auto-choose a higher nDefaultDbCache in case bitcoind/qt detects more available ram?
< wumpus>
(if so many people can reproduce them it doesn't mean much whether I can or not anyway...)
< wumpus>
jonasschnelli: *up to a limit*
< jonasschnelli>
I think for most modern server systems 100MB (current default) is very low.
< wumpus>
jonasschnelli: e.g. scale down for small systems, scale up a bit for large systems, but please don't make an application use X% of ram indiscriminately. That's awful and rude.
< jonasschnelli>
Hmm... right... maybe it's fine if the user just tweaks the -dbcache manually.
< jonasschnelli>
sipa asked about adding an option in Qt ("provide X MB of ram for bitcoin-qt")...
< wumpus>
e.g. say you buy more ram because your system is slow/runs out of memory, next boot some background applications start claiming it all, and it's like nothing changed
< jonasschnelli>
hah. right.
< wumpus>
jonasschnelli: usually such a thing is done by having multiple system profiles, and allowing the user to choose between them (and having a custom option for advanced users)
< wumpus>
jonasschnelli: it could auto-select one based on some information gathered from the system
< jonasschnelli>
Yes. That would be nice...
< wumpus>
just make clear what the compromise is: sync somewhat slower but less RAM usage, versus sync faster but lots of RAM wasted
< jonasschnelli>
Is changing the cache size during runtime possible (for CCoinsViewDB)? Flushing, deleting pcoinsdbview, and recreating it?
< wumpus>
hm or maybe an option to reduce the dbcache after the initial sync?
< jonasschnelli>
wumpus: this is a very good point. A big cache size mainly helps for IBD performance...
< wumpus>
right now it's not possible to change the dbcache size during run time. I don't think it would be very hard to add if necessary, at least for the coinsviewcache
< wumpus>
(dbcache is divided up between some caches, I don't know if they're all easy to resize/reconfigure)
< jgarzik>
phantomcircuit, I think it's more there's a grey area about how nodes will behave if a remote peer only has a subset of the full chain - sometimes they can respond to block requests, sometimes not.
< jgarzik>
phantomcircuit, older nodes that don't getheaders might get stuck in IBD?
< jtimon>
and if #7145 cannot be accepted for some reason, there's probably no point in me participating on any mempool/policy encapsulation anymore, and I will be able to stop distracting other devs and myself with it
< phantomcircuit>
jgarzik, older peers would definitely get stuck
< phantomcircuit>
hmm it seems the only way to "prime" the cache view is to reindex...
< GitHub54>
[bitcoin] paveljanik opened pull request #7146: Name union to prevent compiler warning (master...20151201_prevector_name_union) https://github.com/bitcoin/bitcoin/pull/7146
< jgarzik>
hmmm. !NODE_NETWORK should have no problem relaying blocks... they just won't get asked for them.
< * jgarzik>
goes to test
< phantomcircuit>
jgarzik, correct
< phantomcircuit>
(probably)
< sdaftuar>
jgarzik: the headers announcements PR should mean that pruned nodes can relay blocks just fine now
< sdaftuar>
including if there's a reorg that isn't more than 8 blocks
< jgarzik>
sdaftuar, for keeping !node_network you mean? Because otherwise that doesn't fix older nodes that would get stuck
< jgarzik>
I wonder if we need NODE_RELAY
< sdaftuar>
i'm specifically talking about relaying new blocks, not helping other nodes sync
< sdaftuar>
ah are you asking what it would take to let pruned nodes serve up historical blocks?
< jgarzik>
No just looking at all the angles of pruning with fresh eyes
< jgarzik>
even with !node_network and just relaying new blocks, I presume a block locator can still be returned at a pruned boundary-and-beyond
< sipa>
yes, locators are built from the block index
< sipa>
they don't need the actual blocks
< sdaftuar>
so i think 0.9 and earlier nodes will download blocks in a reorg from a pruned peer fine via their getblocks mechanism
< sdaftuar>
we patched the getblocks response so that it won't return inv's for blocks that aren't on disk
< sdaftuar>
0.10 and 0.11 nodes will indeed not be able to download reorgs from pruned peers
< sdaftuar>
0.12 and later nodes should work fine for downloading reorgs from pruned peers, via the direct-fetch mechanism on headers messages
< sipa>
will 0.10 and 0.11 get disconnected if they try?
< sdaftuar>
they won't even try
< jgarzik>
Sounds like pruned nodes should only announce blocks to >= 0.12 peers
< sipa>
we still need granularity in the service flags
< sdaftuar>
well there's not much harm in inv'ing the tip to 0.11 and earlier peers is there?
< sipa>
as even 0.12 just won't connect to non-network nodes
< sipa>
sdaftuar: it may hide problems, i wouldn't
< jgarzik>
thus the proposed NODE_RELAY
< sipa>
sdaftuar: people may think that it works fine, until it doesn't due to reorg
< sdaftuar>
hmm... we merged that change a long time ago (to relay the tip)
< sipa>
oh well
< jgarzik>
pruning defaults to off so nearly nobody uses it
< jgarzik>
plenty of freedom to change
< sdaftuar>
it's not too late i suppose to change it so that in 0.12, pruning nodes only relay if headers announcements are on?
< sipa>
sdaftuar: that does seem reasonable
< sdaftuar>
0.11 didn't have relay on for pruning at all, so no expectations yet about how it should work i'd guess
< jgarzik>
nod
< sdaftuar>
it does needlessly harm 0.9 and 0.8 nodes... but probably not many of those left
< sipa>
sdaftuar: what if there is a +8 deep reorg?
< sdaftuar>
then we revert to inv
< phantomcircuit>
Luke-Jr, fyi switching to longpoll significantly increased my latency
< phantomcircuit>
by like 4 seconds
< sipa>
in the test framework is there anything i need to do to call a new RPC?
< sipa>
JSONRPC error: Method not found
< sipa>
sounds like bitcoind actually doesn't have it?
< gmaxwell>
jgarzik: pruned nodes have relayed fine at the tip in master, though if there are reorgs they won't advertise them; with later changes they should also handle reorgs but I don't know if anyone has really tried that. I kept a node behind a pruned node for a while a month or so ago until moving on to test other things.
< jgarzik>
Ideal goal is a lightweight router mode that contributes usefully to the network (subjective definition, I know)
< gmaxwell>
well we have that now, fortunately (I think), we can't yet signal it explicitly-- though for blocks at the top we don't have to-- it'll just work; though I think it's probably good to finish hammering out what a pruned node is before we define a service flag as signifying the minimum it must be willing to do.
< GitHub151>
[bitcoin] MarcoFalke reopened pull request #7084: mempool: Replace maxFeeRate of 10000*minRelayTxFee with maxTxFee (master...MarcoFalke-2015-mempoolMaxTxFee) https://github.com/bitcoin/bitcoin/pull/7084
< morcos>
sipa: woohoo thanks! now i can start changing it again. :)
< Luke-Jr>
sigh, so I guess I'm going to need to strongly recommend miners not upgrade to 0.12?
< morcos>
Luke-Jr: or you could still advocate for 6357 to be merged... but honestly i don't think it makes much difference, have you looked at how different it is to only use original in-chain inputs?
< morcos>
For instance if you are not sending chains of txs, then there is no difference
< morcos>
if your tx has at least a significant part of its inputs already in the chain, then it'll still increase in priority and eventually get mined
< morcos>
(of course, assuming it's not evicted before that)
< Luke-Jr>
morcos: hm, that's a point.. and I guess the bad policy default (priority size = 0) can be overridden still
< Luke-Jr>
6357 appears to have been auto-closed by bad GitHub logic.. are you able to reopen it?
< morcos>
yes i really don't think its that bad
< morcos>
i assume i could, it might need rebasing, but if you really want it, why don't you just take the commits. i'd prefer to have you come around to the view that 7008 (lower bound priority) is really good enough, especially with the tradeoff with complexity
< Luke-Jr>
7022 is pretty bad :/
< Luke-Jr>
morcos: "complexity" as in CPU time is a consideration, but not "complexity" as in a few more lines of code since I'm going to need to maintain that code regardless of whether it's in Core or not.
< sipa>
Luke-Jr: i really don't believe the existing priority system is worth maintaining; it is a burden on performance and maintenance
< Luke-Jr>
in practice, it's more complexity in LOC/maintenance to have it NOT in master
< sipa>
that's your view
< morcos>
Luke-Jr: the complexity i'm concerned with is complexity of correctness
< sipa>
i have no problem with creating a mechanism that takes coins value/age into account for prioritization, however
< Luke-Jr>
sipa: that's my view, as the guy maintaining the mining stuff..
< sipa>
Luke-Jr: i know that's your view; i respectfully disagree, and i'm not alone
< morcos>
sipa: i love it how even you have your irrational pet projects. :)
< sipa>
morcos: if everything in bitcoin was rational, nobody would have started using it
< Luke-Jr>
sipa: your disagreement appears to be about a cost that I bear specifically.
< jgarzik>
It's a feature specifically -for- miners. What do -most- miners want? Can't make a specifically-for-mining feature that miners don't use :)
< Luke-Jr>
unless I have misunderstood that, I ask that my evaluation of my own costs be respected with a slightly higher weight :/
< Luke-Jr>
jgarzik: majority of miners have no business dictating what minority of miners do. and lots of miners *do* use the priority space.
< morcos>
Luke-Jr: priority is irreparably broken with 0.12 and limited mempools that don't respect it
< Luke-Jr>
the miners who have disabled it, have done so because of the performance issues that are now fixed
< morcos>
when we somehow decided to not address that or failed to decide to address it
< morcos>
i think we put ourselves clearly on the path to removing priority
< Luke-Jr>
morcos: I thought we had decided to address it, but nothing got done :p
< Luke-Jr>
morcos: but if that's a serious issue, then it puts us back to "miners should not upgrade to 0.12" :/
< morcos>
that will be my first pull for 0.13, because it's going to make it so much easier to do everything after. and i'll get to steal a bunch of deleted code lines that jgarzik would otherwise have gotten credit for :)
< sipa>
0.12 will work with priority fine as a miner, if you have a sufficiently large mempool
< Luke-Jr>
after what?
< jgarzik>
heh
< Luke-Jr>
sipa: that seems like a reasonable expectation of miners IMO
< morcos>
Luke-Jr: working with the rest of the mempool/mining code will be much easier when priority is gone
< sipa>
but yes, it is my view (and i think of most other developers) that the priority as a separate sorting criterion will go away
< Luke-Jr>
morcos: priority will never be gone
< morcos>
it could theoretically be implemented by a quite complex companion program which just selectively applied feeDeltas i suppose
< Luke-Jr>
morcos: it shouldn't need to be
< Luke-Jr>
it should be as simple as it is now, or simpler.
< Luke-Jr>
anything else is harmful to Bitcoin
< jgarzik>
hyperbole
< jgarzik>
Operative questions to me: (1) Do most CNB consumers care about this or just luke? (2) Is there a way to have cake & eat it too - perform a second pass for priority exclusively in the mining code for any stragglers that failed the first pass, applying some local map filter?
< jgarzik>
with the goal of #2 being avoiding all the current complexity that the consensus of devs agrees is there
< Luke-Jr>
do I need to go make a big deal about this on reddit etc to prove (1)?
< Luke-Jr>
(most people right now don't know anything about this discussion and won't understand it without explaining the details.)
< jgarzik>
a CNB consumer = a CNB user = a miner
< jgarzik>
It sounds like the ideal solution is a post-pass filter for applying some site local policy
< jgarzik>
maybe a pre-pass filter for spam filtering
< jgarzik>
(though 99% of that happens via IsStandard)
< sipa>
jgarzik: so the scoring for CNB is just a single sort function now, that could be overridden to be anything
< sipa>
jgarzik: priority is more complicated because it needs re-evaluation
< sipa>
so anything that doesn't need re-evaluation would be trivial to do
< morcos>
sdaftuar and i have been sitting here trying to figure out if there would be a nice way to add back (for 0.13) a way to fill a portion of the block with an alternate metric
< morcos>
but the main thing i keep coming back to is non fee based metrics are broken for other purposes
< morcos>
relay
< morcos>
mempool limiting
< morcos>
so how much benefit is there to be able to mine based on a metric if you can't ensure that txs with a high score will even propagate
< morcos>
and by fee based, i mean including feeDelta
< Luke-Jr>
morcos: you can't ensure anything with propagate ever
< morcos>
we've certainly worked to make that still work, although to tell you the truth, i don't think we fixed pre-existing bugs for feeDelta not influencing ATMP
< morcos>
also sdaftuar just pointed out feeDelta for trimming hasn't been merged yet, that should be tagged for 0.12
< morcos>
should we also fix it so feeDelta applies for ATMP? seems like yes. that'll make it easier for people to experiment with whether using feeDeltas can get them close enough to achieving the policy they want
< morcos>
but that would mean relaying based on feeDelta too
< sipa>
i think that's reasonable
< sdaftuar>
i can add a commit to 7062 to address that
< GitHub68>
[bitcoin] luke-jr opened pull request #7149: Bugfix: Correctly calculate priority when inputs are mined after dependent transactions enter the memory pool (master...bugfix_priority) https://github.com/bitcoin/bitcoin/pull/7149
< Luke-Jr>
morcos: ^
< Luke-Jr>
sipa: re release notes "Unlike earlier versions, unconfirmed but non-conflicting transactions will never get a negative confirmation count." afaik that never could happen in earlier versions..?
< GitHub39>
[bitcoin] pstratem opened pull request #7150: Print size of pcoinsTip cache in AcceptToMemoryPool (master...2015-12-1-printcachesize) https://github.com/bitcoin/bitcoin/pull/7150
< sipa>
Luke-Jr: by conflicting i mean conflicting with the chain
< Luke-Jr>
sipa: I'm aware..
< sipa>
earlier, conflicting with the mempool was enough to get -1 confirms
< Luke-Jr>
ah
< Luke-Jr>
was that actually possible? O.o
< sipa>
maybe it was very hard to reach that
< sipa>
as the wallet tried to get its transactions in the mempool at startup
< Luke-Jr>
I wonder if this should be considered a bugfix and backported
< sipa>
i agree the explanation there is confusing
< Luke-Jr>
maybe with a min(-1) on the count
< sipa>
it's mostly needed because of mempool eviction
< sipa>
so i wouldn't consider it a bugfix pre-0.12
< Luke-Jr>
k
< Luke-Jr>
7096 has me confused about whether it is a fix, part fix, or what :|
< gmaxwell>
Luke-Jr: On the priority stuff, why not make it work so that it can run using an external policy server that adjusts the transaction score (feerate) using the rpc?
< gmaxwell>
Luke-Jr: then that gets policy beyond the basic behavior out of bitcoind, and probably fixes all the performance concerns too.
< Luke-Jr>
gmaxwell: that makes policy unnecessarily and gratuitously complicated to implement.
< Luke-Jr>
and doesn't address the mempool side
< gmaxwell>
But it's good because it makes it infinitely configurable without turning everyone into a bitcoind hacker.
< Luke-Jr>
gmaxwell: I'm already working on an infinitely-configurable change that doesn't gratuitously complicate things.. but it takes time :/
< gmaxwell>
The eviction uses the modified score (feerate), no? so it's only ingress thats not addressed.
< gmaxwell>
also the process I described is inherently safe, inside bitcoind if a miner configures things (esp by hacking code) they risk making their daemon crash.
< gmaxwell>
And can't do things like anti-spam analysis or whatnot because it's in the critical path.
< Luke-Jr>
gmaxwell: none of this is possible for 0.12
< gmaxwell>
an external daemon would work with git master right now.
< jgarzik>
for ingress, pass firstseen rejected TXs to external as well