< gmaxwell>
I triggered some kind of hang or deadlock today on master by running bitcoin-cli stop shortly after starting bitcoind but before it had come up. Was in a rush to fix something else so I didn't attach a debugger before killing it. Mentioning so that if someone else encounters it, you're not imagining it.
< gmaxwell>
will try to reproduce tomorrow.
< gmaxwell>
[OT] I was really impressed by the technical accuracy of today's SMBC, then I saw it had a co-author.
< gmaxwell>
The purpose of the change is to return an error if you ask for serialization version 9 on software that supports 0/1.
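A self-contained C++ sketch of that idea; the names here are illustrative, not the actual patch (which appears to be #9292, closed further down in the log):

    // Hypothetical sketch (not the actual #9292 patch): reject requests for
    // serialization versions this software does not support (only 0 and 1).
    #include <cstdint>
    #include <stdexcept>
    #include <string>

    static const int64_t MAX_RPC_SERIALIZE_VERSION = 1;

    void CheckRpcSerialVersion(int64_t requested) {
        if (requested < 0 || requested > MAX_RPC_SERIALIZE_VERSION)
            throw std::runtime_error("Unknown rpcserialversion requested: " +
                                     std::to_string(requested));
    }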
< BlueMatt>
wumpus: didnt say backlog, said critical to address current ongoing network issues
< instagibbs>
yes I think that's well understood
< gmaxwell>
we can talk about that next.
< wumpus>
would have been useful if luke-jr was here
< kanzure>
hi. late.
< gmaxwell>
oh missed his comment.
< phantomcircuit>
#9332
< gribble>
https://github.com/bitcoin/bitcoin/issues/9332 | Let wallet importmulti RPC accept labels for standard scriptPubKeys (on top of #9331) by ryanofsky · Pull Request #9332 · bitcoin/bitcoin · GitHub
< gmaxwell>
I read luke's comment as saying he wanted a "max you support" version.
< phantomcircuit>
you mean 9322?
< gmaxwell>
and the response was that this was expected to be the default. Or at least thats my understanding.
< gmaxwell>
I agree that being able to ask for a max possible is fine. (though 9999 isn't an especially good number for it. :P)
< instagibbs>
I think #9262 is ready, but some disagreement over default value?
< gribble>
https://github.com/bitcoin/bitcoin/issues/9262 | Prefer coins that have fewer ancestors, sanity check txn before ATMP by instagibbs · Pull Request #9262 · bitcoin/bitcoin · GitHub
< jtimon>
do we have a topic?
< gmaxwell>
jtimon: pr backlog
< wumpus>
gmaxwell: in any case that doesn't have to be done in that pull, so we can just go ahead and merge it
< gmaxwell>
ACK
< gmaxwell>
in 9262 I don't believe this should default to on, for the same reason that spending unconfirmed coins is enabled by default.
< gmaxwell>
The transactions will be queued in the wallet and periodically rebroadcast (due to other fixes) and go out once they're no longer overlimit.
< gmaxwell>
the meat of the change was avoiding those cases (sometimes) when it could.
< cfields>
#9289 is holding up the next round of changes, and I believe the linked issue is unrelated
< bitcoin-git>
[bitcoin] laanwj closed pull request #9292: Complain when unknown rpcserialversion is specified (master...nofutureserial) https://github.com/bitcoin/bitcoin/pull/9292
< wumpus>
cfields: agreed
< wumpus>
ok, so #9262 off by default? should it still be backported then?
< gribble>
https://github.com/bitcoin/bitcoin/issues/9262 | Prefer coins that have fewer ancestors, sanity check txn before ATMP by instagibbs · Pull Request #9262 · bitcoin/bitcoin · GitHub
< BlueMatt>
cfields/wumpus: I think there is a fix commit for 9212 on the issue page at the bottom (I havent pr'ed yet because testing, but I think it'd work)
< gmaxwell>
wumpus: yes, it should, the main thing in the change is that it avoids creating those poorly propagating transactions when it's possible.
< gmaxwell>
(My opinion)
< sipa>
wumpus: 9262 does 2 things: 1) avoid long chains, 2) pre-reject created wallet transactions that would exceed limits
< wumpus>
gmaxwell: so it still does something even if it's disabled? okay
< sipa>
wumpus: only 2 is optional
< wumpus>
okay, right, that wasn't clear to me
< wumpus>
BlueMatt: ok, will test that too
< instagibbs>
yes so with default off it will simply try harder to pick coins that have shorter chain length
< instagibbs>
rather than blindly
< sipa>
which won't have an effect if you're always sending your full change
< sipa>
but better is better
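A rough, self-contained sketch of what "try harder to pick coins with shorter chain length" could look like; the types and function below are made up for illustration, not taken from #9262:

    // Illustrative sketch, not #9262's code: bias coin selection toward
    // candidates with fewer unconfirmed in-mempool ancestors.
    #include <algorithm>
    #include <cstdint>
    #include <vector>

    struct CandidateCoin {
        int64_t value;           // amount in satoshis
        unsigned ancestor_count; // unconfirmed ancestors (0 = confirmed coin)
    };

    void PreferShortChains(std::vector<CandidateCoin>& coins) {
        // Stable sort keeps the existing order among equals, so the normal
        // selection logic still applies within each ancestor-count class.
        std::stable_sort(coins.begin(), coins.end(),
                         [](const CandidateCoin& a, const CandidateCoin& b) {
                             return a.ancestor_count < b.ancestor_count;
                         });
    }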
< cfields>
BlueMatt: the reason I didn't do that is that it hides the previous behavior. The current asserts point out issues that need to be backported to 0.13
< cfields>
(which admittedly should've been loud errors rather than asserts)
< gmaxwell>
The original suggestion to create that change was (1) based on me actually encountering users that could have avoided the long chains.
< btcdrak>
here
< wumpus>
cfields: critical issues? or nothing that needs to hold up 0.13.2?
< gmaxwell>
cfields: I had a node go down with-- we think-- that assert.. but can't tell where it was triggered from.
< sipa>
cfields: do they really need backporting?
< cfields>
wumpus: likely nothing critical, just possible data leaks
< BlueMatt>
cfields: why are those data leaks? anyway, I think we previously discussed not using nVersion != 0 for this check
< wumpus>
just one I mean, the other *is* a backport
< sipa>
cfields: i'd say that such issues are things where we're certainly violating some of our own assumptions about how the p2p implementation works, but unlikely things that cause issues in interaction with other nodes
< cfields>
any assert represents some case where we should be disconnected, but instead are still sending/responding.
< jtimon>
#8855 could need rebase if there's new uses of Params(std::string), but if there are, they won't necessarily cause git conflicts
< BlueMatt>
cfields: no, in this case it means we are sending, but havent yet sent version message
< gmaxwell>
wumpus: I believe #9352 will be tagged for backport-- but it's too green to comment on it for the moment.
< wumpus>
gmaxwell: that's too bad, I hoped we could finally get this over with this week
< cfields>
BlueMatt: right, which specifically here means that we've refused the connection due to missing connection flags, but we're still sending/responding
< wumpus>
gmaxwell: can't it wait for 0.13.3?
< sdaftuar>
gmaxwell: should i go ahead and open the backport version of #9352?
< BlueMatt>
wumpus: its a relatively simple patch, I'm hopeful we still can :)
< instagibbs>
I will review asap
< cfields>
BlueMatt: let's take it up after the meeting
< BlueMatt>
cfields: sure
< wumpus>
okay, any other topics to discuss?
< gmaxwell>
sdaftuar: I think that would be useful.
< gmaxwell>
wumpus: I really want 0.13.2 in RC ASAP. just have some specific concerns about needing that. We'll work through it.
< MarcoFalke>
Could 9262 delay the rc?
< MarcoFalke>
Is it well tested?
< jtimon>
#8498 has been in the backlog for a while too (before that, #6445 was waiting for #6526/#6625/#6557 and friends, which were merged or closed long ago)
< MarcoFalke>
(Note that it was not yet merged into master)
< gmaxwell>
MarcoFalke: I've tested the heck out of it. dunno about others.
< MarcoFalke>
(I haven't really looked at it)
< cfields>
wumpus: regarding the assertion backports, nothing would be a regression from 0.13, so no need to delay, only a bonus if we get fixes in.
< btcdrak>
sdaftuar: ack on backport #9352
< gmaxwell>
MarcoFalke: it's the oldest of these long-chain wallet fixes, just last to merge. as it had lots of opportunities for shed painting and resulted in deciding to fix the other issues. :)
< wumpus>
MarcoFalke: there was at least the discussion to disable the setting by default, but after that change I don't know why it should hold up anything
< wumpus>
MarcoFalke: I don't think there's any critical concerns with it left
< gmaxwell>
with the default off it only changes 'non-deterministic' behavior.
< gmaxwell>
(selectcoins)
< sipa>
the patch always had the setting off by default - i was the one arguing that it should be on by default instead (and it seems few people agree, fine)
< instagibbs>
Hm? it was on before
< instagibbs>
but this is pre other 2 changes
< sipa>
oh? maybe before i looked at it :)
< wumpus>
let's just settle on having it disabled by default in the initial merge and the backport, it can always be set to be enabled by default later...
< gmaxwell>
sipa: you could argue for that in 0.14 later.
< gmaxwell>
that.
< MarcoFalke>
Agree wumpus
< wumpus>
there's no need to fix everything in one pull, or one version for that matter, sometimes things are held up too long on minor discussion points
< instagibbs>
better is better
< wumpus>
right.
< MarcoFalke>
morcos: gmaxwell: Do you have a strong opinion about the fLimitFree flag in the #9290
< MarcoFalke>
backport?
< gmaxwell>
sometimes better is worse, there is totally like an essay on this. :P
< jtimon>
sipa: just said fine on not having it on by default, didn't he?
< wumpus>
yes he did, I meant in general
< sipa>
jtimon: yes, i'm fine with it being off
< wumpus>
MarcoFalke: ah yes that's an important point
< gmaxwell>
MarcoFalke: Didn't see your question until now. will evaluate.
< MarcoFalke>
Imo it should not matter too much, but I'd rather have a second opinion
< bitcoin-git>
[bitcoin] sdaftuar opened pull request #9357: [0.13 backport] Attempt reconstruction from all compact block announcements (0.13...backport-optimistic-cb) https://github.com/bitcoin/bitcoin/pull/9357
< MarcoFalke>
I haven't checked if it caused issues with txes evicted from the pool due to low fee.
< gmaxwell>
I need to look into it carefully to make a decision on my view, not going to manage it during the meeting.
< MarcoFalke>
ok, other topics?
< morcos>
MarcoFalke: I hadn't seen the fLimitFree thing before now, I'll take a look and get back to you after... (same as gmaxwell)
< MarcoFalke>
great, thx
< gmaxwell>
We could talk about the compact block announcement stuff, not the backports but the change, just so people know what the change is about.
< wumpus>
#topic compact block announcement stuff
< gmaxwell>
Right now, if someone sends us a header, we request a block and mark the block in flight. If a compact block (e.g. from an HB mode sender that sends unsolicited ones) shows up while we're waiting.. we just ignore it, instead of trying to reconstruct the block.
< gmaxwell>
This means that if a peer is broken and slowly transmits or fails to reply, the HB mode will fail to work around it.
< gmaxwell>
There is a deep rabbit hole we can go down towards optimal behavior, but what is proposed right now is a super minimal change where even if a block is in flight, we'll still see if we can recover the whole block from just the compact block. And if we can, we take it, and mark the block as complete.
< wumpus>
sounds sensible
< gmaxwell>
greater than 2/3rds of all blocks can be recovered from just the compact block (varies a lot based on miner/network behavior) so even this small improvement should be a pretty big help.
< wumpus>
there seems some potential for race conditions though
< BlueMatt>
this is especially important with prefill, where, if your peer upgrades to prefill txn in the announcement, you can sometimes recover the block and recover from stalling without yourself upgrading
< wumpus>
what if the compact block is reconstructed, and then the inflight block comes in?
< gmaxwell>
wumpus: yes, though we don't count on the in-flight to protect against that, and if a full block shows up right now we'll accept it.
< sdaftuar>
wumpus: should not be a problem. there's generally no downside to receiving a block you already have.
< gmaxwell>
wumpus: then its just like someone sending us an unsolicited full block, which we'll process if it's not best already.
< wumpus>
sdaftuar: in general there's no downside, just thought it'd be a potential edge case, but if that's handled that's ok
< gmaxwell>
In any case, I think that summarizes where that is, I have several nodes testing live right now.. obviously will need review and testing.. but I just wanted everyone to know what was going on there.
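In outline, the minimal change described above might look like this self-contained sketch; every type and name below is an illustrative stand-in, not Bitcoin Core's actual API:

    // Illustrative sketch of the proposed behavior; types and names are
    // stand-ins, not Bitcoin Core's actual API.
    #include <map>
    #include <optional>
    #include <string>

    struct CompactBlock { std::string hash; /* short txids, prefilled txn */ };
    struct Block { std::string hash; };

    // Stand-in: succeeds only if every short txid resolves from the mempool
    // (or the prefilled transactions), i.e. no extra round trip is needed.
    std::optional<Block> ReconstructFromMempool(const CompactBlock& cb) {
        (void)cb;
        return std::nullopt; // stub for illustration
    }

    std::map<std::string, bool> g_blocks_in_flight; // hash -> requested

    void OnCompactBlockAnnouncement(const CompactBlock& cb) {
        // Old behavior: if the block is already in flight (e.g. requested
        // after a header), an unsolicited compact block is ignored.
        // Proposed: still attempt a zero-round-trip reconstruction first.
        if (std::optional<Block> block = ReconstructFromMempool(cb)) {
            g_blocks_in_flight.erase(block->hash); // mark request complete
            // ... hand the fully reconstructed block to validation ...
        }
        // Otherwise fall through: keep waiting on the in-flight request.
    }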
< jtimon>
thanks, I assume more questions about this or other topics?
< achow101>
when are we planning to start rc'ing for 0.13.2
< wumpus>
any other topics? if not we'll close the meeting early
< sipa>
very short report: gmaxwell and i have been experimenting with a per-txout utxo cache approach
< gmaxwell>
Close meeting early and make 0.13.2 great again ACK.
< sipa>
so far results don't look too promising
< wumpus>
heh
< morcos>
sipa: yeah i haven't looked at that yet
< morcos>
i'm surprised!
< morcos>
i was super optimistic
< sipa>
me too
< wumpus>
sipa: so grouping the utxos per transaction turns out to have been a good optimization? I'm surprised too
< gmaxwell>
Well when it's operating totally in memory it's 15% faster even though sipa has not exploited the new structure for better cache intelligence (so its still doing the same dumb flush thing). But when leveldb came into the picture it ate dirt.
< morcos>
15% is for babies
< instagibbs>
what level are you on morcos :)
< sdaftuar>
i'm going to give a cheers for the sigcache cuckoocache merge now!
< jtimon>
mhm, haven't looked at the branch, are the utxos cached per txout but stored per-tx?
< sipa>
jtimon: both per txout
< gmaxwell>
Assuming the issue isn't extra debugging sipa added, the downside is perhaps that it is just much harder on leveldb and writes a lot more traffic to the leveldb log.
< BlueMatt>
gmaxwell: seems like something where you could do per-utxo in memory and per-tx on disk?
< wumpus>
BlueMatt: yes I was about to suggest that too
< gmaxwell>
The real gains from the change would come from making the cache smarter, so I thought 15% was great news.. since that likely came from reduced malloc traffic.
< BlueMatt>
i mean might lose all the performance on the boundary, but its worth a shot
< jtimon>
sipa: thanks. mhmm, yeah, this is surprising then
< sipa>
BlueMatt: that doesn't solve the O(n^2) issue with large transactions
< gmaxwell>
BlueMatt: yes, I made that observation too.... but it means that read modify write cycles would be needed.
< wumpus>
gmaxwell: yeah that would be bad...
< wumpus>
lookups are slow, if you need read-modify-write cycles it's not going to help performance
< sipa>
the O(n^2) issue is that a tx with many outputs on every spend needs to write n-i outputs to the database
< gmaxwell>
wumpus: yes, though it might pay for itself by the cache being much more effective. I guess we won't know until after more testing.
< cfields>
sdaftuar: +1. Still catching up, didn't see that got merged. Great to see :)
< gmaxwell>
the other negative is that it looks like this change will require a chainstate reindex. making it compatible with undo files seems really hard.
< sipa>
basically my reason for wanting per-txout cache is that the current behaviour may be good on average, but it's terrible for big transactions
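For context, a self-contained sketch of the two layouts under discussion (illustrative types, not the real chainstate structures). With per-tx records, spending an n-output transaction one output at a time rewrites roughly n + (n-1) + ... + 1, about n^2/2, output entries over its lifetime, which is the O(n^2) behavior sipa describes; per-txout records make each spend touch exactly one entry:

    // Illustrative key layouts, not the actual Core types.
    #include <cstdint>
    #include <map>
    #include <string>
    #include <tuple>
    #include <vector>

    struct TxOutEntry { int64_t value; std::vector<uint8_t> scriptPubKey; };

    // Current model: one record per transaction. Spending a single output
    // dirties the whole record, so the remaining outputs get rewritten on
    // every flush.
    std::map<std::string /*txid*/, std::vector<TxOutEntry>> per_tx_cache;

    // Experimental model: one record per output. A spend touches exactly
    // one entry, avoiding the rewrite blowup for large transactions.
    struct OutPointKey {
        std::string txid;
        uint32_t n;
        bool operator<(const OutPointKey& o) const {
            return std::tie(txid, n) < std::tie(o.txid, o.n);
        }
    };
    std::map<OutPointKey, TxOutEntry> per_txout_cache;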
< jtimon>
maybe somehow writing txouts in batches could help? (thinking out loud, may be a stupid thought)
< wumpus>
requiring everyone to do a chainstate reindex would be bad too :/
< sipa>
jtimon: we're already batching _all_ writes from many blocks
< jtimon>
sipa: I see, it was a stupid thought
< sipa>
anyway, just reporting on an experiment - nothing more at this point
< morcos>
gmaxwell: i'm not sure what you mean about making the cache smarter
< gmaxwell>
wumpus: right now everyone's chainstate is corrupted... so at some point we'll need to do something about that. (TXversions)
< wumpus>
writes are pretty fast with leveldb, it's the lookups/reads that are slow, especially on slow disks
< sipa>
morcos: not wiping the cache after a write
< morcos>
in my view once its only keeping utxos that were actually accessed and not the rest that tagged along with the tx, then thats as smart as it gets
< Chris_Stewart_5>
Are we thinking txs are going to become larger in terms of inputs/outputs as Bitcoin grows? UTXO size is constantly growing right?
< morcos>
sipa: sure but you still have to do something when you hit memory limits
< sipa>
Chris_Stewart_5: i wish it were not growing at all
< wumpus>
batching writes more is not going to help, and batches are already huge in memory
< morcos>
you can save the things that are in blocks from the top of your mempool, but thats really small... small enough that it can be done pretty effectively with the existing model
< gmaxwell>
morcos: yes the right thing to do is to expire only the oldest entries at that point. Which is much cleaner when there is no such thing as entry mutation.
< Chris_Stewart_5>
I guess it is just interesting to hear the tidbit about terrible performance of large txs.
< wumpus>
gmaxwell: requiring everyone to reindex at the same time is not an acceptable solution though :)
< morcos>
ah, oldest, yes ok, but that requires extra state
< wumpus>
gmaxwell: maybe it could support two database versions for a while
< sipa>
Chris_Stewart_5: in general, we need to optimize worst-case performance, not average performance
< wumpus>
gmaxwell: new reindexes/syncs would use the new format
< wumpus>
in any case, thanks for trying this experiment
< sipa>
Chris_Stewart_5: as a large difference between worst-case and average means we could be missing DoS opportunities where an attacker can force us into the worst case
< gmaxwell>
wumpus: if it made it N-fold faster, then reindex on a new version... might be something we could have happen. I think perhaps we'd want to finish your snapshotting work and other things at the same time. ... in any case it's just an experiment now.
< wumpus>
even if it turns out it's not better it's good to know this for sure
< gmaxwell>
it also has resulted in some other optimizations, e.g. the flushing optimization PR that we have right now.
< sipa>
Chris_Stewart_5: but it's really sad when you need to decrease your average performance in order to improve the worst case... because people don't observe the worst case
< wumpus>
gmaxwell: if it was possible to convert the old database to the new database without a reindex (e.g. just rewriting) then an upgrade process would be acceptable. But a full reindex? no
< gmaxwell>
Good, the meeting has run over, so all is well with the world. :)
< wumpus>
#endmeeting
< lightningbot>
Meeting ended Thu Dec 15 20:01:08 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
< sipa>
wumpus: i believe it is convertible, but it's nontrivial
< gmaxwell>
wumpus: that would be possible, though perhaps a lot of code.
< Chris_Stewart_5>
sipa: Thanks for the food for thought. I appreciate the extra explanation :-)
< wumpus>
gmaxwell: ah yes thanks for reminding me of the snapshotting
< sipa>
the tx height/coinbase is currently only stored in the undo data for the last spend from one tx's outputs, and needs to be stored in all
< sipa>
but that can be done by walking the undo data backwards (which is always possible, even in pruned mode), and building a temporary database with tx->metadata maps, and using that to rebuild the undo data in the new format
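A rough outline of that conversion; the undo record I/O is left as comments since the real formats are out of scope, so this is an assumption-laden sketch, not working migration code:

    // Outline only: walking undo data backwards to recover per-tx metadata.
    #include <cstdint>
    #include <map>
    #include <string>

    struct TxMeta { int32_t height; bool is_coinbase; };

    void ConvertUndoData(int tip_height) {
        std::map<std::string, TxMeta> meta; // temporary txid -> metadata map
        for (int h = tip_height; h >= 1; --h) {
            // For each spend record in block h's undo data:
            //  - if the record carries height/coinbase metadata (old format:
            //    only the last spend of a tx's outputs has it), record it:
            //        meta[txid] = {height, is_coinbase};
            //  - rewrite the record in the new format using meta[txid];
            //    walking backwards guarantees the metadata was already seen.
        }
        (void)meta; // outline only; nothing is actually written here
    }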
< jtimon>
wumpus: what snapshotting?
< wumpus>
jtimon: automatic utxo database backups
< * jtimon>
nods
< gmaxwell>
morcos: my thought was that with the per-utxo model we could simply have a list of keys as they're read into the cache... and then when the cache is full, pop from the beginning of the list and flush those entries... to take it back to 90% or something (whatever is big enough to have a reasonable batch size for leveldb).
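A self-contained sketch of that oldest-first partial flush (illustrative only, not Core code):

    // Sketch of oldest-first partial flushing, with stand-in types.
    #include <cstddef>
    #include <deque>
    #include <string>
    #include <unordered_map>

    class UtxoCache {
        std::unordered_map<std::string, std::string> entries_; // key -> coin
        std::deque<std::string> order_;                        // oldest first
        size_t limit_ = 1000;

    public:
        void Insert(const std::string& key, const std::string& coin) {
            if (entries_.emplace(key, coin).second)
                order_.push_back(key); // track age only for new keys
            if (entries_.size() > limit_) Trim(limit_ * 9 / 10);
        }

    private:
        void Trim(size_t target) {
            // Flush the oldest entries in one reasonably sized batch,
            // rather than wiping the whole cache as the current code does.
            while (entries_.size() > target && !order_.empty()) {
                const std::string key = order_.front();
                order_.pop_front();
                auto it = entries_.find(key);
                if (it == entries_.end()) continue; // already erased/spent
                // ... append it->second to the database write batch here ...
                entries_.erase(it);
            }
        }
    };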
< wumpus>
sipa: btw I still get no results from "host seed.bitcoin.sipa.be"
< gmaxwell>
Sipa noticed that right now we end up using somewhat less than twice our memory limit; the flush process copies the data being flushed.
< morcos>
gmaxwell: wow, really... i didn't realize that
< morcos>
sdaftuar also points out that just deleting spent entries will help
< gmaxwell>
morcos: well their deletion needs to hit the disk if their creation ever did.
< morcos>
i had observed that it wasn't THAT helpful, but that was with requiring the whole TX to be spent... should be a much bigger effect
< gmaxwell>
oh yea, thats one of the benefits of the per txout model.
< morcos>
yes, so on flush, you can keep everything that isn't spent and your memory usage may reduce non-trivially
< gmaxwell>
in any case, because of that memory usage we should be limiting our leveldb batch sizes. I'm guessing there probably is no real performance benefit to a batch of 200MB (or 2000MB) over 20MB.
< sipa>
the size of batches is determined by how much has changed
< morcos>
yeah, thats what i was thinking a bit annoying to have to track that too
< sipa>
unless we maintain multiple checkpoints in-memory, to know which entries combined form a consistent state, that's very hard to reduce
< sipa>
multiple in-memory checkpoints also implies we can't do the fresh optimization until an entry is in no snapshot
< wumpus>
that sounds overly complicated
< sipa>
yes.
< morcos>
gmaxwell: one advantage of bigger batch sizes is the ability to delete fresh pruned entries... you lose all your freshness after a flush
< gmaxwell>
well, the alternative for memory usage is that we start making changes to the leveldb api so that it can do some kind of gather callback or something for the batch.
< sipa>
yes.
< gmaxwell>
(I'm not even sure if thats possible, but it looked like it with a quick glance at the leveldb code)
< sipa>
we discovered that a leveldb write batch is a wrapped std::string
< sipa>
which just gets writes and erases appended to it in a binary format
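For reference, the serialized layout (per leveldb's write_batch.cc) is roughly:

    // leveldb WriteBatch::rep_ layout (see leveldb's write_batch.cc):
    //   sequence: fixed64  (8 bytes)
    //   count:    fixed32  (4 bytes)
    //   data:     'count' records, each either
    //     kTypeValue    (0x01) | varint key len | key | varint value len | value
    //     kTypeDeletion (0x00) | varint key len | key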
< wumpus>
optimizing leveldb's batch representation/scheduling would certainly be possible, yes
< wumpus>
but in my experience it's reads / lookups that take lots of time, especially on slow disks, not so much writing, writing with leveldb is much faster than comparable databases such as lmdb
< sipa>
well writing 2GB of modified utxo entries required 90s in gmaxwell's benchmarks
< gmaxwell>
well it was 40s with master, 90s with the per-utxo.
< wumpus>
but that's hardly realistic for normal usage
< wumpus>
if you have a dbcache of gigabytes you hardly really need a database at all
< gmaxwell>
wumpus: so right now our initial sync performance is really poor with the default cache. it takes 21076 seconds to do a chainstate reindex even with all signature checking disabled on a fast machine. We often tell people to crank their dbcache to big numbers to make ibd take a more acceptable time.
< wumpus>
gmaxwell: but is that due to writing? as said, in my experience, it spends almost all the time connecting inputs - e.g. fetching and random lookups
< wumpus>
it could be I just have very strange hardware of course
< gmaxwell>
sorry, I thought you were saying that it wasn't realistic for people to run with a big dbcache; and I was just countering that running with a big dbcache is the only way to get ibd to run in a sane-ish amount of time. I guess I was on a tangent from your point.
< wumpus>
right - maybe the best solution for small memory usage and large memory usage is completely different
< wumpus>
another thing with writes is that things can be pipelined, as soon as the batch buffers have been filled it could be shipped off to a background thread doing the writing, there's no need to wait for it to continue
< gmaxwell>
wumpus: yes. Indeed. perhaps a little tricky with consistency between the chainstate and the blockindex.
< gmaxwell>
ohh sipa's per utxo code had debugging code that was trashing performance, rebenchmarking is looking much more promising!
< sdaftuar>
yay!
< sipa>
(with large dbcache)
< gmaxwell>
Man. Thomas Zander wrote some article attacking segwit today that says up front "Once a user gets a SegWit transaction, she will only be able to move that money forward in a SegWit wallet. So if a person doesn't upgrade they will eventually not be able to accept money from anyone." -- will there be no consequence in this ecosystem for this kind of incompetence or dishonesty? damn. It also rep
< gmaxwell>
Also deceptively claims to make transactions smaller (actually-- it increases the amount of information in a transaction-- because it makes the field ordering non-normative--, making the smallest possible representation larger...)
< juscamarena>
Just saw that. Was the site hacked? He can't really believe that?
< sipa>
i'm sure he believes it
< gmaxwell>
he's previously posted a number of absurd things, e.g. the posts claiming that BIP152 was going to "disrupt the network" and trying to get us to abort the 0.13 release.
< btcdrak>
gmaxwell: what the heck? that's just ...
< juscamarena>
Sigh. He might have gotten confused here: "When spending your bitcoins after the upgrade to segwit, you will still be able to pay the original type of Bitcoin addresses that start with a ‘1’ (a P2PKH address) as well as being able to pay other users of P2SH addresses."
< juscamarena>
Thinking upgrade meant upgrading the wallet.
< gmaxwell>
I'm having a really hard time believing that he is actually this confused.
< morcos>
gmaxwell: just skimmed what he wrote.. i don't think hes confused.. (except about the 2 buckets crap, but you know "math is hard")
< morcos>
i think he was just trying to make a point that i don't think really makes any sense, that people with segwit wallets would prefer to send to other segwit addresses
< morcos>
well yes i guess maybe thats what you meant by confused, since there is no reason they would prefer that?
< gmaxwell>
there is no reason they would prefer that.
< gmaxwell>
Doesn't cost them any more or less.
< gmaxwell>
it's indistinguishable to them.
< morcos>
it seems maybe the text changed if yours was an actual quote
< gmaxwell>
actually, since the only kind of address type right now used for segwit is p2sh-p2w* it is cryptographically indistinguishable.
< gmaxwell>
morcos: my text was an actual quote.
< morcos>
it still says this which is at best badly misleading
< morcos>
"receiving a SegWit transaction requires a SegWit wallet which then will generate SegWit transactions forcing everyone around you to get one too."
< gmaxwell>
that is absurdly untrue too.
< gmaxwell>
amusingly one of the big reasons we didn't move forward with a new address type was specifically to avoid this class of misunderstanding. (the other being that several people wanted time to establish a new base-32 based encoding with proper error detection)
< MarcoFalke>
morcos: Motivated by the rpc test failure: Should the feefilter rounder not return a fee that is less than (or equal to) the target fee?
< MarcoFalke>
otherwise you might miss some tx if you "round up"
< morcos>
MarcoFalke: which test failure?
< MarcoFalke>
fundrawtx
< MarcoFalke>
Just a sync mempool issue due to feefilter, I guess
< morcos>
i mean is there a link to what you are talking about
< MarcoFalke>
If you run fundrawtransaction on master it will fail randomly
< morcos>
i'm not following... ok, thats what i was wondering
< morcos>
really?
< MarcoFalke>
Likely due to current choice of the feefilter
< MarcoFalke>
It becomes visible when the transaction pays a fee close to the minrelayfee
< MarcoFalke>
your feefilter will be maybe minrelaytxfee+x, so you never see the tx
< morcos>
yeah i guess if you tried to pay exactly the minrelayfee it might not work
< MarcoFalke>
Would it make sense to always send feefilters that are less than the currentFeeFilter?
< morcos>
the variance was put in there to slightly obscure the exact state of your mempool.. but ehh, i'm not sure its worth the effort
< MarcoFalke>
You can keep the variance
< morcos>
realistically i doubt it would be a problem except in tests..
< MarcoFalke>
Sure, on current main net
< MarcoFalke>
with default minrelaytxfee
< morcos>
i mean its been like that since it came out, the only difference is it happens now before your mempool gets full
< MarcoFalke>
Right
< morcos>
i don't feel strongly...
< MarcoFalke>
As we send a feefilter now by default for all connections, it might not be too much wasted bandwidth if we received some minrelaytxfee-dust txs
< gmaxwell>
I think it's okay to leak that you're at the floor. e.g. apply the max after the variance.
< MarcoFalke>
In which case your node is identifiable if you set a non-default value for the relay fee, no?
< gmaxwell>
it's identifiable by behavior in that case, regardless.
< morcos>
MarcoFalke: yeah the floor is a separate issue. we already send under the floor all the time anyway... i think that could be a special case perhaps.
< morcos>
i'm just not sure how much any of this is worth it. to make sure the tests work at exactly minrelaytxfee, need to check all the < vs <='s as well
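A sketch of how the rounding could work (illustrative, not the actual feefilter code): add the variance first, biased only downward so the broadcast filter never exceeds the true one, then apply the minrelay floor after the variance, per gmaxwell's point above:

    // Illustrative sketch of the rounding discussed, not the actual patch.
    #include <algorithm>
    #include <cstdint>
    #include <random>

    // Round down so a peer paying exactly our minimum still relays to us.
    int64_t FeeFilterToSend(int64_t current_filter, int64_t min_relay_fee,
                            std::mt19937_64& rng) {
        // Keep some variance to obscure the exact mempool state, but only
        // downward, so the sent filter never exceeds the real one.
        std::uniform_real_distribution<double> jitter(0.9, 1.0);
        int64_t randomized = static_cast<int64_t>(current_filter * jitter(rng));
        // Apply the floor *after* the variance: leaking that we sit at the
        // minrelay floor is acceptable (gmaxwell's point above).
        return std::max(randomized, min_relay_fee);
    }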
< luke-jr>
instagibbs: more of a suggestion than disagreement re 9322