< CubicEarth>
I had a repeat of my node stalling when trying to sync a couple of days ago. Recent version of Ubuntu, 13.1, and basically nothing else on the machine. It was about 6 days behind, and promptly caught up to about 40 hours remaining. Then it just stopped. CPU was idle; only relaying traffic was seemingly being passed. After about 10 minutes I restarted the node, and syncing was completed within just a couple minutes of the restart.
< CubicEarth>
I did try booting peers, but that didn't help anything.
< sipa>
this is a known issue
< sipa>
it is resolved automatically if you wait for the next block
< gmaxwell>
CubicEarth: it's due to nodes (usually spy nodes) that break the protocol and don't respond to headers requests interacting poorly with the sync logic.
< CubicEarth>
I was wondering... Seems like my node ought to be a little bit more selfish, a little bit more aggressive, in requesting the data it needs. At least I wish it was. I'm guessing what you are describing is due to the fact that Core codes nodes to be good network citizens, and non-standard nodes can disrupt that?
< CubicEarth>
Once payment channels and LN become a reality, I foresee the P2P layer integrating lots of micro-fees, charging for serving block data, for relaying TX, etc.
< gmaxwell>
CubicEarth: no, it's not due to that, pretty much the opposite.
< gmaxwell>
if it requests redundant data, then it might knock itself off the network if it otherwise only has enough bandwidth available to fetch one copy.
< CubicEarth>
So my node requests a piece of data, what happens? A peer says "yes, I have it, I'll give it to you" and then never does? And my node sits there, waiting, because if it asked another peer for the data they could both end up providing it?
< gmaxwell>
CubicEarth: right. It will eventually give up -- but for this particular request, it's triggered by a new block showing up on the network.
< gmaxwell>
Smarter would be dynamic timeouts -- the tricky thing is that care has to be taken to avoid unstable algorithms that can suffer congestion collapse. E.g. you have a limited-bandwidth network with 5 nodes on it... and then they fall behind and start aggressively re-requesting and never recover.
< gmaxwell>
It's not _that_ hard, but ... so many other things going on...
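A minimal sketch of the dynamic-timeout idea described above, assuming a simple doubling backoff; the class and parameter names are invented for illustration, and this is not Bitcoin Core's actual logic:

```python
# Toy sketch of a dynamic block-request timeout with exponential backoff.
# All names here are invented; this is not Bitcoin Core's implementation.

class RequestTimeout:
    def __init__(self, base_secs=60.0, max_secs=600.0):
        self.base = base_secs       # starting timeout
        self.max = max_secs         # cap so we never wait unboundedly
        self.current = base_secs

    def on_timeout(self):
        # Back off: doubling the timeout damps aggressive re-requesting,
        # which is what prevents the congestion-collapse scenario where
        # slow peers re-request ever more aggressively and never recover.
        self.current = min(self.current * 2, self.max)
        return self.current

    def on_success(self):
        # A completed request is evidence the link is healthy again.
        self.current = self.base

t = RequestTimeout()
assert t.on_timeout() == 120.0   # doubled once
assert t.on_timeout() == 240.0   # doubled again
t.on_success()
assert t.current == 60.0         # reset after a successful fetch
```

The backoff-plus-cap shape is the standard way to keep such a scheme stable: the cap bounds the wait, and the reset-on-success keeps a healthy link fast.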
< CubicEarth>
Other priorities for sure. It's nice that it respects bandwidth limits currently. It makes sense for it to be conservative by default, but perhaps there should / could be a setting where you tell it how much bandwidth you would like it to make use of -- not just an "upper limit" for inbound connections, but "please use this much to make things happen faster"
< CubicEarth>
Onto the back burner...
< gmaxwell>
well it's not that it respects them, it just has no idea what they are, so it assumes it's operating with very little, since that's conservative.
< gmaxwell>
RE manual setting, I think very few users will use that correctly-- we should have settings, but they're way less important than better default behavior.
< CubicEarth>
The funny thing is the software is already 'aware' in the sense that it can generate a graph of network activity, but I get that it's probably not 'hardened code'.
< gmaxwell>
Past performance doesn't indicate future results. Assuming that it does is how you get schemes that suffer congestion collapse. :P
< CubicEarth>
gmaxwell: so a node doesn't DDoS itself?
< gmaxwell>
CubicEarth: e.g. you have 5 nodes on a 1 mbit connection. They each observe 1mbit available.. but then they all try to use it at once and there will be far less than 1mbit available.
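The arithmetic of that scenario, spelled out with the toy numbers from the example above:

```python
# Why "measure the idle link and use what you saw" collapses: 5 nodes share
# a 1 Mbit link, each probes it alone and observes 1 Mbit free, then all
# five try to use their observed capacity at once.
link_capacity = 1.0                   # Mbit/s, shared by everyone
observed_per_node = 1.0               # what each node measured on the idle link
nodes = 5

demand = nodes * observed_per_node    # aggregate demand: 5 Mbit/s
per_node_actual = link_capacity / nodes

assert demand == 5.0                  # 5x overcommitted
assert per_node_actual == 0.2         # each node gets a fifth of what it expected
```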
< CubicEarth>
got it
< bitcoin-git>
[bitcoin] droark opened pull request #9304: Allow linearization scripts to support little endian hashes (master...master) https://github.com/bitcoin/bitcoin/pull/9304
< BlueMatt>
hmmmmmm....I'm pretty sure CheckForkWarningConditionsOnNewFork is completely useless atm...
< BlueMatt>
it looks like it was written assuming pindexNewForkTip would be a CBlockIndex* to the highest block on a new fork (which I think is the case in the original code, and I'm not just saying it because I originally wrote it)
< BlueMatt>
but in the current code it's given the last block which ActivateBestChainStep wanted to connect (but failed to, because either it or a previous block was invalid)
< BlueMatt>
and that last block is always current chain tip + 1, unless it's a reorg
< BlueMatt>
ehh, excuse me... CheckForkWarningConditionsOnNewFork is called with a larger vector which isn't exactly the things ABCS tries, but what it might've tried if it didn't want to give up cs_main earlier
< BlueMatt>
still, i think due to headers first this stuff is horribly broken
< gmaxwell>
nah it does work.
< BlueMatt>
you sure? that code doesn't look sane to me now
< BlueMatt>
(do we have any tests for it? couldn't find any)
< BlueMatt>
and jl2012 was saying he tried to trigger it by sending invalid blocks with valid headers and couldn't
< * BlueMatt>
-> out
< bitcoin-git>
[bitcoin] ryanofsky opened pull request #9306: Make CCoinsViewCache::Cursor() return latest data (master...pr/coins-cursor) https://github.com/bitcoin/bitcoin/pull/9306
< Teroxice>
I built ATM software and right now it's sending money from a centralized bitcore server with just one wallet. I will offer my software to the market, but I would like every client to have their own independent wallet on my centralized bitcore server. That is why I'm asking if it is possible to have more than one wallet on the same bitcore server. Anyone know?
< achow101>
Teroxice: bitcore or bitcoin core? There is a difference.
< achow101>
also, wrong channel
< Teroxice>
bitcoin core
< achow101>
bitcoin core does not have multiwallet support
< bitcoin-git>
[bitcoin] ryanofsky opened pull request #9310: Assert FRESH validity in CCoinsViewCache::BatchWrite (on top of #9308) (master...pr/coins-batch-assert) https://github.com/bitcoin/bitcoin/pull/9310
< morcos>
if you ran that same command on the old code your grep would find fee twice
< morcos>
it prints an overall fee for the transaction and it prints a fee for each accounting entry
< morcos>
it was already the case that if it didn't think the tx was from you (meaning none of it was from you) it would leave off the overall fee for the tx
< gmaxwell>
ah, I see. what the heck is the fee for the accounting entries for?
< morcos>
i changed that logic to be whether all of it was from you
< morcos>
don't get me started on that
< morcos>
getbalance("*") uses those random incorrect negative fees to offset other errors in tracking balances
< morcos>
so it was not possible to fix those
< morcos>
i suppose we could not print them, but that seems like an api change
< gmaxwell>
We should probably be telling the wallet about the actual values of all the inputs... we know them (we're a full node, after all!). then the fees it displays can be correct.
< gmaxwell>
but thats a bigger change.
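For illustration, the fee computation that full input values would enable is just inputs minus outputs; the helper name below is invented:

```python
# Sketch: if the wallet knew the value of every input (a full node does know
# them), the displayed fee is simply sum(inputs) - sum(outputs).
# `tx_fee` is a made-up helper name, not a Bitcoin Core function.

def tx_fee(input_values, output_values):
    return sum(input_values) - sum(output_values)

# Two inputs worth 5000 and 1000 satoshis, one output spending 5900 -> fee 100
assert tx_fee([5000, 1000], [5900]) == 100
```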
< morcos>
yeah i don't think we should try to make any changes of that behavior until we remove accounts
< morcos>
then we can clean up a lot
< morcos>
which reminds me i need to be participating in wumpus's labels discussion
< sdaftuar>
gmaxwell: +1 on telling the wallet!
< sdaftuar>
er, about all the fees
< sdaftuar>
i think it's nuts the way that works now
< gmaxwell>
we've let the accounts stuff deadlock us for a long time, I believe that this was also the reason we didn't fix the absurd handling wrt fees when it was first noticed. :(
< sdaftuar>
i think my proposal would just be to pass fee information through SyncTransaction(). we have it during ATMP, and we could cache it while validating a block
< sdaftuar>
that doesn't go as far as all input values, but i think it'd be a simple improvement
< gmaxwell>
it would be. full input values are needed if we want to create detailed corrective accounting entries for txn with inputs which aren't from me.
< gmaxwell>
e.g. txid 1234 spent 1000 of our coins, spent 4001 coins from {these sources}, paid 5000 coins, and 1 coin fee.
< gmaxwell>
if {these sources} is just "from outside this wallet" then we don't need per input amounts.
< gmaxwell>
we just need the fee.
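Checking the numbers in the example above -- with external inputs lumped together as "from outside this wallet", the fee alone is enough to balance the entry:

```python
# The txid-1234 example: the wallet only needs the fee (not per-input
# amounts) when external inputs are lumped as "from outside this wallet".
ours_in = 1000          # our coins spent
external_in = 4001      # coins from {these sources}
paid = 5000             # total paid out
fee = 1

# The transaction balances: inputs == outputs + fee
assert ours_in + external_in == paid + fee
# Knowing only the fee, the external contribution is still derivable:
assert paid + fee - ours_in == external_in
```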
< sdaftuar>
well i definitely agree with the goal of per-input amounts being passed through! maybe that's not so hard to implement either, actually...
< sdaftuar>
we can probably come up with some reasonable data structure and pass that through to the wallet as well
< gmaxwell>
sipa: do we have a philosophical opposition to decoderawtransaction using the UTXO set, telling you if each of the inputs is unspent, and if they all are-- displaying the fee?
< sipa>
gmaxwell: i think that should be a separate rpc call
< gmaxwell>
sipa: what would it be called?
< sipa>
gmaxwell: decoderawtransaction is purely a utility function now, and i think it should stay that way
< sipa>
analyserawtransaction ?
< gmaxwell>
please no more words with different en_gb/en_us spelling!
< gmaxwell>
:P
< sipa>
rawtransactionanalysis
< sipa>
:p
< sdaftuar>
could we add memory-only per-input values to CTransaction(), so that they get filled in and passed through to the wallet in SyncTransaction()?
< sipa>
evaluaterawtransaction
< sipa>
sdaftuar: bleh... what if they aren't known? the consensus logic (which uses CTransaction) shouldn't need such values
< sdaftuar>
right, consensus wouldn't use it. but it would be convenient to fill in for downstream consumers
< gmaxwell>
well consensus certainly does eventually need to know the input values! :)
< sdaftuar>
we could do it outside of CTransaction() of course
< sipa>
gmaxwell: but CTransaction is by design now immutable
< sdaftuar>
the witness is not?
< sipa>
sdaftuar: it will be
< sdaftuar>
ah! didn't realize that.
< sipa>
(also, CTransactionRef is a ref to a _const_ CTransaction, which includes the witness)
< sdaftuar>
ok so i guess stuffing data into that just won't work
< sipa>
but we could have a wrapper around CTransaction that adds some metadata, which is used by ATMP and wallet code
< sipa>
or just pass along a separate object that contains that metadata
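A sketch of the separate-metadata-object approach, written in Python rather than the project's C++; all names here are invented for illustration:

```python
# Sketch of sipa's suggestion: leave CTransaction immutable and pass a
# separate metadata object alongside it to ATMP and the wallet. The class
# and function names are invented, not Bitcoin Core's.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(frozen=True)
class TxMetadata:
    fee: Optional[int] = None                              # known from ATMP / block validation
    input_values: List[int] = field(default_factory=list)  # optional per-input amounts

def sync_transaction(tx, meta: TxMetadata):
    # Consensus code never touches meta; only wallet-side consumers do.
    return meta.fee

meta = TxMetadata(fee=1, input_values=[1000, 4001])
assert sync_transaction(object(), meta) == 1
assert sum(meta.input_values) == 5001
```

Keeping the metadata outside the transaction object preserves the invariant that CTransaction (and its witness) stays const, which is the concern raised above.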
< morcos>
but speaking of that idea... let's imagine inputs were part of CTxMempoolEntry, then maybe you don't need a UTXO cache anymore
< sipa>
morcos: though it makes the mempool's correctness now consensus critical
< sdaftuar>
nack
< morcos>
that's what we're arguing about
< gmaxwell>
eventually the fact that txn are already verified in the mempool will have to be exploited for performance reasons. :(
< sdaftuar>
i would like that to be the last change that goes in before i stop working on the codebase :)
< gmaxwell>
hah. yea. :(
< gmaxwell>
or we'll just end up with miners using Joe-Marketers-Recklessly-Optimized-Fork that "validates blocks 5x faster"...
< gmaxwell>
and then all the care in not being reckless didn't matter because relevant parties aren't using it.
< sipa>
morcos: even if we had that, we'd need to apply utxo changes to the chainstate... which is perhaps harder if we haven't previously looked up the entry (because it's missing from intermediate cache layers, we don't know if it's fresh...)
< sipa>
i'd prefer something weak-block based to have pre-evaluated sets of transactions to apply to the chainstate
< sdaftuar>
sipa: so assuming the set that gets mined is identical to something you were expecting, you can have very fast validation?
< sipa>
sdaftuar: yup
< sipa>
you can basically have the utxo set diff cached
< gmaxwell>
(not just that but you can have the block template for the next block you'd mine after it cached)
< morcos>
gmaxwell: the whole requesting parents of orphans functionality... i didn't realize it basically doesn't work when your peer is a core node. were you aware of that?
< morcos>
b/c you won't let a peer getdata a tx that's not in your relay map
< gmaxwell>
morcos: it works so long as the parents are still in the relay pool, or if they're an older version.
< gmaxwell>
(which will answer out of the mempool)
< gmaxwell>
Yes, I knew that when I did it.
< gmaxwell>
In particular it's helpful for older versions that still do the trickling.. they're the source of most orphans I see, and they also answer out of the mempool.
< morcos>
ok.. i just noticed it on a new node i started up
< gmaxwell>
I'm glad someone else noticed.
< morcos>
i'm about to PR a small fix to fee filter your minrelaytxfee if your limitfreerelay is 0.. will save some unnecessary free tx requesting/rejecting
< gmaxwell>
thank god.
< gmaxwell>
Re the minrelayfee decay, did much thought or testing go into it? ISTM it goes too low, e.g. it continues dropping at a relatively high speed even once it's past a level where your mempool will fill again, given enough time.
< bitcoin-git>
[bitcoin] morcos opened pull request #9313: If we don't allow free txs, always send a fee filter (master...minminfee) https://github.com/bitcoin/bitcoin/pull/9313
< morcos>
gmaxwell: much thought went into it, there is a bit of a discussion on #6722 on my reasoning behind the half-life of 12 hours... But its certainly possible that the tradeoff has changed a bit.
< gribble>
https://github.com/bitcoin/bitcoin/issues/6722 | Limit mempool by throwing away the cheapest txn and setting min relay fee to it by TheBlueMatt · Pull Request #6722 · bitcoin/bitcoin · GitHub
< morcos>
The idea behind the min fee was to protect the limited resource of your memory, so it wasn't meant to be smart enough to know that certain fees are really never going to be worth it..
< morcos>
Actually rereading the justification now, I think if anything we'd move it the other way.. Packages are limited to much smaller than the 2.5MB envisioned in that argument... So the amount of "free relay" is really small.
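A minimal model of the decay being discussed, assuming the 12-hour half-life from #6722; the function name is invented:

```python
# The mempool min fee halves every 12 hours (the half-life morcos chose
# in #6722). A toy model of that exponential decay; `decayed_min_fee` is
# an invented name, not Bitcoin Core's.

def decayed_min_fee(initial_fee, hours_elapsed, half_life_hours=12.0):
    return initial_fee * 0.5 ** (hours_elapsed / half_life_hours)

assert decayed_min_fee(1000, 0) == 1000
assert decayed_min_fee(1000, 12) == 500.0
assert decayed_min_fee(1000, 24) == 250.0
# After 72 hours (the default tx expiry), 0.5**6 = 1/64 of the peak remains,
# which is gmaxwell's complaint: it slides far below a mempool-filling level.
assert abs(decayed_min_fee(1000, 72) - 15.625) < 1e-9
```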
< gmaxwell>
morcos: I guess what trips it up is that it decays at a constant rate but the supply of transactions is not uniform (or even 1/rate).
< morcos>
what exactly is the behavior you are seeing that you think is not good
< gmaxwell>
that it drops it back down to nothing and then one of these clowns that relays old transactions connects, fills me back to 300mb... then 72 hours later, these expire, and it slides back down....
< gmaxwell>
At least in my mind what it should be trying to do is find a value that results in the mempool being close to full-- but it ends up far lower than that; maybe I'm thinking of the wrong goal.
< morcos>
yeah but even if it didn't go down at all, he could do the exact same thing at 2 sat/byte. They still wouldn't be mined within 72 hours
< sdaftuar>
i think modeling the arrival rate of transactions is hard
< gmaxwell>
I think what I'm talking about is as much a question of the distribution of feerates in the supply of unconfirmed transactions, as it is arrival. (more so, because at the 2sat byte level, they're never getting mined.)
< morcos>
i think if you look at tx supply.. there is a backlog of between 1MB - 100MB of txs that pay > 1 sat/byte, and there is an additional backlog of 100-1000MB of txs that pay 1 sat/byte. it just happens to be the distribution of txs now i think
< morcos>
gmaxwell: ah, but thats wrong
< morcos>
for txs that pay between 1.5-2 sat/byte 95% of them are mined within 1000 blocks and 75% of them are mined within 256 blocks
< morcos>
crazy right
< gmaxwell>
bleh. okay. Crazy.
< morcos>
it's just there are sooo many in the 1-1.5 range that only about 10% of them get mined within 1000 blocks
< luke-jr>
gmaxwell: hmm, I wonder if we should be rescanning for conflicts then (#9290)
< gmaxwell>
luke-jr: make rescan take less than N hours? :-/
< luke-jr>
hours now? :o
< gmaxwell>
I think it takes 5 on my laptop.
< gmaxwell>
on a really fast machine it's not so terrible.
< luke-jr>
I guess ideally we should be waiting until the blockchain is synced, checking if we have unconfirmed txns, checking a wallet flag, and rescanning at runtime
< luke-jr>
sounds complex though :/
< luke-jr>
(oh, and then we could simply not broadcast unconfirmed txns until it finishes the rescan)
< gmaxwell>
the 'getting real fee info' will be another reason to rescan the chain for all wallets.
< gmaxwell>
so it would be worth giving some thought to adding an extra kind of versioning to the wallet (metadata-rescanned-since).. and making rescan faster...
< luke-jr>
yeah
< luke-jr>
"rescan depth" or something
< sdaftuar>
luke-jr: it's in general not really possible to always know that a transaction is conflicted. i think we should keep that in mind before doing anything expensive...
< gmaxwell>
well they won't broadcast if conflicted-- they'll fail for mempool add. so the only harm is wasting a little time trying to look up utxos that aren't there.
< luke-jr>
to be happy with downgrading+upgrading, we'd need a timestamp per each depth, but that seems unnecessarily over-compatible
< dcousens>
gmaxwell: rescan is for private keys no?
< dcousens>
or is it UTXO?
< luke-jr>
gmaxwell: oh, right. that's not too bad compared to hours rescanning
< luke-jr>
sdaftuar: ah, parent conflicts
< gmaxwell>
Yes, though we won't spend non-wallet inputs (ignoring coinjoins because stupid) until they are six confirms old.
< gmaxwell>
So it's really hard to end up in a case where there is an undetectable conflict in your wallet.
< luke-jr>
gmaxwell: if someone else sends you a payment using coins later conflicted?
< dcousens>
why is rescan so slow? is it because it tries to match all possible scripts or?
< gmaxwell>
luke-jr: you won't spend that payment until it is 6 confirmed.
< gmaxwell>
dcousens: I think much (most?) of the time is spent hashing the blocks.
< luke-jr>
gmaxwell: but you'll try to mempool-add the receive, no?
< gmaxwell>
luke-jr: okay, sure nevermind me.. that payment itself will be an undetectable conflict, I was only thinking about IsFromMe transactions.
< dcousens>
gmaxwell: why is it hashing blocks? (i'll look into the code ooi)
< gmaxwell>
dcousens: side-effect of deserialization.
< dcousens>
gmaxwell: still weird, I use similar enough deserialization code (lib/consensus) in my own parser and it does a full parse purely bottlenecked at IO, so 3-4 minutes on an SSD; script checking shouldn't be much more than that, I'd have thought...
< dcousens>
but I guess I'd have to look into the code to find out why
< gmaxwell>
At least my vague recollection of profiling it before was that the time was all in the heap allocator and sha256.
< dcousens>
gmaxwell: hmmm, by hashing do you mean the merkle root calculation?
< gmaxwell>
I assume, I don't know what else sha256 would be used for while scanning blocks.
< dcousens>
a checksum would probably work wonders in bypassing that
< dcousens>
esp. given the situation of all the zero padding in the files
< dcousens>
RE: 9312's 2-week expiration time: won't that further prevent parties from attempting to broadcast a conflict of a stuck transaction 2.1 days after the first broadcast, and having a high chance of the other transaction being "mostly" out of the network by then?
< dcousens>
of course, that is what RBF is for, but
< gmaxwell>
dcousens: not really. (1) right now those conflicts are pretty successful immediately; due to nodes restarting, full-rbf miners, etc. (2) there are people who go around connecting to everyone constantly rebroadcasting old transactions... so they're already defeating that timeout.
< dcousens>
gmaxwell: in practice it works though?
< dcousens>
oh wait, misread what you wrote
< gmaxwell>
in practice it works even in 1 hour, so it works-- but not because of the 72 hour timeout.
< dcousens>
yeah
< gmaxwell>
we could also consider adding a new data structure, a blacklist: if something hits the expire time, that txid becomes blacklisted for n months. That would actually make things much easier to replace with a double spend after two weeks.
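A toy sketch of that blacklist idea, assuming lazy expiry of entries; the class and method names are invented:

```python
# Sketch of gmaxwell's blacklist idea (invented names, not Bitcoin Core
# code): once a tx hits the mempool expiry, remember its txid for a while
# so rebroadcasts are ignored and a replacement can actually propagate.
import time

class ExpiredTxBlacklist:
    def __init__(self, ttl_secs):
        self.ttl = ttl_secs
        self.entries = {}  # txid -> time it was blacklisted

    def add(self, txid, now=None):
        self.entries[txid] = time.time() if now is None else now

    def contains(self, txid, now=None):
        now = time.time() if now is None else now
        added = self.entries.get(txid)
        if added is None:
            return False
        if now - added > self.ttl:
            del self.entries[txid]  # lazily drop stale entries
            return False
        return True

bl = ExpiredTxBlacklist(ttl_secs=30 * 24 * 3600)  # n-months, here ~1 month
bl.add("txid1234", now=0)
assert bl.contains("txid1234", now=10)                 # rebroadcast ignored
assert not bl.contains("txid1234", now=31 * 24 * 3600) # blacklist expired
```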
< dcousens>
gmaxwell: is there a way to "push out" a transaction from the mempool using the RPC? aka, "oops, forget the last one, use this instead"
< dcousens>
aka, ignore conflicts, or drop conflicts
< dcousens>
just thinking about how you could bypass that timeout without wiping your local mempool
< dcousens>
obviously that wouldn't help others
< dcousens>
but, as you say, others are quite the dynamic
< gmaxwell>
dcousens: kinda useless to do something locally, it won't do it to anyone else...
< dcousens>
gmaxwell: depending on their expiration timeout, mempool size & fee filter, full-RBF, etc, could go either way no?
< gmaxwell>
dcousens: you can send things that aren't in your mempool, the whitelisting stuff does that already.