< CodeShark>
polling for things like balance make perfect sense - I was referring to things like getting notified of new blocks or transactions
< gmaxwell>
polling makes sense for that in most contexts.
< gmaxwell>
Blocks show up once per ten minutes on average, polling once a second adds a negligible amount of overhead and delay, and you avoid any issues with corner cases where the notification gets lost.
< CodeShark>
you can solve those issues with another protocol layer but yeah, it does add some complexity
< CodeShark>
what I've usually done is open the connection and subscribe, then ping periodically
< CodeShark>
the pings don't have to be every second
< CodeShark>
you still get extremely low latency when it works correctly - and you detect malfunction fairly quickly
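The subscribe-then-ping pattern described above can be sketched as a small liveness watchdog; `PingWatchdog` is a hypothetical name, not from any real codebase:

```cpp
#include <cassert>
#include <chrono>

// Hypothetical liveness tracker for a subscribe-and-ping notification
// channel: events arrive with low latency while the connection is
// healthy, and a missing pong bounds how long a malfunction goes
// undetected.
class PingWatchdog {
    using Clock = std::chrono::steady_clock;
    Clock::time_point m_last_pong;
    std::chrono::seconds m_timeout;
public:
    explicit PingWatchdog(std::chrono::seconds timeout)
        : m_last_pong(Clock::now()), m_timeout(timeout) {}

    // Call whenever a pong (or any message) arrives on the channel.
    void OnPong() { m_last_pong = Clock::now(); }

    // True once no pong has arrived within the timeout window.
    bool Malfunctioning(Clock::time_point now = Clock::now()) const {
        return now - m_last_pong > m_timeout;
    }
};
```

The pings don't need to be frequent; the timeout just sets the worst-case detection delay for a lost subscription.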
< CodeShark>
this is all middleware, though
< CodeShark>
app developers shouldn't have to deal with all the inner plumbing
< CodeShark>
and the model of writing event handlers is a pretty nice one
< CodeShark>
you can even have an event handler for when there's a connection error
< CodeShark>
makes all the clientside logic easier to follow
< gmaxwell>
the logic needed to handle a lost event is so hard that even experts usually don't get it right though.
< gmaxwell>
(generally speaking)
< gmaxwell>
there are plenty of places where you have no choice but to be event triggered-- when load and latency are critical.
< gmaxwell>
but they result in a lot more exceptional conditions which are hard to reason about and hard to test.
< CodeShark>
which is why it's the kind of stuff you want in a very well-tested library
< CodeShark>
and not the kind of thing you want every app developer to be writing ad hoc
< CodeShark>
but I agree that the general cases can be quite hard
< midnightmagic>
it should then be in an external library. putting reliable message delivery into bitcoin core would also make the binary liable in the event something (inevitably) goes wrong.
< CodeShark>
midnightmagic: agreed - that's what I do
< CodeShark>
I have a process connect to bitcoin core via p2p which receives inv messages and sends getdata
< CodeShark>
all the messaging stuff beyond that is entirely in the other process
< CodeShark>
then the other process can be queried for its connection status - if it's connected, if it's synched, etc...
< kanzure>
i don't think it would be obvious to others to do application integration at the p2p layer.
< CodeShark>
they wouldn't
< CodeShark>
they only need to interact with this other process
< kanzure>
i know. but i mean they wouldn't know to look for something that does this :).
< kanzure>
the typical thing is to "use rpc" not "look for a p2p client that bridges events on the network to your own messaging system, without using bitcoind rpc".
< CodeShark>
indeed - I'd like to see this kind of thing become more readily available
< CodeShark>
in my ideal world, Bitcoin Core would just do peer discovery, validation, and relay
< CodeShark>
and other processes would handle all app layer event processing
< CodeShark>
perhaps a shared block storage process could be incorporated into the design
< CodeShark>
and shared utxo storage
< luke-jr>
I'm tempted to work on multiwallet stuff. is anyone else? (poke CodeShark)
< CodeShark>
heh
< CodeShark>
I don't think what I had done will rebase too easily :p
< luke-jr>
probably not, or I'd have been including it in Knots already :p
< luke-jr>
assuming you didn't keep maintaining it privately
< CodeShark>
nah, I decided to write my own wallet from scratch instead :p
< luke-jr>
yeah, I figured :p
< CodeShark>
if I were to attempt the multiwallet in Core again I would take a very different approach ;)
< CodeShark>
especially in regards to the merge process
< luke-jr>
I'm doing tiny commits so far, mostly cleanups
< luke-jr>
the difficult part I see (and may just skip) is closing/removing a wallet at runtime
< luke-jr>
I'm not really sure our shutdown close is even safe
< CodeShark>
you mean in terms of RPC?
< luke-jr>
I mean shutdown deletes the wallet pointer, but who knows if another thread might still be using it
< CodeShark>
reference counting? :)
< luke-jr>
maybe it's safe, but it's far from clear that it is
< luke-jr>
there is no refcounting right now
< luke-jr>
I'll probably add something like that *if* I implement close
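The reference-counting idea being floated maps naturally onto `std::shared_ptr`: "close" just drops the owning reference, and destruction is deferred until the last in-flight thread releases its copy. A minimal sketch, with `CWalletStub` as a hypothetical stand-in for the wallet object:

```cpp
#include <cassert>
#include <memory>

// Hypothetical stand-in for CWallet: records when its destructor runs,
// so the deferred-destruction behavior is observable.
struct CWalletStub {
    bool* destroyed;
    explicit CWalletStub(bool* d) : destroyed(d) {}
    ~CWalletStub() { *destroyed = true; }
};

// Sketch of "close": the owner resets its shared_ptr; any RPC thread
// still holding a copy keeps the wallet alive until it finishes.
void CloseWallet(std::shared_ptr<CWalletStub>& owner) {
    owner.reset();
}
```

This avoids the "shutdown deletes the wallet pointer while another thread is using it" hazard without explicit manual refcounting.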
< luke-jr>
(my main practical goal is to use JoinMarket without touching my real Core hot wallet, and without two node instances)
< CodeShark>
simplest is to just provide a list of wallet files in the config and just have them all load for the duration of runtime
< luke-jr>
oh, I guess that's a possibility
< luke-jr>
-wallet exists, could just accept multiples
< CodeShark>
from a GUI perspective it's nice to be able to open and close wallet files during runtime - but if you'll be using a specific setup and connecting via RPC it doesn't really matter
< CodeShark>
in fact, being able to open and close wallet files via RPC is potentially exploitable
< luke-jr>
only slightly more-so than backupwallet, I think? :p
< CodeShark>
heh
< CodeShark>
point is if your payment processing app is designed to deliberately open and close wallet files at runtime there's probably something wrong in your design ;)
< luke-jr>
well, that's probably true as well
< luke-jr>
maybe.
< luke-jr>
it might make sense to do wallet rotation in some capacity
< CodeShark>
yeah, you could have tools for such things
< CodeShark>
but I think they should be conceptually separate from the business logic
< luke-jr>
probably
< CodeShark>
you have your business logic, then below that you have a policy layer
< CodeShark>
specifying which wallet, which keys, which signing policies, etc... should be used is part of the policy layer
< jonasschnelli>
jl2012: I guess this is kind of strange. It lets you define your return value.
< jonasschnelli>
looking at validation.h L45-L48 it looks wrongish
< jonasschnelli>
Meh.. maybe it makes sense like that.
< jonasschnelli>
Calling the DoS function basically allows you to set the return value of that function.
< luke-jr>
it's to avoid 2 lines
< jl2012>
jonasschnelli: it seems it is always set as false. I can't find any example of true
< jonasschnelli>
jl2012: Indeed. I guess the error() function also resolves to false.
< jonasschnelli>
Maybe remove it to increase code readability?
< jl2012>
so it's just some kind of dead code?
< jonasschnelli>
Looks like... though, haven't analyzed it in detail
< luke-jr>
used to have state.DoS(0, error( some places IIRC
< luke-jr>
still do I think
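The pattern under discussion can be modeled in a few lines. `CValidationState`, `DoS`, and `error` below are simplified stand-ins for the real validation code, showing why the extra parameter exists (one-line `return state.DoS(100, error(...));`) and why it looks like dead code (it is always passed false):

```cpp
#include <cassert>
#include <cstdio>

// Simplified model of error(): logs and always returns false.
static bool error(const char* msg) {
    std::fprintf(stderr, "ERROR: %s\n", msg);
    return false;
}

// Simplified model of the validation state being discussed.
class CValidationState {
    int m_dos_level{0};
    bool m_invalid{false};
public:
    // The second argument is the value the *caller* wants to return,
    // so callers can mark the state invalid and return in one line.
    bool DoS(int level, bool ret = false) {
        m_dos_level += level;
        m_invalid = true;
        return ret;
    }
    bool IsInvalid() const { return m_invalid; }
    int GetDoSLevel() const { return m_dos_level; }
};

// Typical call site: mark 100 DoS points and propagate `false`.
static bool CheckSomething(CValidationState& state, bool ok) {
    if (!ok) return state.DoS(100, error("check failed"));
    return true;
}
```

Since `error()` itself always returns false, every real call passes false, which is the redundancy jl2012 noticed.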
< luke-jr>
wee, reduced pwalletMain references to 42 lines, half in tests
< luke-jr>
why is nWalletUnlockTime not on CWallet?
< da2ce7>
I was thinking that the RPC wallet-calls should include a monotonically increasing nonce: the only one that I can reliably think of is the 'total blockchain work' statistic.
< da2ce7>
block hash and block height are both not reliable.
< da2ce7>
then the RPC call should guarantee the return values are correct for at least the total work specified; or respond with the error 'not enough total work'.
< luke-jr>
da2ce7: eh, block hash is always a specific known amount of work
< luke-jr>
using total work would fail to solve issues for reorgs
< da2ce7>
luke-jr I thought you reorg to the chain that has the 'most total work'.
< luke-jr>
da2ce7: competing chains often have the same work
< da2ce7>
this is important at times of difficulty adjustment.
< luke-jr>
two blocks at height X are almost always going to have the same total work
< da2ce7>
except at the case of the edge of a difficulty adjustment?
< luke-jr>
yes
< luke-jr>
or an attack
< luke-jr>
(but not all attacks)
< da2ce7>
well it seems like a thing that we should take account of; then the wallet can give out a stronger consistency guarantee
< luke-jr>
da2ce7: the problem then is that in some cases, you want the RPC call to execute regardless of what block you're at - including the "in between blocks" state
< da2ce7>
luke-jr: then if the nonce isn't specified, the RPC call should return a statement: "this response is accurate for at least X total work"
< jonasschnelli>
I'm having a hard time analyzing our main.cpp code to see how Core detects that a peer is not responding to a getheaders message... there must be a timeout or something I guess.
< sipa>
yes
< sipa>
search for 'stall'
< luke-jr>
instagibbs: why does removeprunedfunds call ThreadFlushWalletDB? :/
< jonasschnelli>
sipa: i searched, but can only link it to block downloads (not to headers)
< luke-jr>
I can't imagine that's right. It will rename the RPC thread!
< sipa>
jonasschnelli: i don't think we have timeouts for headers
< sipa>
but we do ask for headers from multiple peers
< sipa>
so it's less risky
< gmaxwell>
it will continue with a new source once someone else offers it a block.
< gmaxwell>
I was explaining this in #bitcoin two days ago in fact.
< jonasschnelli>
Okay...
< gmaxwell>
as it interacts poorly with the large number of fake nodes out there that just monitor transaction advertisements and don't do anything else.
< jonasschnelli>
why do we not increase the misbehaviour score if a node does not respond to a getheaders request in an adequate amount of time?
< jonasschnelli>
seems to be the cheapest method to detect fake nodes... though, keeping a headers-chain is probably simple, but I guess most fake nodes are not willing to implement that
< gmaxwell>
that can cause network partitioning. just dos attack nodes and peers will ban them.
< gmaxwell>
misbehavior score can't be just handed out like that
< sipa>
i think we should get rid of misbehaviour score
< gmaxwell>
Agreed.
< sipa>
and just have a boolean: bannable offence or not
< gmaxwell>
Right.
< sipa>
i guess there are cases where you disconnect but not ban
< sipa>
but that's more for not wanting to deal with a request, and a courtesy to the peer, so he can find someone else
< sipa>
not for misbehaviour
< gmaxwell>
jonasschnelli: headers should come from all network peers, the only real complication with that is the initial headers sync, which you don't want to do redundantly as it's some 40MB of data or so.
< sipa>
yes, there is some logic that does not overeagerly send out headers requests during ibd
< gmaxwell>
thats the only element of headers handling that I've ever seen cause stalls.
< gmaxwell>
and they recover on new blocks, but it does sometimes cause new users to footgun themselves.
< gmaxwell>
e.g. they are syncing and they decide it's slow, so they restart... and the first peer they get is a black hole... and so they sit there not syncing for a bit.. then decide to reindex.
< GitHub33>
bitcoin/master 125b946 Pavel Janík: Do not shadow upper local variable 'send', prevent -Wshadow compiler warning.
< GitHub33>
bitcoin/master 4daf02a Wladimir J. van der Laan: Merge #8677: Do not shadow upper local variable 'send', prevent -Wshadow compiler warning....
< GitHub48>
[bitcoin] laanwj closed pull request #8677: Do not shadow upper local variable 'send', prevent -Wshadow compiler warning. (master...20160907_Wshadow_8606) https://github.com/bitcoin/bitcoin/pull/8677
< gmaxwell>
[20 minutes later] and as they strap their head to a table and start pushing the drill bit into their temple, some don't think that perhaps lobotomizing themselves is too far to go to complete a software install... :P
< Lauda>
Still has "TBA*" on 0.13.0
< gmaxwell>
Lauda: thanks.
< Lauda>
Np.
< sipa>
gmaxwell: also, i believe we still have the bug that after a reindex finishes, we do not actively start asking peers for headers
< sipa>
wait a minute
< sipa>
since the reindex change in 0.13 we already know all headers beforehand
< sipa>
so we could in fact start asking peers for headers after the first stage
< sipa>
and learn about new headers while still reindexing blocks from disk
< gmaxwell>
I don't recall what governs the decision between asking only a single peer vs everyone.
< sipa>
IBD or not, i guess
< sipa>
but this is something else
< sipa>
at startup we pick one peer to actively start asking for headers, to bootstrap the sync process
< sipa>
we don't do that during reindex, and not after reindex finishes either
< gmaxwell>
jonasschnelli: re 'detect fake'-- not reasonable to do, they'll just continue to do the bare minimum to not get detected whatever that is. We should focus on being robust (and increasingly so) in the face of abusive peers.
< jonasschnelli>
gmaxwell: Yes. Agree. I haven't tested it... but somehow I had the assumption that a peer that signals NODE_NETWORK but does not respond to getheaders messages will result in stalling IBD (if we have chosen that peer for headers sync during IBD).
< jonasschnelli>
But I guess I'm wrong with that assumption.
< jonasschnelli>
Just couldn't find the corresponding code part that breaks my assumption
< gmaxwell>
it will, until the next block on the network, and then it will select another peer.
< jonasschnelli>
Wouldn't a timeout of lets say 30 secs make sense in this case? If timeout fired, select next node for header-sync
< sipa>
30s is extremely short
< jonasschnelli>
for a headers message after a getheaders? Why short?
< sipa>
it's trivial to dos a peer to make them unresponsive for 30s
< jonasschnelli>
Would you care switching to the next header sync peer in case you're DoSed?
< gmaxwell>
and it also means you couldn't use bitcoin anymore on a connection of less than about 50kbit/sec which, otherwise, could keep up with the chain.
< sipa>
during IBD that's fine
< gmaxwell>
(as the first header response will be 160kb of data, a 30 second timeout means a minimum usable bandwidth of around 50kbit/sec.)
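The figure quoted above can be checked back-of-the-envelope: a response of roughly 160 kB that must arrive within a 30 s timeout implies a minimum usable link speed of about 44 kbit/s, i.e. the "around 50kbit/sec" estimate. As a one-liner:

```cpp
#include <cassert>

// Minimum link speed (bits/sec) needed to transfer a response of
// `response_bytes` before a timeout of `timeout_sec` fires.
constexpr long MinBitsPerSec(long response_bytes, long timeout_sec) {
    return response_bytes * 8 / timeout_sec;
}
// MinBitsPerSec(160 * 1024, 30) is 43690, i.e. ~44 kbit/s.
```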
< jonasschnelli>
Yes. The bandwidth / timeout problem still appears...
< gmaxwell>
even just switching that fast would mean that a host with borderline bandwidth would livelock, requesting the same data over again and changing peers.
< luke-jr>
I have a peer right now with a ping wait over 300s
< gmaxwell>
luke-jr: that one is probably not actually working.
< gmaxwell>
though I do frequently have working peers with rather high pingtimes.
< luke-jr>
gmaxwell: it's responded to pings before?
< luke-jr>
connected 3 hours, last ping time 227s
< gmaxwell>
luke-jr: I mean it's probably fallen off the network.
< luke-jr>
hm
< luke-jr>
debug window claims it's got the best block
< gmaxwell>
"minping": 21.978129,
< GitHub91>
[bitcoin] luke-jr opened pull request #8687: Bugfix: RPC/Wallet: removeprunedfunds should flush the wallet db (master...bugfix_removeprunedfunds) https://github.com/bitcoin/bitcoin/pull/8687
< gmaxwell>
I wish we could control the number of headers returned by getheaders in that initial case... as the obvious thing to do is request a tiny amount from every peer. Then pick among the responses, and keep doubling the request size, requesting from a single peer, so long as the replies are sufficiently fast. Then a short timeout would be fine.
< gmaxwell>
jl2012: yes, thats an internal function in libsecp256k1.
< jl2012>
^ sipa: if the pubkey does not pass this test, the verify must return 0?
< gmaxwell>
jl2012: a pubkey that fails that test will never result in a passing signature validation.
< sipa>
jl2012: if that function returns 1 there always exist some valid signature/message pairs
< sipa>
jl2012: if it returns 0, no signature can verify
< jl2012>
so P2WPKH has an implicit requirement for key size
< jl2012>
pub[0] == 0x06 || pub[0] == 0x07 are so called "hybrid key"?
< sipa>
yup
< sipa>
afaik, never used on mainnet
< jl2012>
should we make keys that are not 33 bytes non-standard in segwit?
< sipa>
i don't think there is any reason not to
< sipa>
but that should be communicated
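The rule being proposed reduces to a byte check: a compressed pubkey is exactly 33 bytes with a 0x02/0x03 prefix, which excludes both 65-byte uncompressed keys (0x04) and the never-used-on-mainnet "hybrid" encodings (0x06/0x07). A hedged sketch, not the actual policy code:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>

// Sketch of the proposed segwit standardness check: accept only
// compressed pubkey encodings. Prefixes follow the SEC 1 convention:
// 0x02/0x03 = compressed (33 bytes), 0x04 = uncompressed (65 bytes),
// 0x06/0x07 = hybrid (65 bytes).
bool IsCompressedPubKey(const uint8_t* pub, size_t len) {
    return len == 33 && (pub[0] == 0x02 || pub[0] == 0x03);
}
```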
< jl2012>
i'll post it to the ML with all other segwit standard limits
< luke-jr>
the github pull # isn't actually optional
< wumpus>
yes the pull # is the most important part, the Rebased-From commit ids are optional / nice to know for cross-referencing, I don't use it for the release notes
< GitHub103>
[bitcoin] sipa opened pull request #8690: Use a heap to not fully sort all nodes for addr relay (master...heapaddrsort) https://github.com/bitcoin/bitcoin/pull/8690
< GitHub155>
bitcoin/master 0480293 Jonas Schnelli: [Qt][CoinControl] fix UI bug that could result in paying unexpected fee
< GitHub155>
bitcoin/master 702e6e0 Jonas Schnelli: Merge #8678: [Qt][CoinControl] fix UI bug that could result in paying unexpected fee...
< GitHub17>
[bitcoin] jonasschnelli closed pull request #8678: [Qt][CoinControl] fix UI bug that could result in paying unexpected fee (master...2016/09/qt_cc_ui_radrio_fix) https://github.com/bitcoin/bitcoin/pull/8678
< morcos>
sipa: i may be thoroughly confusing myself, but once you made the change to track explicit conflicts, was it still necessary to have the SyncWithWallets loop run on txConflicted?
< morcos>
it seems like that code actively does the wrong thing now, but its ok since its corrected by the loop immediately following which calls SyncWithWallets on the in block txs which will then call your MarkConflicted code
< morcos>
of course if we could see mid block state, then that would be bad. But if it is true that we can just remove the txConflicted updates, then maybe that problem goes away?
< sipa>
morcos: txConflicted... that's the thing used for malleability detection?
< sipa>
or you mean the txConflicted in the block connection logic
< morcos>
block connection logic
< morcos>
it appears to me that both removeForBlock and AddToWalletIfInvolvingMe try to do the same thing, which is find conflicting txs. In removeForBlock (at least in the block connection logic) this is used to build a vector txConflicted, which we then call SyncWithWallets with (essentially with a null hashBlock).
< morcos>
Thats actually the wrong way to mark something conflicted now
< morcos>
but maybe i'm getting confused
< sipa>
txConflicted (iirc, not looking at the code) is just transactions that used to be confirmed which go back to being unconfirmed after the reorg
< sipa>
ah, no, which were in the mempool and are removed due to being in conflict with the new change
< sipa>
i believe you are right
< sipa>
trying to think whether one of the approaches can subsume the other
< morcos>
i'm having trouble figuring out why we ever needed the SyncWithWallets(txConflicted)
< sipa>
ah, there may be conflicts that are only obvious once you take dependencies through the mempool into account
< sipa>
the wallet's internal conflict detection code can only work when the disconnected chain is entirely inside the wallet
< morcos>
yes, makes sense
< sipa>
but it may make sense to keep the wallet's internal code as well, as i think it's more robust (it can also track things that are temporarily removed from the mempool), and it would keep working in a hypothetical spv mode
< sipa>
but i guess we do need to mark the things from txConflicted as actually conflicted
< sipa>
not just as unconfirmed
< morcos>
sipa: i'm not sure that's very doable
< sipa>
how so?
< morcos>
well, i don't think we've really ever tracked that
< morcos>
prior to your conflict change
< sipa>
i think we can?
< sipa>
we know what block caused the conflict
< morcos>
it was txs that were no longer in the mempool that were viewed as conflicted, it didn't really matter that you'd called SyncWithWallets(txConflicted)
< sipa>
yes, but i think we can make it work
< morcos>
but how would you ever know if it became unconflicted
< sipa>
when the block it conflicts with is no longer in the active chain
< sipa>
i believe that's what the negative confirmation logic already does
< morcos>
perhaps you are right...
< sipa>
we remember either the block which contains the transaction, or the first block that directly or indirectly conflicts with it
< sipa>
then we count the number of confirmations on that block, and negate the number if it's a conflict
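The negative-confirmation convention just described can be sketched as a pure function; heights here are hypothetical stand-ins for the block-hash lookups the real wallet does:

```cpp
#include <cassert>

// Sketch: the wallet remembers the height of either the block
// containing the tx, or the first block that (directly or indirectly)
// conflicts with it. Depth is counted against the current tip and
// negated for conflicts. A recorded block no longer in the active
// chain (height above the tip after a disconnect) yields 0, matching
// "confirmations go from -n to 0" when the conflicting block is
// reorged away.
int GetDepth(int tip_height, int recorded_height, bool is_conflict) {
    if (recorded_height < 0 || recorded_height > tip_height) return 0;
    int depth = tip_height - recorded_height + 1;
    return is_conflict ? -depth : depth;
}
```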
< morcos>
when we disconnect a block, where do we go back and reevaluate things that might have conflicted with the now disconnected txs?
< sipa>
we don't
< sipa>
there is no need, i think
< sipa>
the confirmations are always calculated based on the current best chain
< morcos>
ah, i guess what there is a need for is to mark the cached credits/debits as dirty though right?
< sipa>
so if that disconnected block was the one that conflicted with a transaction, the confirmations for that tx will go from -n to 0 afterwards
< sipa>
yes, we may need dirtymarking
< morcos>
ok, all very confusing... we probably need to comment this way better
< sipa>
agree.
< sipa>
it was a relatively short notice change after the mempool eviction had been implemented
< sipa>
we haven't really revisited it since
< morcos>
hmm, so to summarize, even in 0.13, there is a problem right?
< morcos>
if you have a mempool tx which is conflicted only via a chain of mempool txs... then the wallet code won't catch it to mark it properly conflicted, so it'll just get marked with a null hash block which in the 0.13 code does not make the balance available again
< sipa>
if that is true, it is probably 1) infrequent and 2) also wrong in 0.12
< morcos>
agreed and agreed, but damn annoying when it happens. :)
< sipa>
it also seems easy to fix, but probably hard to test
< sipa>
i will probably not have much time next week to look at things
< morcos>
yeah, i'm not sure i will either, but i'll make an issue for it at least...
< morcos>
sdaftuar thinks we may have already known about this, i remember we knew that it was hard to correctly identify chains of txs that should be marked conflicted, but didn't remember it causing a (re)spendability problem
< sipa>
it was known to me there were potential issues with chains outside of the wallet... but i think i considered it a best effort thing
< sipa>
and it is... if the chain gets broken due to eviction, it stops working too
< sipa>
i guess the novel thing is that the current code is mostly a no-op (ignoring those edge cases)
< sipa>
so either we do not care about through-mempool conflicts, and we should just remove the code
< sipa>
or we do, and then we should make it mark dependent txn as actually conflicted
< morcos>
yes, i think i agree with all that. i also think that the mid-block state is a bigger concern here, b/c then the existing code might be worse than a no-op. also aside from the mid-block state, the existing code might be worse than a no-op in the event that you've abandoned a tx, and then it's marked unabandoned if it gets conflicted via a mempool only chain.
< sipa>
right
< morcos>
sipa: ok last comment i think and then i'll shut up. the value of that SyncWithWallets(txConflicted) was essentially that it was marking things as dirty, that wouldn't have been caught by the wallets own conflict detection code. B/c it doesn't seem like it was ever changing any of the state of the wtx anyway. Is that right? that makes it easier to reason about.
< sipa>
my head hurts :)
< sipa>
i'll check the code later
< jl2012>
sipa: I think we could not make uncompressed keys non-std in segwit, because addwitnessaddress may create addresses with existing or imported uncompressed keys
< sipa>
jl2012: good point, but addwitnessaddress could test for that?
< jl2012>
it also needs to test pubkeys inside multisig addresses. Would it be too much?
< BlueMatt>
the total diff against master is simpler to me
< bsm117532>
BlueMatt: you're going to be in NYC from Monday, no? What do you have planned for your "hacker residency" and time here? I'd definitely like to swing by since I'm here and all...
< BlueMatt>
bsm117532: I'm already here, but, yea, we start on monday...if you want to swing by for a day or two the week after next that'd be cool
< BlueMatt>
next week might be tricky since we'll be doing a bunch of spin-up and such
< BlueMatt>
yea, I had it on my mental list to figure out when bitdevs things are and show up for some of them
< bsm117532>
The following week (Sep 21) a fellow has asked to present the Rootstock whitepaper. Hopefully I'll announce that event today.
< sipa>
jl2012: i dislike that a local wallet implementation detail would stand in the way of clearly useful network rule
< BlueMatt>
jl2012: I would have no problem if addwitnessaddress failed for uncompressed pubkeys
< BlueMatt>
alternatively, we could just compress the pubkey for them, at that point, but thats probably not ideal
< jl2012>
is multisig the only allowed P2SH address type in wallet?
< jl2012>
is addwitnessaddress the only possible way to create and add a witness address to the wallet?
< sipa>
i believe so
< sipa>
there is createwitnessaddress as well, however
< sipa>
which allows computing the address, but does not actually allow spending
< jl2012>
so createwitnessaddress will return an address, even if the script is clearly invalid? (e.g. OP_RETURN)?
< sipa>
yes
< jl2012>
so that's another problem
< sipa>
i don't think so - createwitnessaddress takes a raw script
< sipa>
it's a raw tool; we could warn that it does not guarantee it results in a spendable script
< sipa>
or just delete it
< jl2012>
yes, either way
< instagibbs_>
sipa: is there a reason addwitnessaddress doesn't add the address to the address book? Any funds sent to those addresses get counted as change, which is a bit off
< morcos>
cfields: sdaftuar: MarcoFalke: re: #8680, i actually think we need to improve the design a bit. the problem is latestblock can be accessed when it hasn't been updated by even the reference node yet? in practice this doesn't happen very often because the python control flow blocks on the reference node completing chain operations, but might happen in a p2p test.
< morcos>
In addition invalidateblock (and reconsiderblock?) don't even notify latestblock after chain operations are complete, but this i think is fixable separately
< morcos>
It seems like a better overall testing design might be to say sync_blocks(reference_node) where then you call reference_node.getbestblockhash() and then wait for all latest blocks to be at that hash (including at this point waiting for the reference_node to have finished wallet ops)
< morcos>
but this requires changing all the rpc tests. it's probably the case that reference_node is node0 a very high percentage of the time, but thats a bit of an unintuitive thing to have as a default
< morcos>
cfields: on an unrelated note, maxuploadtarget.py is failing now. i'm guessing its due to your network refactor? did you try that test?
< morcos>
cfields: yeah, i just checked, the merge of #8085 broke that test.
< cfields>
morcos: hmm, I was getting that spuriously, but I thought i was seeing it in master as well
< cfields>
probably crossed wires, though. checking now, thanks for letting me know
< cfields>
morcos: re rpc stuff, yea, there's lots to fix, i'm not sure where is the best place to start.
< cfields>
i think we need to nail down an approach to attack these async issues in general though, rather than reacting to changes
< morcos>
cfields: ha, that was sufficient for me to erase what i just typed. :)
< cfields>
heh
< cfields>
morcos: i started working on more async calls/features, but it all feels so ad-hoc that i'm having a hard time moving forward
< cfields>
i think a design doc is necessary :\
< morcos>
i think one question we should answer in that design doc is how important it is to fully support invalidateblock and reconsiderblock
< sipa>
i consider them unsupported raw emergency tools
< morcos>
originally i thought these were just kind of developer tools, and it wasn't maybe necessary that production code worked 100% seamlessly with them
< sipa>
they were introduced because of experience with the march 2013 attack
< morcos>
but seems like over time i've heard them being talked of as yeah, tools that we would use in an emergency
< sdaftuar>
i seem to find myself using them in tests a lot
< morcos>
which to me means they also need to make sure behavior is correct after you use them... such as if you're asking for a balance and you expect it to be after wallet is updated, then that ought to also work for invalidate block
< cfields>
morcos: they're interesting to me because they break all notions of fencing. And I assume there are other cases, or will be in the future. But it really makes me crave atomicity as a feature.
< kanzure>
which things feel adhoc about the async stuff?
< gmaxwell>
I also think of invalidate/revalidate as emergency tools / developer tools, I expected that they'd cause some misbehavior. OTOH, I don't think there is any fundamental reason why they need to: we already _must_ handle reorgs in the system.
< cfields>
kanzure: thinking in terms of individual calls rather than a general approach
< morcos>
gmaxwell: but a reorg that lowers height is basically impossible
< BlueMatt>
i mean if those rpcs break wallet, does anyone give a fuck?
< kanzure>
3 blocks replaced by 2 blocks at the tip?
< morcos>
gmaxwell: and reorgs don't return control while height is less than prev height
< BlueMatt>
they're primarily considered there for mining, no?
< BlueMatt>
anyone who's running a wallet and is aware of issues like this enough to want to run invalidateblock shouldnt run invalidateblock - they should stop using their damn wallet until the problem is fixed
< gmaxwell>
morcos: reorgs can reduce height, in fact.
< gmaxwell>
They commonly do on testnet, but they can on mainnet too.
< morcos>
gmaxwell: yeah i know, thats why i said basically
< gmaxwell>
oh sorry.
< morcos>
if we forget about where we are and just think about where we want to be
< morcos>
what we really might want is a blocking version of a wallet call that takes some hash
< morcos>
and then blocks until the work on the wallet chain is >= work on that hash?
< morcos>
which should then work for all reorgs, but not invalidateblock?
< morcos>
but maybe matt is right, maybe thats good enough
< cfields>
morcos: yes, that's basically what i've arrived at as well
< morcos>
then we could modify the tests to use a simple get bestblockhash on our reference node, and then call the blocking balance calls with that hash
< morcos>
then we'd have to do something somewhat smart to make tests that use invalidateblock work properly
< cfields>
<cfields> I think the reason today's discussion devolved so quickly is because "getblockcount" is impossible to define, because there's no global height. So the only fix is to ensure that we're asking a specific interface a specific question. Then there's no way of being out of sync because you've specified your constraints
< morcos>
at least thats what we want to do if we want to compare balances, if we're syncing chains for other reasons, thats fine, we just sync the getbestblockhashes
< morcos>
yep, i think we agree'ish
< cfields>
morcos: close, but i think that's falling into the same trap. What you really want in that case is "tell the wallet to wait for balance x at height y and hash z"
< morcos>
which part
< BlueMatt>
morcos: would we need to do something smarter for tests? I mean if we get to control what it blocks until isnt that fine?
< cfields>
(the missing constraint in yours was the balance itself)
< morcos>
well i was trying to imagine what might be a useful rpc call in general (and then how would we use that to accomplish the tests goals)
< morcos>
in general you might say, oh i know this many blocks have happened
< morcos>
i want the wallet to give me the balance as soon as its caught up that far
< morcos>
but you have no control if it accidentally proceeds beyond that before you get a chance to ask it
< morcos>
which is already the case
< sipa>
gmaxwell: invalidate and reconsider do need to do pretty invasive things to the internal data structures (they pretty much iterate over all block index objects and modify values here and there to keep things consistent)
< * BlueMatt>
missed a bunch of discussion, but what happened to "wallet rpc calls, by default, block until the wallet is up-to-date with the state of the chain at the start of the call, or, if the state of the chain changes and we never reach that point, until the wallet is caught up to the state of the chain"
< sipa>
BlueMatt: i am not convinced that is necessary
< BlueMatt>
not neccessary in what regard?
< BlueMatt>
not neccessary for tests or not a good idea to target for?
< sipa>
BlueMatt: it's a promise that's hard to keep if the wallet becomes more independent
< sipa>
maybe it is necessary, but i am not convinced yet
< BlueMatt>
I dont think its hard to keep?
< BlueMatt>
I mean you start wallet calls with "get current chain state" and then wait until the wallet thinks its up to date with that or better before returning
< sipa>
well, yes, a huge cost
< sipa>
of course it is easy to implement
< sipa>
but it may reduce performance a lot
< sipa>
so i'd rather not
< sipa>
anyway, off to have beer
< BlueMatt>
i mean, yes, there should be an option to say "actually, only wait until point X", but it should default to X being the current chainstate when entering the wallet call
< sipa>
i don't know about that
< sipa>
that seems like an over-conservative approach that we may regret
< morcos>
i think that what BlueMatt and I are saying is the same thing. Although I'd expose the argument as to what hash you want the wallet to require it is synced up to at least (workwise) and possibly add a default that does what matt suggests
< morcos>
but certainly i suppose there should be a just give me what you got version as well
< BlueMatt>
sipa: I mean I think the api change has a lot of cost to users, so would prefer we default to what it does now, though I suppose I donnt care as long as there is a way to easily switch to the original behavior
< sipa>
fair enough
< sipa>
the reason i think it may not be needed is that right now, between any two calls the height can already change
< sipa>
so i don't think that the wallet should report a state that is necessarily at the beginning of the call
< sipa>
it just needs to be internally consistent and well-ordered with the results of other calls
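The "block until the wallet has caught up to at least this much work" idea discussed above can be sketched with a condition variable; `WalletSyncGate` is hypothetical, and a plain integer stands in for the real chain-work type:

```cpp
#include <cassert>
#include <condition_variable>
#include <cstdint>
#include <mutex>

// Hypothetical sketch: an RPC thread snapshots the chain work it
// requires, then waits until the wallet's sync thread has processed
// at least that much work before answering.
class WalletSyncGate {
    std::mutex m_mutex;
    std::condition_variable m_cv;
    uint64_t m_synced_work{0};  // stand-in for arith_uint256 chain work
public:
    // Called by the wallet's sync thread after processing a block.
    void NotifySynced(uint64_t work) {
        { std::lock_guard<std::mutex> lock(m_mutex); m_synced_work = work; }
        m_cv.notify_all();
    }
    // Called by an RPC thread; returns once the wallet has seen at
    // least `target` work. Using work rather than height makes this
    // robust across ordinary reorgs.
    void WaitForWork(uint64_t target) {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [&] { return m_synced_work >= target; });
    }
    uint64_t SyncedWork() {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_synced_work;
    }
};
```

As noted in the discussion, invalidateblock breaks this: it can reduce best-chain work, so a caller waiting on a work target taken beforehand could wait forever.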
< cfields>
morcos: pretty sure i found the maxupload problem, working on a patch now. thanks again for the ping.
< BlueMatt>
heh, fun, if memory allocation fails in script (which it def could), bitcoind will reject the block as an invalid block
< BlueMatt>
yay overcommit
< gmaxwell>
actual overcommit will not cause an allocation failure.
< gmaxwell>
it will cause a crash, which would actually be preferable here. :)
< BlueMatt>
indeed, overcommit is required to run bitcoin core in consensus
< gmaxwell>
required is a bit too strong, in practice that divergence may be untriggerable.
< BlueMatt>
true
< gmaxwell>
This can probably be handled with a limited wrapper that catches any unexpected exception inside block validation and asserts.
< gmaxwell>
Which I think would be a really good idea, especially in the short term.
< BlueMatt>
i think in block validation its fine, it will throw out to caller and either give you an exception in rpc or in ProcessMessage
< BlueMatt>
but in script we have a general catch
< BlueMatt>
indeed, should probably have a catch for bad_alloc there that does an assert(false)
< gmaxwell>
I looked before to see if it was possible to catch bad_alloc process wide, but found nothing. (Also, I don't think that would work, because while stepping through with GDB I've seen boost code try to alloc TBs of memory then fail and continue on with life)
< BlueMatt>
in any case should probably put a second catch in script interpreter above the catch-all and assert(false) on bad_alloc
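The fix being proposed is small: inside the interpreter's catch-all, intercept `std::bad_alloc` first and abort instead of treating out-of-memory as "script invalid" (which could make a node reject a valid block). A hedged sketch, with `EvalGuarded` as a hypothetical wrapper name:

```cpp
#include <cassert>
#include <cstdlib>
#include <new>

// Sketch: run a script-evaluation callable under a guard that
// distinguishes OOM from genuine script failure.
template <typename Fn>
bool EvalGuarded(Fn&& fn) {
    try {
        return fn();
    } catch (const std::bad_alloc&) {
        // Allocation failure is not a consensus result: crash rather
        // than diverge from peers by rejecting the block.
        std::abort();
    } catch (...) {
        return false;  // any other unexpected error: script fails
    }
}
```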
< gmaxwell>
hm. I don't understand why my node is attempting feeler connections on IPv6 when the only v6 ifs I have are ::1 and a link local.
< phantomcircuit>
gmaxwell: the reachable stuff was largely disabled because it didn't work
< phantomcircuit>
so right now unless something is marked explicitly as unreachable everything is reachable
< phantomcircuit>
(ie you're using tor)
< gmaxwell>
this is suboptimal. :) at least it probably shouldn't think it can connect to IPv6 if I literally have no IPv6 addresses but local and linklocal.