< luke-jr>
sipa: I see your point. paveljanik: are you already working on a PR, or shall I?
< gmaxwell>
16:53 < Michail1> gmaxwell - Mental note: bitcoin-0.13.2-win64-setup.exe is detected by Norton (File Insight) as not safe and automatically removes it. (Not that you can do anything about it, just letting you know.) I am confirming the prior versions are not auto deleted.
< bitcoin-git>
[bitcoin] kallewoof closed pull request #9478: Trivial refactor: BOOST_FOREACH(a, b) -> for (a : b) (master...replace-boostforeach) https://github.com/bitcoin/bitcoin/pull/9478
< kallewoof>
sipa: i have 4 nodes (n1-n4) partitioned into two nets (n12, n34). n2 and n3 both spend the same UTXO, then 6 blocks are generated on each side, after which the nets are merged and an additional block is generated on n4, causing the n3 transaction to take precedence and knocking the n2 transaction out. this is sometimes listed in listsinceblock and sometimes not.
< kallewoof>
sipa: I tried putting a 1 sec delay after the generate to see if there was some wallet fiddling that did not finish in time but this did not change the outcome.
< gmaxwell>
listed in listsinceblock on which side?
< gmaxwell>
and which block are you listing since?
< kallewoof>
I am calling listsinceblock from n1
< kallewoof>
What's fascinating is that, in the MISSING case (i.e. when it doesn't list the now-orphaned transaction originating from n2), n1 has this in the debug.log file, while in the case where it IS listed it does not:
< kallewoof>
2017-01-05 11:04:46 Transaction efd9e5d4daf8d47a2cc07db0e153513b2d02da2e031d3f2398f804b2a3d7ba03 (in block 2fd09b11259689722ec38873aeedc7e27753a587f66a542bb2ae64546b17943f) conflicts with wallet transaction 5c717b80ee4e4909e9f4a15bfacd728c3be8c1e75d6eed83a3e9324f6b9ea38c (both spend f72d9d38468c40777640e5b7731dab76cd961ee60cc93198b2ec706c21fb1801:0)
< kallewoof>
I guess since the transaction was overridden by another one, 'orphaned' is probably not the right term. 'Invalidated'? 'Betrayed'? Anyway, the tx that lost the UTXO race.
< paveljanik>
luke-jr, I'm not (I was sleeping ;-)
< gmaxwell>
morcos: par=4 synctime of 24453.715550 (all signatures checked), and par=max (24 actual core host) 12182.296543... so yea, I suppose it's scaling better than it used to.
< gmaxwell>
(these figures with dbcache=2000)
< bitcoin-git>
[bitcoin] jonasschnelli closed pull request #7826: [Qt] show conflicts of unconfirmed transactions in the UI (master...2016/04/ui_mem_cflct) https://github.com/bitcoin/bitcoin/pull/7826
< bitcoin-git>
[bitcoin] jonasschnelli opened pull request #9481: [Qt] Show more significant warning if we fall back to the default fee (master...2017/01/fee_warning) https://github.com/bitcoin/bitcoin/pull/9481
< btcdrak>
jonasschnelli: I'm willing to test those this weekend...
< jonasschnelli>
btcdrak: They need an overhaul from my side, don't bother
< btcdrak>
ok
< jonasschnelli>
I'll rebase and overhaul them as soon as they rise to the top of my pile-of-work
< morcos>
gmaxwell: re: CreateTransaction logging, I think that's a great idea. If needed we could even make fee estimation take a bool to output more debugging information for the times it's called via CreateTransaction. but even without that, just having the basic info would be nice.
< jonasschnelli>
Idea: would it be stupid to use the first 16 addrs of the dns-seeder DNS response as a "hidden" secp256k1 compact sig for the next 16 addrs of a complete response of 32 addrs?
< jonasschnelli>
Then ship apps with an ec pubkey per seeder (that supports signed dns responses)
< jonasschnelli>
I think this approach would be a simple hack and much less work than switching to DNSSEC
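(For context, the arithmetic behind the idea: a secp256k1 compact signature is exactly 64 bytes, i.e. sixteen 4-byte IPv4 addresses, so the first 16 A records of a 32-record response could carry a signature over the last 16. Below is a purely hypothetical sketch of the signing side using libsecp256k1; SignSeederResponse and the OpenSSL SHA256 call are illustrative choices, not actual seeder code. As sipa and gmaxwell note next, intermediate resolvers reorder and trim responses, which breaks any scheme that depends on record order.)

    // Hypothetical sketch of the idea above, not actual seeder code.
    #include <secp256k1.h>
    #include <openssl/sha.h>   // assumption: any SHA256 implementation would do
    #include <array>
    #include <cstring>
    #include <vector>

    using IPv4 = std::array<unsigned char, 4>;

    // Sign the 16 real addresses and encode the 64-byte compact signature
    // as the 16 leading "addresses" of the 32-record response.
    std::vector<IPv4> SignSeederResponse(const std::vector<IPv4>& real_addrs,
                                         const unsigned char seckey[32])
    {
        // Serialize the 16 real addresses (64 bytes) and hash them into the
        // 32-byte message that secp256k1 signs.
        std::vector<unsigned char> ser;
        for (const IPv4& a : real_addrs) ser.insert(ser.end(), a.begin(), a.end());
        unsigned char msg32[32];
        SHA256(ser.data(), ser.size(), msg32);

        secp256k1_context* ctx = secp256k1_context_create(SECP256K1_CONTEXT_SIGN);
        secp256k1_ecdsa_signature sig;
        secp256k1_ecdsa_sign(ctx, &sig, msg32, seckey, nullptr, nullptr);
        unsigned char sig64[64];
        secp256k1_ecdsa_signature_serialize_compact(ctx, sig64, &sig);
        secp256k1_context_destroy(ctx);

        // The 64 signature bytes become 16 fake IPv4 addresses, prepended
        // to the real ones; clients verify with the shipped per-seeder pubkey.
        std::vector<IPv4> response(16);
        for (int i = 0; i < 16; ++i) std::memcpy(response[i].data(), sig64 + 4 * i, 4);
        response.insert(response.end(), real_addrs.begin(), real_addrs.end());
        return response;
    }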
< sipa>
jonasschnelli: i believe intermediate resolvers can reorder/filter responses
< gmaxwell>
s/can/constantly/
< gmaxwell>
a lot of intermediate resolvers trim the results to just a few, and many (most?) reorder them (e.g. under the assumption that the final device will use the first, then yielding a cached response they reorder to avoid overloading the source).
< jonasschnelli>
hmm... that's unfortunate.
< jonasschnelli>
Why do we try to connect to the dnsseeder on 8333? One-shot getaddr?
< sipa>
?
< sipa>
you mean why does the oneshot concept exist?
< sipa>
if you're connecting over Tor you shouldn't do DNS lookups, as they'd leak your traffic
< jonasschnelli>
sipa: That was the problem! Damn Qt settings...
< jonasschnelli>
Now I also know why my SPV block downloads were slower than expected. :)
< sipa>
so instead we "connect" to the seeder, which in practice means we're connecting to a Tor exit router, and make the router resolve the seeder, and connect to it
< sipa>
we don't actually learn what IP we're connecting to in that case
< morcos>
cfields: just finished reading through 9441, didn't understand your last comment on the PR, you are going to change it back to what?
< sipa>
morcos: the current PR makes ProcessMessages only process one message at a time
< morcos>
i don't think you need to change it (now) i think its fairly clear any edge case change from current behavior is a net benefit
< morcos>
sipa: but that's what it did before
< sipa>
yes
< morcos>
so you think he should change it to process more than one?
< sipa>
i'd be more comfortable with that
< BlueMatt>
you'd be more comfortable with a change in behavior?
< morcos>
that seems like the change to me, possibly a good one, but the PR seems more clearly correct without doing that b/c it doesn't require thinking about that
< BlueMatt>
I mean I agree we should probably do that in the future, but why change it now?
< morcos>
BlueMatt: +1
< sipa>
i'm confused
< sipa>
the current code processes multiple messages at once
< BlueMatt>
the pr does not change the behavior except for some stupid weird edge cases
< sipa>
his current PR changes that to only process one at a time
< BlueMatt>
specifically, currently, if you have a message with a bad hash, it will process more than one
< morcos>
morcos> sipa: but that's what it did before (meaning master does not process more than one)
< BlueMatt>
but the current code does NOT process more than one message if it calls ProcessMessage
< sipa>
morcos: heh?
< BlueMatt>
(also the pr just disconnects on a bad hash, which I think is a change, but a good one imo)
< sipa>
master does process more than one, unless it's a block
< BlueMatt>
no
< sipa>
or the send buffer is full
< BlueMatt>
it does not
< BlueMatt>
I had the same misunderstanding initially
< morcos>
line 2566 in net_processing.cpp on master i think
< sipa>
ok, i was not aware of that
< sipa>
but that seems to be a bug
< morcos>
:) i think cfields tried to explain it multiple times
< morcos>
i think we all agree it may be better to process multiple messages, but it seems to me to make more sense to do that as a follow on PR
< BlueMatt>
(and the overhead of not doing so is (likely) not too terrible)
< BlueMatt>
(except for the bug fixed by #9315)
< gribble>
https://github.com/bitcoin/bitcoin/issues/9315 | Request announcement by cmpctblock AFTER requesting cmpctblock/blocktxn by rebroad · Pull Request #9315 · bitcoin/bitcoin · GitHub
< morcos>
we need to think carefully if there could be negative situations in the other direction.. you're about to announce blocks to all your peers or respond to their getblocktxns messages and some other peer manages to tie you up with a slew of expensive-to-process received msgs
< BlueMatt>
heh, fun github bug - if you "start a review" and then "add single comment", it will publish all pending comments
< sipa>
on a different page?
< sipa>
ok, so it seems i had completely forgotten about #3180 (> 3 years old)
< BlueMatt>
you don't /usually/ forget prs from 3 years ago???!
< BlueMatt>
man, I can't remember prs from 6 months ago
< cfields>
sipa: heh, i commented on it in about ~5 places :)
< morcos>
i think clearly we should do SOMETHING smarter, i mean if a block has been announced to you and is sitting in your receive queue, seems silly to announce it back just b/c you haven't read it...
< cfields>
sipa: i thought you just wanted to minimize the diff here
< sipa>
cfields: you said "in nearly all cases, only one message is processed" - i thought that referred to cases where the buffer was full or we're processing blocks - the pre-3180 behaviour
< sipa>
and i didn't understand why you'd be changing it
< morcos>
so do we all agree that cfields should leave 9441 alone, and any further change should be in a separate PR?
< sipa>
yes.
< cfields>
sipa: ah, sorry. the only cases for processing multiple are for bad hash, or bad header
< sipa>
i was trying to ask "why are you changing behaviour" - it would have been clearer if you just had said "it doesn't"
< jonasschnelli>
luke-jr: I first was using the term "non-validation mode". But then – after reading satoshi's whitepaper again – decided to use the name "Simple Payment Verification".
< sipa>
sorry, i was probably misreading what you said
< morcos>
or something.. nm
< cfields>
sipa: it's mentioned in a bunch of places and at one point you said "I certainly noticed it only processed one message at a time", so i thought we were on the same page
< cfields>
no worries though, sounds like we're all good now
< morcos>
cfields: i'll review again after you fix outstanding comments
< sipa>
cfields: i noticed that your PR changed it to only processing one message at a time
< sipa>
i didn't realize that that was not a change
< cfields>
sipa: ah, heh. i misread you too then :)
< morcos>
i'm not sure i 100% understand the wait_until condition, is the idea that you don't want spurious wakeups to cause another loop?
< cfields>
ok. I rebased this morning to address all nits and keep the loop in. Will back that out and push.
< sipa>
anyway - misunderstandings in both directions. i should have read the code, instead of making (apparently 3-year old) assumptions about the code
< cfields>
sipa: sure, no worries. It's not obvious behavior _at all_.
< cfields>
morcos: the condition lets us wake the processor from the message handler thread when a new message comes in
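(The pattern cfields is describing is, roughly, a condition variable guarding a wake flag. The names fMsgProcWake and WakeMessageHandler below match the vocabulary used later in this conversation, but treat this as a minimal illustrative sketch rather than the exact #9441 code:)

    // Minimal sketch of the wake pattern under discussion.
    #include <chrono>
    #include <condition_variable>
    #include <mutex>

    std::mutex mutexMsgProc;
    std::condition_variable condMsgProc;
    bool fMsgProcWake = false;

    // Called from the socket thread when a new message arrives.
    void WakeMessageHandler()
    {
        {
            std::lock_guard<std::mutex> lock(mutexMsgProc);
            fMsgProcWake = true;
        }
        condMsgProc.notify_one();
    }

    void ThreadMessageHandler()
    {
        while (true) {
            // ... run ProcessMessages()/SendMessages() for each peer ...

            std::unique_lock<std::mutex> lock(mutexMsgProc);
            // Sleep at most 100ms. The predicate answers morcos's question
            // above: a spurious wakeup with fMsgProcWake still false goes
            // back to sleep, while a wake that raced with the work above
            // returns immediately instead of waiting out the full interval.
            condMsgProc.wait_until(lock,
                std::chrono::steady_clock::now() + std::chrono::milliseconds(100),
                [] { return fMsgProcWake; });
            fMsgProcWake = false;
        }
    }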
< bitcoin-git>
bitcoin/master 9479f8d Jonas Schnelli: Allow shutdown during LoadMempool, dump only when necessary
< bitcoin-git>
bitcoin/master 325e400 Jonas Schnelli: [Qt] Do proper shutdown
< bitcoin-git>
bitcoin/master 46b249e Pieter Wuille: Merge #9408: Allow shutdown during LoadMempool, dump only when necessary...
< bitcoin-git>
[bitcoin] sipa closed pull request #9408: Allow shutdown during LoadMempool, dump only when necessary (master...2016/12/memp_dump) https://github.com/bitcoin/bitcoin/pull/9408
< morcos>
cfields: ah yes, ok good.. cool. i like the new design...
< cfields>
morcos: great, thanks.
< luke-jr>
BlueMatt: how's this look now?
< cfields>
morcos: updated. Very little changed from before, mostly just fixed some things in the interim commits to be more correct for easier review
< brg444>
can someone confirm how many iterations of segwit testnet there were?
< sipa>
4, i believe
< brg444>
thanks
< morcos>
cfields: one more question on that wait_until.. let's say a block message comes in, takes you a while to process.. if any peers sent you a new message in the interim, you won't wait b/c of fMsgProcWake, correct?
< morcos>
But if no peers sent you a message, you will announce quickly to peers N+1 -> MAX_CONNECTIONS, but then sleep up to 100ms before announcing to peers 1 -> N-1 ?
< gmaxwell>
sipa: ProcessMessages _currently_ processes one at a time, there is a break stuck in the bottom of the loop.
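(A simplified sketch of the loop shape gmaxwell means, not verbatim master code; ChecksumOk is a placeholder for the real checksum/header checks. The unconditional break at the bottom is why at most one good message is handled per call:)

    // Simplified shape of the pre-9441 ProcessMessages loop.
    bool ProcessMessages(CNode* pfrom)
    {
        auto& queue = pfrom->vRecvMsg;
        while (!queue.empty()) {
            CNetMessage msg = queue.front();
            if (!msg.complete()) break;       // no full message buffered yet
            queue.pop_front();
            // A bad checksum or header hits 'continue', skipping the break
            // below; that matches cfields's note that those are the only
            // cases where more than one message gets consumed per call.
            if (!ChecksumOk(msg)) continue;   // ChecksumOk is a placeholder
            ProcessMessage(pfrom, msg);       // handle exactly one message
            break;                            // the break gmaxwell means
        }
        return true;
    }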
< morcos>
I thought I had seen somewhere talk about making the wait_until time be 100ms from start of loop and not end, or perhaps we should add a WakeMessageHandler for new tips in particular?
< gmaxwell>
morcos: I opened a PR that did those things, and also explicitly asked about the fact that it changed the message handling back to process multiple messages. After no one replied for a bit I just changed it back to one message at a time.
< cfields>
morcos: yes, that would probably make sense. I think gmaxwell's change included that
< gmaxwell>
Then after cfields said he preferred his net refactors I closed it.
< gmaxwell>
it didn't really make that much of a difference when the other problems were fixed.
< morcos>
gmaxwell: I like the changes in 9441 and i think it's good to be making progress as part of a larger roadmap.. i think it captures most of the important improvements..
< morcos>
perhaps we can add quick relay of new tips, and processing multiple messages requires a bit more thought in my view.. as we can see by the fact that it was ever taken out in the first place
< morcos>
that said, i admit i did not look at your PRs
< morcos>
hard to keep up!
< cfields>
It's certainly reasonable to tweak it now that the dependencies are untangled, I just tried to keep the scope narrow at first
< gmaxwell>
I know, that's why I closed them rather than have more things to look at!
< gmaxwell>
not gonna stop me from going 'I already pointed this out, if you only listened!' :P
< morcos>
insufficient emoticons
< sipa>
gmaxwell: i finally understand your comment on the PR
< sipa>
gmaxwell: i thought you were trying to say that that PR changed it to only process one at a time (which i noticed)... and i was very confused by what you said afterwards about fixing it
< gmaxwell>
sipa: it changed it to process multiple at a time initially, and I immediately added a line comment on that change asking for review of that (which github seems to have lost when I force pushed a change that reverted back to the old behavior)
< cfields>
morcos: hmm, wait. Are you talking about the new quick-relay from 9375 in particular?
< morcos>
cfields: no i'm talking about exactly not that
< sipa>
seems i was just assuming i knew what master did, and i misinterpreted everything that people said when they tried to explain it to me
< cfields>
morcos: ok, good
< gmaxwell>
sipa: I think it should probably handle multiple at a time subject to some time limit. ::shrugs::
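(What "multiple at a time subject to some time limit" might look like; a purely hypothetical sketch, since nothing like this was merged at the time. MAX_PROCESS_TIME and fMoreWork are made-up names:)

    #include <chrono>

    // Made-up per-call budget; purely illustrative.
    static const auto MAX_PROCESS_TIME = std::chrono::milliseconds(50);

    bool ProcessMessages(CNode* pfrom)
    {
        const auto deadline = std::chrono::steady_clock::now() + MAX_PROCESS_TIME;
        bool fMoreWork = false;
        while (!pfrom->vRecvMsg.empty()) {
            // ... pop and handle one complete message, as in the loop above ...
            if (std::chrono::steady_clock::now() >= deadline) {
                // Out of budget: leave the rest queued and tell the caller
                // to come straight back instead of sleeping up to 100ms.
                fMoreWork = !pfrom->vRecvMsg.empty();
                break;
            }
        }
        return fMoreWork;
    }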
< morcos>
cfields: it's very easy to see the behavior with 9375 now though b/c of the cached blocks making the announcements so fast. you watch announcements in quick succession up to some high-numbered peer and then a pause before announcements start to low-numbered peers
< sipa>
gmaxwell: absolutely agree
< sipa>
but if we've done it this way since 0.10, i really don't care whether that happens in 0.14 or not
< morcos>
sipa: gmaxwell: i think ideally we might do something even smarter, such as queue received transactions to be ATMP'ed after we'd gotten through block relay.. however it's a bit tricky in that you don't want to reorder those after any potential other cmpctblock messages for reconstruction purposes
< cfields>
morcos: ah, i think that'd be a different culprit then
< gmaxwell>
sipa: in other things people probably didn't notice, I've been complaining that our socket receive buffer (5 MB) is _way_ too small given how long executions of the message handler take; and it is adversely impacting performance; even with cfields' PR.
< morcos>
cfields: wait, why a different culprit? didn't the old code have the same problem in a different way?
< cfields>
morcos: to be clear, you're saying that you observe that behavior with the cached blocks?
< morcos>
cfields: :) yes, but not the pre-announced cached blocks. we use the cached blocks for regular announcement too.
< gmaxwell>
sipa: e.g. right now we can, during a single handler run, connect 999 blocks which will take 2000 seconds even on a moderately fast computer... all the rx buffers just fill. (and on a really slow computer, we'll ping timeout our peers while connecting a wad of blocks)
< morcos>
they just make the rest of the loop so fast that the pause is more visible
< gmaxwell>
morcos: my strategy was basically to make the loop run immediately again after anything interesting happened. I think it's harmless to always execute the loop an extra time.
< morcos>
gmaxwell: how would you define interesting? any message processed?
< gmaxwell>
morcos: yes, though my PR didn't bother catching any message processed, it did catch any message sent or any block received, I believe.
< gmaxwell>
I think it would be fine to make it do any message sent or received.
< gmaxwell>
I also made it run through the loop an extra time in any case it skipped something due to lock contention.
< sipa>
gmaxwell: i'm aware... but i think it's just fundamentally broken that we're pegging the message handler thread for 2000s
< sipa>
gmaxwell: it's not too hard to not always connect everything we have
< gmaxwell>
sipa: That would be good, the change to accomplish that wasn't obvious to me or I would have PRed it already; though that is necessary, it's not sufficient.
< gmaxwell>
sipa: since even a _1_ second delay coupled with a 5MB buffer is enough to limit our IBD sync speed to far below what we can reindex-chainstate at on reasonable hardware.
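(Back-of-the-envelope, to make that concrete: if the 5 MB receive buffer fills and the handler then stalls for 1 s, a peer can deliver at most 5 MB per stall, i.e. roughly 5 MB/s, about 40 Mbit/s, of block data, no matter how fast the link or the validation code is.)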
< gmaxwell>
morcos: the wake on lock contention seemed important to me, otherwise we could get a message from a peer, bounce off cs_main before getting to handle it, then sleep for 100ms before trying again.
< gmaxwell>
(e.g. if we're competing with rpc for cs_main)
< morcos>
gmaxwell: hmm.. that makes sense. which place are you referring to, where we skip handling the message if we can't get cs_main?
< gmaxwell>
morcos: ah, I think one of the just merged networking changes just removed where we did that.
< morcos>
oh in SendMessages
< gmaxwell>
oh no, there it is.
< morcos>
gmaxwell: on another note, i guess our ability to make use of >4 cores isn't as good as I thought... I had a whole series of PR's that removed successive bottlenecks, but i'd misremembered how much progress we could make with only the cuckoocache
< morcos>
And then I got sidetracked by the near-consensus bug with CCoinsViewCache.. but when I get a chance I'll go back and see if there are other parts of that that are still worth doing now
< gmaxwell>
yea...
< BlueMatt>
ugh, ffs, fibre cuts in india means fibre rtt between eu and asia is up 90ms on one path :(
< luke-jr>
BlueMatt: no, where is it assumed to return non-NULL?
< gmaxwell>
there is some advantage in skipping processing peers that need a lock, when other peers can be processed without it.
< BlueMatt>
luke-jr: ok, please add documentation to note that it is assumed it can return null
< gmaxwell>
though on the other hand, I think that that construction could leave us sending pings even if cs_main is deadlocked, which would be undesirable. (not an actual issue because our message handler will get stuck elsewhere in that case, but still)
< luke-jr>
I would think that's implied in returning a pointer type, but ok
< BlueMatt>
luke-jr: could you not rebase when people are in the middle of review?
< luke-jr>
haven't rebased that PR in a week, but ok
< BlueMatt>
did you just rebase again?
< luke-jr>
no, just added the comment
< BlueMatt>
no? the previous head is now gone?
< luke-jr>
indeed, I amended it
< BlueMatt>
please do not do that
< * luke-jr>
starts a list of the contradicting development processes preferred by various people.
< * BlueMatt>
restarts review from the start :(
< BlueMatt>
does anyone ask for regular squash/rebase mid-review?
< luke-jr>
I don't usually distinguish (or have a way to) when others are actively reviewing. (note there were already amended commits before I became aware you were)
< gmaxwell>
at what point is someone not reviewing?
< luke-jr>
gmaxwell: exactly, hence why some desiring squashing vs others not liking squashing [at particular times] is confusing to resolve
< BlueMatt>
when it is time to merge and/or there are limited comments showing up?
< BlueMatt>
no one desires squashing unless it's been a while since things have been commented on, though to be fair, you should, at a minimum, comment when you squash and note changes
< BlueMatt>
eg respond to the previous comments and note where you did/didnt make changes, instead of letting them sit
< sipa>
i usually prefer squashing trivial fixes immediately, unless it's a very complicated PR
< sipa>
most of the time is understanding what the PR is trying to achieve and how... reading the code is easy
< gmaxwell>
matt's style can be helpful on more complicated ones, but as someone who has often come to a PR to review later, I've found the style to really suck in that case. I waste my time finding bugs that are already fixed in later updates.
< BlueMatt>
not for a major refactor pr where most of the pr is just code movement
< sipa>
gmaxwell: likewise
< sipa>
(but i don't mind following either style... just my personal preference is usually fixing things immediately)
< BlueMatt>
gmaxwell: I try to title such commits f "Commit title this should be squashed into" fix XYZ
< BlueMatt>
which should at least make it easy to glance at such changes
< sipa>
i guess things have improved with the "review" thing
< gmaxwell>
also it leaves me having to re-review the squash since sometimes a pair of reasonable-looking changes may reveal their brokenness once merged. (also we have no tools to tell if a squash was faithful in any case, so someone malicious could slip something in if people didn't review the squash just as carefully)
< sipa>
so you can comment on an issue, and then later delete if you see it's already fixed ahead of time
< BlueMatt>
yes, my bigger annoyance is having to re-review after squash was shit
< BlueMatt>
it's easy if it's clear and the commits are just being cleanly squashed (can compare treeish)
< luke-jr>
I have an alias that compares diffs that I've gotten used to using to see what changed in an amend
< owowo>
is there a time when core drops a tx if unconfirmed for a long time? Or will it just keep on rebroadcasting it until the return of the Messiah?
< BlueMatt>
the wallet will keep rebroadcasting, but you can "abandon" a transaction both in the gui and the rpc
< BlueMatt>
the mempool will eventually drop them, but there are "helpful" nodes which like to rebroadcast lots of shit to their peers all the time
< owowo>
ok, thx
< bitcoin-git>
[bitcoin] gmaxwell opened pull request #9484: Introduce assumevalid setting to skip validation presumed valid scripts. (master...script_elide_verified) https://github.com/bitcoin/bitcoin/pull/9484