< morcos>
BlueMatt: The remaining sdaftuar comment is in regards to the fact that you could announce two compact blocks via NewPoWValidBlock in quick succession even after the change to only announce potential new tips
< morcos>
If those are tied at the same height, then one of them will not have ever become our tip and so won
< morcos>
oops.. and so won't be BLOCK_VALID_SCRIPTS or in our best chain and we'll stall any node asking for it
< morcos>
but yes the change to BLOCK_VALID_TRANSACTIONS is in the later line where we set "send". and the recently added code in getdata to fix my bug from 9447 can go away.
< gmaxwell>
I've kinda wondered if we shouldn't just remember the NewPoWblock announcements that we've done e.g. {peer, hash} in a limitedmap, and then if we get a request that is in that map, just blindly reply with the data on disk.
< gmaxwell>
Similar to how the relay pool works.
< gmaxwell>
(in that the relay pool means we'll give you a transaction we offered to you, even if we've since discarded it)
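(A minimal sketch of that {peer, hash} idea, assuming Bitcoin Core's limitedmap from limitedmap.h; the map size and helper names here are invented for illustration, not actual code:)

    #include "limitedmap.h"
    #include "net.h"      // NodeId
    #include "uint256.h"
    #include "utiltime.h" // GetTime()

    // Remember recent NewPoWValidBlock announcements so a getdata for one of
    // them can be answered even if the block never became our tip. limitedmap
    // evicts the entry with the smallest value (here: oldest time) when full.
    static limitedmap<std::pair<NodeId, uint256>, int64_t> g_block_announcements(64);

    void RememberBlockAnnouncement(NodeId peer, const uint256& hash)
    {
        g_block_announcements.insert(std::make_pair(std::make_pair(peer, hash), GetTime()));
    }

    bool BlockWasAnnouncedTo(NodeId peer, const uint256& hash)
    {
        return g_block_announcements.find(std::make_pair(peer, hash)) != g_block_announcements.end();
    }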
< morcos>
gmaxwell: that's what i was thinking at first too, but i think sdaftuar's solution might just be more elegant
< morcos>
as long as you lock cs_main if the requested hash doesn't match the cached block, that'll mean any announced block will be written to disk, and be available to be read..
< morcos>
since these will be rare, i don't think there is a performance issue with reading them back from disk, the only existing problem is we weren't serving them b/c we weren't seeing them as BLOCK_VALID_SCRIPTS b/c they'd never been connected
< morcos>
oh shoot... i keep forgetting... you can't serve them in that case
< morcos>
well if we're going to be pedantic about following the protocol we've got a problem
< morcos>
b/c we're announcing blocks before we even know if we're going to test them for validity, so what do we do if we never test them
< morcos>
also perhaps we never thought through the case of what to do if we announce a block that turns out to be invalid... surely stalling the requester isn't the right course of action
< gmaxwell>
(actually the deserialization is pretty slow, as an aside)
< gmaxwell>
your concern is because we didn't really properly consider relaying unvalidated blocks, but we just glommed it on because it would have been unfortunate to specify BIP152 another way.
< morcos>
i hope thats a statement and not a question
< gmaxwell>
We should specify a network message that says "I think block XYZ is invalid, if I told you about it before, forget about it, I'm not going to respond to requests about it."
< gmaxwell>
statement.
< gmaxwell>
A negative-inv if you will.
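(A purely hypothetical shape for such a message; nothing like it exists in the P2P protocol, and every name below is invented:)

    // "negative-inv": a peer tells us that blocks it previously announced are
    // now considered invalid, so we should forget them and not request them
    // from that peer.
    static const char* NOTVALID = "notvalid"; // invented message name

    void ProcessNotValidMessage(CNode* pfrom, const std::vector<CInv>& vInv)
    {
        for (const CInv& inv : vInv) {
            if (inv.type != MSG_BLOCK) continue;
            // Cancel any pending getdata to this peer for inv.hash and try to
            // fetch the block from a different announcer instead (not shown).
        }
    }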
< morcos>
yeah.. but we need to do something for now..
< gmaxwell>
though perhaps like the relay pool we ought to be willing to serve up data for any block we've pre-relayed, even if we never verified it and don't intend to.
< gmaxwell>
which we could do with a relay pool like strategy. even avoid putting them on disk... (who cares about a few megabytes of blocks in ram?)
< morcos>
yes but avoiding disk is an unnecessary optimization... we're talking about a rare race condition here (and we've already written them to disk before we've even decided whether to validate or not)
< gmaxwell>
okay, from your comments above I was thinking there was some path where we didn't save to disk.
< gmaxwell>
We don't generally want to allow arbitrary peers to fetch blocks on disk that aren't in our best chain. (leads to fingerprinting attacks). Which was why I made the peer,hash suggestion above.
< morcos>
gmaxwell: we already have various other protections that don't always require that it's in our best chain; in this case, we require non-best-chain blocks to be less than 30 days old
< morcos>
but we also require BLOCK_VALID_SCRIPTS, if we just loosen that we're ok (for the case of untested blocks)
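(For reference, roughly what the serving check in ProcessGetData looked like around this time; lightly trimmed, so treat it as a paraphrase rather than the exact code:)

    bool send = false;
    BlockMap::iterator mi = mapBlockIndex.find(inv.hash);
    if (mi != mapBlockIndex.end()) {
        if (chainActive.Contains(mi->second)) {
            send = true;
        } else {
            static const int nOneMonth = 30 * 24 * 60 * 60;
            // To prevent fingerprinting attacks, only serve blocks outside the
            // active chain if they are valid and no more than a month older
            // than the best header chain we know about.
            send = mi->second->IsValid(BLOCK_VALID_SCRIPTS) && pindexBestHeader != NULL &&
                   (pindexBestHeader->GetBlockTime() - mi->second->GetBlockTime() < nOneMonth);
        }
    }

(The BLOCK_VALID_SCRIPTS requirement is exactly what morcos is proposing to loosen for announced-but-unvalidated blocks.)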
< morcos>
i still think we potentially have a problem about what to do with blocks that turn out to be invalid
< morcos>
back in a bit
< gmaxwell>
I think for now we should serve them if we've advertised them... but should make a protocol extension so we can refuse to.
< gmaxwell>
(we do ourselves no favors to help the network converge on things we think are invalid)
< gmaxwell>
why do we not have a birthday/rescan height for importaddress?
< gmaxwell>
I had thought this was going to get fixed with a range setting for the rescan rpc, but I see we closed that with 'you should import with birthdays' but there is no way to do that for a watching address import.
< gmaxwell>
oh I see importmulti has a timestamp. cool.
< gmaxwell>
midnightmagic: wrt the question someone was asking in #bitcoin about how do you handle an import backlog for a node that was offline: You use importmulti with the appropriate birthdates, and it will handle any required rescan.
< gmaxwell>
perhaps the importaddress RPC help text should recommend you use importmulti instead.
< gmaxwell>
gah we still sign inside the transaction creation inner loop?! I thought we fixed that a long time ago. :-/
< gmaxwell>
insanity.
< jonasschnelli>
gmaxwell: you don't if you are using fundrawtx
< jonasschnelli>
well, you use the dummy signer
< Jouke>
pruning nodes do forward blocks nowadays, right?
< gmaxwell>
yes.
< Jouke>
A rescan on my problem node didn't work, so I removed the block and chainstate dir and let it sync from fresh.
< gmaxwell>
rescan? why were you rescanning?
< Jouke>
I had that problem before which you helped me with a couple of days ago.
< Jouke>
It couldn't read a block from disk when I issued a getblock rpc call
< gmaxwell>
did you rescan or reindex?
< gmaxwell>
rescan will do nothing, so if that didn't work it's unsurprising.
< Jouke>
Euh, reindex indeed
< Jouke>
I did a reindex
< gmaxwell>
whew. Interesting!
< Jouke>
But since it was connected only to a pruning node, I had it temporarily connect to another node of mine through an ssh tunnel.
< gmaxwell>
yes, can't fetch the history through the pruned node.
< Jouke>
That connection went down this evening (I'm in europe), and it synced to block 445248
< Jouke>
But the pruning node it connects to should have the rest of the blocks since prune=80000
< gmaxwell>
if you were expecting it to fetch blocks that weren't at the tip but were within the pruning window, that also won't work: the pruned node has no way to tell the peer what blocks it has.
< gmaxwell>
your fetching node will not try to sync off it. But when the pruned node has a new block it will relay it along (and if there is a reorg required, the client node will still fetch whatever blocks are required to accomplish the reorg).
< Jouke>
Hmm, ok.
< Jouke>
So my expectations were wrong. Thanks again for explaining!
< bitcoin-git>
[bitcoin] gmaxwell opened pull request #9465: [Wallet] Do not perform ECDSA in the fee calculation inner loop. (master...no_signing_in_inner_loop) https://github.com/bitcoin/bitcoin/pull/9465
< gmaxwell>
wumpus: I'm glad I wasn't the only person who thought it was already the case.
< bitcoin-git>
[bitcoin] fanquake opened pull request #9467: [Trivial] [Doc] Install Protobuf v3 on OS X (master...osx-protobuf-doc) https://github.com/bitcoin/bitcoin/pull/9467
< gmaxwell>
only noticed it wasn't when I went to decrement the fee when we found we needed less than we expected and found I couldn't because the transaction was already signed.
< wumpus>
gmaxwell: I thought that was exactly why the dummy signer was introduced, but apparently not
< gmaxwell>
I ... think there was a miscommunication when it was proposed that a dummy signer be used there. :P
< gmaxwell>
As in someone thought that it was only needed for the fundrawtransaction case. :P
< wumpus>
probably
< wumpus>
ohh, so that's why it was introduced. Yes I remember now
< wumpus>
anyhow using it for normal transaction generation makes a lot of sense
< gmaxwell>
should be a lot faster in some cases.
< gmaxwell>
but also lets us do more sensible things with fees.
< jonasschnelli>
We could remove the "real" signing from CreateTransaction, factor out SignTransaction (from signrawtx), and use it everywhere: Create/Fund/Sign.
< jonasschnelli>
(everywhere internally)
< jonasschnelli>
Qt / sendtoaddr / sendmany
< jonasschnelli>
Ah. Just saw. #9465
< gribble>
https://github.com/bitcoin/bitcoin/issues/9465 | [Wallet] Do not perform ECDSA signing in the fee calculation inner loop. by gmaxwell · Pull Request #9465 · bitcoin/bitcoin · GitHub
< fanquake>
Is libevent 2.1.7 actually a release candidate, or just a new beta version? Maybe a chance for a new release in the next few months.
< fanquake>
wumpus would you be opposed to moving depends to the latest libevent beta (which 2.1.7 is) for the 0.14 release?
< fanquake>
Just looking through dependencies now. Qt is another one to consider. 5.7 has had a point release, so should be more stable.
< cfields>
fanquake: speaking of, I know I owe you a bunch of review/acks on depends PRs. Been rushing to get net stuff finished up. I'll find some time this week to go through those.
< fanquake>
cfields no worries. I've just fixed-up the Zero-MQ one. Was going to open a more general depends related one tonight/tomorrow. Are you working on anything there that might conflict?
< cfields>
fanquake: no, only qt would conflict. I have a halfway-rewritten .mk to take advantage of some of the new build features for 5.7. But iirc we can bump without that and save it for 5.8 if needed.
< cfields>
er, that should say halfway-done rewrite
< fanquake>
cfields cool. I'll get these changes finished up and PR. Are we at all confident of dropping Boost for 0.14.0? Otherwise is a 1.61.0 -> 1.63.0 bump worth it in the interim?
< cfields>
fanquake: yea, not going to make it for 0.14. The changes are no fun to review, so it tends to go slowly
< fanquake>
cfields fair enough, it's a long slog. You're doing great work though. :)
< cfields>
heh, thanks
< wumpus>
fanquake: no, I'd propose the same
< wumpus>
fanquake: the libevent beta is *lots* better than the last stable, lots of fixes and new features
< fanquake>
wumpus: Great, i'll work on some changes for that as well.
< wumpus>
IIRC also means a windows-specific patch is no longer necesary
< fanquake>
Yes, I think there is the possibility to drop some workarounds as well.
< bitcoin-git>
bitcoin/master 0513c70 Gregory Sanders: Make rpcauth help message clearer, add example in example .conf
< bitcoin-git>
bitcoin/master c2ea1e6 MarcoFalke: Merge #9401: Make rpcauth help message clearer, add example in example .conf...
< bitcoin-git>
[bitcoin] MarcoFalke closed pull request #9401: Make rpcauth help message clearer, add example in example .conf (master...rpcauthnotes) https://github.com/bitcoin/bitcoin/pull/9401
< bitcoin-git>
[bitcoin] laanwj opened pull request #9470: qt: Set (count) placeholder in sendcoinsdialog to notranslate (master...2017_01_more_translate_fixes) https://github.com/bitcoin/bitcoin/pull/9470
< * jonasschnelli>
is downloading github issue/PR-html 0-10000 to process html and calculate how many comments were made in 2016
< fanquake>
jonasschnelli What's your guess?
< jonasschnelli>
No idea. :)
< jonasschnelli>
I guess around 30 per day
< jonasschnelli>
I thought about counting words per comment... but due to code/log copy and paste, etc., this is not really representative
< fanquake>
I was going to say far less than that, maybe between 3000-5000 comments total. However now I think about it, some days I'd easily see > 30 emails from bitcoin/bitcoin heh
< fanquake>
Depends how many people you have blocked :p
< MarcoFalke>
Huh, you can block comments from specific people?
< fanquake>
I don't think so, just block the entire user. So you won't see any PRs/comments etc
< MarcoFalke>
hmm, interesting. Luckily we don't have returning spammers.
< fanquake>
One thing I had been considering was going back through some ancient issues/PRs, and "locking" the conversation. That would at least stop the random comments/replies on some.
< fanquake>
wumpus looks like we can drop both libevent patches
< sipa>
jonasschnelli: isn't it easier through the JSON interface?
< MarcoFalke>
jonasschnelli: I think there is already a repo with all the github comments to Bitcoin Core. (Including history for each comment)
< MarcoFalke>
But I can't find the link. I think someone mentioned it last year in Zürich
< jonasschnelli>
Yes. I remember that. Is it up to date? I can't recall the URL though
< jonasschnelli>
A html dump is always a good thing. Also, parsing auto-generated-html is simple and a good regex exercise. :)
< luke-jr>
a markdown dump would be better *hides*
< jonasschnelli>
luke-jr: Yeah. But does Github offer a md dump? I don't think so.
< rabidus_>
http://bitcoinfees.21.co shows that there are plenty more of those 421-450 satoshi/byte transactions. Maybe related to that bug?
< MarcoFalke>
kanzure: ^ Might know the url to the repo. :)
< kanzure>
actually no, i'd like to know that repo as well
< MarcoFalke>
How come no one ever reported that wallet fee bug?
< kanzure>
(i've also never heard of it.)
< luke-jr>
jonasschnelli: there must be some way, since the AJAX gets it for comment editing
< kanzure>
or email inbox receiving all comments
< kanzure>
which is better because you have even deleted comments (which, ahem, has happened to btcdrak and others)
< jtimon>
jonasschnelli: I just realized I never understood your proposal for the spv mode in bitcoin core (or maybe I'm missing something about bloom filter based implementations of SPV)
< jonasschnelli>
jtimon: It's simple.
< jtimon>
you said that this is better for privacy because it will download full blocks, which makes sense to me
< jtimon>
but how do you know which blocks you want to download?
< jonasschnelli>
Just download blocks, scan transactions, if relevant to the wallet, mark them as fSPV=true
< jonasschnelli>
Just all of them!
< jonasschnelli>
But not deeper than your wallet birthday.
< jonasschnelli>
If you start a fresh bitcoin-qt/d, you download only a couple of blocks and can start using Core with SPV
< jtimon>
mhmm, you download all blocks without validating them?
< jonasschnelli>
jtimon: You have to do that anyways
< jonasschnelli>
(if you want a full node)
< jtimon>
ok, I get now, thanks
< jonasschnelli>
Just download the relevant blocks first, don't validate and use them for SPV
< jonasschnelli>
As soon as we have block-bloom-filters, we can find out which of the blocks are relevant to the wallet (2nd step)
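(A very rough sketch of the flow jonasschnelli describes; fSPV is from the chat, but everything else here is invented for illustration and is not the actual hybrid-SPV patch:)

    // Scan an unvalidated, freshly downloaded block for wallet-relevant
    // transactions. The block is kept for later full validation; the wallet
    // just uses it early, SPV-style.
    void ScanBlockForWalletSPV(const CBlock& block, CWallet& wallet)
    {
        for (const CTransactionRef& tx : block.vtx) {
            if (wallet.IsMine(*tx)) {
                CWalletTx wtx(&wallet, tx);
                wtx.fSPV = true; // hypothetical flag: found via unvalidated download
                wallet.AddToWallet(wtx);
            }
        }
    }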
< gmaxwell>
luke-jr: so it needs to be changed to a #if NEW_OPENSSL_API and a #define NEW_OPENSSL_API that has a different version test for libressl and openssl?
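(A minimal sketch of that, assuming the standard version macros; NEW_OPENSSL_API is the name from gmaxwell's message:)

    // LibreSSL reports OPENSSL_VERSION_NUMBER as 0x20000000L, so a plain
    // version test wrongly treats it as having the new OpenSSL 1.1 API.
    #if defined(LIBRESSL_VERSION_NUMBER)
    /* LibreSSL only grew the new API in later releases; keep the old code path
       (or test LIBRESSL_VERSION_NUMBER against the release that added it). */
    #elif OPENSSL_VERSION_NUMBER >= 0x10100000L
    #define NEW_OPENSSL_API 1
    #endif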
< bitcoin-git>
[bitcoin] sipa opened pull request #9472: Disentangle progress estimation from checkpoints and update it (master...update_tx_estimation) https://github.com/bitcoin/bitcoin/pull/9472
< bitcoin-git>
[bitcoin] sipa opened pull request #9474: Mark the minconf parameter to move as ignored (master...stale_minconf_parameter) https://github.com/bitcoin/bitcoin/pull/9474
< timothy>
luke-jr: you have to use AC_CHECK_FUNCS instead
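(The consuming side of timothy's suggestion, sketched with an assumed probe: AC_CHECK_FUNCS([EVP_MD_CTX_new]) at configure time would define the HAVE_ macro used below, so the code branches on the feature rather than on library version numbers. EVP_MD_CTX_new is just one example of an API that differs between old and new OpenSSL:)

    #include <openssl/evp.h>

    #ifdef HAVE_EVP_MD_CTX_NEW
        EVP_MD_CTX* ctx = EVP_MD_CTX_new();  // OpenSSL >= 1.1.0 / newer LibreSSL
    #else
        EVP_MD_CTX ctx_obj;                  // legacy (pre-1.1) API
        EVP_MD_CTX_init(&ctx_obj);
        EVP_MD_CTX* ctx = &ctx_obj;
    #endif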
< jtimon>
what is it missing? as said in a comment, segwit.py and p2p-compactblocks.py still need to run with self.chain = "regtest" instead of self.chain = "custom". But perhaps that's ok?
< jtimon>
maybe a new rpc test changing some chainparams values?
< MarcoFalke>
cfields: Any guess why #9416 causes an undefined reference to `qInitResources_bitcoin()'?
< Chris_Stewart_5>
Just to be clear, there are public keys in the blockchain that do not pass the `IsCompressedOrUncompressedPubKey` check right? This is enforced as policy right?
< sipa>
there may be
< sipa>
i'm not sure
< bitcoin-git>
[bitcoin] instagibbs closed pull request #8992: Enable pubkey lookup for p2sh-p2wpkh in validateaddress (master...validatep2pkh) https://github.com/bitcoin/bitcoin/pull/8992
< sipa>
there certainly are in testnet
< Chris_Stewart_5>
sipa: Is there a BIP that proposed that check?
< sipa>
not afaik
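(For reference, a paraphrase of the check being discussed, from src/script/interpreter.cpp; it is enforced under SCRIPT_VERIFY_STRICTENC, i.e. as policy, not consensus, which is why such keys can exist in the chain:)

    typedef std::vector<unsigned char> valtype;

    bool static IsCompressedOrUncompressedPubKey(const valtype& vchPubKey)
    {
        if (vchPubKey.size() < 33)
            return false; // too short to be any valid encoding
        if (vchPubKey[0] == 0x04) {
            if (vchPubKey.size() != 65)
                return false; // uncompressed keys are exactly 65 bytes
        } else if (vchPubKey[0] == 0x02 || vchPubKey[0] == 0x03) {
            if (vchPubKey.size() != 33)
                return false; // compressed keys are exactly 33 bytes
        } else {
            return false; // unrecognized prefix byte
        }
        return true;
    }

(The real function logs via error() instead of returning bare false.)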
< cfields>
MarcoFalke: hmm, sounds like something's legitimately not getting included
< MarcoFalke>
paveljanik had a hint. Let me try to solve the wildcard
< cfields>
MarcoFalke: ah, missed that
< cfields>
MarcoFalke: hmm, it's been a while, but i'm assuming that I somehow missed the other qrc in fc4ad0c7fcf2e5841756c9d1003f95c879ee5cd2
< Chris_Stewart_5>
sipa: We don't have a more general public key check do we? For instance, a valid public key encoding that encompasses all encodings that are valid in openssl?
< cfields>
MarcoFalke: unsure. The above is just a guess. I need to catch a flight, I'll catch up on irc logs later
< MarcoFalke>
sure, will try your patch
< gmaxwell>
sipa: I'm doubtful that SIGCHECK_VERIFICATION_FACTOR is correct anymore except on really slow computers, fwiw.
< cfields>
MarcoFalke: heh, that came off as very rude. Wasn't intentional :)
< sipa>
gmaxwell: willing to change it based on HARD DATA
< MarcoFalke>
cfields: I didn't read any rudeness. :P
< gmaxwell>
'HARD DATA, Soft forks.'
< sipa>
gmaxwell: though, it's probably better to overshoot than undershoot
< sipa>
(seeing a progress bar speed up over time is nice)
< Chris_Stewart_5>
/r/oddlysatisfying
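(Context for the constant: a simplified sketch of the weighting, not the real estimator in checkpoints.cpp; the factor was 5.0 at the time, meaning transactions whose signatures are actually checked count five times as much toward total work:)

    // Progress as a fraction of estimated verification *cost*, where
    // signature-checked transactions are weighted by the factor.
    double WeightedProgress(double txDone, double txDoneChecked,
                            double txTotal, double txTotalChecked)
    {
        const double SIGCHECK_VERIFICATION_FACTOR = 5.0;
        double done  = (txDone  - txDoneChecked)  + SIGCHECK_VERIFICATION_FACTOR * txDoneChecked;
        double total = (txTotal - txTotalChecked) + SIGCHECK_VERIFICATION_FACTOR * txTotalChecked;
        return total > 0 ? done / total : 0.0;
    }

(gmaxwell's point being that with the signature cache and faster validation, 5x likely overstates the cost on modern hardware.)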
< gmaxwell>
jeremyrubin: did you ever explore having a simple bypass of the signature cache when validating blocks far from the best header, to reduce lock contention during IBD?
< sipa>
gmaxwell: oops
< sipa>
my code is wrong
< sipa>
it assumes no signature checks up to the timestamp given
< Chris_Stewart_5>
sipa: I did not realize libsecp256k1 had java bindings in the repo, nice!
< gmaxwell>
they might even work!
< sipa>
gmaxwell: they have unit tests!
< Chris_Stewart_5>
assert(true == true, "true")
< sipa>
gmaxwell: jeremyrubin: did you ever explore having a simple bypass of the signature cache -> you can't do that at the sigcache level, as there may be CHECKSIG NOTs in the chain
< gmaxwell>
sipa: hm? just treat it as there being nothing in the cache for the whole block.
< sipa>
Ah
< sipa>
nvm
< gmaxwell>
which there won't be, in ibd.
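(A sketch of the bypass being discussed; not merged code, and the depth threshold and helper name are invented:)

    // During IBD a block far below the best known header cannot have its
    // signatures in the cache (nothing has inserted them), so skip the
    // lock-taking cache lookup entirely and just verify.
    static const int SIGCACHE_BYPASS_DEPTH = 1000; // assumed cutoff

    bool ShouldUseSigCache(const CBlockIndex* pindexBlock)
    {
        AssertLockHeld(cs_main);
        if (pindexBestHeader == NULL)
            return true;
        return pindexBestHeader->nHeight - pindexBlock->nHeight < SIGCACHE_BYPASS_DEPTH;
    }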
< jtimon>
btw gmaxwell did you stop working on your patch to get rid of Checkpoints::GetLastCheckpoint() ? it looked quite advanced a while ago
< gmaxwell>
We're really review starved right now.
< jtimon>
I see
< jtimon>
but you didn't open a PR, did you?
< jtimon>
no, just looked
< jtimon>
anyway, leaving now...
< MarcoFalke>
cfields: Your patch works! Mind creating a pull for that some time after your flight?
< morcos>
gmaxwell: re: 9465, it concerns me a little that DummySignatureCreator creates 72-byte signatures and not 73-byte signatures. I realize most of the time it's negligible, but it seems like it could just lead to bad edge cases. It seems like we should always overpay instead.
< sipa>
morcos: since low-S, the maximum is 72, i think
< sipa>
and on average 71.5
< gmaxwell>
darn sipa with the answer while I was off trying to confirm it. :)
< morcos>
ah, missed that.. yeah i was caring about the maximum
< sipa>
gmaxwell: of course, i also tried to confirm first :)
< morcos>
out of curiosity, how were you confirming that
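(One way to confirm it, as back-of-the-envelope arithmetic rather than anything from the PR:)

    // Worst-case size of a DER signature plus the sighash byte:
    //   1 (0x30 tag) + 1 (total length)
    // + 1 (0x02 tag) + 1 (length) + 33 (r: 32 bytes, +1 leading 0x00 if high bit set)
    // + 1 (0x02 tag) + 1 (length) + 32 (s: low-S caps it at order/2, so <= 32 bytes)
    // + 1 (sighash byte)
    // = 72 bytes with low-S enforced; without low-S, s can also need a leading
    //   zero byte, giving the old 73-byte worst case.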