< sipa>
you could start by not downloading them :)
< sipa>
s/downloading/getdata'ing/
< gmaxwell>
yea, not getdata'ing is an obvious first step and needs no protocol handling.
< gmaxwell>
but INVs actually use a lot of bandwidth... on the order of 5% of the size of all transactions per peer.
< CodeShark>
an extra protocol message rather than negotiating it during the handshake?
< CodeShark>
presumably miners would not want to connect to peers that won't relay unconfirmed transactions
< gmaxwell>
CodeShark: shouldn't be a handshake thing, since you may wish to adaptive cork and uncork.
< gmaxwell>
er adaptively*
< CodeShark>
if a node does not intend to accept nor relay unconfirmed transactions, it can't really hurt for others to know this
< CodeShark>
one would hope that of 8 randomly chosen nodes at least one is relaying transactions
< CodeShark>
furthermore, given a more efficient block propagation protocol (i.e. one that does not require resending txs to nodes that already have them in their mempool) the main benefit would be only for transactions that take a very long time to confirm or never get confirmed
< phantomcircuit>
gmaxwell, filterload an empty filter
< phantomcircuit>
tada
< CodeShark>
heh
< wumpus>
but then you won't receive any transactions in blocks either?
< CodeShark>
right
< phantomcircuit>
wumpus, getdata MSG_BLOCK
< phantomcircuit>
isn't filtered at all
< CodeShark>
true
< phantomcircuit>
(i run with empty filters on mining full nodes)
< wumpus>
nice
< gmaxwell>
CodeShark: you wouldn't be listening (or at least not advertising yourself as node-network) while also doing this.
< gmaxwell>
CodeShark: that's assuming spherical cow efficiency, and again: it leaves the user saddled with the inefficient gossiping of transactions. If you don't care about unconfirmed transactions all that is a waste of time.
< gmaxwell>
beyond the bandwidth efficiency gains, not taking loose transactions removes an attack vector.
< wumpus>
empty filterload should indeed work, the filter is checked before relaying transactions to a node
< gmaxwell>
wumpus: won't that screw up relaying blocks?
< phantomcircuit>
gmaxwell, nope
< wumpus>
that's what I thought too, but phantomcircuit corrected me
< gmaxwell>
phantomcircuit: care to clean up a patch that adds a blocksonly=1 that turns off node network and does the empty filterloads?
< wumpus>
only getdata w/ MSG_FILTERED_BLOCK will have the filter applied
< gmaxwell>
yup. duh.
< gmaxwell>
phantomcircuit: should also check that the neighbor supports bloomfilters and not do it otherwise. And not getdata transactions even if INVed except from whitebind peers, I guess.
< wumpus>
MSG_FILTERED_BLOCK with an empty filter could be handled very quickly, but it doesn't seem that we special case it :-)
< gmaxwell>
hm. I thought I added some special case handling of empty filters before.
< wumpus>
yes you probably want to disconnect from peers that don't support bloom filters
< wumpus>
maybe I missed it
< wumpus>
looks like there is no way to query isEmpty from outside CBloomFilter
< gmaxwell>
inside it there is an isEmpty test that shortcuts contains.
< gmaxwell>
(it was added as the 'cover change' for that divide by zero crash... but it's an actual optimization too)
< wumpus>
what I meant was more something that bypasses *everything* and just returns empty if you query with an empty filter
< wumpus>
now it still happily reads the blocks from disk, just to discard every transaction
< gmaxwell>
Would be reasonable to bubble up the full/empty state.
< phantomcircuit>
wumpus, it is special cased
< gmaxwell>
phantomcircuit: in the wrong place.
< wumpus>
phantomcircuit: oh? where?
< phantomcircuit>
it's special cased in the bloom filtering code
< gmaxwell>
Existing special casing inside the bloom object itself was really just done to make that divide by zero bug unreachable. But as wumpus notes it means you'll read the block and test every transaction against the filter, the shortcutting is a bit late.
< phantomcircuit>
yes agreed
< wumpus>
it's interesting how with three people in a low-traffic IRC channel we're still managing to talk past each other half the time :-)
< gmaxwell>
Following this thinking, might be interesting to support fee-rate filtering for inv.
< gmaxwell>
Bleh, three days of AFLing univalue on 48 cores and it's still finding new paths.
< warren>
Running Bitcoin Core v0.11.2rc1 with blank data dir on Windows 10 ... it connected to 9 peers including a local peer. Stuck at 0 blocks for the past 10 minutes. what should I try?
<@sipa>
warren: does it have any headers?
<@sipa>
warren: what does getchaintips RPC report?
< warren>
ah, it's working now
< gmaxwell>
wumpus: I guess you did more security review on miniupnpc? ... I went to review that update and I wanted to jump off a snprintf bridge. :)
< wumpus>
gmaxwell: heh it's things like that that make me so fed up with C. in this case the snprintf return specification is different per platform, so it could have been OK on windows, but on Linux it returns the length that the output would be without the buffer limit. So if the application sends that many bytes to the network, and you control an arbitrary string input, you have a configurable memory contents leak :)
< wumpus>
gmaxwell: which happens to be very useful if you need to know the stack cookie and function addresses
< Luke-Jr>
wait what? snprintf doesn't do that on Windows? :/
< midnightmagic>
miniupnpd (the other side of the coin) I'm not sure is any better. in the event that a client requests an impermissible port, rather than say something useful it instead says that doing so would conflict with an already existing mapping (which doesn't exist)
<@sipa>
Concerning the return value of snprintf(), SUSv2 and C99 contradict each other: when snprintf() is called with size=0 then SUSv2 stipulates an unspecified return value less than
<@sipa>
1, while C99 allows str to be NULL in this case, and gives the return value (as always) as the number of characters that would have been written in case the output string has
<@sipa>
been large enough.
<@sipa>
snprintf doesn't exist in C89
< Luke-Jr>
If len > count, count characters are stored in buffer, no null-terminator is appended, and a negative value is returned. <-- wtf Microsoft
<@sipa>
That obeys the SUSv2 standard.
< wumpus>
Luke-Jr: not just the return value, in multiple ways it doesn't behave the same on all platforms. we had issues due to that as well, one of the reasons why bitcoin core switched to a typesafe library for formatting that doesn't rely on libc
< wumpus>
Luke-Jr: anyhow as C programmer you're supposed to know all these trivia, and if not you'll create awful security holes, there is zero leeway for human error
< Luke-Jr>
If buffer is a null pointer and count is zero, len is returned as the count of characters required to format the output, not including the terminating null. To make a successful call with the same argument and locale parameters, allocate a buffer holding at least len + 1 characters.
< Luke-Jr>
hm
< wumpus>
yep.
< sipa>
Luke-Jr: the 'best practice' way to use snprintf is with a wrapper that checks the return value, and if larger than the buffer size, reallocate to exactly that size, and if not, double it... and then retry
< Luke-Jr>
"Also, some buggy older systems ignore the length and overrun the buffer (e.g., 64-bit Solaris 7)." -.-
< wumpus>
another pet peeve with *printf is how the formatting characters are different per platform, at least for not-so-obscure types like 64 bit integers, size_t's, etc...
< sipa>
or how putting type information in a string is just crazy
< wumpus>
and then there is a header which should provide handy macros for that.. **but it doesn't exist on every platform**
< wumpus>
GRR...
< Luke-Jr>
wumpus: PRId64
< wumpus>
sipa: oh yes
< wumpus>
Luke-Jr: no need to tell me :p
< Luke-Jr>
what platform doesn't have it? :p
< wumpus>
Luke-Jr: I find those issues because my head is filled with nonsense like that
< sipa>
hey let's bypass the minimal type safety C provides, because converting any data to a string should be possible with a single function call
< wumpus>
sipa: yes and initially to an arbitrary-length string, buffer limiting was introduced later, hey because it's an afterthought right :-)
< sipa>
i think when these functions were designed, the programmer and the user were considered to be one, and if you're writing outside of your buffer, well then you're just stupid and making your computer do weird things, right?
< wumpus>
and then the format string vulnerabilities, in which someone accidentally passes a user-provided string to the first argument to *printf. Those functions birthed whole classes of bugs.
< Luke-Jr>
heh
< CodeShark>
when these functions were designed they were just a convenience to avoid having to write directly to io devices in assembly language :p
< wumpus>
sipa: true, many of it was before the networked world. What does it matter if you crash your MSX home computer :)
< Luke-Jr>
oh well, is it really a security issue if I just say I require C99? :P
< Luke-Jr>
then it's Windows at fault, right? <.<
< wumpus>
Luke-Jr: then you exclude windows, which, now in 2015, still doesn't implement C99. I used to be angry about that, but now understand they rightly have moved on from C :)
< wumpus>
s/windows/MS compiler/
< CodeShark>
type safety was a luxury back in the early 70's :p
< Luke-Jr>
I already excluded MS compiler <.<
< Luke-Jr>
I'm not sure MingW has a workaround for this snprintf thing though
< wumpus>
but be careful: mingw behaves chameleon-like, in some ways it's like posix, in some ways it's like MSVC
< CodeShark>
or even the late 70's for that matter :p
< wumpus>
Luke-Jr: it behaves in the windows way
< wumpus>
CodeShark: but all those ideas did originate from them, and other languages such as PASCAL were ahead of C in some ways
< wumpus>
s/them/the 70's/
< CodeShark>
true - PASCAL was far more strongly typed...if only it hadn't been so verbose :)
< wumpus>
e.g. ADA, another dinosaur, has very strict types and array bounds checking
< CodeShark>
ADA wasn't very practical for systems level programming, though
< CodeShark>
but it did introduce a number of very important paradigms
< Luke-Jr>
no, it looks like MingW is C99-compatible, at least with -std=c99
< sipa>
it links against msvc's runtime, no?
< wumpus>
right, I just mean that it is not the case that no one back then thought about security. Maybe it was the step from e.g. mainframes to home computers that lost a lot of knowledge. Suddenly doing things "fast" was more important than being robust :)
< Luke-Jr>
sipa: in some cases, it will override stuff
< wumpus>
Luke-Jr: please don't assume that affects strprintf
< Luke-Jr>
you mean snprintf?
< wumpus>
ye
< Luke-Jr>
wumpus: I checked the headers
< Luke-Jr>
it has an override *and* a comment explaining this
< wumpus>
ok. Well you checked. I just said 'don't assume' :)
< Luke-Jr>
☺
< wumpus>
maybe add a test that makes sure
< Luke-Jr>
well, part of why I checked is, my code should have crashed badly if it got Windows behaviour :P
< GitHub126>
[bitcoin] jonasschnelli opened pull request #6984: don't enforce maxuploadtarget's disconnect for whitelisted peers (master...2015/11/maxupload_whitebind) https://github.com/bitcoin/bitcoin/pull/6984
< warren>
I have a Windows 10 machine 80% sync'ed now, shut it down for now.
< warren>
wumpus: how did you simulate system crashes?
< wumpus>
it offers a menu of different crashes, e.g. from the kernel, from userspace, from I/O, and as a bonus you can even change the color of the BSOD :-)
< jonasschnelli>
fanquake: then i can open a PR with both commits
< jonasschnelli>
fanquake: or if too complicated, just write comments. Whatever is best for you.
< sipa>
warren: set a large dbcache, and start closing bitcoin-qt right before the crash
< sipa>
warren: that will make sure leveldb is writing stuff out during the crash
< fanquake>
jonasschnelli ok
< fanquake>
wumpus pushed a commit to bump zeromq.
< wumpus>
thanks
< wumpus>
fanquake: thanks for the mail
< wumpus>
fanquake: re: #1040 I still think we should move the manpage from contrib/debian to a place where it gets installed on 'make install', it's not just useful for debian
< wumpus>
.TH BITCOIND "1" "January 2011" "bitcoind 3.19" ... hah, and the build tooling should fill those in automatically
< * sipa>
remembers the 100 Mbyte blockchain
< wumpus>
hehe, oh no, 100 MB, that must be the end, we'll never be able to handle a chain that big!
< sipa>
sorry, the 95.4 MiB blockchain
< fanquake>
*used to be able to cpu mine .1 btc a day*
< fanquake>
wumpus i can open a pull to amend some of the man page issue if you'd like
< sipa>
*used to be able to GPU mine 50 BTC a day*
< fanquake>
I hacked together a 5970 + 5830 to mine for a while. Missed the 50btc a day ship however.
< wumpus>
fanquake: It'd be nice to close that long-standing (but minor) issue
< * wumpus>
didn't even know we had a 'getaddednodeinfo' RPC call
< fanquake>
heh, it's always interesting going over all the "older" issues
< wumpus>
especially nice if issues are solved not because someone actively went on solving the issue, but as by-product of another change
< wumpus>
(that's the positive counterpart of 'changes introduce unexpected issues', they also close unexpected issues sometimes!)
< fanquake>
jonasschnelli How did you go with the trusty gitian build?
< wumpus>
for https://github.com/bitcoin/bitcoin/issues//2368 "Consider reducing 'getblocks' limit below 500" I have no idea whether it's still relevant, but the limit is still hardcoded to 500
< wumpus>
apparently it never killed us, but the issue may be based on a misunderstanding? that getblocks returns block data, instead of INVs?
< wumpus>
fanquake: what vmbuilder are you using?
< sipa>
wumpus: in old code we sent getdatas instantly for all invs we received
< wumpus>
yes that seems to be the place to solve the issue, not in getblocks
< wumpus>
ok, closing then
< fanquake>
wumpus debian8 base, latest gitianbuilder, is that what you mean?
< wumpus>
fanquake: but on debian you have to build vmbuilder from source, right?
< fanquake>
ah sorry, I'm using vm-builder_0.12.4+bzr489
< wumpus>
(e.g. the one included wasn't new enough to support ubuntu, this was described in the doc/gitian-building.md)
< wumpus>
mine (on Ubuntu 14.04) is also 0.12.4 though. But I suppose using a newer one may be worth a try, as it's failing while building the image.
< fanquake>
is yours 489 or 494 (latest)?
< wumpus>
building VM images is, unfortunately, still somewhat of a black art, needing root and support from the OS and loopback devices etc. The linux kernel library project https://github.com/lkl seems promising for the future as it'd simplify doing all the filesystems manipulation etc from userspace.
< wumpus>
Version: 0.12.4+bzr489-0ubuntu2
< wumpus>
" The currently supported distros are: * Ubuntu Dapper, Gutsy, Hardy, Intrepid, Jaunty, and Karmic." oh no... either they stopped updating the software, or they stopped updating the package description
< GitHub21>
[bitcoin] fanquake opened pull request #6985: [gitian] Use latest vm-builder ( 0.12.4+bzr494 ) on Debian (master...patch-3) https://github.com/bitcoin/bitcoin/pull/6985
< wumpus>
"apt-get python-vm-builder" gets me 96 perl packages, boggles
< wumpus>
oh it gets all of libvirt, even the daemon... whyy
< wumpus>
anyhow what I was trying to find out: Ubuntu 15.10 also still has 0.12.4+bzr489-0ubuntu2, with the same package description. I vaguely remember the author saying they will do a new package for 16.04 tho, the code is almost-unmaintained and what they recommend is using the ready-made daily ubuntu cloud images
< wumpus>
(but that'd make deterministic verification really hard for us)
< wumpus>
anyway, it should be able to make 14.04/trusty images, I've used it for that (on Ubuntu) multiple times
< fanquake>
I cherry-picked your doc commit and added
< fanquake>
Another depends to consider bumping is qt. They've released a 5.5.1 bugfix, just looking through the notes to see if any of our issues might have been solved.
< GitHub144>
bitcoin/master 7791395 Michael: [gitian] Use vm-builder_0.12.4+bzr494 on Debian
< GitHub144>
bitcoin/master 38a4f26 Wladimir J. van der Laan: Merge pull request #6985...
< GitHub105>
[bitcoin] laanwj closed pull request #6985: [gitian] Use latest vm-builder ( 0.12.4+bzr494 ) on Debian (master...patch-3) https://github.com/bitcoin/bitcoin/pull/6985
< wumpus>
(btw apart from being a bad fit for the gitian process, the pre-built cloud images are awfully convenient - ready-built qcow2 images that can be configured in a few seconds - only remaining timesink is apt-get install of toolchains)
< jtimon>
phantomcircuit: a library with all the pure functions in bitcoin? what do you mean by pure functions?
< phantomcircuit>
jtimon, no side effects, ie inputs -> outputs no state modified
< GitHub104>
[bitcoin] jtimon opened pull request #6986: Globals: Don't call Params() from miner.cpp (master...global-chainparams-miner) https://github.com/bitcoin/bitcoin/pull/6986
< GitHub19>
[bitcoin] jonasschnelli opened pull request #6987: [doc] add documentation about reducing traffic (master...2015/11/doc_traffic) https://github.com/bitcoin/bitcoin/pull/6987
< jtimon>
phantomcircuit: oh, I see, cfields and I have been calling that "library ready" code. libbitcoinconsensus is obviously library ready and at a higher level there's some more that will be required for a complete libconsensus but not that many
< jtimon>
phantomcircuit: I mean, wait, no "library ready" is even more strict since it doesn't even use the state as input
< jtimon>
for example, checkTransaction, checkBlockHeader and checkBlock, those are almost ready apart from uses of error(), Params() and other things that are trivial to solve (if, eventually, one day, we want to do that)
< jtimon>
phantomcircuit: anyway I don't know of any library with functions from bitcoin core besides libbitcoinconsensus with its VerifyScript()
< wumpus>
as long as all inputs and outputs are passed to/from the function explicitly, it qualifies as 'pure function'. This also means no i/o!
< wumpus>
(although ofc. people in functional languages like to cheat by passing 'the state of the world' as input and as output of a function, so that i/o can exist in their 'pure' world :-) )
< jtimon>
wumpus: well, ContextualCheckBlock, for example, doesn't modify the state, but it takes stateful structures explicitly as parameter CBlockIndex* (containing a pointer to a chain index) and a separated VerifyBlock would currently also take at least a CCoinsViewCache like ConnectBlock does. It may be a pure function, but it's not ready for libconsensus
< jtimon>
phantomcircuit: anyway, I'm curious about the question, what motivated you to ask me that?
< wumpus>
probably refering to the RPC calls that are (either truly or effectively) pure functions
< wumpus>
parts of the raw tx API, validateaddress (if no wallet), etc
< phantomcircuit>
jtimon, looking for easy things to fuzz :)
< jtimon>
oh, I see
< jtimon>
phantomcircuit: so the name of the functions is enough for you, right?
< jtimon>
as said, just off the top of my head checkTransaction, checkBlockHeader and checkBlock come to mind (although they are quite simple anyway), probably wumpus has more names "at hand"
< wumpus>
off the top of my head: everything in utilstrencodings.h, base58 parsing/generation, serialization/deserialization primitives, uint256 and other encapsulated classes
< jtimon>
in fact we could easily expose those 3 in libbitcoinconsensus right now (assuming we want to)
< wumpus>
everything in src/crypto
< jtimon>
mhmm, I think base58 is still coupled with the global Params() for Base58Prefix ...
< wumpus>
CScript, CScriptCompressor etc
< jtimon>
everything in current libbitcoinconsensus includes everything in src/crypto
< wumpus>
jtimon: read-only though, that doesn't matter for fuzzing
< jtimon>
wumpus: gotcha
< wumpus>
essentially you want functions that are idempotent, not necessarily pure
< * wumpus>
thinks it would be nice if someone collected release notes "Notable changes" into organized documentation, people go through the trouble of documenting things for release notes, but after a release it's essentially forgotten
< wumpus>
should probably add the part about torcontrol to doc/tor.md as well
< jtimon>
wumpus: if #6625 was merged right now it wouldn't compile because transaction_tests.cpp now requires "#include "main.h" // For minRelayTxFee and DEFAULT_MIN_RELAY_TX_FEE"
< jtimon>
wumpus: can I push the rebased and fixed version or preemptively fix it in the old version without rebasing?
< jtimon>
s/can/should
< jtimon>
btw gmaxwell a re-"gmaxwell was here" on #6625 (a subset of #6672 without libconsensus related movements) would be appreciated
< jtimon>
after this little fix
< harding>
wumpus: you mean just extracting the "Notable changes" sections from all past releases into a single document hosted somewhere?
< jtimon>
wumpus: gmaxwell: never mind there's also new uses of MAX_BLOCK_SIZE in net.cpp so I have to rebase and fix more things
< jtimon>
wumpus: oh, no, that was what I already fixed but didn't squash, so my question remains, rebase and fix or preemptively fix ?
< wumpus>
harding: not literally - I just meant, it's easy to lose track of documentation that was written just for the release notes
< wumpus>
harding: but e.g. jonasschnelli is doing the right thing in #6987 by also creating a file in doc
< jtimon>
eventually (assuming minRelayTxFee and DEFAULT_MIN_RELAY_TX_FEE ever move to the policy dir) we will be able to remove that main.h include, but not just yet
< wumpus>
ok... well if you need the include then keep it
< wumpus>
I mean this in a test, anything goes :)
< jtimon>
yeah, the question is whether to rebase or not, the problem is only visible if I rebase
< jtimon>
I mean I have already rebased, for me it's simpler to just push the rebased version, but it's worse for reviewers. I can easily "backport" the fix but it's kind of weird to have that "// For minRelayTxFee and DEFAULT_MIN_RELAY_TX_FEE" comment without rebasing...
< jtimon>
wumpus: I had to rebase anyway because both #6382 and #5970 depend on #6625 (in fact, that's how I discovered the problem)
< wumpus>
ok, yes rebasing makes sense then
< jtimon>
wumpus: ok, thanks
< GitHub18>
[bitcoin] MarcoFalke opened pull request #6988: [doc] Fix FIXME for libblkmaker in release-notes.md (master...MarcoFalke-2015-docFixme) https://github.com/bitcoin/bitcoin/pull/6988
< morcos>
phantomcircuit: ping, want to understand your privacy concerns with respect to wallet txs in mempool
< morcos>
is the idea that you don't want people to be able to go around polling nodes trying to get them to respond to a getdata for a tx that should have been evicted
< morcos>
and if they responded, they have a good probability of being the tx author?
< morcos>
i don't think i was suggesting changing that, but I do think it is important for users to know that their tx may no longer be in mempools. it'll be an indication to them that they may want to rebroadcast it or RBF it.
< morcos>
so perhaps the mempool needs to let the wallet know it evicted or something similar and the wallet can inform the user
< phantomcircuit>
morcos, wallet should poll the mempool if anything
< phantomcircuit>
the potential to leak which transactions are in the local wallet is pretty high even if we're careful
< sipa>
it already does, and reports 'conflicted' for those transactions, afaik
< morcos>
sipa: yes, but it would be nice if it could identify the reason the tx is no longer in the mempool
< morcos>
how to know, hey it got evicted
< sipa>
right
< phantomcircuit>
sipa, iirc the mempool notifies the wallet of all changes, right?
< morcos>
during tomorrow's meeting we should try to game plan an order to review some of these open pulls we'd like to get in for 0.12. I think if we put the effort in we can have a much better and more complete/consistent release. There are quite a few things that are halfway between fixes and new features that in my opinion really should be in the release, so we don't get complaints like this about conflicted txs
< jgarzik>
phantomcircuit, sipa: how/where is the mempool/wallet connection made, now? signals?
< jgarzik>
That was one area of open question in the mempool-janitor stuff from year+ ago.
< morcos>
there might be a notification via signal on ATMP, but i dont' think there is any way to inform of eviction is there?
< sipa>
my opinion back then was that mempool transactions from the wallet should be protected from eviction; i don't think that's a good idea anymore due to the privacy concerns that raises
< morcos>
yeah SyncWithWallets signals the wallet on txs entering the mempool or removing it for being mined or removing it for being conflicted
< sipa>
that was more applicable to random eviction though, as now the mempool aims to be some sort of subset of most likely to be mined transactions
< morcos>
perhaps we can call it within ATMP for evicted txs?
< sipa>
and if your wallet tx is not in that, it probably means those transactions are effectively at risk of not confirming
< phantomcircuit>
sipa, even with random eviction it's a bad idea...
< phantomcircuit>
especially when combined with "mempool"
< sipa>
phantomcircuit: the privacy concern is always there, but with random eviction, being evicted does not have any meaning for the wallet user
< sipa>
now it does
< morcos>
anyway, looks a bit messy to add it, but this is the kind of thing that I think is worth trying to get right before a major release
< sipa>
perhaps the criterion for 'conflicted' should actually be that a conflicting transaction is found; if not we can just call it "unclear" or something
< jgarzik>
RE meeting - I would like two buckets for 0.12 - "must have" and "really want to have, if it's ready in time, but not a release blocker"
< jgarzik>
too easy to continue slipping a release for one more awesome change
< jgarzik>
time-based releases are different from feature releases - they are more lumpy
< sipa>
the must haves should probably mostly be "fix known regressions"
< sipa>
the chain limits and wallet/mempool interactions probably belong there
< jgarzik>
+1
< morcos>
Yes, thats why we should use the next 2 weeks to get in all the really want to haves! and then we can worry about the must haves after.. :)
< jgarzik>
;p
< morcos>
I will be sad if we don't merge a good subset of the performance improvements. But even just for those, its useful to decide on an order to knock them out, since they interact to some degree.
< sipa>
morcos: i hope a large part of them (if not all) go in
< wumpus>
morcos: I think it's important to make a distinction between fixes that need to get in and performance improvements that would be nice, but may also be risky
< morcos>
wumpus: yep agreed, i'm just hoping that if we work together we'll be able to get a lot in before the deadline sneaks up on us (says the guy that needs to start reviewing instead of creating more things to be reviewed)
< wumpus>
agreed
< wumpus>
just thinking that it's important to prioritize, there is so much open
< sipa>
morcos: am i right to say that #5967 (if it works as intended) would fix the problem of ModifyNewCoins being unusable for coinbases?
< morcos>
sipa: i was wondering about that and it was on my list to go back and check it
< sipa>
it explicitly creates the case where a child cache has a non-fresh entry for something the parent cache does not have, in its tests
< morcos>
entirely possible, but i want to carefully think about combining 5967 with modify new coins, probably worth doing even if its not going to be merged as just another way of making sure the thinking is correct
< morcos>
yeah that sounds right
< sipa>
morcos: writing a new coin to a child cache while the parent does not have it but the grandparent does is identical to the cache where the parent had it when the coin was written, and then had the parent flushed
< sipa>
so i believe that 5967 exactly covers the unnecessary assumption, and tests it
< morcos>
sipa: before i try to parse your statement, thinking about 5967 made me realize removing the assert without 5967 would have been broken with the ModifyNewCoins
< morcos>
sipa: ok yes i agree with your statement and i think 5967 would allow ModifyNewCoins to be used in both cases and just not flag coinbases as FRESH
< sipa>
morcos: you would have caught it by adding ModifyNewCoin calls in the unit tests
< morcos>
sipa: ha ha. exactly!
< morcos>
ok so do you want to do 5967 as well then?
< morcos>
I will perhaps put them all in one pull so it can be tested all together (once i write the unit tests)
< sipa>
morcos: i'd say first do ModifyNewCoins as-is, adding unit tests for it
< sipa>
morcos: then just removing the assert + ModifyNewCoins for coinbases without fresh flag should fa
< sipa>
fail
< morcos>
sure, ok that makes sense. separate PR can add 5967 and change ModifyNewCoins, and that can be added or not later
< morcos>
gmaxwell: how much do you care about this effectiveSize = max(size/MAX_BLOCK_SIZE,sigOps/MAX_BLOCK_SIGOPS) thing?
< morcos>
its not that difficult to implement, but it touches a lot of things, assuming we make descendant packages track it which i think is important. its too risky to make the default mining prioritization differ a lot from the eviction code.
< morcos>
but then does it also affect package limits, are they 101kbs of actual size or 101kbs of effective size? etc.. and do you sum that by summing maxes, etc. etc.. all doable, but i think i need to work on cleaning up these other pulls first
< GitHub2>
bitcoin/master 7085728 Wladimir J. van der Laan: doc: there is no libboost-base-dev, add missing sudo...
< GitHub2>
bitcoin/master cbf9609 Wladimir J. van der Laan: Merge pull request #6969...
< GitHub167>
[bitcoin] laanwj closed pull request #6969: doc: there is no libboost-base-dev, add missing sudo in release notes (master...2015_11_docfix) https://github.com/bitcoin/bitcoin/pull/6969
< morcos>
sipa: any reason why the BatchWrite for CCoinsViewTest writes all entries and not just the dirty ones? I think it should only write DIRTY ones.. I'm not sure what the right way to keep it in sync with what CCoinsViewDB does, but that makes the most sense to me.
< sipa>
it only writes dirty ones afaik?
< sipa>
ah, the test
< sipa>
i missed that
< morcos>
ok, i'm changing it to follow same logic as CCoinsViewDB...
< gmaxwell>
morcos: I don't care about the specific thing done, but something needs to be done about the sigops flooding attack. The obvious next alternative is to threshold it, but I'm doubtful a threshold can be set that won't randomly bite people with lots of p2pkh outputs.
< GitHub3>
[bitcoin] petertodd opened pull request #6991: Minor: Clarify 'fee' field in fundrawtransaction help text (master...clarify-fundrawtransaction-help) https://github.com/bitcoin/bitcoin/pull/6991
< jgarzik>
btcdrak, release manager != project lead... bitcoin core is intentionally decentralized ever since Satoshi turned things over to Gavin.
< jgarzik>
I've been release manager before, for example.
< GitHub131>
[bitcoin] sdaftuar opened pull request #6992: [WIP] [Mempool] Add space for priority transactions (master...add-priority-to-mempool) https://github.com/bitcoin/bitcoin/pull/6992
< michagogo>
12:50:56 <wumpus> (but that'd make deterministic verification really hard for us) <-- How come? AFAIK it doesn't matter if you use a daily image or a release image or debootstrap or whatever, as soon as you upgrade it should all be the same...
< michagogo>
I mean, it's not like vmbuilder is deterministic, IIRC
< gmaxwell>
michagogo: you're suggesting using daily images for buids? if so thats a terrible idea, as it would make it much easier to compromise everyone's builder images.
< michagogo>
gmaxwell: I wasn't, no
< michagogo>
12:50:37 <wumpus> anyhow what I was trying to find out: Ubuntu 15.10 also still has 0.12.4+bzr489-0ubuntu2, with the same package description. I vaguely remember the author saying they will do a new package for 16.04 tho, the code is almost-unmaintained and what they recommend is using the ready-made daily ubuntu cloud images
< michagogo>
gmaxwell: also, I don't think the idea was "use today's image each time you build", I think it was "use today's image when you want to create the base VM"
< michagogo>
So I wasn't the one bringing it up, but I don't think I really see any reason not to, as I said above
< michagogo>
(Of course, I might just be missing something obvious)
< jtimon>
dcousens: hopefully you like #6597 now, it seems I coded that too fast and it was hurting performance for no good reason
< dcousens>
jtimon: nice
< dcousens>
jtimon: I'm still not 100% on the implications of the change (will keep running through it in my head) but the code looks good.
< jtimon>
dcousens: I think the change you chosed as an example doesn't actually hurt performance since non-genesis block were comparing the hashes anyway (just after checking the pow instead of before), but there were other places (which I'm not sure I had to touch even before rebasing and testing #6382 again) where your arguments certainly applied
< dcousens>
jtimon: indeed, it compares the hash right after, but, this is cleaner :)
< dcousens>
and at least, we only do it once now
< dcousens>
well, less*
< jtimon>
and it's when throwing an error...thanks for the review!
< gmaxwell>
sipa: it was the right thing to do, but I think splitting the libsecp256k1 PR now means that no one is going to comment on the first part. :-/
< jtimon>
gmaxwell: I'm afraid I can just concept ack on the first part
< sipa>
gmaxwell: by first part you mean "the earlier PR", or "the PR whose commits now go first" ?