< kanzure>
i do not feel entirely comfortable with his "throw tests away" comment
< gmaxwell>
kanzure: I thought the same, but on reflection, I dunno, any generic advice is ... generic. I've certainly seen tests that should be thrown away. Ones that basically could never catch an error but still manage to produce false positives. Those should probably be thrown away.
< sipa>
gmaxwell: i think the advice to remove never-failing tests specifically is not wanted for consensus logic
< sipa>
gmaxwell: though i guess that's a case of code whose behaviour is "fully specified"
< kanzure>
"delete tests" is advice that probably works on teams where people are encouraged to write lots of useless tests. i could imagine the weeds growing pretty fast :).
< sipa>
well we do suffer from tests that just test exact implementation equivalence rather than correctness
< sipa>
like any change to the coin selection code will likely break the unit tests
< sipa>
which discourages improving them
< sipa>
rather than helping to make sure that changes don't break anything
< kanzure>
yes i am not sure how to correctly "test" stochastic approximation. "is it really approximating?" "is it really stochastic?" hehe
< wumpus>
I wouldn't agree with deleting tests
< wumpus>
I'd still like to add more unit tests and more coverage instead of going on some wacky tangent
< wumpus>
I don't think we have the mentioned problem at all, that we're trusting unit tests too much
< wumpus>
code review is most important
< wumpus>
but removing tests sounds like removing safety features from a car because it makes people more careful, yea, nice in theory, in practice it is harmful to have more crippled people even if they 'learned a lesson'
< sipa>
i agree, but that post does contain some interesting viewpoints
< wumpus>
similarly, having trivial things fail more often in production is not good, even if it makes developers (who?) learn a lesson
< wumpus>
I really don't like it
< btcdrak>
I think it's ridiculous. Unit tests are invaluable, they catch regressions, make refactoring easier to do etc.
< wumpus>
yes it's pretty ridiculous imo
< btcdrak>
if this is where "professional" software development is going, god save us.
< sipa>
well if people are refactoring their software so that higher line coverage is obtained, with tests that will never fail... there is a problem
< wumpus>
I doubt it is. And yes some companies overdo testing, having test suites that run for 20 hours every time a change is made. For those I guess it makes sense to be more selective.
< sipa>
i don't think we have that problem
< sipa>
rather the opposite
< sipa>
however, i do think we have a number of tests that are focused on testing exact behaviour, where nobody still knows why the test was added
< wumpus>
yes, there are plenty of specific tests that could be improved
< wumpus>
no argument there
< btcdrak>
and properly commented.
< sipa>
so they break when code is changed, with nobody knowing whether the broken test was testing some required property of the system, or just testing the exact behaviour the old code had
< wumpus>
testing is absolutely a skill in itself and it doesn't overlap 100% with developing, but saying things like 'business people should make tests not developers'... yeah right
< sipa>
well i guess we're kinda doing that for wallet rpc tests
< sipa>
not that they're written by business people, but they do test application behaviour, not individual functions
< wumpus>
both levels make sense to test
< sipa>
of course
< wumpus>
but yes for the wallet we primarily test application-level behavior also because most functions aren't really testable units
< sipa>
i don't think anyone is actually suggesting removing tests (?), but i think a mindset of testing for correctness rather than exact replication makes sense
< wumpus>
lots of behavior that depends on the state of the wallet, which depends on an entire history of actions, there's not really much unit testing that can be done in that case
< sipa>
or at least separating the two
< wumpus>
indeed
< phantomcircuit>
wumpus, the question is ultimately "what is a unit?"
< wumpus>
phantomcircuit: it is, tho let's not get philosophical :)
< sipa>
phantomcircuit: i think we should use Planck units
< wumpus>
+1
< phantomcircuit>
sipa, :P
< phantomcircuit>
it's in the name
< phantomcircuit>
unit tests
< wumpus>
though not all Planck units describe the minimum physical quantities, e.g. Planck mass is pretty large (in the micrograms)
< wumpus>
I guess that describes some of our unit tests that are actually RPC tests in disguise...
< GitHub172>
bitcoin/master 7e908c7 Gregory Maxwell: Do not use mempool for GETDATA for tx accepted after the last mempool req....
< GitHub172>
bitcoin/master 862fd24 Wladimir J. van der Laan: Merge #8080: Do not use mempool for GETDATA for tx accepted after the last mempool req....
< GitHub70>
[bitcoin] laanwj closed pull request #8080: Do not use mempool for GETDATA for tx accepted after the last mempool req. (master...nonmempool_getdata) https://github.com/bitcoin/bitcoin/pull/8080
< iniana>
Is there something I can do, configuration-wise, to speed up initial synchronization? Right now it is painfully slow, and I cannot find the reason why. Around 1 or 2 blocks per second are processed and download peaks are at 15KB/s
< gmaxwell>
iniana: depends on what is bottlenecking you. If disk IO is, then increasing dbcache will help a lot.
< iniana>
gmaxwell: Will try that. Thanks.
< gmaxwell>
to do that, set -dbcache=<megabytes>; don't set it higher than the amount of free ram you have, minus about 600MB or so.
< gmaxwell>
iniana: What sort of hardware are you on?
< iniana>
gmaxwell: Pretty standard hardware. Core i7 (2yo), 16GB RAM. Blockchain stored on a slow HDD though, so I hope the dbcache will work
< sipa>
in that case, dbcache will help a lot
< sipa>
set it to 3000 or more
< gmaxwell>
yes, that will make it much faster. with 16GB ram you should be able to set it to 6000 or so and then the whole sync will basically happen in-memory, with just saving out chain to disk.
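A minimal bitcoin.conf sketch of the advice above (the 4000 figure is just an example for a 16GB machine; pick a value leaving headroom below your free RAM, as gmaxwell notes):

```
# bitcoin.conf -- raise the database cache size (in MiB) so initial
# sync runs mostly in memory on slow disks.
dbcache=4000
```

The same setting can be passed on the command line as -dbcache=4000.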
< * sipa>
believes this C++ type could use some typedefs:
< iniana>
started up again, trying with dbcache=4000, but synching stalls. log mentions reindexing every single block file. Is that necessary? Is there some way to turn it off?
< sipa>
did you start with -reindex?
< paveljanik>
sipa, it looks like Perl 8)
< iniana>
sipa: no
< sipa>
iniana: can you c/p some typical 3 line snippet from debug.log?
< iniana>
sipa: I copied the entire .bitcoin from another node I had though, if that matters.
< sipa>
a running one?
< iniana>
sipa: no, although maybe it didn't have a normal termination?
< iniana>
2016-05-31 17:57:18 Reindexing block file blk00070.dat...
2016-05-31 17:57:21 Block Import: already had block 000000000000003d64d714747fbfb615e056f533386d162f0bf5e049a9b6e0b2 at height 247000
2016-05-31 17:57:22 Reindexing block file blk00071.dat...
2016-05-31 17:57:23 Block Import: already had block 000000000000004d945017c14b75a3a58a2aa6772cacbfcaf907b3bee6d7f344 at height 248000
< gmaxwell>
sipa: Why does dequeue gain the ctransaction shared_ptr iterator?
< sipa>
gmaxwell: because that's smaller and faster than the hash
< sipa>
the old code was stupid
< sipa>
it's not a shared_ptr iterator btw, but a mapRelay iterator
< gmaxwell>
sipa: isn't the iterator invalidated any time the map is updated?
< sipa>
i'll document that change in the commit message
< sipa>
gmaxwell: oops, no, can't do!
< sipa>
it can cause double deletions, as the expiration entries are not unique
< gmaxwell>
I guess it cares about mutations too? I was thinking about adding a nodeid set to maprelay later to only answer queries for peers we actually inved to.
< sipa>
the value of map entries can be changed, no problems (without iterator invalidation)
< sipa>
keys can't ever change
< sipa>
i'll leave it be
< sipa>
oh, no, it is unique
< gmaxwell>
I was digging through trying to figure out how it wasn't unique...
< sipa>
i'm confusing with askfor
< gmaxwell>
I would like to remove the expiration entry and reinsert it if the old expiration was 'too soon'... but that doesn't change the situation for the map with the shared pointers.
< sipa>
gmaxwell: since #8080, if pto->timeLastMempoolReq == 0, we don't need to check the mempool at all, right?
< gmaxwell>
Right.
< phantomcircuit>
iniana, it's probably a good idea to reindex if you copied ... but it shouldn't need to do so
< phantomcircuit>
are you sure you exited cleanly on the other node?
< gmaxwell>
phantomcircuit: I would make a bet they copied the blocks directory without the chainstate.
< sipa>
that would not result in a 'reindexing' message
< * gmaxwell>
withdraws bet
< GitHub133>
[bitcoin] theuni opened pull request #8128: Net: Turn net structures into dumb storage classes (master...split-resolve) https://github.com/bitcoin/bitcoin/pull/8128
< phantomcircuit>
gmaxwell, i assume they had a node which they manually reindexed with a higher dbcache setting which wasn't cleanly shutdown