< dcousens>
Given it's called a 'wrapper' but it's now generic, seems a bit odd IMHO
< sipa>
call it DbInterface then
< phantomcircuit>
jgarzik, :)
< dcousens>
sipa: but it's not just an interface, no, it instantiates and manages the entire thing, right?
< dcousens>
it's more like 'Db', ha
< jgarzik>
it is a class that wraps the backend database in a nice snuggly generic interface
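As an aside, a minimal sketch of the idea jgarzik describes: a class that owns a backend handle and only exposes a backend-agnostic interface. The names below are hypothetical, not the actual dbwrapper API.

    // Hypothetical sketch only; names do not match the real dbwrapper code.
    #include <memory>
    #include <string>

    class BackendDb {  // stand-in for a leveldb or sqlite handle
    public:
        virtual ~BackendDb() = default;
        virtual bool Put(const std::string& key, const std::string& value) = 0;
        virtual bool Get(const std::string& key, std::string& value) const = 0;
    };

    class DbWrapper {
    public:
        explicit DbWrapper(std::unique_ptr<BackendDb> backend)
            : m_backend(std::move(backend)) {}  // the wrapper owns the backend

        bool Write(const std::string& key, const std::string& value) {
            return m_backend->Put(key, value);
        }
        bool Read(const std::string& key, std::string& value) const {
            return m_backend->Get(key, value);
        }

    private:
        std::unique_ptr<BackendDb> m_backend;  // callers never see the concrete type
    };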
< gmaxwell>
(1) dear god, are people really arguing about this, (2) dbwrapper says what it does!
< jcorgan>
jgarzik: compile ack on the 2015_sqlite branch, but looks like retains all the leveldb naming in the build system, checking now
< dcousens>
gmaxwell: wasn't arguing, just seemed better to ask here than ask about it on GH seeing as it's probably me just missing context
< jgarzik>
jcorgan, naming? 2015_sqlite branch should not include any leveldb naming in the bitcoin->sqlite path. it _does_ build leveldb. it _does not_ link leveldb to bitcoind.
< jcorgan>
oh, that's what i'm seeing then
< sipa>
dcousens: then DbWrapper is a better name! it's something that wraps a db, but you don't know or care what kind of database that is
< dcousens>
gmaxwell: in any case, a wrapper typically wraps an existing object no?
< dcousens>
in this case, the object is entirely owned by the wrapper
< sipa>
grrrrr, EncapsulatedDb then?
< dcousens>
haha, sipa this isn't important, I'm just curious
< jcorgan>
jgarzik: there are some stray leveldb name issues
< jgarzik>
jcorgan, where?
< jcorgan>
like using 'leveldb_error' in dbwrapper.cpp
< jgarzik>
hrm, I thought I got that one :(
< jgarzik>
jcorgan, no, wait, I did. git pull. :)
< jcorgan>
it's still defined in dbwrapper.h, try commenting that and see where the errors are
< dcousens>
sipa: yeah, a quick look at the wrapper/adapter pattern suggests it is typically applied as glue code, i.e. between two things with their own ownership.
< dcousens>
In this case, it's not that, so, anyway. Just food for thought.
< dcousens>
tl;dr EncapsulatedDb would probably be better
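Roughly the distinction being drawn, with made-up types: a classic adapter wraps an object whose lifetime is managed elsewhere, whereas the class under discussion creates and owns its backend outright.

    // Hypothetical illustration of the ownership distinction; not project code.
    #include <memory>

    struct Backend {};  // some existing database handle type

    // Classic wrapper/adapter: glue around an object owned by someone else.
    class BackendAdapter {
    public:
        explicit BackendAdapter(Backend& existing) : m_ref(existing) {}
    private:
        Backend& m_ref;  // non-owning reference; lifetime managed by the caller
    };

    // The pattern described above for dbwrapper: the object is created and
    // entirely owned by the "wrapper" itself.
    class EncapsulatedDb {
    public:
        EncapsulatedDb() : m_db(std::make_unique<Backend>()) {}
    private:
        std::unique_ptr<Backend> m_db;  // owning; the backend never escapes
    };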
< dcousens>
jgarzik: well, it causes confusion for anyone who hasn't read through that part of the source. So, IMHO it matters enough to be discussed. But not to block your PR. Moving on :)
< jcorgan>
but to be pedantic, there are a bunch of 'leveldb' strings in the comments :-)
< * jgarzik>
is focusing on operational stuff, like why this unit test doesn't want to compile my SQL
< jcorgan>
no worries. i haven't tested anything, but compiles fine on ubuntu trusty
< phantomcircuit>
jgarzik, it would be nice if there was a virtual base class
< phantomcircuit>
DBWrapperInterface
< * phantomcircuit>
goes to look for some InterfaceFactories
< dcousens>
phantomcircuit: if the default is being hard-coded in at each entry anyway
< dcousens>
probably not much point
< dcousens>
unless the wrapper is instantiated at a much higher point in the program and passed through, at which point the generic interface could be used
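For illustration only, a sketch of the virtual base class phantomcircuit floats, plus dcousens' caveat about where it would pay off. DBWrapperInterface is just the name suggested above; none of this exists in the tree.

    // Sketch only: a virtual base class as suggested above. These names are
    // illustrative; no such interface exists in the code being discussed.
    #include <memory>
    #include <string>

    class DBWrapperInterface {
    public:
        virtual ~DBWrapperInterface() = default;
        virtual bool Write(const std::string& key, const std::string& value) = 0;
        virtual bool Read(const std::string& key, std::string& value) const = 0;
    };

    // Concrete backends would each implement the interface.
    class LevelDbWrapper : public DBWrapperInterface { /* ... */ };
    class SqliteWrapper  : public DBWrapperInterface { /* ... */ };

    // The caveat: the interface only buys flexibility if the concrete type is
    // chosen at a high point in the program and passed down, rather than
    // hard-coded at each call site.
    void InitDatabases(std::unique_ptr<DBWrapperInterface> coindb)
    {
        (void)coindb;  // code below this point only ever sees DBWrapperInterface
    }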
< * jgarzik>
reminds channel: we've moved on
< phantomcircuit>
dcousens, well it's only actually named in one place i think so it doesn't much matter
< phantomcircuit>
possibly we should at least try to reconnect to disconnected peers...
< GitHub136>
[bitcoin] laanwj opened pull request #6877: rpc: Add maxmempool and effective min fee to getmempoolinfo (master...2015_10_mempool_effective_fee) https://github.com/bitcoin/bitcoin/pull/6877
< GitHub199>
[bitcoin] gavinandresen opened pull request #6880: New -logtimerelative option to do millisec debug.log timestamps (master...millisec_debuglog) https://github.com/bitcoin/bitcoin/pull/6880
< GitHub54>
[bitcoin] gavinandresen closed pull request #6880: New -logtimerelative option to do millisec debug.log timestamps (master...millisec_debuglog) https://github.com/bitcoin/bitcoin/pull/6880
< GitHub0>
[bitcoin] sdaftuar opened pull request #6881: Debug: Add option for microsecond precision in debug.log (master...add-microsecond-timestamps) https://github.com/bitcoin/bitcoin/pull/6881
< maaku>
I am leaving bitcoin core development for a while, not sure how long. If someone wants to take over pushing for #6312, #6564, and #6566 please contact me.
< Dyanisus>
why you leaving?
< btcdrak>
maaku: why are you leaving?
< btcdrak>
maaku: we're literally right at the end!
< Dyanisus>
maaku: did you get some sort of subpoena or something?
< btcdrak>
maaku: I've sent you an email about those PRs.
< GitHub65>
[bitcoin] petertodd opened pull request #6883: Add BIP65 CHECKLOCKTIMEVERIFY to release notes (master...cltv-release-notes-v0.12.0) https://github.com/bitcoin/bitcoin/pull/6883
< GitHub95>
[bitcoin] btcdrak opened pull request #6884: Backport #6566, median-past locktime, rebased against 0.11 (0.11...mpl-0.11) https://github.com/bitcoin/bitcoin/pull/6884
< dhill>
maaku: hi
< dhill>
maaku: regarding checksequenceverify, i ran your tests against btcd, and they all pass
< dhill>
but
< dhill>
+ if (txToSequence & CTxIn::SEQUENCE_LOCKTIME_DISABLED_FLAG)
< dhill>
+ return false;
< dhill>
was not hit
< dhill>
i think there is a dup test in tx_invalid.json
< maaku>
I added btcdrak as a collaborator for my personal repo, so he should be able to update the 'sequencenumbers', 'checksequenceverify', and 'medianpastlocktime' branches
< btcdrak>
maaku: thanks!
< cfields>
sipa: ping. I'm trying to understand the current dnsseed thread behavior. It looks like in the non-proxied case, we resolve all seed nodes and add them to addrman without testing them. In the proxied case, we connect to each resolved entry as a oneshot...
< cfields>
i assume the goal is to query them all once if possible in order to never have to touch 'em again?
< cfields>
just seems a bit unbalanced between the proxy/non-proxy cases, want to be sure i'm not missing anything in the big picture
< * jgarzik>
wonders about the proxy case behavior
< jgarzik>
cfields, The add-untested logic follows the getaddr logic, where it is unknown, possibly garbage data coming in from unknown sources
< jgarzik>
cfields, overall I do think addrman must do a much better job of testing and expiring bad addresses
< jgarzik>
cfields, notably we engage in the poor network behaviors of (1) propagating known bad addresses and (2) repeatedly trying the same dead nodes, in some near-empty-bucket edge cases
< gmaxwell>
The proxy behavior is simply because we cannot do the dns lookup via the proxy. So connecting to the addresses returned (by proxy connecting to the names) was the best thing we could do.
< cfields>
jgarzik: ok, that lines up with how i read it. Looks like in the proxied case where we can't run a query, we just connect to all reachable nodes (query done through the proxy), getaddr on them, and disconnect. seems to me like that'd yield less garbage, but i suppose you're right about letting addrman do it.
< gmaxwell>
It does a less good job of initializing the state for sure.
< gmaxwell>
not to all reachable nodes, we connect 'to' each dnsseed by name (meaning connect to one of the results of it).
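A rough paraphrase of the two paths described above, with every helper name made up for illustration (this is not the actual dnsseed thread code):

    // Rough paraphrase of the behavior discussed above; all names here are
    // hypothetical stand-ins, not the real dnsseed thread in net.cpp.
    #include <string>
    #include <vector>

    struct Address { std::string ip; };

    std::vector<Address> Resolve(const std::string&) { return {}; }  // DNS lookup stub
    void AddOneShot(const std::string&) {}                           // oneshot connect stub

    struct AddrManSketch {
        void Add(const Address&, const std::string&) {}  // remembers per-source entries
    };

    void DnsSeedSketch(AddrManSketch& addrman,
                       const std::vector<std::string>& seeds,
                       bool have_proxy)
    {
        for (const std::string& seed : seeds) {
            if (!have_proxy) {
                // Non-proxied: resolve the seed and add every result to addrman
                // untested; addrman is left to sort out any garbage later.
                for (const Address& addr : Resolve(seed)) {
                    addrman.Add(addr, /*source=*/seed);
                }
            } else {
                // Proxied: the lookup can't go through the proxy, so connect
                // "to" the seed by name as a oneshot (i.e. to one of its
                // results), getaddr it, and disconnect.
                AddOneShot(seed);
            }
        }
    }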
< cfields>
gmaxwell: less good because the dns seeds tend to be much fresher, i assume?
< jgarzik>
dns seeds are fresh and quick and easy and wonderfully centralizing ;p
< gmaxwell>
cfields: no, just less good because any one of the dnsseed results could be broken or malicious.
< cfields>
gmaxwell: ah right, only one-per-seed that way.
< jgarzik>
notably there is now code that "waits a bit before checking seeds" to attempt P2P without DNS seeds
< gmaxwell>
e.g. dns seed returns nodes a,b,c,d,e,f,g ... and we only end up connecting to g (out of that dns seed's results) and perhaps g gives us useless answers.
< jgarzik>
so DNS seeding is intentionally not immediate
< cfields>
ok, all clear now. thanks.
< cfields>
ok, along those lines, are there concerns (fingerprinting) about hitting the seeds too quickly or uniformly? I don't see any active approach in the current code to avoid that
< jgarzik>
cfields, sure those concerns exist
< jgarzik>
cfields, that's part of the reason why the current code tries to avoid DNS lookups
< jgarzik>
cfields, a DNS seed can build a picture of bitcoin users etc.
< jgarzik>
cfields, Current logic is: (1) if addrman.size() > 0, attempt to connect to the network given the current peer database; (2) if "too few" P2P connections after 11 seconds, perform a DNS lookup.
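A minimal sketch of that ordering, assuming "too few" means some small threshold; the helper names and the exact check are assumptions, not the real implementation.

    // Illustrative sketch of the sequencing described above; helper names
    // and thresholds are assumptions, not the actual code.
    #include <chrono>
    #include <thread>

    bool AddrmanNonEmpty()    { return true; }  // stub for addrman.size() > 0
    int  ConnectedPeerCount() { return 0; }     // stub
    void QueryAllDnsSeeds()   {}                // stub

    void SeedingSketch()
    {
        if (AddrmanNonEmpty()) {
            // (1) Give the existing peer database a head start.
            std::this_thread::sleep_for(std::chrono::seconds(11));
            // "Too few" is whatever threshold the real code uses; 2 is just
            // a placeholder here.
            if (ConnectedPeerCount() >= 2) return;
        }
        // (2) Still too few connections (or an empty addrman): fall back to
        // querying the DNS seeds.
        QueryAllDnsSeeds();
    }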
< cfields>
obviously not worried about the seeds themselves, since they're already trusted. more about someone trying to get 2+ entries into a seed's current result set in order to distinguish individuals
< gmaxwell>
cfields: DNS seeds are prohibited by policy from having a TTL less than 60 seconds, so recursive resolution provides some increase in privacy.
< jgarzik>
I think it's quite likely that a single DNS seed might spit out two addresses which are in fact controlled by one person/org
< gmaxwell>
it's almost certain. well, plus a substantial number of nodes on the network are sybil attackers in any case.
< gmaxwell>
:P
< cfields>
jgarzik: sure, i'm speaking of the procedure when we do fall into the dns lookup logic.
< jgarzik>
sure. the logic is dumb: query all DNS seeds, let addrman sort it out
< gmaxwell>
cfields: addrman knows where addresses came from to e.g. prevent any one source from dominating the table. There is only so much it can do.
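To illustrate the kind of per-source limiting gmaxwell mentions (purely a sketch, not how addrman is actually structured):

    // Sketch only: one way a table can stop a single source from dominating
    // it. This is not the real addrman bucketing logic.
    #include <cstddef>
    #include <map>
    #include <set>
    #include <string>

    class SourceLimitedTable {
    public:
        explicit SourceLimitedTable(std::size_t per_source_cap) : m_cap(per_source_cap) {}

        // Returns false if this source has already contributed its share.
        bool Add(const std::string& addr, const std::string& source) {
            auto& bucket = m_by_source[source];
            if (bucket.size() >= m_cap) return false;
            return bucket.insert(addr).second;
        }

    private:
        std::size_t m_cap;
        std::map<std::string, std::set<std::string>> m_by_source;
    };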
< cfields>
ok. no problem with that, just want to be sure i'm understanding the logic that's there
< gmaxwell>
Lack of actual users running nodes provides a very limiting upper bound on how much protection is possible.