< BlueMatt>
oh, what? you told sipa that "it updates the storage"
< jtimon>
I mean it updates what you store, through the api (ie the library handles reorgs and updates things)
< BlueMatt>
well of course, you tell the library about a block and it connects what it can, possibly asking the library owner for blocks that it previously saw but (ofc) didnt store
< BlueMatt>
I dont see how else you'd possibly do it
< BlueMatt>
gmaxwell: see, eg, the bitcoinj stuff, where all the contributions to "full mode" since it was added have been to add it to database X, so that people can put it in their own db
< jtimon>
see the backlog for where I ask people to answer the 4 possible combinations of 2 yes/no questions
< jtimon>
you and me agree on abstracting storage
< jtimon>
I mean, if I understood everyone's position correctly
< gmaxwell>
BlueMatt: without using a highly efficient format like ours, I'm dubious that the system can stay in sync without insane hardware.
< jtimon>
the other question is only check that a block is valid or also handle reorgs, update tables etc
< BlueMatt>
gmaxwell: thats not our problem, thats theirs
< BlueMatt>
gmaxwell: hell, they can use leveldb if they want
< jtimon>
ie verifyBlock vs processBlock
< BlueMatt>
gmaxwell: and, if we prefer, its also easy to add eg a single flag like "use X MB of utxo cache, because I dont want to implement that myself"
< gmaxwell>
BlueMatt: not just about leveldb or not, but the compressed ccoins representation is worthless for queries.
< BlueMatt>
gmaxwell: the compressed ccoins thing doesnt matter all that much if you're talking about an actually-high-performance db on high-performance hardware....if folks need their shit in their own db and are willing to pay $$$$$ for it to run, more power to them
< gmaxwell>
As far as 'their problem' goes, we shouldn't waste our resources (or code base clarity, or performance) supporting functionality that won't be useful to anyone (or to anyone beyond a couple centralized API services).
< jtimon>
we can add a wrapper with our own implementation of the interfaces, beyond that, right, it is their problem
< BlueMatt>
gmaxwell: ok, then what is the point of libconsensus at all?
< sipa>
BlueMatt: i don't care about libconsensus. i care about abstracting consensus logic out
< BlueMatt>
and whats your answer to folks like btcd/the new javascript one, who do have dbs that are performant enough to stay in sync
< gmaxwell>
The way it was pitched to me is so that people could make wallets and other similar applications without having to reimplement consensus logic.
< jtimon>
right
< BlueMatt>
gmaxwell: sure, but that doesnt mean we have to handle db logic ourselves
< sipa>
well, the hardest part of that is already available: we expose script verification
< jtimon>
how is that incompatible with allowing them to choose their database implementation?
< BlueMatt>
sipa: I dont think thats the only hard part, really
< sipa>
but i also don't think it's very hard to abstract utxo storage out... so if there is a use case, sure
< BlueMatt>
sipa: indeed, abstracting out utxo/block storage is also abstracting consensus logic out of other crap
< sipa>
maybe it is.
< sipa>
something like changing to per-output rather than per-tx utxo model would be impossible with a stable utxo storage abstraction
< jtimon>
I assume the use cases come from the fact that they want to reuse that database for some of their logic somehow
< sipa>
s/impossible/inefficient or complicated/
< gmaxwell>
I'm not opposed to it being abstractable-- but I don't see how this is related to that goal-- it's the opposite of it, the blockchain storage and utxo set is consensus and may even be quite normative in its behavior (e.g. if we have a committed utxo set of some kind), and if it trashes performance or code clarity then it's not a good move.
< BlueMatt>
sipa: oh? if you query per-utxo then per-tx can be hidden on the backend and could still be pretty performant....indeed, the other way around doesnt really work
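BlueMatt's point -- that a per-output query can be answered from a per-tx backing store, but not easily the other way around -- can be sketched like this (all names here are illustrative, not Bitcoin Core's actual classes):

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Hypothetical sketch: the backend stores whole-transaction records, while
// the query interface exposed to callers is per-output. The wrapper simply
// extracts the requested index from the per-tx record.
struct TxRecord {
    std::vector<int64_t> outputValues;  // one entry per output, -1 = spent
};

class PerTxUtxoStore {
    std::map<std::string, TxRecord> byTxid;  // txid -> per-tx record
public:
    void AddTx(const std::string& txid, TxRecord rec) { byTxid[txid] = std::move(rec); }

    // Per-output query: hides the per-tx layout behind a per-utxo interface.
    bool GetOutput(const std::string& txid, size_t n, int64_t& valueOut) const {
        auto it = byTxid.find(txid);
        if (it == byTxid.end() || n >= it->second.outputValues.size()) return false;
        if (it->second.outputValues[n] < 0) return false;  // already spent
        valueOut = it->second.outputValues[n];
        return true;
    }
};
```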
< gmaxwell>
and for an example of the kind of complexity it creates: if you think you can just query a utxo database it means we can't have writeback caching internally.
< BlueMatt>
gmaxwell: sure, no one wants to do anything that ends up introducing performance regressions
< sipa>
abstracting storage is more to avoid dependencies rather than it being reusable
< gmaxwell>
I think callers that want a database probably don't want to replace the database used for consensus-- what they want is a node that maintains an external database for their application.
< gmaxwell>
Which is probably not the same thing due to consistency requirements.
< jtimon>
gmaxwell: it shouldn't trash performance or code clarity, I agree
< BlueMatt>
gmaxwell: it might be worse performance, but syncing after every block after ibd (ie having a flag to sync) isnt all that hard, either....
< sipa>
BlueMatt: but for a full node, you probably don't want to sync after every block
< BlueMatt>
gmaxwell: and its easy to do that without introducing our own performance regressions....just dont call Sync after every block....
< BlueMatt>
sipa: depending on your application, maybe you do
< jtimon>
sipa: once abstracted out, changing the interface in certain ways may be painful, correct
< BlueMatt>
at least after IBD
< gmaxwell>
Not writing out the utxo set constantly is critical for performance. Leveldb is slow.
< sipa>
BlueMatt: but it'd be using our block validation code, which has known performance characteristics
< sipa>
BlueMatt: and if you don't care about that, you wouldn't be using libconsensus at all
< BlueMatt>
sipa: as long as we dont drop the cache when we flush (like we do now, which we already need to fix), I dont see a performance issue there?
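The cache behavior BlueMatt describes -- flushing dirty entries to the backing store without evicting them -- is the usual write-back pattern. A minimal sketch (hypothetical names, not Core's CCoinsViewCache):

```cpp
#include <cassert>
#include <map>
#include <string>
#include <vector>

// Illustrative write-back cache: Flush() persists modified entries but keeps
// them resident, so reads stay warm across flushes (no drop-on-flush).
class WriteBackCache {
    std::map<std::string, std::string> cache;
    std::vector<std::string> dirty;                 // keys modified since last flush
    std::map<std::string, std::string>& backend;    // stand-in for the on-disk db
public:
    explicit WriteBackCache(std::map<std::string, std::string>& db) : backend(db) {}

    void Write(const std::string& k, const std::string& v) {
        cache[k] = v;
        dirty.push_back(k);
    }

    bool Read(const std::string& k, std::string& out) const {
        auto it = cache.find(k);
        if (it != cache.end()) { out = it->second; return true; }
        auto bit = backend.find(k);
        if (bit == backend.end()) return false;
        out = bit->second;
        return true;
    }

    // Persist dirty entries; the cached copies are retained.
    void Flush() {
        for (const auto& k : dirty) backend[k] = cache[k];
        dirty.clear();
    }

    size_t CachedEntries() const { return cache.size(); }
};
```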
< sipa>
BlueMatt: fair enough
< sipa>
BlueMatt: that's a good point - but the current code doesn't do that
< gmaxwell>
I can't imagine an application which needs to muck around storing the utxo database in an application format which wouldn't be equally or better served by block processing callback that maintains an external database that isn't used for validation.
< BlueMatt>
suresure, but these are all minor issues that are trivial to fix
< jtimon>
avoiding dependencies is also a great gain, although I think we should have a version depending on levelDB and our own implementation too
< sipa>
again, i'm not against abstracting things out
< gmaxwell>
Widespread application visibility into the actual utxo database would be pretty toxic for committed utxo or stxo improvements.
< BlueMatt>
gmaxwell: duplicate databases? people who run shit on modern services where you literally have no local persistent storage?
< sipa>
i very much feel that utxo storage is one of the things that is abstractable
< sipa>
but that doesn't mean it's necessarily useful for sharing that information with other purposes
< sipa>
it also doesn't mean it's not
< jtimon>
yeah, I assume one use case could be having everything in memory
< gmaxwell>
BlueMatt: basically any standard database approach would have horrible performance to the point that it would only be usable on very high end hardware. Having two copies of the utxo set would hardly be a consideration there, our copy is only about 2GB of data.
< BlueMatt>
gmaxwell: I seriously dont believe that...maybe it takes 10 seconds or 30 seconds to sync a block to the db, but so what? you just dont flush all the time during IBD and then wait it out
< BlueMatt>
(and, yes, I get that in most dbs it actually will take 30 seconds, but it wont take much longer than that)
< BlueMatt>
or a minute with segwit blocks
< * BlueMatt>
-> out
< gmaxwell>
BlueMatt: see also the electrum channel with people complaining about their servers falling multiple blocks behind.
< gmaxwell>
and two minute updates.
< BlueMatt>
gmaxwell: ok...? that doesnt mean its impossible to build a db that can store the utxo set with reasonable performance, even if not blazing fast performance?
< gmaxwell>
I just feel that the purpose of these changes is no longer clear. Expecting the user to implement complex interfaces with bitcoin specific and consensus critical behavior is at odds with what I understood to be the stated goal.
< * BlueMatt>
doesnt see the problem with it taking a minute for your db to sync the utxo set....if you're just a merchant and its slow, so what?
< jtimon>
it is also possible to build something faster than our solution, isn't it?
< BlueMatt>
gmaxwell: I highly disagree that selecting a sane DB (ie not using some external SQL thing) is "implementing complex interfaces"
< gmaxwell>
And if maintaining a database for a block explorer is a goal-- then it can be done in a better way than also trying to use that database for consensus... just run it in parallel, the resource overhead will be moderate.
< BlueMatt>
and I dont expect every exchange to do so, I'd expect folks like btcd/bitcoinj/javascript thinggy to do it
< BlueMatt>
and people to use it
< BlueMatt>
gmaxwell: go look at the bitcoinj users...people actually use its full validation shitshow so that they can do exactly this
< jtimon>
gmaxwell: we don't need to expect the user to reimplement it, we should provide our own implementation to the interfaces
< BlueMatt>
and its slow, but they dont care
< BlueMatt>
(and the interface is trivial)
< gmaxwell>
BlueMatt: complex interfaces means you need to actually pass the data as fields and not opaque blobs. And what happens when the set of data needed for consensus changes (as sequence locktiming did)?
< BlueMatt>
its like 4 functions and only a single non-trivial requirement for which there are unit tests (the utxo-replacement-thing)
< jtimon>
again, you keep discussing the slow case, what if I'm faster than our implementation?
< gmaxwell>
BlueMatt: that usage doesn't even need full node support at all, run your bitcoinj behind a full node, have it log blocks to a database. Hurray.
< BlueMatt>
gmaxwell: yes, except people dont do that
< sipa>
maybe we should write a simple daemon that maintains the utxo set in SQL
< BlueMatt>
people prefer to run full validation shit in btcd or bitcoinj, despite knowingly putting themselves at risk
< sipa>
(without validation)
< jtimon>
or are we discarding that as a possibility?
< gmaxwell>
sipa: that is what I was saying.
< gmaxwell>
BlueMatt: they don't know they're putting themselves at risk.
< BlueMatt>
gmaxwell: ok, well either way I dont see an alternative great solution here? most developers do want a library that handles all the background validation shit for them
< BlueMatt>
and as long as such things exist, they will use them
< gmaxwell>
jtimon: because there exists nothing faster currently, or else we'd be using it.
< BlueMatt>
proxy-nodes be damned
< gmaxwell>
BlueMatt: the alternative is to have support for maintaining a database synced with the blockchain-- that doesn't mean inserting things into the consensus logic.
< gmaxwell>
It just means having a simple set of hooks that run post tip change and update the database.
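gmaxwell's hook idea might look roughly like this -- the node keeps the consensus database to itself and merely notifies registered listeners after each tip change, so an application maintains its own external database without touching validation (names are made up for illustration):

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Illustrative tip-change notification: the node calls NotifyTip() after
// consensus has fully accepted a new tip; listeners update their own dbs.
struct TipUpdate {
    std::string blockHash;
    int height;
};

class TipNotifier {
    std::vector<std::function<void(const TipUpdate&)>> listeners;
public:
    void Register(std::function<void(const TipUpdate&)> cb) {
        listeners.push_back(std::move(cb));
    }
    void NotifyTip(const TipUpdate& u) {
        for (auto& cb : listeners) cb(u);
    }
};
```

The key property is that listeners run strictly after validation, so they can never feed state back into consensus.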
< BlueMatt>
gmaxwell: you missed my previous point about how lots of "modern" developers run shit on services with ~no persistent storage
< BlueMatt>
anyway, I actually need to run, have an apt to keep
< sipa>
run, BlueMatt, run
< jtimon>
there's no storage solution optimized for having everything in memory that outperforms leveldb or offers some other advantage?
< sipa>
apt-keep BlueMatt
< sipa>
jtimon: -dbcache=8000
< gmaxwell>
We already have that built in.
< sipa>
jtimon: leveldb won't be used at all
< jtimon>
sipa: right, I know, but I doubt there's nothing better than leveldb with unbounded cache
< jtimon>
sipa: leveldb is what we're using, no?
< gmaxwell>
jtimon: that doesn't use leveldb (except to persist across restarts)
< jtimon>
oh
< jtimon>
I see, I didn't know that, thanks
< jtimon>
even if we only expose the version with our implementation, I think it would be good to abstract consensus storage even for bitcoin core regardless of libconsensus users
< gmaxwell>
our storage is already abstracted.
< jtimon>
I really think not trashing performance for our own implementation should be the priority at first; if the interface needs to go through a few iterations to not trash other implementations too, I think that's fine since we will be offering the "don't use storage abstractions" version too anyway
< jtimon>
yes, our storage is abstracted in more ways than we would want to expose in a storage-independent libconsensus C API
< jtimon>
I mean, since I'm in favor of exposing both, one storage independent and one that is not, I'm fine with starting with the one that is not, I'm just more interested technically in the other one
< jtimon>
can we talk a little bit about the other question?
< jtimon>
ie whether libconsensus should fully validate a given block, or also accept it and update the state, manage reorgs, etc
< jtimon>
again I'm ok with offering both but I'm more interested in the smaller one
< jtimon>
or at least that's what I have been imagining all this time, I wasn't counting reorgs or updating the tip as part of the validation
< morcos>
gmaxwell: sdaftuar: to return to the fee bumping, in suhas' example, where you first try to bump tx1,2,3 with tx4, and then you try again.
< morcos>
i can see how you could invent a way to prevent tx4 from getting bumped, but how are you stopping 1,2,3 from being bumped again?
< morcos>
or more generally, let's say you had manually tried to bump, and tx1 and tx1a are both trying to pay the same guy (maybe you were smart enough to conflict, maybe not)
< gmaxwell>
They should be marked abandoned when the bump is created.
< morcos>
how do you stop a bumpunconfirmed from bumping both
< gmaxwell>
(but even if they weren't they're conflicted at that point)
< morcos>
they aren't conflicted
< gmaxwell>
by tx4, which is in your mempool.
< morcos>
conflicted means the conflicting tx is in the block
< morcos>
a conflicting tx in the mempool is something you are not even aware of
< morcos>
which brings me to the second point i wanted to make, about what you said about people abandoning wallets they think are empty
< morcos>
thats a fantastic point, that i wish had been made a while ago
< gmaxwell>
you're right, I'd been assuming it would be conflicted without thinking carefully what that test actually means right now.
< gmaxwell>
I know for sure people do abandon wallets that they think are empty (or even 'almost empty')
< morcos>
before we made the change to the confliction logic (for 0.12?) then if your spend was not in your mempool, it was considered conflicted (regardless of whether it was conflicted by an in-block tx, an in-mempool tx, or nothing)
< morcos>
so it would be kind of rare for you to think you were out of money but not
< morcos>
but now, for sure that might happen
< morcos>
now if you issue a tx that never makes it into a block or for some reason can't ever make it into your own mempool
< gmaxwell>
I seem to vaguely recall that something else would still prevent us from doublespending the inputs on txn that weren't in the mempool, even then.
< morcos>
regardless of any conflicting txs, it uses up your balance until you abandon it
< morcos>
i don't think so
< morcos>
it seems like we need another notion of balance (or maybe 2 more)
< morcos>
potential balance (which is not reduced by non-in-chain (6 deep?) spends) and maybe even a pending receive balance (although i guess that hasn't been a problem historically, that hasn't changed)
< gmaxwell>
Yes, agreed. I worry about the use 'balance' for a number which will go down without user interaction. :)
< gmaxwell>
More like "pending outgoing payments: outbound payments which are not N confirms deep yet" and "pending incoming payments".
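The balance split being discussed could be sketched as follows; the categories, field names, and the 6-confirmation threshold are all assumptions for illustration, not actual wallet code:

```cpp
#include <cassert>
#include <vector>

// Toy wallet transaction: a signed balance effect plus a confirmation depth.
struct WalletTx {
    long long delta;     // signed effect on the wallet (+receive, -spend)
    int confirmations;   // 0 = unconfirmed / in-mempool
};

struct Balances {
    long long spendable = 0;        // confirmed funds minus all outbound spends
    long long pendingOutgoing = 0;  // outbound amounts with < minConf confirms
    long long pendingIncoming = 0;  // inbound amounts with < minConf confirms
};

Balances Compute(const std::vector<WalletTx>& txs, int minConf = 6) {
    Balances b;
    for (const auto& tx : txs) {
        if (tx.delta < 0) {
            // Spends reduce the spendable balance immediately...
            b.spendable += tx.delta;
            // ...but are also reported as pending until buried.
            if (tx.confirmations < minConf) b.pendingOutgoing += -tx.delta;
        } else {
            if (tx.confirmations >= minConf) b.spendable += tx.delta;
            else b.pendingIncoming += tx.delta;
        }
    }
    return b;
}
```

This matches gmaxwell's framing: "spendable" only ever goes down through user action, while the two pending figures surface the in-flight amounts separately.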
< Taek>
Would it be enough to have a [confirmed balance] and an [unconfirmed diff]?
< morcos>
if this is really the only use case, it would be easy enough to make a rpc call that just give you a report on how "empty" your wallet is
< morcos>
Taek: we already have that... oh yeah, nm, that's the second thing i was talking about then, the pending received balance
< morcos>
getunconfirmedbalance
< Taek>
if I'm following correctly, the worry is about coins that become unconfirmed due to e.g. change outputs?
< morcos>
i think the primary worry is that if you spend some coins , but your spend never makes it into a block
< morcos>
your wallet still deducts that spend from your balance
< morcos>
forever
< morcos>
until you manually mark it as abandoned (which is sort of an advanced feature, that we don't often recommend)
< Taek>
technically some adversary could un-abandon any transaction that hasn't been double-spent
< morcos>
exactly why its an advanced manual feature
< Taek>
imo your confirmed balance should not change until the tx is in the blockchain
< Taek>
and the confirmed balance should be what is presented to the user as the primary balance
< morcos>
perhaps confirmed balance is the wrong word for what getbalance returns... it returns your spendable balance.. which certainly should be decremented for spends that haven't yet made it into the blockchain
< morcos>
and i think thats what people expect to see when they ask their balance
< Taek>
I don't think it's safe enough to show the user just one number.
< Taek>
simply because the whole unconfirmed uncertainty is inseparable from the blockchain way of doing things
< Taek>
(well, lightning doesn't really have this issue)
< BlueMatt>
gmaxwell: thinking about it more, the way we'd probably do it is, initially (ie v1) you make the libconsensus consumer provide a k-v store api, and we use that the same way we use leveldb, and then add functionality to parse the blobs we provide the k-v store into things like scriptPubKeys later
< jtimon>
perhaps both confirmed and spendable balances should be shown?
< BlueMatt>
this provides functionality, without breaking our ability to change the format to add new things
< jtimon>
I understand k-v is key-value kind of storage
< jtimon>
BlueMatt: if so, what would the values be? C structs ?
< BlueMatt>
the values are binary blobs that libconsensus provides which the user does not have any visibility into
< BlueMatt>
(in our case its the serialization of CCoins or whatever with our compression stuff)
< sipa>
BlueMatt: yup, that's what i imagined
< BlueMatt>
if the user wants to know whats inside, there is some api which can parse it into a c struct or whatever
< sipa>
a batch key-value write operation, and a key read operation
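sipa's minimal interface -- a batched key-value write plus a point read, over blobs the caller treats as opaque -- might look like this (hypothetical names, not the real libconsensus API):

```cpp
#include <cassert>
#include <map>
#include <optional>
#include <string>
#include <utility>
#include <vector>

using Blob = std::string;  // opaque bytes from the caller's perspective

// The interface libconsensus would ask the embedder to implement.
class IKeyValueStore {
public:
    virtual ~IKeyValueStore() = default;
    // Apply all writes atomically; an empty value means "erase this key".
    virtual void WriteBatch(const std::vector<std::pair<Blob, Blob>>& batch) = 0;
    virtual std::optional<Blob> Read(const Blob& key) const = 0;
};

// Trivial in-memory implementation a caller might supply.
class MemoryStore : public IKeyValueStore {
    std::map<Blob, Blob> data;
public:
    void WriteBatch(const std::vector<std::pair<Blob, Blob>>& batch) override {
        for (const auto& [k, v] : batch) {
            if (v.empty()) data.erase(k); else data[k] = v;
        }
    }
    std::optional<Blob> Read(const Blob& key) const override {
        auto it = data.find(key);
        if (it == data.end()) return std::nullopt;
        return it->second;
    }
};
```

Because the values are opaque, the library stays free to change its serialization (e.g. per-output vs per-tx) without breaking embedders.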
< jtimon>
^ for the "binary blobs the caller doesn't know about"
< sipa>
jtimon: i don't see such a thing at all
< sipa>
that commit is about bitcoinconsensus_create_consensus_parameters
< jtimon>
well, yeah, sorry, these are void pointers; the other is just data like in the tx for the current verifyScript
< jtimon>
in this case we would use the serialize lib to interpret and produce the "blobs", correct?
< sipa>
if there needs to be a way to view the utxo set, i'd just provide a separate api for that
< sipa>
not a parser for the database
< sipa>
and not in the first stage
< jtimon>
I am extremely interested in hearing what other people's next steps are, or more feedback on my own proposed next step
< jtimon>
sipa: I'm still not sure if you prefer to expose verifyBlock or processBlock
< sipa>
jtimon: i don't think we can do so right now anyway, without having a way to abstract state out
< sipa>
imho the first step is just continuing refactoring so that consensus logic and other things become better separated internally
< sipa>
and not focus on exposing things
< sipa>
but others may disagree - i think wumpus prefers first having a clear idea of what will be exposed
< sipa>
even a verifyBlock will need a way to pass in the utxo set and the block index
< sipa>
the only difference is that a verifyBlock doesn't need a way to update the utxo set, and doesn't need to be able to request other blocks in case of a reorg
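The verifyBlock/processBlock distinction can be shown with a toy chain: verify is read-only, process also mutates state. Everything here is illustrative (reorg handling omitted, and the "validity" rule is a stand-in):

```cpp
#include <cassert>
#include <string>
#include <vector>

struct ToyBlock {
    std::string hash;
    std::string prevHash;
};

struct ChainState {
    std::vector<ToyBlock> chain;  // active chain, genesis first
};

// Read-only: does the block connect to the current tip?
bool verifyBlock(const ChainState& state, const ToyBlock& b) {
    if (state.chain.empty()) return b.prevHash.empty();
    return b.prevHash == state.chain.back().hash;
}

// Read-write: verify, then update the state (no reorg handling here).
bool processBlock(ChainState& state, const ToyBlock& b) {
    if (!verifyBlock(state, b)) return false;
    state.chain.push_back(b);
    return true;
}
```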
< jtimon>
although I tend to agree, I feel that's very vague and doesn't help in clarifying priorities; thinking of the next thing to expose, I think, helps clarify what the goal of the refactors should be and where we're supposed to be moving towards
< sipa>
you know my opinion - i don't care about exposing anything at all at this point, so i'm the wrong person to ask
< jtimon>
yes verifyBlock would need an interface to access data from the utxo
< sipa>
i think we have harder problems to solve before exposing even comes into question
< jtimon>
dcousens: proposal was to pass all required data explicitly for the block you were validating
< sipa>
essentially, i think we should first introduce clean abstractions between certain modules inside bitcoin core, in such a way that it's effectively bitcoind using a consensus library already, without it being exposed
< sipa>
when it's good enough for us to use, we can think about exposing it to others
< sipa>
(but again, others may see things differently)
< jtimon>
right, and I think the module that should be a priority to cleanly separate is the part of the code that is required to fully validate whether a block is valid or not from everything else
< sipa>
but that's very tightly coupled with validation of the whole chain, through CBlockIndex
< jtimon>
right, it basically depends on chain.o and coins.o
< sipa>
you can't validate a block without knowing its CBlockIndex
< jtimon>
more specifically on two existing classes on them, an API for that is not hard
< sipa>
so i'm not sure whether "single block validation" is a useful abstraction on its own
< sipa>
transaction validation may be useful
< jtimon>
CBlockIndex is the storage interface I abstract (or abstract from its own existing abstraction) in 8493
< sipa>
i don't want an abstraction for CBlockIndex
< jtimon>
my proposed next steps are single header validation or single tx validation
< jtimon>
but without policy checks
< sipa>
as i said, i don't think we should focus on exposing interfaces now, but on separating modules
< sipa>
and i think separating block validation from chain validation is hard
< BlueMatt>
jtimon: I'm with sipa here - The Main Split was the first of many steps that make sense on their own to abstract out consensus and non-consensus code
< BlueMatt>
jtimon: the few commits I sent you earlier form the very tiny beginning of what I think are the next steps
< jtimon>
separating network things was absolutely brilliant, thanks again
< BlueMatt>
jtimon: ie having a state object internally which keeps chainstate in it and calls out to things for disk access and has ProcessNewBlock as a member function
< sipa>
BlueMatt: chainstate includes mapBlockIndex?
< BlueMatt>
and its ~no code changes, just some function splitting and putting ClassName:: in front of them
< BlueMatt>
sipa: yes, mapBlockIndex and chainActive and related variables
< BlueMatt>
sipa: but calling out for ReadBlockFromDisk, and pcoinsTip is just a pointer that is passed to it
< sipa>
BlueMatt: got it
< sipa>
seems like an easy first step
< jtimon>
sipa: if you don't care about exposing, that's fine, let's talk about dependencies, I want the consensus module to fully verify a single tx and a single header and a single block without depending on coins.o or chain.o
< BlueMatt>
yea, should be pretty clean...I dont have time to do it for the next week or two...do you want to take it up jtimon?
< BlueMatt>
I also want to work on splitting up net_processing more so that we can multithread ProcessMessages
< BlueMatt>
jtimon: I dont see how thats possible?
< sipa>
yes, i think those are possible
< jtimon>
I don't know what you want to do, how can I pick it up?
< sipa>
(don't
< BlueMatt>
jtimon: literally the point of "fully validating" a tx is to validate it against a CCoins-holding UTXO db
< BlueMatt>
jtimon: did you look at the top commit on the branch I sent you?
< sipa>
efficiency of validation is highly dependent on low-level access to coins and chain
< jtimon>
sigh, I thought I had proved it was possible repeatedly...
< sipa>
it's possible if you introduce abstractions everywhere
< jtimon>
is it possible to fully validate a header without depending on chain.o?
< sipa>
no
< sipa>
(unless you abstract it out, of course)
< jtimon>
then what's happening in 8493
< jtimon>
right
< sipa>
but i think such abstractions are both a performance issue and an unnecessary code complication
< jtimon>
well, more than half of 8493 is purely for demonstrating the exposed api and without benchmarking of any kind
< jtimon>
my goal was to separate the code to verify a full block, depending on chain.o and coins.o (but only on the parts related to storage) [maybe put it all in the consensus folder? or wait for later?] but not putting it in the consensus module until you want to expose more and abstract it from chain and coins
< jtimon>
anyway, I'm happy reviewing any related refactors, please ping me
< jtimon>
sipa: does the GetConsensusFlag make any sense to you at a first glance? at least more than the previous version?
< jtimon>
without exposing anything, just as a refactor (note that calling GetConsensusFlag inside ContextualCheckBlock is painful performance-wise)
< bitcoin-git>
[bitcoin] jtimon opened pull request #9279: Consensus: Move CFeeRate out of libconsensus (master...0.13-consensus-dust-out-minimal) https://github.com/bitcoin/bitcoin/pull/9279
< bitcoin-git>
[bitcoin] jtimon closed pull request #7820: Consensus: Policy: Move CFeeRate out of consensus module and create CPolicy interface (master...0.12.99-consensus-dust-out) https://github.com/bitcoin/bitcoin/pull/7820
< gmaxwell>
FWIW, I'm noticing connection slots full on my nodes.
< gmaxwell>
including some clown at 138.68.10.138 who looks like he's connected three times to everyone; while pretending to be android wallet (he's not).
< Lightsword>
gmaxwell, maybe 138.197.197.164 as well
< Lightsword>
and 138.197.197.132 and 180.173.203.229 and 138.197.197.108
< bitcoin-git>
[bitcoin] jonasschnelli opened pull request #9280: [Qt] Show ModalOverlay by pressing the progress bar, allow hiding (master...2016/12/qt_modal) https://github.com/bitcoin/bitcoin/pull/9280
< gmaxwell>
Lightsword: I've only been including ones that show up on all my input hosts, unfortunately since everyone is at limits, that conceals a few.
< BlueMatt>
gmaxwell: just take the unlimited-connection-slots patch?
< sipa>
you can set -maxconnections=1000 without any patches
< BlueMatt>
sipa: huh? I thought that sets you at 125?
< sipa>
no
< sipa>
125 is just the default
< BlueMatt>
sipa: its limited by available sockets
< wumpus>
depends on what the fd limit is
< BlueMatt>
which can be super low, because select()
< BlueMatt>
or this used to be the case
< sipa>
Warning: Reducing -maxconnections from 1000 to 873, because of system limitations.
< sipa>
ok, 873.
< BlueMatt>
oh, 873, hum, I thought it was lower
< wumpus>
select() can handle 1024 on most systems, that's pretty much enough for most cases
< BlueMatt>
whatever, I carry a patch to use poll() to make it actually higher....
< wumpus>
I guess we've held up switching to poll because we expected to switch to libevent any day, that's taking a bit longer than expected :)
< sipa>
Soon! (tm)
< wumpus>
yea :-)
< gmaxwell>
IIRC matt's patch is darn near trivial.
< BlueMatt>
gmaxwell: yes, but given that its 873 not 1XX as I'd thought, probably not worth it
< BlueMatt>
and libevent is actually sooner now
< bitcoin-git>
[bitcoin] kallewoof opened pull request #9281: Refactor: Removed using namespace <xxx> from bench/ & test/ sources (master...no-using-namespace-bench-test) https://github.com/bitcoin/bitcoin/pull/9281
< bitcoin-git>
[bitcoin] paveljanik opened pull request #9282: CMutableTransaction is defined as struct (master...20161205_CMutableTransaction_is_struct) https://github.com/bitcoin/bitcoin/pull/9282
< bitcoin-git>
bitcoin/master c4b6fa8 Pavel Janík: CMutableTransaction is defined as struct.
< bitcoin-git>
bitcoin/master 7d5d449 Wladimir J. van der Laan: Merge #9282: CMutableTransaction is defined as struct...
< bitcoin-git>
[bitcoin] laanwj closed pull request #9282: CMutableTransaction is defined as struct (master...20161205_CMutableTransaction_is_struct) https://github.com/bitcoin/bitcoin/pull/9282
< bitcoin-git>
[bitcoin] jonasschnelli opened pull request #9284: Suppress some annoying deprecation warnings (OSX) (master...2016/12/osx_warnings) https://github.com/bitcoin/bitcoin/pull/9284
< jonasschnelli>
Any comments on our keypoolrefill RPC call behavior?
< jonasschnelli>
The tests prove that nodes[0].keypoolrefill(3) results in 4 available keys.
< jonasschnelli>
But reading the API docs, it should be 3.
< jonasschnelli>
IMO the +1 is wrong here
< jonasschnelli>
(I'd ask because I'd like to fix this with the HD split in ext/int chain)
< dcousens>
hmm, CTransaction assignment was totally removed aye
< dcousens>
sipa: was removing CTransaction& operator=(const CTransaction& tx); necessary, or just a safety precaution?
< dcousens>
meh, I guess I can just CTransactionRef anyway
< dcousens>
eh, nvm, rebased all my local code, tl;dr was just changing CTransaction to CTransactionRef, .vout to ->vout ... and thats it.
< dcousens>
LGTM :)
< instagibbs>
dcousens, I love happy endings
< dcousens>
instagibbs: not so happy yet ha
< dcousens>
trying a fresh-recompile, but master seems to just lock up for me atm
< jl2012>
in what situation, the "Warning: We do not appear to fully agree with our peers! You may need to upgrade, or other nodes may need to upgrade" will be shown?
< morcos>
jonasschnelli: you around?
< morcos>
re: #8501, I agree with not fixing the frequency.. But i'm unsure about the no duplicating the same value over and over again..
< morcos>
It might depend on the use case, but for instance it might be valuable to know that it stayed the same for a while and then incremented all at once, as opposed to not being able to tell that it just hadn't been sampled in between
< morcos>
my thought was if we saved having to record the time stamp, we might be able to put up with lots of duplicate values.. especially if we're saving for instance only 1000 data points at second frequency, it just won't use all that much memory
< morcos>
anyway, sorry, dont mean to redesign your whole PR months after you opened it
< Chris_Stewart_5>
Does -txindex significantly impact performance of IBD? I tried to sync last night and only synced ~10K blocks, which seems slow. Is that reasonable?
< sipa>
that's totally unreasonable
< sipa>
is it stuck?
< sipa>
or just slow?
< Chris_Stewart_5>
extremely slow it seems. I'm using out of box settings on 0.13.1 with -txindex.
< sipa>
does increasing dbcache help?
< Chris_Stewart_5>
I'll try it later and report back, default is 2GB?
< sipa>
default is 300MB
< Chris_Stewart_5>
mmm that is probably why. I thought it was significantly higher. How long does IBD take other people with that setting as default?
< sipa>
what height are you at now?
< Chris_Stewart_5>
403817
< Chris_Stewart_5>
sipa: I should have been more clear, I have been trying to do IBD over the course of a few nights, with results like I said ~10k blocks a night.
< Chris_Stewart_5>
the first ~250k blocks went relatively fast (a couple hour period) but I think some might have already been on disk? Perhaps i'm using the term IBD a little too loosely
< Chris_Stewart_5>
but it is a major sync
< instagibbs>
sdaftuar_, why would you want to spend a coin from your wallet that has descendants (already spent)? I'm surely thinking of this wrong
< instagibbs>
or are descendants calculated from a tx point of view, ie the other output has been spent in a chain, therefore that adds to that count
< sdaftuar_>
instagibbs: oh, yeah i meant in-mempool descendants
< sdaftuar_>
say you have a tx that has 2 outputs, you send me money and give yourself change.
< sdaftuar_>
then i chain 24 transactions off it
< sdaftuar_>
you try to spend your change, but that'll fail
< instagibbs>
ok, didnt think of the fact that outputs are linked re:descendants
< sdaftuar_>
that does seem to be a confusing property of the mempool limiting :)
< instagibbs>
but that is obv in hindsight. Ok, well one issue is you might have asymmetrical limits.
< sdaftuar>
yeah, i suggested using min()
< gmaxwell>
it might have been better if that limit was split across outputs.
< sdaftuar>
gmaxwell: that would then fail to capture the issue at hand, i think
< instagibbs>
sdaftuar, hmm says max on my screen
< gmaxwell>
e.g. A can have up to 24 decendants, it has two outputs, each can have 12 under it.
< sdaftuar>
sorry, max(tx->ancestor, tx->descendants()) should be less than min(ancestorlimit, descendantlimit)
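sdaftuar's corrected rule, stated as code (a sketch, not the actual mempool check):

```cpp
#include <algorithm>
#include <cassert>

// A new spend is safe only if both its ancestor and descendant counts stay
// under the stricter of the two mempool chain limits -- this sidesteps the
// asymmetric-limits problem instagibbs raised.
bool WithinChainLimits(int ancestors, int descendants,
                       int ancestorLimit, int descendantLimit) {
    return std::max(ancestors, descendants) < std::min(ancestorLimit, descendantLimit);
}
```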
< instagibbs>
oh i see nvm
< sdaftuar>
gmaxwell: oh, hm.
< gmaxwell>
in the worst case though it reduces your maximum to a log_outputs(depth), which isn't awesome.
< sdaftuar>
maybe doable, but kind of yuck to implement i think
< gmaxwell>
but it would prevent other people from chewing up your limit. I think this hasn't actually been a problem, though I could imagine it being one in certain kinds of transaction protocols.
< sdaftuar>
what kinds of protocols do you have in mind?
< gmaxwell>
In the abstract, protocols where someone delaying your transaction can allow the party to cheat like atomic swaps. Not that big of a concern since unless the head transaction is confirmed those protocols are not secure against miners.
< sdaftuar>
right, if someone comes up with a use case that does rely on the parent not necessarily being confirmed, then that should alter our thinking
< dcousens>
hmph
< dcousens>
So I'm running master, no changes at all
< dcousens>
And my bitcoind finishes up to verify, then just sits there on 100% CPU usage (probably forever, but who knows)
< gmaxwell>
'up to verify'?
< dcousens>
It fails to open up the RPC, or start synchronizing
< dcousens>
checkblocks
< gmaxwell>
what is the last log entry?
< gmaxwell>
can you attach GDB?
< dcousens>
It still keeps logging, but solely the tor control messages
< gmaxwell>
that sounds like a deadlock then.
< dcousens>
What do I need to do to attach GDB? Happy to do it
< gmaxwell>
dcousens: what OS are you on?
< dcousens>
just collecting info, sec
< gmaxwell>
on *nix: ps aux | grep bitcoin to get the bitcoind pid, then gdb -p <pid> to attach. Then run "thread apply all bt full" to get backtraces from every thread, 0bin that to me, and then you can type q<enter> to quit