< GitHub9> [bitcoin] laanwj pushed 14 new commits to master: https://github.com/bitcoin/bitcoin/compare/c6de5cc88614...3b20e239c602
< GitHub9> bitcoin/master 78b82f4 Suhas Daftuar: Reverse the sort on the mempool's feerate index
< GitHub9> bitcoin/master 49b6fd5 Pieter Wuille: Add Mempool Expire function to remove old transactions...
< GitHub9> bitcoin/master 9c9b66f Matt Corallo: Fix calling mempool directly, instead of pool, in ATMP
< GitHub173> [bitcoin] laanwj closed pull request #6722: Limit mempool by throwing away the cheapest txn and setting min relay fee to it (master...mempoollimit) https://github.com/bitcoin/bitcoin/pull/6722
< BlueMatt> heyyyyyyyy
< jonasschnelli> Nice
< jonasschnelli> Finally
< phantomcircuit> BlueMatt, great now i can start reviewing it
< * phantomcircuit> flees for his life
< BlueMatt> phantomcircuit: I've done that before :/
< wumpus> gmaxwell: would be interesting to check at least; switching to tcmalloc is as simple as LD_PRELOADing a library, don't know about jemalloc
< gmaxwell> same thing for jemalloc.
< phantomcircuit> i wonder how much it would mess up things to put all the global config things into a locked object
< wumpus> good as a workaround, but I'd prefer a solution that doesn't involve making changes at that level. C++ makes it possible to use your own memory pools/allocators for specific things, which can then be discarded in one block (say, at the end of the function); that seems a more predictable way
< gmaxwell> I wish the standard API just let you hint short and long lived objects.
< wumpus> phantomcircuit: which things aren't protected by mutexes that should be ?
< * phantomcircuit> looks at gmaxwell
< wumpus> at the least you'd want a configuration object per module, not single global one, there is already too much lock centralization
< phantomcircuit> wumpus, i believe we do bad things with stuff like fReindex and nPruneTarget
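A minimal sketch of the kind of lock-guarded, per-module configuration object being discussed here; nothing below reflects how Bitcoin Core actually stores these flags today, and the class and method names are hypothetical.

    // Hypothetical per-module config object; fReindex/nPruneTarget are currently
    // bare globals, this just illustrates guarding them behind one lock.
    #include <cstdint>
    #include <mutex>

    class CStorageConfig {
    public:
        bool GetReindex() const { std::lock_guard<std::mutex> lock(cs); return fReindex; }
        void SetReindex(bool f) { std::lock_guard<std::mutex> lock(cs); fReindex = f; }
        uint64_t GetPruneTarget() const { std::lock_guard<std::mutex> lock(cs); return nPruneTarget; }
        void SetPruneTarget(uint64_t n) { std::lock_guard<std::mutex> lock(cs); nPruneTarget = n; }
    private:
        mutable std::mutex cs;     // guards every field below
        bool fReindex = false;
        uint64_t nPruneTarget = 0;
    };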
< wumpus> gmaxwell: yes, unfortunately C and C++ are very much lacking with regard to giving hints to your compiler or runtime, it presupposes them to be much more important
< wumpus> s/important/smart
< btcdrak> gmaxwell: what are the deployment options for SG?
< btcdrak> err, SW.
< gmaxwell> btcdrak: a flag day swap of the network and all software to use a different transaction serialization.
< gmaxwell> I don't think it can be done.
< btcdrak> gmaxwell: if we're going to have a hard fork, we may as well take the opportunity then...
< * btcdrak> runs
< phantomcircuit> btcdrak, there's a huge list of things that should absolutely be in a flag day
< btcdrak> SW is a big win, solves a lot of problems and would make a flagday worth the hassle. If we do end up with a blocksize increase, SW should be added at the same time imo.
< * wumpus> throws a hard frok after btcdrak
< Luke-Jr> gmaxwell: I thought I had convinced sipa a softfork was possible for SW a week or two ago. Or is that just too ugly?
< * btcdrak> has never tried to cross-dress before
< phantomcircuit> btcdrak, the blocksize thing is spv backwards compatible, the only other hard fork proposal im aware of that is, is for commitment schemes in the merkle tree root
< phantomcircuit> note: i dont actually see that as being very useful
< btcdrak> Luke-Jr: I'm sure I had a conversation with someone recently, maaku or BlueMatt maybe, about how SW could be a soft fork.
< phantomcircuit> spv clients can all be updated remotely by the devs since they're apps in an app store thingie
< * Luke-Jr> agrees SPV-backward-compatible is not a particularly desirable attribute.
< phantomcircuit> SW cannot really be a soft fork, you can commit to the sw mtr as a sf but that only gets you the uninteresting part of it
< phantomcircuit> oh but actually you can force the tx to commit to the sw txid in the script sig
< phantomcircuit> ha you could but... please no
< Luke-Jr> phantomcircuit: would it really be worse than this thing cdecker is proposing?
< gmaxwell> btcdrak: it cannot be a softfork. only a weak approximation of it can be, which is what the decker email is. But that approximation doesn't get you benefits like being able to sync blocks without downloading signatures. (itself likely eliminating most of the need for bloom filtered lite wallets, among other benefits); ... and the approximation has costs like a considerable utxo size overhead, and doubling the amount of transaction hashing we have to do.
< Luke-Jr> gmaxwell: I don't see why you can't sync without downloading signatures in the softfork.
< gmaxwell> Luke-Jr: because you can't verify a txout you learn without the whole transaction.
< gmaxwell> (unless in a softfork you mean extensionblock like softforks)
< Luke-Jr> gmaxwell: the witness wouldn't be part of the "whole transaction"; it would be stored in a separate piece of data the block commits to
< phantomcircuit> block commits to the normal txid as normal and the sw id & script hash separately
< gmaxwell> Luke-Jr: hm. interesting, you have a point. So you'd basically have a scriptpubkey that says "go look elsewhere for the signature"
< Luke-Jr> right
< gmaxwell> still ends up needing to store two IDs.
< gmaxwell> hm. or maybe it doesn't.
< phantomcircuit> gmaxwell, segregated witness id should match transaction id if you simply require all the scripts in the normal transaction to be OP_1
< gmaxwell> Luke-Jr: damn you're right. This is pretty cool.
< gmaxwell> phantomcircuit: yea.
< Luke-Jr> ☺
< gmaxwell> phantomcircuit: empty is better.
< gmaxwell> So the idea there would be a new p2sh like scriptpubkey which requires the scriptsig to be empty. The signatures are 'external', and committed in another hashtree committed to by the block.
< gmaxwell> it's identical to SW over whatever span of tx history is exclusively using this scriptpubkey type..
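A rough sketch of the commitment structure being described here, not any deployed format: txids stay exactly as they are today, while each transaction's external signature data is committed in a second hash tree. uint256, Hash() and ComputeMerkleRoot() are assumed Bitcoin-Core-style helpers, and the witness serialization itself is left abstract.

    // Sketch only: the block commits to the normal txid merkle root as today,
    // plus a second root binding each txid to its external witness blob.
    #include <vector>

    uint256 ExternalWitnessRoot(const std::vector<CTransaction>& vtx,
                                const std::vector<std::vector<unsigned char> >& vwitness)
    {
        std::vector<uint256> leaves;
        leaves.reserve(vtx.size());
        for (size_t i = 0; i < vtx.size(); i++) {
            const uint256 txid = vtx[i].GetHash();
            // Bind the i-th transaction to the signatures stored outside it.
            leaves.push_back(Hash(txid.begin(), txid.end(),
                                  vwitness[i].begin(), vwitness[i].end()));
        }
        return ComputeMerkleRoot(leaves); // committed elsewhere in the block
    }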
< Luke-Jr> (which could be enforced by the softfork, if it was desirable)
< gmaxwell> Luke-Jr: then it turns it into a flagday which is the hard thing to avoid.
< Luke-Jr> sure
< Luke-Jr> personally I think we might as well just do it in a hardfork
< Luke-Jr> but a softfork is possible
< gmaxwell> the problem is that it is not very reasonable to demand the entire bitcoin ecosystem cut cold to new code all at once.
< gmaxwell> (I mean, you could try to demand it but the result would be epic amounts of failure)
< gmaxwell> esp at the high degree that people reimplement everything themselves.
< Luke-Jr> we'd need to ship a version with both code
< Luke-Jr> but that's true of any hardfork
< gmaxwell> Sure, yea yea, the issue isn't us. it's j random wallet thingy that hardly works at all _currently_. It might even manage to ship code for both modes, but then the other one just won't actually work.
< Luke-Jr> oh, you mean tx creation :D
< gmaxwell> because outside of bitcoin core most software (but by no means all) in the ecosystem is developed by teams of one or two people, and shipped with little to no systematic testing.
< gmaxwell> not just creation but handling.
< gmaxwell> Note how we still have no remotely decent block explorer thingy for elements alpha.
< Luke-Jr> yeah, I guess that's a big problem
< * Luke-Jr> ponders if we can do the softfork way that remains tx-compatible with a future hardfork
< gmaxwell> I think it's almost that already, without special considerations.
< Luke-Jr> maybe
< GitHub36> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/3b20e239c602...0fbfc5106cd9
< GitHub36> bitcoin/master 41db8c4 Wladimir J. van der Laan: http: Restrict maximum size of request line + headers...
< GitHub36> bitcoin/master 0fbfc51 Wladimir J. van der Laan: Merge pull request #6859...
< GitHub199> [bitcoin] laanwj closed pull request #6859: http: Restrict maximum size of http + headers (master...2015_10_max_http_headers) https://github.com/bitcoin/bitcoin/pull/6859
< gmaxwell> Is there a reason that for fee purposes we are not using max(size, sigops*BLOCK_MAX_BYTES/BLOCK_MAX_SIGOPS) as the size of a transaction?
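For illustration, the rule gmaxwell is asking about could look roughly like the sketch below; the constants are the consensus limits of the time, and the function name is made up.

    #include <algorithm>
    #include <cstddef>

    static const unsigned int MAX_BLOCK_SIZE = 1000000;
    static const unsigned int MAX_BLOCK_SIGOPS = MAX_BLOCK_SIZE / 50; // 20000

    // Charge a transaction for whichever budget (bytes or sigops) it uses more of:
    // a tx consuming x% of the sigop limit is priced as at least x% of the byte limit.
    size_t EffectiveSizeForFees(size_t nTxBytes, unsigned int nSigOps)
    {
        const size_t sigopBytes = (size_t)nSigOps * MAX_BLOCK_SIZE / MAX_BLOCK_SIGOPS;
        return std::max(nTxBytes, sigopBytes);
    }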
< GitHub192> [bitcoin] domob1812 opened pull request #6863: [Test Suite] Fix test for null tx input (master...null-txin-test) https://github.com/bitcoin/bitcoin/pull/6863
< gmaxwell> I'd like to see this go in: https://github.com/bitcoin/bitcoin/pull/6622 -- I've had it in testing on several nodes (including one acting as a gateway to the outside world for several others) for over a month now.
< gmaxwell> Does anyone have any views on what else we need for #6622?
< gmaxwell> libevent related compile fail on one of my older fedora hosts, error: ‘EVENT_LOG_WARN’ was not declared in this scope
< btcdrak> gmaxwell: if this is the first time compiling since libevent merge you need to do a git clean -dfx
< gmaxwell> btcdrak: fwiw, in my case no concern but you might want to take care in advising people to git clean -d :)
< gmaxwell> "omg I was keeping my wallet in my source directory!"
< wumpus> at least mention what it does
< gmaxwell> yea.
< gmaxwell> So in any case, cleaning doesn't make that go away. I see some other EVENT logging stuff appears to be ifdef guarded.
< wumpus> but after larger changes to the build system it's good advice
< gmaxwell> this appears to have libevent-2.0.so.5.1.6
< wumpus> I don't know what the minimum version of libevent is that is supported, may be too old
< gmaxwell> well hacking out that one like got it to build.
< wumpus> though if it just errors about the logging stuff, commenting that out may make it compile further
< wumpus> good
< gmaxwell> er one line.
< gmaxwell> testing now.
< gmaxwell> warren: what libevent version does current RHES have?
< wumpus> it's not necessary, it's just nice, redirects libevent errors to our debug log, likely you could put some #ifdef guard around it...
< gmaxwell> unit tests pass.
< wumpus> #if LIBEVENT_VERSION_NUMBER >= 0x....
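One possible shape for that guard, assuming the guarded piece is the log-redirect callback: EVENT_LOG_WARN is a macro in newer libevent headers, so the whole redirect can simply be compiled out on older versions (LogPrintf stands in for Bitcoin Core's logger, and the function name is illustrative).

    #include <event2/event.h>

    #ifdef EVENT_LOG_WARN
    static void libevent_log_cb(int severity, const char* msg)
    {
        if (severity >= EVENT_LOG_WARN)
            LogPrintf("libevent: %s\n", msg); // forward libevent warnings to debug.log
    }
    #endif

    void RedirectLibEventLogging()
    {
    #ifdef EVENT_LOG_WARN
        event_set_log_callback(&libevent_log_cb); // skipped on libevent too old to define the macro
    #endif
    }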
< warren> gmaxwell: RHEL7 you mean?
< gmaxwell> warren: Yes, whatever version is most current.
< warren> libevent-2.0.21-4.el7
< warren> Keep in mind that if <whatever> is important you need to actually look at the source package as they often will forward port things.
< gmaxwell> warren: okay, thats newer than what this host has in any case.
< warren> wow, that's the same version in Fedora 22. I guess it didn't change upstream in a while.
< gmaxwell> the particular host I was trying on is my oldest still running fedora box, with F19.
< wumpus> 2.0.21 is the one-to-latest stable of libevent, they have been working on 2.1 for a long time but it's still in alpha
< GitHub79> [bitcoin] MarcoFalke opened pull request #6864: [qt] Use monospace font (master...MarcoFalke-2015-qtMonospace) https://github.com/bitcoin/bitcoin/pull/6864
< btcdrak> gmaxwell: omg
< jgarzik> gmaxwell, ok to merge #6622, IMO
< gmaxwell> cfields_: Any thoughts on #6622? If you look at it-- it's largely orthogonal to your overall networking work, happening at a higher level. (later rate limiting support would probably complement it nicely)
< cfields_> gmaxwell: hmm
< gmaxwell> wow, we never went and set TCP_NODELAY on our sockets? we should do that right away-- perhaps even in backports. We've talked about it before but apparently never did it.
< jgarzik> gmaxwell, not that I disagree, but... what's your specific motivation to use NODELAY & disable Nagle?
< gmaxwell> jgarzik: now that we've removed all the gratuitous sleeps from the networking, nagle is almost certainly slowing our performance. Consider how chatty the bitcoin protocol is. It likely also makes the traffic more bursty, which isn't good for other users sharing the network.
< gmaxwell> Basically all of our writes should be large enough that nagle shouldn't be saving us much, and the smaller writes we do make tend to be latency-sensitive.
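What the change amounts to, as a minimal POSIX sketch (error handling and the Windows setsockopt variant omitted):

    #include <netinet/in.h>
    #include <netinet/tcp.h>
    #include <sys/socket.h>

    void SetSocketNoDelay(int sockfd)
    {
        int set = 1;
        // Disable Nagle's algorithm so small protocol messages go out immediately
        // instead of being coalesced while waiting for outstanding ACKs.
        setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, (const void*)&set, sizeof(set));
    }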
< jgarzik> gmaxwell, nod
< jgarzik> gmaxwell, just theory or is someone reporting this?
< jgarzik> certainly appears low hanging fruit
< cfields_> gmaxwell: agree that the logic for deciding what to avoid sending belongs at a higher level. The implementation itself seems a bit naive, though?
< gmaxwell> jgarzik: Suhas mentioned in an email some testing that seemed impacted by it. I'm just surprised we hadn't done this already, historical analysis suggests it's somewhat my fault! :(
< sdaftuar> jgarzik: yeah tcp_nodelay speeds up block relay by a round trip
< gmaxwell> (jgarzik you brought it up previously, and I (thinking of the 100ms sleeps) said we had much more low hanging fruit. :P )
< jgarzik> yep it was an issue when I rewrote low level network reads/writes code
< gmaxwell> I'd previously gone around to miner software and p2pool and had them fix it, even before it had been brought up in bitcoin core.
< jgarzik> thanks. I was curious where it would first trip up ppl
< btcdrak> can someone with rights please restart this travis job please? https://travis-ci.org/bitcoin/bitcoin/jobs/86431798
< cfields_> btcdrak: just force-push
< btcdrak> not my PR
< morcos> wumpus: sipa: i've been trying to learn a bit more about this intensive memory usage in getblocktemplate. i still haven't quite figured out why the fragmentation is so bad that nothing can get cleaned up, but there are a few things happening
< morcos> during CreateNewBlock, we fetch all the coins needed for all the txs in our mempool.
< morcos> there is a 2x hit for this, as we make a copy in pcoinstip (assuming they aren't already cached there) and a copy in the view in miner.cpp
< morcos> also note that the dbcache size we set here will have just been ignored as we're loading up pcoinstip. so if we're over that it'll just get flushed again once we're done with CNB
< morcos> also because the rpc call is running in a different thread, if the populating of the cache in pcoinstip trips the load factor of the CCoinsMap, then we'll end up moving the CCoinsMap memory to a different arena.
< wumpus> right, that is how it will happen with the coinsview caches
< morcos> so both the size of the coins in the miner.cpp view and the size in the coins of pcoinstip could possibly be replicated in #rpcthreads + 1
< wumpus> indeed
< gmaxwell> morcos: sipa and BlueMatt have been working on the view cache memory usage.
< morcos> i'm curious as to what they're doing
< wumpus> it seems a bit redundant that the child cache also stores everything
< jgarzik> morcos, RE fragmentation: part of that has to do with the low level allocator and how that memory is returned to the OS. Older allocators use sbrk(), which makes it impossible to release memory unless there is absolutely no data structure, including hidden-from-app mgmt ADTs, within the memory range. Ditto newer mmap-based allocators, which will not munmap unless there is perfect reclamation by app + libc.
< morcos> yes, thats what i was thinking
< jgarzik> So you're stuck even if libc allocation structs are left behind
< jgarzik> (typically two pointers, in negative offsets from your DS)
< morcos> wumpus
< wumpus> especially as this is a read-only view
< morcos> wumpus: which is the child cache? The cache in view has to store everything b/c it might get modified by in mempool txs
< wumpus> oh it isn't?
< morcos> however the cache in pcoinstip doesn't need to be populated
< morcos> view (in miner.cpp) is backed by pcoinstip is backed by the database
< wumpus> that's the parent cache
< wumpus> and sure - the modified coins need to be stored
< morcos> i think its kind of silly that you set a dbcache size, but then can blow through that inside CreateNewBlock
< morcos> of course its also made much worse by the organization of the mining code
< morcos> no reason to go through every single tx
< wumpus> e.g. the coins that are the same in the child and parent cache don't need to be stored twice
< wumpus> only if the one in the child cache starts to deviate because it is changed by a mempool transaction
< morcos> yes, thats what i'm suggesting.. when you do a fetch from the child cache, just pass them down from the grandparent cache without storing them intermediately in the parent cache
< wumpus> right, assuming they aren't changed in the parent cache, they don't need to be cached there
< morcos> this is related to #5967
< morcos> where i change the assumptions that the parent cache can't be flushed
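A heavily simplified sketch of the pass-through fetch morcos is suggesting, with entirely hypothetical layer types and method names (the real CCoinsViewCache interface differs; uint256 and CCoins are assumed Bitcoin Core types): a miss in the intermediate cache is served from the backing view and stored only in the top-level miner view.

    struct CoinsLayer {
        // Look up a coin in this layer only; return false on a miss.
        virtual bool Lookup(const uint256& txid, CCoins& out) const = 0;
        // Cache a coin in this layer.
        virtual void Store(const uint256& txid, const CCoins& coins) = 0;
        virtual ~CoinsLayer() {}
    };

    bool FetchForMiner(CoinsLayer& minerView, CoinsLayer& tipCache, CoinsLayer& db,
                       const uint256& txid, CCoins& coins)
    {
        if (minerView.Lookup(txid, coins)) return true; // already in the miner's view
        if (tipCache.Lookup(txid, coins)) {             // present at the tip level:
            minerView.Store(txid, coins);               // reuse it, don't duplicate it again
            return true;
        }
        if (!db.Lookup(txid, coins)) return false;      // read from the database view,
        minerView.Store(txid, coins);                   // populate only the miner view,
        return true;                                    // leaving pcoinstip (and dbcache) alone
    }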
< morcos> slight shift of subject, how do most miners use bitcoind now?
< morcos> they do use getblocktemplate?
< wumpus> is there another way?
< morcos> and in effect have CreateNewBlock control their mining algorithm, or they do something else?
< morcos> i don't know i thought i saw chatter somewhere about miners not using it
< morcos> but i don't know what else you would do
< wumpus> I think they all use getblocktemplate in some form or another, although some may have made large changes to the code
< morcos> i'm wondering how to think about rewriting the mining code
< morcos> it could use a substantial rewrite
< morcos> but i guess it doesn't have to be thought of as consensus critical if the testing/submitting aspects of it haven't changed?
< wumpus> block validation is consensus critical, creating blocks from the mempool isn't
< gmaxwell> morcos: I suspect you misunderstood whatever that chatter was (or someone was confused); everyone uses GBT to get work from bitcoind. Many parties also do other stuff as well.
< gavinand1esen> morcos: +1 . I'd suggest starting with a clean sheet of paper, and figuring out what would be best for solo miners / mining pools
< morcos> it seems like there is some low hanging fruit. once you already have 999,900 bytes in a block, you don't need to scan the remaining 2GB of txs just to see if you find a magical 100 byte tx for instance
< morcos> gmaxwell: ok good... i definitely didn't understand whatever i read
< gmaxwell> so long as the resulting block is valid I don't think it matters how it was done; so I think it's fine to rewrite.
< gmaxwell> morcos: might have been a comment that few/no one is using GBT to go from pools to mining devices; which is the case.
< wumpus> yeah it obviously matters that it generates correct blocks, but the exact algorithm is not set in stone
< gmaxwell> morcos: as far as rewrite goes, being able to get createnewblock out of the latency critical path for mining would be the biggest win from the user's perspective.
< morcos> does it make sense to start with a simple improvement to the existing algorithm, and then separately or later try to use more advanced logic that takes ancestor packages into account? would it make sense to ever support 2 different algorithms
< morcos> gmaxwell: can you explain that a bit more? you mean once a new block comes in, the time it takes to have a new potential header to work on?
< gmaxwell> (e.g. by returning an empty template when there isn't one precomputed, then computing a template in the background)... and that's independent of the algorithms, but it means that more computationally expensive algorithms would be reasonable.
< morcos> even if its not optimal
< morcos> yes, ok that makes a lot of sense
< gmaxwell> morcos: what I was thinking was: there is a cached block template, when a new block comes in, it's no longer valid (as we have a new block) so it's flushed. If there is no cached template available, just generate one that has no transactions (fast), and trigger creating a template in the background.... and the background one can get recomputed on a timer or on new-block arrival, so long as GBT requests keep coming in.
< gmaxwell> we cache CNB right now, but it's "too late" :)
< jgarzik> +1
< jgarzik> that's been the general template for a rewrite, for years
< morcos> gmaxwell: yes, thats what we were just discussing...
< morcos> so a new thread for template generation is ok?
< gmaxwell> morcos: I think so!
< jgarzik> practically required
< wumpus> I don't see why not, but make sure to only do the work if a node is actually mining
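A rough sketch of that flow with hypothetical names (CBlockTemplate's hashPrevBlock field, MakeEmptyTemplate, ScheduleTemplateRebuild are made up); only the control flow is the point, and per wumpus the background work should only run while the node is actually being asked for templates.

    #include <memory>
    #include <mutex>

    static std::mutex g_templateMutex;
    static std::unique_ptr<CBlockTemplate> g_cachedTemplate;

    CBlockTemplate GetTemplateForRPC(const uint256& tipHash)
    {
        {
            std::lock_guard<std::mutex> lock(g_templateMutex);
            if (g_cachedTemplate && g_cachedTemplate->hashPrevBlock == tipHash)
                return *g_cachedTemplate;      // cached template still matches the current tip
        }
        // Stale or missing: answer immediately with an empty (transaction-free)
        // template so miners can switch work, and rebuild a full one off-thread.
        ScheduleTemplateRebuild(tipHash);      // worker repopulates g_cachedTemplate later
        return MakeEmptyTemplate(tipHash);
    }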
< morcos> what should a call to getblocktemplate do if cs_main is locked? seems too bad that you really don't care about a lot of the cs_main locks
< morcos> but if the chain is being updated, you might prefer to wait for that to happen before the template is returned
< jgarzik> does it need cs_main if it's simply returning a cached version?
< jgarzik> seems like the RPC call should return cached version, and b/g compute thread is what needs the lock
< morcos> i suppose maybe you could lock the template if you know you're activating a new chain
< morcos> ok i guess i need to learn more about how notification of new tips works anyway
< jgarzik> if a new block comes in, the cache is invalidated + regenerated as an empty template + kicks off new gen thread
< morcos> but yes i agree in theory you should just need a tiny lock on whether your template is switched to the new one
< gavinand1esen> I was very close to refactoring the block validation code a tiny bit so the cs_main lock was released while transaction validity was checked... theoretically not a difficult change
< morcos> gavinand1esen: that seems scary
< gavinand1esen> that's why theoretically....
< jgarzik> should just need a quick swap upon new-block, not a long term lock
< morcos> jgarzik: isn't the technical word for that "tiny"?
< jgarzik> can be lockless technically :)
< gmaxwell> yes, the cache could be RCU, but I don't think thats needed. :)
< jgarzik> I was thinking more std::atomic
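The "quick swap" could be as small as publishing each finished template through an atomically exchanged shared_ptr, using the C++11 free functions and never touching cs_main on the read path; CBlockTemplate is assumed, everything else is illustrative.

    #include <memory>

    static std::shared_ptr<const CBlockTemplate> g_template;

    void PublishTemplate(std::shared_ptr<const CBlockTemplate> fresh)
    {
        std::atomic_store(&g_template, std::move(fresh)); // builder thread: O(1) pointer swap
    }

    std::shared_ptr<const CBlockTemplate> CurrentTemplate()
    {
        return std::atomic_load(&g_template);             // RPC thread: reads without cs_main
    }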
< morcos> ok all sounds good... i'll give it a shot
< gavinand1esen> morcos: I was mapping out the work needed to validate blocks in parallel (as part of prep work for looking at broadcasting 'weak blocks'); was pondering an extension to CCoinsViewCache that used leveldb read-only snapshots to validate against the UTXO set as of the last block, which could let validation against two different CCoinsViewCache's happen in parallel without the cs_main lock
< gavinand1esen> morcos: that idea might work for a cs_main-free CreateNewBlock, too....
< sipa> signature validation already happens without the cs_main lock
< morcos> sipa: you mean b/c they are happening in other threads? but the cs_main lock is held and any further processing waits for them to finish
< morcos> sipa: btw, did you see my comments about doing threaded signature checking in ATMP in addition to in ConnectBlock?
< morcos> it seems that's when the actual work is being done a lot of the time anyway
< sipa> morcos: i did not see that, but it certainly may make sense
< sipa> i'd like to move signature checking to be something fully asynchronous though
< sipa> with a queue that is worked on by threads, and notification callbacks happen when validation succeeds/fails
< morcos> oh... huh, so a block could just assume that all signatures were valid and keep processing the rest of the block, but only at the very end wait
< morcos> wow that would be way better
< sipa> it's nontrivial :)
< morcos> not for you!
< jcorgan> i think it would be a great idea...if sipa does it :-)
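A very reduced sketch of the "queue worked on by threads, with notification callbacks" idea; Bitcoin Core already has a CCheckQueue for parallel script checks inside ConnectBlock, and everything below is illustrative rather than that interface (shutdown handling omitted).

    #include <atomic>
    #include <condition_variable>
    #include <functional>
    #include <memory>
    #include <mutex>
    #include <queue>

    struct CheckBatch {
        std::atomic<int> remaining{0};      // set by the producer to the number of checks queued
        std::atomic<bool> ok{true};         // sticky failure flag
        std::function<void(bool)> done;     // called once, when the last check finishes
    };

    class AsyncCheckQueue {
    public:
        void Add(std::shared_ptr<CheckBatch> batch, std::function<bool()> check) {
            std::lock_guard<std::mutex> l(m);
            work.push({std::move(batch), std::move(check)});
            cv.notify_one();
        }
        void WorkerLoop() {                 // run in one or more worker threads
            for (;;) {
                Item it;
                {
                    std::unique_lock<std::mutex> l(m);
                    cv.wait(l, [&] { return !work.empty(); });
                    it = std::move(work.front());
                    work.pop();
                }
                if (!it.check()) it.batch->ok = false;
                if (--it.batch->remaining == 0)   // last outstanding check of the batch:
                    it.batch->done(it.batch->ok); // notify the caller asynchronously
            }
        }
    private:
        struct Item { std::shared_ptr<CheckBatch> batch; std::function<bool()> check; };
        std::mutex m;
        std::condition_variable cv;
        std::queue<Item> work;
    };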
< gmaxwell> morcos: yes, ... behold the sadness of CHECKSIG NOT, however.
< morcos> hmm... so you don't even know whether you want it to be valid or not
< gmaxwell> if all of script is async then that goes away.
< morcos> i thought processors were good at branching. :)
< morcos> what do you mean goes away?
< gmaxwell> I mean that if the thing you dispatch is the whole script and not just ecdsa then there is no chance that you really want it to fail.
< jgarzik> processors try to predict branching because they suck at it ;p
< morcos> oh...
< gmaxwell> but then the state that needs to be dispatched is large.
< jgarzik> The kernel compiles Berkeley packet filters in a JIT - would be amusing to JIT scripts & sigs
< gmaxwell> jgarzik: by amusing you mean the tremendous fun in finding all the attacks...
< gmaxwell> :P
< gmaxwell> morcos: in Elements alpha we turned CHECKSIG into CHECKSIGVERIFY (if you want a checksig thats allowed to fail, wrap it in an if).
< sipa> gmaxwell: i mean parallellizing script validation
< sipa> not signature validation
< gmaxwell> sipa: k. then the negation problem goes away... but lots of data to manage.
< gmaxwell> (the CHECKSIGVERIFY vs CHECKSIG is also motivated by batch EC signature checking, which is faster and only gives you a pass/fail for the whole batch).
< morcos> i'd feel better about it if we more formally declared what the state that can affect script validation is
< morcos> yeah do we need CHECKSIG?
< gmaxwell> No, there is no need for CHECKSIG--- in theory we could softfork convert existing checksig into a CHECKSIGVERIFY; though there's some risk of breaking some crazy existing scriptpubkey and making it unspendable.
< gmaxwell> So that's probably not advisable, but any future checksig-like operators will certainly be VERIFY only.
< morcos> wouldn't that not be a soft fork because of the CHECKSIG NOT's
< morcos> it seems like you could just bump tx version if you didn't want to worry about existing scriptpubkeys
< gmaxwell> morcos: it would be a soft fork: If you had a CHECKSIG NOT with a bad input you'd just get denied before you got to the NOT.
< gmaxwell> Less aggressive but probably equally useful is CHECKSIG requiring that the only signature that is allowed to fail is the zero length signature. (so you can decide if it fails or not without doing anything but checking the length)
< gmaxwell> morcos: the key criterion for a soft-fork is that it cannot turn any invalid thing valid. So a soft fork that just added additional reasons for the script to fail when executing a checksig would be a softfork (if not an advisable one!)
< gmaxwell> the only-zero-length-can-fail rule also has the advantage of being less potentially incompatible with existing scriptpubkeys.
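A sketch of how that relaxed rule could look inside CHECKSIG evaluation; this is not current consensus code, and CheckSig() stands in for the real signature check.

    #include <vector>

    // Only a zero-length signature may "fail politely" and push false; any
    // non-empty signature that fails verification aborts the whole script.
    bool EvalCheckSig(const std::vector<unsigned char>& vchSig,
                      const std::vector<unsigned char>& vchPubKey,
                      bool& fResultOnStack)
    {
        if (vchSig.empty()) {
            fResultOnStack = false;   // the one allowed failure, decided by length alone
            return true;
        }
        if (!CheckSig(vchSig, vchPubKey))
            return false;             // invalid non-empty signature: script is invalid
        fResultOnStack = true;
        return true;
    }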
< morcos> gmaxwell: yes, i just got confused about whether verify shortcircuited or not
< morcos> ah so its the tx version of the spending tx that matters
< gmaxwell> As far as version, it's a little complicated. We don't have versions on txouts in transactions. Though we do track the transaction version in the utxo set at least.
< gmaxwell> so one could use that utxo version to decide what rules to apply. alternatively one could use the spending transaction's version... and if you need to make a signature with the old rules to have it pass, then use the old version.
< gmaxwell> So I think that would work.
< morcos> yeah, but it's more messy than i hoped
< jgarzik> one step at a time - just getting a b/g regen would be nice
< jgarzik> ;p
< gmaxwell> Not really sure if it's worth it for the existing checksig... esp since for ECDSA with our current encoding it only gives you the async gains. For the schnorr in elements it allows a 2x verification speedup from batch verification.
< gmaxwell> sipa: can you please get cdecker to not call his work "normalized transaction ids"
< gmaxwell> (or have you met with him already?)
< sipa> gmaxwell: just did
< sipa> why not?
< gmaxwell> sipa: because of the instant automatic assumption that this results in a transaction ID that cannot change.
< gmaxwell> sipa: did you see that luke figured out how to make SW a soft-fork, rather elegantly too?
< sipa> no, but i did talk about it with cdecker
< sipa> gmaxwell: well it'd be a second id... transactions would have a txid and an ntxid
< CodeShark> +1 to getting rid of scriptSig :)
< gmaxwell> sipa: but the ntxid is not actually non-malleable. So if you write software expecting that it is, you will be disappointed (and maybe even suffer funds loss).
< gmaxwell> It's non-malleable against certain kinds of operations under certain conditions, and quite useful. But we had this problem before that people assume this ID is actually more useful for accounting purposes than it is (with your prior writeup)
< sipa> anything except sighash flags that isn't covered?
< gmaxwell> signer just modifying the transaction; e.g. I pay you and it takes two hours to confirm, and during that time I RBF the payment (to boost the fees, for example). You still get paid all the same but now your software is confused when the ntxid changed out from under you.
< CodeShark> you can also stick in some NO_OPs into the script just to mess with the system :)
< gmaxwell> And if you go look at the malleability paper from fc2015 thats the kind of thing wallets were doing with txid and getting horribly confused.
< CodeShark> or write a different script with equivalent logic
< sipa> gmaxwell: that's not malleability
< sipa> but fair enough
< sipa> do you have a better idea?
< sipa> name
< CodeShark> so the main feature is that malleating the transaction requires creating a new signature/new signatures?
< gmaxwell> I think we should do the SW instead, and then it's just the same old txid and we do not introduce yet another ID that people need to deal with, which is supposed to help malleability but still can't actually be used as a payment identifier.
< sipa> SW doesn't help against these
< sipa> it also changes txid on resigning/modifying a transaction
< gmaxwell> Indeed, it doesn't but it doesn't introduce yet another ID which also doesn't do what people expect it to do unconditionally (because the expectation is unreasonable)
< gmaxwell> and it has a number of additional benefits. (e.g. allowing private lite wallets that don't download signatures; or syncing the history without syncing the signatures)
< sipa> i fear this is a lot more work than you think
< sipa> not just in bitcoin core
< gmaxwell> And not increasing the size of the utxo set.
< sipa> it means we need new messages for blocks and transactions
< sipa> to relay extra data
< sipa> store that on disk
< gmaxwell> You can use it in a softfork manner without updating. Actually the bitcoin protocol _used_ to relay extra data outside of transactions but that got removed.
< sipa> go fetch it from peers that have it
< gmaxwell> Yes, it complicates relay; which is the downside compared to the approach in EA.
< sipa> it introduces a new type of block withholding
< gmaxwell> sipa: I don't think it does.
< sipa> if an old client gives you a block without the witness you need to go fetch it elsewhere
< gmaxwell> sipa: or you just require that the network of updated nodes be connected.
< sipa> so you stop accepting blocks from old nodes? yuck...
< gmaxwell> E.g. you wouldn't fetch a block from a older protocol version.
< gmaxwell> which, of course, could be deployed first.
< gmaxwell> In exchange you: Avoid giving the public two txids both of which don't do what they expect, you avoid increasing the UTXO set size by 30%, you allow new sync methods that don't transfer the signatures (saving 2/3rd the bandwidth), and you avoid double hashing transaction data to compute two IDs on it.
< morcos> 9 blocks in the last 20 mins?
< sipa> and you get a single txid that equally doesn't do what you expect, pretty much need everyone to upgrade anyway, or you need to keep supporting chains of transactions in which some are old-version which breaks even the intended effect
< sipa> while ntxid is strictly better at accomplishing the goal of malleability protection, and is a simple new opcode to softfork in
< sipa> (with the downsides you mentioned, yes)
< gmaxwell> sipa: the txid is functionally the same as the ntxid but at least it's not confusing people by being something new that they'll understand as existing to accomplish something that cannot be accomplished.
< gmaxwell> sipa: it is strictly inferior.
< sipa> ntxid is recursive; sw is not - if some transactions in the chain do not use sw, it breaks; ntxid does not break
< gmaxwell> sipa: okay point... This is true, but the cost is a 30% increase in utxo size. I don't think that is reasonable.
< gmaxwell> that's a significant and permanent increase in the total operating cost of the system.
< gmaxwell> as far as the recursive case goes, it's not so simple-- for contracts all the participants are using the new style scriptpubkeys and you can enforce it so it's fine.
< sipa> hmm, i thought i did the math before, and it was 20% extra
< sipa> now it seems to be 50% extra
< gmaxwell> for ntxid none of the existing software is using these IDs, so they gain none of the protection, and nodes need reference rewriting to fixup transactions when their parents change.
< sipa> yeah, that fixup is annoying
< gmaxwell> I agree it's somewhat less annoying than having to provide an upgraded relay mechanism, but we need to upgrade relay regardless... and doing weird topology things to assure connectivity, but these are one time and short term costs.
< sipa> this makes no sense
< sipa> the average serialized size of txouts in the utxo set (excluding txid) is 7 bytes
< sipa> "transactions": 7386379,
< sipa> "txouts": 32564578,
< sipa> "bytes_serialized": 480114635,
< gmaxwell> My 30% was just 22+22+22+4+4+32 vs that plus 32 more.
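Spelling out that back-of-the-envelope figure: 22 + 22 + 22 + 4 + 4 + 32 = 106 bytes for a typical utxo entry, and adding a second 32-byte id gives 138 bytes, i.e. 138/106 is roughly 1.30, the ~30% increase quoted.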
< gmaxwell> sipa: f2pools empty pubkey attack... but I didn't think it was that many.
< sipa> also, i thought the total size was way larger than 480 MB
< gmaxwell> oh indeed, something there is screwed. lemme check.
< gmaxwell> Mystery solved, the obfuscation change broke the stats by cherry-picking bugged code from sipa's addrindex branch.
< gmaxwell> Author: Pieter Wuille <pieter.wuille@gmail.com>
< gmaxwell> Date: Wed Oct 7 17:12:24 2015 -0700
< gmaxwell> - stats.nSerializedSize += 32 + slValue.size();
< gmaxwell> + stats.nSerializedSize += 32 + pcursor->GetKeySize();
< GitHub125> [bitcoin] sipa opened pull request #6865: Fix chainstate serialized_size computation (master...fixchainsize) https://github.com/bitcoin/bitcoin/pull/6865
< sipa> so, it's a 22% increase to add ntxid at this point
< sipa> "bytes_serialized": 1065420089
< btcdrak> sipa: that's a lot
< gmaxwell> indeed, it's a figure that is artificially lowered by the spam attacks...
< sipa> lowered?
< sipa> ah, the 22% is lowered - yes
< GitHub36> [bitcoin] MarcoFalke opened pull request #6866: [trivial] fix white space in rpc help messages (master...MarcoFalke-2015-rpcWhitespace) https://github.com/bitcoin/bitcoin/pull/6866
< gmaxwell> sdaftuar: your headers first relay test... can you try that with nagle disabled? :P (e.g. is most of the benefit from the change nagle? (trivial patch))
< sdaftuar> gmaxwell: basically all the benefit is nagle
< sdaftuar> if we turn it off, the existing relay code works as it's supposed to (one round trip instead of two)
< sdaftuar> #6494 wasn't intended to "fix" that bug, it's intended to make relaying generally work better in the case of reorgs
< sdaftuar> it just happens to not trigger nagle
< gmaxwell> yea, obviously #6494 is good and we should do it.
< gmaxwell> oh actually it was gavin that was testing it sorry.
< GitHub20> [bitcoin] gmaxwell opened pull request #6867: Set TCP_NODELAY on P2P sockets. (master...nodelay) https://github.com/bitcoin/bitcoin/pull/6867