< gmaxwell> 2016-10-23 00:57:16 - Disconnect block: 1651.55ms
< gmaxwell> :( actually the DisconnectBlock message understates it; seeing on a fast machine 97 seconds between update tips while disconnecting blocks.
< morcos> gmaxwell: yeah, disconnects are slow.. but those are on my list once i'm done banging my head against a wall with connects... i tried to look that up on my nodes and i didn't see one
< gmaxwell> I'm trying to roll back to a couple months ago for some tests.
< gmaxwell> and it's going to take hours. I'm not sure if we regressed or it's just increased load. It was always slow, but when invalidateblock was first written I tested it by reorging back all the way to the start... and I know it didn't take me months. :)
< sipa> 18:20:59 <sipa> for every undo that creates a UTXO that doesn't exist yet (because it's an undo of the last spend from one txid), it's trying to 'modify' that entry, looking for it on disk
< sipa> 18:21:14 <sipa> not realizing that it needs to create a new one, and thus can avoid the lookup
< sipa> so it causes on average 1 disk lookup for each txid being rolled back
< sipa> over long periods of time
< sipa> it would be an easy fix except for the fact that we want to use the more thorough current behaviour in the start-up consistency check (ah! that explains why that check is so slow as well...)
< gmaxwell> in addition to that, it looks like it's spending most of its time twiddling with the mempool. setting the relay fee to 1btc/kb and the mempool size to the minimum has it going fast.
< gmaxwell> not blindingly fast but fast enough that I wouldn't have commented (maybe 4 blocks per second or so)
< gmaxwell> sampling the backtrace seems to show a lot of it under UpdateForDescendants
< gmaxwell> so I have a node configured with connect=0 (what I've historically done when wanting no connections) and I see that it's managing to connect to itself over and over again...
< gmaxwell> 2016-10-23 01:40:11 trying connection 0 lastseen=0.0hrs
< gmaxwell> 2016-10-23 01:40:11 Added connection peer=721
< gmaxwell> 2016-10-23 01:40:11 Added connection peer=722
< gmaxwell> 2016-10-23 01:40:11 send version message: version 70014, blocks=435494, us=[::]:0, peer=721
< gmaxwell> 2016-10-23 01:40:11 connection from 127.0.0.1:53402 accepted
< gmaxwell> 2016-10-23 01:40:11 sending: version (103 bytes) peer=721
< gmaxwell> 2016-10-23 01:40:11 received: version (103 bytes) peer=722
< gmaxwell> 2016-10-23 01:40:11 connected to self at 127.0.0.1:53402, disconnecting
< luke-jr> is there some reason we're not using SVG images in the GUI?
< GitHub8> [bitcoin] luke-jr opened pull request #8996: Network activity toggle (master...networkactive) https://github.com/bitcoin/bitcoin/pull/8996
< gmaxwell> why is getblocktemplate returning txn results for me on a 0.13.1rc1 node?
< gmaxwell> I thought as soon as segwit was defined it needed the capability?
< gmaxwell> ... if that's only triggered on activation, then uh we would expect some portion of hashpower that hasn't correctly prepared to simply drop off at that point.
< gmaxwell> Where did I desync?
< sdaftuar_> gmaxwell: possible I'm misremembering but I think gbt just won't signal for segwit until the capability is specified, but it will still return successfully
< sdaftuar_> There should be tests for the intended behavior in segwit.py or p2p-segwit.py
< gmaxwell> oh that's okay too then!
< wumpus> luke-jr: because including the svg rendering engine would introduce an extra dependency, and also qt4/qt5 differences IIRC
< wumpus> also drawing SVG is generally slower than just drawing pixmaps, unless you have some smart caching layer, I have no clue where Qt is in that regard
< wumpus> tldr it's just a big risky change and things work pretty well as they do
< wumpus> maybe it makes sense when, if ever, moving from Qt Widgets to a QML/Qt Quick based GUI. I have no relevant experience with any of the Qt newness of the last few years though.
< wumpus> in any case I doubt we'll accept a patch that just changes image rendering to svg and has zero visual changes for the user; even though it would be a 'neater' way of doing things in some sense, it's just not worth the review and testing overhead
< wumpus> on the other hand a snappy GUI-redo that blows everyone away and happens to need SVG rendering, well, sure
< nsh> there were a couple of issues identified in the NCC Group audit which might be relevant to bitcoin-core. not sure if they made it upstream.
< nsh> NCC-2016-015 - Out-of-bounds Read in Boost date Class #1459 - https://github.com/zcash/zcash/issues/1459
< nsh> NCC-2016-008 - Potential uninitialized reads #1464 - https://github.com/zcash/zcash/issues/1464
< wumpus> I've seen the report, but thanks for the update
< wumpus> the respective fixes haven't made it upstream yet
< * nsh> nods
< jtimon> gmaxwell: wumpus btcdrak sorry I missed the rest of the conversation in https://botbot.me/freenode/bitcoin-core-dev/2016-10-22/?msg=75272893&page=1
< jtimon> I know btcdrak hates "cascading PRs", but I think it makes sense here
< btcdrak> nice!
< jtimon> but I think they could really just use this temporarily and I'm afraid the block signing part will require a lot more review and testing and will take longer to merge
< jtimon> they can -chain=custom -chainpetname=lightning, new chain
< btcdrak> I'm ok with two PRs. blocksign will be rebased on the first PR anyway right?
< jtimon> if an idiot spends a lot of energy mining that testchain, you can just -chain=custom -chainpetname=lightning2 and show him your middle finger, you can always -listen=0 -connect=... etc
< jtimon> btcdrak: yeah blocksigning will be on top of this one since to create a new blocksigning chain you need to first create a new chain
< jtimon> reading from a file was a later addition, but it was like 4 lines extra
< jtimon> at least reusing ReadConfigFile, if we really prefer a json file over a conf file that may be more extra work, not sure how strongly wumpus wants json, I personally don't see the gain
< btcdrak> I think json makes sense, there are potentially a lot of chain params to configure.
< jtimon> regarding "what they asked for": rusty asked for providing the genesis hash manually instead of it being calculated dynamically from the genesis block (you can force a new genesis just by changing -chainpetname) like this PR does, so I wanted his feedback on that (it's true I didn't need to open the PR yet just for that)
< jtimon> btcdrak: the same number as with a conf file, but you're just not able to reuse util to handle them
< jtimon> I don't think CChainParams::UpdateFromArgs() will get smaller by using json
< btcdrak> It's also easier to share chain details with a json file.
< jtimon> regarding "a pointless feature that we've already lowered the quality of the codebase to enable" I strongly disagree, but I'm not entirely sure what gmaxwell is referring to
< jtimon> he mentions Params().GetConsensus().getdarnaninterger() inside a loop
< wumpus> let's bury that part of the discussion please
< wumpus> both gmaxwell and me went out of line there
< wumpus> blame it on stress, or whatever
< jtimon> as opposed to chainparams.GetConsensus().getdarnaninterger(), so I assume he is not complaining about changes related to #7829 . If he is complaining about the GetConsensus part, that was only for libconsensus, to avoid the chainparams dependency which contains globals. Of course I agree that shouldn't be in a loop: it should be called once and made a local variable outside the loop, and if it is the only chainparams value the function needs, it should take darninteger directly as a parameter instead of the whole CChainParams
< jtimon> ok, not trying to revive any fire, just trying to make sure we're on the same page
< wumpus> I explained the reasoning behind it afterward, and what will be the future direction, no need to rake up anything
< wumpus> right
< jtimon> btcdrak: how is sharing mychain.json any easier than sharing mychain.conf ?
< wumpus> if there's a bla().bla().bla() in a loop we can just factor it out of the loop, this is not rocket science :)
< gmaxwell> putting things in a file does not make them configurable.
< gmaxwell> please keep that in mind.
< wumpus> jtimon: json may be a more convenient format for automatic generation from the tests / handling nested structures, but I don't care deeply
< jtimon> wumpus: what are your thoughts on json vs conf since you brought that up?
< gmaxwell> many of our constants tie deeply into algorithms and protocol assumptions.
< jtimon> I see
< btcdrak> what wumpus said
< wumpus> yes, there is a compromise somewhere on which constants should be configurable; I think the ones currently in ChainParams make sense though
< wumpus> this doesn't mean the entire algorithm should be micro-manageable through the configuration file
< wumpus> unless you want to replace it with JITed Lua or so, but that's clearly not a goal here
< gmaxwell> yep.
< gmaxwell> (as I said, just something to keep in mind.)
< wumpus> it may be possible to switch between discretely defined algorithms in the config file though, e.g. between a PoW or "centrally signed blocks"
< gmaxwell> fRequireStandard = false;
< gmaxwell> sure
< jtimon> once they're in ChainParams they are not constants, but we may have abused ChainParams and maybe we want to try to turn some back into constants (if testnet and regtest have the same values at least)
< wumpus> well they are constants after reading them from the configuration file
< wumpus> they don't change at runtime, that would be madness
< jtimon> the way I was planning to expose the blocksigning was through a variable like -blocksignscript, maybe a boolean too
< wumpus> sure, at the language level they're not constants, but they already are not, because they're fetched from a structure...
< jtimon> wumpus: right, they init once and then ChainParams should be always passed as const
< gmaxwell> there are at least some things where the values in chain params for testnet/regtest are at odds with the code. don't assume the parameters for testnet or regtest were especially well thought out. :) there may be places where we want to reduce their divergence from mainnet in the future... as their differences are a continued source of time-waste.
< wumpus> gmaxwell: that's our whole point, though: there may be use for testnets that are closer to mainnet, or further away from it, depending on the specific testing
< jtimon> I fear a cleanup may require testnet4 and regtest2
< gmaxwell> I am wtfing at regtest2.
< wumpus> regtest2 makes no sense
< gmaxwell> but duh sure, improving things may mean some incompatibilities. that's fine.
< wumpus> there is no shared 'regtest' network
< wumpus> although compatibility between versions may be useful for comparison testing
< wumpus> (!)
< jtimon> well, maybe if you want to make some values more similar to mainnet in order to make them constants again, but I really don't know. I've thought more about testnet4, particularly in the context of bip70, which maybe gets replaced or something
< gmaxwell> E.g. I don't think when we created testnet or regtest anyone thought of the idea of inserting an optional bit mask into the target comparison function, so that high-value blocks could be seen as meeting much lower targets. I think if we'd thought of that we could have avoided some of the special difficulty rules there.
< jtimon> I particularly hate the fact that in bip70 testnet3 is called "test" instead of "testnet3" but that's a tiny detail
< wumpus> well a new testnet would need a new bip70 identifier I guess, so that can be fixed then... but it's a minor, inconsequential thing
< jtimon> and I dislike testnet's special case for pow too, but matt said it was quite useful (I don't really know)
< wumpus> well it will always need a special case for PoW, the question is can we do better than testnet3
< jtimon> not sure I understand gmaxwell's point about the bit mask
< wumpus> without a special case for PoW, a miner entering and exiting it will just kill it; this happened before and was the reason for adding it to testnet3 in the first place. That doesn't mean it's the perfect solution that should be used forever, but just reverting to plain PoW would be stupid.
< gmaxwell> regtest's 'special casing' requires difficulty go below one, which causes a large amount of divergence in the code.
< jtimon> wumpus: maybe a different difficulty filter would help more, but that's strong special-casing too
< wumpus> I sometimes wonder why regtest doesn't just ignore PoW completely
< wumpus> that would mean the PoW checking is regression tested less, of course
< wumpus> jtimon: my point is just that testnet requires special casing there, the type of special casing is open for proposals though
< jtimon> gmaxwell: I see, what about making regtest just always pass the pow check?
< gmaxwell> that could have been better accomplished with a if (params->weakpow) memset(blockhash,0,4); in the pow comparison.
< jtimon> I see
< gmaxwell> jtimon: there are tests that need pow to not be bypassed. Or at least there have been in the past, e.g. feeding lower difficulty blocks.
< wumpus> yes, that would make sense
< jtimon> gmaxwell: yeah just passing pow would need those tests to move to mainnet or testnet, but your solution seems less disruptive
< gmaxwell> the testnet-getting-stuck thing: even the current setup has problems with that, which is part of the call for the availability of signed-block testnets.
< wumpus> regtest as it is now was a compromise back in the day and it works pretty well for regression testing, most trivial alternatives are probably worse
< gmaxwell> it also existed as a patch at first, it wasn't quite so designed out. Did its role fine, but with experience better could be done now.
< jtimon> gmaxwell: right, so maybe after adding signed blocks it makes more sense to remove testnet3's mindif special case
< gmaxwell> It might also be possible to set the rest of the parameters like normal (2016 blocks, yadda yadda), just make it cheaper to mine if it's not fast enough
< gmaxwell> jtimon: potentially.
< gmaxwell> Basically we should either have divergences that _really_ aid testing (signed blocks) or otherwise minimize them; we really have lost a lot of troubleshooting time due to testnet having issues that mainnet would never have, not all due to parameter differences, but many.
< jtimon> right
< jtimon> regarding block signing, I was thinking of making it optional at compile time and just having blocks carry both nBits-nNonce and scriptChallenge-scriptSolution (that's a hit on memory requirements that shouldn't be imposed on production nodes)
< jtimon> previously thought of union, but even that would be a hit if you cannot disable it at compile time
< jtimon> how does that sound?
< jtimon> I mean, even an extra pointer and polymorphism would be 4 or 8 extra bytes per header (apart from the performance concerns of polymorphism)
< wumpus> where would the hit in memory requirements be? header storage?
< jtimon> yep, header storage
< wumpus> ok yes bleh, that structure is already too fat
< jtimon> in elements we just remove nBits and nNonce, but obviously we cannot do that here
< jtimon> so ack on compile time option for signed blocks?
< wumpus> using a union sounds better then
< wumpus> well compile time option does mean it cannot be used for the normal tests
< wumpus> and probably won't be enabled by default in releases
< wumpus> I'd prefer not to do that, unless this is a rarely used, memory hogging option, but then who would enable it in practice?
< jtimon> mhmm, if you allow the option, you compile the tests that need it too?
< sipa> i don't think you can use a union with non-trivial C++ types in it
< jtimon> only devs I was assuming
< wumpus> this is about trivial C++ types such as ints right? nBits, nNonce etc
< sipa> C++11 relaxed the requirements a bit, though
< wumpus> if not, that's a dangerous minefield
< sipa> for block signing the signature is a CScript
< gmaxwell> compile time is pretty sad; how about an auxiliary data structure that only gets created if using signed blocks, e.g. a second index?
< sipa> in CBlockHeader
< wumpus> that needs to be stored for every block, permanently?
< sipa> yes, instead of a nonce you have a signature
< jtimon> I was thinking either a union of structs PowProof and ScriptProof, or an IntOrScript union for both nBits and nNonce
< wumpus> gmaxwell: yes
< gmaxwell> well technically it's a block witness...
< gmaxwell> segregate all the witnesses. :P
< wumpus> that makes sense; by using a union you could even take the additional memory requirement to 0
< wumpus> union a pointer with nBits,nNonce
< jtimon> I mean, the challenge field could be just constant per chain instead of being included in every block, I was just thinking that maybe someone could get creative with CalculateNextScriptChallenge or something
< wumpus> (i mean the additional memory requirement when not using the feature, which is what we care about here)
< jtimon> the thing is, it's actually in both CBlockHeader and CBlockIndex
< jtimon> wumpus: yeah, at a minimum union a pointer would be an extra pointer per block, that's my worry
< sipa> ah, it seems you can have non-trivial classes in a union now
< sipa> but it requires placement new and explicit destructor invocations
< jtimon> yep IntOrScript seemed to compile
< wumpus> jtimon: I don't understand that. You'd only need the pointer when using block signing, youd' only need nBits+nNonce when using PoW, those could be in the same memory space right?
< jtimon> so it would be a pointer to a struct that is either nBits+nNonce or a script (or a couple of them)
< sipa> wumpus: nBits is the "challenge" which can in theory also be replaced with a "block scriptPubKey"
< wumpus> jtimon: in the first case it'd just be nBits+nNonce in-place, in the second case it'd be a pointer to a script
< sipa> wumpus: so you can allow rules about how the signer(s) can change over time
< wumpus> sipa: okay
< sipa> however, that seems overkill here
< gmaxwell> a bit out of scope here but not incompatible.
< jtimon> sipa: right, but we can also remove that if the challenge script is going to be always constant
< sipa> as the block challenge can just be a chain wide constant as jtimon says
< gmaxwell> the union is the 64 bits of nbits+nonce or a pointer to an extension datastructure.
< jtimon> wumpus: I see, yeah, that's better and removes the need for the compile time option, thanks!
< sipa> perhaps we want the union to be between {nBits,nNonce} on the one hand, and CScript *scriptSig on the other
< wumpus> sipa: that's what both gmaxwell and me are saying , yes :)
< sipa> note the *
< wumpus> I had assumed the *
< jtimon> or we could put the script in the coinbase with the other witnesses, but I'm not really sure I like that
< wumpus> I have no clue what the size of CScript is, but I'd guess it's larger than 8
< wumpus> so yes that should be a pointer
< sipa> 24
< sipa> on 64-bit
< sipa> actually, it's a prevector, so much larger
< jtimon> yep, now I'm embarrassed I didn't think of the union being like that myself, but thanks guys, good call
< sipa> for some reason i am very scared of using unions
< wumpus> in this case the way it is used depends on a global setting, so I'm okay with it
< wumpus> I'm scared of tagged/per-case unions though
< jtimon> sipa: yeah that motivated my run to the compile option too
< sipa> but the CBlockHeader destructor will need to know which of the two cases is being used
< sipa> that's very ugly
< sipa> how will it know? a global?
< wumpus> maybe have two different CBlockHeader types?
< jtimon> oh, right, that's ugly
< jtimon> a static field in CBlockHeader maybe
< sipa> jtimon: that's just a global
< wumpus> CPoWBlockHeader CSignedBlockHeader .. but yeah that moves the problem up :)
< jtimon> sipa: yep
< sipa> wumpus: then you need to templatize all the block logic
< jtimon> wumpus: CPoWBlockHeader CSignedBlockHeader is way too disruptive
< wumpus> yes...
< wumpus> no, never mind, that would be stupid in c++ :)
< jtimon> I mean, it's CBlockHeader<proofType> or whatever, but still, not an option I think
< wumpus> this is pretty much the place where the inflexibility of C languages starts to show
< gmaxwell> only if the header itself owns the data, and it isn't just an index into a separate data structure.
< wumpus> either you have to template everything, or do some crazy union hack, both seem like bad choices...
< sipa> it is in fact much easier (in terms of code changes) to make it tagged
< jtimon> well, I guess the less disruptive option would be to make CBlockHeader the base and use polymorphism, but I think we want to avoid that too
< gmaxwell> e.g. you could have a header-extradata structure, and the header just gets indexes into it. iirc we don't ever delete headers once accepted.
< wumpus> sipa: yes, but the tag takes up extra space, which was what we wre trying to avoid in the first place :)
< sipa> wumpus: i know, but far less than just always having both cases
< gmaxwell> so the extradata structure would own its own memory and be responsible for freeing it on shutdown.
< sipa> gmaxwell: the extra data is not just in CBlockIndex
< wumpus> gmaxwell: yes, indeed
< sipa> it's part of every CBlockHeader we send and receive
< jtimon> I think the best candidates are union and a compile time option
< sipa> it sounds very hard to avoid a memory leak if you do not deal with deletion
< jtimon> right, plus CBlock extends from CBlockHeader
< wumpus> what I like least about this is that it changes consensus data structures
< wumpus> for something that is just for testing
< jtimon> yep, that may be another argument in favor of the compile time option (which I realise is independent from using the union or not)
< gmaxwell> Compile time option would greatly diminish the utility of this. (especially considering the size of our static binaries)
< wumpus> with compile time option you could *guarantee* that the normal case remains unchanged
< gmaxwell> Yes.
< gmaxwell> at that point, might as well just make it a separate repo and resync to core periodically, I think.
< wumpus> which is, imo, the only acceptable way to do this
< wumpus> yes, probably. It's effectively an altcoin at that point :)
< gmaxwell> Right, so why take noise in the codebase for it if we can't even make it as integrated as testnet? it's at least a pretty clean patch.
< * wumpus> wants pluggable consensus libraries
< wumpus> yes, it seems it's not practically doable at this time
< wumpus> in our current codebase and structure
< jtimon> well, I will try with the union and without the compile time option and see what it looks like
< jtimon> if we do it as a constantly rebased branch, at least merging the custom chain patch would make the blocksigning one more maintainable
< sipa> jtimon: agree
< sipa> i was surprised there was not a resurgence of coingen like sites after chainparams was merged :)
< jtimon> sipa: you were also surprised there wasn't an elements_alpha_with_pow_back altcoin I assume; I guess you forgot that the most important part of an altcoin is the logo, not the features :p
< wumpus> probably coingen didn't work too well as a business model
< wumpus> making it even easier to make altcoins by just changing one source file undermines that further, who would pay for it anymore :)
< sipa> wumpus, jtimon: we can of course try the separate-branch approach first, merging only the unlikely-to-affect-consensus patches in mainline, to see how much interest there is in it
< wumpus> yes
< sipa> sure, not having it integrated in mainline may hurt adoption of such a chain
< sipa> but on the other hand, it would be a pity to go through all the work of plugging this in if it doesn't get used
< jtimon> in any case, I think chainparams is older than coingen, isn't it? I don't remember there not being a chainparams
< gmaxwell> not at all.
< sipa> chainparams was only in june 2013
< sipa> before that, we just had "if (fTestnet)" all over the place
< sipa> and testnet itself was october 2010, 2 months before i ever looked at the code
< wumpus> also altcoins, inherently driven by speculation, have always tended to fork from what is the speculation hotness of the day, at one point this was litecoin, after that the "PoS" coins, now it's probably ethereum-ism things. A profitable coingen would have to follow all that :p
< tulip> for a long time most altcoins were, and still are, 0.6-based branches. the original proof of stake patches were never rebased onto modern Bitcoin Core until quite late in the craze. the original lfnet IRC channels still have hundreds of altcoin nodes (but only 2-3 wxBitcoin).
< wumpus> yes :)
< sipa> many earlier ones forked off namecoin, which was based on 0.3.24 afaik
< tulip> up until recently there was a 0.3 and a 0.4 node still connected to #bitcoin00 on lfnet; one of the two still had 8333 routed. I'd love to know where that was running to still be up, but it's obviously well behind the chain this number of years on.
< wumpus> it's a lemons market, flooded with even more lemons every day, quite interesting from a psychology point of view not so much from a technological :)
< tulip> wumpus: given there's >200 namecoin clients on lfnet I assume they never rebased past 0.5? surprised it even functions; they must be missing some serious patches by this point.
< wumpus> namecoin did fairly recently rebase on top of a newer bitcoin core (not sure what version). But yes, most coins do not; it's not like they're a big target for attacks, nor actively maintained. The only way we notice is that sometimes people file bugs/build system issues against Bitcoin Core that were fixed years ago, not realizing we've moved on since
< wumpus> I'm going to try to get this gcbpay person banned from our github, this can no longer be accidental: https://github.com/bitcoin/bitcoin/issues?utf8=%E2%9C%93&q=is%3Aissue%20author%3Agcbpay%20
< luke-jr> wumpus: go to the organization settings
< luke-jr> there's a "tab" for banning users
< sipa> wumpus: he has a bunch of repositories, but none contain code, and all issues are self created and look like nonsense as well
< sipa> it may be just someone who has no clue
< sipa> but if they're a nuisance and not responsive to comments, yes ban
< wumpus> Blocking a user prevents the following on all your repositories: opening or commenting on issues or pull requests, starring, forking, or watching, adding or editing wiki pages
< wumpus> they can still download the source, or check it out
< wumpus> which is great
< wumpus> I don't expect any useful contributions from them, but they won't lose access completely, so this is the right thing to do
< arubi> something weird on regtest (did not try testnet): I have two nodes addnode'd to each other (A and B). A first mines 750 blocks, sends the sum in one output to B, and mines block 751. then B creates a tx with 1 input and 607 p2wsh(15-of-15 multisig) outputs (to itself) and relays it to A, which then mines a block. B now has 607 15-of-15 p2wsh utxos; it combines them all in a 607 p2wsh(15-of-15) -> 1 p2wpkh output tx, and relays it to A, which now has it
< arubi> in its mempool too. now, I can not get either A or B to mine this tx, no matter if hours pass or if I mine a thousand blocks at a time. it just stays in both mempools.
< arubi> the same process with 606 or fewer 15-of-15 outputs works fine. still trying other types of scripts. I couldn't get this to happen with p2pkh or p2sh(15-of-15); it either aborts because the tx is too large, or has too many sigops.
< achow101> the new policy limits for p2wsh?
< arubi> is 15 of 15 multisig affected? looking
< sipa> how large is the 607 p2wsh output?
< arubi> well the script is 513 bytes
< arubi> signatures + pushes.. 1110 bytes? I think
< arubi> can check, moment
< sipa> eh, i mean the size and weight of the transaction that does not get mined
< arubi> oh heh, sec.
< arubi> "size": 999515, "vsize": 268602
< jl2012> vsize over 100000 is nonstandard?
< arubi> hm, so it's relayed around because regtest? then not mined because it's too exotic?
< jl2012> not tested. but if it is accepted to mempool, it should also be mined?
< sipa> yes, being relayed/accepted but not mined sounds like a bug
< jl2012> arubi: are you sure your blocks are synced between the 2 nodes?
< arubi> yep
< jl2012> have you tried to do everything with only 1 node?
< arubi> I can try, though neither will mine it
< arubi> (trying)
< jl2012> are you doing it by hand or automated?
< jl2012> if automated, would you please share the script?
< arubi> a lot is automated, the script is a monstrosity
< arubi> I'm rewriting it, making it useful for more complex stuff. mast in mind
< arubi> maybe it's time.. I'll get a gitlab account tomorrow and put it there
< jl2012> size: 999515 is for 606 or 607 inputs?
< arubi> 607
< jl2012> i think there is a setting about the block size/weight, forgot the name
< jl2012> maybe you hit the limit
< arubi> maxblocksize=1m , maxblockweight=4m with my nodes
< jl2012> try with only maxblockweight=4m ?
< arubi> unless weight can go higher, I didn't assume it could
< arubi> hm. alright
< sipa> do you literally set '1m' as value?
< arubi> oh no no
< jl2012> i think the problem is maxblocksize=1m
< sipa> ok, just making sure
< sipa> yes, maxblocksize=1000000 may be too much
< arubi> yea no worries. so just not set it?
< sipa> i think we always stay 1000 bytes below the limit
< jl2012> too small, you mean?
< sipa> you can set both weight and size to 4000000
< arubi> re-running with 1 node now, waiting for it to finish. sipa, oh really? I really didn't think that's the case
< sipa> maxblocksize is about the total serialized size, including witness
< arubi> and weight? it seems like the default value for maxsize is 750000, so maybe that's why I assumed
< arubi> and default weight is 3m, so it really seemed like the x4 factor at the time; now I remember
< sipa> weight is (total_size + 3*size_without_witness)
< arubi> I see, thanks. jl2012, single node same behavior. trying w/ setting maxsize to 4m as well
< jl2012> quite sure it's because of the max size
< jl2012> i tried before
< arubi> you mean without setting it at all, or setting it to 4m?
< sipa> if you only specify maxblockweight, the maxblocksize is implicitly 4M
< jl2012> i think i just set max weight
< GitHub114> [bitcoin] theuni opened pull request #9000: miner debugging: faux-mining (master...faux-mining) https://github.com/bitcoin/bitcoin/pull/9000
< jl2012> #9000
< arubi> ah, so is it correct to say maxblockweight supersedes maxblocksize? congrats! :)
< arubi> bitcoin is officially over 9000
< sipa> not _over_ 9000.
< jl2012> arubi: if you set both, i guess it always takes the effectively lower one
< sipa> jl2012: if you set both, it respects both
< jl2012> make sense
< arubi> sipa, programmers of all people don't start counting at 1 :P
< sipa> arubi: ok, so we're at 8999 even.
< arubi> sipa, thanks :(
< arubi> \o/ jl2012 , sipa , thank you! setting only weight=4m cleared a 607 inputs tx. I'll play around with both size and weight, got a better idea what they mean now
< luke-jr> we are well over 9000 byte blocks, and even 9000 MB blockchain
< * luke-jr> ducks
< GitHub52> [bitcoin] TheBlueMatt closed pull request #7903: Fix help text around importaddress and rename it to importscript (master...16-04-importaddress-helptext) https://github.com/bitcoin/bitcoin/pull/7903