< gmaxwell> sipa: ^ are you setup to produce windows binaries right now?
< warren> gmaxwell: here?
< gmaxwell> right.
< warren> gmaxwell: sorry the split wasn't entirely clear to me.
< gmaxwell> warren: this channel should be for bitcoin core exclusive discussion.
< gmaxwell> no one else gives a hoot at how our rpc auth works. :)
< Luke-Jr> Fedora does apparently
< * Luke-Jr> hides
< sipa> gmaxwell: do i get 30 minutes to find a bug? :)
< gmaxwell> sipa: hahah
< gmaxwell> sipa: I don't want it merged; there is a user that managed to get pubkey "" in his wallet on bct, and I want to give him a binary to see if it rejects it.
< gmaxwell> warren: as far as selinux policy, I think it's unfortunate that you cannot confine bitcoin core to have no access outside of its own directory.
< gmaxwell> though maybe that should be a boolean.
< gmaxwell> since it's only needed for the walletimport/backup stuff.
< warren> Would it be bad if the system-level systemd service were named something like bitcoinsys.service, with a selinux context bitcoinsys_t, and a bitcoin-cli wrapper named something like /usr/bin/bitcoinsys_cli that only adds the --datadir /path/to/system/datadir/
< Luke-Jr> and what if the user wants bitcoind run per-user? :P
< gmaxwell> warren: what happens if a second datadir is specified to bitcoin-cli?
< gmaxwell> warren: it's all ugly, bitcoin core is currently not intended for system level operation.
< warren> gmaxwell: hence why i'm against shipping a .service file at all
< gmaxwell> in any case that doesn't sound too horrible but the wrapper would need to not prevent you from overriding the datadir, and it needs to be tested with e.g. super long inputs.
< gmaxwell> better perhaps would be a compile time setting to adjust the default datadir. Do we have one of those?
< sipa> not afaik
< sipa> gmaxwell: bug found, will build
< gmaxwell> lol okay.
< Luke-Jr> warren: doesn't systemd support per-user services?
< sipa> Luke-Jr: systemd would be a great operating system, if it only contained a service manager
< sipa> s/if it only/if only it/
< Luke-Jr> don't get me wrong, I hate systemd. but I thought this was one of the few benefits to it.
< gmaxwell> Separate subject; https://github.com/bitcoin/bitcoin/pull/6902 I feel that changes like this make the software more opaque. Now to understand what's being done there you must read two places instead of one. If you only look at policy.h you will likely erroneously believe it applies to p2sh somehow.
< warren> Luke-Jr: yes it's possible
< gmaxwell> It's of a class of change where I purposefully try to keep my opinion to myself to avoid amplifying bikeshedding, because it fundamentally does not matter.
< sipa> gmaxwell: 32-bit or 64-bit?
< sipa> gmaxwell: will take a while to build dependencies
< gmaxwell> user said "or" so 64-bit I guess.
< GitHub198> [bitcoin] nathanimpact opened pull request #6909: 0.8 (master...0.8) https://github.com/bitcoin/bitcoin/pull/6909
< GitHub180> [bitcoin] gmaxwell closed pull request #6909: 0.8 (master...0.8) https://github.com/bitcoin/bitcoin/pull/6909
< GitHub190> [bitcoin] nathanimpact opened pull request #6910: 0.9 (master...0.9) https://github.com/bitcoin/bitcoin/pull/6910
< GitHub40> [bitcoin] nathanimpact opened pull request #6911: 0.10 (master...0.10) https://github.com/bitcoin/bitcoin/pull/6911
< GitHub112> [bitcoin] gmaxwell closed pull request #6910: 0.9 (master...0.9) https://github.com/bitcoin/bitcoin/pull/6910
< GitHub69> [bitcoin] nathanimpact opened pull request #6912: 0.11 (master...0.11) https://github.com/bitcoin/bitcoin/pull/6912
< gmaxwell> I wish github would let you block these.
< GitHub93> [bitcoin] gmaxwell closed pull request #6911: 0.10 (master...0.10) https://github.com/bitcoin/bitcoin/pull/6911
< GitHub198> [bitcoin] gmaxwell closed pull request #6912: 0.11 (master...0.11) https://github.com/bitcoin/bitcoin/pull/6912
< GitHub45> [bitcoin] nathanimpact opened pull request #6913: 0.11 (master...0.11) https://github.com/bitcoin/bitcoin/pull/6913
< belcher> you can block them(!); somewhere on the settings panel in github you can configure these to happen or not
< gmaxwell> belcher: what specifically does it block?
< belcher> turns off or on these bots
< sipa> belcher: we want to block the PRs, not the reporting here :)
< belcher> i see, im not sure thats possible
< belcher> if you know they're coming you could set the channel mode +n (i think) which stops them being able to message without joining
< petertodd> gmaxwell: ACK re your thoughts on #6902 - this isn't a constant that you can change after all!
< gmaxwell> the bots we want, just not the pointless PRs to merge entire (sometimes altcoin) branches into master.
< sipa> gmaxwell: building qt failed, sorry
< petertodd> gmaxwell: er, scrap that, no policy, not script, dumb of me...
< sipa> not going to spend time investigating
< GitHub139> [bitcoin] sipa opened pull request #6914: [WIP] Add pre-allocated vector type and use it for CScript (master...prevector) https://github.com/bitcoin/bitcoin/pull/6914
< gmaxwell> sipa: why doesn't it bounds check?
< sipa> gmaxwell: because i'm lazy
< gmaxwell> sipa: 13% faster with script checking enabled?!
< sipa> gmaxwell: disabled
< gmaxwell> oh disabled good.
< gmaxwell> okay. Yea, missed that on first read.
< GitHub7> [bitcoin] sdaftuar opened pull request #6915: [Mempool] Improve removal of invalid transactions after reorgs (master...fix-reorg-handling) https://github.com/bitcoin/bitcoin/pull/6915
< sipa> the only place where we actually use ::at() is in CScript::IsPayToScriptHash(), and it's prefixed with a size check there
< gmaxwell> maybe just get rid of the at and make it not compile if it's used? Rationale: providing something that's almost vector but less safe in some subtle way will be forgotten. :)
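For context, the lone ::at() call site sipa refers to looks roughly like this (a sketch from memory of CScript at the time, not an exact quote; assumes the opcode constants from script.h):

```cpp
// The size check on the first line is what makes the at() calls safe:
// they can never be out of bounds, so bounds checking buys nothing here.
bool CScript::IsPayToScriptHash() const
{
    // Extra-fast test for pay-to-script-hash CScripts:
    return (this->size() == 23 &&
            this->at(0) == OP_HASH160 &&
            this->at(1) == 0x14 &&
            this->at(22) == OP_EQUAL);
}
```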
< gmaxwell> sipa: did you fix the scriptcheck on reindex generally or just patch it out? if the former can you PR the fix?
< sipa> i just patched it out
< gmaxwell> (that particular misbehavior is a beautiful interaction with the always need to reindex issue in windows I bet. :-/ )
< sipa> gmaxwell: building 6906 (+6900, since windows building is broken on modern mingw without 6900)
< CodeShark> are you guys trying to reproduce the windows db corruption issues? :)
< gmaxwell> not at the moment.
< gmaxwell> CodeShark: warren was talking about that earlier.
< gmaxwell> I'm trying to debug a wallet that ended up with a corrupted pubkey in it.
< gmaxwell> which is in possession of an end user who is helpful but needed a binary.
< CodeShark> ah
< gmaxwell> As an aside, same user is running a 0.8GB wallet with hundreds of thousands of transactions, on windows.
< CodeShark> hah
< CodeShark> doesn't sound very pleasant on multiple fronts :p
< dcousens> sipa: around?
< sipa> yes
< dcousens> sipa: 23% less memory, sounds like it's simply a matter of the allocator being more exact instead of allocating more generously etc
< sipa> ?
< dcousens> sipa: conceptually, how does https://github.com/bitcoin/bitcoin/pull/6914 save memory, except that in some cases it is allocated on the stack?
< sipa> dcousens: well it uses 32-bit instead of 64-bit sizes
< dcousens> (and ignoring the 4/8 byte pointer)
< sipa> so it always saves 4 to 8 bytes
< sipa> apart from that, the only way it saves is by not allocating on the heap
< sipa> which has a 16-32 byte overhead
< dcousens> and the hunch is that takes up 23% of available memory?
< sipa> so the old structure is on average 40 + N bytes storage
< sipa> sorry, 48 + N
< sipa> the new one is 32 for N<=28, and 56 + N for N > 28
< sipa> since almost all CScripts in memory are P2PKH or P2SH, which are less than 28 bytes, this is an improvement
< gmaxwell> sipa: aren't the N objects seperately heap allocated?
< dcousens> sipa: any reason not to just use a union of std::array and std::vector?
< sipa> gmaxwell: no
< sipa> dcousens: that would be much larger
< sipa> the trick here is to share the size variable between the two approaches
< sipa> which you can't do if you use the standard std::vector
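A minimal sketch of the trick sipa describes in #6914 (member names approximate, simplified from the PR): one shared size field, plus a union holding either the in-place bytes or a capacity-and-pointer pair — which a std::array/std::vector union could not share.

```cpp
#include <cstdint>

// Simplified sketch of the prevector layout: _size is shared between the
// direct (in-object) and indirect (heap) representations. _size <= N means
// the elements live in `direct`; otherwise `indirect` points to the heap.
template <unsigned int N, typename T>
class prevector_sketch {
    uint32_t _size;
    union direct_or_indirect {
        char direct[sizeof(T) * N];       // small case: no heap allocation
        struct {
            uint32_t capacity;
            char* indirect;               // large case: one heap allocation
        } indirect_contents;
    } _union;

    bool is_direct() const { return _size <= N; }

public:
    uint32_t size() const { return is_direct() ? _size : _size - N - 1; }
    T* data() {
        return reinterpret_cast<T*>(
            is_direct() ? _union.direct : _union.indirect_contents.indirect);
    }
};
```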
< gmaxwell> sipa: hm for some reason I thought it was, so that the removal of objects could be supported without invalidating pointers.
< dcousens> sipa: could the same memory savings be achieved by just abstracting a CScript specialised for smaller sizes?
< dcousens> Eh, that'd be as bad nvm
< sipa> dcousens: perhaps i failed to highlight the largest advantage: reducing load on the heap
< sipa> (heap load causes worse memory locality and memory fragmentation :)
< sipa> gmaxwell: you're thinking about a linked list...
< dcousens> indeed, probably where the 13% speedup came from
< sipa> dcousens: and simpler copying
< sipa> gmaxwell: a vector can't delete elements without invalidating iterators
< dcousens> sipa: is push_back used at all?
< sipa> no clue
< dcousens> just wondering, maybe drop the capacity functionality if we don't need it
< dcousens> I feel like we wouldn't be re-allocating these vectors very often
< sipa> well i wanted something API compatible :)
< sipa> so it's not a huge burden to go review whether it can be used or not
< sipa> if you just want a compact vector, there are other changes you can make
< dcousens> sipa: I guess if you're looking for public compatibility
< dcousens> sipa: IMHO for a change like this, we should start with the smallest subset of changes, optimize on that, then introduce public API compatibility
< sipa> the subset of changes is as small as it can be
< sipa> the PR contains a few not-strictly necessary changes to just break the assumption that CScript is a vector
< dcousens> ooi, why free vs new?
< sipa> because i need realloc
< sipa> and this works at a lower level: it's allocating memory, not objects
< sipa> new is for allocating objects
< dcousens> ? you have a clear type though
< sipa> which?
< dcousens> the realloc might not be necessary if you don't need the push_back/change_capacity etc
< dcousens> T
< sipa> no, that would call T's constructor
< sipa> the reserved but unused memory doesn't contain constructed T objects
< dcousens> True, but isn't T in this case always a char*?
< sipa> i don't plan to just use this for char
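The raw-memory-versus-objects distinction sipa is making, as a generic sketch (not the PR's code; all names here are invented): realloc manages uninitialized bytes, and T objects only come into existence via placement new.

```cpp
#include <cstdlib>
#include <new>

// Generic sketch of the allocate-raw / construct-in-place split: realloc()
// grows the raw buffer, but T objects are only ever constructed into the
// first `size` slots with placement new, so the reserved-but-unused tail
// never holds live objects.
template <typename T>
struct raw_buffer {
    T* mem = nullptr;
    size_t size = 0, capacity = 0;

    void reserve(size_t n) {
        // Moves raw bytes; only valid like this for trivially copyable T.
        mem = static_cast<T*>(realloc(mem, n * sizeof(T)));
        capacity = n;
    }
    void push_back(const T& v) {
        if (size == capacity) reserve(capacity ? capacity * 2 : 4);
        new (mem + size) T(v);  // construct in place; no T existed here before
        ++size;
    }
    ~raw_buffer() {
        for (size_t i = 0; i < size; ++i) mem[i].~T();  // destroy live objects only
        free(mem);
    }
};
```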
< dcousens> hmmm, well, code looks good, tentative ACK simply cause I'm still not sure it's the best way, but, it is a solution, and it exists :) wd
< gmaxwell> sipa: did the prior stl instrumentation you did before allow you to easily measure vector overhead in a useful way to go find the biggest offenders?
< dcousens> if we broke the API compatibility, I feel like we could just use managed pointers [at the CSCript level] quite easily with a pooled stack allocator for small scripts, but, if this might be easier if you have other future plans for other areas
< sipa> dcousens: that would indeed be an improvement; CBlock could have its own pool for example, and CTransaction could be coupled to a pool too
< sipa> dcousens: but it's a much more invasive change to do well
< dcousens> indeed
< sipa> gmaxwell: define offender?
< dcousens> sipa: I guess the benefit of this change, with API compatibility, is it doesn't hinder that path in any (except maybe making it look less attractive haha)
< gmaxwell> I do not see why we'd want _any_ overhead of having constant allocation indirection for data where 99.999% of the time it's a fixed size. even having the extra pointer is overhead we don't need.
< dcousens> gmaxwell: as mentioned above, to do that (I assume you mean using fixed size CScripts), it'd be a more invasive change
< gmaxwell> sipa: places where there are many vectors with a total amount of data in them that is almost always small with respect to 40 bytes.
< sipa> gmaxwell: vectors have 40 byte overhead on average (including the malloc overhead) per vector
< dcousens> gmaxwell: or by extra pointer did you mean cap?
< gmaxwell> dcousens: what sipa is doing now is almost a fixed size cscript, no-- only when it's an exceptional cscript does it fail over to using an allocation.
< sipa> i'm using 28 bytes as cutoff; there are many scripts larger (nearly every scriptSig)
< sipa> but in places where storage matters, more than 28 is a minority
< sipa> i used 36 earlier, and that was still a net benefit, but a smaller one
< gmaxwell> ah, I did forget this was used for ScriptSig too.
< phantomcircuit> it's kind of amusing how we can optimize for the current set of data and have that not be a huge issue... since it will never change
< sipa> if we'd switch to a P3SH with a 256-bit hash, we may need to change it :)
< sipa> we could use this for valtype inside the script evaluator too, set to 72 bytes (as we rarely have larger script elements), or to 520 (as we don't support larger at all)
< phantomcircuit> heh
< phantomcircuit> sipa, sure but like if block_height < 350000 then vectortype = x else vectortype = y
< gmaxwell> obviously it needs to be autoadaptive. :P
< phantomcircuit> :)
< gmaxwell> Another obvious thing to try is to breakpoint at the validation of block X, then breakpoint at malloc/new and catch every place it hits the heap allocator during verification of that block.
< dcousens> ignoring memory, I wonder if you could just fix the size to 520 bytes
< gmaxwell> dcousens: it can be larger than 520 bytes.
< phantomcircuit> gmaxwell, how?
< gmaxwell> 0_o
< sipa> CScript can be larger than 520 bytes
< sipa> script stack elements can't
< gmaxwell> phantomcircuit: this is the full script pubkey in a transaction or a full scriptsig in a transaction.
< gmaxwell> scriptsigs larger than that are probably not even all that uncommon.
< phantomcircuit> ohh
< phantomcircuit> i thought you were just optimizing for the push data
< warren> gmaxwell: a safe wrapper that passes all parameters through to some other program is simple, like: http://pkgs.fedoraproject.org/cgit/couchdb.git/tree/couchdb.temporary.sh
< warren> I wrote down what I think would be good for the two different ways in which people want to use bitcoind, as a user and as a system service. Both would need separate SELinux contexts, and a wrapper like http://pkgs.fedoraproject.org/cgit/couchdb.git/tree/couchdb.temporary.sh to launch the system service under the different context.
< jonasschnelli> should get in before it gets outdated
< GitHub158> [bitcoin] laanwj closed pull request #6913: 0.11 (master...0.11) https://github.com/bitcoin/bitcoin/pull/6913
< * wumpus> has an old crappy laptop with windows now, let the carnage begin...
< wumpus> I'm sure it comes with rootkits and malware pre-installed to simulate a true, out in the wild windows environment
< phantomcircuit> :P
< dcousens> wumpus: what would you say an average time for re-index is?
< dcousens> (ballpark, obv. very dependent on hardware)
< morcos> sipa: i've been trying to track down some odd slowness in ConnectBlock and I found something I think interesting
< morcos> When we do UpdateCoins and need to add the outputs for the new coins we are creating, we call ModifyCoins(new tx hash), which will end up doing a database read.
< morcos> so even if our cache is completely up to date, we have to read the database anyway
< morcos> There is already an existence check for BIP30 earlier
< morcos> I'm wondering if we should have some sort of negative cache too
< morcos> Also do we need to persist the BIP30 check? Is it still possible to violate that, or, now that the old duplicate coinbases are sufficiently buried, is it no longer necessary?
< morcos> Anyway, I have to go afk. I'll look into it more later.
< morcos> sipa: still afk, but yep, that's a cause of SIGNIFICANT slowness in ConnectBlock. I think we'd have to change/remove the BIP30 check to really eliminate it, which might be the harder part, bc I think adjusting ModifyCoins to know it's a new tx and not lookup seems easy and safe?
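A sketch of the "easy and safe" half morcos mentions (the ModifyNewCoins name and surrounding plumbing are hypothetical here, modeled on the cache types in coins.h): when the tx is known to be new, the cache insert skips the read-through to the backing view entirely.

```cpp
// Hypothetical sketch: ModifyCoins for a tx we know is brand new. Unlike
// ModifyCoins(), a cache miss does NOT trigger base->GetCoins() -- i.e. no
// database read -- because a new txid cannot exist in the parent view.
// Only safe once BIP30-style coinbase overwrites are ruled out.
CCoinsModifier CCoinsViewCache::ModifyNewCoins(const uint256& txid)
{
    std::pair<CCoinsMap::iterator, bool> ret =
        cacheCoins.insert(std::make_pair(txid, CCoinsCacheEntry()));
    ret.first->second.coins.Clear();
    // FRESH: the parent view doesn't have this entry, so a fully-spent
    // version can simply be dropped instead of written back on flush.
    ret.first->second.flags = CCoinsCacheEntry::DIRTY | CCoinsCacheEntry::FRESH;
    return CCoinsModifier(*this, ret.first, 0);
}
```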
< wumpus> cfields: what would be the best way to include the windows determinism postprocessing in the gitian build (in https://github.com/bitcoin/bitcoin/pull/6900) - call it from the descriptor or 'make'?
< cfields> wumpus: since that's really only useful for gitian (and a specific version of binutils), i'd say in the descriptor is fine
< wumpus> yeah, thought as much
< wumpus> do wonder how to make it output the linker map though
< cfields> wumpus: i haven't taken a good look yet, but i'm surprised that you can't find the necessary offsets with objdump/nm/etc. I suppose you tried all those first?
< wumpus> objdump and friends all report virtual addresses
< cfields> mm, ok
< wumpus> and within a section they're no help at all, for finding the offset, it's not like there's a pointer to this variable (the other option may be to parse DWARF information, but hm...)
< wumpus> that may not even work, I don't think gdb has an offset to it, it reports it as GS_ExceptionPointers+8/16
< cfields> wumpus: another fleeting thought.. in the mailing-list link you posted, some of my binutils changes are listed in the same changelog...
< cfields> you completely sure that this is the only hack necessary?
< cfields> iirc it was the one that removed timestamps from windres
< wumpus> well I found no other differences between multiple builds
< wumpus> remember, we still use faketime
< cfields> ah, right
< wumpus> the tor folks build binutils themselves
< wumpus> but anyhow, the current script works, so I'll just integrate that for now
< cfields> sounds good
< cfields> wumpus: one last quick thought, another option might be using LD_PRELOAD to wrap malloc (or whatever is leaving unsanitized data)
< wumpus> hehe.
< wumpus> wonder if mallopt(M_PERTURB,...) can be set from the environment
< wumpus> I think so! it should do exactly this
< cfields> hah! neat. I was just doing a quick malloc->calloc wrapper. That looks smarter :)
< wumpus> 'export MALLOC_PERTURB_=23' seems to work
< wumpus> (at least outside of gitian, let's try inside)
< wumpus> that was an awesome idea
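For reference, the glibc knob they landed on is easy to demo; a tiny glibc-specific sketch with the same effect as exporting MALLOC_PERTURB_=23:

```cpp
#include <cstdio>
#include <cstdlib>
#include <malloc.h>  // glibc-specific: mallopt(), M_PERTURB

// With M_PERTURB set, glibc fills fresh (non-calloc) allocations with the
// bitwise complement of the perturb byte and overwrites memory on free(),
// so "uninitialized heap data leaks into the output" bugs become
// deterministic instead of depending on whatever was left on the heap.
int main()
{
    mallopt(M_PERTURB, 23);  // same effect as: export MALLOC_PERTURB_=23
    unsigned char* p = static_cast<unsigned char*>(malloc(64));
    printf("fresh byte: 0x%02x (complement of 23 = 0xe8)\n", p[0]);
    free(p);
    return 0;
}
```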
< * wumpus> resents having written a PE parser :-)
< cfields> should probably wrap the linker in a script and export it there, rather than globally. It's hard to imagine anything build-side dependent on that, but i'd hate to see something like the debian-openssl thing crop up again :)
< wumpus> also reduces the performance hit
< cfields> yep
< cfields> wumpus: nice find on MALLOC_PERTURB_. that makes this very simple :)
< wumpus> but the linker is invoked as 'g++' isn't it?
< cfields> g++ invokes ld
< wumpus> oh yes it's even worse for mingw, it invokes collect2, which is a wrapper for ld. I wonder if it's possible to interpose somewhere there
< wumpus> (well obviously, by replacing the executable... but I wonder if it uses PATH somewhere)
< cfields> uhmm, you can specify your own linker i believe
< * Luke-Jr> imagines each of these is used for a reason
< wumpus> Luke-Jr: 'collect2' is MS's linker, which for mingw is emulated by ld
< Luke-Jr> oh, it's just emulating it?
< wumpus> yes - it could really use MS's linker, if you have it available
< Luke-Jr> interesting
< cfields> wumpus: yup, looks like it checks for 'real-ld'
< cfields> and ${prefix}-real-ld
< wumpus> I haven't been able to trick it yet into running a different linker outside the $prefix (created all sorts of variants of collect2 and ld in my PATH)
< wumpus> one remaining option would be to override gcc's specs, but bleh
< wumpus> seems it only looks for 'linker' outside its hardcoded path /usr/lib/gcc/x86_64-w64-mingw32/4.8 if it cannot find it there
< wumpus> what works is: 'gcc -dumpspecs > specsfile' 'sed -i s/collect2/mycollect2/g specsfile' 'gcc -specs=specsfile'
< wumpus> then put mycollect2 somewhere in the path...
< wumpus> it's kind of stupid though
< wumpus> hmm overriding GCC_EXEC_PREFIX may work too
< wumpus> but that means copying the other files
< wumpus> oh there is COMPILER_PATH too, where it will look for subprograms if it cannot find in EXEC_PREFIX
< wumpus> great, this may yet be solvable
< cfields> wumpus: heh, was just testing that one
< cfields> no luck
< * wumpus> considers just defining the malloc perturb globally and be done with it...
< cfields> wait, that worked :)
< cfields> export COMPILER_PATH=`pwd`
< cfields> then wrap ./collect2
< wumpus> according to the docs it only looks there if it cannot find it in the EXEC_PREFIX, but will try
< wumpus> yep, seems to work!
< wumpus> cfields: something like this? https://gist.github.com/laanwj/10b0981d9d75510a124f
< cfields> wumpus: http://pastebin.com/W350ZGK5
< cfields> hah!
< cfields> looks great
< wumpus> ah yes, $CC -print-prog-name=collect is nice
< wumpus> updated to use that https://gist.github.com/laanwj/10b0981d9d75510a124f (now request the path of the collect2 tool while creating the wrapper script instead of hardcoding it in the descriptor)
< cfields> wumpus: need to unset COMPILER_PATH or you get an infinite loop
< wumpus> COMPILER_PATH is not set at that point yet :)
< wumpus> it's set in the per-host build loop later on
< cfields> oh heh, you're finding the path outside of the script. nm :)
< wumpus> yes I look up the path upfront
< cfields> well that's nice and simple if it works
< cfields> wumpus: just to be on the paranoid side, should probably compare before/after to make sure that only the expected bytes changed.
< wumpus> well if more bytes change then we should definitely skip trusty, agreed
< cfields> (i just mean eyeballing it once, ofc. not suggesting scripting that part)
< wumpus> leaking heap data into output is really ugly, can be e.g. a privacy issue, I'm happy it was solved two years ago ;)
< wumpus> MS office had some legendary fuckups in this regard
< cfields> wumpus: assuming mingw is self-hosted, we can check the bins for leaks and see if the dev had anything interesting in mem :)
< wumpus> hehe
< wumpus> well in this case it seems it leaks some internal address; could be used to bypass ASLR, if it was a longer running process
< sipa> morcos: we should get rid of the bip30 check; it's superseded by bip34
< cfields> wumpus: for the qt fix, can that ifdef just be c/p to the header as well?
< wumpus> cfields: yes I suppose the same principle can be used to use either PIDLIST_ABSOLUTE or ITEMIDLIST
< wumpus> (just copy pasting won't work though because the typedef won't help)
< cfields> wumpus: ok. mind PRing that separate? or I can? That needs fixing regardless of when we switch gitian
< wumpus> no problem if you do :)
< cfields> i'm happy to do it, not trying to con you into that one :)
< cfields> sure, will do
< * wumpus> just crashed a windows PC in the middle of initial sync, came back fine after reboot...
< wumpus> "NotMyFault: Use this executable and driver to crash your system in several different ways" lovely
< wumpus> the only thing hard to reproduce with a laptop would be a direct power failure
< sipa> remove the battery :)
< wumpus> I'm not sure that's possible with this one without disassembling it
< wumpus> and not sure bitcoin core should be resilient to motherboard-bends-while-application-is-running crashes :-)
< wumpus> cfields: it worked! windows trusty build is now deterministic http://s7.postimg.org/bhrf0eui3/Naamloos.png
< cfields> woohoo :)
< cfields> but... wtf am i looking at?
< mcelrath> What's that usha256?
< wumpus> my hex-as-unicode-blockelements hack
< wumpus> (somewhat easier to compare patterns than letters/numbers)
< cfields> ah, neat
< wumpus> "Fout bij openen blokkendatabase" ah, this time I crashed windows and the block database got corrupted
< helo> corrupted your error messages too! ;)
< zooko> wtf-8
< wumpus> lol zooko
< zooko> Seriously, what is this for ?
< zooko> Wow.
< zooko> Intriguing.
< mcelrath> I can't decide whether that's easier to differentiate or not. Better let the computer decide: sha256sum a b | awk '{print $1}' | uniq
< mcelrath> Also diff -q
< wumpus> mcelrath: what I don't like about that is that in the successful case it provides no output, so I can never be sure whether it crashed (and hence produced no output) or succeeded
< mcelrath> What provides no output?
< wumpus> diff, when files match
< sipa> wumpus: how many such block elements are there?
< wumpus> sipa: I used the 16 2x2 ones to get 16 bits - but there are also e.g. 1/8th horizontal/vertical full ones, see https://en.wikipedia.org/wiki/Block_Elements
< wumpus> to get 4 bits*
< wumpus> and if you combine with reverse-video you can get even more, e.g. "right one eights block"
< wumpus> most console applications that use these, use them for progress bars/gauges etc
< wumpus> you could also do a kind of Chernoff faces visualization using emojis, esp. on mac consoles which render those silly things in color
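A sketch of the mapping wumpus describes: one of the 16 2x2 quadrant block elements per hex nibble. His exact nibble-to-glyph assignment isn't shown, so the table below is an arbitrary but consistent choice:

```cpp
#include <cstddef>
#include <cstdio>

// Illustrative sketch of the hex-as-block-elements trick: map each hex
// nibble (4 bits) to one of the 16 Unicode 2x2 quadrant block elements,
// so two hashes can be compared as visual patterns instead of hex digits.
// Bit assignment here: 1=upper-left, 2=upper-right, 4=lower-left, 8=lower-right.
static const char* QUAD[16] = {
    " ", "▘", "▝", "▀", "▖", "▌", "▞", "▛",
    "▗", "▚", "▐", "▜", "▄", "▙", "▟", "█",
};

void print_blocks(const unsigned char* data, size_t len)
{
    for (size_t i = 0; i < len; ++i)
        printf("%s%s", QUAD[data[i] >> 4], QUAD[data[i] & 0x0f]);
    printf("\n");
}
```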
< wumpus> "2015-10-30 17:30:05 Corruption: error in middle of record" that's the error with the windows chainstate db after crash
< wumpus> this happens while reading the log, so it is a partial record error
< wumpus> partial write*
< wumpus> https://github.com/bitcoin/bitcoin/blob/master/src/leveldb/util/env_win.cc#L587 shouldn't the FlushFileBuffers and FlushViewOfFile be the other way around?
< wumpus> the former syncs changes in the file to disk, the latter changes in the memory map to the file
< wumpus> I may be misunderstanding something of course...
< sipa> wumpus: that code may well be wrong
< * wumpus> wants to implement a WritableFile with just the win32 base file API and none of this fancy mapping stuff
< wumpus> like PosixWritableFile but for windows... oh wait, maybe I can even just copy PosixWritableFile and rely on mingw to do the rest for me
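Roughly the shape of that replacement, as a sketch in the spirit of the linked commit rather than a quote of it (leveldb's Status/Slice/WritableFile types are assumed, error handling is minimal):

```cpp
#include <windows.h>
#include "leveldb/env.h"     // leveldb::WritableFile, Status, Slice (assumed)

// Sketch of a WritableFile on the plain Win32 file API: no memory mapping,
// and Sync() actually reaches the disk via FlushFileBuffers().
class Win32WritableFile : public leveldb::WritableFile {
    HANDLE handle_;
public:
    explicit Win32WritableFile(HANDLE h) : handle_(h) {}
    ~Win32WritableFile() { Close(); }

    leveldb::Status Append(const leveldb::Slice& data) override {
        DWORD written = 0;
        if (!WriteFile(handle_, data.data(), (DWORD)data.size(), &written, NULL))
            return leveldb::Status::IOError("WriteFile failed");
        return leveldb::Status::OK();
    }
    leveldb::Status Close() override {
        if (handle_ != INVALID_HANDLE_VALUE) CloseHandle(handle_);
        handle_ = INVALID_HANDLE_VALUE;
        return leveldb::Status::OK();
    }
    leveldb::Status Flush() override {
        // In this sketch Flush() and Sync() both sync to disk -- probably
        // overkill, but it cannot hurt (as noted below).
        return Sync();
    }
    leveldb::Status Sync() override {
        return FlushFileBuffers(handle_)
            ? leveldb::Status::OK()
            : leveldb::Status::IOError("FlushFileBuffers failed");
    }
};
```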
< GitHub92> [bitcoin] sipa opened pull request #6916: Remove BIP30 enforcement, as it is impossible to trigger since BIP34 (master...nobip30) https://github.com/bitcoin/bitcoin/pull/6916
< sipa> wumpus: we should do that, I think
< GitHub118> [bitcoin] sipa closed pull request #6916: Remove BIP30 enforcement, as it is impossible to trigger since BIP34 (master...nobip30) https://github.com/bitcoin/bitcoin/pull/6916
< gmaxwell> :)
< sipa> gmaxwell: i'm trying to think whether enforcing it for all blocks (except those two) that don't have BIP34 is safe
< sipa> as BIP34 is only sufficient when there are no actual duplicate coinbases
< sipa> but there are
< gmaxwell> sipa: if we ship a wad of the 32 least significant bits of the block hashes up to the height bip 34 activates; and then require that, that would be sufficient.
< sipa> that's not very different from a checkpoint...
< morcos> sipa: ok so i have an even stranger one for you
< morcos> when TestBlockValidity ends, between then and when control returns to CreateNewBlock, consistently takes 10-15 ms
< morcos> if you recently flushed your cache, i think this number can be 20-25 ms
< gmaxwell> sipa: checkpoints don't fix this, since the corruption can slip in between them.
< gmaxwell> is it spending 10-15ms in free? :)
< morcos> but i have no idea what if anything should be happening then, the CCoinsViewCache destructor runs, but what else
< morcos> gmaxwell: is that possible?
< sipa> and that destructor doesn't do anything...
< sipa> except deleting cacheCoins
< sipa> morcos: you could try to do a cacheCoins.clear() in the CCoinsViewCache destructor, and time that?
< morcos> ok
< sipa> or add a .Clean() method that dispatches to cacheCoins.clear()
< gmaxwell> morcos: it's possible e.g. freeing a kazillion and one tiny objects. I'd done some profiling in the past that suggested to me we were losing a lot of time in the allocator.
< sipa> #6914 should help there too
< morcos> yep, clearing the cache takes 10+ ms
< sipa> try with #6914 :)
< sipa> actually, it can at most give a 2x speedup
< morcos> now the confusing thing is it sometimes takes twice as long if pcoinsTip (the base cache) had recently been semi-flushed
< morcos> i wonder if that's a memory thrashing thing, or it has something to do with the flags on those coins
< wumpus> replaced the leveldb win32 writable file, let's see how it goes https://github.com/laanwj/bitcoin/commit/86050deaabe7ca0a19db1c5a65e3174c9cb14511
< gmaxwell> \O/ always good to see a fix that removes a lot of code.
< wumpus> it is! it's extremely simple now, the only thing I'm not sure about is Sync() versus Flush(), they both do a sync to disk now
< wumpus> that's probably overkill but it cannot hurt either
< wumpus> going to let it sync a bit and then cause another IRQL_NOT_LESS error and see if it avoids ending up in broken state
< sipa> great!
< sipa> we don't need particularly good write performance, i guess
< sipa> as we mostly do batch writes
< morcos> gmaxwell: so the easy simple thing re: BIP 30 would just be to disable it for blocks after a certain height right?
< morcos> eh, never mind
< wumpus> agreed
< morcos> we could disable it for blocks that have a certain block as their ancestor
< morcos> such a block being after BIP 34 activated and after the pre-existing duplicate coinbases diverged and were buried
< sipa> morcos: fundamentally, the bip30 check can be disabled for any transaction that depends on a bip34 coinbase
< gmaxwell> sipa: the existing BIP30 violations overwrote without proliferation.
< sipa> but there have been duplicated coinbases in the past, so there can be transactions that spend such a duplicate, which can become duplicated as well
< gmaxwell> meaning there can be no duplicate children.
< sipa> gmaxwell: if you know the chain
< sipa> you can be fed an alternative chain in which that's not the case
< sipa> no?
< morcos> right so if we only disable if you have an ancestor block which sufficiently buries those, then you're good on the main chain
< gmaxwell> Right but we could enforce BIP30 for all chains until the point where BIP34 activates.
< morcos> and if you're fed an alternate chain you won't disable the check
< gmaxwell> oh I see, if it proliferates it's not enough.
< sipa> gmaxwell: BIP34 or not is not enough
< sipa> even post BIP34, a transaction that spends a previous duplicate coinbase is possible (in an alternative chain)
< sipa> so a checkpoint would work
< gmaxwell> yea, so we could disable BIP30 when BIP34 activates if and only if the ancestor chain is the real one.
< morcos> yes, thats what i was trying to say
< morcos> that's the best kind of checkpoint: it doesn't require you to actually be on the real chain, it just saves you some work if you are
< sipa> indeed
< gmaxwell> the constant ancestor tests are slightly annoying but much better than running bip30 everywhere.
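What the group converges on, sketched against ConnectBlock (the BIP34 height and hash are the real mainnet values; the surrounding variable names are illustrative):

```cpp
// Illustrative sketch of the rule converged on above: BIP30 stays enforced
// everywhere EXCEPT when the block descends from the real chain's BIP34
// activation block, because post-BIP34 coinbases (and their descendants on
// that chain) can never be duplicates. Height/hash are mainnet BIP34 values.
static const int BIP34_HEIGHT = 227931;
static const uint256 BIP34_HASH = uint256S(
    "0x000000000000024b89b42a942fe0d9fea3bb44ab7bd1b19115dd6a759c0808b8");

bool fEnforceBIP30 = true;
CBlockIndex* pindexBIP34 =
    pindex->pprev ? pindex->pprev->GetAncestor(BIP34_HEIGHT) : nullptr;
// Only skip the check if our ancestor at the BIP34 height is the real one:
// on an alternative chain, duplicate coinbases could still exist.
if (pindexBIP34 && pindexBIP34->GetBlockHash() == BIP34_HASH)
    fEnforceBIP30 = false;
```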
< morcos> so erasing these caches is absurdly expensive. sometimes it's 40ms+
< morcos> but why is it higher shortly after the underlying cache has been erased, that i don't understand
< sipa> how many entries?
< morcos> the 10ms case was on the order of 5000 entries
< sipa> maybe we need a special throw-away-cache which uses an append-only pooled allocation
< sipa> ... though again, a per-txout cache would be better for that
< morcos> but the number of entries is irrelevant, it's the number of txins in those entries i guess right?
< sipa> txouts
< morcos> yeah,
< gmaxwell> what are the heap allocated objects in it? the entries themselves. scripts inside them (largely fixed by pieters alt-vector)? what else?
< sipa> gmaxwell: one allocation in the hashmap per txid; for each txid there is one allocation for a vector of txouts; for each txout there is one allocation for the script (partially fixed by prevector)
< gmaxwell> interesting that we always load all the txouts but are still taking per-txout allocation overhead.
< gmaxwell> so in any case if transactions are tending to gather from many distinct txids, then you potentially have a huge blowup in allocations; it's e.g. two heap allocations for each txout, ... so if you gather from a 100 txout transaction there are 200 heap allocations for each txin that does that.
< gmaxwell> (well, 201) and pieter's vector thingy would reduce that to 101 heap allocations.
< gmaxwell> They're probably destroyed in the same order they're created, which is probably a fairly expensive order for malloc too.
< gmaxwell> as an aside, after thinking about the heap behavior; I would guess a non-trivial amount of the 'remove on replace' memory savings is fictitious.
< gmaxwell> Due to fragmentation.
< morcos> "remove on replace"?
< gmaxwell> attempts to remove cache entries for transactions evicted from the mempool. Remove on reject is probably effective, since it's removing things it just added.
< morcos> ah yes, well the whole thing i'm trying to accomplish here is making that unnecessary
< morcos> if flushing the cache more often is doable (b/c you don't flush out the hot entries you're about to use)
< gmaxwell> sipa: so an append only arena that we can dump as a single operation would ... be a thing that could be done, _but_ it leaves us in a world where the only management thing we can do is flush() which would be unfortunate.
< morcos> then instead of worrying about what you're adding, just check far more often that you're not too big. most of the time you won't have new stuff to write to the DB anyway
< sipa> gmaxwell: it would be awesome for blocks, not for the mempool
< morcos> but for some reason, the cache clearing of the child cache is slower if you've done one of these controlled flushes on the parent cache (or a real full flush) right before you create the child cache
< morcos> that makes no sense to me
< gmaxwell> sipa: hm. even for blocks, say you process block 100 and read in transactions X,Y,Z. then you flush. Then block 101 needs transactions Y and Z again. I'm not sure how often that happens, but I would bet that it's way way more common than chance.
< sipa> gmaxwell: i mean for the scripts in a block
< sipa> gmaxwell: the scriptSigs
< sipa> not talking about UTXO set or mempool
< gmaxwell> ah, indeed. you even have a nice figure on the size of the arena to allocate: just allocate the size of the block, and then it never needs to be realloced.
< sipa> even for a single transaction it's probably worth it
< sipa> as they're immutable
< sipa> hell, a whole transaction could be a single malloced block
< sipa> with some internal offsets/pointers in it
< gmaxwell> sipa: thats really what I'd prefer to see. I don't think having one allocation per transaction is unreasonable; but these nested allocations are goofy.
< gmaxwell> sipa: similar to that: AFAIK utxo cache entries cannot change in size.
< sipa> individual txouts no; the per-tx ones can
< gmaxwell> sipa: why? the only operation is that they can have entries spent, but that can be a flag that doesn't require resizing the object.
< gmaxwell> (and the operation that flushes the dirty part could resize the object when it does so, I suppose, as it will already have exclusive access to it, no?)
< sipa> gmaxwell: you want to remove the allocated scripts for spent entries, no?
< sipa> gmaxwell: otherwise a full in-memory sync from scratch needs memory for every UTXO ever created, rather than a working set
< gmaxwell> sipa: not quite, if it's spent completely you just drop the object. So how much partially spent would there be? That's also an unusual case. In any case, it could trigger a rebuild at any time you have exclusive access. basically batching the malloc overhead.
< sipa> you always have exclusive access
< gmaxwell> e.g. when you spend an entry you set a dirty flag and increment a wasted bytes counter. Then if the wasted bytes are over X when you spend from it, you rebuild in place and realloc.
< wumpus> partial spends (of transaction outputs) are quite common
< wumpus> there are lots of transactions with many outputs
< gmaxwell> And if it's spent completely, you just free it.
< gmaxwell> but say a block spends the 100 outputs in order in a block, it makes no sense to resize the object 100 times and then free it.
< sipa> vectors never reallocate on shrink
< sipa> only on grow
< gmaxwell> sipa: With a vector we instead have 201 mallocs and 201 frees in that case. (and probably a few reallocs along the way)
< gmaxwell> whereas if the cache entry is a single fixed size object, which never grows, and shrinks only in a deferred way when the amount of memory saved by shrinking would be non-trivial, it's 1 malloc, 1 free, and a few reallocs for shrinking along the way... and always in chunks large enough where it's meaningful (e.g. where fragmentation doesn't prevent the freed memory from being used anyways).
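gmaxwell's deferred-shrink entry, as a hypothetical sketch (none of these names exist in the codebase): spends flip a flag and count waste, and the single backing allocation is compacted only when enough is wasted.

```cpp
#include <cstddef>
#include <cstdlib>
#include <vector>

// Hypothetical sketch of the deferred-shrink cache entry: spending an
// output only flips a flag and counts wasted bytes; the single backing
// allocation is rebuilt (one realloc) only once enough is wasted, instead
// of churning one heap object per output like a vector-of-vectors does.
struct PackedCoinsEntry {
    unsigned char* blob = nullptr;  // all outputs serialized back to back
    size_t blob_size = 0;
    std::vector<bool> spent;        // per-output spent flag; no resize on spend
    size_t wasted_bytes = 0;        // bytes in blob belonging to spent outputs

    void Spend(size_t index, size_t out_bytes) {
        spent[index] = true;
        wasted_bytes += out_bytes;
        if (AllSpent()) {
            free(blob); blob = nullptr; blob_size = 0;  // drop whole entry
        } else if (wasted_bytes > blob_size / 2) {
            Compact();  // rebuild in place + realloc, batching the frees
        }
    }
    bool AllSpent() const;  // declarations only; this is a shape sketch
    void Compact();
};
```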
< GitHub21> [bitcoin] laanwj opened pull request #6917: leveldb: Win32WritableFile without memory mapping (master...2015_10_leveldb_win_nomap) https://github.com/bitcoin/bitcoin/pull/6917
< wumpus> seems to have worked...
< gmaxwell> \O/
< gmaxwell> too bad the chainstate obfuscation is a bit invasive; it would be nice to have a 0.11.2 that fixes all known corruption issues for windows users.
< sipa> i think it's backportable
< wumpus> it's not too bad
< gmaxwell> (it wasn't that interesting to consider when the leveldb ones remained)
< gmaxwell> warren: if you want to do a bounty, instead of fixing windows (which wumpus just did, and we should send him flowers)-- perhaps the bounty we should ask for is this: An NBD server that takes an initial disk image file (compatible with loopback mount) and logs all writes to it, instead of actually writing them, and then applies the writes as a patch in memory (doesn't matter if uses lots of ram).
< gmaxwell> Then it lets you restart it with an image + old log + N and it will truncate the last N writes off the old log. This would let you construct a test harness where we start a node against an existing chain, sync one block, and shut down. Then attempt to restart with one write less. and so on, for every single write back to the first one, and make sure the node can restart at every point.
< wumpus> yes, that could be useful
< gmaxwell> without that kind of test harness I dunno how _anyone_ manages to write database software that actually works (well: answer, according to that usenix paper, they mostly just fail. :) )
< warren> gmaxwell: you might have convinced me that I shouldn't be spending personal income on bounties anymore, I'd be happy to specify it though =)
< warren> kanzure: ping
< kanzure> pong
< kanzure> for which thing have i been summoned?
< wumpus> there are a few debugging and reverse engineering tools that can rewind execution in memory, but I know of no file systems that can rewind/replay operations explicitly in fine granularity. You'd say that with journalling file systems it should be possible by truncating/rewinding the journal, but never been able to find reports of anyone doing that...
< gmaxwell> kanzure: ever seen people who own both a parrot and a dog, ... and the parrot will sometimes call the dog just to see it come?
< kanzure> there was a linux tool for replay but requires kernel module, does replay and rewind of execution state, very bizarre alien thing
< gmaxwell> wumpus: I wasted a bunch of time searching and could find nothing. But it seems to me that this is probably only a few days work on top of a simplistic NBD server.
< kanzure> and maybe http://rr-project.org/ but more the last link
< wumpus> yes, for execution state for example pandare can do that
< kanzure> this could possibly be extended to windows vm things but w/e
< gmaxwell> RR is awesome, and can do this for execution state, but we need it for disk state separate from execution.
< kanzure> can't find pandare, please link?
< wumpus> but that's not really what we want here - we want to restart the application at every point of disk writing
< gmaxwell> I expect most of the work related to my NBD suggestion will be discovering that all filesystems are broken, and being unable to get results against bitcoin until the file systems are fixed. :(
< kanzure> yeah probably just write your own fusefs plugin or something
< kanzure> and ramfs with snapshotting or something
< * kanzure> handwaves
< gmaxwell> NBD is basically the perfect interface for this. It is a remote block device whose primitives are "read this block" and "write this block"
< kanzure> "network block device"
< kanzure> i hope this isn't samba
< gmaxwell> so the code becomes if read check the log replay hashtable, no hits? go to the read only backing device. if hit return that. if write, append log, and add to the hashtable. On restart, replay old log into hashtable up to the cutoff point.
< wumpus> samba is not a network block device, but a network filesystem :)
< kanzure> (samba is many things)
< gmaxwell> the test harness is, mount the thing, sync a block. shut down. unmount. restart daemon with -1, mount, start daemon sync next block and record success, unmount, go back to the original oldlog, start with -2 ... and so on. for each offset. If there are a million writes made while syncing that first block, you restart a million times. Once that starts failing, you change the behavior so that it
< gmaxwell> turns the last write to line-noise... and try it again.
< gmaxwell> and then you know that, absent reordering at lower levels, there is no point where unclean termination can leave the state irrecoverable in the log-under-test.
< gmaxwell> obviously this could model reordering too, if barriers get communicated across NBD (I dunno if they do)... but the state space becomes huge fast, so I think you could only try random cases, and not an exhaustive test.
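The core of the proposed server, sketched without any actual NBD plumbing (types and names here are invented; this is only the read/write/replay logic gmaxwell spells out):

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

// Sketch of the write-log backend: reads hit the replay table first, then
// fall through to the read-only backing image; writes only append to a
// log. "Restart with the last N writes truncated" is just replaying a
// prefix of the previous run's log.
struct WriteRecord { uint64_t block; std::vector<uint8_t> data; };

class ReplayLogDevice {
    std::unordered_map<uint64_t, std::vector<uint8_t>> overlay_;  // block -> data
    std::vector<WriteRecord> log_;

public:
    // Replay all but the last `truncate` writes of an old log.
    void Replay(const std::vector<WriteRecord>& old_log, size_t truncate) {
        size_t n = old_log.size() > truncate ? old_log.size() - truncate : 0;
        for (size_t i = 0; i < n; ++i)
            overlay_[old_log[i].block] = old_log[i].data;
    }
    void Write(uint64_t block, std::vector<uint8_t> data) {
        log_.push_back({block, data});      // log it...
        overlay_[block] = std::move(data);  // ...and serve it from RAM
    }
    // Returns true if served from the overlay; otherwise the caller reads
    // the untouched backing image file.
    bool Read(uint64_t block, std::vector<uint8_t>& out) const {
        auto it = overlay_.find(block);
        if (it == overlay_.end()) return false;
        out = it->second;
        return true;
    }
};
```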
< kanzure> warren: random pings in the future are much appreciated, thank you
< gmaxwell> wumpus: please post test binaries with the leveldb fix. 99.9% of windows users won't test from source.
< wumpus> the randomized brute force approach would be to crash, restart, crash, restart a VM running bitcoind in a loop, then a zillion times, trying to get it into a state where it won't recover
< GitHub91> [bitcoin] sipa opened pull request #6918: Make sigcache faster, more efficient, larger (master...smallsigcache) https://github.com/bitcoin/bitcoin/pull/6918
< wumpus> gmaxwell: maybe jonasschnelli would like to post executables; I don't have the infrastructure for that at the moment
< gmaxwell> sipa: "faster, stronger, better!"
< kanzure> you could selectively break parts of the vm maybe
< kanzure> e.g. increase unreliability of different system calls
< gmaxwell> sipa: hitrate could be improved further if you store the height at the time an entry is added, and the random eviction picks N entries and drops the one with the lowest height.
< gmaxwell> sipa: even with it 10x larger, the hitrate for transactions in the current mempool will not be 100% and that's nuts.
< sipa> gmaxwell: around 99.3% hitrate on average for the last 8000 entries
< gmaxwell> so let's say the block has 2000 signatures drawn from the last 8000, and a 99.3% hitrate. So 14 misses. If a verify takes 420 microseconds (as it does for openssl), then that's 6 ms added to block processing.
< sipa> the last 1000 have 99.9% hitrate
< sipa> one way to do it would be to use an unordered_map to int64_t, which just increases sequentially
< sipa> and when removing, randomly find 4 entries and sort them by that number
< gmaxwell> or 1%-6% slowdown (depending on how much we speed up the rest of block processing).
< sipa> maintaining a fully ordered index adds a lot of memory
< gmaxwell> I was basically suggesting that but using height, because I think we actually want the oldest of the ones at the current height usually.
< sipa> otherwise we need to pass down height through several layers of code that are currently height-oblivious
< gmaxwell> so map to int32_t, with the height at insertion time. and then pull four entries and reject the lowest height.
< gmaxwell> oh well meh, true. though block height advance is the magical event that should cause entries to become useless. Not age.
< sipa> we could make it wipe entries on block verify
< sipa> that's trivial
< gmaxwell> sounds even better. I think with that we could avoid increasing its memory usage, and stay at 10mb.
< gmaxwell> sipa: if we can do that, then we could also have that same event trigger a counter that a block has been accepted, no?
< sipa> gmaxwell: it's a boolean passed down saying whether we want to store or not
< sipa> per entry
< phantomcircuit> er
< phantomcircuit> bool Encrypt(const CKeyingMaterial& vchPlaintext, std::vector<unsigned char> &vchCiphertext);
< phantomcircuit> actually nvm
< phantomcircuit> i had them reversed in my brain
< gmaxwell> sipa: the eviction is probably enough, though it still has a bias towards the more recent inserts rather than a bias towards preserving the oldest inserts which are since the prior block.
< sipa> gmaxwell: meh
< gmaxwell> I don't think it matters much, but evict on verify will make that sighash single trick for spending dustspam slow to verify.
< gmaxwell> sipa: what is the average hit rate for transactions 6000 to 8000 ago?
< phantomcircuit> gmaxwell, to verify that im reading this right, each time an encrypted wallet is unlocked the first key in mapCryptedKeys is checked against its pubkey and the first time the wallet is unlocked all of the keys are checked against their pubkey, correct?
< gmaxwell> phantomcircuit: thats my recollection of what it does, yes.
< gmaxwell> it's a slow and not very good test-- the problem, of course, is you get no verification at all if the wallet is never unlocked.
< phantomcircuit> gmaxwell, right
< phantomcircuit> how about make a list for eviction and evict only when a block is completely verified?
< gmaxwell> phantomcircuit: complicated and screws up the layering.
< gmaxwell> phantomcircuit: I'd rather have a hash-based check that stays with the key, and gets checked on load and any time it's used.
< phantomcircuit> gmaxwell, no i meant for the sigcache thing
< gmaxwell> phantomcircuit: I know, multiple threads of discussion.
< sipa> phantomcircuit: complicated and screws up the layering :)
< sipa> phantomcircuit: the sigcache code doesn't know about a 'block' concept at all
< gmaxwell> sipa: I don't see why the sigcache couldn't just have a counter in it, and every time you verify a block you take a lock on the sigcache and increment that counter. And the current counter value rides with the entries in an ordered map, and does the biased eviction I suggested.
< sipa> gmaxwell: perfectly doable
< gmaxwell> that won't require passing any height around, it's not height anymore then, it's just 'sigcache epoch'
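The epoch scheme as a sketch (hypothetical class; the real sigcache at the time stored a set of hashes, and uint256/SignatureCacheHasher are assumed from the codebase):

```cpp
#include <cstdint>
#include <iterator>
#include <random>
#include <unordered_map>

// Hypothetical sketch of the 'sigcache epoch' eviction: validation never
// passes a height down; connecting a block just bumps a counter, and
// eviction samples a few random entries and drops the oldest-epoch one.
// (Linear std::advance is for clarity only -- a real version would sample
// hash buckets instead.)
class SigCacheSketch {
    std::unordered_map<uint256, uint32_t, SignatureCacheHasher> entries_;
    uint32_t epoch_ = 0;
    std::mt19937 rng_;

    auto RandomEntry() {
        auto it = entries_.begin();
        std::advance(it, rng_() % entries_.size());
        return it;
    }

public:
    void BlockConnected() { ++epoch_; }  // the only hook validation needs

    void Insert(const uint256& sighash, size_t max_size) {
        while (!entries_.empty() && entries_.size() >= max_size) {
            auto victim = RandomEntry();
            for (int i = 0; i < 3; ++i) {         // biased random eviction:
                auto it = RandomEntry();          // of 4 samples, drop the
                if (it->second < victim->second)  // one inserted longest ago
                    victim = it;
            }
            entries_.erase(victim);
        }
        entries_[sighash] = epoch_;
    }
};
```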
< sipa> going to benchmark without it to see how large the sigcache actually gets
< gmaxwell> well it will get to the maximum size, since there is no eviction except on full. :)
< sipa> with the evict-when-seen-in-block rule
< phantomcircuit> gmaxwell, heh
< gmaxwell> k, that does have that pathological performance I mentioned. sadly I don't see a nice way to fix that, except perhaps having a one entry FIFO for the last evicted item.
< sipa> gmaxwell: hmm, what pathological performance?
< gmaxwell> sipa: There is, for example, a block with a 1MB tx using the SIGHASH_SINGLE bug so that all inputs have the same signature. If it's cached it's almost instant to verify ... otherwise....
< sipa> oh!
< gmaxwell> similar OP_DUP2 CHECKSIG OP_DUP2 CHECKSIG ... (or checkmultisig with the same pubkey).
< phantomcircuit> we still end up calling SignatureHash even with the sigcache right?
< sipa> yes
< phantomcircuit> any reason CryptedKeyMap is in keystore.h instead of src/wallet/crypter.h ? (it's not used anywhere outside of src/wallet/crypter.{h,cpp})
< sipa> nope
< sipa> CKeyStore itself should move to script/sign.h
< sipa> CBasicKeyStore can stay
< sipa> and CCryptoKeyStore can indeed move to wallet/crypter
< phantomcircuit> gmaxwell, also yes that was my basic plan for adding hashing for ckey entries
< GitHub83> [bitcoin] laanwj closed pull request #6893: forward-declare univalue in rpcclient + remove include in rpcserver.cpp (master...univalue) https://github.com/bitcoin/bitcoin/pull/6893
< GitHub43> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/725539ea0376...d482c0a7b246
< GitHub43> bitcoin/master e9e6163 Pieter Wuille: Make -checkmempool=1 not fail through int32 overflow
< GitHub43> bitcoin/master d482c0a Wladimir J. van der Laan: Merge pull request #6896...
< GitHub192> [bitcoin] laanwj closed pull request #6896: Make -checkmempool=1 not fail through int32 overflow (master...fixchainsize) https://github.com/bitcoin/bitcoin/pull/6896
< GitHub164> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/d482c0a7b246...48b5b84ee511
< GitHub164> bitcoin/master 30d9662 Gregory Maxwell: Reject invalid pubkeys when reading ckey items from the wallet....
< GitHub164> bitcoin/master 48b5b84 Wladimir J. van der Laan: Merge pull request #6906...
< GitHub146> [bitcoin] laanwj closed pull request #6906: Reject invalid pubkeys when reading ckey items from the wallet. (master...ckey_pubkey_check) https://github.com/bitcoin/bitcoin/pull/6906
< dcousens> sipa: don't mind my nits
< sipa> dcousens: you're welcome to ask questions :)
< dcousens> PR is super clean, the only semantics that weren't clear to me was the logic around `store`
< dcousens> What does it represent? A data 'store', or, whether to allow the sigCache to be used?
< sipa> store==true is set for mempool transactions, store==false is set for block validation
< sipa> the original intent was that we wouldn't store validations done during block validation, only query the cache
< dcousens> So the current usage is whether or not it should use a cache at all
< sipa> no
< sipa> with store==false, the cache is still queried; the result of validation is just not stored
< dcousens> But the cache would be empty no?
< dcousens> The set only gets inserted into if `store == true`
< sipa> it gets filled by mempool validations
< sipa> and queried for block validations
< dcousens> AH
< dcousens> bloody inline statics
< dcousens> sorry, missed the static
< dcousens> That makes sense now
< dcousens> Though signatureCache was a member of CachingTransactionSignatureChecker
< dcousens> Thought*
< dcousens> I'm too used to just never having globals haha, anyway, cheers for answering questions :)