< ProfMac> was there ever a time when GetBlockWork used the actual hash instead of nBits to calculate work?
< bitcoin-git> [bitcoin] promag opened pull request #12419: wallet: Force distinct destinations in CWallet::CreateTransaction (master...2018-02-distinct-destinations) https://github.com/bitcoin/bitcoin/pull/12419
< gmaxwell> no, that wouldn't be correct.
< ProfMac> tell me more.
< gmaxwell> the hash reflects more than actual work, by a factor of two on average. It also has very high variance, meaning that if you got a block with ten times the expected 'work' (which you'd get in 1 in 10 blocks) you could just delay announcing it until a couple of blocks had passed, then announce it and be very likely to reorg them out.
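For reference, the nBits-based calculation being described is what Bitcoin Core's GetBlockProof (src/chain.cpp, the successor to GetBlockWork) does: work is derived from the target encoded in the header, and the hash actually found never enters it.

```cpp
// From src/chain.cpp (lightly abridged): expected work implied by the
// compact target, i.e. 2**256 / (target+1), computed as
// ~target / (target+1) + 1 to stay within 256 bits.
arith_uint256 GetBlockProof(const CBlockIndex& block)
{
    arith_uint256 bnTarget;
    bool fNegative;
    bool fOverflow;
    bnTarget.SetCompact(block.nBits, &fNegative, &fOverflow);
    if (fNegative || fOverflow || bnTarget == 0)
        return 0;
    return (~bnTarget / (bnTarget + 1)) + 1;
}
```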
< promag> gmaxwell: regarding duplicate outputs #12419
< gribble> https://github.com/bitcoin/bitcoin/issues/12419 | Force distinct destinations in CWallet::CreateTransaction by promag · Pull Request #12419 · bitcoin/bitcoin · GitHub
< ProfMac> Yes, gmaxwell, I agree with all that.
< ProfMac> Thanks.
< ProfMac> Not unlike "curling" or "corn hole"
< gmaxwell> n1bor: conversation continued in #bitcoin he posted bench logs which indicated to me that he was IO bound very badly. Today he followed up with,
< gmaxwell> n1bor: 18:28:24 < murrayn> gmaxwell, just to follow up from last night: 1) i shutdown gracefully and restarted, and the reindex started from the beginning. From what you said, it shouldn't do that? and 2) it was definitely disk IO bound. doing it on an SSD this time. much better
< gmaxwell> (1) was something sipa warned him about in here, that restarting with -reindex-chainstate would restart the reindex.
< midnightmagic> doh
< meshcollider> looks like contrib/bitcoin-cli.bash-completion needs an update, hasn't been updated for ages
< esotericnonsense> k, so i'm looking at the JSON-RPC 2.0 spec stuff now. 'strict compliance' requires two main changes as far as i can see. one of them I've made gated behind a flag 'strictjsonrpcspec' (better names accepted)
< esotericnonsense> that's the 'client must send jsonrpc=2.0, server must send jsonrpc=2.0, only one of 'error' or 'result' should be returned'
< esotericnonsense> the other bit is that if the client does not send an id with a request, the server should send a blank response. within a batch it should just be skipped. so if you send [a id=1, b, c id=3] you should get back [aresult, cresult].
< esotericnonsense> i think that's enough to make it fully spec compliant. the former one is enough to get it to work with libjsonrpccpp, could probably write some unit tests based on the jsonrpc spec as well, manual tests i've done look fine
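A minimal sketch of the batch rule described above, written against Bitcoin Core's UniValue; ExecOne is a hypothetical stand-in for the real per-request dispatch. Per the JSON-RPC 2.0 spec, a request without an "id" is a notification and produces no entry in the batch response.

```cpp
#include <univalue.h>

UniValue ExecOne(const UniValue& request); // hypothetical per-request handler

// Sketch only: skip notifications (no "id") when building a batch reply,
// so [a id=1, b, c id=3] yields [aresult, cresult].
UniValue ExecBatch(const UniValue& batch)
{
    UniValue replies(UniValue::VARR);
    for (unsigned int i = 0; i < batch.size(); i++) {
        const UniValue& request = batch[i];
        if (!request.exists("id")) continue; // notification: no response entry
        replies.push_back(ExecOne(request));
    }
    return replies;
}
```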
< bitcoin-git> [bitcoin] MarcoFalke reopened pull request #12349: shutdown: fix crash on shutdown with reindex-chainstate (master...fix-qt-shutdown) https://github.com/bitcoin/bitcoin/pull/12349
< kallewoof> What's going on here? Either that comment is wrong, or the check is wrong... https://github.com/bitcoin/bitcoin/blame/master/src/script/interpreter.cpp#L1233 (it says nOut out of range, but it checks nIn against txTo.vout.size()...)
< kallewoof> I guess it's because SIGHASH_SINGLE requires the input to sign the output at the same index. Just weird looking.
< aj> kallewoof: yeah, "no vout corresponding to nIn" or something would make more sense to me as a comment
< aj> kallewoof: code seems right at least
< aj> kallewoof: wow, that comment used to be a log output saying nOut out of range
< kallewoof> aj: yeah agreed!
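For context, the check being discussed looks roughly like this (paraphrased from SignatureHash in src/script/interpreter.cpp), with aj's suggested wording as the comment:

```cpp
// Invalid use of SIGHASH_SINGLE: there is no vout corresponding to nIn.
// Consensus rules require returning the "hash of one" value here rather
// than failing outright.
if ((nHashType & 0x1f) == SIGHASH_SINGLE) {
    if (nIn >= txTo.vout.size()) {
        return one; // a uint256 with the value 1
    }
}
```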
< bitcoin-git> [bitcoin] laanwj pushed 5 new commits to master: https://github.com/bitcoin/bitcoin/compare/c997f8808256...2dbc4a4740cd
< bitcoin-git> bitcoin/master cc046f6 John Newbery: [tests] Reduce NodeConn connection logging from info to debug
< bitcoin-git> bitcoin/master c32cf9f John Newbery: [tests] Add P2PDataStore class...
< bitcoin-git> bitcoin/master 359d067 John Newbery: [tests] Fix flake8 warnings in invalidtxrequest
< bitcoin-git> [bitcoin] laanwj closed pull request #11771: [tests] Change invalidtxrequest to use BitcoinTestFramework (master...refactor_invalidtxrequest) https://github.com/bitcoin/bitcoin/pull/11771
< bitcoin-git> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/2dbc4a4740cd...f4f4f51f1a94
< bitcoin-git> bitcoin/master a71c56a Luke Dashjr: clientversion: Use full commit hash for commit-based version descriptions...
< bitcoin-git> bitcoin/master f4f4f51 Wladimir J. van der Laan: Merge #11966: clientversion: Use full commit hash for commit-based version descriptions...
< bitcoin-git> [bitcoin] laanwj closed pull request #11966: clientversion: Use full commit hash for commit-based version descriptions (master...ver_full_commit_hash) https://github.com/bitcoin/bitcoin/pull/11966
< wumpus> hmmm
< wumpus> util.cpp:384:121: note: in instantiation of template class 'std::__1::pair<const std::__1::basic_string<char>, boost::interprocess::file_lock>' requested here
< wumpus> boost::interprocess::file_lock& lock = locks.emplace(pathLockFile.string(), pathLockFile.string().c_str()).first->second;
< wumpus> anyone have an idea what can cause this C++ error monstrosity?
< wumpus> so it looks like "static std::map<std::string, boost::interprocess::file_lock> locks;" is illegal with openbsd's clang + boost version combo
< bitcoin-git> [bitcoin] Sjors opened pull request #12421: [qt] navigate to transaction history page after send (master...2018/02/qt-goto-transactions-after-send) https://github.com/bitcoin/bitcoin/pull/12421
< Lauda> Will there be rc4?
< provoostenator> Lauda: there are two open PRs which I suspect need to be fixed before rc4: https://github.com/bitcoin/bitcoin/milestone/30
< Lauda> Alright, thanks. I'll wait for those.
< bitcoin-git> [bitcoin] laanwj opened pull request #12422: util: Use unique_ptr in locks map (master...2018_01_openbsd_util_fix) https://github.com/bitcoin/bitcoin/pull/12422
< gmaxwell> wumpus: can you explain why it works inside a unique ptr? (I am just curious)
< wumpus> gmaxwell: no, I have no idea why it fails in the first place
< wumpus> tbh this is really over my limit of understanding C++
< wumpus> well, having pointers in a map instead of whole objects makes the memory layout 'simpler' to work with, I suppose
< wumpus> as for some reason the whole lock objects can't be moved
< wumpus> and pointers, obviously, can
< gmaxwell> yea, I was wondering if values in maps had to be relocatable, but quick googling didn't answer the question for me.
< wumpus> it must be some edge case that was solved in either a newer version of clang, or in a different version of boost, as I've not seen the issue on other platforms
< wumpus> the 'map' here is used to avoid locking a directory multiple times I guess? this structure is write-only, no lookups are ever done in this map, so there should be no performance difference whatsoever
< wumpus> gah
< goatpig> from the look of the error log
< goatpig> this has to do with the ctor
< goatpig> not the map itself
< wumpus> so emplace returns the current record if one already exists, right?
< wumpus> I don't think this construction even accomplishes what it is meant to accomplish
< goatpig> emplace is like insert afaik, returns a pair of <iter, bool>
< goatpig> the iterator points at the inserted element or the already existing one, the bool tells you if it was added (true) or fetched (false)
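A small self-contained illustration of the emplace semantics goatpig describes; nothing here is Bitcoin Core code.

```cpp
#include <cstdio>
#include <map>
#include <string>

int main()
{
    std::map<std::string, int> locks;
    auto r1 = locks.emplace("datadir", 1); // inserted: r1.second == true
    auto r2 = locks.emplace("datadir", 2); // key exists: r2.second == false,
                                           // r2.first is the existing element
    std::printf("%d %d %d\n", r1.second, r2.second, r2.first->second); // 1 0 1
}
```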
< wumpus> ok, that depends on what lock->try_lock does on a lock that is already locked
< goatpig> you mean within the same thread?
< goatpig> or another one?
< wumpus> within the same process
< goatpig> i think it throws
< goatpig> not sure
< wumpus> this is a per-process lock, not per-thread
< wumpus> I wonder what the intent is for LockDirectory being called on the same directory multiple times
< goatpig> but that wouldn't have anything to do with compiling, that would just be a runtime error
< wumpus> I know.
< wumpus> I'm just doubting that this code does what it is meant to do, at all
< goatpig> typically that's to make sure you got the lock
< wumpus> separate from what it takes to compile it
< goatpig> now if the same thread is trying to get a lock multiple times
< goatpig> it presupposes the methods that are accessed can be used on their own
< wumpus> it should probably at least check the boolean it gets back from emplace
< wumpus> whether the lock already exists
< wumpus> otherwise it will lock an old lock another time
< goatpig> that part is like
< goatpig> idk id have to look at the code
< goatpig> but it seems pointless
< goatpig> why hold a reference to the lock in the scope? that's useless, the map perpetuates it
< wumpus> either that's a no-op, or it throws, but it's pointless
< goatpig> and that could be your compile issue
< goatpig> that these locks are not copyable
< goatpig> ie just get rid of the referencing altogether
< wumpus> defining the map itself causes the compile issue
< wumpus> referencing is not the problem
< goatpig> boost::interprocess::file_lock& lock = locks.emplace(pathLockFile.string(), pathLockFile.string().c_str()).first->second;
< goatpig> get rid of the reference and try
< gmaxwell> in general I wouldn't expect a lock to be copyable.
< goatpig> if that doesnt bork other stuff
< wumpus> goatpig: I tried, see discussion in https://github.com/bitcoin/bitcoin/issues/12413
< goatpig> i.e. just do locks.emplace(pathLockFile.string(), pathLockFile.string().c_str());
< gmaxwell> what the heck is this doing?
< gmaxwell> anyways?
< wumpus> goatpig: as I said above, defining the map itself already causes the error, even if you don't access it at all!
< wumpus> gmaxwell: that was also my question, what is the semantics supposed to be
< goatpig> you could just ninja that by making it a map of shared_ptr<lock>
< gmaxwell> maybe we can remove it all entirely? that's always the best fix when possible. :P
< wumpus> goatpig: that's what I did in #12322 (but with unique_ptr)
< gribble> https://github.com/bitcoin/bitcoin/issues/12322 | Docs: Remove step making cloned repository world-writable for Windows build. by murrayn · Pull Request #12322 · bitcoin/bitcoin · GitHub
< wumpus> eh #12422
< gribble> https://github.com/bitcoin/bitcoin/issues/12422 | util: Use unique_ptr in locks map (fix OpenBSD 6.2 build) by laanwj · Pull Request #12422 · bitcoin/bitcoin · GitHub
< * gmaxwell> channels some contributor and considers just submitting a PR to remove stuff he doesn't understand
< goatpig> wumpus: sure, dunno what the code really intends. i'd prefer unique_ptr when possible
< goatpig> but if it's trying to reference the lock, then it should be using shared_ptrs
< wumpus> no, shared_ptr is not necessary
< wumpus> the lock does not leave the scope at all
< goatpig> the lock exists within the map
< wumpus> yes
< wumpus> and after the change, it exists inside a unique pointer
< goatpig> if you kill the map, if there is a thread with the ptr/reference, it will blow up
< wumpus> which is unique, inside the map
< wumpus> no one has a reference to it
< wumpus> please read the code
< wumpus> no reference leaves the function
< goatpig> checking
< wumpus> the destructor will be called when the process has shut down, after the main function, by automatic cleanup
< wumpus> by that time no one will be using the data dir anymore
< wumpus> the map exists to hold on to the objects until the process shuts down
< goatpig> so the locks persist for the lifetime of the process?
< goatpig> or rather
< goatpig> no method tries to clean them up
< wumpus> as I said, they'll be automatically cleaned up by the runtime framework
< wumpus> <gmaxwell> in general I wouldn't expect a lock to be copyable. <- they are; it's only moving them at most, not copying
< goatpig> wumpus: is it assumed that code gets accessed by only a single thread?
< gmaxwell> goatpig: this isn't an ordinary lock
< gmaxwell> it's a file locking thing
< wumpus> goatpig: yes, it's only used during initialization, by the init thread
< gmaxwell> used to keep separate processes from mucking about with the same data directories.
< wumpus> goatpig: oh... I'm actually not sure anymore
< goatpig> i understand that much, but it's not thread safe, so im assuming this is hit once ever
< wumpus> wallet directories complicated this
< goatpig> i mean the semantics of the method suggest this is accessed just to check the lock is held at all
< wumpus> as soon as dynamic opening of wallets is supported, this can be called from the RPC thread
< wumpus> but right now it's only used from init
< wumpus> it's a good question, the function as it is is certainly not thread safe
< goatpig> you are adding elements to a map. ideally it should check the map has the element, otherwise grab a mutex, add the element, then go back to regular execution
< wumpus> it should grab a mutex before accessing the map at all
< gmaxwell> ^
< wumpus> *if* it's supposed to be thread safe
< goatpig> ok, sure
< gmaxwell> and its trivial here in any case.. since the purpose of the map is just lifetime management for the contained objects.
< wumpus> yes
< goatpig> that would be if the method was not meant to return the state of the lock
< goatpig> the semantics just suggest to me that it could be accessed by another thread
< wumpus> I think the intent is to return true if the process has a lock on the directory, whether it's new or not. Not sure the code matches that intent, at the moment.
< gmaxwell> nah its just init only now. though wumpus is right that dynamic wallet opening/closing will cause accesses from rpc.
< wumpus> I've asked meshcollider (who wrote the code) in the issue just in case
< gmaxwell> for wallet opening we probably want it to return true if it takes a lock and false if it already has it or can't take it.
< meshcollider> wumpus: I think ryanofsky suggested the currently used approach as a comment on the PR
< wumpus> gmaxwell: it's a lock on a directory, not on the wallet, bdb takes care of not opening the same wallet multiple times
< wumpus> gmaxwell: there can be multiple wallets in a directory
< gmaxwell> wumpus: ah point.
< gmaxwell> Carry on then.
< wumpus> we could certainly use unit tests for that function
< wumpus> meshcollider: ok adding jnewbery to reviewers
< meshcollider> jnewbery or ryanofsky?
< meshcollider> wumpus: your PR looks good to me
< wumpus> eh, sorry, yes ryanofsky, added the wrong one
< meshcollider> wumpus: didn't even think about the dynamic loading/unloading case / thread safety when I wrote it, apologies
< meshcollider> Was just trying to fix the issue for 0.16
< wumpus> meshcollider: no worries, we found that issue before it became a problem at least
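Putting the threads of this discussion together, a minimal sketch of the direction being proposed: a mutex-guarded map of unique_ptr-held locks that reports success if the process already holds the lock. Names are approximate, not the real LockDirectory, and it assumes the lock file already exists, since boost::interprocess::file_lock only opens existing files.

```cpp
#include <map>
#include <memory>
#include <mutex>
#include <string>
#include <boost/interprocess/sync/file_lock.hpp>

static std::mutex g_locks_mutex;
// unique_ptr sidesteps the OpenBSD clang/boost issue with storing the
// (move-only) lock objects in the map directly.
static std::map<std::string, std::unique_ptr<boost::interprocess::file_lock>> g_locks;

bool LockDirectorySketch(const std::string& lockfile)
{
    std::lock_guard<std::mutex> guard(g_locks_mutex); // serialize map access
    if (g_locks.count(lockfile)) return true;  // this process already holds it
    auto lock = std::make_unique<boost::interprocess::file_lock>(lockfile.c_str());
    if (!lock->try_lock()) return false;       // another process holds it
    // Keep the lock alive until process exit: the map exists purely for
    // lifetime management, no lookups are ever done on it elsewhere.
    g_locks.emplace(lockfile, std::move(lock));
    return true;
}
```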
< * wumpus> working on a unit test for LockDataDirectory - part of it will only work on UNIX because fork()
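A sketch of what such a test could look like, reusing the hypothetical LockDirectorySketch above. UNIX-only because of fork(), as wumpus notes, and it assumes the platform's advisory locks are not inherited by the child process (the case for the fcntl-based locks boost uses on POSIX).

```cpp
#include <sys/wait.h>
#include <unistd.h>
#include <cassert>
#include <string>
#include <boost/interprocess/sync/file_lock.hpp>

void TestLockAcrossProcesses(const std::string& lockfile)
{
    assert(LockDirectorySketch(lockfile)); // parent takes the lock
    pid_t pid = fork();
    if (pid == 0) {
        // Child is a separate process, so the parent's lock must conflict.
        boost::interprocess::file_lock child_lock(lockfile.c_str());
        _exit(child_lock.try_lock() ? 1 : 0); // exit 0 iff the lock was refused
    }
    int status = 0;
    waitpid(pid, &status, 0);
    assert(WIFEXITED(status) && WEXITSTATUS(status) == 0);
}
```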
< provoostenator> You have got to be kidding... anyone had to jump through these hoops before? http://htmlpreview.github.io/?https://github.com/JKSH/QtSdkRepoChooser/blob/master/doc/index.html
< wumpus> no, not really, though I've had to build qt for embedded devices, the source code is huge, it takes a long time and it's a pain to configure for cross compile
< wumpus> it pretty much regards itself as an operating system
< wumpus> we can be really happy that linux distributions simply package that stuff pre-built
< cfields> wumpus: seems it's moot now, but I think std::piecewise_construct might be what's missing for the emplace
< cfields> as usual, spoke too soon. Red herring. Nevermind.
< wumpus> I'll look at that one, never heard of it
< hkjn0> a bit lazy q: am I reading depends/ docs right that there's currently no way to use other platforms than x86_64 to build e.g. bdb?
< hkjn0> so if I'm compiling code on armv7l and want wallet support, best thing to do is to get on x86_64 instead, and crosscompile towards arm as necessary?
< hkjn0> (I did get bdb4.8 compiled from source on armv7l outside of the depends system, but had trouble getting it to be noticed by the build system..)
< wumpus> hkjn0: the depends system assumes it's building on x86_64, however if you want to build db4 outside the depends system there's the script contrib/install_db4.sh and the flags BDB_LIBS="-L${BDB_PREFIX}/lib -ldb_cxx-4.8" BDB_CFLAGS="-I${BDB_PREFIX}/include"
< wumpus> would be nice if the depends system also supported building *from* other architectures, though
< hkjn0> ah, great, I bet I was just missing BDB_LIBS or somesuch.. will try out the contrib/ script
< wumpus> esp. at some point we'd like to do gitian builds from other architectures
< hkjn0> right. I've been setting up these weird build environments already, would be happy to try to contribute to gitian work or otherwise support other build platforms, just need to understand more of how the current system hangs together first.. :)
< cfields> depends doesn't assume build is x86_64. It's the only arch that really gets tested, though
< cfields> but, all of the machinery is there to auto-detect the build platform and try to cope.
< cfields> hkjn0: if you try building native on arm and run into issues, feel free to ping me
< hkjn0> cfields, thanks! building with contrib/install_db4.sh + the BDB_LIBS it suggested for ./configure, as wumpus suggested, seems to work fine so far
< hkjn0> (before that I tried a plain `make` in depends/ and it failed on armv7l, but it is possible I am missing some packages)
< cfields> hkjn0: well if you'd be willing, I'd be happy to step through the depends with you and try to get it patched to build
< cfields> like wumpus said, it'd be nice to have several working build arches
< hkjn0> cfields: sure, let me know how you'd like to proceed.. do we take it off the main channel to not clutter it up too much?
< cfields> hkjn0: sure, #bitcoin-build
< michagogo> wumpus/cfields: Do you remember (on a describe-in-a-couple-sentences-off-the-top-of-your-head level) what the actual issue is with Xenial and why it's broken?
< michagogo> And does Artful(/Bionic) just work?
< michagogo> (with cross-compilation for Windows, I mean)
< michagogo> I tried looking through the issues on GitHub and got confused by the various different threads and things people tried and experienced
< cfields> michagogo: by default, the toolchain is configured for win threads rather than posix, which don't exist (at least with that gcc version)
< cfields> further, the ssp is busted
< michagogo> And were those fixed in later versions, e.g. Artful?
< cfields> I believe so. But we should be switching to our own toolchains for 0.17 anyway. Or if not a full switch, probably at least for mingw.
< michagogo> That sounds like it would be a good thing
< michagogo> I ask because last night and today I spent a bunch of hours trying to see how hard it would be to just backport the version of mingw-w64 that's in artful/bionic to xenial
< michagogo> I tried the straightforward "`backportpackage` into a PPA", but ran into chains of missing dependencies
< michagogo> I won't bore you with everything I tried and looked at, but at this point I've come to the realization that I don't know enough about Ubuntu packaging and how all that works, and beyond that, more important stuff
< cfields> michagogo: I can upload a stand-alone mingw toolchain soon, if it would help
< michagogo> Like, what is mingw-w64, vs what is gcc, vs what is gcc-mingw-w64
< cfields> heh
< michagogo> and how do all those interact
< cfields> right
< cfields> well, in case it helps...
< michagogo> In #ubuntu-devel someone said: 18:34:22 <rbasak> michagogo: I would step back and consider your approach. I don't know what problem you're solving, but there might be an easier way.
< michagogo> I explained that the version in xenial was broken, and they said this: 18:39:39 <rbasak> Also, if the reason it doesn't work in Xenial is due to a bug, we're quite happy in general to cherry-pick the fix for all Xenial users to benefit.
< michagogo> But somehow I suspect if it were that simple it would have already happened...
< cfields> to build a mingw-w64 toolchain, you need: mingw-w64 headers (think kernel headers), mingw-w64 runtime (think glibc), gcc, and binutils. And obviously they need to be built in a way that lets them play nicely.
< michagogo> Right. So the thing is, I don't actually know the difference between mingw-w64 and gcc
< michagogo> Well, that's not entirely true
< michagogo> I sort of know what gcc is
< michagogo> What I don't know is, what is mingw-w64, and how does it interact with/based on/use gcc
< cfields> mingw-w64 is a shim that allows for posix-y code to work on windows.
< michagogo> So what is gcc-mingw-w64?
< cfields> so you can build a compiler (gcc) that's able to produce win32 code, but it wouldn't do much good if you couldn't use standard unixy apis.
< cfields> so, for example, mingw-w64 can use either win or pthread threading models. Configured without pthread support, you'll have trouble with lots of applications. That's one of our issues.
< michagogo> So gcc-mingw-w64, say, is a version of GCC that builds the mingw-w64 layer into the binaries it produces?
< cfields> it's probably easiest to just think of mingw-w64 as a libc for windows.
< michagogo> I guess at this point the knowledge that I'm missing is: 1) what _exactly_ the term "toolchain" refers to and how you build one
< michagogo> 2) How Ubuntu packages work together to make up a toolchain
< michagogo> And all kinds of other things related to those, like how exactly Ubuntu packaging works
< cfields> well, the packaging is an entirely different beast
< michagogo> I'm thinking I'll drop the issue for now, since I don't really have a ton of time and I suspect it would take me at least a few days to sort all these things out
< michagogo> I mean, regarding what rbasak said over in #ubuntu-devel, are the problems we have simple bugs with focused fixes that can be cherry-picked, or more fundamental problems that can't really be fixed without a wholesale upgrade to a newer version of MinGW?
< cfields> a toolchain is the group of progs that are required to produce a working binary. Typically that's target kernel/os headers in some form, target libc, host compiler, other host apps (in linux, binutils provides ar/ranlib/etc.), and a linker (also provided by binutils on linux)
< jcorgan> michagogo: it's been a little while since i've used gcc-mingw-w64 but i recall in ubuntu that installing that package pulls in all the accessories (runtime, binutils, etc.) that cfields mentioned
< michagogo> And actually, is the issue just with the MinGW component? Meaning, say one were to rebuild the gcc-mingw-w64 packages with the same GCC version and just an upgraded MinGW, would that work?
< jcorgan> i'm late to the conversation here, so not sure of your exact goal
< cfields> michagogo: the thread thing is just a config option. There's also the stack issue, which I haven't looked into for a while, but afaik that's no longer an issue in newer ubuntu versions. So it was either an upstream bug that's been fixed, or a config problem that ubuntu has fixed
< michagogo> jcorgan: Well, at the end of the day, here's my big-picture goal
< michagogo> Specifically, the lines telling you to sudo add-apt-repository "deb http://archive.ubuntu.com/ubuntu zesty universe"; sudo apt update;sudo apt upgrade
< michagogo> Because besides that not working, because zesty is EOL, that's just going to hose your system
< jcorgan> ah, i only have experience going from LTS to LTS, never use the ones in between
< michagogo> Even doing that, you never ever just add the newer repository like that
< michagogo> There's an upgrader
< michagogo> So my thought was, well, can I get the packages from the newer version brought back to xenial?
< jcorgan> it might be possible, but that way lies madness
< michagogo> So it seems.
< michagogo> I tried just naively `backportpackage`ing into a PPA
< jcorgan> you're on zesty and need packages that only exist later than that?
< cfields> michagogo: yea, I'm not sure it's worth the trouble. We already have a plan for resolving this, and it means that we don't have to rely on Ubuntu anymore
< michagogo> I’m not on zesty
< michagogo> I’m actually not an active Ubuntu (or Linux) user day to day
< jcorgan> you could see if the packages you need are in the 'backports' repo
< michagogo> Nah
< michagogo> My idea was to *get* a newer mingw-w64 into -backports
< michagogo> I filed the “please backport” bugs
< cfields> michagogo: well, that certainly wouldn't be a bad thing. It's definitely not just us.
< jcorgan> i expect that would require specialized knowledge of how gcc/binutils/etc are packaged that a typical user wouldn't normally need to know
< jcorgan> and personally i'd look for an alternative way to solve whatever problem
< michagogo> I had the naive thought that, well, it should be easy to backport, because it’s a cross-compiler, not anything that the system uses
< michagogo> jcorgan: AIUI “whatever problem” is “mingw-w64 on Xenial is very broken”
< jcorgan> personally i'd see if the must-be-xenial constraint can be relaxed. but if you wanted a challenge you could learn enough about how it works to help fix the issue. that may not be worth your time, but it could also be fun and very educational.
< jcorgan> (depends on you)
< michagogo> A no-changes backport failed, firstly, because it build-depends on gcc-7
< michagogo> Which must-be-Xenial constraint?
< michagogo> The thing is, 1) that’s the latest LTS
< jcorgan> right, i use that extensively
< michagogo> And 2) that’s the version that you get with WSL, and as far as I know trying to upgrade just breaks it
< jcorgan> ah, another piece of the picture i was unaware of
< michagogo> So the “constraint” is that we really want to let users on Xenial cross-compile us
< jcorgan> you're in a maze of twisty passages, all alike
< michagogo> I mean, it’s possible that if I knew Ubuntu packaging and had enough time I could try just reverting the changes that made it build with gcc 7
< jcorgan> so, getting a backport made of mingw does seem like the right way to go, but the number of people who are likely to know how to do that is probably small
< michagogo> And bring it back to 5.whatever
< michagogo> For all I know it might be just changing the build-depends line back to point at the gcc version that’s in Xenial
< jcorgan> sure
< jcorgan> but there could also be some specific bug in gcc5 that caused gcc7 to be needed.
< michagogo> I doubt it
< michagogo> I mean, I shouldn’t say that
< jcorgan> that does sound like a (relatively) easy first thing to try, and you might get lucky
< michagogo> But I’m pretty sure that the upgrade to gcc 6 in... yakkety or zesty, I think? And to 7 in bionic, was just part of a wholesale upgrade to those versions
< jcorgan> i'm going to be in pain when i start looking at migrating everything to 18.04, aren't I?
< michagogo> But besides not knowing even the basics of Ubuntu packaging, I’d need the ability to go over all the other changes to the gcc-mingw-w64 source package and figure out which ones are related to the gcc 5/7 stuff, vs which ones should be there anyway
< bitcoin-git> [bitcoin] ryanofsky opened pull request #12424: Fix rescan test failure due to unset g_address_type, g_change_type (master...pr/scang) https://github.com/bitcoin/bitcoin/pull/12424
< michagogo> jcorgan: define “everything”
< jcorgan> most of the systems i use/support are 16.04, some still 14.04
< jcorgan> it sounds like a lot of changes are coming in 18.04
< michagogo> No idea
< jcorgan> anyway, this is straying a bit from coredev
< michagogo> I mean, it’s 2 years of progress in the overall Linux ecosystem
< michagogo> Bringing everything up to date
< michagogo> So yeah, some things may break…
< jcorgan> well, the older LTS versions get new packages for everything that isn't a fundamental system change
< jcorgan> and the backports repo even lets you run newer kernels and video drivers
< jcorgan> it's usually the lower-level stuff like sysvinit/upstart/systemd that causes me pain
< michagogo> Get new packages? What do you mean?
< michagogo> My understanding is that -updates only has bug fixes, ~never new wholesale versions of software that have features
< jcorgan> maybe i'm thinking of PPAs, where authors make recent versions of their software for several different ubuntu releases
< michagogo> Yep
< michagogo> With a PPA you can give the people who decide to use it whatever you want, whenever you want
< michagogo> (Including installing a backdoor on their systems. It’s important that you trust the owners of PPAs you use.)
< jcorgan> well, i hope you figure out how to get teh mingw/WSL stuff working
< michagogo> Yeah, I think I'm just going to drop the issue at this point
< michagogo> cfields said that we're going to be using our own toolchain at some point (soon? ish?)
< cfields> michagogo: yes. Ubuntu versions/packages won't matter for gitian/travis after that.
< michagogo> cfields: I'm thinking mostly about the case of someone wanting to cross-build binaries for themselves
< michagogo> Either just not wanting to go through the hassle of setting up gitian, or with something like WSL
< michagogo> I really don't like the fact that we're giving users broken instructions, especially ones that (were they to work) have them completely mess up their system
< cfields> michagogo: sure. None of the new stuff will be required, ofc.
< michagogo> And of course, there's the fact that mingw is broken for everyone else
< bitcoin-git> [bitcoin] richardkiss opened pull request #12425: Add some script tests (master...feature/bool_tests) https://github.com/bitcoin/bitcoin/pull/12425
< gmaxwell> Note: if anyone here uses bitmessage, it has a remote code execution bug. (deserializing using eval)
< sipa> what is it written in?
< gmaxwell> Python.
< contrapumpkin> is today national facepalm security day or something
< esotericnonsense> deserializing using eval ???
< esotericnonsense> -_-.
< instagibbs> the vulnerability is being exploited in the wild as well
< instagibbs> not sure about success, but trying to run windows executables, grab electrum wallets, the like
< gmaxwell> esotericnonsense: deserializing using eval is a thing I've seen many times w/ javascript, php, and python.
< contrapumpkin> java does its own version of that too, if you use their native serialization
< esotericnonsense> gmaxwell: oh sure it's a thing. i just feel like it's one of the most famous 'don't do this' footguns
< esotericnonsense> evidently not
< mmgen> wouldn't use of the json module protect against this?
< mmgen> first json.loads(), then deserialize?
< reardencode> mmgen: yes.
< mmgen> strange they didn't do that
< contrapumpkin> that still seems like an iffy fix, even though the scope for mischief is far more limited
< contrapumpkin> it'll basically call an arbitrary method specified by untrusted data on the specified class
< bitcoin-git> [bitcoin] MarcoFalke opened pull request #12426: qt: Initialize members in WalletModel (master...Mf1802-qtInitializeMembersWalletModel) https://github.com/bitcoin/bitcoin/pull/12426
< mmgen> contrapumpkin: hopefully just an emergency fix. far better than nothing in any case
< contrapumpkin> yup :)
< mmgen> seems like try: assert data[""
< mmgen> seems like try: assert data[""] in ('message','vote') would have been safer
< contrapumpkin> the hoops people go through to avoid writing a proper parser... 😢
< mmgen> since 'message' and 'vote' are the only permissible values
< sipa> it's getting a bit off topic for this channel
< contrapumpkin> sorry :) I'll shut up
< mmgen> sipa: sorry
< sipa> np!
< mmgen> sipa: though it **was** gmaxwell who brought up the topic
< sipa> i'm aware, and i participated too; no blame here
< sipa> just noting that the discussion is getting too far removed from the topic here
< mmgen> sipa: i know, i'll really shut up now ;)
< bitcoin-git> [bitcoin] sipa opened pull request #12427: Make signrawtransaction accept P2SH-P2WSH redeemscripts (master...201802_signrawp2shp2wsh) https://github.com/bitcoin/bitcoin/pull/12427
< gmaxwell> doh, how'd we miss that
< grafcaps> hah
< sipa> gmaxwell: it was actually partially being addressed in #11708
< gribble> https://github.com/bitcoin/bitcoin/issues/11708 | Add P2SH-P2WSH support to signrawtransaction and listunspent RPC by MeshCollider · Pull Request #11708 · bitcoin/bitcoin · GitHub
< promag> sipa: is it possible to add a test for #12427?
< gribble> https://github.com/bitcoin/bitcoin/issues/12427 | Make signrawtransaction accept P2SH-P2WSH redeemscripts by sipa · Pull Request #12427 · bitcoin/bitcoin · GitHub
< sipa> promag: yes, will look into that
< sipa> just wanted to have the PR visible as soon as possible
< Chris_Stewart_5> sipa: Is the basic idea that if we can sign the p2sh redeem script, we should be able to sign P2SH(P2WSH())?
< eklitzke> this is kind of cool, i got a build of bitcoind working with systemtap markers (in this case I'm marking CCoinsViewCache flushes): https://gist.github.com/eklitzke/8bf6957fe886ddec36cde737d69ac6f5
< gmaxwell> eklitzke: whats the advantage of that?
< eklitzke> i'm going to add more probes and then use that to measure some work i'm trying to do to the CCoinsViewCache, e.g. i want to make the cache into a real writeback cache and get cache hit statistics out of it
< eklitzke> today if you wanted to get cache hit statistics from CCoinsViewCache::GetCoin you'd have to hack up some existing rpc to dump the information (or dump it to a log); with stap/dtrace probes that would be much easier and less invasive
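For illustration, a USDT marker of the kind being described can be added with the sdt.h macros that ship with systemtap; the provider/probe names and function below are made up, not eklitzke's actual branch.

```cpp
#include <cstddef>
#include <sys/sdt.h>

void FlushCacheToDisk(size_t dirty_entries, size_t cache_bytes)
{
    // Static tracepoint: near-zero cost when no tracer is attached, and
    // visible to stap/dtrace without hacking an RPC or the debug log.
    DTRACE_PROBE2(bitcoind, cache__flush, dirty_entries, cache_bytes);
    // ... actual flush work would go here ...
}
```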
< warren> Does anyone have working gitian with qemu-kvm instead of lxc? I want to compare notes.
< gmaxwell> eklitzke: you'll defeat its purpose entirely
< gmaxwell> eklitzke: the cache's purpose for existing is not to act as a cache; its purpose is to act as a buffer to prevent writes from hitting the database entirely, and also to allow atomic updates to keep the database consistent with the chain at a block level (this is not as important since we now use a replay for consistency). It also satisfies reads, but if you defeat that it doesn't make a huge performance difference in IBD.
< gmaxwell> okay maybe on a non-SSD the read cache matters somewhat more, to be fair I haven't benchmarked it with defeated read caching on a spinning disk.
< eklitzke> i have thought through these issues, you can increase the amount of coins in memory during ibd in a way that's safe and still batches disk writes
< eklitzke> yes this is primarily for spinning disks
< eklitzke> or worse, people's cloud vm instances
< sipa> eklitzke: i've experimented with various cache flushing strategies before which try to predict which entries will be more useful in the near future, and keep them longer
< gmaxwell> eklitzke: the purpose of the replay stuff we did in 0.15.x was in part to enable incremental flushing, so yes, I also see advantages there.
< sipa> in practice, anything that reduced the ability to avoid writes slowed things down
< gmaxwell> but what we did find is that trying to achieve a higher read hit rate is mostly pointless.
< gmaxwell> (on a SSD at least)
< gmaxwell> I fear we've more or less given up on performance on non-SSDs :( esp since many large spinning disks are shingled now and have horrifying performance.
< gmaxwell> Incremental flushing would also give the advantage of getting rid of flushing latency spikes and perhaps better overlap of IO and validation.
< eklitzke> the flushing code right now is pretty inefficient; databases like mysql do this by having a dirty buffer pool watermark that causes them to flush (potentially a large amount of data), and then they retain data in memory that was flushed to disk if it's not dirty
< eklitzke> if you kept exactly the same semantics wrt flushing a large amount of data, but didn't automatically expire the old coins from memory, that would be an improvement (in my opinion)
< eklitzke> would love to chat with you about it next week, as i'm participating in the chaincode labs hacker residency program
< gmaxwell> eklitzke: We've measured that, it doesn't help.
< eklitzke> i've also measured it and shown that it does
< gmaxwell> Thats the whole reason I brought the subject up-- so you don't assume read caching is important.
< gmaxwell> eklitzke: Interesting, I'd love to see those results.
< eklitzke> if there are any writeups or mailing list posts about experiments in this area i'd be interested to read them
< gmaxwell> eklitzke: probably in the PR introducing the replay on start there are some of them.
< eklitzke> ok, i'll take a look
< sipa> eklitzke: i'll be at the residency on 22nd and 23rd
< gmaxwell> I'd offer you the specific code I benchmarked but I don't have access to the system with it anymore.
< sipa> eklitzke: the bitcoin utxo set is pretty unique as a data set in that a very large number of newly written entries are almost immediately after deleted again
< gmaxwell> We tested many configurations. For an example, one of the specific cases we tested was having flush leave the entries in but marking them as non-dirty and existing in the backing store, and separately removing non-dirty entries when full. What appeared to be the case is that avoiding the read hit seemed basically irrelevant, because if it got read, that means it is getting spent, which means there is going to be an erasure which must hit the disk.
< TD-Linux> since many large spinning disks are shingled now and have horrifying performance. <- these are still really uncommon
< gmaxwell> TD-Linux: they seem to be 100% of people showing up and talking about their node performance on spinning disks, but now that I say that outloud...
< sipa> eklitzke: but we should talk more about this
< eklitzke> i would like to change some of the leveldb settings too, but that's another matter entirely; for instance leveldb is configured to only open 64 files at once, so during ibd or a reindex you see the process thrashing a lot in a pattern where it opens an ldb file, mmaps it, munmaps it, and closes it
< eklitzke> that's another matter entirely though
< gmaxwell> eklitzke: by all means if you have results, great.
< gmaxwell> eklitzke: we suffer from FD exhaustion issues currently. :(
< eklitzke> yeah I talked to BlueMatt about it a bit yesterday, the thing is you can mmap a file, close the fd, and the mmap mapping is still valid
< gmaxwell> ah, cute.
< eklitzke> so if you didn't mind the extra memory overhead from the PTEs, you could potentially mmap all of the ldb files at once and not actually have 1000+ ldb files open
< eklitzke> doing *all* of them is probably too aggressive
< gmaxwell> leveldb doesn't use mmap on 32 bit hosts.
< gmaxwell> and on 64-bit the only harm is the PTE as you note, which is pretty minor.
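A minimal POSIX illustration of the trick eklitzke describes: once mmap() succeeds, the descriptor can be closed and the mapping stays valid, so mapped files don't count against the FD limit. The helper name is made up.

```cpp
#include <cstddef>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

const char* MapWholeFile(const char* path, size_t* len_out)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return nullptr;
    struct stat st;
    if (fstat(fd, &st) != 0 || st.st_size == 0) { close(fd); return nullptr; }
    void* base = mmap(nullptr, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    close(fd); // the mapping stays valid; the descriptor slot is freed
    if (base == MAP_FAILED) return nullptr;
    *len_out = st.st_size;
    return static_cast<const char*>(base); // caller munmap()s when done
}
```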
< gmaxwell> eklitzke: if you want to get into leveldb internals there are probably a lot of other gains. E.g. our keys are almost uniformly random, and so bisections in leveldb can probably be replaced with a secant search. (and if our keys aren't random enough, we could fix that) leveldb block size is tunable too
< gmaxwell> I think I found larger blocks to be a loss, but that might not be as true with a secant search.
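A toy version of the "secant search" (interpolation search) idea: with near-uniform keys, probing at the position the key's value predicts beats plain bisection on average. Illustrative only, not LevelDB code.

```cpp
#include <cstdint>
#include <vector>

// Returns the index of key in the sorted vector v, or -1 if absent.
long InterpolationSearch(const std::vector<uint64_t>& v, uint64_t key)
{
    size_t lo = 0, hi = v.size();
    while (lo < hi) {
        uint64_t a = v[lo], b = v[hi - 1];
        size_t mid;
        if (key <= a) mid = lo;
        else if (key >= b) mid = hi - 1;
        else {
            // Probe where a uniform key distribution predicts the key sits.
            long double frac = (long double)(key - a) / (long double)(b - a);
            mid = lo + (size_t)(frac * (hi - 1 - lo));
        }
        if (v[mid] == key) return (long)mid;
        if (v[mid] < key) lo = mid + 1; else hi = mid;
    }
    return -1;
}
```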
< eklitzke> yeah I was actually surprised that leveldb itself wasn't doing the mmap trick I just described (or at least offering it as a tunable parameter), I want to dig deeper into their code
< gmaxwell> eklitzke: I have a belief, which I've not been able to validate yet, that a lot of the performance limitations of the current dbcache are all the malloc activity... that often doesn't show up too clearly in profiles. We'd talked about changing the main dbcache to an open hashtable with fixed size entries (and some special indirection to an exception map for unusually large entries).. but it's a lot of work to just try it and see if it helps.
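A very small sketch of the "open hashtable with fixed-size entries" idea: linear probing into a flat, pre-allocated array, so lookups and inserts never touch malloc. Keys and values are simplified to integers, and there is no eviction or the exception map gmaxwell mentions.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

struct Slot { uint64_t key = 0; uint64_t value = 0; bool used = false; };

class FlatCacheSketch {
    std::vector<Slot> slots_; // fixed-size entries, allocated once up front
public:
    explicit FlatCacheSketch(size_t n) : slots_(n) {}
    bool Insert(uint64_t key, uint64_t value) {
        for (size_t i = 0; i < slots_.size(); ++i) {
            Slot& s = slots_[(key + i) % slots_.size()]; // linear probing
            if (!s.used || s.key == key) {
                s.key = key; s.value = value; s.used = true;
                return true;
            }
        }
        return false; // table full; a real design would evict or flush here
    }
    const Slot* Find(uint64_t key) const {
        for (size_t i = 0; i < slots_.size(); ++i) {
            const Slot& s = slots_[(key + i) % slots_.size()];
            if (!s.used) return nullptr; // empty slot: key absent (no deletes)
            if (s.key == key) return &s;
        }
        return nullptr;
    }
};
```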
< eklitzke> interesting, because glibc malloc already has systemtap probes builtin, since 2.24 i think, so that would go well with my systemtap branch because you could write stap scripts that correlated activity in malloc with events in bitcoind
< Madscotslad> Wow nice to see a full IRC room its been a while :) hey folks.
< luke-jr> if you think this is full, you should see #bitcoin