< GitHub186> [bitcoin] jonasschnelli pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/fa7caf6d9116...57b34599b2de
< GitHub186> bitcoin/master 1b6bcdd Jonas Schnelli: Remove maxuploadtargets recommended minimum
< GitHub186> bitcoin/master 57b3459 Jonas Schnelli: Merge #8712: Remove maxuploadtargets recommended minimum...
< GitHub42> [bitcoin] jonasschnelli closed pull request #8712: Remove maxuploadtargets recommended minimum (master...2016/09/rem_maxupt_min) https://github.com/bitcoin/bitcoin/pull/8712
< jonasschnelli> I guess the Travis master issue is due to intermittent (random) failures in walletbackup.py
< jonasschnelli> Would be nice if anyone could review/ACK/NACK https://github.com/bitcoin/bitcoin/pull/7783 (nested RPC commands).
< dcousens> jonasschnelli: how would you extend it to the RPC?
< jonasschnelli> dcousens: there are no plans to extend it to the RPC layer... the PR once had support for RPC, but we decided to keep it Qt-only for now.
< jonasschnelli> Moving the function RPCExecuteCommandLine(std::string &strResult, const std::string &strCommand) to a core class would be trivial.
< jonasschnelli> We could either have a special RPC command that results in parsing and executing multiple nested commands... but I don't think this would be clever.
< jonasschnelli> The Qt version is "client-side".
< sipa> i'd have liked that, but it seems i was alone with that :)
< jonasschnelli> A better approach for the RPC layer would be to add it into bitcoin-cli
< jonasschnelli> Having it server-side, I just fear some uncontrollable resource usage.
< jonasschnelli> I think it's better to start with it client-side, Qt only, and if it turns out to be useful, add it to bitcoin-cli.
< sipa> rpc already has uncontrollable resource usage
< dcousens> Personally I just always used bash, but I never use the Qt UI so *shrug*
< sipa> fair enough :)
< jonasschnelli> dcousens: Adding it in Qt has less exposure... a better test-bed. :)
< dcousens> As for the RPC extension, I think a special RPC command - say a mechanism to chain calls - makes sense; it would feel like wumpus's idea, but wouldn't require changing all our RPC implementations
< dcousens> as for resource usage, RPC is already meant to be a "safe space", haha
< sipa> well if we turn it into an RPC thing, we'd arguably want more discussion about what the "query language" will look like
< sipa> as it may be harder to make incompatible changes later
< dcousens> sipa: absolutely, the amount of time to get right may not be worth the extra line/rpc call for the few times this matters :P
< jonasschnelli> The query language is pretty homebrew and non-standard..
< jonasschnelli> Also,... once we have it in RPC, people are going to see it as part of the API...
< jonasschnelli> changes at this point will probably be more "difficult".
< jonasschnelli> Let's find out through Qt how it could work in RPC. :)
< dcousens> ultimately any bulk usage of the RPC ends up seeing the piped network latency as completely insignificant... and chaining the commands won't lower the JSON bulk
< dcousens> unless they're doing it over an open network, I suppose
< sipa> jonasschnelli: yes, agree there
< wumpus> what did I break?
< jonasschnelli> ;-)
< sipa> wumpus: ?
< wumpus> oh that, yes that's a simple extension to JSON batching that would allow chaining of return values
< wumpus> don't know whether to propose it or not, I'm fairly sure their reply will be 'if you need this level of sophistication don't use JSON-RPC you noob'
< sipa> who is they?
< wumpus> the people who made the JSON RPC standard
< dcousens> I think if the RTT is more significant than the encoding of the data itself, and you want to optimize RPC, it's very likely you have control over the node anyway; just write a small HTTP wrapper that does that over localhost for you.
< dcousens> or hell... write an RPC command that does what you want
< sipa> hah, yes
< sipa> that's my approach usually :)
< * sipa> zZzZ
< dcousens> as is mine :S
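For context, the "small HTTP wrapper over localhost" dcousens mentions can be only a few lines: bitcoind's HTTP server accepts JSON-RPC batch requests (an array of call objects) in a single round-trip, although a plain batch just runs independent calls and does not chain one result into the next. A rough Python sketch, with placeholder URL and credentials:

```python
# Rough sketch of a small localhost JSON-RPC batch wrapper (credentials are placeholders).
# A batch cuts round-trips, but does not chain one call's result into the next.
import base64
import json
import urllib.request

def rpc_batch(calls, url="http://127.0.0.1:8332/", user="user", password="pass"):
    """Send several independent RPC calls as one JSON-RPC batch (one HTTP round-trip)."""
    body = json.dumps([{"method": method, "params": params, "id": i}
                       for i, (method, params) in enumerate(calls)]).encode()
    auth = base64.b64encode("{}:{}".format(user, password).encode()).decode()
    req = urllib.request.Request(url, data=body, headers={
        "Authorization": "Basic " + auth,
        "Content-Type": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        return [reply["result"] for reply in json.load(resp)]

# e.g. rpc_batch([("getblockcount", []), ("getbestblockhash", [])])
```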
< wumpus> the point is that we don't want the API clogged full of 'chain A and B' commands
< wumpus> but for a local patch sure
< wumpus> in the end I think no one really has any convincing use-cases for this
< dcousens> yup, we're all just lazy cause it means 1 more line
< jonasschnelli> I think the RPC layer is for app to app communication
< wumpus> I've had approximately zero replies on my JSON-RPC extension proposal, though I think it makes a lot of sense *if* people are seriously considering this
< jonasschnelli> The Qt console is supposed to be a human command line interface
< wumpus> yes, the RPC layer is for app to app communication, it is an API
< jonasschnelli> The convenience of nested commands is probably on the client side
< dcousens> wumpus: I'm against the idea, just wasn't sure if the Qt change (#7783) was relevant
< wumpus> dcousens: huh?
< wumpus> no, qt has nothing to do with it
< wumpus> that's just a user convenience
< dcousens> indeed, hence ACK :)
< wumpus> I mean with bitcoin-cli you have full access to bash's scripting/variable munging/etc capabilities, in the Qt console you don't, so this adds a bit of shell power to it
< sipa> wumpus: oh, i had no idea that that chaining value idea was your proposal
< gmaxwell> or the rpc console could just link in a lua/python/js engine, all of which are embeddable.
< wumpus> this is a completely orthogonal concern from being able to chain values at the server side to avoid latency
< wumpus> gmaxwell: :-(
< gmaxwell> heh
< wumpus> no interpreters in bitcoin please
< wumpus> (apart from the one we really need)
< dcousens> no more*
< wumpus> having a powerful interpreter in the target executable makes exploitation so much easier
< dcousens> maybe the CLI could just have an interactive mode with a Lua shell or something?
< gmaxwell> well it could be a seperate interface, e.g. fancy-console.
< dcousens> that way it's not in the core as such
< wumpus> dcousens: yes, that has been proposed, we could just have a python-based interactive CLI
< gmaxwell> I've only heard of five or six instances where the JS-enabled Ethereum wallet stuff was used to trick people into sending away all their funds. :P
< dcousens> bitcoin-cli*, that is
< wumpus> dcousens: it could be based on ncurses, ipython, etc, that idea is old as the hills, yet no one wrote one :)
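The Python-based interactive console being described could be little more than a stock REPL with an RPC proxy pre-bound. A minimal sketch, assuming python-bitcoinrpc's AuthServiceProxy is installed and with placeholder credentials:

```python
# Minimal sketch of a Python-based interactive console for bitcoind
# (assumes python-bitcoinrpc; URL/credentials are placeholders).
import code
from bitcoinrpc.authproxy import AuthServiceProxy

rpc = AuthServiceProxy("http://user:pass@127.0.0.1:8332")
code.interact(
    banner="bitcoind console -- call RPCs as rpc.<command>(), e.g. rpc.getblockcount()",
    local={"rpc": rpc},
)
```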
< gmaxwell> but yes, not having it in the main binary would be obvious. I dunno, I don't think the utility is all that great.
< wumpus> I think the current tools are just ok
< dcousens> wumpus: but jonasschnelli wrote this :P
< wumpus> gmaxwell: right, it just doesn't matter that much, people like talking about this but it's no itch to scratch
< wumpus> dcousens: yes, he did, and it works
< wumpus> it's awesome
< dcousens> haha,indeed
< wumpus> let's not try to shed-paint it to death
< gmaxwell> the standard python bitcoin json library that people get linked to is not at all reliable (gets disconnected and then throws exceptions, and has to be wrapped to be usable) and I don't see anyone even complaining about it.
< wumpus> s/json/json-rpc and you're right
< dcousens> I've found it much easier to just always use a well-tested rpc library than use something "bitcoin tailored"
< wumpus> which one are you using for your projects?
< gmaxwell> json-rpc, yes.
< wumpus> are there any RPC mechanisms that don't suck?
< dcousens> wumpus: myself? as most of my work is in JS, I ended up using my own (haha), simply because the batch interfaces of the well-tested ones all sucked at error handling
< wumpus> this presentation talks about people using MQTT for bitcoin wallet control, among other things: https://media.defcon.org/DEF%20CON%2024/DEF%20CON%2024%20presentations/DEFCON-24-Lucas-Lundgren-Light-Weight%20Protocol-Critical-Implications.pdf of course, without authentication or encryption and on the public internet :)
< wumpus> tl.dr: don't do that
< luke-jr> [06:53:12] <jonasschnelli> We could either have a special RPC command that results in parsing and executing multiple nested commands…but I don't think this would be clever. <-- this would be useful if the RPC client was over a network, to avoid round-trips and such
< dcousens> but, my point was, even my own implementation has nothing to do bitcoin as such, its just an RPC implementation
< jonasschnelli> Luke-Jr: Yes. The server-side nesting could save bandwidth...
< luke-jr> FWIW, I both dislike and like the idea at the same time. >_<
< gmaxwell> wumpus: What did we do in the embeded python json-rpc used in the tests to make it reliable? (or did we not and the test just run fast enough that it isn't an issue?)
< wumpus> server-side promise pipelining saves some bandwidth, but it mostly saves *latency*
< jonasschnelli> Example: with RPC nesting, you could get the first output's value of the second transaction of the bestblock with just one command ---> bandwidth: a couple of bytes.
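Done client-side today, that example takes a few ordinary round-trips; the sketch below shows what server-side nesting would collapse into a single request. It assumes python-bitcoinrpc's AuthServiceProxy, a node running with -txindex (so getrawtransaction can look up confirmed transactions), and placeholder credentials:

```python
# The same lookup done client-side: three round-trips that nesting would collapse
# into one (assumes -txindex; credentials are placeholders).
from bitcoinrpc.authproxy import AuthServiceProxy

rpc = AuthServiceProxy("http://user:pass@127.0.0.1:8332")
best_hash = rpc.getbestblockhash()             # round-trip 1
block = rpc.getblock(best_hash)                # round-trip 2
tx = rpc.getrawtransaction(block["tx"][1], 1)  # round-trip 3, verbose decode
print(tx["vout"][0]["value"])                  # first output value of the 2nd tx
```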
< luke-jr> it's easier to support server-side expressions when bitcoind is split from the consensus stuff someday
< wumpus> gmaxwell: which issue do you exactly mean?
< luke-jr> ("support" in a human manner, not technical)
< wumpus> gmaxwell: we did add some code to AuthServiceProxy._request to handle connection loss
< luke-jr> I think there's some RPC mechanism which does expressions already BTW
< wumpus> luke-jr: yes, it's old as the hills, "promise pipelining" is usually what it's called
< wumpus> luke-jr: capnproto does it for example
< wumpus> I think even CORBA could do it in the 90's, but don't look for that, it'll drive you crazy
< wumpus> XCB (X11's RPC protocol) can do it too
< gmaxwell> wumpus: various forms of connection loss; bitcoind either times out the persistent connection and then any other interaction with the proxy object throws an exception, or I believe any case where bitcoind returns an error. I don't have a detailed list because I've always just solved it by creating a wrapper that catches the exception and reconnects.
< wumpus> gmaxwell: the high-level problem is that AuthServiceProxy uses python's http mechanism in an unconventional way: it opens a connection to a HTTP server, and instead of immediately queuing a bunch of requests, it keeps the connection open
< wumpus> gmaxwell: which in principle is fine, web browsers also do that, but it needs code to handle the case where the remote server hung up on you
< wumpus> gmaxwell: stock AuthServiceProxy has no code for that, our version (in the tests) does
< luke-jr> it used to :x
< wumpus> no, it assumes an unworldly friendly RPC server that never disconnects. Which used to be the case, approximately, before the switch to evhttpd, which introduced (like any other HTTP server in the world) connection timeouts
< wumpus> (but network problems can *also* result in the http connection being closed, you just can't make the assumption it will always be there in the real world)
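The wrapper gmaxwell describes can be as simple as reopening the proxy and retrying once when the server has hung up. A rough Python 3 sketch around AuthServiceProxy; the exact exceptions raised on a dropped keep-alive connection vary by Python/urllib version, so the caught tuple is illustrative:

```python
# Rough sketch of a catch-and-reconnect wrapper around AuthServiceProxy
# (the exception tuple is illustrative; Python 3 assumed).
import http.client
from bitcoinrpc.authproxy import AuthServiceProxy

class ReconnectingProxy:
    def __init__(self, url):
        self._url = url
        self._proxy = AuthServiceProxy(url)

    def __getattr__(self, name):
        def call(*args):
            try:
                return getattr(self._proxy, name)(*args)
            except (ConnectionError, http.client.HTTPException):
                # Server closed the idle keep-alive connection: reopen, retry once.
                self._proxy = AuthServiceProxy(self._url)
                return getattr(self._proxy, name)(*args)
        return call

# rpc = ReconnectingProxy("http://user:pass@127.0.0.1:8332")
# rpc.getblockcount()
```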
< dcousens> wumpus: fwiw, I occasionally sync a script index from genesis just to see how fast it goes, and it's never the throughput/bandwidth that slows it down, it's the disk thrashing, even in just a query-of-txindex case
< gmaxwell> wumpus: yea, it was unreliable before-- just more obvious now.
< wumpus> dcousens: thanks for actually benchmarking to find bottlenecks instead of making assumptions :) ("it must be RPC overhead")
< luke-jr> oh, I see what it was: the older version that I thought worked simply never reused the connection :|
< wumpus> luke-jr: that's also a valid way to do it, if you don't have too many requests or they're sparse
< wumpus> it's how people usually use python's http: just open a connection per request. The whole keep-alive thing is an optimization, not required
< wumpus> it brings a lot of state and complexity
< luke-jr> yet another unmaintained jgarzik project: https://github.com/jgarzik/python-bitcoinrpc/pull/49
< gmaxwell> though without it, I imagine performance is poor (well even with it, performance is poor)
< gmaxwell> it stinks mostly because everywhere recommends it, and it falls flat on its face... and I'm not sure what to recommend instead.
< wumpus> well "performance is poor" is relative. keep-alive only helps with repeated requests. In many cases of using RPC you don't really care about performance of repeated reuqests at all
< wumpus> many people just use the RPC through curl, or bitcoin-cli, which involves executing an external process (and a new connection) per command, and they don't complain either
< dcousens> wumpus: heh, I'm using 16 concurrent RPC connections w/ 300 batched in each
< wumpus> I think the problem is recommending a (broken-ish) library when the underlying mechanism is so trivial
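For illustration, "trivial" here really does mean a handful of standard-library lines: one connection per request, no keep-alive state, no bitcoin-specific client at all (credentials and port are placeholders):

```python
# One JSON-RPC call with nothing but the standard library, one connection per
# request, no keep-alive state (credentials/port are placeholders).
import base64, http.client, json

conn = http.client.HTTPConnection("127.0.0.1", 8332)
auth = base64.b64encode(b"user:pass").decode()
conn.request("POST", "/",
             json.dumps({"method": "getblockcount", "params": [], "id": 0}),
             {"Authorization": "Basic " + auth, "Content-Type": "application/json"})
print(json.loads(conn.getresponse().read())["result"])
conn.close()
```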
< dcousens> needless to say I found where my custom RPC commands were missing LOCKs pretty quick
< gmaxwell> right, but "walk through the last N blocks to gather data" taking longer in python than it takes me to modify bitcoind and recompile feels a little silly.
< wumpus> yes, if you're one of the few people that needs to do really dense requesting, there are various optimizations that help
< wumpus> but that's not what JSON RPC was designed for in the first place
< wumpus> it's a high-overhead protocol that was designed to be simple to use, not for low-latency, high-throughput applications
< dcousens> wumpus: my point was, the RPC isn't what's slow, so no point changing it
< wumpus> something like capnproto is designed for the latter
< wumpus> however it's much harder to use, bind, and set up, which is why companies generally offer JSON RPC or REST APIs
< dcousens> lest we forget SOAP
< wumpus> heh :)
< gmaxwell> author of capnproto didn't seem to regard sending uninitialized memory in struct padding to remote peers as a serious security problem when I spoke to him a year or two ago.
< wumpus> yes, security is another thing that is usually compromised on when latency is the most important thing
< gmaxwell> in any case, after seeing that the python stuff was slow I tested with curl and it didn't appear to be the bitcoind side that was slow.
< wumpus> it's a difficult choice, I think JSON-RPC was a fairly good choice as these things go
< gmaxwell> I agree.
< * jonasschnelli> thinks we should fork to use CORBA as our bitcoin scripting language and SOAP as http API :}
< luke-jr> ._.
< luke-jr> jonasschnelli: you forgot /s
< jonasschnelli> Ah... I forgot to migrate the Qt client to a chrome html/css extension. :)
< wumpus> the network and JSON processing on the bitcoind side is fast: the most common bottleneck would be the locking, which has nothing to do with the RPC mechanism. Or reading/deserializing blocks. Or...
< wumpus> jonasschnelli: hehe, maybe if bitcoin was designed 10 years earlier
< dcousens> jonasschnelli: I wonder when an ECDSA library will be written in pure CSS
< jonasschnelli> lol
< wumpus> dcousens: at least you can write an ECDSA library with just x86 mov instructions
< luke-jr> >_<
< wumpus> yea
< luke-jr> no need to write a new one, just build libsecp256k1..
< jonasschnelli> How large will it be?
< wumpus> moon-sized
< luke-jr> dunno, never tried it
< jonasschnelli> Would be fun to open the moon-sized library in IDA Pro
< wumpus> yes I think that's the only reason why anyone would do something like that, throw off people trying to analyze it 'this can't be real x86 code'
< luke-jr> s/people/software/
< luke-jr> eg virus scanners
< wumpus> well not just software, software is much easier to throw off
< luke-jr> is it? ☺
< dcousens> jonasschnelli: it's possible to de-movfuscate, isn't it? in which case, eh?
< luke-jr> dcousens: IIRC this one intentionally makes it hard
< wumpus> then again if the virus is 1000MB it kind of inhibits its own spread
< GitHub179> [bitcoin] laanwj pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/57b34599b2de...881d7eaf29f7
< GitHub179> bitcoin/master 36fa01f Cory Fields: net: only delete CConnman if it's been created...
< GitHub179> bitcoin/master 881d7ea Wladimir J. van der Laan: Merge #8715: net: only delete CConnman if it's been created...
< GitHub0> [bitcoin] laanwj closed pull request #8715: net: only delete CConnman if it's been created (master...fix-connman-shutdown) https://github.com/bitcoin/bitcoin/pull/8715
< jonasschnelli> Is there a reason why the last key in our HD scheme (and in BIP32) is non-hardened? It's m/0'/0'/0 for the first key and not m/0'/0'/0'
< jonasschnelli> Or does it just not matter at the last level?
< gmaxwell> because the wallet itself isn't intended to iterate at that level. The part below is subkeys.
< gmaxwell> So every key the wallet gives out is strong against unzipping, but it could support repeated payment arrangements without address reuse.
< gmaxwell> IIRC.
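For reference (and relevant to the flexible keypath PR above), the hardened marker in a path like m/0'/0'/0 just means the child index has its high bit set, so only the leaf level is derived non-hardened. A small sketch of that mapping (the helper name is illustrative):

```python
# A hardened child index is the index with the high bit set (0x80000000), so
# m/0'/0'/0 hardens the first two levels and leaves the leaf index plain.
HARDENED = 0x80000000

def keypath_to_indexes(path):
    """Map a keypath such as m/0'/0'/0 to its BIP32 child indexes."""
    indexes = []
    for part in path.split("/")[1:]:           # skip the leading "m"
        if part.endswith("'") or part.endswith("h"):
            indexes.append(HARDENED + int(part[:-1]))
        else:
            indexes.append(int(part))
    return indexes

assert keypath_to_indexes("m/0'/0'/0") == [0x80000000, 0x80000000, 0]
```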
< MarcoFalke> wumpus: Can you elaborate? (random.random() *is* deterministic)
< MarcoFalke> I think the seed is the current time, so we may not know that
< MarcoFalke> But knowing the seed is not required.
< wumpus> MarcoFalke: I like tests that simply test the same thing every time
< wumpus> there is no need to randomize here
< MarcoFalke> So just set it to usehd=0
< wumpus> just make every odd bitcoind a legacy wallet and every even one a HD one, or so
< wumpus> I just don't see the point of random() here, there's a fair chance that it ends up testing all-hd or all-non-hd
< MarcoFalke> Which is a legit case to test
< MarcoFalke> But I can remove it, so we don't need the import random
< wumpus> sure, but not randomly
< wumpus> I like deterministic tests, esp in travis
< MarcoFalke> But you want them to be constant?
< wumpus> yes
< MarcoFalke> Have you seen the print(extra_args) to make it reproducible in case of failure?
< wumpus> otherwise there's a larger chance it will fail due to reasons unrelated to the code changes
< wumpus> which confuses people
< wumpus> tests run in travis should preferably be constant and deterministic. I'm ok with a random fuzzing jackpot testing method for running locally, but the tests running for every pull should be the same tests every time
< wumpus> if you want to test with a combination of legacy and HD wallets in a test that's good, but just hardcode that
< MarcoFalke> fine
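The deterministic version wumpus suggests is a one-liner; a sketch assuming the test passes per-node extra_args and that -usehd is the relevant wallet option (the node count here is illustrative):

```python
# Deterministic alternative to random.random(): even nodes run HD wallets,
# odd nodes run legacy wallets (node count is illustrative).
num_nodes = 4
extra_args = [["-usehd={}".format(1 - i % 2)] for i in range(num_nodes)]
print(extra_args)  # still printed, so a failure stays easy to reproduce
```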
< MarcoFalke> Then we should write some new tests which do "parameter fuzzing" some time
< MarcoFalke> Sadly this depends somewhat on rewriting the arg parser
< wumpus> sorry that I feel so strongly about this, but I've seen too many random travis failures in the last months :)
< wumpus> bitcoin core win64 cross-build is in a state of crap with ubuntu 16.04
< wumpus> I thought I did this before, and didn't have any of these issues, but I must have imagined it
< GitHub17> [bitcoin] andersoyvind opened pull request #8720: Minor change in section name (master...patch-1) https://github.com/bitcoin/bitcoin/pull/8720
< GitHub88> [bitcoin] laanwj opened pull request #8722: bitcoin-cli: More detailed error reporting (master...2016_09_cli_http_error) https://github.com/bitcoin/bitcoin/pull/8722
< GitHub10> [bitcoin] jonasschnelli opened pull request #8723: [Wallet] Add support for flexible BIP32/HD keypath-scheme (master...2016/09/hd_flex) https://github.com/bitcoin/bitcoin/pull/8723
< * jonasschnelli> thinks he should review Luke-Jr multiwallet PR
< wumpus> jonasschnelli: whatever you do, don't look at #8653
< jonasschnelli> wumpus: hehe.. I did already and started disabling --hardening wherever I can to avoid "possible startup problems". :P
< wumpus> my new favourite number is "zu"
< jonasschnelli> :-)
< wumpus> I think it's that posix mode of gcc that completely throws everything off track
< wumpus> (on windows)
< wumpus> I'm going to locally revert the usage of c++11 threads, compile with the -win32 compiler then see what happens, I think everything will be fine
< jonasschnelli> wumpus: Not sure if this helps.. but in another project I'm using mingw.thread.h (https://github.com/digitalbitbox/dbb-app/blob/master/src/mingw/mingw.thread.h)
< jonasschnelli> Then just include it with a #ifdef WIN32
< jonasschnelli> Probably a huge hack
< wumpus> jonasschnelli: that makes sense; but I don't get it, why isn't it part of mingw-w64 itself?
< wumpus> well it's less of a hack than fully fledged POSIX emulation mode on windows
< wumpus> it's supposed to just support it, and on trusty it does
< wumpus> (it=std::mutex and friends)
< jonasschnelli> wumpus: Yes. I don't know why it's not part of mingw itself... I just added them and it worked fine, though I'm not using stuff like the --enable-hardening we do at Core
< wumpus> I doubt this has anything to do with hardening, I think with hardening it just detects the crazy memory corruption that happens sooner
< GitHub93> [bitcoin] MarcoFalke pushed 2 new commits to master: https://github.com/bitcoin/bitcoin/compare/881d7eaf29f7...a82e5d8220bb
< GitHub93> bitcoin/master 1111ddb MarcoFalke: gitignore: Remove unused lines
< GitHub93> bitcoin/master a82e5d8 MarcoFalke: Merge #8714: [qa] gitignore: Remove unused lines...
< GitHub7> [bitcoin] MarcoFalke closed pull request #8714: [qa] gitignore: Remove unused lines (master...Mf1609-qaUnused) https://github.com/bitcoin/bitcoin/pull/8714
< wumpus> also there's a mutex header in ./lib/gcc/x86_64-w64-mingw32/5.3-win32/include/c++/mutex
< wumpus> maybe we need to #define something
< wumpus> well at least that's one more 'trivial' I've removed from a pull title
< jonasschnelli> heh.. yes. That happens quick..
< MarcoFalke> Hmm. Github allows us to write to branches that have a pull request open against bitcoin.
< MarcoFalke> Might come in handy but I'd prefer to disable it
< MarcoFalke> (if possible)
< jonasschnelli> MarcoFalke: so you did an amend-commit force push to 8720?
< MarcoFalke> Apparently not possible to disable
< jonasschnelli> Heh... that's handy but evil.
< wumpus> that can be really useful
< wumpus> was of course already possible to do that locally, but ok
< jonasschnelli> Yes. Thinking of all the cases where a nit holds back a PR
< jonasschnelli> I think it's much better to have a more public way than locally altering during the merge
< wumpus> before the merge*
< wumpus> if you do a local merge you can simply add your own commits before doing it, that's pretty public
< jonasschnelli> It's public, but only in the commit list and not on the PR
< wumpus> right. Just wanted to be clear I didn't mean anything as sneaky as squashing it into the merge commit, or into the author's commits
< MarcoFalke> Also quite fiddly trying to emulate github-merge.py
< MarcoFalke> I think it is fine to use the new feature when we announce it in the PR that is affected
< wumpus> yes I've always intended to add a local mode to github-merge.py
< wumpus> (I have a local version of github-merge but it's really messy)
< GitHub197> [bitcoin] MarcoFalke opened pull request #8724: [qa] walletbackup: Sync blocks inside the loop (master...Mf1609-qaWalletBackupFix) https://github.com/bitcoin/bitcoin/pull/8724
< jonasschnelli> windows_hell, nice branch name
< * jonasschnelli> wishes we could include a ubuntu vm and the linux binaries for the windows package
< wumpus> well there seems to be a shitload of issues, for now the best recommendation would be to just use trusty. The "zu" issue doesn't exist with the gitian executables does it?
< wumpus> let's first try to figure out why the tests crash...
< jonasschnelli> Yes. We should probably flag certain distros as not compatible with our cross-compile process or so
< wumpus> it only seemed like bitcoin-qt was working: I had disabled wallet support, apparently it works then
< wumpus> windows is truly a mess on gcc 5.3
< wumpus> the tests crash before they even get to the BasicTestingSetup constructor
< wumpus> at least I can reproduce it on wine....
< achow101> Is this windows problem the same thing as I was having a while back?
< wumpus> there are at least three separate issues here
< achow101> the thing with mutex and not compiling
< wumpus> 1) the std::mutex/thread c++11 compilation issue 2) something strange with libevent (the 'zu' issue) 3) something with hardening
< wumpus> some of these may partially overlap in cause or effect
< wumpus> but it's a mess
< achow101> Yeah, it looks like the same problem, it happened on ubuntu 15.10 (wily)
< achow101> This is the discussion if it's of any help: https://botbot.me/freenode/bitcoin-core-dev/2016-08-08/?msg=70975840&page=1
< wumpus> right - it's still an issue on 16.04
< wumpus> ok, phew, at least verified that the libevent `zu` issue doesn't exist in the released 0.13.0 executables
< sipa> good
< wumpus> well travis RPC tests pass on windows, so it'd be surprising, but I don't understand why it suddenly starts happening with a certain gcc version
< wumpus> the libevent version didn't change since 0.13.0, neither did our use of it
< wumpus> achow101: which patches? I've also been testing w/ Ubuntu 16.04 and windows 10
< GitHub12> [bitcoin] rebroad opened pull request #8725: Split debug for estimatefee into {estimatefee,2} (master...MoreGranularDebug2) https://github.com/bitcoin/bitcoin/pull/8725
< GitHub174> [bitcoin] rebroad opened pull request #8726: Move a bunch of fairly verbose debug messages from mempool to mempool2 (master...MoreGranularDebug3) https://github.com/bitcoin/bitcoin/pull/8726
< GitHub66> [bitcoin] rebroad opened pull request #8727: Move logic for TX INVs together (master...MoreGranularDebug4) https://github.com/bitcoin/bitcoin/pull/8727
< GitHub85> [bitcoin] rebroad opened pull request #8728: More granular debug5 (master...MoreGranularDebug5) https://github.com/bitcoin/bitcoin/pull/8728
< GitHub29> [bitcoin] rebroad opened pull request #8729: More granular debug6 (master...MoreGranularDebug6) https://github.com/bitcoin/bitcoin/pull/8729
< GitHub64> [bitcoin] laanwj opened pull request #8730: depends: Add libevent compatibility patch for windows (master...2016_09_libevent_windows_gcc_531) https://github.com/bitcoin/bitcoin/pull/8730
< wumpus> sheesh how many debug pulls do we need...
< paveljanik> 8(
< paveljanik> 6+1. The last one combines all prev 8)
< wumpus> we really should add a suggestion not to spam pull requests
< wumpus> especially if they're all related
< paveljanik> well, I know how he feels. Sometimes you know it is much easier to make it in steps. But doing steps separately and then all in one PR is nonsense...
< paveljanik> let's close the steps PR and comment only the final one.
< paveljanik> rebroad, is it ok for you?
< rebroad> paveljanik, hi
< rebroad> paveljanik, I find the whole process quite confusing... in my experience my smaller pull requests get merged but the larger ones don't... so this way I'm trying to hedge my bets and provide small ones and a larger one to work out which is preferred.
< rebroad> paveljanik, not sure what you mean by "the final one"
< paveljanik> the final one - the one containing all (almost) the previous
< rebroad> do you mean the one with all the commits in? and close the others?
< cfields_> wumpus: ok, diving into the win32 mess. I'd written it off as user error, but looks like I need to familiarize myself with what's going on now.
< paveljanik> 8729
< rebroad> paveljanik, I'm happy to do this, but I'm not sure if this is what the other devs would prefer also
< paveljanik> well, some changes are wrong, yes, but they can tell you in one PR.
< rebroad> paveljanik, wrong?
< paveljanik> yes
< paveljanik> lets wait for comments
< paveljanik> I have already added approx. 5 comments.
< rebroad> paveljanik, ah, thank you
< GitHub154> [bitcoin] rebroad opened pull request #8731: Debug headers received ("block" for new block announcement, "block2" … (master...DebugHeadersReceived) https://github.com/bitcoin/bitcoin/pull/8731
< rebroad> paveljanik, is there a way to easily go to my pull requests that have had comments left? i.e. is there an inbox for these comments somewhere on github?
< rebroad> thanks
< paveljanik> and especially "Further reading"
< rebroad> paveljanik, what did you mean by "why?" in #8728?
< paveljanik> expanded there...
< rebroad> paveljanik, #8727 you asked what other inv.type I can see.. not sure why you are asking this
< rebroad> I just moved the code that was doing the same thing as the code above into the same if statement.. which is needed to avoid messier code with the later commits also
< wumpus> this is starting to be annoying, please stop pushing debug pull requests
< rebroad> paveljanik, #8725 have added a description now... I guess the "why" is a reasonable question given it was absent.
< wumpus> most debugging information is only interesting to yourself during a debugging session, there is no reason to merge it upstream
< rebroad> wumpus, what is your issue with debug pull requests?
< paveljanik> there are many of them, and they're useless - wumpus wrote it above...
< wumpus> a) there are way too many already b) it's mostly ego-commits, not interesting to other people
< rebroad> wumpus, from the pull requests I have seen I would say that various developers have found a number of my debug pull requests useful, and they facilitate better understanding of what is happening. It would be unreasonable to expect everyone to have to make their own changes to obtain useful debug information.
< wumpus> try to be focused when you submit pulls upstream. Every one of us likely has tons of patches adding personal debugging information to solve specific problems, but it's not in general useful for others and you're wasting review time
< wumpus> it wouldn't be a problem if this was one or two pulls, but you keep pushing this on
< rebroad> wumpus, i just spent 3 days having to rebase all these debug commits due to changes that were made recently.. I am getting very very fed up with the amount of time wasted rebasing when I could be spending that time adding functionality to bitcoin. this is why this needs to get merged!
< wumpus> do you really expect people to review "more granular debug6"?
< rebroad> i have a life, which I'd like to spend living rather than rebasing
< wumpus> that's not a reason to merge it though, to merge it it needs to actually be considered useful, the project doesn't exist to save you work
< wumpus> we all have a life
< wumpus> it's kind of insulting to suggest you're the only one
< paveljanik> so why do you spend your time with useless stuff instead of The functionality?
< wumpus> and you can just burden us with your shit
< rebroad> wumpus, it's not just saving me work. I wouldn't write it if I thought it was only useful for me
< rebroad> i agree that some of the commits are trivial...
< rebroad> wumpus, what do you mean by "ego-commits"?
< wumpus> well some of them may be useful to others
< wumpus> but it's the sheer volume
< wumpus> everyone tries to make time to write and review patches that add functionality and fix bugs
< wumpus> not just debugging spam
< wumpus> <paveljanik> so why do you spend your time with useless stuff instead of The functionality? <- very good question
< wumpus> we have 402 issues and 140 pull requests open at this moment
< paveljanik> yup, pick one of the issues and try to solve it!
< wumpus> lots of actual problems that need to be solved, that are experienced by users
< rebroad> wumpus, you say sheer volume, but I'm not sure what you mean. do you mean pull reqs, or lines of code?
< wumpus> and yes, some of them may be bullshit, but let's try not to add too much more
< wumpus> rebroad: number of pull requests in this case
< wumpus> (same could hold for lines of changed code, in some cases)
< rebroad> wumpus, ok, I will close the smaller ones as paveljanik suggests
< wumpus> yes just merge some together if they are related to the same concern
< wumpus> and at least make some effort to name/describe them
< wumpus> 'more granular debug5' really sucks as name
< GitHub9> [bitcoin] rebroad closed pull request #8484: More granular debug (master...MoreGranularDebug) https://github.com/bitcoin/bitcoin/pull/8484
< rebroad> wumpus, you don't mince your words, do you :)
< wumpus> rebroad: sorry this has been annoying me for a while
< rebroad> spending 3 days rebasing has been annoying me too ...
< wumpus> well then don't
< rebroad> if i need to understand the code and how it functions I do... unless these commits get merged
< rebroad> ok, I am oversimplifying a little
< rebroad> but it all helps, IMHO
< wumpus> you can understand code without logging everything, or by adding temporary logging
< wumpus> or by stepping through with a debugger
< wumpus> many ways that don't involve adding logging statements to the upstream code
< rebroad> part of me needs to disagree with you as otherwise I've spent a lot of time for very little reason :-s
< wumpus> I'm not saying it's all useless
< rebroad> there are some more commits yet to come that might help to reveal the usefulness of the so far seemingly useless ones
< wumpus> just try to have some discipline, spend time explaining to others why something is useful, instead of just adding pulls with very little description
< rebroad> but yes, the mempool2 and estimatefee2 ones were a bit trivial... estimatefee2 more so... but still useful to an extent... I'd still like to know what you mean by ego-commits... i do wonder how much ego is behind my struggle to get things merged.
< wumpus> and maybe explain the overall plan, people get lost if they only see details instead of where something is going
< rebroad> wumpus, you are right.. i do need to be less lazy when it comes to explaining
< rebroad> wumpus, thank you for your feedback and patience
< wumpus> an ego-commit is something that is useful to yourself, for example some feature that you need, or some piece of information that you specifically need, but without regard whether it's useful for other users
< rebroad> wumpus, I would find it hard to believe that some features are useful to only one person
< wumpus> and then to stop having to rebase it yourself you try to push it upstream
< wumpus> well it tends to happen if something is only a perceived need, someone is really held up about something
< rebroad> there seems to be a general attitude among many developers that "debug" commits are a waste of time.. I don't quite understand why that is the popular viewpoint
< wumpus> maybe it's useful to one other person :) but all code needs to be maintained, so there has to be some threshold
< wumpus> well debugging is something kind of personal to developers
< wumpus> what information do you need
< wumpus> for what you're doing
< wumpus> for the problem you're trying to find
< rebroad> a lot of people, non-developers especially often ask for more feedback on what their full-node is doing... they want to see what it is doing...
< rebroad> so in a sense, this debug info might benefit non-developers more than developers
< wumpus> e.g. to find a wine issue I'm now instrumenting some code to log every executed instruction
< wumpus> that's not something anyone would ever want upstream :-)
< rebroad> i could code something prettier like a matrix style green scrolly thing.. but that would move away from actual useful information then!
< wumpus> if I became really obsessed about this, and wanted this a default part of bitcoin, and became angry if other developers disagreed, that would be an ego commit
< wumpus> yes on a high level I agree with you, it'd be nice to have some more insight into what a node is doing
< wumpus> I kind of like statoshi's approach
< wumpus> but I'm not sure e.g. splitting up debug into a zillion categories, or adding lots of messages, is the way to go
< wumpus> haha if you'd add that to the GUI it may well be accepted
< rebroad> if it's granular enough, then no one should need to add ad-hoc debug messages as you are suggesting... there'd be a debug message already there that they can activate just by editing bitcoin.conf
< wumpus> jonasschnelli did a screensaver-like thing once that showed statistics
< rebroad> and they'd not need to recompile etc
< rebroad> which wastes a lot of cpu, energy, etc...
< wumpus> but gave up on it unfortunately
< rebroad> :)
< jonasschnelli> I haven't given it up... I just came up with a more stable solution for a first start
< jonasschnelli> The mempool stats
< rebroad> I love the mempool graphs..
< jonasschnelli> Once this will be merged, more can be added
< jonasschnelli> Nodes / Tx / Mempool just don't fit on one screen
< rebroad> although it would be great if I could point to the graph and it would tell me the X-Y co-ords of the point closest to the pointer
< kanzure> not clear to me if rebroad has used gdb and code stepping
< wumpus> well i think there's always someone who wants to add an ad-hoc debug message, I don't think it's possible to be complete in that regard
< kanzure> if there need to be metrics for user display then probably a more general framework would be useful, rather than individual debug additions
< rebroad> kanzure, i have used gdb.. but i think it's of limited use, in my experience, and not as clear as some good debug messages, imho
< wumpus> the linux kernel has something called 'tracepoints' which allows inserting print or other hooks at certain points to print information on the fly
< wumpus> I think a similar thing exists for user space
< kanzure> wumpus: probably there's a very noisy compiler option for this...
< wumpus> most annoying about gdb is <value optimized out>
< GitHub167> [bitcoin] rebroad closed pull request #8728: More granular debug5 (master...MoreGranularDebug5) https://github.com/bitcoin/bitcoin/pull/8728
< kanzure> -finstrument-functions
< kanzure> hehehe
< wumpus> another one is systemtap probe points
< GitHub28> [bitcoin] rebroad closed pull request #8725: Split debug for estimatefee into {estimatefee,2} (master...MoreGranularDebug2) https://github.com/bitcoin/bitcoin/pull/8725
< GitHub62> [bitcoin] rebroad closed pull request #8726: Move a bunch of fairly verbose debug messages from mempool to mempool2 (master...MoreGranularDebug3) https://github.com/bitcoin/bitcoin/pull/8726
< kanzure> cool.
< GitHub25> [bitcoin] rebroad closed pull request #8727: Move logic for TX INVs together (master...MoreGranularDebug4) https://github.com/bitcoin/bitcoin/pull/8727
< wumpus> or gdb's "The dynamic printf command dprintf combines a breakpoint with formatted printing of your program's data to give you the effect of inserting printf calls into your program on-the-fly, without having to recompile it."
< rebroad> someone on github mentioned that every time i change the pull title, or close/open, 1200 emails get sent... is this true?
< achow101> rebroad: yes.
< wumpus> well I think 1200 is kind of overstating it
< rebroad> if so.... this sounds a little verbose... i agree with paveljanik's comment that things can be concise to one person while verbose to another
< rebroad> which is why I felt more granular was the answer..
< kanzure> it's the same thing with mailing lists too. your email will be sent to many thousands of people.
< wumpus> but at least the maintainers and people that have posted to an issue get a notification
< achow101> wumpus: I don't think it is actually overstating it. Anyone who watches the repo will get an email and there are 1206 watchers
< kanzure> and if everyone is reading their email then a 5 minute email ~= economic catastrophe of time spent reading
< wumpus> achow101: I'm not sure watchers get a mail for everything that happens in a repository, but you could be right
< kanzure> wumpus: are you a watcher?
< achow101> I'm watching and I get mails for every comment and commit to every pr or issue
< kanzure> i sort of assumed all the regulars were using the watchers feature.
< kanzure> ah maybe it's something about org members on github. nevermind.
< wumpus> kanzure: I'm repo owner, so I get a lot of mail from github, but I didn't realize watchers get all that too
< rebroad> paveljanik, was it you who sent me a github article earlier on here? i am struggling to find the link you quoted earlier :-s
< paveljanik> rebroad, see topic, it contains a link to log of this channel...
< wumpus> I don't manage to read most of it though
< wumpus> still looking for a way to filter when someone @'s me from the rest of the github spam
< kanzure> wumpus: i think github is supposed to do that on its homepage, but IIRC nobody ever views the github homepage :)
< achow101> Does anyone think that the pull request review feature on github might be useful?
< wumpus> really those should end up in my personal mailbox instead of with the bulk
< wumpus> achow101: haven't looked at it yet
< achow101> part of it looks like it basically prevents PRs from merging without ACKs, and ACKs are built in
< paveljanik> wumpus, github mails contain this weird CC: ... Comment <comment@noreply.github.com> or Mention or ...
< wumpus> ok that sounds fairly useful
< kanzure> achow101: are those ACKs tied to specific commit ids?
< achow101> Dunno, haven't fully checked it out yet. I just read over the help article they linked
< wumpus> paveljanik: that could be key
< wumpus> paveljanik: yes, searching for to:mention@noreply.github.com seems to do what I want
< GitHub131> [bitcoin] rebroad opened pull request #8733: More granular debug [WIP] (master...MoreGranularDebug.wip) https://github.com/bitcoin/bitcoin/pull/8733
< rebroad> this is the version with 22 commits.... hopefully if anyone can be bothered to look they can get a clearer idea of where it was all going..
< wumpus> much better
< GitHub147> [bitcoin] MarcoFalke closed pull request #8731: Debug headers received ("block" for new block announcement, "block2" … (master...DebugHeadersReceived) https://github.com/bitcoin/bitcoin/pull/8731
< btcdrak> wow, what is this new "review" thing on Github
< BunBun> quit
< GitHub198> [bitcoin] rebroad closed pull request #8596: [WIP] Feeler code and debugging. (master...FeelerFixes) https://github.com/bitcoin/bitcoin/pull/8596
< cfields_> wumpus: ok, i've finally had enough of the system package crap that we can't control. I'm adding native toolchains to depends, along with a BOOTSTRAP option, false by default. Bootstrapping can be done in gitian and uploaded, so that they're simply fetched 99% of the time.
< sipa> cfields_: so gcc being part of the build for all platforms?
< cfields_> sipa: that's what I'm considering, yes
< sipa> awesome.
< cfields_> I'm sure it'll spiral downward with arguments about where to stop, but gcc/binutils seems like a reasonable start.
< sipa> cfields_: do you know nixos?
< cfields_> sipa: not really
< cfields_> sipa: looks interesting. Is it intended to be deterministic across builders?
< sipa> it is not
< sipa> but it uses hashes of build instructions to identify packages
< cfields_> ah, nice. sounds familiar :)
< MarcoFalke> Isn't this what docker does?
< sipa> docker builds from source?
< cfields_> docker just does whatever you tell it to. but yes, i'd assume it uses the hash of the recipe for checkpointing
< luke-jr> btcdrak: paveljanik: where do you see the new review thing⁇
< btcdrak> luke-jr: look on the diff tab
< luke-jr> cfields_: we need to support building outside gitian anyway, so not sure I see the point
< btcdrak> luke-jr: "the files changed" tab.
< luke-jr> huh
< luke-jr> interesting
< MarcoFalke> luke-jr: I think the goal is to solve the cross compile issues with depends.
< MarcoFalke> So if you fetch the (gitian-built) toolchain, you can just build outside gitian like you do today