< kanzure>
sipa might even have elements.git branches in there
< sipa>
no
< kanzure>
welp.
< luke-jr>
cfields: it occurs to me the reason the dir is dirty, is because it's missing files; so if we want to defer doing a real fix, we can alternatively fix the missing-files issue instead, by generating the tarball from git-archive
< cfields>
luke-jr: I believe the dir is dirty because we extract the tarball into it
< gmaxwell>
Wow, this is super dishonest https://segwit2x.github.io/segwit2x-announce.html ... "Bitcoin Upgrade" is untrue... it claims Bitcoin "Classic" and unlimited are compatible "Compatible Fully-Validating Node Software" but they don't implement the S2X rules and don't even implement segwit!
< cfields>
luke-jr: I see what you mean
< promag>
praxeology: I feel you too
< Cryptocide>
Abra|BitClub Network|Bitcoin.com|BitFury|BitGo|Bitmain|BitPay Blockchain|Bloq|BTCC|Circle|Ledger|RSK Labs|Xapo, no thanks
< luke-jr>
gmaxwell: Classic and BU merged 2X code
< kanzure>
why XORing them? i don't get it.
< kanzure>
er, ORing
< luke-jr>
funny how they didn't include XT, Knots, btcsuite, et al on their lists
< kanzure>
yes. anyway.
< gmaxwell>
luke-jr: they merged segwit?!
< luke-jr>
gmaxwell: no, just 2X
< luke-jr>
it's still super dishonest, just not *totally* bogus
< gmaxwell>
then they're not compatible fully validating s2x nodes.
< luke-jr>
remember that crowd thinks SPV is fine
< gmaxwell>
they don't list bitcoinj
< gmaxwell>
or any other SPV client.
< luke-jr>
[01:19:28] <luke-jr> it's still super dishonest, just not *totally* bogus
< gmaxwell>
if they said "[compatible fully validating nodes] btc1 \n [compatible wallet software] bitcoin classic\n" it would n... oh okay, well I suppose because it's not a lie in every possible sense it's okay. :P
< luke-jr>
in other news, Texas Bitcoin conference is promoting 2X as if it's Bitcoin, so I think that makes the decision to go simple (ie, not to)
< kallewoof>
I'm running a modified Bitcoin Core node to do some profiling on where resources are spent (CPU cycles and bandwidth in particular) and am seeing some really weird stuff. E.g:
< kallewoof>
The columns are "portion of CPU used", "min CPU cycles", "max CPU cycles", "median CPU cycles per call", "bandwidth per call", "# of calls", "code path"
< kallewoof>
So the LOCK(cs) in SendMessages for inventory for trickle for the tx relay part is taking 78% of all CPU cycles for my node. Does that seem normal?
< kallewoof>
Also baffled by the number of calls. 131 million. I started profiling yesterday.
< kallewoof>
This is probably from the mempool.info(hash) call in the while loop btw. That probably explains the high # of calls, but 131 million in 24 hours is 1500/second. Maybe my profiling code is broken.
< kallewoof>
... Actually, it rose by 38k from 01:48:49 to 01:50:00, so doesn't seem improbable.
< morcos>
BlueMatt: is there an ascii middle finger? 3- or something?
< jimpo>
Ah, got it. It's the fTry constructor param.
< gmaxwell>
sometimes profiling is confused by locks. (and/or moderately contended locks in an otherwise inactive process can be a high percentage due to spinning).
< gmaxwell>
Everyone okay with me doing a presentation on 0.15 improvements to a local group in 1.5 weeks?
< praxeology>
Like... I know that BitPay and Coinbase are declaring that they will support Segwit2x... has any exchange declared that they will continue to support Bitcoin (Bitcoin Core's rules)?
< kallewoof>
gmaxwell: there is some amount of overhead as I am doing the profiling on my own. Maybe that's the cause for the high portion of time spent there, but it still seems like a lot of LOCK calls, regardless of actual CPU cycle count. Would be cool if the mempool could be copied once and then not lock cs at all. Code is here btw: https://github.com/kallewoof/bitcoin/tree/profile-resources
< sipa>
copying the whole mempool?
< kallewoof>
Only the hashes.
< sipa>
oh
< kallewoof>
Actually that wouldn't work. It uses txinfo for feeRate and tx etc.
< praxeology>
kallewoof: does your bitcoin process have enough memory to hold the entire chainstate?
< praxeology>
err, what -dbcache are you using?
< kallewoof>
it has 16 GB of RAM. 223 MB free atm.
< kallewoof>
Default
< kallewoof>
(dbcache)
< praxeology>
are you profiling while synching from genesis, or from the latest tip, or what?
< sipa>
praxeology: he's talking about inv relay
< sipa>
not about sync
< kallewoof>
The node is fully synced up. I restarted it with profiling enabled so it's from tip
< bitcoin-git>
[bitcoin] jonasnick opened pull request #11083: Fix combinerawtransaction RPC help result section (master...fix-combinerawtransaction-help) https://github.com/bitcoin/bitcoin/pull/11083
< praxeology>
kallewoof: is your profiler sampling instruction pointer positions, or is it timing each function call?
< kallewoof>
It's using rdtsc. Not sure which that falls under.
< praxeology>
the latter can really mess up measurements
< praxeology>
and potentially the profiler is not actually measuring cpu usage... that could be misleading; instead it could just be saying where a thread is sleeping
< kallewoof>
Right. It definitely keeps ticking even while waiting for locks. I was more interested in the # of LOCK() calls/second rather than the actual CPU usage in this case, though.
< kallewoof>
Since the code is auto-profiling all locks, I can simply subtract the lock times from the parent to get "time spent excluding lock wait times", if that seemed useful.
< praxeology>
is it measuring the number of lock calls? or the number of times it sampled with the thread being at the Lock() call?
< kallewoof>
# of calls
< praxeology>
Does bitcoin's networking code operate on polling or interrupt?
< kallewoof>
I logged every entry into the tx relay loop and logged the size of vInvTx. I don't have a lot of connections yet (14 or so) but I'm seeing 4-5 per second. Example of vInvTx sizes: 5, 0, 5, 0, 4, 4, 5, 1, 0, 1, 5, 1, 2, 0, 26, 6, 5, 0, 5, 5, 0, 5, 0, 2, 2, 13, 52, 0, 0, 0, 4, 4, 11, 21
< Fibonacci>
o/
< kallewoof>
That doesn't match up with 1500/second at all, but maybe with more connections. Or maybe there was a bunch of txs in the last 24 hours.
< kallewoof>
Actually, for comparison, the path into "trickle (tx-relay)" only has 543732 instances, which would mean there are on average 133867681/543732=246 calls to LOCK(cs) per entry. Huh.
< cfields>
i should really clean that up and PR it
< * cfields>
throws it on the pile
< kallewoof>
Nice :)
< cfields>
should tell you how long it's locked, and what percentage of the thread time it spent in it
< kallewoof>
I sort of already know that. What is confusing me is the # of times it is locking.
< kallewoof>
350 million in <24h is a lot of LOCK()s.
< cfields>
heh
< Fibonacci>
Bitcointalk will soon be irrelevant in the cryptoverse. Alternative methods for coin legitimization are popping up that will more carefully scrutinize developers intentions and identities, while at the same time allowing for an influx of new blood not associated with the corrupt financial institutions. Keep your eyes open for this shifting to a new paradigm
< luke-jr>
!ops spammer
< Fibonacci>
That wasn't spam Luke-jr
< luke-jr>
sure looks like it.
< Fibonacci>
I typed that myself
< luke-jr>
considering this channel has nothing to do with bitcointalk, altcoins, etc
< Fibonacci>
well I think it's time
< Cryptocide>
People can code whatever they want, wherever; if they have great coding skills they will shine. What is the issue? We all agree on this
< kallewoof>
Found part of why there are so many locks. It's locking cs twice for every entry in vInvTx, which is anything from a few up to 100+. With 4-5 per sec that adds up fast. The first lock is for the CompareInvMempoolOrder (std::pop_heap call at start of loop), and the second is for the actual mempool.info(hash) call.
< kallewoof>
Actually, since CompareInvMempoolOrder is a sorter, and each comparison calls CompareDepthAndScore, which locks cs, the number of lock calls is dependent on the size of the vector. That definitely explains things...
< kallewoof>
I think this could all be solved by making a sub-mempool which takes a list of hashes and simply pulls them out of the mempool once. These operations could then be done on the sub-mempool without locking anything or at least without locking cs.