< * luke-jr>
suggests "if you can articulate a good reason why, it's okay to have mid-commits fail"
< achow101>
meshcollider: I'm fine with doing that. The question is do *you* want to review that commit?
< sipa>
achow101: i think meshcollider was suggesting that promag would squash the commits himself, before local review
< gwillen>
luke-jr: do you consider "it would be a huge pain in the ass" to be a good reason
< gwillen>
my sense is that's maybe the case here
< gwillen>
it's probably better to avoid that case but I'm sympathetic because I've gotten into it before
< promag>
sipa: is it possible to get the changes since a bdb cursor?
< luke-jr>
gwillen: if it's not just laziness ;)
< luke-jr>
otoh, we're making our test system more fragile, so dunno, maybe we should just ignore test failures :/
< meshcollider>
achow101: I prefer smaller commits over 1 massive one even if bisect breaks
< achow101>
gwillen: luke-jr: it's just super hard to follow if it is one gigantic diff. I tried doing that and quickly got lost in my own changes before I even committed it.
< sipa>
promag: ?
< promag>
achow101: incoming refactor? be gentle :P
< sipa>
achow101: that seems like a plenty strong reason to ignore the rule
< promag>
sipa: scantxoutset takes a cursor then iterates it
< sipa>
achow101: though even better would be small commits that don't break it :)
< promag>
sipa: is it possible to make the following scantxoutset incremental?
< gwillen>
yeah I think small commits with breakage > large commits that are hard to understand, indeed
< sipa>
promag: what does bdb have to do with it?
< sipa>
that's a wallet thing
< sipa>
scantxoutset has nothing to do with the wallet
< sipa>
promag: ah, i see what you mean
< sipa>
you can have an incremental scantxoutset by filtering only entries with a blockheight later than your previous scan
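The height-filtered rescan sipa describes could be sketched roughly as below. The names (`Coin`, `ScanSince`) are illustrative stand-ins, not Bitcoin Core's actual UTXO-set API; the real `Coin` class lives in `coins.h`.

```cpp
#include <cstdint>
#include <map>
#include <vector>

// Illustrative stand-in for a UTXO entry.
struct Coin {
    int32_t nHeight;  // block height at which the output was created
    int64_t nValue;
};

// Keep only entries created after the height of the previous scan.
// Note: as pointed out in the discussion, this still iterates the whole
// set, so it filters results but does not speed up the walk itself.
std::vector<Coin> ScanSince(const std::map<int, Coin>& utxo_set, int32_t last_height)
{
    std::vector<Coin> result;
    for (const auto& [id, coin] : utxo_set) {
        if (coin.nHeight > last_height) result.push_back(coin);
    }
    return result;
}
```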
< achow101>
sipa: I don't think it is possible to make small commits that don't break tests. all this wallet stuff is tied to each other extremely tightly
< sipa>
but that wouldn't speed anything up
< sipa>
achow101: yeah, i believe that
< promag>
sipa: right, I thought the same
< promag>
btw, scantxoutset doesn't take into account mempool, is that intended?
< sipa>
we could add an option to include the mempool i guess
< promag>
achow101: you could add [skip ci] in the commit message, travis only reads this in the HEAD commit but we could use it in broken commits
< achow101>
promag: ah, good idea
< fanquake>
meshcollider: are you about to merge that PR? I'm in the middle of doing the same..
< meshcollider>
Oh yes I am lol
< meshcollider>
Do you want to continue or shall I do it?
< fanquake>
I'm basically at sign and push hah
< meshcollider>
Ok go for it :)
< meshcollider>
It's safe to assume I'll see and merge anything tagged with wallet which is RTM within half a day or so, but you beat me to it this time
< jtimon>
regarding https://github.com/bitcoin/bitcoin/pull/8994 I'm not sure it needs conceptual review (I'm sure there's many concept acks buried in the comments), but mostly review and re-review
< bitcoin-git>
[bitcoin] practicalswift closed pull request #16045: Skip redundant memset(p, 0, 0) calls where p is not valid for writing (master...redundant-memsets) https://github.com/bitcoin/bitcoin/pull/16045
< bitcoin-git>
[bitcoin] MarcoFalke opened pull request #16293: test: Make test cases separate functions (master...1906-testNoScopeLeak) https://github.com/bitcoin/bitcoin/pull/16293
< MarcoFalke>
[05:41] <jonasschnelli> MarcoFalke: how is travis caching .ccache? Per job? Or does it combine/merge files in that cache folder?
< MarcoFalke>
It should be per job
< jonasschnelli>
so per build? per job would not make much sense?
< MarcoFalke>
I think they run something like $(env | md5sum) to get the cache name, so that should be different for each entry in the matrix
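A hedged guess at how such a per-job cache key could be derived; Travis's actual scheme isn't shown here, this just illustrates the `env | md5sum` idea MarcoFalke mentions, so jobs with different matrix entries (different env vars) get different keys.

```shell
#!/bin/sh
# Hash the (sorted) environment into a cache name; any difference in the
# job's env vars (HOST, compiler flags, ...) yields a different key.
CACHE_KEY=$(env | sort | md5sum | cut -d' ' -f1)
echo "ccache-cache-$CACHE_KEY"
```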
< jonasschnelli>
I see...
< jonasschnelli>
yes. Makes sense
< MarcoFalke>
each HOST should have their own cache
< jonasschnelli>
maybe based on the job name or so
< jonasschnelli>
(or as you said a hash of the env)
< dongcarl>
For #13989, I'm wondering what the policy is for adding intrinsics like this. According to jamesob's benchmarks here https://github.com/bitcoin/bitcoin/pull/13989#issuecomment-497381424, I see that: 1. The PR significantly improves SHA-intensive microbenchmarks: `micro.gcc.MerkleRoot` and `micro.gcc.SHA256D64_1024`. 2. It does not speed up IBD performance significantly.
< BlueMatt>
sipa: the total CheckBlock time in IBD isn't 0, though?
< BlueMatt>
especially with assumevalid
< sipa>
BlueMatt: yeah, true
< sipa>
still, i don't expect IBD to be bottlenecked by sha256
< BlueMatt>
it is (for now) critical-path, though I do want to change that.....
< dongcarl>
So is there consensus around whether to continue maintaining intrinsics? Should they be phased out over time, or we just don't add new ones?
< sipa>
dongcarl: i suggest you try comparing performance with versions without any intrinsics :)
< sipa>
(we have sse4, avx2, and shani based optimized code already, which you may be seeing)
< sipa>
i don't see any reason why we'd drop them?
< dongcarl>
perhaps I'm misunderstanding but jamesob brought up the point that we might not want to trust chip manufacturers with SHA implementations?
< sipa>
we're trusting them entirely for everything else already?
< jamesob>
nvm, I think *I* misunderstood there - I thought the intrinsics were doing more than they are
< dongcarl>
sipa: true
< dongcarl>
have there been previous attempts at benchmarking propagation speed? (what would that even entail?)
< sipa>
jamesob: they're just syntactic C sugar for platform-specific assembly instructions
< jamesob>
sipa: gotcha; for some reason I had it in my head that we may be wanting to make use of some sha256 native instruction
< sipa>
we are; through intrinsics in some cases :)
< sipa>
dongcarl: it could have been written as an else branch instead
< sipa>
but it's kinda hard with all the conditional compilation
< sipa>
once you've installed a sha-ni based hasher, you don't want it to be overwritten afterwards with a measly sse4-based one
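The "don't overwrite a better hasher" rule sipa describes can be sketched as a simple priority check. The names here (`Impl`, `HasherSlot`) are made up for illustration; Bitcoin Core's real selection logic lives in `crypto/sha256.cpp` and keeps separate specializations per batch width.

```cpp
// Candidate 1-way hasher implementations, ranked by speed (assumed ordering).
enum class Impl { Generic = 0, SSE4 = 1, AVX2 = 2, SHANI = 3 };

struct HasherSlot {
    Impl current = Impl::Generic;
    // Install a candidate only if it outranks what is already installed,
    // so a SHA-NI hasher is never replaced by a measly SSE4-based one.
    bool Install(Impl candidate) {
        if (candidate <= current) return false;
        current = candidate;
        return true;
    }
};
```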
< dongcarl>
sipa: Right... So when we add AVX512 we should also disable it like you did? Or are benchmarks required before that call can be made?
< sipa>
dongcarl: AVX512 only lets you speed up 16-way hashing; not 8-way, 4-way, 2-way, 1-way, or double-64byte :)
< sipa>
so no
< sipa>
AVX2 code right now is also only used for 8-way hashing, but 4-way is still done using the sse4 based code
< sipa>
this is different from sha-ni which gives you a faster 1-way hash
< sipa>
dongcarl: maybe this isn't obvious, but the AVX2/AVX512 instructions really just give you a fast way to implement the computation of 8/16 (for avx2/avx512) hashes simultaneously
< sipa>
if you'd use AVX512 based code to compute just one hash, you'd need to fill the buffers with 15 pieces of dummy input, and hash those at the same time still
< * dongcarl>
digesting and thinking
< sipa>
but the AVX512 code for 16 hashes is slower than just using naive non-optimized SHA
< sipa>
so it's only worth it when you actually have 16 things to hash
< sipa>
it's very likely that AVX512 (for 16 hashes) is also slower than AVX2 (for 8 hashes)
< sipa>
same as AVX2 (for 8 hashes) is usually slower than using SSE4 (for 4 hashes)
< sipa>
slower in clock time; per hash it's obviously faster, or we wouldn't have the code at all
< sipa>
does that make sense?
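The batching logic sipa walks through above might be sketched like this: wide SIMD paths only apply when there are enough independent hashes to fill their lanes, and leftovers fall through to narrower paths. The function is a hypothetical illustration, not Bitcoin Core's dispatcher.

```cpp
#include <cstddef>

// Count how many hasher calls are needed for n independent hashes,
// greedily using the widest applicable path first:
// 16-way (AVX512), 8-way (AVX2), 4-way (SSE4), then 1-way leftovers.
size_t HashBatchCalls(size_t n)
{
    size_t calls = 0;
    while (n >= 16) { n -= 16; ++calls; }  // AVX512: 16 hashes per call
    while (n >= 8)  { n -= 8;  ++calls; }  // AVX2:    8 hashes per call
    while (n >= 4)  { n -= 4;  ++calls; }  // SSE4:    4 hashes per call
    while (n >= 1)  { n -= 1;  ++calls; }  // scalar / SHA-NI: 1 per call
    return calls;
}
```

This also shows why you "can't compute one hash with AVX512": a single hash never reaches the 16-way path at all.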
< dongcarl>
sipa: I guess I'm not understanding why we disable SSE4/AVX2 when SHA-NI is present, but you're saying we shouldn't disable AVX512?
< sipa>
dongcarl: because SHA-NI just gives you superfast code to compute *one* hash
< sipa>
you can't even implement computing *one* hash with AVX512; you always compute 16
< sipa>
so if you have SHA-NI you want to use it for everything
< sipa>
with AVX512 you not only don't want to use it for everything; you can't
< dongcarl>
Right I believe I understand that. But thinking along this line of logic, shouldn't we disable AVX512 when SHA-NI is present?
< sipa>
oh!
< sipa>
yes
< sipa>
i thought you were asking if we shouldn't disable SSE4/AVX2 when AVX512 is present; the answer there is no
< sipa>
SHA-NI will likely be faster than AVX512 code, even for 16 hashes
< dongcarl>
Haha okay cool that makes perfect sense! Thanks for your patience!
< sipa>
np!
< sipa>
that's actually something you may want to benchmark, whether 16-way AVX512 isn't faster per hash than SHA-NI
< sipa>
unfortunately, I don't think any hardware exists with both AVX512 and SHA-NI :p
< dongcarl>
sipa: True. Why aren't there overlaps in hardware?
< sipa>
for some reason AMD has been much faster with rolling out SHA-NI support than Intel... which is surprising as SHA-NI was designed and proposed by Intel
< dongcarl>
sipa: Ahhh... probably doesn't make much sense to benchmark when there isn't a machine that supports both then...
< luke-jr>
sipa: AFAIK there must have been hardware issues, as earlier Intel CPUs were advertised to have it, then they backed out
< luke-jr>
actually, I would think some Atoms have both?
< sipa>
luke-jr: atoms have avx512?
< luke-jr>
unsure
< luke-jr>
nah, looks like Intel left it out of them
< luke-jr>
fwiw, what BFGMiner did for CPU mining, is benchmark all algorithms in separate processes, then use the best performing
< luke-jr>
(at startup)
< sipa>
yeah
< sipa>
that may be necessary at some point
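The BFGMiner approach luke-jr mentions, benchmarking every candidate at startup and keeping the fastest, might look roughly like this. Simplified sketch: in-process timing rather than the separate processes BFGMiner reportedly uses, and all names are made up.

```cpp
#include <chrono>
#include <functional>
#include <vector>

// Time one candidate over `iters` runs; returns elapsed nanoseconds.
static long long TimeImpl(const std::function<void()>& impl, int iters)
{
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iters; ++i) impl();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(
        std::chrono::steady_clock::now() - start).count();
}

// Benchmark all candidates once (e.g. at startup) and return the index
// of the fastest one.
size_t PickFastest(const std::vector<std::function<void()>>& candidates,
                   int iters = 100)
{
    size_t best = 0;
    long long best_ns = -1;
    for (size_t i = 0; i < candidates.size(); ++i) {
        long long ns = TimeImpl(candidates[i], iters);
        if (best_ns < 0 || ns < best_ns) { best = i; best_ns = ns; }
    }
    return best;
}
```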
< bitcoin-git>
[bitcoin] MarcoFalke opened pull request #16294: qt: test: Create at most one testing setup (master...1906-qtTestOnlyOneTestingSetup) https://github.com/bitcoin/bitcoin/pull/16294
< achow101>
how do people feel about getting rid of dumpwallet and the import* commands except for importmulti?
< sipa>
for now dumpwallet is kind of the only way to actually see everything you've imported in the past
< achow101>
it kind of gets in the way of all of these wallet changes
< sipa>
they can keep working on top of the legacy spkprovider i think?
< promag__>
achow101: really? don't people use dumpwallet and importwallet for backups?
< achow101>
promag: they shouldn't...
< promag>
breaking change alert :P
< promag>
do you have that refactor pushed?
< sipa>
achow101: that's not a good reason :)
< sipa>
there are plenty of things supported that are Bad Ideas (tm)
< sipa>
they can certainly be deprecated if there are alternatives, but i suspect here that the expected workflow will be that once you upgrade to a Shiny Descriptor Wallet certain RPCs stop being available, but remain supported on legacy wallets
< promag>
achow101: oh nice, lots of commits :D
< achow101>
i'm going to reorganize them after I get the thing to actually work
< jb55>
achow101: why do you need [ci skip], does CI run on every commit? I thought it only ran on the tip.
< achow101>
jb55: it's just to note that those commits won't compile/won't pass tests
< jb55>
I see
< achow101>
the order is kind of screwed up right now
< achow101>
all the [ci skip] commits should be at the end
< luke-jr>
eh, why wouldn't they compile at least?
< achow101>
cause I fucked up. they'll compile later when I reorganize everything