< wumpus>
#bitcoin-core-dev, #bitcoin-builds, #bitcoin-core-pr-reviews have been reserved at least, if you need any more let me know
< wumpus>
or kinlo
< aj>
#bitcoin-anything is reserved apparently, i tried registering bitcoin-signet. presumably kalle can set it up if desired
< jeremyrubin>
wumpus: ##ctv-bip-review, ##taproot{-bip-review,activation,}, ##miniscript, #sapio, #bitcoin-workshops, #bitmetas, #rust-bitcoin are all maybe good to nab if you can reserve them?
< emcy>
get the fuck outta here fresh prince
< BlueMatt>
jeremyrubin: note that libera is doing aggressive registration of #X-*
< BlueMatt>
jeremyrubin: relevant bitcoiners have the "bitcoin" group registered, so it probably makes sense to move more towards #bitcoin-X
< BlueMatt>
jeremyrubin: eg, I'm suggesting #rust-bitcoin move to #bitcoin-rust (as #rust-* is technically owned by anyone who registers a rust organization)
< gwillen>
BlueMatt: are they actually more organized about it than freenode was? The documentation claims you need a group registration to get a single-hash prefix but that doesn't actually seem to be enforced by chanserv any more than it was here
< BlueMatt>
gwillen: hmm, dunno? It seems like the whole thing was somewhat of a massive rush job
< BlueMatt>
gwillen: I mean the servers only came online like, today, so....who knows
< gwillen>
I mean they just copied a lot of the documentation from freenode I think
< gwillen>
but the group system on freenode has always been sort of aspirational at best
< vasild>
https://libera.chat/guides/connect -- "Libera.Chat is not yet accessible by TOR but we intend to have this available soon.", trying to connect to irc.libera.chat via tor results in "You are banned from this server- Your Tor exit node must not allow connections to libera.chat. Email tor-kline@libera.chat for assistance."
< gwillen>
vasild: from the phrasing I assume they intend to set up an onion service
< hebasto>
wumpus: maybe reserve #bitcoin-core-gui as well?
< vasild>
gwillen: yes, "soon"
< gwillen>
hebasto: I recommend just go ahead and do it if you think it might be useful, you can always hand it off later
< gwillen>
mentioning things to reserve in public spaces before actually reserving them is a dangerous game ;-)
< gwillen>
if the wrong people are listening.
< gwillen>
(I just popped into it to hold it, I haven't registered it but I'm holding ops and can hand them off to one of you guys)
< kinlo>
I can handle reservation for #bitcoin-* channels
< gwillen>
ahh right, I forget there's group registration
< hebasto>
kinlo: see my message above
< real_or_random>
shouldn't bitcoin and bitcoin-core be different namespaces even?
< real_or_random>
(I mean... I guess they're not. But do we want different namespaces?)
< jnewbery>
#proposedmeetingtopic remove fuzzer from CI jobs
< hebasto>
real_or_random: thinking the same
< sysadmin>
This channel will be terminated. Any nicks remaining in the channel will also be terminated. This action cannot be undone. Please /connect to irc.butt.es and /join #gamme for more information.
< michaelfolkson>
^ I think this from sysadmin is spam
< bitcoin-git>
[bitcoin] MarcoFalke opened pull request #22002: Fix crash when parsing command line with -noincludeconf=0 (master...2105-parseCommandlineCrash) https://github.com/bitcoin/bitcoin/pull/22002
< Kiminuo>
michaelfolkson, Will this channel stay or is something changing?
< michaelfolkson>
Kiminuo: The last I heard/read we were claiming nicks and channels on Libera in case we move there but we are probably going to wait until the dust settles to decide whether we do
< Kiminuo>
thanks for the info
< michaelfolkson>
I haven't heard anything regarding Freenode channels being terminated. I suspect this is highly unlikely despite the spam suggesting otherwise
< nkuttler>
nothing was decided for #bitcoin* yet
< nkuttler>
quite a few channels are moving though, some to libera, some to oftc, others probably elsewhere
< Kiminuo>
I'm interested in the process of "merging of PRs in Bitcoin Core repo" in general. So I understand that a PR needs a certain number of code review ACKs to be eligible for merging. But what happens then? What actually makes a person click the "merge" button?
< wumpus>
real_or_random: no idea if libera supports recursive namespaces, tbh i'm fine with the bitcoin core channels being under the bitcoin namespace, no need to make this too complicated
< real_or_random>
Yeah I think both options are reasonable in the end.
< real_or_random>
I just thought that we had the same story for websites, github orgs etc, so it would be logical to have a proper separation from the beginning
< real_or_random>
But in the end, chat is less official, and it's like with all the other services: we can switch if they go crazy.
< wumpus>
Kiminuo: the goal in a PR is to get consensus for merging it, there's some judgement involved by a maintainer regarding risk of the change versus number and thoroughness of reviews
< wumpus>
no one is constantly paying attention to the whole list though, so if you see something with a lot of ACKs that isn't merged yet feel free to bring it up
< Kiminuo>
I see. So it basically depends on when people with merge rights feel good about a PR.
< Kiminuo>
But given that merging is a somewhat non-deterministic process from my side, I can't really decide, I guess.
< wumpus>
mostly how they feel about the *comments on a PR*; if you see something you absolutely do not want to see merged, it's really important to bring that up too
< wumpus>
of course, merges are not final, if someone finds a problem after something is merged it's possible to revert
< wumpus>
Kiminuo: let me see
< wumpus>
in general i don't think 'losing ACKs' should be a reason not to do something if it's otherwise a valid comment, it's easy enough to review the diff again, at least if it is a small change
< Kiminuo>
ok, so I'll try to address that. Thanks
< wumpus>
in this case as the whole point of the PR is to come up with better naming, i guess it makes sense to incorporate the suggestion, instead of doing a big rename yet again afterward
< Kiminuo>
right
< wumpus>
michaelfolkson: i think it's not 100% clear, the infrastructure for freenode is hosted by various parties who donate a server, what happens depends on whether they go along with the new situation
< wumpus>
that said i doubt any will just pull the plug with 2 minutes notice
< michaelfolkson>
Kiminuo: You can always cc those who have already ACKed it asking for a re-ACK clearly explaining what has changed since they last ACKed and hopefully they'll be happy to re-ACK
< Kiminuo>
michaelfolkson, Yeah. I kind of want to optimize my workflow for "not bothering others needlessly" but in this case it is probably justifiable.
< michaelfolkson>
Kiminuo: Right especially as the time for the reviewer to re-ACK will be minimal in comparison to the time they needed to ACK it in the first place (assuming the change is relatively small which in this case it looks to be)
< Kiminuo>
yeah, thanks
< ggus>
hi all, this is gus from the tor project. i don't know if this is the right channel to ask, but very soon tor project will deprecate and remove v2 onion services. i've checked that ~15% of bitcoin nodes are running over tor and many of them are running v2 onion. do you have thoughts on how we could warn these node operators and help them migrate to v3 onion services?
< hebasto>
Kiminuo: to make re-ACKing a bit easier for reviewers, you'd also want to avoid rebasing when there are no conflicts
< vasild>
ggus: Hello!
< Kiminuo>
hebasto, Yes, you have made me aware of it. It didn't occur to me back then.
< aj>
i have an alias ---> git rebase -i $(git merge-base HEAD origin/master) <--- for when i want to edit my commits without changing the base commit to make comparison a little easier
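A minimal sketch of how such an alias could be defined in ~/.gitconfig (the alias name "rbi" is invented for illustration; the command itself is aj's verbatim):

    [alias]
        # interactive rebase onto the current merge-base, i.e. rewrite
        # commits in place without moving the branch onto a newer master
        rbi = !git rebase -i $(git merge-base HEAD origin/master)

Because the base commit stays fixed, comparing the old and new branch tips afterwards only shows the intended edits.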
< hebasto>
Kiminuo: sorry if I repeat myself, it was not addressed to you personally, rather general advice
< Kiminuo>
hebasto, np, it's useful advice!
< ggus>
hello, vasild!
< Kiminuo>
How do you guys review force-pushes? I use Github CLI and "gh pr checkout <number>" and when it is force-pushed, I use "gh pr checkout --force <number>". Then do you use simply "git diff old-commit..force-pushed-commit"?
< vasild>
ggus: on warning those node operators, I have no idea, maybe some mailing list or blog (that gets picked by news web sites) may help. On helping them migrate to v3 - there are two cases - 1. statically configured onion service in torrc, the users need to edit their torrc, no need to upgrade bitcoin core; 2. an onion service that is automatically created by bitcoin core - this will be switched
< vasild>
automatically to v3, but an upgrade to bitcoin core 0.21 is needed
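For case 1, a torrc sketch of what the edit might look like (directory and ports are illustrative; tor generally wants a fresh HiddenServiceDir, or the old keys removed, for the new v3 keys):

    # old v2 service (v2 was the default in older tor releases):
    #HiddenServiceDir /var/lib/tor/bitcoin-service/
    #HiddenServicePort 8333 127.0.0.1:8333

    # replacement v3 service:
    HiddenServiceDir /var/lib/tor/bitcoin-service-v3/
    HiddenServiceVersion 3
    HiddenServicePort 8333 127.0.0.1:8333

After restarting tor, the new .onion hostname appears in the service directory; if the address is announced via bitcoind's externalip option, that setting needs the new hostname too.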
< Kiminuo>
hebasto, ah, that's super helpful. I missed that. Thanks!
< hebasto>
you need `git range-diff ...`
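For the force-push review question above, a hedged example of the workflow (refs are placeholders):

    # save the old tip before fetching the force-push
    git branch pr-old
    gh pr checkout --force <number>
    # diff-of-diffs between the two versions of the branch
    git range-diff master pr-old HEAD

Unchanged commits pair up and show as equal even across a rebase, so only the actual changes need re-review.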
< Kiminuo>
Great. That will make my life simpler :)
< Kiminuo>
b for now
< vasild>
ggus: hmm, actually in the 1. case, if the user just edits torrc and changes the v2 onion service to v3, then an old bitcoin core (<0.21) would be able to accept incoming tor connections, but if it wants to make outgoing connections to v3 addresses, then an upgrade to >=0.21 is needed
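The corresponding bitcoind settings, roughly (standard tor ports assumed; the options themselves predate 0.21, but only >=0.21 understands v3 addresses):

    # bitcoin.conf
    onion=127.0.0.1:9050       # SOCKS5 proxy for outgoing .onion connections
    listenonion=1              # case 2: create an onion service automatically...
    torcontrol=127.0.0.1:9051  # ...via the tor control port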
< ggus>
vasild: do you have suggestions of mailing lists or forums that we should announce this? for the news, we can contact some news outlet. And do you know which tools people are using to automatically deploy a bitcoin node, so we can contact the developers?
< ggus>
vasild: about #2: amazing! so we just need to ask people to upgrade their bitcoin core software
< vasild>
ggus: there is bitcoin-dev@lists.linuxfoundation.org but I doubt it is the best way to reach users, maybe others would have better ideas, wumpus?
< wumpus>
I can post to the notification list (bitcoin-core-dev) and to the bitcoin core twitter account (would make sense to PR a new blog item to bitcoincore.org first, so it's possible to post a link to that)
< wumpus>
it would be more useful to have bitcoin specific instructions
< wumpus>
i mean i think tor deprecating tor v2 hidden services is well known at this point to tor users (they have been pretty loud about it), but bitcoin users might not know what to do, dunno
< ggus>
wumpus: yes, agree.
< sipa>
ggus: hello! what is the exact timeframe within which we might expect v2 services to start failing?
< sipa>
oh, you linked that above - sorry, i was reading backlog
< sipa>
july 15 is the next step on the timeline, and we're scheduled to have our next major release in august (though from experience, it always might slip a few weeks)
< sipa>
we should not forget to include a mention in our release notes
< sipa>
wumpus: i guess i missed something, but right, if v2 support is going to be removed imminently from tor, might as well remove it in v22.0
< wumpus>
yes, i think so too
< wumpus>
tor (master) already refuses to connect to v2 hidden services, so if we don't, we'll likely get a lot of complaints from people seeing their logs full of failed connection attempts
< vasild>
in July 0.4.6.x will be released (without v2 support), but it will not be stable?
< vasild>
and in October a stable version without v2 support will be released
< vasild>
I assume after Oct v2 services will start being unreachable, but what about tor nodes that run 0.4.5.x (which supports v2) after Oct?
< vasild>
maybe with some luck v2 services could be reachable even after Oct (if only old nodes are involved in the routing)?
< vasild>
ggus: ^ correct?
< sipa>
ggus: thanks for reaching out, in any case
< ggus>
vasild: yes, with some luck they could be reachable, but we're expecting that almost all tor relays will not support v2 onions anymore in October.
< ggus>
sipa: next week we will have an invite only Tor AMA about v2 onion services deprecation. it would be important to have some of bitcore-devs present there. if you pm your email, i'll send the invitation.
< bitcoin-git>
[bitcoin] 0xB10C opened pull request #22006: tracing: first tracepoints and documentation on User-Space, Statically Defined Tracing (USDT) (master...2021-05-initial-usdt-support) https://github.com/bitcoin/bitcoin/pull/22006
< sipa>
wumpus, vasild: we should probably (but perhaps not immediately) stop relaying v2 addresses
< sipa>
so they stop being rumoured and taking space in addrmans
< wumpus>
right
< sipa>
ggus: see PM, note that the project is called "bitcoin core"; there is an unrelated piece of software called bitcore too
< jonatack>
catching up with the tor v2 discussion, i've been running bitcoind for a couple of months now with tor-0.4.6.1-alpha, which rejects v2 addresses
< jonatack>
and working on removing v2 support
< jonatack>
after discussions on doing this with wumpus and vasild
< jonatack>
if someone else is also working on this, please let me know to coordinate
< jonatack>
wumpus: yes, that was the starting point for me running the new tor, have stayed with it since
< wumpus>
same, it works for mainnet, but i wasn't able to connect to testnet. it would be nice to have v3 hardcoded seeds for testnet too (and remove the v2 ones), though i'm not aware of anyone even running one
< jonatack>
-addrinfo on testnet shows 24 v3 peers for me
< wumpus>
great!
< jonatack>
(and 15 on signet)
< jonatack>
ah indeed, #21560 added on mainnet only
< wumpus>
yeh that was most urgent; the reason i'm bringing it up really is that contrib/seeds/nodes_test.txt *only* contains v2 onions, it also hasn't been touched since 2015 (besides adding port #'s), another option would be to just get rid of it
< jonatack>
speaking of which, #21843 is probably rfm
< gribble>
https://github.com/bitcoin/bitcoin/issues/21843 | p2p, rpc: enable GetAddr, GetAddresses, and getnodeaddresses by network by jonatack · Pull Request #21843 · bitcoin/bitcoin · GitHub
< jonatack>
e.g. getnodeaddresses 24 onion
< wumpus>
jonatack: agreed, it looks pretty much rfm
< bitcoin-git>
bitcoin/master d35ddca Jon Atack: p2p: enable CAddrMan::GetAddr_() by network, add doxygen
< bitcoin-git>
bitcoin/master c38981e João Barbosa: p2p: pull time call out of loop in CAddrMan::GetAddr_()
< bitcoin-git>
bitcoin/master a49f3dd Jon Atack: p2p: allow CAddrMan::GetAddr() by network, add doxygen
< bitcoin-git>
[bitcoin] laanwj merged pull request #21843: p2p, rpc: enable GetAddr, GetAddresses, and getnodeaddresses by network (master...getnodeaddresses-by-network) https://github.com/bitcoin/bitcoin/pull/21843
< jonatack>
turns out there are many more v3 peers on testnet once you addnode the ones you know
< provoostenator>
jonatack: I guess the DNS seeds don't crawl them and currently can't announce using the v2 address message?
< provoostenator>
(format)
< provoostenator>
But I would still expect most other nodes to gossip them.
< wumpus>
was about to say, you're basically performing the DNS seed crawler's work manually now :-)
< provoostenator>
But I guess you can't specifically ask for them?
< wumpus>
created an issue wrt collecting torv3/i2p addresses in the crawler https://github.com/sipa/bitcoin-seeder/issues/92 even though there's no point in adding them to the actual DNS seeds, it would still be useful to have a way to keep tabs on them for the hardcoded seeds updates
< wumpus>
three proposed meetings for today: moving to oftc or libera.chat (aj), windows code signing certificate update (achow101), remove fuzzer from CI jobs (jnewbery)
< sipa>
hi
< wumpus>
any last minute topics?
< wumpus>
#topic High priority for review
< core-meetingbot>
topic: High priority for review
< wumpus>
anything to add/remove or that is (almost) ready for merge?
< wumpus>
is it possible to add a GUI PR there?
< wumpus>
unfortunately i don't think so
< hebasto>
it should work to add with full link
< wumpus>
let me try
< wumpus>
right i could add it through a 'note'
< hebasto>
wumpus: provoostenator: thanks!
< wumpus>
anything else for high prio?
< wumpus>
#topic Moving to oftc or libera.chat (aj)
< core-meetingbot>
topic: Moving to oftc or libera.chat (aj)
< wumpus>
i'm not sure how much of a discussion this is anymore, fwiw: we've already reserved the namespace and channels in libera.chat
< wumpus>
it makes sense to register your nickname there if you haven't done so yet (works the same as here, with nickserv)
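The standard NickServ exchange, for anyone who hasn't done it before (password and email are placeholders): REGISTER once, then IDENTIFY on each connect (or configure SASL in your client instead):

    /msg NickServ REGISTER <password> <email>
    /msg NickServ IDENTIFY <password>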
< hebasto>
how long would it take to switch from freenode?
< wumpus>
i don't think there's any work left to be done?
< michaelfolkson>
This is still up in the air right? We're going to wait until dust has settled to decide rather than move in say the next week or two?
< wumpus>
okay, the merges bot isn't there yet, and we need to update the bitcoincore.org website
< BlueMatt>
I don't think it should be a question of whether things should move, there's not really any doubt that it should have moved already, the only question is libera, oftc, or something else
< michaelfolkson>
BlueMatt: Because new owner is "malicious"? I haven't been following it that closely. Are there any devs still left on the Freenode side?
< wumpus>
from what i've seen there is pretty broad agreement to move
< hebasto>
is current lack of tor support on libera a blocker?
< wumpus>
michaelfolkson: it's really fishy what happened, legal threats against admins etc
< achow101>
oftc seems to be a bit harder to register with since they don't support sasl
< hebasto>
^ without public plans to add such support?
< BlueMatt>
michaelfolkson: the people behind the freenode acquisition have been known to be hostile to bitcoin for years, I kinda suggested moving, but it wasn't worth the effort, this seems like a good excuse more than anything.
< b10c_>
hi
< wumpus>
hebasto: libera is planning to add tor support at least they mention so on their page
< michaelfolkson>
BlueMatt: Ok thanks
< sipa>
today i was briefly able to connect to libera over tor
< hebasto>
nice
< BlueMatt>
(the actual freenode admins have been fairly friendly to bitcoin stuff, at least after a *ton* of effort on the part of some of the #bitcoin mods, most of those admins have moved to libera now)
< provoostenator>
I registered provoostenator on the other side :-)
< murch>
provoostenator: Well, that settles it then. ;)
< wumpus>
it's wise to register your name there, to prevent it from being grabbed by someone else
< provoostenator>
And then confirm it here.
< wumpus>
yes
< michaelfolkson>
I suppose my second and final question would be there isn't the possibility of a reversal and everyone makes up and goes back to Freenode right? This often seems to happen in scenarios like this. A protest which results in a new agreement
< hebasto>
do I understand correctly that there is a consensus to move from freenode?
< provoostenator>
(depending on *how* hostile the IRC overlords are here of course)
< achow101>
it's prudent to maintain accounts on multiple networks so that switching is merely changing a url
< wumpus>
michaelfolkson: i sincerely doubt it (but cannot really go into details)
< wumpus>
read the gist i posted for some information
< BlueMatt>
michaelfolkson: I think if we find much wrong with libera, we will move to oftc or elsewhere, there's about zero chance we end up back on freenode
< michaelfolkson>
wumpus BlueMatt: Ok, sounds like it is just a question of timing then
< michaelfolkson>
Yeah libera seems obvious choice
< jonatack>
yup
< provoostenator>
Who wants to register satoshi to keep out the name squatters, and receive lots of legal threats? :-)
< wumpus>
so the remaining question is timeframe
< wumpus>
where is next week's IRC meeting?
< provoostenator>
It could make sense to do that on libera, but maybe ask right before the meeting if anyone objects.
< sipa>
maybe let's plan to have next week's meeting be the last one here?
< sipa>
unless there are exigent circumstances
< achow101>
There are also all of the side meetings (e.g. wallet, p2p)
< jnewbery>
those should follow the main meeting
< ariard>
hi
< michaelfolkson>
Yeah it is good to keep people informed and not move quickly unless forced to (I think)
< jonatack>
next wallet meeting is tomorrow (here, presumably)
< michaelfolkson>
Last Core dev meeting here next week sounds good to me (and gives time for people to get set up)
< wumpus>
sgtm
< hebasto>
agree
< achow101>
ack
< jonatack>
i'm fine to switch as soon as people want to
< wumpus>
#topic Windows code signing certificate update (achow101)
< core-meetingbot>
topic: Windows code signing certificate update (achow101)
< hebasto>
how will people be informed about moving?
< wumpus>
hebasto: we can set the topic here i guess
< hebasto>
wumpus: ok
< achow101>
It looks like we will be getting the windows code signing certificate shortly, hopefully within the next week
< wumpus>
achow101: great news!!!
< sipa>
awesöme
< wumpus>
hebasto: and we'll need to badger people talking here for a while to move maybe :)
< achow101>
We have created Bitcoin Core Code Signing LLC registered in Delaware and the cert will be issued to this new LLC
< wumpus>
hebasto: then at some point maybe set it to +m
< achow101>
it's been validated already, so all that's left is waiting for Digicert to issue the cert itself
< wumpus>
achow101: glad to hear that
< achow101>
that's all
< provoostenator>
Nice!
< wumpus>
#topic Remove fuzzer from CI jobs (jnewbery)
< core-meetingbot>
topic: Remove fuzzer from CI jobs (jnewbery)
< jnewbery>
hi!
< jnewbery>
It seems that cirrus CI very frequently times out after two hours on the job that runs the fuzz corpus
< jnewbery>
and that causes the CI to show as failed for PRs that don't actually have any problems
< wumpus>
yes, it happens intermittently but quite often
< jnewbery>
There are two problems here: 1. a run time of two hours is a really slow feedback loop, which is terrible for productivity
< wumpus>
i wonder if there are any specific cases that are so slow, or whether it's necessarily like this
< sipa>
i think the fuzz test sets are also just too big
< provoostenator>
Is it possible to do a much more limited fuzz? Just to catch really obvious mistakes?
< wumpus>
two hours is long, yes
< jnewbery>
2. a CI that has false failures reduces people's trust in the system and hides actual failures
< jonatack>
protip, push on sunday
< jnewbery>
I personally don't think we should be running the fuzz corpus on PRs
< sipa>
jnewbery: you mean just run it on master?
< sipa>
that'd be fine by me
< ajonas>
so oss-fuzz does have a ci integration
< jnewbery>
if the code that the corpus is testing hasn't changed, then you're just running the same code paths as every other branch. If it has changed, then the corpus isn't going to get good coverage (since it's optimized for the old code)
< ajonas>
no idea what the run time would be but it's something to maybe consider
< jnewbery>
yes, just run on master
< jnewbery>
I feel like a CI is not the right place for fuzzing
< jonatack>
at the same time, the fuzz CI helpfully catches when you forget to update the fuzzers for your changes
< sipa>
jonatack: we can build the fuzz code without running it
< jnewbery>
jonatack: we should still build the fuzz binaries
< hebasto>
^ we have fuzz binary by default
< michaelfolkson>
provoostenator: I don't think we're at a point where we can draw a clear divide between obvious fuzzing and non-obvious fuzzing (but someone can correct me if wrong)
< sipa>
another possibility is just running a small random subset of fuzz inputs
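One way such a subset run might look, driving the consolidated fuzz binary from the shell (paths, the sample size, and the loop itself are assumptions for illustration, not an existing CI feature):

    # run each target on 200 randomly chosen seeds from the corpus
    for dir in qa-assets/fuzz_seed_corpus/*/; do
        target=$(basename "$dir")
        ls "$dir" | shuf -n 200 | while read -r seed; do
            FUZZ="$target" src/test/fuzz/fuzz "$dir$seed" > /dev/null
        done
    done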
< sipa>
but i think i agree with jnewbery that in general, (PR) CI is not the right place for fuzzing
< ajonas>
The fuzzer is not the only thing that's slow
< sipa>
well, what we're doing isn't even actual fuzzing, it's running unit tests that are automatically derived from fuzzing corpus :)
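Concretely, what the CI job (and anyone reproducing it locally) runs is roughly the in-tree runner over the seed corpus (the corpus checkout location next to the source tree is an assumption):

    ./test/fuzz/test_runner.py ../qa-assets/fuzz_seed_corpus/

i.e. every stored input is replayed through the instrumented binaries to check that nothing crashes, rather than generating new inputs.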
< ajonas>
it will knock the feedback loop down some but it's not the only issue if that's what the goal is
< sipa>
ajonas: what else is?
< jnewbery>
sipa: not really unit tests, since they don't necessarily assert that the behaviour is correct.
< jnewbery>
they just try to hit path coverage
< sipa>
jnewbery: true
< glozow>
the fuzzer is the one that’s timing out a lot though right?
< ajonas>
the macbuild actually takes longer on average
< jonatack>
and there is still a fuzzer run on bitcoinbuilds atm
< ajonas>
same with tsan
< wumpus>
glozow: yes it's why your testmempoolaccept PR is failing the CI
< michaelfolkson>
glozow: Nothing else takes two hours as far as I know :)
< glozow>
yes :’(
< jonatack>
michaelfolkson: i have the impression that it can vary quite a bit
< jnewbery>
my proposal would be to remove the fuzz corpus testing from our CI. I also think separately we should all aim to drive down the CI time - fast feedback loops are really important
< jnewbery>
ajonas: is that data pulled from cirrus?
< ajonas>
yes
< michaelfolkson>
So ACK on removing fuzzer from CI. Does there need to be a separate conversation on the increasing time taken by test frameworks? Or is it inevitable that it slowly goes up over time?
< jnewbery>
it's good to have that data
< sipa>
there is always the possibility of splitting up a very slow CI target in two
< lightlike>
has there ever been an instance where the fuzzer corpus run found a bug on a PR?
< jnewbery>
It doesn't match what I'm seeing on some PRs. For example 20833 has failed many times in a row because the fuzzer times out after two hours
< jnewbery>
that seems unlikely if the mean really is 40 minutes
< wumpus>
yes, i'm also fine with removing the fuzzer from the CI, i think it was mostly useful in that it ensured the fuzz tests got compiled, but that's the default now
< jnewbery>
unless there's something in that PR that causes the fuzzer job time to triple
< sipa>
jnewbery: maybe it's only the mean of successful jobs?
< glozow>
this might be a dumb question but can we just run fuzz tests that are changed in the pr?
< sipa>
glozow: not sure how we'd do that
< glozow>
me neither 😅
< sipa>
michaelfolkson: those are all due to actual fuzzing
< michaelfolkson>
sipa: Ok
< jnewbery>
I'm also not sure of the value of running a fuzz corpus derived from an old branch on the new branch
< sipa>
jnewbery: it could detect certain regressions
< jnewbery>
either the code is the same and you're running the same executions, or the code is different and the fuzz corpus won't penetrate very deeply into the new code paths
< sipa>
jnewbery: i think that's a bit unnuanced
< jnewbery>
maybe I just don't understand the idea of corpuses
< sipa>
you're certainly right for certain types of changes, but not all
< jnewbery>
sipa: probably :)
< ajonas>
I do think a slow and fast test separation makes sense. Fanquake also brought up that much of the Mac build is brew installing stuff. We can do better.
< sipa>
in any case, i agree that there is little value in running these tests in CI on every PR
< jnewbery>
I think there's probably more analysis to be done. Knowing how many regressions were caught by running the fuzz corpus would be really interesting
< sipa>
jnewbery: a useful test could be trying to reintroduce an old bug that was found by fuzzing
< jnewbery>
do we have a CI job on the master branch after every merge? Would that be a better place for this test?
< sipa>
(but without straight up reverting the code)
< sipa>
jnewbery: i think so
< jnewbery>
I know we used to with Travis. I'm not very familiar with the cirrus config
< bitcoin-git>
bitcoin/master 7eea659 Jarol Rodriguez: qt, test: use qsignalspy instead of qeventloop
< bitcoin-git>
bitcoin/master e2b55cd Hennadii Stepanov: Merge bitcoin-core/gui#335: test: Use QSignalSpy instead of QEventLoop
< aj>
jnewbery: i think the variance in fuzz time is that the job is "[compile] [fuzz] [add compile results to ccache]" -- if a previous job succeeded in <2h, ccache is up to date making compile much quicker; but if not, compile is slow, fuzz is slow, and 2h elapses before ccache gets updated causing every run to fail?
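Schematically, the failure mode aj describes (step names invented; timings per his hypothesis):

    # fuzz CI task, per run:
    compile           # fast if ccache is warm, slow from scratch
    run_corpus        # dominates the 2-hour budget
    upload_ccache     # only reached if the task finishes in time

One timeout leaves the cache cold, which makes the next run's compile slow too, so the task keeps hitting the limit until a quiet run sneaks under it.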
< murch>
I've registered the nickname "murch" on Libera.