bitdex has quit [Remote host closed the connection]
bitdex has joined #bitcoin-core-dev
<_aj_>
achow101: if you're still around to give a quick hot take on the other alternatives for #33671 i can probably update the PR to one of them today
<bitcoin-git>
[bitcoin] hebasto merged pull request #33824: ci: Enable experimental kernel stuff in most CI tasks via `dev-mode` (master...2511-ci-dev-mode-kernel) https://github.com/bitcoin/bitcoin/pull/33824
<abubakarsadiq>
#topic Benchmarking WG Update (l0rinc)
<l0rinc>
We've finished the article about benchmarking and performance optimization for Bitcoin Magazine: "Outrunning Entropy: Why Bitcoin Can't Stand Still"
<l0rinc>
Did a few full reindex-chainstate measurements for different parallelism levels for the InputFetcher to see how the cpu affects the performance: https://github.com/bitcoin/bitcoin/pull/31132#issuecomment-3532063730 - it seems beyond ~4 threads the performance gains are usually negligible.
stringintech has joined #bitcoin-core-dev
<l0rinc>
Looking for reviewers for two PRs I had to rebase recently: #33738 and #30442 - that's it from me
<corebot>
https://github.com/bitcoin/bitcoin/issues/33738 | log: avoid collecting `GetSerializeSize` data when compact block logging is disabled by l0rinc · Pull Request #33738 · bitcoin/bitcoin · GitHub
<abubakarsadiq>
#topic Stratum v2 WG Update (sjors)
<sipa>
sdaftuar has been benchmarking the final resulting performance impact of the invocations of linearization, hoping to tune the logic for determining how much budget to give it
<sipa>
(that's it for me, continue)
<Sjors[m]1>
Some small interface changes are in PRs.
<Sjors[m]1>
And the SRI team is working on a Rust client that consumes our IPC interface, which is nice.
<BlueMatt[m]>
various miners have it as a requirement
<Sjors[m]1>
^ fanquake it's our memory that holds the transactions, we'll crash long before the client does.
<Sjors[m]1>
It's because multiprocess holds shared pointers on behalf of the client
<cfields>
Sjors[m]1: while changing the api, maybe fix the missing context param as discussed at coredev? Or has that been addressed somewhere already?
<BlueMatt[m]>
presumably bitcoin core should just discard them after a minute?
<Sjors[m]1>
cfields: yes, I have a PR with a bunch of small breaking changes that include the context fix.
<cfields>
Sjors[m]1: ack, thanks.
<BlueMatt[m]>
and if it's only generating new templates once every NN seconds at max that would trivially remove the issue?
<Sjors[m]1>
cfields: but the changes I listed above are non-breaking
<cfields>
👍
<fanquake>
sjors: ok, reading the issue it's not super clear, which direction do you want the memory querying to go?
<Sjors[m]1>
BlueMatt: ideally i'd like the client to decide when to drop things
<BlueMatt[m]>
@sjors:sprovoost.nl that is rather incompatible with the goal of being able to expose this over TCP eventually?
<BlueMatt[m]>
but also I'm not clear on why that's important?
memset has quit [Remote host closed the connection]
<BlueMatt[m]>
like, yea, client deciding when to drop things in the short term seems reasonable
<cfields>
is that a goal??
memset has joined #bitcoin-core-dev
<BlueMatt[m]>
but if a template has been sitting around for several minutes, then it can presumably be dropped
<BlueMatt[m]>
cfields: yes?
<abubakarsadiq>
imo changes to the interface that are beneficial at this stage should go in; we should discuss how not to break compatibility once we expose a stable interface
<sipa>
i wouldn't expect to expose the IPC interface over TCP; it'll incur latency you don't want i think?
<Sjors[m]1>
I don't think the IPC interface needs to be exposed. The Template Provider would.
<sipa>
run an sv2-ipc protocol adapter locally
<BlueMatt[m]>
I do not see why you'd want to run a proxy?
<BlueMatt[m]>
I mean you'll need to run a proxy if bitcoin core doesn't support listening on tcp, but that can be socat
<Sjors[m]1>
And a public facing TP can simply tell clients: don't bother me about templates more than 60 seconds old.
<fanquake>
abubakar: I agree. This is so new, has so few users, and we've given 0 expectations, there shouldn't be an issue making any changes at this point
<BlueMatt[m]>
Sjors[m]1: I guess my question is why do you want to support templates older than a few minutes at all?
<Sjors[m]1>
BlueMatt: maybe because miner ignored the fee bump templates?
<BlueMatt[m]>
I mean after ten-ish minutes the templates are pretty likely to be useless :p
<sipa>
BlueMatt[m]: my understanding is that they need to be kept for the case a hasher still finds a nonce for one
<BlueMatt[m]>
right, but my point is only that after some reasonably long duration it seems like an incredibly safe assumption that no one is still mining on it
<BlueMatt[m]>
and thus they wont find a nonce for it
<BlueMatt[m]>
maybe a related question - is ten minutes sufficient?
<Sjors[m]1>
My point is that stratum v2 software knows what miners need, and if it thinks it's safe to drop a template then it can do that. On our side we don't have to reason about it.
<BlueMatt[m]>
like if all 10-minute-old templates are discarded even if the client hasn't marked it explicitly safe to discard does that sufficiently limit memory to not be a concern (I imagine it would?)
<Sjors[m]1>
Currently the Template Provider holds on to all templates until 30 seconds after a tip change.
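(A minimal sketch, in Python and with hypothetical names, of the two retention policies under discussion: the current rule of dropping templates 30 seconds after a tip change, plus BlueMatt's proposed unconditional max-age cutoff. This is not Bitcoin Core's actual Template Provider code.)

```python
import time

TIP_CHANGE_GRACE_SECS = 30    # current behaviour: keep old-tip templates briefly
MAX_TEMPLATE_AGE_SECS = 600   # proposed: assume nobody still mines 10-min-old work

def prune_templates(templates, current_tip, last_tip_change, now=None):
    """templates: dict of template_id -> (created_at, tip_hash)."""
    now = now if now is not None else time.time()
    kept = {}
    for tid, (created_at, tip_hash) in templates.items():
        stale_tip = tip_hash != current_tip and now - last_tip_change > TIP_CHANGE_GRACE_SECS
        too_old = now - created_at > MAX_TEMPLATE_AGE_SECS
        if not (stale_tip or too_old):
            kept[tid] = (created_at, tip_hash)
    return kept
```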
<BlueMatt[m]>
even that presumably suffices to not run out of memory?
<Sjors[m]1>
If we push a new template every second for 2 hours that's 28 GB assuming maximum mempool churn.
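(Back-of-envelope check of that figure, assuming each template is an independent ~4 MB block with nothing shared between templates, i.e. maximum mempool churn:)

```python
templates_per_sec = 1
duration_secs = 2 * 60 * 60        # 2 hours
template_size_bytes = 4_000_000    # ~4 MB, near the 4M weight-unit block limit

total_bytes = templates_per_sec * duration_secs * template_size_bytes
print(f"{total_bytes / 1e9:.1f} GB")   # -> 28.8 GB
```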
<BlueMatt[m]>
as long as bitcoin core is deciding when to build a new template, and its only doing so every 30 seconds, worst-case you have like 40 templates?
<BlueMatt[m]>
right, but presumably we aren't sending a new template every second ever?
<BlueMatt[m]>
bitcoin core should refuse to do that
<Sjors[m]1>
We do if the client asks
<BlueMatt[m]>
we should stop doing that :)
<Sjors[m]1>
And I also think we shouldn't decide what the safe minimum frequency is.
<BlueMatt[m]>
well, if a client asks we can send the one we just generated :)
<sipa>
the IPC interface is trivially DoS vulnerable anyway - it inherently assumes a client that isn't going to DoS it
<abubakarsadiq>
fanquake: we would want to handle memory internally because currently we hold on to the generated block templates; waitNext creates a template every second when there is a fee increase. There is a scenario where lots of inflow improves mempool fees continuously, so you generate lots of templates and keep them in memory. If you don't manage that and don't have a limit, it's a potential problem I think
<Sjors[m]1>
Currently the IPC takes an argument which sets that minimum update frequency (in seconds).
<BlueMatt[m]>
sipa: if you assume the protocol is more strictly checked so that the messages aren't invalid is that still true?
<BlueMatt[m]>
Sjors[m]1: why? shouldn't that be something bitcoin core decides?
<sipa>
BlueMatt[m]: yes
<BlueMatt[m]>
bitcoin core knows what the feerate difference is between different templates
<Sjors[m]1>
BlueMatt[m]: No because we can do it 10x per second but node software or ASIC firmware might crash if it can't keep up.
<BlueMatt[m]>
so it seems like it should always be bitcoin core deciding when to issue new templates (maybe based on config, but certainly not based on when clients request)
<Sjors[m]1>
We send a new template when fees rise by N sats, but only if it's been at least M seconds.
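(A sketch of that update rule, with hypothetical names rather than the real IPC parameters: emit a new template only when fees have risen by at least N sats since the last one and at least M seconds have elapsed.)

```python
import time

def should_send_template(last_sent_at, last_fees_sats, new_fees_sats,
                         fee_delta_sats, min_interval_secs, now=None):
    now = now if now is not None else time.time()
    fees_rose = new_fees_sats - last_fees_sats >= fee_delta_sats
    waited = now - last_sent_at >= min_interval_secs
    return fees_rose and waited
```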
<BlueMatt[m]>
right, but presumably if the client is limited to N updates per second on their asic they will simply not push the new work and ignore it?
<BlueMatt[m]>
that's what ultimately will happen in the sv1/2 client on the device anyway
<BlueMatt[m]>
like even if the translator feeds templates in 100 times a second the firmware will just not switch fast (unless you set the flush flag, ofc)
<BlueMatt[m]>
anyway, all I'm saying is fewer config knobs for the template IPC client means less effort thinking about all this stuff, and I'm quite skeptical the config knobs are useful :)
<Sjors[m]1>
Those are all things miners / pools should be experts in, and we have no idea what values to pick.
<BlueMatt[m]>
sipa: I'll follow up and ask why later, but that seems like something we should resolve (at least if we limit the interface to just the mining IPC and not the various other IPC things)
<Sjors[m]1>
The Template Provider can hardcode values.
<BlueMatt[m]>
hmm? I'm really unclear on which of the above are an issue?
<BlueMatt[m]>
my point is that it doesn't matter how fast bitcoin core generates templates, the mining device will rate-limit it itself
<sipa>
so it should tell bitcoin core / template provider / something sv2, what that rate limit is?
<BlueMatt[m]>
so istm bitcoin core should be configurable (via bitcoin.conf) to generate templates based on fee increases or time as needed, with reasonable defaults (seems like that's already a thing?) and from there we don't need to worry about individual clients
<Sjors[m]1>
Bitcoin Core can ship only one release per 6 months. I'd rather have a few too many IPC parameters that the TP doesn't use, than have to argue about changing something in the codebase because a new miner suddenly wants N or M to be different.
<BlueMatt[m]>
sipa: no, I dont think so because you may have 1000 devices with different limits hanging off of one template generation coming out of bitcoin core
<BlueMatt[m]>
the templates themselves shouldn't be rate-limited coming out of bitcoin core based on hardware limitations
<sipa>
i don't follow at all anymore
<BlueMatt[m]>
Sjors[m]1: my related comment was that the config knobs for this should be in bitcoin.conf, not the IPC client :p
<sipa>
and i don't think this is a good use of this meeting time
<BlueMatt[m]>
okay, well clearly this needs more discussion, maybe we schedule a followup call
<BlueMatt[m]>
so we can do it live rather than over irc
<Sjors[m]1>
Right, seems better to discuss in one of the above issues.
<abubakarsadiq>
moving on
<abubakarsadiq>
#topic Net Split WG Update (cfields)
<cfields>
Very sorry to all who are waiting... still nothing to report. Working through a few other things before shifting my focus 99% to this.
<abubakarsadiq>
#topic asmap update (fjahr)
<fjahr>
Hi! The original embedding PR has been split up into a triplet. The first part is already merged by now. The second part contains necessary refactorings and extensive added documentation in the implementation, which was briefly discussed at CoreDev and should hopefully help get people through this toughest part of the review. The third adds the build stuff (and that is the PR that used to be the main thing).
<fjahr>
There have also been some spin-off discussions on how the -asmap arg should work and this resulted in two separate PRs that are currently open. All this can be found through the freshly rebooted tracking issue: #33879.
<fjahr>
Special thanks to hodlinator who has given very valuable feedback in his detailed reviews in the last two weeks ❤️
<fjahr>
That’s it unless there are questions.
<fanquake>
fjahr: Wanted to follow up with one of mine
<sipa>
fjahr: i have been trying very hard to refrain from suggesting yet another "how -asmap argument should work", because i really don't care and want people to agree on something :p
<fjahr>
hehe, yeah, seems like everyone has their favorite -asmap :)
<fanquake>
That was my question in regards to how would someone build from source, without having to take some binary blob as an input
<sipa>
it's time sensitive, so if you repeat the gathering process at a different time, you'll get a different result
<fanquake>
Yes, but is the raw data used to create those blobs saved somewhere?
<BlueMatt[m]>
best you can do is re-run it and do some kind of diff telling you total IPs different
<fjahr>
The runs we do with multiple people are coordinated in issues such as this: https://github.com/asmap/asmap-data/issues/34 If there is agreement between enough people about the included data, that is shared there.
<sipa>
BlueMatt[m]: we have a diff tool
stringintech has quit [Quit: Client closed]
<fanquake>
Otherwise we are going to end up in a situation where you can't actually build a release bin from source, without taking some binary blob
<fanquake>
which some group, at some time, generated
<fanquake>
it'd be nice if that didn't become mandatory
<sipa>
so that would imply storing all the data fetched by kartograf in the repo?
<fanquake>
I assume so. Asking somewhat in the context of whether we have a build option to disable this feature. I imagine we might want that, if someone wants to be able to compile without having to use this data, which they can't actually recreate
<fjahr>
The raw data that is downloaded by each participant could be shared somewhere too, this is about 2G and it will be slightly different for each participant, so we would again have to decide who should upload it and where.
<fjahr>
It's a snapshot of the internet routing table; I think to some degree we will always be working with data that some group at some point generated.
<fjahr>
The process we are doing is trying to match the level of scrutiny of a core PR.
<fanquake>
Sure, I am just trying to keep things trust minimized, and introducing new blobs, which can't be recreated after the fact, is moving in the other direction
<sipa>
i think that's inevitable because it's ultimately based on non-verifiable data
<sipa>
even if we save the source data gathered during the process, to enable re-creating the asmap.dat deterministically from that data, it's just moving the trust question elsewhere: was this dump created honestly?
<vasild>
is the dump easier to inspect than asmap.dat?
<darosior>
Could it be something that is shipped separately from the binary?
<fjahr>
We can inspect the data and do checks to make sure the data is reasonably realistic, but fully validating it would be infeasible
<sipa>
i doubt it's easier
<vasild>
if I do fetch myself and create my own asmap.dat from my dump, can I diff it against the shipped asmap.dat with the diff tools or do they run on the dumps?
<fanquake>
Yea, I agree, but having the data at least lets anyone who wants to avoid taking a blob re-create it themselves. Maybe if we have more of the group involved in the creation of these inputs, that's also better
<fjahr>
A manipulated map where all peers are in one bucket is easy to detect for example
<sipa>
fanquake: but in what way is the kartograf fetched data dump different from "have to take a blob", it's still a blob you have to trust, just a much bigger one, in multiple formats, with a ton of information that's irrelevant for us
<TheCharlatan>
darosior I don't think that would significantly improve things.
<l0rinc>
fjahr: are there automatic (statistical?) validations for that?
nanotube has quit [Ping timeout: 244 seconds]
<instagibbs>
sipa having a regular process with more than one person is good to stave off process rot, if not a security enhancement
<sipa>
l0rinc: we report the asn variety in addrman, according to the provided asmap data
<sipa>
instagibbs: there is
<fjahr>
You mean the health check? Yeah, that's information that is printed, though there is no threshold where it sets off an alarm or so
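(Illustrative sketch of such a health check, with lookup() as a hypothetical stand-in for a real asmap decoder: a manipulated map that funnels all peers into one bucket, as fjahr mentions below, would show up as a tiny distinct-ASN count.)

```python
from collections import Counter

def asn_diversity(addrs, lookup):
    """addrs: iterable of IPs; lookup: addr -> ASN (0 if unmapped)."""
    counts = Counter(lookup(a) for a in addrs)
    counts.pop(0, None)  # ignore unmapped addresses
    return len(counts), counts.most_common(5)
```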
<fanquake>
sipa: sure, but if everyone who signed off on the smaller blob also dumps their data, I can at least recreate the thing they are all attesting to, using the same tools
<instagibbs>
sipa 👍 (it tried to have me pick ship emoji, telling)
<darosior>
TheCharlatan: why? It's separating concerns. Keep the binary entirely reproducible. Have the semi-reproducible data alongside, which can be discarded if desired (but still shipped and enabled by default).
Emc99 has joined #bitcoin-core-dev
<sipa>
darosior: 2G per participant is a lot of data
<darosior>
sipa: not sure i follow. I'm just asking why not shipping the compressed data alongside the binary. For instance in /usr/share, in the same way we ship the multiprocess binaries in /usr/libexec
<sipa>
darosior: as i understand it, the source data is gigabytes per participant, and it won't be the same for everyone - it's just the resulting asmap data that doesn't change
<sipa>
(mostly)
<darosior>
Yes i'm talking about the data that would be embedded in the binary
zeropoint has joined #bitcoin-core-dev
<TheCharlatan>
darosior, not sure that is really relevant, I would treat this similarly to any other chainparams data. But thinking aloud here, it might still be nice to do that in case people would like to update it without having to either change their configs, or their binary.
<sipa>
darosior: the asmap.dat file? you can just extract it from bitcoind (i think there was a plan of adding an RPC to dump it out)
<sipa>
i think the rest of the discussion is here about the source material from which the asmap.dat file is built
<sipa>
which is routing table dumps
<vasild>
is it easier to compare two routing table dumps VS compare two asmap.dat?
<vasild>
compare = see how many % difference there is
<fjahr>
the two asmaps; the dump has even more data we don't care about, so we don't need to look at that because we don't use it anyway
<fjahr>
The tracking issue could also be a venue for discussion, fwiw.
<vasild>
ok, so if I do not trust the binary blob asmap.dat and if I want to "verify" it, I can generate my own and diff it and if the diff % is below some number, then it is ok.
<BlueMatt[m]>
ha cool my flight got diverted and it reset the wifi and made me buy it again, sorry if some messages got delivered late
<fjahr>
vasild: maybe :) a low diff is a good indication there is nothing completely wrong, but there could still be manipulation which targets a specific subset of nodes. So we need to look at coverage of nodes and check that their asn distribution is still feasible at least.
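(A sketch of vasild's verification procedure with hypothetical helpers: map the same address sample through your freshly built asmap and the shipped one, and report the percentage that land in different ASNs. Per fjahr's caveat, a low percentage is necessary but not sufficient; targeted manipulation of a small subset could still hide inside it.)

```python
def asmap_diff_pct(addrs, lookup_mine, lookup_shipped):
    """Percentage of sampled addresses whose ASN differs between two asmaps."""
    differing = sum(1 for a in addrs if lookup_mine(a) != lookup_shipped(a))
    return 100.0 * differing / len(addrs)

# e.g. treat the shipped asmap.dat as plausible if the diff is below some
# threshold, while still checking node coverage and the ASN distribution.
```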
<vasild>
fjahr: have a built in asmap.dat diff tool which does that against the own addrman ;)
<fjahr>
Happy to answer more questions after the meeting or in the tracking issue or whereever
<abubakarsadiq>
novo__: mentioned to me that he has no update this week on silent payments wg
<abubakarsadiq>
with that
<abubakarsadiq>
#endmeeting
<corebot>
abubakarsadiq: Meeting ended at 2025-11-20T16:54+0000
<sipa>
vasild: i think that'd be a cool idea, we could have an RPC which you provide a prospective asmap.dat file, and it gives you a diff, restricted to your own addrman contents, between the loaded asmap data and the one you provide
<vasild>
yes
enochazariah81 has quit [Quit: Client closed]
enochazariah has joined #bitcoin-core-dev
<fjahr>
We have a tool that lets you do that with a list of addresses that you export and two asmaps, it's just not an RPC
<fjahr>
asmap_tool.py diff_addrs, unless I misunderstand what you are thinking of
enochazariah has quit [Client Quit]
Emc99 has quit [Quit: Client closed]
enochazariah has joined #bitcoin-core-dev
<vasild>
"list of addresses that you export" -- that would be your entire addrman, right?
Emc99 has joined #bitcoin-core-dev
<instagibbs>
was hoping we'd get the cluster mempool PR in soon; various other testing/behavior improvements are in a holding pattern to not force rebase. gentle prod
<fjahr>
vasild: that's what I have been doing, yepp
<sipa>
fanquake: the bitcoin core build is and remains reproducible, it just uses the asmap.dat file in our repo, added in the commit darosior links to
<fanquake>
It will be reproducible, but we should aim to minimize trust in binary inputs as much as possible, and this is introducing a situation where you can't actually recreate the inputs
<sipa>
fanquake: right
stringintech has joined #bitcoin-core-dev
<sipa>
my point is that this is an unsolvable problem, because no matter how far you go back, and include the source material instead of the result, you end up with binary blobs you need to trust are correct
<fjahr>
But this is fundamentally the same with the seeds, right? The main difference is that they are easier to reason about.
<sipa>
indeed
<fanquake>
A mitigation is to ensure that as many contributors are generating this data, and attesting to it, as possible
<sipa>
fanquake: absolutely!
eugenesiegel has quit [Quit: Client closed]
<TheCharlatan>
could commit to a hard threshold of individuals that have to contribute it.
<sipa>
fjahr: how do i subscribe to notifications for the next collaborative build?
<fjahr>
I agree, that would be great!
<darosior>
It's also reassuring that malicious malleations would be obvious, as fjahr underlined during the meeting
<fanquake>
With at least some % crossover of active Guix builders
<sipa>
i've wanted to for a while, but i always just hear about them after the fact
<fanquake>
(or just active Core contributors)
<darosior>
fanquake: it seems to be more the realm of Core contributors, if we treat it as "source"?
<darosior>
Guix builders are supposed to just take whatever source Core contributors vouched for and ensure it matches the binary output?
<fjahr>
I guess subscribe to https://github.com/asmap/asmap-data/? We have been tagging people who have participated in the past in the issue, and we are experimenting with a team to notify people more easily because github notifications are terrible. I can tag you and announce the next one here as well.
<fjahr>
sipa: ^
<fanquake>
darosior: possibly, I just think it's more likely that someone regularly guix building, will be aware, and able to contribute to doing another process
<darosior>
Yeah fair
<sipa>
right, guix building is attesting to the compilation process from source to executable
<sipa>
asmap building is attesting to the fetching of route tables at a particular point in time, ultimately (and for pragmatic reasons, the conversion of it to asmap.dat too, because that makes the attestation much more exact and small)
tarotfied has quit [Ping timeout: 264 seconds]
dviola has quit [Ping timeout: 256 seconds]
justache- has joined #bitcoin-core-dev
jerryf_ has joined #bitcoin-core-dev
justache has quit [Ping timeout: 264 seconds]
f321x has quit [Quit: f321x]
enochazariah has quit [Quit: Client closed]
<sipa>
if only there was some globally trusted Master of the Internet, which published certified routing tables :p
<fjahr>
on a blockchain!
phantomcircuit_ has joined #bitcoin-core-dev
tarotfied has joined #bitcoin-core-dev
diego has joined #bitcoin-core-dev
<darosior>
:)
<instagibbs>
on blockchain* fjahr
diego is now known as Guest6935
zeropoint has quit [Ping timeout: 264 seconds]
<darosior>
on The Blockchain
phantomcircuit has quit [Ping timeout: 264 seconds]
jerryf_ has quit [Remote host closed the connection]
jerryf has joined #bitcoin-core-dev
cotsuka has quit [Remote host closed the connection]
cotsuka has joined #bitcoin-core-dev
PaperSword has quit [Quit: PaperSword]
PaperSword has joined #bitcoin-core-dev
<PaperSword>
Is there any reason why Bitcoin Core does not have an RPC to view debug.log or logging of any kind?
l0rinc has quit [Quit: l0rinc]
<bitcoin-git>
[bitcoin] Sjors closed pull request #33890: mining: add requestedOutputs field, e.g. for merged mining (master...2025/11/requested-outputs) https://github.com/bitcoin/bitcoin/pull/33890
memset has quit [Remote host closed the connection]
memset has joined #bitcoin-core-dev
eugenesiegel has quit [Quit: Client closed]
<sipa>
PaperSword: "nobody added one"... but i think part of why not is probably a long-standing desire not to put functionality in RPCs that can be implemented outside Bitcoin Core entirely
<PaperSword>
Okay here is my reasoning.
<PaperSword>
Let's say I have a node where I do not have user access to the file system (docker) but want to introspect poor block acceptance performance.
<PaperSword>
It would be very nice to call an RPC that would dump my logs
justache- is now known as justache
<sliv3r__>
PaperSword: Why would you not have access to the docker file system?
enochazariah has quit [Quit: Client closed]
l0rinc has joined #bitcoin-core-dev
Talkless has quit [Quit: Konversation terminated!]
<instagibbs>
printtoconsole=1, docker logs?
<laanwj>
it's just a text file, there's tons of browsers and analyzers for that. also the idea has been floated in the past to store a circular buffer of log messages in memory which could be queried, this could also give access to messages that are too low prio to get written to disk (for more detailed troubleshooting in the case of a problem). but it's never been implemented and i'm not sure the complexity is worth it
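(A minimal sketch of that floated, never-implemented idea: keep the last N log lines, including ones below the disk-logging priority, in an in-memory ring buffer that an RPC could later return. All names here are hypothetical.)

```python
from collections import deque

class LogRingBuffer:
    def __init__(self, capacity=10_000):
        self.lines = deque(maxlen=capacity)  # oldest lines drop automatically

    def log(self, level, msg):
        self.lines.append((level, msg))

    def query(self, min_level=0):
        return [(lvl, m) for lvl, m in self.lines if lvl >= min_level]
```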
jerryf has quit [Remote host closed the connection]
jerryf has joined #bitcoin-core-dev
<bitcoin-git>
[bitcoin] willcl-ark reopened pull request #33514: Clear out space on CentOS, depends, gui GHA job (master...centos-space-fix) https://github.com/bitcoin/bitcoin/pull/33514
<bitcoin-git>
[bitcoin] maflcko opened pull request #33919: ci: Run GUI unit tests in cross-Windows task (master...2511-ci-win-cross-gui-unit-test) https://github.com/bitcoin/bitcoin/pull/33919
<PaperSword>
instagibbs: I am not a sudo user on this host and only have access via RPC username:pass
<fanquake>
PaperSword: it sounds like the admin should surface this for you then. Assuming they want to make it available
<corebot>
https://github.com/bitcoin/bitcoin/issues/32274 | [RFC] What security expectations does/should the RPC server have from credentialed RPC clients? · Issue #32274 · bitcoin/bitcoin · GitHub
<dzxzg>
I can't think of anything, but there might be a way to trick a node into spitting out its debug.log over rpc.
<PaperSword>
Even a ZMQ stream would be very useful
<PaperSword>
@dzxzg, I don't seem to be making any huge mistakes in terms of the security recommendations.
<PaperSword>
ZMQ aggregation would also be nice for setups with multiple nodes, like a pool, to have a central monitoring service
choochaa6 has joined #bitcoin-core-dev
choochaa has quit [Remote host closed the connection]
choochaa6 is now known as choochaa
<dzxzg>
I just meant that you should assume that a remote RPC user has the same permissions as `bitcoind` on the machine.
memset has quit [Remote host closed the connection]
memset has joined #bitcoin-core-dev
memset has quit [Remote host closed the connection]
memset has joined #bitcoin-core-dev
bugs_ has quit [Quit: Leaving]
memset has quit [Remote host closed the connection]
memset has joined #bitcoin-core-dev
<dzxzg>
maybe I don't understand the scenario well enough, but given that, why not provide ssh access to the user that's running bitcoind?