< fanquake>
vasild: I can restart it, but looks like you'll want to browse those logs
< * vasild>
looking...
< vasild>
fanquake: https://travis-ci.org/github/bitcoin/bitcoin/builds/689316499 that is the result from the initial run after PR was opened. Afterwards I push-f'ed 2 times, hopefully resolving those failures (first push -f) and rebasing (second push -f)
< fanquake>
vasild: ok. I'll restart that build and see what happens, and if it re-connects to GH
< vasild>
It is not very obvious but the travis job says "Commit d48ece5" and when I click on it I see on github "Merge 89d346a into 448bdff". 89d346a was the tip when the PR was opened
< fanquake>
Maybe it got caught out somehow between the two force pushes
< fanquake>
You should probably just push again for good measure
< vasild>
yes, I will (needlessly) rebase now and push again
< vasild>
hopefully nobody started reviewing yet :)
< vasild>
So, at the time I pushed some fixes to the PR, the master branch had advanced in such a way that it would conflict. Makes sense that CI stumbled, because it tries to merge the PR into master and build/test that.
< vasild>
And maybe the second push, which rebased on latest master (resolving conflicts), was missed because it came too soon.
< wumpus>
promag: I don't think rounding it makes sense; despite looking silly, it's simply a valid JSON number format, and JSON is an interchange format, not a presentation format
< wumpus>
if you want it to look nicer in some monitoring program, rounding it client-side is the way to go
< wumpus>
should we do a meeting today?
< wumpus>
(I don't mind, but at least here it's officially a free day due to Christian holiday)
< fanquake>
wumpus: I will be sleeping either way heh
< vasild>
Has anybody tried to map code coverage reports with lines modified by a patch? To generate a report like "This PR modified 10 lines and from those 8 are covered and 2 are not".
< wumpus>
vasild: I don't think I've ever heard that idea before, it's kind of interesting!
< vasild>
wumpus: it would help assess how much of the modified code is covered
< vasild>
=> to write tests that cover the modifications
< vasild>
jonatack: I did not have anything in mind, other than I need that info to extend the coverage on a PR :)
< vasild>
Can coveralls.io do that?
< MarcoFalke>
[15:13] <achow101> how do I run the travis lsan build locally?
< MarcoFalke>
See ci/Readme.md
< jonatack>
vasild: it should, see https://coveralls.io/features "Line by line coverage: Quickly browse through individual files that have changed in a new commit, and see exactly what changed in the build coverage line by line."
< MarcoFalke>
vasild: DrahtBot does that, but it is not yet public
< jonatack>
vasild: but ISTM that MarcoFalke has that already with his online coverage pages?
< jonatack>
MarcoFalke: oh ok, not surprised
< MarcoFalke>
Though, it is currently blocked on #[15:13] <achow101> how do I run the travis lsan build locally?
< lucaferr>
I'm trying to understand Compact Filters. In blockfilters.json are "[Prev Output Scripts for Block]" each supposed to match with the basic filter provided? I can't get them to match, although outputs from the same block do match the filter.
< MarcoFalke>
(wrong clipboard)
< vasild>
I think coveralls.io can do full report on a branch and then another full report on branch+commit.
< MarcoFalke>
You will have to do the diff manually for now
< jonatack>
vasild: it did for my use, it wasn't c++ but it likely can, yes
< MarcoFalke>
But I can tell DrahtBot to run on a pull to generate the raw data
< vasild>
"You will have to do the diff manually for now" -- so I have a full report on branch that is 100k+ lines, another full report on branch+commitX that is 100k+ lines and commitX that modifies e.g. 1000 lines.
< vasild>
I am now trying to do something locally, if that works, then it should be possible to teach some CI system to do it too (e.g. coveralls.io or another one).
< MarcoFalke>
Jup :( . Not sure how to solve that. But if you only modify net_processing, for example, you can probably discard coverage diffs in init.cpp or so
< MarcoFalke>
vasild: Nice. Let me know if you make progress :)
< vasild>
actually the full report on branch (before commitX) is not needed. My idea is to strip the report from branch+commitX to only the lines modified by commitX
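The idea vasild sketches here — intersecting an lcov report with the lines a commit touches — is mechanical enough to script. A minimal illustration (not a real tool; it assumes the diff's `b/`-paths and the tracefile's `SF:` paths match, which in practice may need normalizing between absolute and relative paths):

```python
# Intersect an lcov tracefile with the lines a commit modifies.
# Inputs: output of `git diff -U0 commitX^ commitX` and a coverage.info
# from `lcov --capture`. Illustrative sketch only.
import re
from collections import defaultdict

def changed_lines(diff_text):
    """Map file -> set of line numbers added or modified by the diff."""
    changed = defaultdict(set)
    current = None
    for line in diff_text.splitlines():
        if line.startswith('+++ b/'):
            current = line[len('+++ b/'):]
        elif line.startswith('@@') and current:
            # hunk header: @@ -old_start,old_count +new_start,new_count @@
            m = re.search(r'\+(\d+)(?:,(\d+))?', line)
            start, count = int(m.group(1)), int(m.group(2) or 1)
            changed[current].update(range(start, start + count))
    return changed

def coverage(info_text):
    """Map file -> {line: hit_count} from lcov 'DA:line,hits' records."""
    cov = defaultdict(dict)
    current = None
    for line in info_text.splitlines():
        if line.startswith('SF:'):
            current = line[len('SF:'):]
        elif line.startswith('DA:') and current:
            lineno, hits = line[len('DA:'):].split(',')[:2]
            cov[current][int(lineno)] = int(hits)
    return cov

def patch_coverage(diff_text, info_text):
    """Return (covered, uncovered) counts over the patch's changed lines."""
    changed, cov = changed_lines(diff_text), coverage(info_text)
    covered = uncovered = 0
    for f, lines in changed.items():
        for ln in lines:
            hits = cov.get(f, {}).get(ln)
            if hits is None:
                continue  # line not instrumented (blank, comment, macro)
            if hits > 0:
                covered += 1
            else:
                uncovered += 1
    return covered, uncovered
```

This would yield exactly the "10 lines modified, 8 covered, 2 not" style of report, with the caveat MarcoFalke raises next: it ignores coverage side effects outside the changed lines.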
< MarcoFalke>
That would exclude all side effects, no?
< vasild>
hmm :/
< MarcoFalke>
For example a commit that restructures the validation interface might have a coverage effect on the wallet or the txindex
< vasild>
right
< MarcoFalke>
If your goal is to show coverage of the lines you modify in a patch (and they happen to have debug log statements, or otherwise show effects that can be tested), I recommend just writing a test to prove coverage
< vasild>
that is my goal, yes
< MarcoFalke>
but longer term we really need a way to automate this
< MarcoFalke>
One option could be to build a deterministic CPU to kill all non-determinism in the unit tests. Not sure if that is possible with qemu or rr
< vasild>
Just run the tests on Windows where the OS random number generator is so flawed that it returns always the same numbers :-D
< wumpus>
what is non-deterministic about a CPU? (besides race conditions in threading)
< wumpus>
for random instructions it should be as easy as 'don't use them', i guess, only use deterministic randomness in tests
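The "only use deterministic randomness in tests" point is standard practice: seed a local generator so every run sees the same sequence. A minimal Python illustration:

```python
# Minimal illustration of deterministic randomness in tests:
# a locally seeded generator reproduces the same sequence on every run.
import random

def deterministic_samples(seed, n):
    rng = random.Random(seed)  # local RNG, no shared global state
    return [rng.randrange(1 << 32) for _ in range(n)]

# same seed -> byte-for-byte identical "random" test data on every run
assert deterministic_samples(42, 5) == deterministic_samples(42, 5)
```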
< vasild>
yeah, random numbers are easy to deal with
< wumpus>
the thing is, even if the CPU is deterministic, things such as I/O won't be
< wumpus>
even RAM might not be 100% deterministic (regarding timing)
< MarcoFalke>
Ok, then we also need deterministic RAM :)
< wumpus>
heh
< lucaferr>
Would it be possible to get the expanded uint64s of the basic filter field, as well as the siphashed/fastranged values of the prev output scripts, in blockfilters.json?
< MarcoFalke>
lucaferr: Not without modifying the rest interface. But why would you want to do that?
< lucaferr>
I'm just trying to build some code around it to better understand what is going on. My understanding is that all the prev output scripts should be encoded in the basic filter? However, I cannot get them to match the basic filter. Am I perhaps wrong in assuming that? It would be nice to see the decoded uint64s from the golomb-encoded bitstream, for verification...
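For reference, the BIP 158 value lucaferr is trying to reproduce comes from SipHash-2-4 keyed with the first 16 bytes of the block hash, then mapped into [0, N*M) with a "fastrange" multiply (M = 784931 for basic filters). A common gotcha is byte order: the key is the block hash in internal little-endian order, i.e. the reverse of the hex string the RPC displays. A from-scratch sketch (not Bitcoin Core's code):

```python
# Sketch of BIP 158's item -> filter-value mapping (not Bitcoin Core code).
# Key: first 16 bytes of the block hash, in internal little-endian order.
# Map: v = (siphash(item) * N * M) >> 64, with M = 784931 for basic filters.
M64 = (1 << 64) - 1

def _rotl(x, b):
    return ((x << b) | (x >> (64 - b))) & M64

def siphash24(k0, k1, data):
    """SipHash-2-4 of `data` under 128-bit key (k0, k1), as a uint64."""
    v0, v1 = 0x736f6d6570736575 ^ k0, 0x646f72616e646f6d ^ k1
    v2, v3 = 0x6c7967656e657261 ^ k0, 0x7465646279746573 ^ k1

    def rounds(n):
        nonlocal v0, v1, v2, v3
        for _ in range(n):
            v0 = (v0 + v1) & M64; v1 = _rotl(v1, 13) ^ v0; v0 = _rotl(v0, 32)
            v2 = (v2 + v3) & M64; v3 = _rotl(v3, 16) ^ v2
            v0 = (v0 + v3) & M64; v3 = _rotl(v3, 21) ^ v0
            v2 = (v2 + v1) & M64; v1 = _rotl(v1, 17) ^ v2; v2 = _rotl(v2, 32)

    i = 0
    while i + 8 <= len(data):
        m = int.from_bytes(data[i:i + 8], 'little')
        v3 ^= m; rounds(2); v0 ^= m
        i += 8
    m = ((len(data) & 0xFF) << 56) | int.from_bytes(data[i:], 'little')
    v3 ^= m; rounds(2); v0 ^= m
    v2 ^= 0xFF; rounds(4)
    return v0 ^ v1 ^ v2 ^ v3

def bip158_value(block_hash_le, item, n_elements, m=784931):
    """Map one filter item (e.g. a prev output script) to its uint64 value."""
    k0 = int.from_bytes(block_hash_le[0:8], 'little')
    k1 = int.from_bytes(block_hash_le[8:16], 'little')
    h = siphash24(k0, k1, item)
    return (h * (n_elements * m)) >> 64  # "fastrange": uniform in [0, N*M)
```

The values produced this way are what the Golomb-Rice bitstream encodes (as deltas of the sorted set), so a mismatch usually means either the key byte order or the item set (spent scriptPubKeys plus outputs, with some exclusions) is wrong.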
< jnewbery>
I think it may be ready for merge (4 ACKs). You thought the last one might have been merged too quickly, so I want to make sure that there are no concerns about merging this one
< theStack>
that's great news! i wondered recently what happened to bitcoin-acks.com
< theStack>
i think jonatack proposed to run it on bitcoin.org recently if i remember correctly; any plans about that? or is the idea that people spin it up locally? (should also be quite easy thanks to docker)
< wumpus>
pierre_rochard: yw, thanks for creating it in the first place!
< wumpus>
theStack: i think it should be back up
< theStack>
wumpus: ah, indeed! i just mistyped... it's without the dash, i.e. bitcoinacks.com
< wumpus>
right the repository has a dash but the site doesn't :)
< luke-jr>
theStack: you mean bitcoincore.org?
< wumpus>
probably; but bitcoincore.org itself only hosts static content, it would be possible to host it as a subdomain but the original domain is better
< jonatack>
theStack: luke-jr: i proposed to bring bitcoinacks, with pierre_rochard's permission, into https://github.com/bitcoin-core/ to see more attention
< wumpus>
one proposed meeting topic: alternative transports support (ariard)
< ariard>
yes!
< instagibbs>
hi
< wumpus>
any last minute proposals?
< amiti>
hi
< meshcollider>
hi
< wumpus>
okay, let's start with high priority for review as usual
< wumpus>
#topic High priority for review
< luke-jr>
hi
< aj>
hi
< hebasto>
could #18297 be added to hi-prio?
< gribble>
https://github.com/bitcoin/bitcoin/issues/18297 | build: Use pkg-config in BITCOIN_QT_CONFIGURE for all hosts including Windows by hebasto · Pull Request #18297 · bitcoin/bitcoin · GitHub
< luke-jr>
I guess they can do their own addr-equivalent messages on their own transport
< sipa>
i'm not sure what the best way to support such things is... on the face of it, it could all be done as an external application talking P2P
< sipa>
but there may be easier ways of integrating
< fjahr>
ariard: have users expressed interest in any specific alternative transports? which one would have the most interest in the beginning do you think?
< ariard>
sipa: like a supplemental daemon supporting all drivers ?
< sipa>
fibre-like things are harder, as they need mempool access
< luke-jr>
would be ideal to split out current p2p stuff external someday too
< sipa>
ariard: or one daemon per driver
< wumpus>
what we're not going to do: include it all into the repository, or dynamic library based plugin mechanisms
< sipa>
wumpus: agree
< ariard>
fjahr: yes some LN routing nodes operators would like block redundancy
< wumpus>
everything else is fine
< ariard>
sipa: yes you may have one daemon per driver but ideally you may react on what your learn one driver
< sipa>
i can't parse your sentence
< wumpus>
if it runs in an external process and communicates with bitcoin core that's the right way
< ariard>
wumpus: yes agree doesn't make sense to include it all into the repo
< wumpus>
if it requires a little extra support on our side that's ok
< jb55>
for tor it just connects to a local proxy socket right? isn't this general enough?
< jeremyrubin>
What about making our P2P function that way wumpus?
< wumpus>
jb55: yes, there's a little specific support for incoming connections by creating a hidden service
< ariard>
sipa: let's say I learn I'm currently eclipsed; I send a notification to my LN daemon, and the LN daemon reacts by closing channels and broadcasting through some emergency channel
< sipa>
jb55: that wouldn't work for a satellite connection for example
< jeremyrubin>
I mean it seems like a silly suggestion but if we want fully capable alt-p2ps, then we should be able to dogfood it with our current p2p system
< wumpus>
jeremyrubin: in the long run maybe
< sipa>
jb55: as you can't route a bidirectional byte stream
< jb55>
sipa: satellite would require custom deserialization code as well if I understand correctly
< ariard>
jeremyrubin: ideally, but that's too much careful refactoring
< jeremyrubin>
wumpus: also seems good from a security perspective -- no DMA if you pop the transit layer :)
< jeremyrubin>
I think ryanofsky probably has some insight here
< jeremyrubin>
Don't you have a seperate network process branch right now?
< jeremyrubin>
Or is it just wallet stuff presently
< ariard>
better integration with core would ease deployment, and therefore increase the number of such alternative transports in use
< wumpus>
jeremyrubin: yes, gmaxwell had some ideas in that direction as well, run the whole P2P in a sandboxed process
< sipa>
if one goal is supporting FIBRE in an external process... that's pretty hard; i think the reconstruction code needs low latency access to the mempool etc
< ariard>
and this would increase network security
< ariard>
sipa: fibre is so special, it's not in my integration target
< sipa>
ariard: ok
< ariard>
I'm more interested in headers-over-DNS or domain-fronting or radio integration
< wumpus>
let's aim lower first ...
< jeremyrubin>
sipa: IPC shared memory the mempool for fibre?
< * jeremyrubin>
ducks
< sipa>
that's fair
< wumpus>
it could always be extended later
< wumpus>
to way-out ideas like that
< ariard>
yes I want something simple first, like headers-over-DNS
< jb55>
what's the simplest non-invasive use case?
< sipa>
ariard: headers-over-DNS could even just done with RPC, i think?
< wumpus>
nothing is going to happen at all if that's a requirement for the first version though :)
< sipa>
(a daemon communicating through RPC)
< wumpus>
memmap the mempool lol :)
< ariard>
sipa: yes my concern is if you have to manage one daemon per alternative transport that's a bit cumbersome to deploy
< ariard>
even for power users
< ariard>
I want to explore if we can integrate with multiprocess and just have another new process supporting the drivers
< sipa>
ariard: compared to c-lightning which is already half a dozen daemons? ;)
< ariard>
sipa: I know it doesn't work well for embedded or mobile envs
< wumpus>
why not?
< ariard>
and I think actually blocksat is its own fork of core
< wumpus>
on limited 32-bit platforms it's usually better to have a zillion separate processes than threads within the same process
< wumpus>
e.g. at least they don't have to share an address space
< ariard>
well, actually, with my driver interface design, if you want to fork() behind it, you can
< ryanofsky>
sipa/wumpus is there quick way to say what you don't like about the shared library approach? build complications? memory safety?
< ariard>
it's up to the driver implementation
< wumpus>
ryanofsky: security, limits static linking, and having to provide a binary interface
< wumpus>
also cross-platform stuff
< wumpus>
dynamic libraries are subtly different between windows and linux, enough to make your life a pain
< ariard>
wumpus: on security/fault-tolerance that's why I think a separate process would be better
< wumpus>
yes, it is
< ariard>
contrary to Matt stuff spawning new threads
< wumpus>
separate processes are better for isolation/security
< ariard>
now what should be the threading model of such new process it's another question
< ariard>
wumpus: agree
< ariard>
but current proposal is just to have fetching/broadcast logic in this new process and talk to a driver interface
< ryanofsky>
ariard, i'll take a look at your prs and what the interfaces are. i'd expect they are easy just to get working across processes
< ariard>
fetching logic is operating over abstract driver attributes like fHeaders or fBlocks
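A hypothetical sketch of what "abstract driver attributes" like fHeaders/fBlocks could look like, with the fetching logic adapting its requests to each driver's capabilities (every name below is invented for illustration, not taken from the actual PR):

```python
# Hypothetical driver-capability sketch; all names invented.
from dataclasses import dataclass

@dataclass
class DriverCaps:
    headers: bool          # driver can deliver headers (fHeaders)
    blocks: bool           # driver can deliver full blocks (fBlocks)
    bidirectional: bool    # can we send requests, or is it receive-only?
    query_by_height: bool  # e.g. headers-over-DNS: height -> header

def plan_request(caps, want):
    """Adapt the fetching logic's request to the driver's capabilities."""
    if want == "headers" and caps.headers:
        if not caps.bidirectional:
            return "listen"        # unidirectional feed: just consume it
        return "poll-height" if caps.query_by_height else "getheaders"
    if want == "block" and caps.blocks and caps.bidirectional:
        return "getblock"
    return "unsupported"           # e.g. no GETHEADERS to a one-way radio
```

This captures the "don't send GETHEADERS if it's unidirectional" rule ariard mentions below, though as BlueMatt argues later, real drivers may need an even more free-form interface than fixed capability flags.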
< wumpus>
that's interesting, I'll read the issue more closely
< ryanofsky>
the issue is just performance; it's easier to have great performance if you aren't doing io and just accessing memory
< ariard>
wumpus: yes, I want to rely on driver devs to write their own fetching logic for each transport
< BlueMatt>
ariard: i think it may be useful for you to take a look at all the rust-based drivers that I had written
< BlueMatt>
ariard: I think the current API design in 18988 is much too tied to certain assumptions about what the thing will look like
< wumpus>
bitcoin core is I/O bound but usually only on disk, not on P2P/network
< ariard>
BlueMatt: I know, I need to rework the threading model, but ideally your rust-based drivers should be reused
< BlueMatt>
ariard: I dont think the threading model is the only issue there
< BlueMatt>
i think it needs to be way more free-form
< BlueMatt>
eg the rust stuff was just exposing ProcessNewBlock(Headers) and FindNextBlocksToDownload
< BlueMatt>
as well as an api to get header state
< ariard>
BlueMatt: more asynchronous ?
< BlueMatt>
and leaving it up to the driver what to do with that
< BlueMatt>
cause drivers may have very different capabilities in terms of how to query
< sipa>
i guess that's a question of layering
< ryanofsky>
BlueMatt, is starting with an API and just changing over time hard here? it seems like a brand new API you can declare unstable
< BlueMatt>
eg headers-over-dns may only be able to query by block height, and its a poll, its not a file descriptor or a socket.
< sipa>
it does make sense to have a "i have a bidirection byte stream connection to another node... tell me what to route over it" interface
< BlueMatt>
ryanofsky: I'm suggesting start with less api, and add more over time :)
< sipa>
but it's not the right solution for everything
< BlueMatt>
i dont think its the right solution for...almost anything we'd want to do here?
< BlueMatt>
like, some of the most exciting options would be https domain fronting
< BlueMatt>
and its not a byte stream, its a query interface
< sipa>
what is https domain fronting?
< ariard>
yes that's why fetching logic should adapt its request to driver capabilities
< ariard>
like don't send GETHEADERS if it's unidirectional
< BlueMatt>
sipa: where you hit commoncdn.amazon.com in SNI, but the http host header is magicbitcoinproxy.amazonuser.com
< sipa>
SNI?
< BlueMatt>
sipa: and it routes to your domain. this is a *functional* way to connect to tor through the gfw still today, last i heard, though the common endpoint is slow
< BlueMatt>
sipa: ssl plaintext hostname in the connection setup
< BlueMatt>
ariard: i dont even think 'bidirectional' is a reasonable assumption here - eg radio-based stuff is likely to be unidirectional
< BlueMatt>
(and could be *either* read or write)
< ariard>
yes censorship circumvention is a huge topic in itself, and some of Tor stuff may not fit your threat model
< jnewbery>
if it's possible to unnoob the rest of us in a sentence or two, that might be helpful though
< ariard>
BlueMatt: yes capabilities need refinement
< ryanofsky>
jnewbery, it's just a string passed to disambiguate when you have web servers for different domains on the same ip
< jb55>
jnewbery: its like the http Host header for tls connections
< BlueMatt>
jnewbery: tl;dr: a way to connect to a super common https endpoint (so common that any censors wont block it cause they'd break many websites) and then route your own data over it
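The trick just described can be sketched in a few lines: the TLS SNI (visible to a censor) names a popular fronted host, while the HTTP Host header (inside the encrypted tunnel) names the real backend. Hostnames below are invented, and many CDNs now reject mismatched SNI/Host, so treat this as illustrative:

```python
# Domain-fronting sketch: SNI says one host, Host header says another.
# Hostnames are invented; many CDNs have since closed this off.
import socket
import ssl

FRONT = "commoncdn.example.com"     # what appears in the TLS ClientHello
BACKEND = "magicproxy.example.com"  # what the CDN routes to internally

def build_request(path, backend):
    # The Host header deliberately differs from the hostname we dialed.
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {backend}\r\n"
            f"Connection: close\r\n\r\n").encode()

def fronted_fetch(path="/"):
    ctx = ssl.create_default_context()
    with socket.create_connection((FRONT, 443)) as raw:
        # server_hostname sets the SNI: an observer sees only FRONT.
        with ctx.wrap_socket(raw, server_hostname=FRONT) as tls:
            tls.sendall(build_request(path, BACKEND))
            return tls.recv(65536)
```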
< sipa>
i feel people are trying to explain minor details while i (or some) are missing the big picture
< BlueMatt>
but you magically make the censor think you're connecting to that, but are in fact connecting to a bitcoin core rest endpoint
< sipa>
i hear "something something https bypass GFW", but not what or how that'd be used
< ariard>
sipa: agree, there are a lot of alternative transports; integration with each of them may bring you more security/privacy
< ariard>
therefore making them easier to deploy would be great
< jeremyrubin>
ariard: or less as you add more too
< ryanofsky>
i think big picture is it's harder to censor an ip when it's an amazon ip also hosting things you're not trying to censor
< sipa>
ryanofsky: ok, that's helpful!
< sipa>
why is it unidirectional?
< ariard>
jeremyrubin: there is a risk of privacy leakage, it should be well-documented I agree
< jnewbery>
Very helpful. Thanks ryanofsky jb55 BlueMatt
< ariard>
domain-fronting increases the cost of censorship, because now you have to harm innocent traffic too
< instagibbs>
Signal uses this IIRC
< jb55>
could I use altnet to send a tx via carrier pidgin
< BlueMatt>
its been very effective for signal, in fact
< ariard>
but sipa was right, we miss the bigger picture, I would be glad to explain domain-fronting or how we may incorporate it in core in a pr review club session or other
< instagibbs>
BlueMatt, AWS got mad at them didn't hear the result of it :P
< BlueMatt>
instagibbs: the result was to use azure
< BlueMatt>
who did not get mad
< sipa>
jb55: using pidgin to steganographically hide the traffic is a great idea!
< instagibbs>
Ahhh
< BlueMatt>
(yet)
< wumpus>
hehe
< jb55>
pigeon*
< instagibbs>
But indeed, seems to work inside China quite well considering
< ariard>
yes, if we have such a generic driver framework, driver devs may just focus on improving protocol fingerprint hiding
< BlueMatt>
exactly. but it needs to be super flexible to support all these different crazy ideas
< wumpus>
but not necessarily initially, it just needs to be extendable
< ariard>
I invite anyone to comment on the PR/issues; in the meanwhile I will explore each alt-transport more, to see how generic the framework can be
< ariard>
and come with a cleaner design proposal
< jnewbery>
ariard: what do you need help with? From my perspective, there's still a lot of work to be done internally in Bitcoin Core cleaning up the layer separation between net / net_processing / validation, but I haven't reviewed your branch yet
< * BlueMatt>
notes that, after initialization, before any read()/write()s to the remote host, openssl can be run in a sandboxed process which has no access to any syscalls except read() and write() :)
< sipa>
maybe it's good to make a list of potential alternative transports that are useful, and then see if there's a reasonable subset that can easily be supported
< wumpus>
you don't want to overdesign either, nor make such a large stack of requirements that you give up on the idea
< ariard>
sipa: I've been working on this the last few days, will make it public soon
< ariard>
wumpus: you don't want to overdesign, but let room in the design to step-by-step extend it
< BlueMatt>
jnewbery: these types of things need only very few calls into the rest of core
< BlueMatt>
thats the api that I was exposing to all 4 clients in rust, and was pretty easy to write again. its not a class-based thing, just a bag of functions, though, so not quite what you're going for.
< ariard>
okay I've had a different concern, if such stuff get wider deployement we may have side-effect on the network topology
< BlueMatt>
in other news, these things are back in stock, and can be used to pick up headers most places in new york city: https://unsigned.io/product/rnode/
< ariard>
anyone has opinion on this ?
< BlueMatt>
ariard: as long as its purely additive, who cares! :)
< ariard>
and such side-effect may not be desirable
< jonatack>
ariard: at the risk of stating the obvious (but that is often forgotten in the excitement): restrict scope initially to the smallest possible, minimum viable thing, that can be reviewed and that people can/will actually use on a small scale but eagerly
< ariard>
BlueMatt: right if everything is opt-in I think that's okay
< jeremyrubin>
I don't think it's additive
< BlueMatt>
(one of the key observations made previously is that these types of relays can be bucketed into a few categories: a) privacy preserving ones, often which only provide headers due to bandwidth constraints, and b) not that...)
< ariard>
jonatack: I know review process and getting the first chunks PRs are the hardest steps :)
< jeremyrubin>
This increases partitionability
< jeremyrubin>
but increases availability I think?
< BlueMatt>
in such cases, we can use (a)-category sources to detect when there are blocks we dont have, and turn on the non-privacy-preserving modes in such cases
< BlueMatt>
but maybe only after having the regular p2p code make some new connections
< ariard>
BlueMatt: exactly that's kind of the idea of having a watchdog for this
< BlueMatt>
you dont need anything fancy to do that, though
< BlueMatt>
only a sleep(300) :)
< ariard>
that's hacky
< BlueMatt>
no its not
< BlueMatt>
its actually exactly what you want here
< sipa>
wth are you guys talking about
< BlueMatt>
give net_processing a chance to figure out how to fetch the block if we're missing one
< ariard>
BlueMatt: don't use a privacy-harming communication channel if you don't have to
< BlueMatt>
and if it fails to within 5 minutes, turn on the http client and give up privacy for the block data
< BlueMatt>
right, exactly, thats why you sleep(300) :)
< wumpus>
~5 minutes to go
< jnewbery>
I agree with sipa
< BlueMatt>
sipa: if, eg, you learn a header over radio, but that source doesnt provide blocks, what do you do?
< BlueMatt>
oh, wait, is this still a meeting? lol I'll shut up.
< jnewbery>
perhaps this more speculative discussion is better suited to bitcoin-wizards or similar?
< ariard>
BlueMatt: but no, you may start to be eclipsed after IBD and due to block variance sleep doesn't make sense IMO :p
< sipa>
sleep(300) till the end of the meeting
< wumpus>
jnewbery: yes, though we don't have any other topics queued for today
< ariard>
jnewbery: agree again I invite people to pursue discussion on the issue or PR, it's better suited
< sipa>
BlueMatt: i don't know, why would you do anything?
< sipa>
i feel like i'm missing a lot of context
< wumpus>
anyhow I agree with jonatack: restrict scope initially, don't try to do too many out-there things at once
< wumpus>
though it's good that you're clearly having a lot of ideas
< ariard>
sipa: yes I will try to come with some Q&A-style of doc to inform people better
< wumpus>
but some of us hardly follow :)
< BlueMatt>
ariard: no thats my point - you learn absolutely that there is probably something amiss if you have headers for which you do not have a block, and several of these things are fine from a privacy perspective to learn that. in that case, it makes sense for net_processing to go make some new connections and see if it cant find the block. if it fails to after some time, go query cloudflare.deanonymizingseed.com cause, like, its better
< BlueMatt>
than not getting the block (at least if you're a ln user or so and have it configured to do this), but you first want to give net_processing a chance here. so, essentially, sleep(300) is exactly what you want :)
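The sleep(300) logic BlueMatt describes is essentially a grace-period decision, which can be written as a pure function (all names invented for illustration):

```python
# The sleep(300) fallback as a pure decision function (names invented):
# give net_processing a grace period to fetch a block we only know the
# header of, then fall back to a privacy-costly source.
GRACE_SECONDS = 300  # the "sleep(300)"

def choose_source(header_seen_at, have_block, now, grace=GRACE_SECONDS):
    """Pick a block source given when we first saw the orphan header."""
    if have_block:
        return "done"      # nothing amiss after all
    if now - header_seen_at < grace:
        return "p2p"       # let normal peers (and new connections) try first
    return "fallback"      # e.g. an https endpoint: costs privacy, gets the block
```

Writing it as a decision over timestamps rather than a literal sleep also answers sipa's later question: the "sleep" is really a fixed timeout, not a blocking call in the code.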
< ariard>
wumpus: thanks, I want to spend more time scoping to the minimal step but yet extendable after
< BlueMatt>
wumpus: lol, sorry...I wrote out all 4 of the (headers-over-dns, headers-over-radio, blocks-over-http, secondary-p2p-client) things before, so apologies we're a bit three-steps-ahead here :/
< BlueMatt>
this all was, in fact, the rust branches.
< wumpus>
BlueMatt: yes, I know, it was great! unfortunately the approach to integrate them into bitcoin core didn't work with regard to review and organizationally
< ariard>
BlueMatt: okay, I see your point, but you need to dissociate cleanly 1) anomaly detection and 2) reaction
< ariard>
reaction may happen through net_processing or alt-transports
< BlueMatt>
wumpus: yea, no, not complaining, just providing historical context for folks
< BlueMatt>
ariard: right, kinda, but I think part of my goal at least here is that there should be an absolute minmal amount of common code between the various sources
< wumpus>
wrapping up the meeting
< wumpus>
thanks everyone for the lively discussion
< ariard>
BlueMatt: ideally yes, but I fear our internal components aren't that clean yet
< BlueMatt>
ariard: cause if there's a bug in anomaly-detection (which we've had 20 issues with in the past, turns out it's hard to get right), then all of a sudden your 5 block sources are all stuck, instead of providing the redundancy they're there for.
< wumpus>
#endmeeting
< lightningbot>
Meeting ended Thu May 21 20:01:41 2020 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
< sipa>
BlueMatt: i think that depends on the goal; for some transports i think it's easier to have a "i can relay bytes/messages back and forth for bitcoind, but i don't want to care about all the protocol details like blocks and transactions and addresses"
< sipa>
it's additional complexity for the transport to need to care about these things
< ariard>
BlueMatt: I spent some time thinking about anomaly detection; there is clearly a tradeoff between false positives and complexity
< ariard>
and even worse, you may want to fine-tune depending on your second-layer timelocks...
< sipa>
BlueMatt: by "sleep(300)" you actually mean "a fixed time-out" right, not an actual sleep in the code
< BlueMatt>
sipa: right, sure lol
< BlueMatt>
sipa: as for "the goal", thats true, if we want to model after eg tor's pluggable transport, we want a byte stream, but I think bitcoin data is a little too unique for that - because its so small, we end up being able to shove it in all kinds of queries
< BlueMatt>
and we probably often want it to be unidirectional
< sipa>
BlueMatt: i just think there are distinct goals, and doing both right means doing things at a different layer
< BlueMatt>
right, fair. i guess in my past thinking on this I've always considered mostly nicer bitcoin data providers, less p2p-encryption/modulation-style providers. I agree that would be a cool goal as well, though i guess it is fully separate from things I was thinking about
< sipa>
one is a transport that deals with concepts like blocks and transactions, and probably plugs in at the validation layer directly
< sipa>
another is just connecting bitcoinds, which is more like routing P2P traffic directly (and may even be dealt with as a "peer" from the net perspective, subject to throttling/banning if better DoS protections are added)
< BlueMatt>
right.
< ariard>
sipa: but it's a layer question, 1) may rely on 2) once serialization is done?
< sipa>
ariard: i don't know what you mean
< BlueMatt>
ariard: maybe, but (1) provides you ability to get data from things that arent bidirectional high-bandwidth sockets. (2) is just a way to shove bitcoin p2p protocol over top of something else.
< ariard>
sipa: you may have a first layer caring about protocol details like blocks and transactions, and then passing to a dumb byte-stream transport
< ariard>
or sorry, I think I don't get your distinction laid out above; do you have a concrete example?
< BlueMatt>
ariard: if you're familiar, (2) is tor's pluggable transport system.
< BlueMatt>
(which is to say, just a socks proxy that does its own encryption/scrambling, but doesn't use a different protocol inside)
< ariard>
BlueMatt: okay, I see, compared to blocksat where you do have your own serialization for compressing txs?
< BlueMatt>
right, and especially where you have unidirectional comms
< BlueMatt>
like blocksat
< BlueMatt>
or like headers-over-dns where you have to do strange queries (height -> header data instead of hash -> header data)
< sipa>
imagine we'd have had support and a nice variety of actually used protocols of the first type, before compact blocks existed
< sipa>
compact blocks gets invented, and now each protocol needs to invent its own way of adding support for that in its protocol
< sipa>
things that are just route-a-dumb-byte-stream (which isn't applicable always for sure) would just immediately get support for it without any effort
< sipa>
if you do things at a lower level, there may be a cost incurred to keep up with protocol development
< sipa>
so i really think what is appropriate where is very application specific
< sipa>
but interfaces may differ wildly between them
< ariard>
sipa: agree you may have a driver lib somewhere? I think some serialization may be reused between radio and blocksat or stuff like this?
< BlueMatt>
right, its a question of goals
< BlueMatt>
my thinking was the goal wasnt to try to just scramble the bitcoin data to make it look like noise on the wire, it was to actually have redundant codepaths (and network paths) to fetch block data
< BlueMatt>
which is a very different goal from only getting around port 8333 blocking
< BlueMatt>
ariard: those two goals have very different APIs, though.