< wumpus>
achow101: I noticed that the sqlite wallet backend recreates the prepared queries for every batch, is there a specific reason for this? I guess it has to do with multi-threading?
< mj_node>
Folks, I was 95% done syncing my node and accidentally unplugged my external SSD. I tried to restart bitcoind, but it's resuming from block 0, can anyone help? I tried to rebuild, etc.. the [blocks] folder has all the raw blk.dat files but I just can't get it to use them.
< sipa>
mj_node: run with -reindex
< sipa>
it'll still restart the validation from 0, but it won't redownload everything
< wumpus>
normally unless a really high dbcache value is used it shouldn't go back that far on a crash, though unplugging storage while running can cause all kinds of corruption so it's hard to say. In any case a reindex is all you can do, try to keep the device plugged in this time :-)
< mj_node>
I tried that, -reindex doesn't work. It started to re-download blocks; I launched reindex again and it spent hours going through the blocks but still says "loadBlockIndexDB: last block file = 136", when I have over 2000 block files
< wumpus>
ok, wipe everything and start over then
< mj_node>
somehow I think the blocks/index content is the problem
< mj_node>
I'm on a 1 Mbps connection, ehehe
< wumpus>
it's corrupted beyond repair
< mj_node>
ouch
< mj_node>
just to understand, is it because the blk.dat files are unique
< mj_node>
and custom to each node?
< sipa>
mj_node: presumably when it started redownloading it started overwriting the block files you already had
< mj_node>
sipa: yes exactly that's what happened, so even a command like 'reconsiderblock' doesn't work?
< wumpus>
not necessarily, they contain public information after all, though the blocks don't come in in sequential order so they'll be in different orders in the files on different nodes; the "block index" database contains pointers to where every block is
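The raw block file layout wumpus describes can be illustrated with a short sketch: each record in a blk*.dat file is the 4-byte network magic, a 4-byte little-endian length, then the serialized block. This is a minimal illustrative scanner, not Bitcoin Core's actual code (the mainnet magic is the well-known f9beb4d9):

```python
import struct

MAINNET_MAGIC = bytes.fromhex("f9beb4d9")  # mainnet network magic bytes

def scan_blk_file(path):
    """Yield (offset, length) for each raw block record in a blk*.dat file."""
    with open(path, "rb") as f:
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            magic, length = header[:4], struct.unpack("<I", header[4:])[0]
            if magic != MAINNET_MAGIC:
                break  # zero padding or corruption: stop scanning
            yield (f.tell(), length)  # payload starts right after the header
            f.seek(length, 1)         # skip the serialized block itself
```

A reindex does essentially this at scale, then records each block's (file, offset) pointer in the block index database.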
< mj_node>
@wumpus hmm so surely I should be able to rebuild the block index from my raw blocks on the SSD no?
< wumpus>
yes, that's what reindex does
< sipa>
mj_node: redownloading corrupted the blocks you already had
< wumpus>
but not if the first blocks were overwritten as sipa says
< mj_node>
got it - thanks guys, appreciate the explanation, another 20 days i guess.
< sipa>
so you have a few good blocks at the beginning, then overwritten garbage, and then all old good blocks
< sipa>
20?!
< mj_node>
slow wifi....
< sipa>
what hardware is this?
< sipa>
ow
< mj_node>
ehehhehe, I'm in quarantine with shitty connection
< sipa>
if network is the bottleneck i can't help you
< sipa>
mj_node: here is something you could try (it's a moonshot, though)
< wumpus>
you might want to copy the block files from someone else on physical storage
< sipa>
start downloading again in another directory
< sipa>
until you have the first few block files (as many as you possibly had overwritten)
< wumpus>
in any case, this isn't a support channel, unless you're doing development and having questions about the code for that reason this is not the place, use #bitcoin next time
< sipa>
then copy those over the same-named ones in your real dir
< sipa>
and then do a reindex
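sipa's moonshot amounts to overwriting the clobbered leading block files with freshly downloaded copies, then letting -reindex rebuild the index. A hypothetical sketch of the copy step (paths, helper name, and file count are examples only, not a supported workflow):

```python
import shutil
from pathlib import Path

def patch_block_files(fresh_dir: str, damaged_dir: str, count: int) -> list:
    """Copy blk00000.dat .. blk{count-1}.dat from a freshly synced
    directory over the same-named (overwritten) files in the damaged
    datadir. Returns the names actually copied."""
    copied = []
    for i in range(count):
        name = f"blk{i:05d}.dat"
        src = Path(fresh_dir) / name
        if src.exists():
            shutil.copy2(src, Path(damaged_dir) / name)  # overwrite in place
            copied.append(name)
    return copied

# afterwards: restart bitcoind with -reindex on the patched datadir
```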
< sipa>
also, yeah, what wumpus said
< mj_node>
@wumpus sorry for that, and I will make sure to go to #bitcoin-core-dev
< mj_node>
#bitcoin i mean...
< wumpus>
support for rescanning with missing block files, and in general for disjointed block directories and downloading only missing blocks, would be an interesting feature, and likely required for more elaborate pruning strategies, but I think there are a few things preventing this from working right now
< vasild>
the way blocks and undo are stored now in blk and rev files would make this messy
< vasild>
put everything in sqlite!
< wumpus>
I don't think the storage format is the problem, I think this kind of corruption is easier to handle the simpler your data format is
< wumpus>
heh :)
< wumpus>
ideally it would get the longest headers chain from P2P *then* start reconstructing
< wumpus>
it's easier to puzzle out what fits where then
< bitcoin-git>
bitcoin/master 91d6195 Suhas Daftuar: Simplify and clarify extra outbound peer counting
< bitcoin-git>
bitcoin/master 3cc8a7a Suhas Daftuar: Use conn_type to identify block-relay peers, rather than m_tx_relay == nul...
< bitcoin-git>
bitcoin/master daffaf0 Suhas Daftuar: Periodically make block-relay connections and sync headers
< bitcoin-git>
[bitcoin] MarcoFalke merged pull request #19858: Periodically make block-relay connections and sync headers (master...2020-08-blocks-only-rotation) https://github.com/bitcoin/bitcoin/pull/19858
< wumpus>
instead of having 'import blocks' as a discrete initialization phase, consider the unplaced but known blocks already on disk as another block source like P2P (thinking about it, I suppose it *almost* works that way already, after the reindex-chainstate two-phase reindex process)
< wumpus>
it'd be a lot of un-fun fiddling around with broken block directories and such to develop and test this, doesn't sound like fun :)
< gwillen>
wumpus: one thing that occurred to me, when I had a corrupted block directory myself and was waiting to see if core could recover (it could not), is that you have to be slightly careful
< gwillen>
you don't want any risk of ending up in a situation where your block files on disk actually contain two slightly different copies of the same block
< gwillen>
and you checked one of them, but one or more indices ends up pointing at the other one
< gwillen>
this seems like the sort of thing that could slip in, by being more liberal in what one accepts from the block files
< gwillen>
(especially considering that it is Generally Regarded As Safe to grab this stuff from someone else to bootstrap, trusting that core will check it on startup before using it)
< bitcoin-git>
[bitcoin] jnewbery opened pull request #20624: net processing: Remove nStartingHeight check from block relay (master...2020-12-remove-starting-height) https://github.com/bitcoin/bitcoin/pull/20624
< aj>
jnewbery: is the logic for 20624 right? we never go back into IBD without stopping and restarting, but could conceivably get more than two weeks behind (eg if we've got an unattended node whose network was down)
< jnewbery>
aj: potentially, yes. We could get out of IBD, lose connectivity and fall behind, and then start catching up again and relaying old blocks to peers. That's mostly also the case now (the 2000 blocks behind nStartingHeight won't stop us from relaying old blocks to peers if we've been connected to them for a while)
< aj>
jnewbery: presuming we were connected a while, they should be in the same state as us (either both current, or both out of date), so relaying is probably okay; if the network was down, when it came back up, we'd reconnect and choose new starting heights though?
< jnewbery>
I think the worst case is this: we get out of IBD, lose connectivity and fall behind, and then connect to new peers that are on the best tip. We start catching up and then relay headers to peers that are ahead of us. The worst case is 80 bytes for each header, and that peer wouldn't download the block.
< aj>
jnewbery: i guess my impression is that if this works okay if you fall behind, it should work okay if you're still in IBD?
< jnewbery>
the 'this' working ok being 'checking that you're within 2000 blocks of the peer's starting height'?
< aj>
jnewbery: "sending headers, relaying blocks" -- if it's okay to do when you've been out of IBD but are 2001 blocks behind, it should also be okay if you're in IBD?
< aj>
jnewbery: (the "?" there is doing a lot of work...)
< jnewbery>
maybe. I think it's best that we just avoid relaying any inventory when we're in IBD
< aj>
jnewbery: so i think the argument against doing it in IBD is you could be on a relatively low-work but high-height fork from genesis, and end up spamming lots of headers/blocks and waste lots of b/w before everyone realises there's a higher-work fork and switches to that
< jnewbery>
I
< aj>
jnewbery: doing a high-height fork from 2000 blocks ago is expensive, though; and maybe finishing IBD generally is good enough there -- worst case there's a long, low-work chain from a year or two ago, _and_ you've been running but disconnected for that time, but the number of nodes doing that are going to be small, even compared to the number of nodes syncing from genesis at any point in time?
< jnewbery>
I'm confused. Are you arguing against the change in 20624?
< aj>
jnewbery: no, i'm trying to understand it
< jnewbery>
I think worst worst case is you send headers from that long chain from a year or two ago. Two years of headers is ~8MB
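jnewbery's ~8MB figure checks out: at roughly one block every ten minutes, two years of 80-byte headers come to about 8.4 MB:

```python
BYTES_PER_HEADER = 80           # serialized Bitcoin block header size
BLOCKS_PER_YEAR = 6 * 24 * 365  # ~one block every 10 minutes

def headers_bytes(years: float) -> int:
    """Total serialized size of all block headers over the given span."""
    return int(years * BLOCKS_PER_YEAR * BYTES_PER_HEADER)

# headers_bytes(2) == 8409600, i.e. roughly 8.4 MB
```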
< sdaftuar>
aj: in practice, if you're out of IBD, you'll be syncing headers with all your peers shortly after the connection is established
< sdaftuar>
aj: because the criteria for doing that line up pretty closely (iirc). While in IBD, we purposely sync headers from only one peer to avoid duplication.
< sdaftuar>
so i think if we are announcing blocks to peers while catching up after coming back online (so IBD is false, but we're in fact behind), it's no big deal, because we ought to know our peers' header chain anyway
< sdaftuar>
and therefore won't actually be announcing to them, since we'll know they know the blocks already
< sdaftuar>
i guess it's worth testing that there's not some slippage at the beginning of a connection, if they are slow to respond to our getheaders and we are connecting blocks, maybe we'd blast them with useless data? not sure how likely that is
< jnewbery>
sdaftuar: are you talking about after removing the nStartingHeight check? I think if we're out of IBD, we can't be connecting that many blocks?
< sdaftuar>
(just realized "while in ibd, we purposely sync headers from only one peer" is a bit imprecise -- what i meant was that we wait until we have completed header sync from one peer before syncing with others, so we could be connecting blocks while still doing header download, and not yet have tried syncing headers with our other peers)
< sdaftuar>
jnewbery: yes i'm talking about that check (well your PR), trying to explain the IBD vs nonIBD distinction better
< sdaftuar>
hm! the behavior around exactly how we start syncing headers with a peer is not quite what i remember.
< sdaftuar>
it still may not matter much, but it looks like if we fall behind a bunch (say because we lost our network for a while, and then it came back up) that we would only sync headers from one peer until our tip is close to current, before syncing with new ones
< sdaftuar>
anyway, i think that's a long-winded way of me thinking that not-relay in IBD is the right behavior, and relaying while not in IBD even if we happen to be a bit behind is probably ok.
< jnewbery>
I need to understand the "start block sync" and FindNextBlocksToDownload() logic better
< sdaftuar>
the basic idea is that when we start up, we pick a first peer to start syncing headers from. as soon as we have headers that indicate there is a tip >= work of our current tip, we start downloading towards it from any peer that has it. also, once our headers chain is close to current (time within a day of current time), we sync headers from all peers.
< sdaftuar>
and then as those peers respond with their headers (which should be quick, if our headers chain is the correct one -- a single header with their best tip is typical) we'll download blocks from them as well, since we'll know they have the blocks we need.
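The "close to current" test sdaftuar mentions can be sketched as a simple timestamp comparison. This is a simplification for illustration; the real logic in net_processing tracks more state, and the function name here is made up:

```python
import time

DAY = 24 * 60 * 60  # seconds

def headers_chain_near_current(best_header_time: int, now: int = None) -> bool:
    """Rough sketch: our headers chain counts as 'close to current' when
    the best header's timestamp is within a day of the current time,
    at which point headers are synced from all peers rather than one."""
    if now is None:
        now = int(time.time())
    return best_header_time > now - DAY
```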
< jnewbery>
That certainly makes sense conceptually. I just find that mapping that design to the various bits of logic and state in SendMessages() and elsewhere is a bit tricky
< aj>
michaelfolkson: 1 is always the separator in bech32
< wumpus>
both testnet and signet start with 'tb' i don't know why it was chosen to use the same there, probably because they are both test networks and there can potentially already be many of them
< michaelfolkson>
Oh so regtest always starts bcrt1?
< michaelfolkson>
bcrt is human readable and 1 is the separator
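Since the bech32 data charset excludes '1' but the human-readable part may contain it, the separator is the last '1' in the string (per BIP 173). A small illustrative splitter (not a full decoder, no checksum validation):

```python
def split_bech32(addr: str):
    """Split a bech32 string at the last '1' separator (BIP 173):
    the HRP may itself contain '1', so the separator is the LAST one.
    Returns (hrp, data_part); raises on an obviously malformed string."""
    pos = addr.rfind("1")
    # need a non-empty HRP and at least 6 data chars for the checksum
    if pos < 1 or pos + 7 > len(addr):
        raise ValueError("invalid bech32 string")
    return addr[:pos], addr[pos + 1:]
```

So mainnet addresses split as hrp "bc", testnet/signet as "tb", and regtest as "bcrt".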
< wumpus>
michaelfolkson: thanks for looking it up so it was as i guessed
< michaelfolkson>
Presumably non-default signets will be encouraged not to start with tb
< michaelfolkson>
Although can't force them
< aj>
michaelfolkson: nah, that would require patching the code
< michaelfolkson>
Ohhh non-default signets will also start tb?
< aj>
michaelfolkson: there used to be config options for it, -signet_hrp= or so
< wumpus>
seems from that discussion that all new test networks start with tb, as it's for testing, address overlap is not critical
< wumpus>
that regtest has its own is a historical artifact then
< aj>
yeah, and makes it easier to port wallets to different testnets
< wumpus>
exactly
< aj>
meanwhile regtest is kind-of bitcoin-core specific, and there's no point having wallets work with it
< michaelfolkson>
Yeah treating regtest differently to testnet, signet makes sense as not meant to be a public network
< michaelfolkson>
Though choosing tb versus bcrt is a bit peculiar
< aj>
i noticed today that apparently when p2sh was being deployed, regtest didn't even exist
< wumpus>
the public signet is a public network
< michaelfolkson>
And presumably the non-default signets would also be public(ish) networks
< michaelfolkson>
Otherwise use regtest
< wumpus>
yes definitely easier to share
< jonatack>
"// Download if this is a nice peer, or we have no nice peers and this one might do."
< jonatack>
love some of these comments :)
< wumpus>
it's cute
< michaelfolkson>
What you looking at jonatack?
< jonatack>
michaelfolkson: grep the codebase ;)
< michaelfolkson>
So demanding of my typing fingers
< jonatack>
you'd be forgiven for thinking that line was from a jane austen novel instead
< aj>
"It is a truth universally acknowledged, that a high-bandwidth archive node must be in want of an inbound connection" ?
< vasild>
"uhoh, different"
< wumpus>
jonasschnelli: strange, you have a mismatch for the gitian macos signed build
< wumpus>
but the unsigned one matches
< jonasschnelli>
wumpus: that’s really strange.
< jonasschnelli>
Let me do it again
< wumpus>
it shouldn't even be possible, a difference would invalidate the code-signing right?
< wumpus>
or is there scope for malleability
< jonasschnelli>
The only reason I could think of is that I built it before the sigs were pushed (then it would have taken the rc2 detached sig).
< jonasschnelli>
But I very much doubt that I did this
< wumpus>
it doesn't verify what it is attaching?
< jonasschnelli>
I don’t think so. It just takes the newest signature from the 0.21 branch (signature repository)
< jonasschnelli>
A check against the release/tag should probably be added.
< jonasschnelli>
I'll investigate as soon as I'm back at my desk
< wumpus>
I mean a check that the signature checks out against what it's attached to
< wumpus>
the windows one has a check like that IIRC
< jonasschnelli>
wumpus: that’s not the case AFAIK. I don’t know if you can verify the signature on Linux. Probably possible but maybe complicated to add.
< wumpus>
right, thanks
< luke-jr>
is the CI failing to merge with master before running? or did we lose a bunch of Cirrus instances?
< jonasschnelli>
I was just checking and came to the same conclusion luke-jr... hmm..
< luke-jr>
jonasschnelli: if I fix my PR will it mess up your ability to troubleshoot?
< jonasschnelli>
luke-jr: you're right... so yes. bitcoinbuilds currently builds PR branches as they are (not merged with master)
< jonasschnelli>
luke-jr: no. I'll know how to fix it.
< luke-jr>
k
< sipa>
wumpus: treating the block data as an import sounds like a good idea, and not too hard
< achow101>
wumpus: yes, the prepared statement for each batch is so that we don't have issues where there are multiple batches for the same database. Although I'm pretty sure that we never have multiple batches writing to the same db at the same time anyways.
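The batching idea can be illustrated with Python's sqlite3 module. This is a sketch of the pattern, not the wallet's actual C++ code; the table and column names are made up. One transaction per batch, with executemany reusing a single prepared statement for every row:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE records(key BLOB PRIMARY KEY, value BLOB)")

def write_batch(con, rows):
    """Write one batch atomically: the connection context manager wraps
    the batch in a transaction, and executemany prepares the statement
    once and binds it for each (key, value) pair."""
    with con:
        con.executemany(
            "INSERT OR REPLACE INTO records(key, value) VALUES (?, ?)", rows)

write_batch(con, [(b"k1", b"v1"), (b"k2", b"v2")])
```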
< achow101>
for some reason gverify is giving a mismatch for darosior's 0.20.2rc1 win and osx builds, but manual inspection does not show a difference
< luke-jr>
O.o
< luke-jr>
achow101: where do you see it?
< achow101>
locally
< luke-jr>
I also see no difference
< roconnor>
does gverify look at filename dates or other file attributes?
< darosior>
Hmm, gverify passes on my end
< luke-jr>
achow101: could the signature be invalid/rejected for some reason?
< achow101>
luke-jr: i don't think so
< darosior>
Oh no, it does not
< darosior>
Not for Windows, good catch..
< achow101>
roconnor: iirc it expects the same filenames
< luke-jr>
oh, found it
< luke-jr>
- release: v0.20.2rc1-win-unsigned
< luke-jr>
I bet that's it
< achow101>
ugh
< achow101>
ok that doesn't matter
< prusnak>
Big Sur says Bitcoin Core installed from bitcoin-0.21.0rc3-osx.dmg is broken and should be moved to the bin; that said, this happens on an M1 system, so maybe on x86-64 it's fine
< prusnak>
maybe jonasschnelli can confirm? ^
< achow101>
prusnak: is it the notarization warning?
< prusnak>
no, it's different than the error with rc2 (which i assume was notarization error)
< luke-jr>
where did the dmg come from?
< luke-jr>
could we have ended up with a signature that doesn't match a non-deterministic bin?
< luke-jr>
sounds like jonasschnelli signed the wrong file, or committed the wrong sig
< luke-jr>
or our combining process is suddenly broken
< achow101>
i think he signed the wrong file
< roconnor>
maybe I should keep my nose out of this, but what are the consequences of signing the wrong thing? AFAIK publish signatures are unrevokable.
< roconnor>
*published
< dongcarl>
who does the windows codesigning?
< achow101>
me
< dongcarl>
roconnor: You might be thinking of notarization instead of codesigning?
< roconnor>
maybe. What's the difference?
< achow101>
roconnor: in theory there shouldn't be any consequences because code signing is only done to shut up os warnings.
< roconnor>
ah.
< achow101>
but in practice, it both doesn't do that anymore, and it does convey some level of "this software is trusted" because users don't quite understand why we do that
< luke-jr>
maybe we should just stop signing then?
< achow101>
the consequence is that the software that is signed is malicious and some user is tricked into signing it
< achow101>
*using it
< roconnor>
maybe these code signatures are revokable?
< luke-jr>
roconnor: these days, to *actually* shut macOS up, you have to opt in to Apple privacy violations (it tells them every time the app is opened, and which app it was)
< achow101>
luke-jr: code signing is required for notarization, and notarization is now the thing to make macOS not give a warning
< luke-jr>
achow101: but notarization is problematic, and we don't do it
< luke-jr>
so if the signature-alone is useless, why bother?
< achow101>
iirc there's 2 levels of warnings, with signature alone being slightly less aggressive?
< achow101>
and at least it still works on older macOS
< luke-jr>
older macOS that we dropped support for? :P
< dongcarl>
luke-jr: Without signing, the warning is much scarier... something like "unknown developer"
< achow101>
I think there's an older macOS we support that doesn't do the notarization check
< luke-jr>
anyway, jonas can prob fix this easily
< jonasschnelli>
sorry.. was afk.
< jonasschnelli>
Reading backlog
< jonasschnelli>
I codesigned 5e3a08ae8195190d6f1b12e3e1e9d710e7ad385941a6e8d04e3391f12deddb11 bitcoin-0.21.0rc3-osx-unsigned.tar.gz
< jonasschnelli>
achow101, luke-jr: I just checked the DMG is built initially and it works, signature is correct
< jonasschnelli>
I got 998dddf3c0f9b568fc0c39e61e3d61d2843dfb968016b7ceaf23aca94ace2542 bitcoin-osx-signed.dmg as the only one
< jonasschnelli>
But yes,... 46cfa036d365d69db2a3b78377621d6b214f2d78f3082f9c7ebd7a9b89cfc599 bitcoin-osx-signed.dmg has an invalid code signature
< jonasschnelli>
but I signed 5e3a08ae8195190d6f1b12e3e1e9d710e7ad385941a6e8d04e3391f12deddb11 bitcoin-0.21.0rc3-osx-unsigned.tar.gz
< jonasschnelli>
let me try again
< jonasschnelli>
I double checked and I can confirm that I have signed 5e3a08ae8195190d6f1b12e3e1e9d710e7ad385941a6e8d04e3391f12deddb11 bitcoin-0.21.0rc3-osx-unsigned.tar.gz
< achow101>
is signing deterministic?
< jonasschnelli>
The signature is not deterministic,... doing it again gives me a different file/hash
< jonasschnelli>
I can't verify what went wrong
< jonasschnelli>
I have verified the signed 5e3a08ae8195190d6f1b12e3e1e9d710e7ad385941a6e8d04e3391f12deddb11 bitcoin-0.21.0rc3-osx-unsigned.tar.gz and it works
< jonasschnelli>
I don't know what to do,... I can sign and push again. But I'd rather know why it happened
< achow101>
How can you check what was signed?
< jonasschnelli>
I keep the files
< achow101>
ah
< achow101>
I guess for an rc it's fine to leave it, but in the future we should test the signed binary first
< wumpus>
we'll correct it for the next rc (or final) I guess, it's too bad the sig verification can't run in linux
< achow101>
we can't verify the signature after signing?
< sipa>
we have a linux based signing tool, right?
< achow101>
only for windows afaik
< sipa>
i'd be surprised if it doesn't have most of the code needed for verification too
< sipa>
oh
< achow101>
for windows we can (and do I think) verify the sig after signing
< sipa>
so the osx signature is created on osx, but stapled onto the binary in gitian/linux ?
< sipa>
*macos
< achow101>
yes
< luke-jr>
jonasschnelli: can you upload or compare 998dddf3c0f9b568fc0c39e61e3d61d2843dfb968016b7ceaf23aca94ace2542 with 46cfa036d365d69db2a3b78377621d6b214f2d78f3082f9c7ebd7a9b89cfc599 ?
< luke-jr>
I guess our combiner is malfunctioning?
< jonasschnelli>
I think the dmg hash mismatch has nothing to do with the invalid signature
< jonasschnelli>
hmm...
< jonasschnelli>
I think what I need to do before pushing the macOS signatures is doing a gbuild with gitian-descriptors/gitian-osx-signer.yml with the detached signature
< jonasschnelli>
as sort of a dbl-check
< luke-jr>
would still be nice to figure out why it didn't work this time :x
< jonasschnelli>
I guess it always pulls it from there.
< midnight>
Oh nice, no changes necessary in the gitian signing env and I'm getting matches.
< luke-jr>
not if you specify -u
< luke-jr>
-u, --url PAIRS comma separated list of DIRECTORY=URL pairs
< luke-jr>
not very useful --help lol
< jonasschnelli>
I do that for bitcoin=
< jonasschnelli>
but wasn't aware you can also do it then for signature=
< luke-jr>
it's no different ;)
< jonasschnelli>
However,.. I'm going for the additional input file
< midnight>
just me and wumpus I think with the windows sigs for the rc but
< jonasschnelli>
Seems dummysave
< * midnight>
shrug
< luke-jr>
midnight: ?
< midnight>
luke-jr: working nicely still, in spite of my semi-custom build environment.
< luke-jr>
midnight: I see lots of windows sigs for 0.21.0rc3
< midnight>
hrm
< jonasschnelli>
midnight: I guess semi-custom is good. Better than if everyone uses the same host/env with the same script
< luke-jr>
my attempt to semi-custom was met with mismatching
< luke-jr>
but my idea of semi-custom was a ppc64le VM image :P
< jonasschnelli>
have you figured out why?
< midnight>
jonasschnelli: The maintenance of the custom(ish) build env is entirely on me, and I figure it's better to arrive at similar results this way.
< luke-jr>
jonasschnelli: apparently the ppc64le compilers do not output the same objects as the x86_64 compilers, even cross compiling
< luke-jr>
no idea why :/
< midnight>
triangulation and all that.
< luke-jr>
you would *think* i686-w64-mingw64-gcc (or whatever) would produce the same stuff on x86_64 and ppc64le, but apparently not
< bitcoin-git>
[bitcoin] dongcarl opened pull request #20629: depends: Improve id string robustness (master...2020-12-improve-depends-id-string) https://github.com/bitcoin/bitcoin/pull/20629
< jonasschnelli>
i'm doing the 0.20.2 mac signatures asap (need to track-down the previous issue first)
< jonasschnelli>
is there a way to speed up gitian's "Upgrading system, may take a while (log in var/install.log)"?
< jonasschnelli>
Re-create the base system?
< jonasschnelli>
s/system/image
< sipa>
iirc, yes
< sipa>
creating an image after the update speeds things up
< jonasschnelli>
the install part takes >10mins here.. annoying for testing
< wumpus>
yes, the "Upgrading image" takes longer than the actual builds here
< jonasschnelli>
ideed
< jonasschnelli>
+n
< wumpus>
I have no idea how to speed it up, I hacked it once to upgrade the base image but then somehow ended up with two versions of every package and non-deterministic results, so yea, if regenerating the base image solves it I'd definitely recommend that path
< jonasschnelli>
Okay... I'll try that
< wumpus>
fairly sure that works for one of {KVM, LXC}, don't know which one, the other always gets the ancient image
< wumpus>
(they both generate the base image in completely different ways)
< wumpus>
it would make sense to look into it some day but also guix builds are around the corner right
< sipa>
Any Day Now(tm)
< wumpus>
releases are sufficiently rare anyway, but the extra delays are mostly annoying when testing/iterating something
< jonasschnelli>
I'm really confused
< jonasschnelli>
I have downloaded again 5e3a08ae8195190d6f1b12e3e1e9d710e7ad385941a6e8d04e3391f12deddb11 bitcoin-osx-unsigned.tar.gz ...
< sipa>
so the extraction/reattaching process doesn't result in the exact same binary
< achow101>
yeah
< sipa>
can you run "pagestuff -p" on both?
< achow101>
identical
< sipa>
so
< sipa>
the detached-sig-create.sh tool uses pagestuff -p | tail -2 | grep offset | sed 's/[^0-9]*//g' to figure out the offset
< sipa>
and the size of the sign file for the size
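That shell pipeline can be mirrored in a few lines of Python (the sample text in the test below is illustrative, not real pagestuff output):

```python
import re

def last_offset(pagestuff_output: str):
    """Mimic `pagestuff -p | tail -2 | grep offset | sed 's/[^0-9]*//g'`:
    of the last two lines, take the one mentioning 'offset' and strip
    everything except the digits."""
    for line in pagestuff_output.splitlines()[-2:]:
        if "offset" in line:
            return int(re.sub(r"[^0-9]", "", line))
    return None
```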
< achow101>
the sizes are off by one byte
< sipa>
aha
< sipa>
that's what i was going to ask next
< sipa>
i think this is the cause
< achow101>
x86_64-apple-darwin-otool -l Bitcoin-Qt for segment "__LINKEDIT" says "vmsize 0x000000000007b000" but on the other it's "vmsize 0x000000000007c000"
< sipa>
which one is bigger?
< achow101>
gitian one is bigger
< achow101>
wait no, that's backwards
< achow101>
signed one is bigger, gitian is smaller
< sipa>
that's a 4 kB difference though
< sipa>
not 1 byte
< achow101>
The files are the same size
< achow101>
and diffoscope only sees the difference on that single byte
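Locating a single-byte difference like this doesn't need diffoscope; a few lines suffice:

```python
def first_diff(a: bytes, b: bytes):
    """Return the offset of the first differing byte between two byte
    strings, or None if they are identical. If one is a prefix of the
    other, the shorter length is the first difference."""
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return i
    if len(a) != len(b):
        return min(len(a), len(b))
    return None
```

For two on-disk binaries, pass `open(path, "rb").read()` for each side.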
< sipa>
what if you run the same on the unsigned gitian output (= the input to the macos signing tool)
< achow101>
not sure what you mean
< sipa>
x86_64-apple-darwin-otool -l Bitcoin-Qt
< sipa>
oh
< sipa>
nvm
< sipa>
yeah, what if you run that on the unsigned Bitcoin-Qt binary?
< achow101>
__LINKEDIT segment is the same size as the gitian result
< sipa>
so
< sipa>
the codesigning tool modified the __LINKEDIT segment, but applying the detached signature didn't do the same?
< achow101>
seems so
< sipa>
if you run "pagestuff -p Bitcoin-Qt | tail -2 | grep size | sed 's/[^0-9]*//g'", what do you get?
< sipa>
on the various version
< achow101>
ah wait, I was looking at the wrong file earlier
< sipa>
where do the linux binaries pagestuff and codesign_allocate come from?
< achow101>
the otool on the unsigned binary gives a way smaller __LINKEDIT, presumably because the code signature is included in there
< sipa>
i guess that's expected
< achow101>
they come from one of the depends packages, not sure which
< sipa>
ah i see
< achow101>
the sizes from that pagestuff command are the same for both gitian and signed
< sipa>
and the offsets too?
< achow101>
not the same for unsigned, but that's expected because the section size it looks for doesn't exist as it's the code sig
< sipa>
yeah
< achow101>
same offsets
< sipa>
ok, so the byte that changed is in the MP_MACH_HEADERS section, in the very beginning of the file
< sipa>
while the codesig is at the very end
< achow101>
presumably that section is a table
< achow101>
so the problem must be with codesign_allocate because that's what sets that size value
< sipa>
the codesign tool has a --detached option to construct detached signatures directly
< sipa>
why is the tooling doing signing directly, and then extracting the signatures from the result?
< sipa>
this may be a reason for a difference
< achow101>
this workflow probably was setup before that existed
< sipa>
is it possible that the codesign_allocate tool is out of date?
< achow101>
possibly
< sipa>
what else does otool or anything else say about __LINKEDIT?
< achow101>
nothing else afaict
< sipa>
achow101: does macos come with a codesign_allocate tool too?
< achow101>
yes
< sipa>
the native_cctools depends package is only a few months old, and most changes since have been for apple ARM stuff it seems
< sipa>
achow101: if you run "codesign_allocate -i <unsigned Bitcoin-Qt filename> -a x86_64 225312 -o <some temp file>"
< sipa>
on macos
< sipa>
and then inspect that temp file, does it have the correct __LINKEDIT vmsize?
< achow101>
will try
< achow101>
It's the correct size
< sipa>
hmm, we should try updating to the latest native_cctools package i guess and see if that fixes it
< sipa>
an alternative is making jonasschnelli downgrade his codesign_allocate tool to one that's compatible with the native_cctools we use, and then tell his codesign tool to use that
< achow101>
I wonder if we're just running into some weird edge case now