< venzen>
wumpus: i assume those are different keys for different uses
< Chicago>
venzen, just ask for the long keyid-format to confirm whether you're using the correct signatures with anything you find. The short keyid is prone to collisions. Adding 'keyid-format 0xlong' to your gpg.conf will ensure gpg always shows you the long keyid format.
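(For reference, the setting Chicago describes is a single line in ~/.gnupg/gpg.conf; the verify command below is an illustrative one-off equivalent, assuming the release-signing key has already been imported:)

    # ~/.gnupg/gpg.conf
    keyid-format 0xlong

    # one-off equivalent when verifying a signed checksums file
    gpg --keyid-format 0xlong --verify SHA256SUMS.asc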
< venzen>
Chicago: thanks, I'll post the link you provide to the r/bitcoin post. The OP seems to be confused and I figured wumpus should be aware of the post
< Chicago>
venzen, I would presume the bitcoin.org (over HTTPS) hosted key is an order of magnitude more trustworthy than anything you randomly find by looking for a short string at the MIT PGP server.
< venzen>
Chicago: agreed, PGP key servers are neither secure nor authoritative. Theymos clarified the issue for the OP, so wumpus needn't respond
< Chicago>
yeah, looks like he got to it 15 minutes ago on the Bitcoin subreddit
< gmaxwell>
sipa: perhaps we need a three-liner to detect, after loading the block index, whether the first abc block is in it and marked valid, and if so trigger the reindex-needed message. :(
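(A rough sketch of the kind of check gmaxwell is describing; the types and the helper name are stand-ins, and the block hash is a placeholder, not actual Bitcoin Core code:)

    #include <cstdio>
    #include <map>
    #include <string>

    // Minimal stand-ins for the block-index structures (illustrative only).
    struct BlockIndexEntry { bool valid_scripts; };
    using BlockMap = std::map<std::string, BlockIndexEntry>;

    // Placeholder for the hash of the first abc-only block.
    static const std::string ABC_FORK_BLOCK_HASH = "<hash of the first abc-only block>";

    // The "three-liner": after loading the block index, refuse to start if the
    // abc fork block is present and marked valid, and ask for -reindex.
    bool CheckForAbcContamination(const BlockMap& block_index) {
        auto it = block_index.find(ABC_FORK_BLOCK_HASH);
        if (it != block_index.end() && it->second.valid_scripts) {
            std::fprintf(stderr,
                "Block database contains data from an incompatible fork; "
                "please restart with -reindex.\n");
            return false;
        }
        return true;
    }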
< gmaxwell>
sipa: I've encountered several other people running into chaos due to abc-corrupted blockchains.
< timothy>
gmaxwell: the real problem is that the broken forks (aka abc) use the Bitcoin Core data directory
< timothy>
instead of using another one
< gmaxwell>
timothy: yes, sure, but they were asked to not do these moronic things that would cause harm to users and rather rudely refused to. We can't control them... what we can do is mitigate what harm we can.
< gmaxwell>
reuse of the datadir is hardly the worst of their sins; reuse of the address version will likely cause the most funds loss eventually.
< timothy>
so do you think SIGHASH_FORKID is not enough to protect them?
< gmaxwell>
timothy: no man, people are already losing funds because they swap btc and bcash addresses; this is not surprising either, because a few altcoins have previously made this mistake and it caused a lot of funds losses there too.
< timothy>
oh right, using compatible addresses can generate chaos for users
< gmaxwell>
it's not quite as bad as the ethereum checksumless stuff, but ... not that much better either.
< goatpig>
it's terribly unsafe
< goatpig>
there's a major attack vector where someone can send you an address that's a nested SW script
< goatpig>
if you fund that on the bcash chain, you just created an anyone-can-pay output
< goatpig>
their approach is basically killing P2SH on that chain
< goatpig>
Bitcoin is unaffected though
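(Roughly the scenario goatpig is describing, using P2SH-wrapped P2WPKH as the example; the hashes are placeholders:)

    scriptPubKey:  OP_HASH160 <hash160(redeemScript)> OP_EQUAL
    redeemScript:  OP_0 <20-byte pubkey hash>        (a segwit v0 program)

    On Bitcoin the spender must also provide a valid witness (signature and
    pubkey). On a chain that never enforces segwit, pushing the redeemScript
    is enough to satisfy the scriptPubKey, so once it has been revealed the
    output is spendable by anyone.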
< gmaxwell>
we might end up inadvertently rescuing them with a more rapid adoption of BIP173, though then there's the same idiocy that may happen four months from now. :( and maybe we should hold back the BIP173 integration in core so that it's distinct from the next of these dumb forks.
< goatpig>
bech32 address or not, I'd stay far far away from P2SH on the bcash chain
< goatpig>
they could have simply changed the script hash prefixes and that would have gone a long way to avoid this mess
< gmaxwell>
goatpig: for a lot of places, paying to or from a plain 1xx address isn't much better.
< goatpig>
i just paid to a bitpay address a couple hours ago and had that same reaction
< gmaxwell>
If my keys are in an HSM, in some offline host, whatever. Fat freaking chance that I'm going to go and load up bcc's potentially key-leaking malware any time soon to go recover lost funds sent to them.
< goatpig>
I'm implementing bch signing in Armory as we speak =D
< gmaxwell>
if bcash price isn't ~0 in short order there will also be a hundred more of these things.
< goatpig>
at least my users won't have to expose their coins to god knows what's in that code
< gmaxwell>
(not that bch is even the first altcoin to airdrop on bitcoin users, just the first with a highly funded marketing effort behind it)
< goatpig>
i didn't even know of the previous attempts
< goatpig>
learned of them through the bch "insistence"
< gmaxwell>
goatpig: I knew of two of them, learned of a few more since.
< goatpig>
what really pissed me off, besides the hurried-out fork, is that they didn't bother contacting people in the ecosystem, nor do I have any idea where to look for their testnet, if they even have one
< goatpig>
no wonder Coinbase is planning to support them in 2018!
< gmaxwell>
they don't have one.
< goatpig>
man... i was afraid it would be the case
< arubi>
it appears you have to sign up to the testnet
< arubi>
I was just thinking maybe there should be an open testnet
< goatpig>
sign up? wth
< goatpig>
it's like they want the pretense of open source, but every step they take is to obfuscate development and keep the ecosystem at bay save a few hand picked actors
< gmaxwell>
yea, I was talking to some people about what it would take to support this... there are people out there that have automatically managed keys in HSMs which can't deal with this stuff without being reinitialized, and where any major modification to the software will require an outside security and crypto assessment with a $200k-ish pricetag.
< arubi>
I saw a message on one of their subreddits, I think it was "ftrader" that told the person asking about it that he'll need to sign up for their slack
< arubi>
at least iirc, I didn't bother
< gmaxwell>
arubi: oh they edited that post to now say "sign up"
< wallet42>
where/how is segwit block data stored on disk? is it appended to the block in the *.blk files?
< gmaxwell>
wallet42: what? it's stored in the blocks just like it's sent on the wire. you're probably thinking segwit separates signatures; it does not, it leaves them out of txids, but they're inside transactions.
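(For reference, the BIP144 serialization gmaxwell is describing keeps the witness inside each transaction; only the txid computation skips it:)

    [nVersion][marker=0x00][flag=0x01][inputs][outputs][witnesses][nLockTime]
    txid  = double-SHA256 of the serialization without marker, flag and witnesses
    wtxid = double-SHA256 of the full serialization above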
< gmaxwell>
goatpig: as far as testnet goes, it doesn't pass the unit/system tests that it ships with.
< arubi>
right that's it. was on github
< goatpig>
nice
< gmaxwell>
goatpig: I was trying to make a safer version of it by reverting most of the code to things traceable to audited code; then trying their tests to see what the changes broke... wasted a bunch of time because the tests never passed to begin with.
< goatpig>
let's look at it like this: at least they build!
< arubi>
now I know what I'm doing tomorrow
< arubi>
regtest didn't complain, at least that helped with the sighash thing
< arubi>
well, didn't break more like it
< jonasschnelli>
achow101: Ping. I have read on the gitian-related GitHub repos that you also had problems with LXC and "init.lxc: failed to mount /dev/shm : No such file or directory"
< jonasschnelli>
Any idea?
< jonasschnelli>
... how to solve this?
< jonasschnelli>
Since updating from jessie to stretch (Debian 8 → 9) I'm no longer able to build with LXC
< aj>
jonasschnelli: do you need to make a symlink to /run/shm?
< jonasschnelli>
aj: can you elaborate in more detail?
< jonasschnelli>
symlink /run/shm to /dev/shm?
< aj>
jonasschnelli: not accurately without checking some details. :) /dev/shm got renamed to /run/shm (and maybe removed entirely?)
< aj>
jonasschnelli: seems like that shouldn't be your issue though
< jonasschnelli>
Oh.. Let me try that
< jonasschnelli>
aj: hmm.. added lxc.autodev = 1 to my gitian builder's etc/lxc.config.in
< jonasschnelli>
But I still get the same error during gbuild
< jonasschnelli>
aj: It worked on jessie... it does not work on stretch
< aj>
jonasschnelli: i think as of stretch autodev=1 is the default
< aj>
jonasschnelli: yeah, jessie was 1.0.6, stretch is 2.0.7, it changed around 1.1.2
< jonasschnelli>
lxc.autodev = 0 solves the issue "init.lxc: failed to mount /dev/shm : No such file or directory"
< jonasschnelli>
Though other errors appear. :)
< jonasschnelli>
tar: cache: Cannot open: No such file or directory
< jonasschnelli>
tar: Error is not recoverable: exiting now
< aj>
jonasschnelli: https://github.com/moby/moby/issues/12912 has a change to make it actually create /dev/shm when trying to mount, but it's supposed to be applied already the way i read it
< jonasschnelli>
aj: Thanks. I'll look into it after lunch...
< Chicago>
Why not continue using jessie? It hasn't reached end-of-life quite yet.
< jonasschnelli>
Chicago: I have already upgraded, and downgrading seems painful
< jonasschnelli>
But I would not upgrade again...
< Chicago>
Just for Gitian building, using a VirtualBox instance seems to be an easy 5-minute build. If using QEMU/LXC, then it's maybe more time-consuming.
< Chicago>
I'm doing it with libvirtd and Virtual Manager, it's pretty efficient and fast when creating a new Gitian VM.
< aj>
jonasschnelli: is the container ubuntu or debian?
< aj>
jonasschnelli: (is the config available somewhere?)
< jonasschnelli>
lxc-execute: cgroups/cgfsng.c: cgfsng_create: 1363 No such file or directory - Failed to create /sys/fs/cgroup/systemd//lxc/gitian: No such file or directory
< jonasschnelli>
as well as...sudo: unknown user: ubuntu
< jonasschnelli>
sudo: unable to initialize policy plugin
< jonasschnelli>
I can "adduser debian" in the boostrap fixups..., but not sure about the systemd/lxc issue
< jonasschnelli>
Oh.. after creating the user in bootstrap.fixups and giving it the right permissions, the gitian build has at least started...
< jonasschnelli>
(including aj lxc mountpoint fix above)
< Chicago>
jonasschnelli, awesome :)
< aj>
jonasschnelli: wow, what a mess :(
< jonasschnelli>
Chicago: Do you use VirtualBox instead of LXC/KVM (qemu) for the guest/build VM? Or for the host VM?
< Chicago>
jonasschnelli, I'm currently very happy with libvirtd run through Virtual Manager with Debian 8 as the guest VM operating system. Gitian decides which OS is used for the image, in the recipe. (currently it's Trusty)
< Chicago>
The host OS is Gentoo GNU/Linux x86_64.
< jonasschnelli>
Hmm... Chicago: I use a "physical Debian 9" as the host for gitian (no VM in between)... gitian then spins up the LXC Ubuntu Trusty on that physical host.
< jonasschnelli>
I wonder if using gitian's VirtualBox way of building deterministically is faster than LXC... though I doubt it.
< Chicago>
Well... there are tricks to it. You can update the Gitian configuration to use more memory and processors than the 3000M of RAM and 2 vCPU it calls for out-of-the-box.
< jonasschnelli>
Chicago: indeed. Speeding up with mem and parallelism makes sense... also dependency caching can maybe be improved
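(For reference, gitian-builder's gbuild lets you raise those limits on the command line; the exact option names may differ between versions, so treat this as illustrative:)

    ./bin/gbuild --memory 8000 --num-make 4 \
        --commit bitcoin=v${VERSION} ../bitcoin/contrib/gitian-descriptors/gitian-linux.yml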
< Chicago>
hell... if you have a big box, put everything into a tmpfs
< Chicago>
40G ain't much RAM on modern gear.
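(A standard way to do what Chicago suggests, assuming the build tree lives under a gitian-builder/ directory — path and size are illustrative:)

    sudo mount -t tmpfs -o size=40G tmpfs /path/to/gitian-builder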
< jonasschnelli>
Chicago: I doubt it's much faster than running on the 1GB/s SSD
< jonasschnelli>
disk access doesn't seem to be the issue. Compiling is usually pure CPU, no?
< Chicago>
VT-x extensions will give the compiler access to the vCPU, but compiling and linking still involve disk i/o, and an SSD is still at least an order of magnitude slower than RAM.
< jonasschnelli>
Can't the apt update and "Upgrading system" steps in gitian be cached?
< jonasschnelli>
Updating apt-get repository (log in var/install.log)
< jonasschnelli>
Installing additional packages (log in var/install.log)
< jonasschnelli>
Upgrading system, may take a while
< jonasschnelli>
Those steps seem to take a couple of minutes.
< jonasschnelli>
Caching as long as the hash of the descriptor hasn't changed?
< Chicago>
Well, you know, once the base image is built it could be a few weeks between Bitcoin release cycles, so it has to build the dependency graph and do the package installations deterministically, such that if you built an image last month with Trusty and I built an image today with Trusty, we both end up getting the exact same depgraph when we go to build everything.
< Chicago>
The caching comes from apt-cacher-ng so that you don't have to repeatedly fetch those files.
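(For reference, apt-cacher-ng listens on port 3142 by default, and pointing apt at it is a one-line proxy setting; the exact wiring inside gitian may differ, so this is illustrative:)

    # /etc/apt/apt.conf.d/01proxy
    Acquire::http::Proxy "http://127.0.0.1:3142";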
< MarcoFalke>
Would it be sufficient to call invalidateblock $abc_block ?
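(For reference, that RPC takes the hash of the block to mark invalid, and reconsiderblock undoes it; the hash below is a placeholder:)

    bitcoin-cli invalidateblock "<hash of the first abc block>"
    bitcoin-cli reconsiderblock "<hash of the first abc block>"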
< MarcoFalke>
sorry, replied to scrollback.
< jonasschnelli>
Build server is up and running... thanks to aj!
< jcorgan>
inb4 someone interprets that as BLOCK_SIZE
< wumpus>
lol
< wumpus>
I can see the reddit troll posts already "wladimir j. van der laan proposes to change block size to 150GB"
< luke-jr>
lol
< luke-jr>
so anyhow, what's with the release notes saying current wallet RPCs don't work with multiwallet in 0.15?
< luke-jr>
I thought no matter what a new interface does, we would stay backward compatible⁇
< paveljanik>
wumpus, 150GB or 140GB?
< paveljanik>
the first comment says 150GB, but the code is changed to 140
< Lauda>
I think he meant to do 150 GB. The current size is 135 GB, plus the 15 GB mentioned in the commit.
< jonasschnelli>
wumpus: 7 min for OSX is quick... though 28 min for Linux seems slow...
< achow101>
jonasschnelli: I never figured out how to fix the lxc problem. I just switched to using kvm instead after trying too many things that didn't work
< achow101>
if you could figure it out though, that would be great