< fanquake>
What we really need is a bot that picks out every typo, spurious include and incorrect space from every new PR, and embarrassingly notifies the contributor of their transgression
< fanquake>
/s
< wumpus>
in triplicate, of course
< wumpus>
for typos in translation strings it could even be useful
< wumpus>
but in comments, bleh
< wumpus>
especially if the gist of the word is clear anyway
< timothy>
hi, is there a max size for rev*.dat files?
< wumpus>
timothy: afaik no, the size of the rev depends on what is in the blk*.dat which is size-limited though
< timothy>
yes, max size of blk is 128 MiB
< wumpus>
I suppose the rev will always be smaller than the blk
< gmaxwell>
wumpus: it could be larger if you did something absurd.
< timothy>
gmaxwell: absurd like what?
< gmaxwell>
wumpus: e.g. say early blocks make zillions of 9999 byte outputs, spendable with OP_TRUE. then later blocks spend them and do nothing else.
< gmaxwell>
That will make the rev data larger than the related blocks.
< timothy>
right
< gmaxwell>
you could get them up to a ratio of perhaps 230 times larger.
< timothy>
still less than 4GB (max file size of FAT32)
< gmaxwell>
of course none of these txn would pass standardness tests and what not... not likely to see it in mainnet, but it's possible.
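A rough arithmetic sketch of why the ratio lands near a couple hundred, assuming gmaxwell's scenario (9999-byte OP_TRUE outputs spent by minimal inputs); all sizes here are approximations, not exact Bitcoin serialization:

```python
# Back-of-the-envelope for the worst-case rev:blk ratio. Each spending
# input in the block is roughly an outpoint (36 bytes) + a ~1-byte
# scriptSig + a 4-byte sequence, while the undo record must carry the
# full 9999-byte spent scriptPubKey plus some metadata (amount, height).
# All figures are assumptions for illustration.
input_size = 36 + 1 + 4          # bytes in the spending block, per input
undo_size = 9999 + 8 + 2         # spent script + amount + metadata (rough)
ratio = undo_size / input_size
print(round(ratio))              # on the order of a couple hundred
```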
< timothy>
uhm no, more than that
< timothy>
30 GB
< wumpus>
oh wow that's pretty bad
< wumpus>
so I guess ideally the logic should be: if *either* the rev or blk file reaches 128MB, roll to the next file
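A minimal sketch of the roll-over rule wumpus proposes (not current behavior): advance to the next blk/rev file pair as soon as *either* file would exceed the cap, so pathological undo data cannot grow one rev file unboundedly. Function and argument names are invented for illustration:

```python
# Roll to the next file pair when either the block file or its undo
# (rev) file would exceed the 128 MiB cap after the next write.
MAX_FILE_SIZE = 128 * 1024 * 1024

def should_roll(blk_bytes, rev_bytes, add_blk, add_rev):
    return (blk_bytes + add_blk > MAX_FILE_SIZE or
            rev_bytes + add_rev > MAX_FILE_SIZE)

# A nearly full rev file forces a roll even though the blk file has room:
print(should_roll(100 << 20, 127 << 20, 1 << 20, 2 << 20))  # True
```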
< timothy>
is there any reason to use 128 MiB instead of other values?
< gmaxwell>
it should be relatively small because it preallocates to reduce fragmentation. (or otherwise windows users cry)
< timothy>
NTFS doesn't support fallocate or similar?
< gmaxwell>
but not so small that it's making a kazillion files and causing poor file system performance.
< timothy>
I mean, preallocation without really writing the bytes
< gmaxwell>
I long since forgot what the tradeoff surface was on windows.
< gmaxwell>
But I didn't think NTFS did sparse files.
< wumpus>
I guess it could harmlessly be changed to 256, but I expect there to be no performance gain
< gmaxwell>
wumpus: 60 GB rev files! :P
< wumpus>
gmaxwell: yeah after rev files capped of course
< wumpus>
with a blow-up of 230, at least one block's undo data would fit into that :-)
< gmaxwell>
sipa might have more accurate figures on the worst case, but it's something around that much.
< wumpus>
"Most modern file systems support sparse files, including most Unix variants and NTFS, but notably not Apple's HFS+. Sparse files are commonly used for disk images, database snapshots, log files and in scientific applications."
< wumpus>
so MacOS is the problem here, not windows
< gmaxwell>
The extra question though is if they prevent fragmentation.
< wumpus>
I don't know, is there such a guarantee for UNIX filesystems?
< wumpus>
oh I was confused, this isn't about sparse files at all but posix_fallocate
< wumpus>
in which case the disk space is reserved explicitly
< gmaxwell>
sorry, confusion I caused, late here.
< wumpus>
we apparently do have an implementation of AllocateFileRange for windows, but as I understand from MSDN it might create a sparse file (SetEndOfFile sets the file size, but not the "allocation size"), so this confusion is more general :)
< wumpus>
the documentation is confusing though so I'm not sure
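For illustration, a sketch of the fallback preallocation strategy this discussion refers to: where no fallocate-style call is available, space is reserved by actually writing zero bytes, so the filesystem must commit real blocks (sparse-file APIs only set the size without reserving space). The function name mirrors the `AllocateFileRange` helper mentioned above, but this is an assumed, simplified version, not Bitcoin Core's implementation:

```python
import os
import tempfile

def allocate_file_range(f, offset, length, chunk=65536):
    """Reserve `length` bytes at `offset` by writing zeros in chunks,
    forcing the filesystem to allocate real (non-sparse) blocks."""
    f.seek(offset)
    buf = b"\x00" * chunk
    remaining = length
    while remaining > 0:
        f.write(buf[:min(chunk, remaining)])
        remaining -= chunk

with tempfile.TemporaryFile() as f:
    allocate_file_range(f, 0, 128 * 1024)   # reserve 128 KiB
    f.seek(0, os.SEEK_END)
    print(f.tell())  # 131072
```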
< bitcoin-git>
[bitcoin] fanquake closed pull request #9427: Use compact blocks for blocks that have equal work to our active tip (master...UseCmpctBlockForCompetingBlocks) https://github.com/bitcoin/bitcoin/pull/9427
< sipa>
wumpus, gmaxwell: the 128 MiB is a tradeoff between fragmentation overhead and granularity for pruning
< sipa>
the very first versions of the patch that introduced it (ultraprune) just used a single file per block
< sipa>
but that was very slow
< luke-jr>
sipa: I wonder if pruning ought to perhaps consider punching sparse holes?
< sipa>
luke-jr: i think we can also just reduce that 128MiB number significantly
< morcos>
i think 128 works fine for now doesn't it?
< luke-jr>
maybe. but some filesystems perform differently than others..
< morcos>
perhaps if we properly introduce sharding, then we need to rethink the design
< luke-jr>
btrfs is annoyingly slow, I've found.
< wumpus>
sipa: yes, it seems a good compromise
< wumpus>
AFAIK monero stores all blocks in a single lmdb
< wumpus>
why reduce the 128? agree with morcos that it works fine
< wumpus>
for pruning granularity it's also good enough, given how much space the utxo database takes a variance of 128mb+~16mb (usual rev files) doesn't seem too bad
< wumpus>
although it could be worse than that in some cases depending on how blocks are distributed over the files
< sipa>
wumpus: ok
< wumpus>
and 128mb is at most 128 blocks, less than a day of blocks, even less than that w/ witness data, it's not that much
< bitcoin-git>
bitcoin/master d8e03c0 Jack Grigg: torcontrol: Improve comments
< bitcoin-git>
bitcoin/master 29f3c20 Jack Grigg: torcontrol: Add unit tests for Tor reply parsers
< bitcoin-git>
bitcoin/master d63677b Jack Grigg: torcontrol: Fix ParseTorReplyMapping...
< bitcoin-git>
[bitcoin] laanwj closed pull request #10408: Net: Improvements to Tor control port parser (master...torcontrol-parser-patches) https://github.com/bitcoin/bitcoin/pull/10408
< luke-jr>
wumpus: it's much more than 128 blocks early in the chain?
< sipa>
yes
< wumpus>
luke-jr: of course, but it blasts past that anyway
< wumpus>
most pruning nodes will be - more or less - up to date
< wumpus>
but yes it's easy to forget that once, blocks were that far from full
< CodeShark>
roasbeef has worked on an idea based on golomb coded sets
< jonasschnelli>
«The most efficient data structure is similar to a bloom filter, but you use more bits and only one hash function. The result will be mostly zero bits. Then you entropy code it using RLE+Rice coding or an optimal binomial packer (e.g. https://people.xiph.org/~greg/binomial_codec.c).»
< sipa>
yes?
< CodeShark>
gcs sacrifices CPU for space
< jonasschnelli>
I think what we would need is data about the filter size for the last 100000 blocks...
< CodeShark>
filters are smaller, but queries are more computationally expensive
< gmaxwell>
CodeShark: CPU for who when is always the question.
< roasbeef>
jonasschnelli: I have that
< jonasschnelli>
roasbeef: Oo... share?
< CodeShark>
hey, roasbeef! :)
< gmaxwell>
what BIP37 does is very cpu expensive for the serving party, which is why it leads to dos attacks.
< gmaxwell>
with any of the map based proposals that goes away and the cost to construct is not very relevant.
< CodeShark>
constructing a gcs isn't very computationally expensive
< sipa>
more so than bip37
< gmaxwell>
Similarly, cost to lookup is not very relevant, the reciever will decode one per block.
< CodeShark>
the queries are a little more computationally expensive than bloom filters, but that is done on client
< roasbeef>
jonasschnelli: i have a csv file of stats for the entire chain, can easily get the last 100k out of it, the csv file itself is 14MB
< gmaxwell>
sipa: maybe the lots of hash functions make it more expensive than you might guess.
< jonasschnelli>
roasbeef: I take the complete one,. thanks. :)
< CodeShark>
but gcs only needs to be computed once per block
< sipa>
CodeShark: do you suggest this as something that blocks commit to?
< sipa>
or something that a full node would precompute and store?
< roasbeef>
with bloom filters, there are several hash functions, with the gcs based approach, there's a single hash function. but the set itself is compressed, so you need to decompress as you query
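A toy sketch of the Golomb-coded set (GCS) idea roasbeef describes: one hash function maps each item into a large range, the sorted values are delta-encoded with Golomb-Rice coding, and membership queries decode the compressed stream as they go. The Rice parameter, hash construction, and encoding details here are assumptions for illustration, not the eventual BIP:

```python
import hashlib

P = 19  # Rice parameter (assumed); false-positive rate is about 2^-P

def _hash(item, n):
    # Map an item into [0, n * 2^P) with a single hash function.
    h = int.from_bytes(hashlib.sha256(item).digest()[:8], "big")
    return h % (n << P)

def build(items):
    # Sort hashed values and Golomb-Rice encode the deltas between them.
    n = len(items)
    values = sorted(_hash(i, n) for i in items)
    bits, last = [], 0
    for v in values:
        d = v - last
        bits.extend([1] * (d >> P) + [0])                      # unary quotient
        bits.extend((d >> (P - 1 - b)) & 1 for b in range(P))  # P-bit remainder
        last = v
    return n, bits

def matches(n, bits, item):
    # Decompress the stream, accumulating deltas, until we pass the target.
    target = _hash(item, n)
    i, last = 0, 0
    while i < len(bits):
        q = 0
        while bits[i]:          # count unary quotient bits
            q += 1; i += 1
        i += 1                  # skip the terminating 0
        r = 0
        for _ in range(P):      # read the P-bit remainder, MSB first
            r = (r << 1) | bits[i]; i += 1
        last += (q << P) | r
        if last == target:
            return True
        if last > target:
            return False
    return False
```

Note how a query has to walk the encoded stream from the start: that is the CPU-for-space trade-off mentioned above, paid once per block by the client rather than per query by the serving node.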
< CodeShark>
the latter for starters
< sipa>
i suppose the last
< jonasschnelli>
precomp and store
< roasbeef>
sipa: something a node would precompute and store, to start
< sipa>
okay
< sipa>
what would be stored in the set?
< gmaxwell>
I'm dubious that we'd get state of the art performance from golomb coding, but interested to see.
< jonasschnelli>
Can be done after the block has been connected
< gmaxwell>
sipa: I believe the discussion is the 'bloom map' proposal.
< CodeShark>
roasbeef was suggesting two filters - one for super lightweight clients, another for clients that require more sophisticated queries
< jonasschnelli>
What are the differences? The tx template types?
< CodeShark>
the former would only encode UTXOs, the latter would also encode witness data
< gmaxwell>
encode witness data?!
< CodeShark>
well, if you want to query for whether a particular execution path has been taken - necessary for things like lightning
< roasbeef>
basic has: outpoints, script data pushes. extended has: witness stack, sig script data pushes, txids
< sipa>
but do you need to _search_ based on witness data?
< sipa>
i understand you may want to see it
< sipa>
but you know what UTXOs to query for, no?
< CodeShark>
I'm guessing revocation enforcement might be outsourced to nodes that cannot know the exact transaction format - only some key
< CodeShark>
roasbeef, wanna comment?
< gmaxwell>
Yes, requesting it is fine, searching on it? Be careful: it has serious long term implications if you expect that data will even be readily available. I am doubtful five years from now most nodes will have any witness data from more than a year back.
< gmaxwell>
(witness data also means non-utxo transaction data in that above comment)
< gmaxwell>
aside, I'm glad to hear this discussion has moved past just replicating the BIP37 mechanism.
< roasbeef>
rationale to include witness data was to allow light clients to efficiently scan for things like reusable addresses (stealth addresses), i think my model of how folks do that on-chain these days is dated though, i guess they stuff a notification in OP_RETURNs?
< sipa>
i'm not sure that is worth the cost
< sipa>
also, individual scriptPubKey pushes?
< sipa>
if anything, my preference would just be outpoints and full scriptPubKeys
< roasbeef>
they do make the extended filters quite a bit bigger (i have testnet data also)
< gmaxwell>
well no one does those things in practice, and everyone who previously has implemented them that I'm aware of performed all scanning via a centralized server, even though they could have matched on the OP_RETURN.
< CodeShark>
we can always start with the simplest minimal filter and then add more if we find use cases
< roasbeef>
gmaxwell: well the intention was to allow the new light client mode to actually make using them practical without delegating to a central server
< gmaxwell>
roasbeef: that was already possible with BIP37 and the prior design.
< jonasschnelli>
Can we start with adding the same elements that bip37 does?
< roasbeef>
sipa: so including the op-codes?
< gmaxwell>
Usability of SPV clients that scan using BIP37 is really poor though, thus the rise of electrum.
< sipa>
roasbeef: bah, and 1) further encourage op_returns and 2) make them even more expensive for full nodes?
< gmaxwell>
jonasschnelli: the things BIP37 added largely turned out to be a mistake that really degraded BIP37 so I hope a new proposal would do less.
< sipa>
well the degradation problem doesn't exist here
< sipa>
as the filter is not cumulative
< luke-jr>
sipa: is there a way to do it without OP_RETURN?
< gmaxwell>
yes, but you still need a bigger filter for same FP ratio. It's just less awful. :)
< sipa>
luke-jr: sure, payment protocol like systems
< luke-jr>
well, true, but then you don't need the crypto stuff for it
< sipa>
i think that's a separate discussion and probably not one for here
< luke-jr>
k
< CodeShark>
for starters we should look at the most basic use cases
< gmaxwell>
Yea, we should have a subcommittee. :P
< sipa>
jonasschnelli, CodeShark, roasbeef: is there a use case for individual pushes in scriptPubKeys?
< jonasschnelli>
the action is probably define a set of filter and create a spec that leaves room for future filter types
< CodeShark>
jonasschnelli: indeed
< sipa>
especially in a world where everything is P2PKH/P2SH/P2WPKH/P2WSH
< CodeShark>
once we have the framework for adding new filters, it should be easy to do
< gmaxwell>
jonasschnelli: multiple filter types can result in n-fold overhead, which will be a significant pressure against defining many.
< roasbeef>
sipa: sure, the filter is smaller if one doesn't include the op-code as well
< sipa>
roasbeef: eh?
< sipa>
i must be misunderstanding something then
< roasbeef>
oh you mean insert the _entire_ thing
< sipa>
yes, just the whole scriptPubKey
< sipa>
1 element per output
< sipa>
well, and another one for the outpoint
< roasbeef>
mhmm, the only advantage to data pushes in that case is in a world where bare multi-sig is actually used
< gmaxwell>
sipa: wait why?
< sipa>
gmaxwell: why what?
< gmaxwell>
roasbeef: yes, which we don't expect that world to exist.
< sipa>
roasbeef: yes, the reason it's in BIP37 is for bare multisig support... but i don't think that's very interesting now
< gmaxwell>
sipa: I expect one insert per output. The scriptpubkey. Why would you insert anything else (for normal functionality)
< gmaxwell>
s/now/ever/ but hindsight is 20/20
< gmaxwell>
blockchain isn't a message bus. :P
< sipa>
i guess if you want to look for an outpoint, you can always search for its scriptPubKey
< gmaxwell>
sipa: right.
< gmaxwell>
okay.
< sipa>
in BIP37 there was a reason to separate it, as it would be less bandwidth if you wanted a specific outpoint, despite there being many scriptPubKeys with it
< sipa>
but here, that reason doesn't really matter i think?
< sipa>
roasbeef: what do you think? just a filter with scriptPubKeys?
< gmaxwell>
sipa: the privacy leak from correlated data still exists in map proposals, based on what blocks you choose to scan further, though much less severe than BIP37. Keep that in mind.
< roasbeef>
if it's just spk's, then how does one query the filters to see if an outpoint has been spent?
< sipa>
roasbeef: by querying for the scriptPubKey that outpoint created
< sipa>
roasbeef: which you will always know, i think?
< gmaxwell>
roasbeef: by looking for its spk.
< roasbeef>
sipa: which would require adding parts of the witness/sigScript though?
< sipa>
?
< sipa>
i'm confused
< roasbeef>
me too :)
< CodeShark>
txhash:txindex -> scriptPubKey
< sipa>
maybe we should do this outside of the meeting
< gmaxwell>
roasbeef: has nothing to do with the witness. You validate the transaction, you know the content of the outpoint.
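To summarize sipa's suggestion in code: the per-block filter contains one element per output (the full scriptPubKey) plus the scriptPubKeys of the coins the block spends, so a wallet detects a spend of its outpoint by querying for the same script it saw created. The transaction structure below is invented purely for illustration:

```python
# Sketch of the suggested basic filter contents: full scriptPubKeys of
# created outputs, plus the scriptPubKeys of spent prevouts. A client
# already knows the scriptPubKey of any outpoint it cares about, so no
# witness/sigScript data is needed to detect spends.
def filter_elements(block):
    elems = set()
    for tx in block:
        for spk in tx["outputs"]:          # scriptPubKey bytes per new output
            elems.add(spk)
        for prev_spk in tx["spent_spks"]:  # scripts of the coins being spent
            elems.add(prev_spk)
    return elems
```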
< sipa>
it seems we're doing protocol design here now
< gmaxwell>
12:17 < gmaxwell> Yea, we should have a subcommittee. :P
< CodeShark>
anyhow, we don't need to decide the specifics of what goes in the filter right now
< sipa>
agree
< roasbeef>
ok, sure, to summarize: we have working code for the construction, have nearly finished integrating it into lnd, have a BIP draft that should be ready by next week-ish (will also integrate feedback from this discussion)
< CodeShark>
I like the idea of creating a framework that allows us to arbitrarily define filters later on
< sipa>
i think it's an interesting thing to research further
< sipa>
not sure what else needs to be discussed here
< gmaxwell>
well we aren't deciding anything right now... :)
< gmaxwell>
CodeShark: there is an n-fold cost to additional filters. It is unlikely to me that nodes would be willing to carry arbitrarily many in the future.
< gmaxwell>
CodeShark: there might be a reasonable case for more than one, sure.
< gmaxwell>
In any case, I think this is good to open up more discussion and participation.
< gmaxwell>
I'm quite happy to hear that there is activity in this area and I'd like to help.
< jonasschnelli>
gmaxwell: I see this point but I don't think it would hurt if the specs would allow new filter types
< CodeShark>
gmaxwell: point is the code complexity to support adding arbitrary filters isn't that great and it avoids the bikeshed in writing up the initial BIP ;)
< gmaxwell>
jonasschnelli: yea sure, whatever, but that's just a type parameter.
< jonasschnelli>
gmaxwell: right.
< sipa>
end of topic?
< * roasbeef>
now understands what sipa was referring to
< wumpus>
I don't think any other have been proposed?
< gmaxwell>
you're gonna regret saying that.. :P
< gmaxwell>
quick: high priority PRs.
< wumpus>
nearly halfway time
< jonasschnelli>
kallewoof also had an approach where peers could serve digests of filters to check integrity among different peers
< wumpus>
#topic high priority PRs
< sipa>
small topic for later: bytes_serialized
< gmaxwell>
Congrats Morcos on the merge of the new fee estimator stuff.
< jonasschnelli>
\o/
< sipa>
it will need cleanups, but that's fine
< morcos>
thanks, quick PSA.. if you run master now it'll blow away your old fee estimates, you might want to make a copy
< wumpus>
quite a few high priority PRs were merged this week, so there's place for new ones, please speak up if there's any that block further work for you
< gmaxwell>
"micros" not withstanding.
< morcos>
i'm hoping to get an improvement which makes the transition more seamless before 0.15
< sipa>
it shows the relative depth of each block downloaded from my node _excluding_ compact blocks
< sipa>
gmaxwell did some statistical analysis on it
< gmaxwell>
Sipa's data is interesting. 144 is too small for sure. 1008 is fine. I'm of the view that we don't need more than a dozen or so blocks of headroom. I think the BIP should be written based on what you should keep. How you decide where to fetch depends on exactly what you're doing.
< stickcuck>
hm
< gmaxwell>
I found no real evidence of a preference for N weeks in sipa's data, but rather, advantages for doing 1-day 2-day 3-day ... etc. But 'day' is a lot more than 144 blocks, because of hashrate increases.
< gmaxwell>
You can process the data to roughly remove IBDing peers and the fall off is pretty stark.
< gmaxwell>
note sipas graph ignores depth 0.
< sipa>
it'd be a hockeystick if it included 0
< jonasschnelli>
What would you recommend for "day" instead 144, calc in the historical hashrate increase?
< gmaxwell>
also 0 data is inaccurate because it excludes compact blocks
< sipa>
gmaxwell: didn't you suggest 288?
< gmaxwell>
jonasschnelli: I think we should make the first threshold 288. It's more than enough to cover a 'day' in practice.
< jonasschnelli>
288 and 1008...
< jonasschnelli>
But then the current minimum (prune=550) would not allow to signal the LOW mode?
< morcos>
the current minimum is 288
< gmaxwell>
and then peers should estimate what they need (based on time, or headers if they have them) and choose where to connect. The estimate should be conservative but it doesn't need to be a 100 block headroom, a dozen blocks should be fine. If you get headers and find that you need more, you'll disconnect and go elsewhere.
< jonasschnelli>
Or is 288 including headroom?
< morcos>
the 550 is just so you don't set a prune limit which you have no hope of respecting
< gmaxwell>
the minimum is 288 blocks.
< morcos>
it's out of date with segwit
< gmaxwell>
and we'll blow over the prune setting to preserve 288 blocks.
< morcos>
i think the calculation is presented in the code comments
< jonasschnelli>
Yes. 288 is the minimum. So we should remove the headroom/buffer from the BIP
< gmaxwell>
I think eventually we should be changing the prune setting to be enum-like but thats another matter.
< gmaxwell>
jonasschnelli: I think the BIP shouldn't have any buffer. "You store X from your tip" "You store Y from your tip" it can then make advice to users on how to choose connections. but the requirement is just what you promise to store.
< jonasschnelli>
gmaxwell: ack
< gmaxwell>
The advice can say to use the best info you have available (time or headers if you have them) to figure out what you need, and then give enough headroom maybe 6 or 12 blocks that you can fetch parents. The cost of connecting to someone that doesn't have what you need is not that great. You'll request headers from them, learn you need blocks they don't have and you'll disconnect them and connect
< gmaxwell>
to someone else.
< jonasschnelli>
For the 1008 I guess the BIP can no longer state blocks for 1 week. Now the question is to use 2016 or say it 3.5 days..
< sipa>
?
< sipa>
i think it should just say 1008 or 2016 blocks or so, and not make any connection with time
< jonasschnelli>
From what I understood, 144 is too little for a day given the increasing hash-rate
< gmaxwell>
jonasschnelli: I'll catch up with you later today, I don't have my processed results in front of me. But I think I found that after eliminating IBDs there were very few fetches in sipa's data past 1000 blocks deep. And indeed, it shouldn't mention time.
< jonasschnelli>
But light client implementations are really looking for "days" rather than blocks.. but, sure, they can do their homework... it would have been nice to mention day values in the BIP though.
< jonasschnelli>
But maybe they are too inaccurate
< gmaxwell>
The bit(s) should just be defined as "I claim I will keep at least X blocks deep from my tip, maybe I keep more, maybe not."
< sipa>
jonasschnelli: light clients know how many blocks they are behind after header sync
< gmaxwell>
jonasschnelli: anyone using these bits will fetch headers.
< jonasschnelli>
Indeed.... okay. Got it.
< gmaxwell>
now, before you connect you won't have headers and you'll need to make a time based guess. If you guess wrong you'll need to disconnect and go elsewhere. Not the end of the world.
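The connection heuristic being described can be sketched as follows: before headers are available you only know your best block's time, so estimate how many blocks you are behind and pick peers whose advertised depth (the 288 or 1008 thresholds discussed above) covers that estimate plus a small headroom. All constants besides the 288/1008 thresholds are assumptions from the discussion:

```python
# Pick peers whose signaled pruning depth covers an estimated deficit.
NODE_NETWORK_LIMITED_DEPTHS = (288, 1008)   # depths a peer may signal
TARGET_SPACING = 600                        # nominal seconds per block
HEADROOM = 12                               # "a dozen blocks", per gmaxwell

def required_depth(seconds_behind):
    est_blocks = seconds_behind // TARGET_SPACING
    return est_blocks + HEADROOM

def peer_sufficient(peer_depth, seconds_behind):
    return peer_depth >= required_depth(seconds_behind)

# A node about one day behind needs well under 288 blocks of depth:
print(peer_sufficient(288, 86400))  # True
```

If the time-based guess turns out wrong, the headers received from the peer reveal the true deficit and the node simply disconnects and tries elsewhere, which is why the estimate only needs to be roughly right.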
< jonasschnelli>
Yes. I agree on that. Re-connecting shouldn't be hard.
< jonasschnelli>
Maybe even an additional dns query may be involved (in case you filter)
< sipa>
even if it happens, it'll happen just once
< jonasschnelli>
Yeah,... shouldn't be a problem for clients
< sipa>
because even if you connect to a peer that does not have enough blocks, they'll have the headers to teach you how many blocks you are behind
< sipa>
so i don't think it's such a big issue
< sipa>
done topic?
< gmaxwell>
I think I mentioned it on the list, but it should be clear that these bits should still mean that you can serve headers for the whole chain.
< wumpus>
#topic bytes_serialized (sipa)
< sipa>
thanks
< gmaxwell>
Kill with fire (sorry wumpus)
< jonasschnelli>
gmaxwell: seems obvious.. but I'll mention it
< gmaxwell>
:P
< sipa>
so currently gettxoutsetinfo has a field called bytes_serialized
< sipa>
which is based on some theoretical serialization of the utxo set data
< wumpus>
I think there's something to be said for a neutral way of representing the utxo size, that doesn't represent on estimates of a specific database format
< sipa>
wumpus: agree with that
< gmaxwell>
what I said to sipa the other day was that if we list the total bytes in values and the txout counts, that lets you come up with whatever kind of seralized size estimate you want.
< sipa>
but would you be fine with it just being the size of keys+values in a neutral format, _not_ accounting for the leveldb prefix compression?
< wumpus>
sipa: yes
< gmaxwell>
If you want you could multiply that count by 36 and add the values and that gives you the size for the dumbest serialization that hopefully no one would use.
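The arithmetic gmaxwell describes, as a sketch: given the txout count and the total bytes of the values, anyone can derive a naive serialized-size estimate without scanning the database. The example figures below are made up for illustration:

```python
# Naive UTXO-set serialized size: one 36-byte outpoint per txout
# (txid 32 bytes + output index 4 bytes) plus the raw value bytes.
OUTPOINT_SIZE = 36

def naive_serialized_size(num_txouts, total_value_bytes):
    return num_txouts * OUTPOINT_SIZE + total_value_bytes

# Hypothetical figures: 60M txouts, 2.4 GB of value data.
print(naive_serialized_size(60_000_000, 2_400_000_000))  # 4560000000
```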
< luke-jr>
values counted as 8 bytes, or compressed?
< wumpus>
sipa: that'd be fine really, and the format change provides an opportunity to change the definition
< sipa>
wumpus: agree
< gmaxwell>
okay if wumpus and sipa agree I'll shutup.
< sipa>
luke-jr: no strong opinion. do you?
< luke-jr>
sipa: I don't think the compression should be exposed, ideally.
< sipa>
luke-jr: seems fair
< gmaxwell>
wumpus: the only concern I had with a really neutral figure is that it's misleading.
< luke-jr>
not a strong opinion though
< wumpus>
luke-jr: just a fixed size seems ok to me
< wumpus>
luke-jr: that's more future proof likely
< wumpus>
luke-jr: so we can have a statistic to compare over time
< morcos>
can't we output more than one thing?
< luke-jr>
wumpus: indeed
< gmaxwell>
e.g. a naive serialization would have 32 bytes for txid, but the reality is probably under 16 due to sharing. But as long as it doesn't require scanning that data I guess I don't care.
< sipa>
morcos: so #10396 reports the actual disk usage
< luke-jr>
gmaxwell: well, to be fair, we've never had a formal time limit for meetings..
< luke-jr>
:p
< instagibbs>
it's a standardness rule...
< kanzure>
it was to prevent spam
< gmaxwell>
I like that they're limited. even though I always spend another half hour in resulting discussions.
< gmaxwell>
kanzure: that limit was temporary!
< instagibbs>
I think it's good to focus and respect people's time
< wumpus>
agree
< sipa>
we should revert to the original limit of 24 hours
< luke-jr>
>_<
< gmaxwell>
esp considering timezones don't put this meeting at good times of day for many.
< wumpus>
so make sure that you have topics ready at the beginning, that makes it easier to schedule time for topics
< sipa>
it's especially annoying for people in asia
< luke-jr>
sipa: IMO the original limit was 5 hours
< sipa>
i wonder if we should have the meeting alternate between two times
< luke-jr>
sipa: since that's how long until the day changes in UTC
< gmaxwell>
luke-jr: That isn't consistent with Craig Wright^W^WSatoshi's vision!
< luke-jr>
gmaxwell: it's consistent with tonal though
< cfields>
sipa: nah, let's just use an accounting trick and have meetings on a plane zooming through timezones.
< luke-jr>
anyway, my parents showed up, so going to say hi and then get back to multiwallet
< kanzure>
yes if you navigate the plane correctly, you can actually not spend any time at all in the meeting if you hop between timezones just right.
< cfields>
I'm pretty sure we can cram 2 days into 1 that way :p
< luke-jr>
cfields: rofl
< gmaxwell>
too bad they stopped flying the Concorde.
< sipa>
you just need a plane circling the arctic
< kanzure>
sounds like bip148 discussion is slightly blocked by luke-jr parental units
< wumpus>
sipa: if there's interest from people from asia joining we should certainly do that; in practice I never had any concrete complaints about the current meeting time though
< sipa>
wumpus: we did, a long time ago
< gmaxwell>
wumpus: jl2012 has lamented, and I believe kallewoof too.
< cfields>
iirc it's prohibitive for jl2012, at least
< instagibbs>
oh yeah, kalle too
< wumpus>
ok good to know
< wumpus>
maybe fanquake too (australia)
< instagibbs>
he's a kiwi I thought
< jtimon>
luke-jr: aug2017 seems to soon to me, I have no problems with bip149 on the other hand
< gmaxwell>
we could also just look at the log data, determine a time when most of us are already here that the Asian people can meet, and maybe just set up an hour to talk to them when they know people will be around.
< wumpus>
I'm usualy up very early so that'd be ok with me
< gmaxwell>
I think there is no time everyone can meet. But thats okay.
< gmaxwell>
wumpus is up that early.
< gmaxwell>
oh oops.
< wumpus>
better than late at night
< instagibbs>
I'll survive once a week if that works
< instagibbs>
oh right Chaincode...
< instagibbs>
:)
< sipa>
damn timezones
< achow101>
I'd rather not be up at 1 am
< sipa>
achow101: you'll be on the west coast soon :)
< instagibbs>
Maybe figuring a way to reliably rotate or something. I dunno.
< achow101>
sipa: thinking ahead a bit past the summer :)
< gmaxwell>
instagibbs: well, above I just suggested we have a second meeting at another time. It may be the case that the activity level in the meetings with asia is low enough that rotating wouldn't make sense.
< gmaxwell>
instagibbs: if we pick at time when 'enough' people are here anyways, then it's not like setting aside the slot has a huge cost.
< instagibbs>
hm yeah that makes more sense
< luke-jr>
jtimon: well, it's already happening Aug 1 with BIP148..
< jtimon>
luke-jr: right, I mean that seems too soon
< jtimon>
so I don't think I will run bip148 myself
< gmaxwell>
sipa: so there is like 3 hours between japan and auckland, so that might actually fail to get everyone in that part of the globe.
< luke-jr>
jtimon: oh well. :<
< sipa>
gmaxwell: yes, we need a slower earth rotation
< instagibbs>
don't give kanzure any ideas
< gmaxwell>
instagibbs: kanzure wants to destroy the moon I thought, that would reduce the slowing a lot.
< gmaxwell>
sipa: thats already happening, just wait a while.
< sipa>
gmaxwell: 2ms per century isn't very much
< kanzure>
yeah i have some plans but it's sort of off-topic
< stickcuck>
ok
< bitcoin-git>
[bitcoin] jnewbery opened pull request #10423: [tests] skipped tests should clean up after themselves (master...cleanup_skipped) https://github.com/bitcoin/bitcoin/pull/10423