< achow101>
sipa: meshcollider: Instead of having the ScriptPubKeyManagers be the SigningProviders for CWallet, what do you think about keeping CWallet as a SigningProvider and just having it poll all of its ScriptPubKeyManagers for keys and scripts? I think this makes the implementation much simpler, as it doesn't require knowing exactly which address type an output uses in order to sign for it (as well as a bunch of other things that take SigningProviders)
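A minimal sketch of the polling approach achow101 describes, assuming a hypothetical GetAllScriptPubKeyMans() accessor (illustrative, not the code in the box-the-wallet branch):

    // CWallet stays a SigningProvider and simply asks each of its
    // ScriptPubKeyManagers in turn until one of them has the key.
    bool CWallet::GetKey(const CKeyID& address, CKey& key) const
    {
        for (const auto& spk_man : GetAllScriptPubKeyMans()) {
            if (spk_man->GetKey(address, key)) return true;
        }
        return false;
    }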
< sipa>
achow101: did you see my comment on your issue?
< sipa>
i think a more efficient interface is having a method that you give an sPK, and it gives you a SigningProvider for it
< sipa>
which for the legacy one would just give its built-in bulky one
< sipa>
and for descriptor based ones would expand the relevant descriptor at the right index and just return that
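sipa's alternative amounts to an interface along these lines (a sketch with hypothetical names, not an actual Bitcoin Core API):

    // Each manager hands back a SigningProvider scoped to one scriptPubKey;
    // a descriptor-based manager would expand the descriptor at the right
    // index, while the legacy manager returns its bulky built-in provider.
    class ScriptPubKeyManager
    {
    public:
        // Returns nullptr if this manager does not know the script.
        virtual std::unique_ptr<SigningProvider> GetSigningProvider(const CScript& script) const = 0;
    };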
< achow101>
sipa: there are a few things I think that approach won't work for. I'll take a closer look at that tomorrow
< sipa>
achow101: yeah, i believe that; it's just a suggestion
< meshcollider>
achow101: I thought that was handled by just checking IsMine on all of them anyway
< achow101>
ngl, I think I did it way too rushed. If you look at the commits in my box-the-wallet branch, you'll see that the quality of each commit starts dropping off rapidly towards the end. I'll probably end up redoing a large chunk of it anyway
< achow101>
at least now I know what all needs to actually be changed
< fanquake>
Has anyone else seen a crash similar to #16027? I don't think it's related solely to sleep/hibernation, as I've had two occurrences, both happening right after unloading a wallet. Unfortunately I wasn't running under lldb in either case..
< bitcoin-git>
[bitcoin] promag closed pull request #12419: Force distinct destinations in CWallet::CreateTransaction (master...2018-02-distinct-destinations) https://github.com/bitcoin/bitcoin/pull/12419
< bitcoin-git>
[bitcoin] promag opened pull request #16299: bench: Move generated data to a dedicated translation unit (master...2019-06-benchmark-data) https://github.com/bitcoin/bitcoin/pull/16299
< wumpus>
provoostenator: if you want to nominate a backport, it might be better to do it in MarcoFalke's topic?
< provoostenator>
Yes, sorry
< wumpus>
oh, it's not merged yet to master
< wumpus>
ok, that should be under 'bugfix' then I suppose
< provoostenator>
Based on stats from Blockchair on 0.1 BTC fees, I think quite a few people are firing that footgun (unless there's another wallet that produces exact 0.1 BTC fees).
< provoostenator>
It's because if you set feeRate to "1", that doesn't mean 1 satoshi per byte.
< wumpus>
right, sounds like a bug
< sipa>
provoostenator: holy crap that's insane
< wumpus>
another case of quietly ignoring an error
< wumpus>
that's always a red flag
< provoostenator>
It rounds down the fee instead of aborting. Which has been the case for years, but the "satoshi per byte" convention is newer, so maybe that's what causes the increase.
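Worked out, assuming feeRate here is the RPC option denominated in BTC/kvB and the default -maxtxfee of 0.1 BTC:

    feeRate = 1  is read as  1 BTC/kvB = 100,000 sat/vB  (not 1 sat/vB)
    a ~140 vB payment:  140 vB * 100,000 sat/vB = 0.14 BTC in fees,
    which the wallet silently rounds down to the 0.1 BTC -maxtxfee cap
    instead of failing, hence the exact 0.1 BTC fees in the stats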
< promag>
:o
< provoostenator>
And the new PSBT methods also have this setting.
< sipa>
provoostenator: that's 25 BTC in fees this month overall
< promag>
this is the miner trolling X)
< provoostenator>
Some of these are batches, but quite a few are small txs.
< provoostenator>
For big batches something else happens: the user sets a higher fee, but then we round it down.
< provoostenator>
Which can cause large batch transactions to get stuck.
< jonasschnelli>
^^
< provoostenator>
In both cases, I think throwing an error is just better. User can always override the maxfee, or manually set a fee.
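A sketch of the change being argued for (condition and names are illustrative, not the actual patch):

    // Old behaviour: silently clamp the fee to the cap.
    //   nFeeRet = std::min(nFeeRet, max_tx_fee);
    // Proposed behaviour: refuse to build the transaction instead.
    if (nFeeRet > max_tx_fee) {
        strFailReason = _("Fee exceeds maximum configured by -maxtxfee");
        return false;
    }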
< sipa>
agree
< achow101>
how is it that we are only running into this now? hasn't this behavior been in for ages?
< wumpus>
this is the first time I hear this is the case, it sounds awful
< sipa>
it looks like in december 2017 there were ~100 cases of this per day as well
< wumpus>
+1 for merging provoostenator's PR soon and doing 0.18.1
< provoostenator>
I'll be quick to address feedback on the PR.
< wumpus>
thanks
< jonasschnelli>
thanks provoostenator for bringing this to attention
< promag>
sipa: that was the ATH period?
< sipa>
promag: i just looked at dec 20th 2017
< provoostenator>
December 2017 was fee madness yes.
< provoostenator>
So people start manually setting the fee.
< sipa>
so this is certainly not a new phenomenon, and also not the first time that it seems actually impactful
< provoostenator>
And also when mempool "weather reports" became popular, and more wallets started supporting fee settings. Most using the satoshi per byte unit.
< achow101>
ack with fixing it
< wumpus>
really wonder why this was never reported; it's not strange that some people complain about high fees then :(
< promag>
multiwallet is kind of useless for integrators without that
< sipa>
actually; the cases in dec 2017 were not absurd; they were all paying reasonable feerates for that time
< wumpus>
we're talking about 0.18.1 here, the 0.19 feature freeze is somewhat further away
< promag>
wumpus: i understand, it's a bit sad it missed 0.18
< wumpus>
(2019-09-15 to be exact)
< wumpus>
promag: yes, blame windows and its absurd escaping rules
< wumpus>
absurd and inconsistent
< cfields>
So for the travis/depends bottleneck issue, I thought of some low-hanging fruit that I think would have quite an impact. By simply sharing the intermediate depends binary packages globally among builds, we avoid situations where dozens of PRs are all rebuilding all of depends.
< cfields>
Instead, the first to finish would send it to the cache server, and each client would check for that package before building it itself. Because all package names are deterministically generated and unique, there should be no filename collisions, so maintenance should be effectively zero on the storage side. At most, a cron job to delete the oldest files now and then.
< promag>
wumpus: I've asked if we could just ignore windows, let's move to the PR later
< cfields>
As a side-effect, it would also kick in and avoid complete depends builds when Travis fails to download its cache.
< cfields>
I'll try to hack it together this week. It may be enough that we don't need to make the bigger changes we discussed a few weeks ago.
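Roughly, the flow cfields sketches (server URL and helper functions are made up for illustration):

    #include <string>

    // Hypothetical stand-ins for the real HTTP and build plumbing.
    static bool DownloadFromCache(const std::string&) { return false; }
    static bool BuildPackage(const std::string&) { return true; }
    static void UploadToCache(const std::string&) {}

    // Deterministic package names mean the first job to finish primes the
    // cache and every later build gets a hit instead of rebuilding.
    static bool ObtainPackage(const std::string& package_id)
    {
        const std::string url = "https://depends-cache.example/" + package_id;
        if (DownloadFromCache(url)) return true;   // someone already built it
        if (!BuildPackage(package_id)) return false;
        UploadToCache(url);
        return true;
    }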
< wumpus>
cfields: how would this cache server work, e.g. how to prevent PRs from uploading arbitrary binary dependencies, or do you intend to build them outside of travis?
< MarcoFalke>
cfields: The cache would be read-write by anyone?
< wumpus>
this has always been the problem with uploading any kind of data from travis
< achow101>
you can configure travis to have local secrets as environment variables
< achow101>
so you have an api key or something in one of the travis environment vars that lets you upload to the server
< promag>
achow101: I could write a PR to dump those secrets?
< MarcoFalke>
You can disable secrets for prs
< cfields>
wumpus: Indeed. It's also going to be a problem with some of the other, more complicated splits that we discussed. I figure this is a much smaller surface to experiment with.
< wumpus>
for branches that would be OK, that's probably enough
< wumpus>
no one is going to merge a PR that dumps secrets and if so we have much bigger issues :)
< MarcoFalke>
I feel like the same problem would be solved by having a shorter cache expiry on travis for pull requests
< cfields>
wumpus: I'm thinking it may be possible to leverage github tags somehow for "allowed to cache" or so. That way the cache is always primed before another PR branches from it.
< wumpus>
so the alternative CI ideas were a dead end?
< achow101>
travis lets you encrypt variables, but it's not available for PRs for the reason that promag said
< MarcoFalke>
Travis already re-generates and caches depends on master, the pull requests are just too slow to pick it up, since they still have their own cache
< jonasschnelli>
semaphore2 would have a nice cache tool
< cfields>
wumpus: no no, this was just something that occurred to me today. I thought it was a good idea, but if you don't like it, no big deal.
< jonasschnelli>
that lets you manually control the key/storage-blobs
< wumpus>
jonasschnelli: semaphore2 sounds great, but it doesn't allow viewing the test logs?!?
< MarcoFalke>
cfields: Have you seen my comment?
< jonasschnelli>
they promised to get this done in the next few days...
< wumpus>
cfields: it just feels so hacky to implement our own caching on an external server because travis is too stupid to handle that correctly, it seems such a basic thing!
< jonasschnelli>
but... maybe it's something we should follow up on, but not do now
< wumpus>
cfields: I'm not against your idea, but it seems to go from bad to worse, what's the next thing we have to implement for them :)
< jonasschnelli>
indeed
< cfields>
MarcoFalke: yes, that's what I was attempting to address. All PRs would immediately have access to those files instead of waiting on the cache.
< jonasschnelli>
since we are customer of travis, can we not request a feature?
< wumpus>
that will probably take too long, if they pick it up
< jonasschnelli>
very likely
< cfields>
wumpus: I'm not sure that's fair. They have storage/upload capabilities, but we're just using the cache.
< MarcoFalke>
cfields: With "immediately" you mean "after the depends build finished"?
< wumpus>
cfields: oh we'd be using their upload/storage capabilities?
< MarcoFalke>
So we'd have to wait either until our own depends server finishes the depends build or until travis finishes it. I don't see the difference
< wumpus>
cfields: that makes sense, I wasn't aware of that
< cfields>
MarcoFalke: I mean: PR1 is created which touches depends, then PR2 is created, then PR1 is merged, PR2 rebuilds depends next time it's bumped whether it touched them or not.
< cfields>
(I believe I typed that out right)
< wumpus>
cfields: I thought this would have to be some external server run by ourselves
< promag>
cfields: also doesn't work for prs
< cfields>
wumpus: well, that was my open question, but I guess you've answered it.
< jonasschnelli>
cfields: why is it racy? Do they use non-atomic copies/overwrites?
< cfields>
promag: say I have a PR that changes one of the base depends files, say depends/Makefile. That will cause all of depends to be rebuilt for that PR. When it's merged into master, the master branch is rebuilt and the depends are cached at that point. But all existing PRs have their own cache already from the previous master build. So when they're rebased to master, they have to rebuild everything again.
< cfields>
So basically touching depends/ causes a huge slowdown across all PRs and builds for a while.
< jonasschnelli>
bah...
< cfields>
promag: it looks like only the addon is disabled for PRs, not the artifact tool?
< promag>
cfields: "So when they're rebased to master, they have to rebuild everything again." why?
< cfields>
promag: because their depends are now stale, master bumped something.
< cfields>
blah, ok.
< jonasschnelli>
cfields: Yeah. I guess it has to. Writing with a move can be made atomic. But I guess if one worker is reading the cache while another is writing the same one, that's problematic. They probably lock the cache during reads.
< cfields>
jonasschnelli: yep.
< jonasschnelli>
Though writing 40MB should be a matter of milliseconds
< cfields>
jonasschnelli: it's also a question of inheritance, though. Who pulls from whom and when?
< cfields>
That's the main complication.
< jonasschnelli>
I guess that fallback model makes sense to me.
< jonasschnelli>
Check if the branch has a cache or fall back to master (or more fine-grained)
< promag>
but we aren't changing depends that much are we?
< jonasschnelli>
can't we just cache single depends packages instead of a whole host package?
< cfields>
jonasschnelli: that's close to what I was suggesting.
< jonasschnelli>
like md5( host || package-name || package-version)
< cfields>
jonasschnelli: s/md5/sha2, and that's how it currently works :)
< jonasschnelli>
aha...
< jonasschnelli>
TIL
< jonasschnelli>
cfields: but that is how depends caches internally... right?
< cfields>
It recurses through the dependency chain, though.
< jonasschnelli>
but not how we write it to the travis cache?
< cfields>
jonasschnelli: we just write those files to the cache, and the next build picks them up...
< cfields>
I was suggesting that we could somehow write them to a big global cache, and let future builds try to fetch them before building themselves.
< jonasschnelli>
yes...
< jonasschnelli>
need to read back... I guess
< cfields>
that operation can be write-only, because two package names will never collide with each other. So it would be trivial to automate.
< jonasschnelli>
I was just hoping... let's assume the Qt dependency has changed: we grab all the others (since sha2(host || package-name || package-version) is cached) and just compile the new Qt
< cfields>
But the question of access causes it to break down, as everyone was quick to remind me :(
< cfields>
Yes, that's how it works.
< cfields>
The problem is when, for example, openssl is changed. That ALSO causes qt rebuilds.
< cfields>
So anything that's more low-level than the package descriptors screws everything up for a while.
< jonasschnelli>
so take the dependency package & version into the hash?
< cfields>
Not exactly the easiest thing to read, but it takes all of that into account. The hashes are nearly guaranteed to be unique.
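The idea, roughly: a package's cache id folds in its own metadata plus the ids of everything it depends on, so bumping a low-level package changes the ids of all its dependents too (a sketch; the real logic lives in depends and uses SHA-256, std::hash here is just a stand-in):

    #include <functional>
    #include <string>
    #include <vector>

    struct Package {
        std::string host, name, version;
        std::vector<const Package*> deps;
    };

    std::string PackageId(const Package& pkg)
    {
        std::string preimage = pkg.host + '|' + pkg.name + '|' + pkg.version;
        for (const Package* dep : pkg.deps) {
            preimage += '|' + PackageId(*dep); // bumping openssl changes qt's id too
        }
        return std::to_string(std::hash<std::string>{}(preimage));
    }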
< cfields>
jonasschnelli: you're fine! It's not exactly straightforward.
< jonasschnelli>
[22:10:42] <cfields> But the question of access causes it to break down, as everyone was quick to remind me :(
< jonasschnelli>
^ -- by that, do you mean that reading while writing is racy?
< cfields>
jonasschnelli: no, I mean allowing open access to upload somewhere without being DoS'd.
< jonasschnelli>
Oh. I see.
< cfields>
That's why we've ultimately avoided having Travis do any useful uploading.
< jonasschnelli>
I was trying to check whether this would be possible with semaphore2's cache manager...
< jonasschnelli>
since they have key/blob storage, I think it would
< cfields>
So I think the ideal solution for us would be to allow for merged caches.
< cfields>
I'll have to think about it some more, but I believe that would be perfect.
< jonasschnelli>
cfields: merged between branches?
< jonasschnelli>
(single global cache)
< cfields>
We have a per-branch (master) cache as well as a per-PR cache. If both of those were fetched and extracted, I think Travis would almost always skip rebuilding.
< jonasschnelli>
I see.
< cfields>
Right now, according to MarcoFalke, we can choose.
< cfields>
Hmm.
< cfields>
I'll experiment with that. Might be easy for Travis to implement, too.
< jonasschnelli>
Yes. Asking them costs nothing...
< jb55>
re promag on #13339: yes please! walletnotify seems kind of gimped now without it getting the walletname. running patched versions isn't fun either :[