<sipa>
it never progressed past being a prototype, but it'd be nice if someone picked it up
<dergoegge>
sipa: cool! i think the cool thing about cryptofuzz was the differential oracle
<sipa>
dergoegge: yeah, i'm aware
<dergoegge>
we have a fork of cryptofuzz (latest commit afaik) but i would assume it's not trivial to maintain
<vasild>
"In light of the EU Product Liability Directive and other speech criminalizations I am ceasing all my publications including FOSS contributions."
<furszy>
hi
<achow101>
yeesh
<fanquake>
dergoegge: it’s missing my last upstream fix but otherwise up to date yea
<sipa>
some portion of the kinds of things cryptofuzz could catch are also covered by wycheproof
<fjahr>
dergoegge: does it make sense to put it under a core org?
<fanquake>
not unless someone is planning to maintain it
<fjahr>
at least until another, better maintained fork emerges
<fjahr>
for someone to start it needs to be available somewhere
<instagibbs>
how much work remains to get a glimpse of end to end usage
<sipa>
it implements (part of) the "medium level" layer: sort of a whole-mempool representation of the transaction graph, but very stripped down in functionality (it knows only fees, sizes, dependencies, and represents transactions as abstract handle "Ref" objects, no txids, no ctransaction, ...)
<sipa>
it's tested by a single simulation fuzz test, which compares the behavior of it with a naive reimplementation (mostly) which treats the entire mempool as a single depgraph
<sipa>
instagibbs: you mean in terms of what is left to do at the txgraph level, or what is left for cluster mempool as a whole?
<instagibbs>
latter
<sipa>
so i'm working on extending txgraph to be fully featured (i have some non-pr'ed stuff that is ready already) so it provides enough functionality (specifically, it needs mining/eviction, and a way to deal with reorgs that might otherwise violate policy limits)
<sipa>
after that, i think sdaftuar is planning to rebase #28676 on txgraph
<sipa>
then there are a few "optimizations" that we'll probably want soon, but aren't blockers for more progress on the mempool side (e.g. reducing memory usage for small clusters in txgraph, and a way to do background relinearization, ...)
<sipa>
in either case, 31363 is the thing to review now
<sipa>
based on feedback, i'm happy to split off more parts, split off commits, ... whatever helps
<sipa>
perhaps also an interesting update that won't affect things in the too near future: during bitcoin research week at chaincode, some researchers came up with an idea for how optimal cluster linearization may be doable in polynomial time
<glozow>
sipa: do you think that algo would require a refactor? or could it just be an edit to cluster_linearize.h?
<sipa>
depending on how things evolve there, the approach may end up being too slow to run inside our "relay-time" linearization logic, so it wouldn't really affect our guarantees or cluster size limit
<instagibbs>
sipa interested to learn how the relaxation was shown to be optimal :)
<glozow>
(assuming you want to incorporate it)
<sipa>
but it may mean that a background relinearization can practically linearize everything optimally
<instagibbs>
that's awesome
<sipa>
glozow: it'd be pretty much a drop-in replacement for some code in cluster_linearize.h
<glozow>
awesome! so it could be worked on in parallel
<sipa>
instagibbs: i can give you a quick intuition, but let's do that after the meeting
<instagibbs>
yeah offline
<sipa>
that's it from me, unless sdaftuar still appears and has an update
<achow101>
#topic MuSig2 WG Update (achow101)
<achow101>
No updates, waiting for further review of #31242 and #31243
<gribble`>
https://github.com/bitcoin/bitcoin/issues/31242 | wallet, desc spkm: Return SigningProvider only if we have the privkey by achow101 · Pull Request #31242 · bitcoin/bitcoin · GitHub
<gribble`>
https://github.com/bitcoin/bitcoin/issues/31243 | descriptor: Move filling of keys from `DescriptorImpl::MakeScripts` to `PubkeyProvider::GetPubKey` by achow101 · Pull Request #31243 · bitcoin/bitcoin · GitHub
<achow101>
#topic Legacy Wallet Removal WG Update (achow101)
<achow101>
furszy found a few additional edge cases in migration and opened #31374 and #31378 to handle those. These are the current PRs to review.
<achow101>
#topic package relay WG Update (glozow)
<glozow>
I've put up the next PR for making orphan resolution more robust by trying with all candidate peers, #31397. It is getting some review (thanks!!). We have a meeting 2h from now in #bitcoin-core-pr-reviews. I'm also happy to do a video call walkthrough if enough people are interested? lmk.
<glozow>
The step after that would be orphanage protection and more thinking around how we can limit per-peer usage of orphanage
<glozow>
Separately, #31385 may be of interest to mempool folks and users. It relaxes the "all unconf parents must be present" requirement for package validation.
<gribble`>
https://github.com/bitcoin/bitcoin/issues/31385 | package validation: relax the package-not-child-with-unconfirmed-parents rule by glozow · Pull Request #31385 · bitcoin/bitcoin · GitHub
<sipa>
glozow: i'm interested, but i've been quite out of the loop w.r.t. orphan handling... is there a writeup/pr/... that would help to get up to speed?
<jonatack>
oops sorry, functional tests, different topic
<instagibbs>
sipa ok you can try explaining like I haven't thought about QPs/LPs in like 15 years
<sipa>
instagibbs: yeah!
<instagibbs>
my naive thought would result in an exact MILP, or eps-close LP, but obviously we care about optimal linearizations, so I'll let ya give it a try
<instagibbs>
(and MILP would probably be stupid slow)
<_aj_>
stevenroose: test_framework/test_framework.py has code to generate and cache a 199-block-long chain which makes things faster, though i guess you might still see getblock entries in your logs
<sipa>
so we start with the problem of trying to find the highest-feerate topologically-valid subset (if you can do that repeatedly, you can linearize, just move the highest-feerate subset to the front; *if* an optimal linearization exists, this must find it)
<sipa>
which is: given a number n of transactions with fees (fee_1, fee_2, ..., fee_n) and sizes (size_1, size_2, ..., size_n), and dependencies: (par_1, chl_1), (par_2, chl_2), ...
<sipa>
we're looking for boolean variables x_1, x_2, ..., x_n, such that sum(fee_i * x_i) / sum(size_i * x_i) is maximized (and the denominator is nonzero; zero would mean empty set)
<sipa>
and the constraint that for every (par_i, chl_i) pair, (x_{chl_i} => x_{par_i})
<sipa>
sounds right?
<_aj_>
("=>" -- implies, not greater-or-equal?)
<sipa>
_aj_: yeah, just taking x_i to be booleans, so => is implication
<instagibbs>
ah, ok yeah
<instagibbs>
think im tracking
<sipa>
but (x_{par_i} >= x_{chl_i}) would be equivalent if you think of booleans as {0, 1} integers
<sipa>
and as you already know, the first step is relaxing the domain of the x_i variables to be real non-negative numbers instead
<instagibbs>
mhmm
<stevenroose>
_aj_: yeah wallets will still parse them all to sync.
<stevenroose>
But thanks :)
<sipa>
it's clear that multiplying all x_i's by the same constant won't change the solution, but it's perhaps less clear that allowing *distinct* real numbers for each of the x_i (not all exactly 1.0) won't introduce solutions to the problem that don't actually correspond to solutions of the optimal subset problem
<sipa>
i'll get back to that, ok?
<stevenroose>
sipa: yeah wouldn't want to mess with consensus parameters for this case. though there's so many other ones, that was my main question of it was one or not. thanks
<_aj_>
stevenroose: use one wallet for mining and a different one for testing, and compact block filters so the wallets only look at relevant blocks?
<stevenroose>
I'm talking about other wallets like BDK. If they don't support birthdays (or the granularity of the birthday support is not block based but rough time based) it will still sync the entire regtest chain.
<sipa>
instagibbs: with me so far?
<_aj_>
stevenroose: backdate the blocks on regtest, mine a few at current time, and time based birthdays shoud be fine?
<instagibbs>
sipa think so! one step of the optimization we already do, how does the bool relaxation get optimal results, you will tell me
<sipa>
ok, one more step before we arrive at the LP problem
<stevenroose>
_aj_: interesting suggestions :)
<sipa>
because scaling all x_i by the same constant doesn't change the solution, we can choose a normalization constraint that (arbitrarily) picks one scaling for them
<stevenroose>
_aj_: we use bdk and afaik it doesn't support birthdays yet
<stevenroose>
I can just settle for living with the noise for another few years.
<sipa>
and the key insight is that the normalization constraint can be: set the denominator of the goal to 1 (sum(x_i * size_i) = 1), because it makes the goal function (the only part of the problem that's nonlinear) linear
<sipa>
it could also be something else, as long as it's linear; e.g. (sum(x_i * size_i) = sum(size_i)) which can be simplified to sum((x_i - 1) * size_i) = 0, for example... doesn't matter; but we'll continue with sum(x_i * size_i) = 1
<sipa>
observe that multiplying all x_i by say 2 doesn't change (a) the validity of any constraint and (b) doesn't change the goal function (numerator and denominator both double)
<stevenroose>
Colleague tells me it takes over 15 seconds on his machine to generate 100 blocks and I couldn't believe it. It took 5 secs on my beefy workstation as well.
<sipa>
so if we for example just said sum(x_i) = 1, nothing really changes - among all solutions which only differ by an (identical) scaling of all x_i values, it just picks one of them
<instagibbs>
ah wait, the original formulation isn't an LP (or QP or whatever) at all yet, so you're working on turning it into one?
<instagibbs>
im backtracking sorry lol
<sipa>
indeed
<instagibbs>
ho ho ho ok re-reading a third time
<sipa>
🎅
<achow101>
stevenroose: try having a smaller wallet
<sipa>
instagibbs: sorry, maybe i started too far back and should just have jumped to showing how the LP problem matches optimal sets
<sipa>
but maybe there are other people reading along who don't yet know what the LP formulation is
<instagibbs>
how do you set the denominator to a value without solving the LP, clearly missing something heh
<stevenroose>
achow101: it's empty. they're regtest wallets on an empty chain. just that all of them spam all the rpc interactions for the 100+ (104 generally) blocks we have to generate to get some coin. I miss elements' initialfreecoin :D
<instagibbs>
maybe the final LP will be illuminating yeah
<sipa>
instagibbs: we literally just add a constraint to the problem: sum(size_i * x_i) = 1
<instagibbs>
ok, so this by itself isn't turning it into an LP, you're just stating the constraint
<instagibbs>
?
<sipa>
adding that constraint doesn't affect the solution space, because all it does is pick among every set of equivalent solutions, one canonical one
<sipa>
indeed
<sipa>
you agree with that?
<instagibbs>
got it, I keep jumping ahead it seems yes
<sipa>
ok!
<instagibbs>
are we an LP yet are we an LP yet etc
<sipa>
last step: we can replace the goal function from max sum(fee_i * x_i)/sum(size_i * x_i) to just max sum(fee_i * x_i)
<sipa>
because we *know* the denominator is 1 (that's a constraint of the problem, which we leave in place)
<instagibbs>
mmmmmm
<_aj_>
(VCs are going to be so excited that bitcoin-core-dev is finally focussed on LPs)
<sipa>
haha
<sipa>
like, clearly dividing by something we know equals 1 doesn't change the result
<sipa>
we did of course skip one part: showing that the relaxation to real numbers doesn't introduce *additional* solutions to the problem which are *more* optimal, but don't correspond to our real problem
<instagibbs>
👍
<sipa>
to convince you that we didn't, i'm going to show you that *every* solution to the LP problem (optimal and non-optimal) corresponds to a convex combination (a linear combination where all coefficients are between 0 and 1, and sum up to 1) of topologically-valid subsets... and thus that the goal function applied to them will also be a convex combination of those subsets' feerates, which thus cannot possibly be more than the highest feerate of any of them
<sipa>
when i say "convex combination" you can also think "weighted average"
<sipa>
which can't be more than the maximum of any of the values being averaged
<sipa>
ok
<sipa>
let's say T_1, T_2, T_3, ..., T_m are the non-empty topologically valid subsets of our graph (in the real problem space, not the LP one)... there are clearly only a finite number of them as the graph is finite, but it could be exponential
<sipa>
and call v_1, v_2, v_3, ..., v_m the corresponding LP solutions: v_k is the vector of (x_1, x_2, ..., x_n) where x_i=0 if i is not in T_k, and x_i=1/sum(size_j for j in T_k) if i is in T_k
<sipa>
so that's just the vector of the boolean solutions, but rescaled so that sum(x_i * size_i)=1 holds
<sipa>
so all the v_k are solutions to the LP problem... not necessarily the only ones, but just the ones that correspond to a single topologically-valid set
<sipa>
sorry this part is slightly technical and maybe not that easy to convey over text, but i promise it gets better
<_aj_>
every valid solution to the LP problem would correspond with one of those v_k's in some sense -- they'd have all the same x_i=0 values, but some of the non-zero x_i's might have different values, no?
<instagibbs>
to retread a bit, your contention is that any solution that has x_i as 0 implies exclusion, and any non-0 x_i inclusion?
<instagibbs>
or am I jumping ahead again
<sipa>
yeah :)
<sipa>
look at the set of all possible LP solutions (optimal and non-optimal)
<instagibbs>
ok, not convinced, but I can understand the words separately :P
<sipa>
do you agree that the v_k vectors i've given you are all part of that set?
<sipa>
so every v_k consists of 0s and all-identical nonzeros
<sipa>
they cannot have distinct non-zero values
<instagibbs>
I can stipulate because you're saying it's true haha, this part might need full writeup and me staring at it
<sipa>
say you have a graph {A,B,C} and B depends on C, so our T's are: T_1={A}, T_2={C}, T_3={B,C}, T_4={A,C}, T_5={A,B,C}
<sipa>
all the same size
<sipa>
then the v's are: v_1=(1,0,0), v_2=(0,0,1), v_3=(0,1/2,1/2), v_4=(1/2,0,1/2), v_5=(1/3,1/3,1/3)
<sipa>
do you see that these are all solutions of the LP problem?
<instagibbs>
C depends on B* I presume
<instagibbs>
or no
<sipa>
B depends on C
<sipa>
i'm never including B without also including C, but C can appear on its own
<_aj_>
instagibbs: (he defined them to have all-identical nonzeros, x_i=1/divisor; and that satisfies all the constraints. since he defined them that way, they "cannot" have different values...)
<sipa>
all i'm asking is: do you agree these are solutions to the LP problem
<sipa>
i'm not asking to see that the optimal one is among them
<sipa>
nor am i asking to see that they're the only solutions to the LP problem (which wouldn't be true, but that's jumping ahead)
<instagibbs>
probably yeah
<sipa>
B depends on C, so one of the constraints is x_3 >= x_2, which is true for all of them
<instagibbs>
I can re-read the argument later to fully convince, continue
<sipa>
and the normalization constraint holds because all sizes are one so it's just sum(x_i)=1
<sipa>
etc
<sipa>
ok, for each of these v_k solutions, the goal function g(v) = sum(fee_i*v_i) equals the feerate of the corresponding T_k
<sipa>
i should write g(x) = sum(fee_i*x_i), sorry for variable switch
<sipa>
and each v_k is one possible x
<sipa>
or put otherwise: among all these v_k's (and excluding any other LP solutions, which might be even better) the optimal one is the one corresponding to the highest-feerate topologically-valid subset (in the real problem)
<instagibbs>
huh ok
<sipa>
like these v_k's are just the corresponding solutions to the boolean problem we started with, but rescaled so that the sum(x_i * size_i) = 1 constraint holds
<sipa>
and we know rescaling doesn't change feerate
<instagibbs>
claim seems clear, thanks
<sipa>
OKAY
<sipa>
the next step is showing that *every* LP solution can be written as a weighted average of one or more of these v_k's
<sipa>
because of linearity, if that is true, then the goal function sum(fee_i * x_i) in each LP solution is also a weighted average of the goal function of some of these v_k's, which cannot exceed the feerate of the highest topologically-valid subset of the graph
<sipa>
that's jumping ahead, i haven't shown that every LP is a weighted average of v_k's yet, but do you see that if that's the case, we are done?
<instagibbs>
sure
<sipa>
okay
<sipa>
given a solution x to the LP problem (any solution, optimal or not), which is a vector with components (x_1, x_2, ..., x_n)
<sipa>
find the smallest nonzero x_i
<sipa>
call that alpha
<sipa>
observe that the set of nonzero x_i's must be topological: if a child x_i is included, the parent x_j must be nonzero too
<sipa>
so this set of nonzero x_i's must correspond to some T_k, which has a corresponding v_k, which we can subtract from x
<sipa>
sorry, alpha is the ratio between the smallest nonzero x_i and the nonzero values in v_k
<sipa>
so x' = x - alpha*v_k is nonnegative still, and also still topological (parents are higher than children, we've subtracted the same amount from all parents and children, so parents are still higher than children)
<sipa>
so we can keep repeating this... find the set of nonzero values in x', find the corresponding v_k, subtract a multiple of v_k from it, and continue until the whole x is 0
<sipa>
every time we've subtracted a non-negative multiple of a v_k
<_aj_>
x' is no longer an LP solution because it violates the sum(x'_i*size_i) = 1 constraint?
<sipa>
_aj_: correct!
<sipa>
it's a solution to the LP problem if you ignore the normalization constraint, though
<_aj_>
so eg if i had (2/5, 1/5, 2/5) then B is the smallest; non-zeroes correspond to (1/3,1/3,1/3) in v_5, scale and subtract gives me (1/5,0,1/5), which matches v_4
<sipa>
exactly
<sipa>
and the first alpha would be 3/5, the second alpha would be 2/5
<sipa>
which form the coefficients of the weighted average
<sipa>
i guess i don't really have a super tight argument why the alphas sum up to 1, but they have to
<instagibbs>
_aj_ thanks for the example, was helpful
<_aj_>
sipa: consider the highest value initially (either of the 2/5), it's reduced in each step, until it's eventually zero; i think that forces it to end up at 1? i don't have my head around the alphas though
<vasild>
(lldb) p ~((unsigned)0)
<vasild>
(unsigned int) 4294967295
<sipa>
_aj_: the highest value is not necessarily reduced in every step
<vasild>
as expected, but what is this:
<vasild>
(lldb) p ~((unsigned short)0)
<vasild>
(int) -1
<_aj_>
sipa: huh?
<sipa>
vasild: integer promotion; any operand argument smaller than int gets converted to int
<vasild>
Why not 65535?
<sipa>
vasild: the most nonsensical feature in C and C++, probably
<vasild>
[01:27:57.782] /ci_container_base/src/util/bitset.h:369:34: runtime error: implicit conversion from type 'int' of value -1 (32-bit, signed) to type 'value_type' (aka 'unsigned short') changed the value to 65535 (16-bit, unsigned)
<vasild>
unsigned short variable = ~I{0}; // where I is unsigned short
<sipa>
Integer promotion is the implicit conversion of a value of any integer type with rank less or equal to rank of int or of a bit-field of type _Bool(until C23)bool(since C23), int, signed int, unsigned int, to the value of type int or unsigned int.
<sipa>
If int can represent the entire range of values of the original type (or the range of values of the original bit-field), the value is converted to type int. Otherwise the value is converted to unsigned int.
<vasild>
sipa: do you mind if I change that ~I{0} in bitset.h to std::numeric_limits<I>::max()?
<instagibbs>
ok, so it cannot *exceed*, but since it's a relaxation, it cannot *underperform*?
<sipa>
vasild: i'd prefer I(~I{0})
<vasild>
ok, will PR it shortly to fix the fuzz CI (not sure why it failed now or whether it failed before)
<_aj_>
what's the "relaxation"?
<sipa>
instagibbs: all we're trying to get to is that every LP solution x can be written as sum(alpha_k * v_k), where all alphas are non-negative, and sum up to 1
<_aj_>
from bools to reals?
<sipa>
and if that's the case, then every LP's solution's goal function cannot exceed the highest-feerate topological subset, because the goal function on the v_k's cannot exceed that
<sipa>
and thus, the relaxation of allowing non-negative real x_i instead of booleans doesn't introduce any solutions that are *even better* than the highest-feerate topologically-valid subset one
<instagibbs>
I was zooming out a bit to restate that LP relaxations theoretically do not result in worse objective values
<dergoegge>
in my case it was triggered by replacing the Assumes with asserts, and my guess was it had something to do with compiler optimizations (which might also be the case for you with the additional std library flags...)
<sipa>
instagibbs: that's indeed what we're trying to get to
<vasild>
dergoegge: sounds feasible, but then I could reproduce locally on master without extra hardening flags, probably clang 20 (which I use locally) makes a difference
<sipa>
vasild: i would also be ok with a suppression, because this isn't UB
<_aj_>
sipa: that's a nice argument, much more constructive than i was thinking/expecting ("if they're unequal, some value is higher, which if it's a child vs a parent is invalid; if it's a parent vs a child, should drop the child to zero, blahblahhandwave")
<sipa>
instagibbs, _aj_: maybe an argument for why the alphas sum up to one: if instead of just subtracting alpha*v_k, you rescale: x' = (x - alpha*v_k) / (1 - alpha), you get an actual LP solution after every step
<sipa>
and then it's not the sum of the alphas, but the sum of partial products of the alphas, that is 1
<instagibbs>
sipa having trouble framing my point better, but I've convinced myself that trivially a relaxation cannot do what I was thinking of, excluding numerical fun
<sipa>
i found mine more constructive, but they're probably equivalent in some sense
<sipa>
_aj_: but to be clear, it *is* possible that the optimal LP solution consists of unique x_i's; however, this will imply it's a weighted average between distinct solutions, each of which corresponds to an optimal feerate topologically-valid subset, so the overall set of nonzero element is still optimal and topological
<sipa>
e.g. A=2/1, B=2/1, C=1/1, A and B both depend on C
<sipa>
it's possible your solution is a weighted average between (1/2,0,1/2) and (0,1/2,1/2), both of which are optimal-feerate topologically-valid subsets
<sipa>
uhhhh
<sipa>
i just realized this example disproves me
<sipa>
oh, no!
<sipa>
{A,B,C} is actually better than {A,C} and {B,C}, this is a bad example, sorry
<sipa>
say the super simple example: {A,B}, no dependencies, both have fee and size 1
<bitcoin-git>
[bitcoin] vasild opened pull request #31431: util: use explicit cast in MultiIntBitSet::Fill() (master...fuzz_bitset_silence_warning) https://github.com/bitcoin/bitcoin/pull/31431
<sipa>
the v_ks are all optimal: (1,0), (0,1), and (1/2,1/2)... but it *is* possible that the optimal LP solution is say (1/3,2/3), which is a non-equal weighted average between (1,0) and (0,1)
<sipa>
but that's fine because the set of nonzero elements is {A,B}, which is still a solution
<_aj_>
for (1/3, 2/3) i'd be tempted to follow your proof approach and take the last set (0,1/3) as the LP's recommendation to try to get something smaller. doesn't help if the LP recommends (1/2, 1/2) though.
<sipa>
since we want *minimal-size* highest-feerate topologically-valid subsets, you probably want the set of all the highest values, rather than just the nonzero ones
<sipa>
and even that isn't enough, because of the (1/2,1/2) example... which can be improved by doing connected-component analysis on the solution
<sipa>
sadly, even that is not enough, because the two sets which were (by a remarkable coincidence) exactly equally mixed could be dependent on one another
<sipa>
if you want to fix even that, you need to do trial removals "can i forcibly set this x_i to 0, and find an equally-good solution?" for every nonzero x_i
<_aj_>
could do it for the non-parents (of nonzero x_i's) only and get a first pass answer
<_aj_>
i feel like the other cluster mempooly techniques will probably already deal with that in most cases too
<_aj_>
though in most cases you probably don't have exact equality anyway, so...
<sipa>
yeah
<sipa>
i wonder if postlinearization + chunking doesn't suffice
<_aj_>
there's probably some edge case where it doesn't
<sipa>
probably
<_aj_>
does suhas or someone have some super complicated cluster examples to throw at an LP implementation?
<sipa>
he's working on a fuzzer that tests things through an LP-solving library
<_aj_>
(oh, the alphas sum to one via scaling thing i like. you can argue that each step is taking a weighted average between two valid solutions, so if your original was the best, it must equal both the things you're averaging at each step, afaics)
<sipa>
yup
<sipa>
that's a nice way of explaining it
<_aj_>
(so your original combined solution is equally good as the final v_k you ended up with that has equal weighting for all the elements that had the highest x_i values)