< dcousens>
I was considering compiling it without clang, just to see
< dcousens>
I forgot how much slower gcc builds :S
< gmaxwell>
dcousens: build with O1.
< gmaxwell>
Clang's default optimization levels are less effective than GCC's; it compiles faster but for most code produces slower binaries.
< dcousens>
gmaxwell: did you need me to build with O1 for debugging? Or just future reference
< sipa>
i sometimes configure with -O1 to compile faster
< gmaxwell>
no, just if the compile times bother you-- :)
< sipa>
you could compile with "-Og -ggdb3"
< dcousens>
brb, making breakfast ha
< gmaxwell>
I see things waiting on cs_main in that backtrace... supports the locking theory but I don't see where.
< cfields>
BlueMatt: around?
< cfields>
BlueMatt: something I'm hacking on just started coming together nicely, but would like to make sure we're on the same page before I get in too deep
< dcousens>
behaviour unchanged with gcc
< dcousens>
wait no
< dcousens>
no, RPC is responding now
< bitcoin-git>
[bitcoin] gmaxwell opened pull request #9286: Time-based expiration in LimitOrphanTxSize only when map is full. (master...defer_orphan_exp) https://github.com/bitcoin/bitcoin/pull/9286
< dcousens>
though, only on requests not waiting CS_MAIN :P
< gmaxwell>
hah
< gmaxwell>
dcousens: do you know how to compile with debug lockorder?
< cfields>
dcousens: any chance it's sitting in an fread?
< gmaxwell>
My WAG is that something about your configuration has revealed a lock inversion introduced -- likely by one of Matt's recent race condition fixes -- but other people aren't seeing it.
< dcousens>
gmaxwell: no, a few commands away from doing it
< cfields>
dcousens: if so, i get that on one of my machines when i use a certain combo of clang/libc and hardening enabled
< dcousens>
gmaxwell: did you want me to start culling params?
< gmaxwell>
dcousens: no, I'd like you to build with the lockorder debugging enabled.
< gmaxwell>
if it is an inversion like I'm guessing it should immediately tell us exactly where.
< cfields>
dcousens: oh, nm, i see the bt
< dcousens>
gmaxwell: what do I need to do to enable it?
< gmaxwell>
dcousens: run ./configure CXXFLAGS="-DDEBUG_LOCKORDER -g" then build like normal
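For reference, the lockorder debugging being enabled here works by remembering the order in which pairs of locks are first acquired and flagging any later acquisition in the reverse order. A toy single-threaded sketch of the idea (illustrative names only; Core's real implementation lives in sync.cpp):

```python
# Toy sketch of lock-order (inversion) detection, in the spirit of
# -DDEBUG_LOCKORDER; names here are illustrative, not Core's.
held = []            # locks currently held, in acquisition order
seen_orders = set()  # (first, second) pairs ever observed

def acquire(lock):
    for prior in held:
        if (lock, prior) in seen_orders:
            # we once took `lock` before `prior`; now the reverse: inversion
            raise RuntimeError(f"potential deadlock: {prior} -> {lock}")
        seen_orders.add((prior, lock))
    held.append(lock)

def release(lock):
    held.remove(lock)

acquire("cs_main"); acquire("cs_wallet")   # establishes cs_main -> cs_wallet
release("cs_wallet"); release("cs_main")
acquire("cs_wallet")
try:
    acquire("cs_main")                     # reverse order: gets flagged
except RuntimeError as e:
    print(e)
```

The point gmaxwell makes below follows from this: an inversion is reported the moment the second acquisition happens, with both lock names, so it pinpoints the exact location immediately.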
< dcousens>
OK, interestingly this doesn't happen on a fresh .bitcoin (tried locally)
< dcousens>
running the configure now
< dcousens>
scratch that, still happens on a fresh .bitcoin :) (yay)
< BlueMatt>
cfields: yea, I can take a look
< cfields>
BlueMatt: well, not much to show yet, just an idea...
< BlueMatt>
cfields: didnt you already show me this?
< cfields>
with that, CConnman holds an instance of the message processor interface. Taking that a step further...
< cfields>
BlueMatt: yea, just giving the context
< BlueMatt>
ok
< dcousens>
while its compiling, i've binary searched the arguments to when it doesn't happen...
< dcousens>
and, I have the argument that causes it :P
< dcousens>
-minrelaytxfee=0.00000001
< cfields>
BlueMatt: er, jump to pm?
< dcousens>
remove that arg
< dcousens>
everything works
< BlueMatt>
cfields: ok
< dcousens>
ping gmaxwell
< gmaxwell>
dcousens: sweet
< dcousens>
and the lock order is compiled
< dcousens>
want a full bt?
< gmaxwell>
just run it.. is it spitting out any errors?
< dcousens>
gmaxwell: no errors yet
< dcousens>
nothing seen, would it be a hard crash or a debug log error?
< dcousens>
in any case, neither
< gmaxwell>
It logs with the normal logging process.
< dcousens>
gmaxwell: for what its worth, you could probably check it out with just the above arg, a fresh compile off master causes that deadlock with that arg
< gmaxwell>
yep. will be doing as soon as it compiles here.
< dcousens>
like, fresh compile, fresh .bitcoin, deadlocks on -minrelaytxfee=0.00000001
< gmaxwell>
(takes like 15 minutes to compile bitcoin on my laptop)
< dcousens>
haha, the joy
< gmaxwell>
(and all my other systems are busy with longer running experiments at the moment)
< gmaxwell>
okay longer than 15 minutes.
< gmaxwell>
dcousens: your misbehavior doesn't reproduce for me. :(
< dcousens>
gmaxwell: :( - really?
< dcousens>
Dumb question
< gmaxwell>
dcousens: well it could require interaction with some other option, or some library/compiler version.
< dcousens>
It definitely fixes my issue, I mean, all my nodes are back up haha
< dcousens>
but, I'll retry local repro
< dcousens>
rm -rf ~/.bitcoin, git clean -xdf, ./autogen.sh, ./configure --disable-bench --disable-tests --disable-wallet --without-miniupnpc, make -j4, ./src/bitcoind -minrelaytxfee=0.00000001 ---- that was my previous steps, I'm retrying them now
< dcousens>
Otherwise, I guess I could maybe grab a list of some local package versions for you if you want, or try it out on a fresh ubuntu install
< dcousens>
but, I probably don't have that time atm :(
< dcousens>
(for the fresh install etc)
< dcousens>
gmaxwell: reproduced with the above commands
< gmaxwell>
hm. disable wallet. I could try that.
< dcousens>
commit 7d5d449
< dcousens>
gmaxwell: interesting
< dcousens>
-minrelaytxfee=0.1 does not deadlock
< dcousens>
-minrelaytxfee=0.00000001 does
< dcousens>
-minrelaytxfee=0.00000002 does not deadlock
< dcousens>
lol
< dcousens>
for god's sake, I reckon the binary just has it in for my configuration
< dcousens>
rounding error I suppose?
< dcousens>
Want me to bisect back to the source commit?
< gmaxwell>
dcousens: would be useful, since only you can break it right now.
< dcousens>
gmaxwell: must be platform related, since the hardware is completely different (EC2 instance, RPi, and my Laptop)
< dcousens>
but they all run Arch
< dcousens>
:P
< dcousens>
and yeah, only in the last... 1.5 weeks of dev, if that
< gmaxwell>
dcousens: disable wallet didn't break it for me.
< dcousens>
gmaxwell: awesome, the fact this happens with me for both gcc+clang makes me think it must be some other dependency?
< dcousens>
And cross-hardware
< dcousens>
man, net_processing touches so many files haha
< dcousens>
none the less, found the source of the bug so far, seems to be that static FeeFilterRounder
< dcousens>
if I change that to a constant fee != 1, it doesnt deadlock
< gmaxwell>
morcos: I think it might be a product of which peers I connect to.
< gmaxwell>
e.g. doesn't get triggered until I send a fee filter.
< morcos>
should we fix it by doing anything other than max(1,)
< gmaxwell>
I don't see any reason to do anything but that. But thats not my code. :) I'd have to understand it to answer!
< morcos>
yeah, i mean we could do something else entirely, its probably not the cleanest way to accomplish the goal, but i don't know that its worth doing anything other than that, i'll pr
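One mechanism consistent with the symptoms above (deadlock at 1 satoshi/kB but not at 2) is a geometric bucket loop whose starting boundary rounds to zero: halving 1 in integer arithmetic gives 0, and 0 times any spacing factor is still 0, so the loop spins forever while holding cs_main. A hedged Python sketch of that failure mode and the max(1,) fix; the constants and function names here are made up, not Core's:

```python
# Hypothetical sketch of the failure mode, not Core's actual policy/fees code.
FEE_SPACING = 1.1          # illustrative geometric spacing between buckets
MAX_FILTER_FEERATE = 1e7   # illustrative upper bound (satoshis per kB)

def make_fee_buckets(min_fee_limit, max_buckets=10_000):
    """Build geometric fee buckets; return None if the loop would spin forever."""
    buckets, boundary = [], min_fee_limit
    while boundary <= MAX_FILTER_FEERATE:
        if len(buckets) > max_buckets:
            return None        # 0 * 1.1 is still 0: the boundary never grows
        buckets.append(boundary)
        boundary *= FEE_SPACING
    return buckets

minrelay = 1                            # -minrelaytxfee=0.00000001 is 1 sat/kB
assert make_fee_buckets(minrelay // 2) is None        # 1 // 2 == 0: stalls
assert make_fee_buckets(2 // 2) is not None           # 0.00000002 halves to 1
assert make_fee_buckets(max(1, minrelay // 2)) is not None  # the max(1,) fix
```

This would also explain why -minrelaytxfee=0.00000002 and larger values never deadlocked: any starting boundary of at least 1 grows geometrically and terminates.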
< gmaxwell>
we could keep the behavior "this is what you get for setting your relayfee to 10 nanobitcoin and spamming other folks" :P
< morcos>
Its an optimization itself, this is your nodes expected performance profile
< gmaxwell>
"We've decided to save you the bandwidth and just give you a simulation of the expected cpu usage."
< gmaxwell>
really since the introduction of fee filter, low minrelay fees aren't so anti-social anymore at least.
< dcousens>
gmaxwell: IMHO, all I really would want as policy is IsStandard, and a maximum in-memory size, sorted by fee descending (accounting for CPFP etc)
< dcousens>
hence, the setting of 1 haha
< dcousens>
gmaxwell: I wouldn't mind seeing double spends in the mempool too, but, I guess that would annoy too many people
< dcousens>
and probably opens too many DOS vectors
< gmaxwell>
well double spends in the mempool would break various assumptions in the code. :)
< gmaxwell>
dcousens: thats not unreasonable since the advent of feefilter.
< dcousens>
morcos: line 304 could probably be bounded too, just in case MIN_FEERATE was changed
< wumpus>
so did anyone compile bitcoin for risc-v yet? :)
< bitcoin-git>
[bitcoin] gmaxwell opened pull request #9290: Make RelayWalletTransaction attempt to AcceptToMemoryPool. (master...resend_retries_mempool) https://github.com/bitcoin/bitcoin/pull/9290
< bitcoin-git>
[bitcoin] sipa opened pull request #9291: Remove mapOrphanTransactionsByPrev from DoS_tests (master...reallyoneorphan) https://github.com/bitcoin/bitcoin/pull/9291
< warren>
wumpus: there's machines available for it?
< gmaxwell>
Does anyone recall why we decided to not backport #9053 ? I was just thinking about maybe it should be backported, then saw it was tagged and untagged.
< gribble>
https://github.com/bitcoin/bitcoin/issues/9053 | IBD using chainwork instead of height and not using header timestamps by gmaxwell · Pull Request #9053 · bitcoin/bitcoin · GitHub
< wumpus>
warren: I just ordered a hifive board, but that's arduino-class, nothing that can handle bitcoind IIRC. There's a toolchain and qemu (also linux user) though so it'd be possible to compile and run it
< sipa>
gmaxwell: i think we considered it for 0.13.1, but decided it wasn't important enough for that?
< wumpus>
warren: maybe if you use a RiscV core on one of the high-end FPGAs
< gmaxwell>
TD-Linux was telling me recently he planned on having an actual RiscV chip soon... but not one that would be anywhere near powerful enough to run bitcoin core. :P
< wumpus>
gmaxwell: yeah, if there's no discussion on the issue it probably means it was discussed in a meeting
< jonasschnelli>
<*highlight>[18:40:09] <morcos:#bitcoin-core-dev> jonasschnelli: you around?
< jonasschnelli>
Now. You?
< gmaxwell>
12:57 <@wumpus> I think personally I'd prefer to keep it for 0.14, so the new rule/logic can prove itself a while
< jonasschnelli>
ref: [18:40:09] <morcos:#bitcoin-core-dev> jonasschnelli: you around?
< gmaxwell>
okay, yep. sorry, hamster brain here.
< wumpus>
gmaxwell: well it has been a while in master now ...
< wumpus>
gmaxwell: I think that was just last-minute before a release
< jonasschnelli>
gmaxwell: I have a solution for the HD chain-split (95% complete). I think it could be backwards compatible. Meaning, if a 0.13 wallet is loaded, only external (m/0'/0'/k) keys will be generated. Creating a new wallet will respect the 0'/1' switch.
< gmaxwell>
I don't have a really strong opinion, but I did just have a 0.13.1 testnet node act goofy today in a way that change fixes, so it came back to mind.
< jonasschnelli>
Loading the 0.14 wallet in 0.13 will result in using only the external chain.. or we disable that.
< jonasschnelli>
(probably better to disable)
< gmaxwell>
jonasschnelli: but if you go back to an older version do funds silently disappear? or .. I guess we'll just not be backwards compatible?
< wumpus>
I do think we should only backport critical things, not things that 'would be nice' but are risky. Also if it *only* affected testnet it'd be a no-brainer.
< jonasschnelli>
Once the keypool is filled, the keypath doesn't matter.
< jonasschnelli>
I think we should probably disable loading 0.14 (hd split) wallets in 0.13. Because once you load them in 0.13, you take the risk of having change outputs in the external chain.
< jonasschnelli>
This can be confusing.
< wumpus>
this also has an effect on mainnet, it'd be a pain if it introduced some regression, though that chance is lower than at Nov 3 which was two days after it was opened :)
< gmaxwell>
wumpus: well it fixes bad behavior that is possible on mainnet too just really unlikely and not something we're currently observing.
< wumpus>
it's been merged in master for more than a month and no one at least observed any wacky behavior due to it, so I'd trust it for backport now
< wumpus>
had already hit enter then I saw it appear at the same time :p
< wumpus>
yes luckily just a help message
< sipa>
well the comment is more about showing an error when an unknown value is passed
< sipa>
i'll PR my suggested change
< jonasschnelli>
sipa: Oh. Yes.
< jonasschnelli>
For the chain-split, what would be preferable: keypool=100 resulting in 50 internal and 50 external keys, or in 100/100?
< bitcoin-git>
[bitcoin] sipa opened pull request #9292: Complain when unknown rpcserialversion is specified (master...nofutureserial) https://github.com/bitcoin/bitcoin/pull/9292
< sipa>
jonasschnelli: internal keys need way fewer
< jonasschnelli>
sipa: I guess it's 2.30am for you as well?
< sipa>
jonasschnelli: no, i'm on the west coast, 23:28 here
< gmaxwell>
no, the US is a big place.
< jonasschnelli>
Ah. Right. He's NY.
< gmaxwell>
US timezones span 7 hours IIRC.
< sipa>
there's a tiny part of the US on the other side of the international date line
< jonasschnelli>
sipa: But in general, should keypool=<value> result in <value>=<internal>+<external> or <value>=<internalamount>, <value>=<externalamount>?
< gmaxwell>
yea, I wasn't counting guam.
< gmaxwell>
in that case they span like 20 hours. :P
< sipa>
gmaxwell: you don't need to count territories. there is a tiny part of alaska across the date line
< sipa>
ah, but it still uses western time. ignore me
< sipa>
Excluding merges, 33 authors have pushed 160 commits to master and 175 commits to all branches. On master, 288 files have changed and there have been 13,187 additions and 10,679 deletions.
< wumpus>
nice!
< wumpus>
BlueMatt is clearly winning this month :)
< sipa>
also the last week and the last 3 days
< sipa>
but over a month... 112 PRs merged, and 50 PRs opened
< sipa>
that's pretty good
< bitcoin-git>
[bitcoin] gmaxwell opened pull request #9293: [0.13 Backport] IBD using chainwork instead of height and not using header timestamp (#9053) (0.13...9053_backport) https://github.com/bitcoin/bitcoin/pull/9293
< gmaxwell>
based on the prior discussion ^, if we decide we don't want to-- I won't be offended. But I figured I wouldn't ask about it then fail to do the work.
< gmaxwell>
(it wasn't any work, however)
< wumpus>
great
< bitcoin-git>
[bitcoin] jonasschnelli opened pull request #9294: Use internal HD chain for change outputs (hd split) (master...2016/12/hd_split) https://github.com/bitcoin/bitcoin/pull/9294
< bitcoin-git>
bitcoin/master fa2ecc4 MarcoFalke: [qa] pruning: Use cached utxo set to run faster
< bitcoin-git>
bitcoin/master fab1af3 MarcoFalke: [qa] maxuploadtarget: Use cached utxo set
< bitcoin-git>
bitcoin/master 919db03 MarcoFalke: Merge #9274: [qa] Use cached utxo set to fix performance regression...
< bitcoin-git>
[bitcoin] MarcoFalke closed pull request #9274: [qa] Use cached utxo set to fix performance regression (master...Mf1612-qaPruningCacheUtxos) https://github.com/bitcoin/bitcoin/pull/9274
< wumpus>
jonasschnelli: yeah that comment is definitely wrong
< wumpus>
jonasschnelli: it should raise an exception, not crash the application
< jonasschnelli>
wumpus: Yes. Will fix that soon,
< * wumpus>
has built a riscv64 toolchain, then a riscv64 bitcoind and is running it in qemu-riscv64 (linux userspace), now syncing the chain with it. This was surprisingly easy with the depends system, bar 1 line I had to add for openssl configuration and -lpthreads in the LDFLAGS it just worked!
< bitcoin-git>
bitcoin/master 89a3723 Jonas Schnelli: [Qt] Show ModalOverlay by pressing the progress bar, disabled show() in sync mode
< bitcoin-git>
bitcoin/master e15660c Jonas Schnelli: Merge #9280: [Qt] Show ModalOverlay by pressing the progress bar, allow hiding...
< bitcoin-git>
[bitcoin] jonasschnelli closed pull request #9280: [Qt] Show ModalOverlay by pressing the progress bar, allow hiding (master...2016/12/qt_modal) https://github.com/bitcoin/bitcoin/pull/9280
< bitcoin-git>
[bitcoin] jonasschnelli opened pull request #9295: [Wallet] Bugfix: Fundrawtransaction: don't terminate when keypool is empty (master...2016/12/fix_frt) https://github.com/bitcoin/bitcoin/pull/9295
< MarcoFalke>
jonasschnelli: Does this fix the segfault on master?
< jonasschnelli>
Which Segfault?
< jonasschnelli>
It should fix an assert termination
< jonasschnelli>
not sure if this can lead to a segfault.
< jonasschnelli>
Hmm... that one probably needs further investigation.
< paveljanik>
MarcoFalke, I can confirm the same problem here.
< jonasschnelli>
paveljanik MarcoFalke: do we have steps to reproduce?
< jonasschnelli>
Or a failing rpc test snippet?
< MarcoFalke>
Interestingly the rpc tests do not fail, it seems
< paveljanik>
jonasschnelli, only in GUI.
< MarcoFalke>
only the gui, really odd
< paveljanik>
Just Send something somewhere.
< paveljanik>
after waiting for Yes(2) ... Yes. Click on Yes, BUMP
< afk11>
there are some others that wind up exposed through bitcoinconsensus.h.. anyone wish to advise if we should have those functions check if an impossible situation is about to arise before calling VerifyScript?
< afk11>
or consumers of bitcoinconsensus.h?
< afk11>
sorry, others = assertions
< jonasschnelli>
paveljanik MarcoFalke: wait, it happens >always< on any tx in the GUI?
< paveljanik>
both of my tries finished with SIGSEGV
< jonasschnelli>
I can send to myself.. but not directly on master (some commits behind)
< morcos>
I panicked a little because I thought it was potentially corrupting the wallet at first, but what is getting committed to the wallet is fine, it was just serialization for the GUI that was messed up..
< MarcoFalke>
So "needs backport" and "0.14.0" does not make sense :)
< morcos>
jonasschnelli: yes, i left a few comments on irc yesterday re: 8501.
< jonasschnelli>
morcos: I think there can be a problem with storing too many timestamps... is that what you thought?
< morcos>
it just seemed to me that the idea of storing 1000 seconds, 1000 minutes, 1000 hours and 1000 days (with no time stamps) is perfectly adequate for any stat, but maybe not? (or 2000)
< jonasschnelli>
Yes. But that would require a fixed frequency and actively holding a lock. No?
< morcos>
i think the question would be what to do if some number of seconds go by before you record a new data point... i would argue that you want to differentiate between recording a repeated data point and not updating the stat
< morcos>
jonasschnelli: not the way i'm imagining it
< morcos>
lets say you had a way to indicate no value recorded.
< jonasschnelli>
morcos: Yes. But I guess you still blow memory up to the size of the samples structure?...
< morcos>
and look for a moment at the 1000 seconds data points... you have a circular buffer of 1000 entries
< morcos>
and every time you call it, say you call it 17 seconds later.. you put in 16 NA's and then the latest data point
< morcos>
so then graphing can be smart about interpolating
< jonasschnelli>
hm...
< morcos>
but if you call it every second with the same data point, it would put it in, so you could distinguish between a step function and a slow increase
< jonasschnelli>
So you would not care if the 17-second sample was done at 17.5s?
< morcos>
yes, it would use constant memory
< morcos>
but not that much
< morcos>
a few MB
< jonasschnelli>
I think the possible time offset could matter at the 1000 hours circular buffer.
< morcos>
right, so you'd have to figure out how to cover that.. but i think thats doable.. suppose the data point for 12:14:16 means the latest data point that you have before 12:14:16.00000000
< jonasschnelli>
Assume sample 1 was done at 0h, sample 2 at 0.6h (=1h sample), sample three at 1.4h.
< jonasschnelli>
What speaks against storing a sample with a delta timestamp (1 byte) relative to the last sample?
< jonasschnelli>
Could be something like a linked list with NA's
< jonasschnelli>
*linked list
< morcos>
then if you call it at 14.93 (that goes in the 12:14:15 data point), you save just the latest call every time: if you call it at 15.03, 15.37, 15.87, and then when you call it at 16.05, you put the 15.87 data point in for 12:14:16
< morcos>
i think you could also do linked lists with delta time stamps, but i think the delta time stamps don't add much
< morcos>
i guess i'm envisioning that for things with 1h sampling or 1day sampling you'll be calling it much more frequently anyway
< jonasschnelli>
Maybe the advantage of delta timestamps is: more accurate (no interpolation in the recorded data), flexible memory cap.
< morcos>
so you'll get a data point pretty close to the cut off
< jonasschnelli>
sipa once had the idea to remove in-between samples once they are old and stats-memory gets drained.
< morcos>
but are you envisioning that the part of the program thats making the call to record the sample is doing it separately for each timescale?
< morcos>
well the design i'm trying to describe would never even keep "in-between" samples
< jonasschnelli>
morcos: IMO the stats infrastructure should not actively record stuff. It should offer an interface to store records based
< morcos>
yes, i understand
< jonasschnelli>
"in-between" samples is probably the wrong term. Sorry.
< morcos>
so in your mempool code, you call stats.record(SomeStats)
< jonasschnelli>
Let's assume you record samples in a 1s frequency...
< jonasschnelli>
then your reserved stats memory gets low
< morcos>
but i'm saying that you can then record those however often you want from the mempool
< jonasschnelli>
You could remove some of the old 1s samples...
< morcos>
and the stats class will save the last 1000 seconds, 1000 mins, 1000 hrs and 1000 days of data points for you
< morcos>
it removes the old 1s samples b/c its a circular buffer
< jonasschnelli>
Yes. But maybe someone wants to keep 1s stats longer.. in 8501 I introduced a -statsmaxmemory
< morcos>
ok, well that was my argument, is that no one would care about keeping 1s stats for longer than say 1000 or 2000 secs
< morcos>
if they wanted to look at a longer time period, they would look at minutes
< jonasschnelli>
Thats a point. Right
< morcos>
if that is false, then i agree
< jonasschnelli>
My experience tells me, record as much as possible, :-)
< jonasschnelli>
I would even say a stats to disk dumper would be great
< jonasschnelli>
Especially stuff that can't be reconstructed
< jonasschnelli>
(feerates, etc.)
< morcos>
the concern i had about your design was that if we are recording lots of stats
< morcos>
then we lose the ability to look over a long time period without using tons of memory
< morcos>
i don't really care how we do it
< jonasschnelli>
Yes. That is not done well in 8501
< morcos>
but i think the stats class should automatically keep less fine-grained data for a longer time in order to keep the memory limited
< morcos>
i was just presenting one way to do that
< jonasschnelli>
The idea was to slowly stretch the frequency over time, depending on the available stats ram.
< jonasschnelli>
Not just cut the back
< morcos>
but don't you think you might want to look at both?
< jonasschnelli>
Yes.
< morcos>
high frequency recent and low frequency long time horizon
< morcos>
ok... so thats my goal...
< jonasschnelli>
I think you convinced me to do the 1000s 1000m 1000h 1000d approach.
< jonasschnelli>
maybe the 1000 is configurable.
< morcos>
doesn't matter to me how we do it... i think a delta version could be just as good
< morcos>
and you could just be smart about trimming the delta list or something
< morcos>
yes, 1000 should be configurable i think... actually maybe isn't enough for a default
< jonasschnelli>
also, I liked the configurability of the buffer in MB.
< jonasschnelli>
That's what you probably care about.
< jonasschnelli>
Not the 1000
< morcos>
1000 secs is just 16 minutes... you would not want to have only 16 minutes of data points
< jonasschnelli>
You would say, I reserve 300MB for stats.
< jonasschnelli>
Right... just en
< jonasschnelli>
just as an example
< jonasschnelli>
So,.. you convinced me for high frequency recent and low frequency long time horizon,...
< morcos>
ok cool... any approach that automatically keeps both recent fine-grained and long time horizon bigger step, is fine with me
< jonasschnelli>
And I think the delta timeoffset thing could work.
< morcos>
thanks
< jonasschnelli>
It can be tricky if you cut samples not at the back
< jonasschnelli>
You need to re-adjust the delta.
< jonasschnelli>
Losing a sample will corrupt the timestream
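The fixed-memory ring-buffer scheme morcos describes above (pad every missed second with NA, keep only the latest call per slot) could be sketched roughly like this; this is a hypothetical illustration, not the 8501 code:

```python
NA = None  # marker for "no sample this second"

class RingStats:
    """Fixed-memory per-second ring buffer: gaps are padded with NA so a
    repeated reading and a missing reading stay distinguishable."""
    def __init__(self, size=1000):
        self.size = size
        self.buf = [NA] * size
        self.last_t = None

    def record(self, t, value):
        if self.last_t is not None:
            # e.g. called 17 seconds after the last call: pad 16 slots with NA
            for missed in range(self.last_t + 1, t):
                self.buf[missed % self.size] = NA
        self.buf[t % self.size] = value
        self.last_t = t

s = RingStats(size=1000)
s.record(0, 5.0)
s.record(17, 6.0)   # seconds 1..16 hold NA, second 17 holds 6.0
```

Four such buffers (seconds, minutes, hours, days) would give the 1000s/1000m/1000h/1000d scheme in constant memory, i.e. "high frequency recent and low frequency long time horizon".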
< timothy>
kanzure: you can still build using mingw32 on linux :P
< instagibbs>
is there a written down explanation of the lock macros anywhere
< Chris_Stewart_5>
sipa: re: yesterdays convo about syncing, seems like another guy had a similar experience that I did in #bitcoin, not sure if this is a concern or not
< sipa>
Chris_Stewart_5: that sync is slow? of course it is a concern
< sipa>
but we can't just go double our memory requirements
< gmaxwell>
the speeds Chris_Stewart_5 was reporting sounded reasonably slow, but yes-- the processing is slowing down as the chain grows and will continue to do so.
< gmaxwell>
We have been fighting that effect for years and will continue to do so, but nothing changes the fact that more state means more work.
< Chris_Stewart_5>
gmaxwell: Yes, I was wondering if this was average or extremely slow. I thought I read something about it being on the order of 12 hours -- everything you read online is accurate right? :P
< owowo>
I synced today, was about 4k block behind, but it was quite fast.
< morcos>
gmaxwell: can we duel about SelectCoinsMinConf(.., #ancestors, ..)?
< morcos>
both sdaftuar and i think that the most important thing for your goal is to try low values for number of ancestors
< morcos>
and lets for the moment just worry about the default limit of 25... then you are suggesting 1,6,13,25
< morcos>
sorry 0,6,13,25 (eh, 0 or 1 doesn't mean anything anyway though b/c you skip anything >= n and everything has at least 1 , it counts itself)
< morcos>
so i think emphasizing more tries in the low limits helps keep any individual chain small and therefore helps avoid polluting small chains by combining them with big ones
< morcos>
so trying 2,3,4,25 might even be better than 2,6,13,25
< morcos>
anyway thats the tradeoff sdaftuar and i were trying to suggest, emphasize trying the low numbers
< gmaxwell>
morcos: I agree with you, I didn't perform that bit of shed painting myself because I didn't think it mattered _that_ much. The only reason I commented there was because instagibbs changed it in a way that was clearly wrong for some values. :)
< morcos>
yeah but all it is is a bit inefficient for non-default limits
< morcos>
no one has non-default limits
< morcos>
who cares
< morcos>
its not broken
< gmaxwell>
I don't care about inefficient, but when he changed the patch he hardcoded one to 3 which could be higher than your limit.
< morcos>
but that doesn't hurt anything
< gmaxwell>
say you've set your limit to two, it will cause you to go past it.
< morcos>
it just lets you potentially pick coins which will let you create a tx which will fail the final test.. thats already a possibility anyway..
< morcos>
the actual test is way more restrictive, none of your ancestors can have more than the descendant limit
< gmaxwell>
Personally, if I wrote it, I would have done 1,2,3,max-2,max-1,max or something like that.
< morcos>
we're not checking that in this quick heuristic anyway
< morcos>
so you always might be picking coins with which you have no chance , you just don't know
< morcos>
but better to have multiple tries with things which have a good heuristic
< gmaxwell>
you might be, but that isn't the pratical case people are encountering.
< gmaxwell>
I don't know where we disagree.
< morcos>
mostly i think its fine as is, and your suggestion is a slight downgrade.. but i agree its mostly bikeshedding i suppose
< gmaxwell>
hm. if the limit is set to 2.. why should it check 3? would you agree that min(3,limit) wouldn't be better?
< morcos>
sure
< gmaxwell>
I think it absolutely should spend more tries on low numbers, because say it runs with limit/4, then it combines two inputs at depth 2 and depth limit/2 .. great, now you've just trashed a short chain.
< morcos>
exactly
< gmaxwell>
but for that e.g. spacing by a multiplicative factor at least won't screw you by more than that factor.
< morcos>
i'm fine adding a min, i just think making it work well for a limit of 25 is fine, as long as its not horribly broken for other limits, which it won't be b/c it'll still try 2 first
< gmaxwell>
okay we agree then.
< morcos>
2, min(4,limit), something in between, limit
< morcos>
:)
< gmaxwell>
I posted min(2,limit/4), min(4, limit/3), limit/2, limit
< instagibbs>
is it safe to come out of my bikeshelter :)
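Worked out for the default limit of 25, the ladder gmaxwell posted above comes to 2, 4, 12, 25. A quick illustrative check (integer division is an assumption here, and this is not the merged code):

```python
def ancestor_tries(limit):
    # the ladder gmaxwell posted above; integer division is an assumption
    return [min(2, limit // 4), min(4, limit // 3), limit // 2, limit]

print(ancestor_tries(25))   # -> [2, 4, 12, 25] for the default limit
print(ancestor_tries(2))    # small limits never exceed the configured cap
```

Note how it matches both sides of the discussion: extra tries concentrate at the low end to avoid trashing short chains, and every rung is bounded by the configured limit.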
< sipa>
cfields: i've been playing with -flto lately, and you can use -flto=jobserver at link time to make gcc itself perform the linking step in multiple processes... the problem is that this requires prefixing the compiler invocation in make with a '+' so make knows to expose its jobserver interface to it
< gmaxwell>
instagibbs: haha
< sipa>
cfields: any idea how to accomplish that in automake. it doesn't seem that CXX="+g++" works
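What sipa is describing: GNU make only shares its jobserver with recipes it treats as recursive, either because they invoke $(MAKE) or because the recipe line is prefixed with '+'. In a hand-written Makefile the change is one character (sketch only; target and variable names are illustrative, and pushing this through automake's generated link rule is the open question):

```make
# '+' tells make to treat the recipe like a sub-make, passing its jobserver
# file descriptors so that gcc's -flto=jobserver can link in parallel.
bitcoind: $(OBJS)
	+$(CXX) $(CXXFLAGS) -flto=jobserver -o $@ $(OBJS) $(LIBS)
```

This is why setting CXX="+g++" can't work: the '+' is a make recipe marker, not part of the compiler command, so it has to end up at the start of the recipe line itself.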
< gmaxwell>
morcos: I really try to not bikeshed this stuff too much, because it's just a hop-skip-and-jump away from arguing that we should be linking GLPK.
< morcos>
heh
< morcos>
well sdaftuar and i played around with testing the PR, and it kind of performed poorly
< morcos>
if you started with two inputs
< morcos>
instead of getting 50 txs, you only got 30 (whereas before you got 25)
< morcos>
this was because we were sending to ourselves so the outputs were being combined quickly and we were trashing chains, which led us to prefer this small change approach.. obviously as sdaftuar reported, it works much better when the test doesn't pay yourself
< morcos>
s/small change approach/emphasis on small limits approach/
< gmaxwell>
I'm glad you tested it-- I anticipated that result but I was trying to resist bikeshedding it and making it not go into 0.13.2.
< morcos>
i think with a large set of coins though, it'll really shine as it'll stop you from just being stupid
< morcos>
btw, it would be helpful for me if more people went through and tagged things for 0.14, i could review some more things, and would like to know where to concentrate...
< morcos>
i think i've done what i can on the existing milestones
< instagibbs>
ok, putting final touches on the selection, thanks for all the feedback
< instagibbs>
(also am I the only one getting spammed to hell by phishing PMs?)
< gmaxwell>
instagibbs: yes.. you can /mode instagibbs +R to turn off pms from unregistered users.
< instagibbs>
ah thx
< gmaxwell>
remember to turn it back off again later, if you want j-random-people to pm you later.