< DrBenway>
sipa: i dont have one. but im also not a project
< sipa>
neither am i
< DrBenway>
o_O
< DrBenway>
friendly community
< sipa>
i volunteer my time, and i'll gladly tell you what i'm working on or excited about
< sipa>
but i can't tell people what they need to work on, or guarantee what they will prioritize
< sipa>
that's up to them
< DrBenway>
so what are you working on?
< jonasschnelli>
promag: currently not possible...
< jonasschnelli>
though adding a check in getwalletinfo would be trivial
< jonasschnelli>
just test if you can reserve via the WalletRescanReserver
< sipa>
DrBenway: i'm currently working on segwit wallet support in bitcoin core, reviewing many other changes, and longer term i'm working on a signature aggregation proposal and a few further out cryptographic constructions
< eck>
what are you excited about?
< DrBenway>
is bitcoin core going ahead with segwit? there's been so much back and forth that im not sure anymore
< meshcollider>
DrBenway: Segwit has been activated for months... you are probably getting confused with S2X which is definitely not going ahead, no
< DrBenway>
so currently signatures are not stored with the block itself? or there's an extended block?
< sipa>
DrBenway: segwit absolutely keeps all signatures in blocks
< meshcollider>
Signatures are stored within the block yes
< sipa>
they're just moved to another place, and hashed slightly differently
< DrBenway>
i thought the whole idea of segwit was that the signature would go in an extended block? (im not sure where that extended block ends up in the block chain)
< DrBenway>
ok
< sipa>
no
< sipa>
the point is (1) signatures are not committed to by transaction ids (but still included in blocks) and (2) are discounted for the purposes of resource limits
< sipa>
but if you download a block, it still has all the signatures
< sipa>
it'll be invalid without them
< DrBenway>
sure
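sipa's point (1) — the txid no longer commits to the witness, while blocks still carry it — can be sketched with a toy serialization. This is illustrative only: the field layout and byte strings below are invented, not the real BIP141 encoding.

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    # Bitcoin-style double-SHA256
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

# Toy transaction fields -- NOT real consensus serialization.
core = b"version|inputs|outputs|locktime"
witness = b"signature_data"

txid = dsha256(core)             # witness excluded: malleating a signature
wtxid = dsha256(core + witness)  # cannot change the txid, only the wtxid

assert dsha256(core) == txid                  # txid independent of witness
assert dsha256(core + b"other_sig") != wtxid  # wtxid does change
```

The block itself is still invalid without the signatures; they're just hashed separately so signature malleation can't change transaction ids.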
< cfields>
NicolasDorier: ping. transaction_tests/test_big_witness_transaction takes 20sec to sign the inputs on x86. I suspect it takes minutes on travis. Suggestions for reducing that without defeating the purpose of the test?
< DrBenway>
and that was done as a means of reducing data in case the signature is used several times within a single block?
< meshcollider>
DrBenway: no, signatures can't be reused for different transactions
< meshcollider>
those are used for very different things though?
< echeveria>
not really. at the moment there's a load of suboptimal ways of getting work updates from Bitcoin Core, adding a GBT endpoint means you don't need to poll or do any roundtrips to RPC.
< echeveria>
the status quo at the moment is using -blocknotify to trigger a RPC call, which involves spawning a shell, making a HTTP connection, and the RPC request time.
< promag>
echeveria: you can use pubrawblock
< promag>
ops, pubhashblock
< promag>
I don't think it's a good idea to have a gbt notification
< echeveria>
promag: that still needs a round trip.
< promag>
echeveria: how would you define template_request of gbt?
< promag>
what is the problem of the round trip?
< echeveria>
template_request?
< promag>
gbt argument
< sipa>
promag: ?
< echeveria>
promag: none of those arguments are necessary.
< promag>
not sure if a notification is the right thing
< promag>
current notifications are things that "happened" where what you want is to pub a computation (a heavy one?)
< sipa>
echeveria: you're saying to compare with -blocknotify... but you can use GBT over RPC, and use ZMQ notifications too
< sipa>
i would certainly advise against using -blocknotify
< echeveria>
by 'seems to be a win' I meant the code is written and running.
< promag>
right, that's why I said to use pubhashblock
< promag>
echeveria: even if that was possible, you should keep the gbt thru rpc. don't rely only on zmq notifications.
< promag>
for instance, with rpc you get errors
< echeveria>
not seeing a scheduled ZMQ frame is also an error.
< sipa>
ZMQ is unreliable
< promag>
I'm not talking about those errors. for instance, if the node crashes, you will be sitting there waiting for notifications...
< promag>
with RPC you can measure the request duration, trigger something if above a certain value, etc it's much more expressive than zmq notifications
< promag>
zmq notifications are cool to avoid polling or the nasty process spawn, but then use the existing interface
< promag>
the roundtrip should not be a problem imo
< morcos>
gmaxwell: what is the issue with expired transactions keeping the relay fee from going up? why do you want the relay fee to go up and how will it go up faster with your idea?
< morcos>
at first glance it doesn't make much sense to me. for instance i just today placed some transactions that were 120 sat/byte. i'm assuming they'll get confirmed over christmas weekend. the whole point of longer estimates is people might be able to wait for the weekly cycle so the mempool should be big enough to handle that
< echeveria>
promag: not receiving a scheduled message, or missing a sequence number clearly indicates that.
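The sequence number echeveria mentions is the extra frame Bitcoin Core's ZMQ publisher appends to every notification (topic, payload, then a 4-byte little-endian counter, per the project's zmq.md). A minimal gap-detection sketch using synthetic frames rather than a live socket:

```python
import struct

def parse_hashblock(frames):
    """Split a pubhashblock message: topic, 32-byte block hash, LE uint32 sequence."""
    topic, body, seq = frames
    assert topic == b"hashblock"
    return body.hex(), struct.unpack("<I", seq)[0]

last_seq = None
for frames in [
    [b"hashblock", bytes(32), struct.pack("<I", 0)],
    [b"hashblock", bytes(32), struct.pack("<I", 2)],  # sequence 1 was dropped
]:
    _, seq = parse_hashblock(frames)
    if last_seq is not None and seq != last_seq + 1:
        print("gap detected: missed", seq - last_seq - 1, "message(s)")
    last_seq = seq
```

A missing sequence number (or a timed-out expected message) is what tells the consumer to fall back to a full RPC resync, which is why ZMQ alone isn't a reliability mechanism.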
< morcos>
on top of that, i think you want to be able to do CPFP and its nice to have the stuck transactions still in mempools
< gmaxwell>
morcos: relay fee now appears to be unrealistically low. e.g. we're regularly wasting bandwidth on transactions that are not going to confirm before they expire.
< morcos>
gmaxwell: ahh.. i think thats ok.
< promag>
echeveria: not receiving a scheduled message - you mean you timeout when no notification arrives?
< gmaxwell>
what I'm suggesting is that the only way transactions that aren't making it to the top of your mempool would expire is if they're evicted due to low fee. So they'd be there for CPFPing.
< echeveria>
promag: yes.
< morcos>
hopefully we wrote it up in the PR that did mempool limiting, but we were aware of that issue, but the amount of free relay that can be achieved that way is limited (and is not that much)
< morcos>
gmaxwell: i'm confused. i thought you wanted to expire/evict/remove them if they haven't been in the top 4M weight for 48 hours?
< morcos>
oh but then you want to make that the mempool min fee?
< gmaxwell>
no, the other way around. I want to have a counter on each txn that counts roughly how long it's been in the top 4MB weight, and only expires once that goes over 48 hours.
< promag>
echeveria: not the best approach though, worst case you'd wait 10min until you do something
< gmaxwell>
because if it's in the top 4mb and not getting mined, then it's been softforked out.
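gmaxwell's proposal above can be sketched roughly: keep a per-transaction counter of time spent in the top block's worth of weight, and expire only once that exceeds the threshold. A hypothetical simulation — the function and field names are invented, not Core code:

```python
# Rough sketch of the idea: expire a tx only after it has spent 48h inside
# the top 4M weight of the mempool without being mined. Names are made up.
MAX_BLOCK_WEIGHT = 4_000_000
EXPIRY_HOURS = 48

def tick_top_block_counters(mempool, hours_elapsed):
    """mempool: list of dicts with 'feerate', 'weight', 'hours_on_top'."""
    total = 0
    for tx in sorted(mempool, key=lambda t: t["feerate"], reverse=True):
        if total + tx["weight"] <= MAX_BLOCK_WEIGHT:
            tx["hours_on_top"] += hours_elapsed
        total += tx["weight"]
    # drop anything that lingered unmined in the top block for 48h
    return [tx for tx in mempool if tx["hours_on_top"] < EXPIRY_HOURS]

mempool = [
    {"feerate": 100, "weight": 3_900_000, "hours_on_top": 47},  # about to expire
    {"feerate": 1, "weight": 500_000, "hours_on_top": 0},       # never near the top
]
mempool = tick_top_block_counters(mempool, hours_elapsed=2)
```

A low-feerate tx that never reaches the top block's worth of weight never ticks its counter, so it stays available for CPFP until fee-based eviction gets it.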
< echeveria>
promag: 5 seconds.
< morcos>
ohhhh
< morcos>
wow i misunderstood, ok, so you want to get rid of expiration, but need to solve for the unminable txs
< morcos>
yeah that makes way more sense
< promag>
you mean you gbt each 5 seconds?
< echeveria>
yes.
< promag>
so why do you need zmq?
< echeveria>
this is not unexpected, ckpool does GBT every 100ms in some modes.
< gmaxwell>
morcos: yes, I want to get rid of expiration but don't want the risk of your mempool getting filled up with high fee unminable coins.
< echeveria>
promag: please try and consider what you're arguing. it's clearly ludicrous to poll GBT RPC every 5 seconds, the amount of work wasted would be colossal.
< gmaxwell>
also for unmineable tx, two weeks is a horrific amount of time to keep them around anyways...
< echeveria>
promag: pushing GBT on UpdateTip, and every 5 seconds is clearly different.
< promag>
how is that less work?
< echeveria>
there's no round trips, and I'm not sequestering cs_main every 100ms.
< promag>
every 100ms or 5s?
< morcos>
gmaxwell: it's an interesting idea, but i'm not sure how large the problem is that you're trying to address. at least with mempool expiration there is a cumbersome mechanism to replace a non-RBF tx
< echeveria>
> ckpool does GBT every 100ms in some modes.
< promag>
so in those modes with zmq there would be a zmq notification each 100ms?
< promag>
I mean, even to pub the notification you have to acquire cs_main like gbt
< echeveria>
( poll RPC every 100ms | ZMQ on UpdateTip + every 5 seconds) two different things.
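The gap echeveria is pointing at is easy to put numbers on. A rough back-of-envelope, assuming an average of 6 blocks per hour (all figures illustrative):

```python
# Work per hour: polling GBT over RPC vs. push on new-tip plus a slow refresh.
POLL_INTERVAL_S = 0.1   # poll GBT over RPC every 100ms
PUSH_REFRESH_S = 5      # push-based: one template per tip + one every 5s
BLOCKS_PER_HOUR = 6     # assumed average

polling_calls = int(3600 / POLL_INTERVAL_S)                    # RPC round trips
push_templates = BLOCKS_PER_HOUR + int(3600 / PUSH_REFRESH_S)  # no round trips

print(polling_calls, push_templates)  # 36000 726
```

Roughly 36,000 cs_main-grabbing round trips per hour versus about 726 pushed templates, which is the sense in which polling at 100ms "wastes colossal work".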
< morcos>
without expiration, then you kind of have to hope it gets evicted, and then you have nodes with larger mempools actually having a worse picture of things b/c they don't accept the replacement
< morcos>
would make more sense in a full-rbf world
< gmaxwell>
morcos: I'm not sure yet if 1s/vb is a feerate that will never confirm... but I do think we don't want min fee to be frequently below the never-confirm rate.
< aj>
could just continue to expire non-rbf txes after a week?
< morcos>
gmaxwell: i agree with that, except what we really want is the incremental rate to not be below that.. not just the mempool min fee, where the incremental rate is the floor for minrelay
< morcos>
and the bump requirement for RBF and mempool min fee after eviction
< gmaxwell>
Speaking of RBF, I've been thinking about the RBF pinning problem, and think we could solve it by having a flag set on a transaction such that an unconfirmed spend of its outputs is only allowed in the mempool if the resulting package feerate would be near-confirmation.
< morcos>
so perhaps we want a way to set incremental fee... but i'm just not sure yet how to do that
< aj>
gmaxwell: noticing you're seeing unmined transactions that you think seem valid might be a useful warning indicator of weird things going on (eg, it might mean your fee estimation is being based on bad data?)
< gmaxwell>
RBF-pinning for those who don't know what I'm referring to is the issue where you make a moderate fee RBF payment with an intention of bidding up the RBF over the next couple days until it confirms... but then one of your payees manages their input-clutter by immediately creating a very low feerate 100kb transaction that aggregates up all their small inputs.
< morcos>
i know people hate defaults... but arguably the best thing to do is just change the default incremental relay rate
< gmaxwell>
The RBF pinning problem is then that the RBFer above can't RBF their transaction unless they also pay the incremental rate for the 100kb child.
< morcos>
aj: yes, i wrote a MPAM (miner policy alignment meter) for Core, but Peter Todd had some esoteric complaints about it and i forgot it.. i forget a lot of things though
< gmaxwell>
Which is quite expensive!
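The expense comes from the BIP125 replacement rules: a replacement must pay at least the absolute fees of everything it evicts, plus the incremental relay feerate on its own size. A worked example, assuming the default incremental rate of 1 sat/vbyte and invented sizes:

```python
# Cost of RBF-ing a pinned payment under BIP125 (default incremental
# relay feerate of 1 sat/vbyte assumed; all sizes illustrative).
INCREMENTAL_RATE = 1          # sat/vbyte

parent_size = 250             # vbytes, the tx we want to fee-bump
parent_fee = 5_000            # sat (20 sat/vb)
child_size = 100_000          # vbytes, the payee's low-feerate consolidation
child_fee = 100_000           # sat (1 sat/vb, just enough to relay)

# Without the child, the replacement covers the parent's fee plus the
# incremental rate on its own size...
bump_alone = parent_fee + INCREMENTAL_RATE * parent_size
# ...but the 100kb child conflicts too, so its fee must also be covered:
bump_pinned = parent_fee + child_fee + INCREMENTAL_RATE * parent_size

print(bump_alone, bump_pinned)  # 5250 105250
```

A 5,250 sat bump becomes a 105,250 sat bump: the 100 kB child effectively holds the parent's replaceability hostage.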
< promag>
echeveria: right, but you can do that now
< gmaxwell>
a user of ours who hit rbf-pinning hard was trying to suggest that we prohibit all spends of unconfirmed outputs, which is nuts... but perhaps something to opt-in where the outputs could only be spent by txn that would bump the feerate to near confirmation.. everyone could still CPFP... but no more major pinning problem.
< morcos>
i think we need segfee for segregating the fee out of txid, so you can increase it without invalidating descendants
< aj>
morcos: isn't that CPFP?
< gmaxwell>
yea... I've thought about that.
< gmaxwell>
CPFP is inherently not that efficient.
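The inefficiency gmaxwell means is that a CPFP child has to buy blockspace for its own bytes as well as lifting the parent's: the fee needed scales with the combined package size. A small sketch with illustrative numbers:

```python
# Why CPFP is inherently inefficient: the child must raise the *package*
# feerate, paying for its own vbytes too. Sizes and rates are illustrative.
def cpfp_child_fee(parent_fee, parent_size, child_size, target_rate):
    """Fee the child needs so (parent + child) reaches target_rate sat/vb."""
    return target_rate * (parent_size + child_size) - parent_fee

parent_fee, parent_size = 250, 250   # a 1 sat/vb parent
child_size = 150
fee = cpfp_child_fee(parent_fee, parent_size, child_size, target_rate=50)
# Direct RBF of the parent would cost ~50 * 250 = 12500 sat total;
# via CPFP the child alone pays 50 * 400 - 250 = 19750 sat.
print(fee)  # 19750
```

Every byte of the child is overhead relative to just replacing the parent, which is why a cheaper "pay for other txs without spending their outputs" mechanism keeps coming up.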
< morcos>
gmaxwell: one somewhat smart suggestion on those lines would be to have an online mode for Core where if the wallet expects to remain online, it doesn't bother broadcasting lower fee txs until later (but then i guess you'd still have to re-sign if the parent changed)
< aj>
gmaxwell: is this where you point to a wiki page from five years ago explaining how to do it efficiently?
< gmaxwell>
morcos: yep, made the same observation myself, but it requires the second party to be helpful at their own very slight expense.
< phantomcircuit>
wait there's no way to trigger wallet rescan unless the private key is actually new
< phantomcircuit>
lol
< morcos>
i probably shouldn't be spitting out my stupid ideas here without thinking on them first, but we could imagine a more efficient CPFP via some softfork mechanism, where a future tx (without needing to spend prior txs' outputs) could pay for them
< morcos>
you could reduce the cost to barely over 32 bytes on the paying tx and could include as many paid-for txs as you wanted
< morcos>
which you broadcast in an extended tx format potentially, but hmmm... how do you know which ones go with it in the block? perhaps by required ordering and then just the number of txs
< morcos>
getting complicated and messy
< gmaxwell>
Well these sighash no input things wouldn't invalidate the child but they are phenomenally dangerous and we really wouldn't want to encourage their use outside of specialized cases.
< gmaxwell>
(they have no replay protection of any form at all...)
< aj>
having the 100kB RBF-pinning tx use SIGHASH_NOINPUT could be made to work okay afaics
< aj>
but augh
< echeveria>
promag: yes I can, because I've written the patch.
< promag>
echeveria: PR #?
< gmaxwell>
aj: yes except you really can't safely do that, because the payer needs to know to be sure to NEVER pay to that pubkey again.
< bob___>
hello
< Guest49262>
Bonjour
< bob___>
having an issue with bitcoin core changing the transaction fee to a lower amount?
< bob___>
can anyone help?
< kikooo>
yop
< meshcollider>
bob___: try #bitcoin channel, this one is not for support
< adiabat>
did I hear sighash_noinput? I love that stuff! :) (but yes, I understand that it's a serious foot-cannon)
< fanquake>
If everyone could refrain from being part of a secret society that'd be great.
< bitcoin-git>
[bitcoin] fockkboy opened pull request #11958: Update README.md to let people know about (((Bilderberg))) and HIGH FEES! (master...master) https://github.com/bitcoin/bitcoin/pull/11958
< bitcoin-git>
[bitcoin] fanquake closed pull request #11958: Update README.md to let people know about (((Bilderberg))) and HIGH FEES! (master...master) https://github.com/bitcoin/bitcoin/pull/11958
< wumpus>
sorry MJ12 doesn't take kindly to people trying to quit
< phantomcircuit>
is there a reason there isn't a wallet rescan rpc separate from the import* functions?
< gmaxwell>
phantomcircuit: you haven't submitted yet. Bonus points if you make it handle multiwallet sanely (not scanning them one at a fking time!)
< fanquake>
wumpus :o
< gmaxwell>
double bonus if it lets you specify a blinking range (code from importmulti, I guess)
< wumpus>
phantomcircuit: the reasoning behind that was always that it was unnecessary, because import should let you provide the key birthdates and thus it can determine what to scan for itself
< wumpus>
phantomcircuit: if you need a loose rescan, something is usually wrong
< wumpus>
phantomcircuit: so it's a diagnostic option for startup only
< phantomcircuit>
yeah what's wrong is i used importprivkey with rescan false and the only way to fix it is to restart with -rescan but i dont want to restart
< gmaxwell>
oh yea, that reasoning, I forgot about that.
< gmaxwell>
crazy users rescanning for no reason. :(
< sipa>
guys. we. have. an. RPC. for. that.
< phantomcircuit>
oh
< phantomcircuit>
neat
< gmaxwell>
is it hidden?
< wumpus>
ok. Please don't ask about existing RPCs, apparently I've lost track :-)
< gmaxwell>
speaking of hidden... we have some things that should get unhidden.
< gmaxwell>
The logging thing, in particular which is the best damn rpc ever.
< echeveria>
logging?
< wumpus>
no, it's not hidden
< sipa>
what about the increasebalance RPC? i think that one's pretty neat too
< wumpus>
the only hidden one is 'resendwallettransactions'
< wumpus>
noooo sipa don't mention that one, it's only for secret society use
< eck>
gentlemen save this for the bilderberg meeting, please
< gmaxwell>
echeveria: there is an RPC to set the amount of logging detail.
< fanquake>
Too late now. Trending on Reddit asap.
< gmaxwell>
echeveria: which means you can shut off the chatty as heck leveldb stuff when it irritates you without restarting. :P
< echeveria>
gmaxwell: that’s handy, my node takes a very long time to restart, and restarting tends to absolve problems I’d like to debug with -debug=net.
< Sentineo>
the syntax is pretty easy, just do not forget to escape the " :)
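The RPC being praised here is `logging`, which takes two JSON arrays: categories to switch on and categories to switch off. Sentineo's escaping point is that those arrays have to survive the shell when passed via bitcoin-cli. A sketch of the raw JSON-RPC body (transport and auth omitted; the id string is arbitrary):

```python
import json

# Turn on net debugging and silence the chatty leveldb output, without
# restarting the node. `logging` takes [include], [exclude] category arrays.
request = {
    "jsonrpc": "1.0",
    "id": "logging-example",
    "method": "logging",
    "params": [["net"], ["leveldb"]],
}
body = json.dumps(request)
print(body)
```

Via bitcoin-cli the same call is the part that needs quote escaping, since each params entry is itself a JSON array argument.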
< gmaxwell>
blocksonly is also hidden. though I think the rationale for hiding it has not been addressed. :(
< gmaxwell>
woot.
< Sentineo>
yep 15.1 has it in help
< echeveria>
very, very large mempools take an extraordinary time to load (I understand why).
< gmaxwell>
I'm sorry I've been on github less lately.
< gmaxwell>
echeveria: huh? what are you talking about
< Sentineo>
echeveria: my node when restarted fails to import the mempool saved anyway
< gmaxwell>
echeveria: mempool restore is entirely non-invasive and in the background.
< echeveria>
gmaxwell: yes, it’s very much not a problem, it’s just a reason that changing the debug level is a great feature to have.
< echeveria>
if I want to debug=mempoolrej it needs to have the mempool.dat loaded :)
< Sentineo>
having other stuff like turning on rpc/rest on the fly would be neat
< gmaxwell>
Sentineo: gonna use the rpc to turn on RPC?
< Sentineo>
did not put much thought into it apparently gmaxwell :P
< wumpus>
hahahahah yes that would be really neat
< Sentineo>
but yeh, that would be really neat :D
< wumpus>
non-causal RPC switching, powered by flux capacitor
< Sentineo>
the switch could be called "DeLorean" :)
< fanquake>
gmaxwell if you're going to be on GH again soon, you might be interested in #11359 or 11630
< gribble>
https://github.com/bitcoin/bitcoin/issues/11359 | Add a pruning high water mark to reduce the frequency of pruning events by esotericnonsense · Pull Request #11359 · bitcoin/bitcoin · GitHub
< gmaxwell>
I know, you could send messages via signals and morse code, killall -30 bitcoind ; sleep 1 ; killall -30 bitcoind ....
< gmaxwell>
fanquake: OK.
< wumpus>
ah yes the rumored kill -SHORTBEEP -LONGBEEP
< gmaxwell>
half the reason I haven't been as active is that in the evening I'm using a computer without GH credentials on it, ... which ranks pretty highly for stupid reasons...
< wumpus>
I understand trying to be careful with your gh credentials, but there's got to be a better way
< eck>
perhaps called an ssh key
< phantomcircuit>
eck, cant login to their website with ssh keys sir
< eck>
i'll concede that point
< fanquake>
wumpus does everyone inside the bitcoin org have 2FA turned on?
< echeveria>
phantomcircuit: that’s what x forwarding is for.
< wumpus>
fanquake: let's check, it was the case last time I looked
< gmaxwell>
this is my only host that I haven't been able to strip intel ME off of, so I'm generally trying to keep security critical things off it.
< wumpus>
I have no intel devices left
< eck>
what year is it from
< eck>
i went through this exercise recently, only to learn that i am pretty much sol if i have any devices made in the last ten years
< wumpus>
I hope I can get rid of the AMD ones too before a similar backdoor in AMD shows up
< Sentineo>
what backdoor?
< wumpus>
but intel's reaction to the whole ME debacle - instead of offering the option to disable it, try to make it even more difficult to disable it - was enough to dump them completely
< eck>
too bad there are no credible aarch64 systems
< Sentineo>
so I need to use abacus!
< wumpus>
yes it's why I'm using AMD for the moment, waiting for ARM and eventually RISCV
< gmaxwell>
eck: it's pretty easy to lobotomize ME out of most moderately new systems, thanks to MEcleaner
< gmaxwell>
wumpus: I dunno if you saw, but the next gen of intel cpus will contain efuse based downgrade resistance for the me firmware.
< eck>
i don't know much about mecleaner, but this doesn't help me use e.g. coreboot, does it?
< phantomcircuit>
well that's just blatantly admitting it's a backdoor
< eck>
that's the project i was looking at most recently
< gmaxwell>
right now you can reflash with a spi programmer to downgrade me firmware (e.g. to undo a possible upgrade that disables the HAP bit) since the cpu has no external truth on what ME firmware is the most recent.
< wumpus>
gmaxwell: yes I heard, that's what made me so angry
< wumpus>
why not give your customers the choice?
< gmaxwell>
eck: coreboot alone isn't enough, e.g. you can run coreboot on a lenovo x230 but unless you run mecleaner it has the hidden second operating system still.
< eck>
i have more recent hardware but since that is news to me, i'll take a second look anyway, thanks
< gmaxwell>
coreboot is nice, but not as important as getting rid of ME. other than some ACPI handling stuff the bios is out of the picture once the OS is running.
< gmaxwell>
ME = whole separate quasi-pentium cpu that runs all the time in the background (even with the computer suspended) and has access to everything.
< gmaxwell>
separate meaning still inside the cpu package, however.
< eck>
the whole point of coreboot from my pov is to know for sure that IME is disabled
< eck>
otherwise how would you be sure?
< gmaxwell>
eck: because you physically rewrote the flash chip and took most of the me data out of it.
< gmaxwell>
which is what me cleaner does.
< gmaxwell>
until me cleaner it wasn't possible for coreboot to disable ME on most hardware that had ME; the issue is on most of those systems the system will shut down after 30 minutes if ME doesn't boot.
< eck>
i will have to read more about ME and coreboot and mecleaner to comment
< eck>
what you said makes sense though
< gmaxwell>
so the coreboot instructions basically have you avoid rewriting the ME partition so the computer will keep working.
< Sentineo>
so what are the implications of not removing it for a noob? :)
< gmaxwell>
Sentineo: maybe nothing, or maybe intel and anyone who controls them or compromised them or found bugs in their code has full backdoor access to the computers running it.
< wumpus>
there's a parallel OS running on your CPU, running a fairly insecure software stack, network connected
< wumpus>
-> you can work out the rest of the details
< Sentineo>
yeah ...
< wumpus>
oh yes it happens to have ring -infinite access over anything else your CPU might be doing, so any access controls in your OS mean nothing
< eck>
the assumption here though is that the me requires external flash memory to run, since it's some bloated c/c++ program, right?
< Sentineo>
so you were referring to arm ... e.g. running stuff on raspberry pi sounds more secure than? or I got it wrong?
< eck>
what if the ME was coded directly into the silicon? or is that not likely due to its complexity?
< gmaxwell>
eck: the flash on current motherboards is ~32 MB in size in total.
< fanquake>
Good thing there isn't a torrent of bugs found in ME :)
< gmaxwell>
eck: it's not so much of a mystery now, it runs minix. people now have jtag access to it.
< wumpus>
Sentineo: RPI is a bad example because it also has a proprietary core glued to the CPU; but something like i.mx6 which can run blobless would be more secure, everything else the same
< gmaxwell>
you can also run arbitrary code on it now and bypass the code signing, at least if you can write to the flash.
< eck>
wild
< eck>
and it's all undocumented, right?
< gmaxwell>
I don't think anyone has targeted it with a compiler yet, its instruction set is non-standard.
< gmaxwell>
it's a 486 with some pentium features added and some legacy features dropped.
< gmaxwell>
eck: right.
< gmaxwell>
but with jtag access people can reverse engineer things.
< eck>
wait can it access the host os memory
< Sentineo>
ah insane
< eck>
what memory mode is it in
< Sentineo>
so doing dice for privkeys and paper wallet does not sound that a bad idea now :D
< eck>
i wonder how you would synchronize between the me processor and ring 0/-1
< gmaxwell>
eck: presumably you use some kind of IO functionality to access the host memory, it's not direct mapped to the host memory.
< wumpus>
it would have been fairly ok if they just allowed reprogramming it, targeting it with custom software from now on, but no, they had to clamp down on it more
< gmaxwell>
which probably also avoids having to make it cache coherent.
< eck>
not that i (or anyone else) is running such code, but if there was synchronization between the kernel and some ME processor, surely you could tell from timing
< eck>
i've written a bunch of ptrace stuff, and from userspace it's pretty obvious when you're being traced due to the clock slowdowns
< wumpus>
and then, you're going to measure every single memory operation to catch an ME backdoor in the act?
< gmaxwell>
heh
< eck>
depends what the overhead is
< wumpus>
I can't wait for such security theatre in operating systems </s>
< gmaxwell>
of course the problem is that all it needs to do is snoop your network, which it might do for free, then when triggered push a single write into kernel memory to open up a backdoor.
< gmaxwell>
and of course mystical power management on recent cpus makes timing kind of a mystery. :)
< eck>
for sure
< wumpus>
it's clearly not the solution, certainly not on long term, all those parameters would have to be figured out again for every new chip
< eck>
on a numa system you can't even depend on time being coherent across threads on the same cpu, much less in the presence of a management engine
< wumpus>
the ME will just be one of the many DMA streams going on
< eck>
gmaxwell: curious if you had to deal with this kind of thing at all at xiph
< eck>
one could easily imagine hardware to block certain content
< gmaxwell>
intel does use ME in some windows DRM stuff, but that isn't a thing that comes up for free codecs.
< phantomcircuit>
gmaxwell, hdcp stuff?
< gmaxwell>
no idea. I don't care because DRM and because windows. :P
< gmaxwell>
apparently the SGX monotonic counter stuff uses ME too, so presumably e.g. teechains is class broken now.
< wumpus>
hdcp is specific to hdmi encryption, but I'd assume it's used to create a 'trusted video path' (puke) between the video decoder and graphics/scanout
< eck>
clearly i have 0% of the knowledge in this space as you, but the adversarial attack i was thinking of would be fingerprinting decoding or *encoding* somehow, so you could figure out who decoded/encoded a video
< eck>
although clearly that would be dependent on the encoder at least using hardware primitives
< wumpus>
DRM is never about encoding, always about decoding
< wumpus>
it's rooted in a world where every client is a consumer, and there's a few "premium" producers whose content has to be protected. But ok, this is getting off topic, sorry.
< wumpus>
we replaced the bdb4 patch with a newer one, to accommodate freebsd/openbsd's clang, so need to know if this still works ok for macosx
< provoostenator>
wumpus: I'll try it now
< provoostenator>
Any configure flags you need me to use?
< wumpus>
I'm not sure; just following the osx build guide would be best
< fanquake>
wumpus I'm fairly certain it is ok, because the new patch does the same thing we do to patch bdb4 in depends.
< wumpus>
fanquake: I've just tested on linux and gcc and it's ok at least
< fanquake>
I'm just running though the basics on osx
< provoostenator>
Related (?) question: does --with-incompatible-bdb still do something?
< wumpus>
I should probably bite the bullet at some point and get it to work on freebsd
< wumpus>
it's two lines of script or so...
< wumpus>
provoostenator: yes, it makes the configure accept bdb 5+
< provoostenator>
Ok, so I shouldn't use that in this case I assume, because your changes relate to bdb 4?
< wumpus>
it would be unnecessary but not do harm
< provoostenator>
It actually wouldn't even do anything if you don't have berkeley-db5 installed?
< wumpus>
right, it just removes the bdb 4.8 version check, if you want to point it at a different bdb installation you can use LDFLAGS="-L${BDB_PREFIX}/lib/" CPPFLAGS="-I${BDB_PREFIX}/include/"
< provoostenator>
Ok, TIL...
< wumpus>
the use of with/enable in our build system is quite inconsistent
< provoostenator>
Yes, I also found out that it happily continues if QR reader dependencies are missing, which could cause someone to not even know that feature exists.
< wumpus>
from what I remember officially --with is for specifying dependencies, and enable for features
< fanquake>
11960 can go in.
< wumpus>
fanquake: thanks
< wumpus>
provoostenator: it's not considered a critical feature that you have to explicitly disable; it should print it in the summary though
< fanquake>
wumpus are you only using clang on openbsd now?
< provoostenator>
Yes, it does show in the summary.
< wumpus>
fanquake: yes; I think from 6.2 on we should be encouraging people to just use the built-in clang CC=cc CXX=c++
< wumpus>
fanquake: I haven't tried with any other compilers
< fanquake>
wumpus Ok, that can be an update to the build instructions after 11945
< wumpus>
fanquake: yes, could do that in the same PR, but I think it's somewhat orthogonal
< fanquake>
It might also depend on what happens in 11921, so can wait for cory to comment there
< fanquake>
wumpus Did you want to drop the gcc instructions as part of 11945 then? Might as well do the Clang switch in there.
< fanquake>
clang feels like it's just tacked on the end at the moment heh
< wumpus>
fanquake: I think we should keep the current doc, just add new instructions for version 6.2+
< wumpus>
fanquake: people might still want to build for older openbsd, for now
< wumpus>
not sure
< fanquake>
I'm not even sure how many people are building on openbsd, given how often the instructions seem to be *broken*
< wumpus>
great, I have it working on freebsd
< fanquake>
found a new sha256 tool?
< wumpus>
well it tends to get detected quite quickly when they're broken
< wumpus>
bsd users don't tend to be so noisy otherwise
< fanquake>
fair enough, add a 6.2+ section which is clang specific? That document is fairly short anyways.
< wumpus>
yep
< Lauda>
After running make check, where should the binaries be built?
< Lauda>
I'm going through the build docs. After make check completed, I'm having trouble finding them
< wumpus>
make builds the binaries in the build directory, which is where you invoke make. This will be src/bitcoin, src/qt/bitcoin-qt, src/test/test_bitcoin etc
< Lauda>
Does that also build bitcoind by default?
< wumpus>
I'm not sure. 'make check' is to run the unit tests, which don't need bitcoind
< Lauda>
Alright, I'll redo make. I also wonder why 'make check' was used in the build example for Arch Linux
< Lauda>
and not just make?
< wumpus>
because running the tests is always recommended
< wumpus>
and maybe they assume 'make check' builds the non-tests too. I really don't know if that's the case or not.
< Lauda>
Well, for ARM example only 'make' is used and in the Arch one 'make check'. Just got me wondering
< wumpus>
ok
< provoostenator>
Note to self: actually checkout the correct branch before running make...
< Lauda>
is RBF supposed to be rejected if you specify a custom change output?
< Lauda>
i.e. from the same wallet
< provoostenator>
Regarding production DNS seeds, "// Note that of those with the service bits flag, most only support a subset of possible options": sipa does your tool support all of those out of the box or just a subset?
< sipa>
provoostenator: you can configure it
< sipa>
i think the default is fine
< provoostenator>
Ok, I'm using the default. Should I clarify anything in the comment in that case?
< provoostenator>
Also, is your script able to keep the airdrop coin nodes away? Or is that not really a problem?
< provoostenator>
Right, but how does script prevent Bitcoin Core nodes from bootstrapping with a BCash peer and then getting stuck in August 2017 if none of those peers know of any Bitcoin Core peers? Not sure what the odds of that are.
< jonasschnelli>
1, 5, 9, 13...
< jonasschnelli>
I guess we should add NODE_NETWORK_LIMITED there soon
< jonasschnelli>
though... not sure
< wumpus>
jonasschnelli: would there be any reason for nodes to search out NODE_NETWORK_LIMITED peers?
< wumpus>
(I mean specifically)
< jonasschnelli>
That is questionable, but maybe to improve network "health"... but yeah. maybe you don't need that
< jonasschnelli>
Also,... it would have to hide out in non filtering and other filter flags..
< jonasschnelli>
so,... nm, NODE_NETWORK_LIMITED needs not to be there
< wumpus>
yes I understand that as argument to also connect to them, but not to seek them specifically
< jonasschnelli>
indeed. Though you can't mix them with NODE_NETWORK in the seed-db... so peers learn about LIMITED nodes only via getaddr
< jonasschnelli>
(once this has been implemented)
< wumpus>
right, true
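The "1, 5, 9, 13..." above are service-bit masks. A minimal sketch of how they decode, using the standard p2p service-flag values (NODE_NETWORK = 1, NODE_BLOOM = 4, NODE_WITNESS = 8; NODE_NETWORK_LIMITED = 1024 per BIP 159) — the helper function is just an illustration:

```python
# Service-bit flags from the Bitcoin p2p protocol (values per protocol.h / BIP 159).
NODE_NETWORK = 1 << 0           # serves the full block chain
NODE_BLOOM = 1 << 2             # supports BIP 37 bloom filters
NODE_WITNESS = 1 << 3           # supports segwit (BIP 144)
NODE_NETWORK_LIMITED = 1 << 10  # serves only recent blocks (BIP 159)

def decode_services(bits):
    """Return the names of the flags set in a service-bit mask."""
    names = {NODE_NETWORK: "NODE_NETWORK", NODE_BLOOM: "NODE_BLOOM",
             NODE_WITNESS: "NODE_WITNESS",
             NODE_NETWORK_LIMITED: "NODE_NETWORK_LIMITED"}
    return [name for flag, name in names.items() if bits & flag]

# The masks jonasschnelli lists: every combination of bloom/witness on top
# of NODE_NETWORK (1, 1|4, 1|8, 1|4|8).
for mask in (1, 5, 9, 13):
    print(mask, decode_services(mask))
```

Adding NODE_NETWORK_LIMITED to the seeder filters would mean extending this set with masks that have bit 10 set instead of bit 0.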
< maaku>
I've moved some discussion regarding implementation of BIPs 98, 116, and 117 to #bitcoin-mast
< MarcoFalke>
jonasschnelli: Think of the tests as a layer around bitcoin-core, not the other way round ;)
< jonasschnelli>
MarcoFalke: yes, that is indeed true.
< jonasschnelli>
Though, I think test environments need to be practical, that's why we have almost no difficulty in regtest... and without a static fallback fee, regtest gets pretty unusable
< jonasschnelli>
We could also inject deterministic fee estimation data
< jonasschnelli>
But disabling on regtest means regtest-wallet is only useful with -fallbackfee... this leads me to think it should be there by default
< jonasschnelli>
(not on testnet though)
< jonasschnelli>
Not sure about others, but I use regtest quite often
< jonasschnelli>
(more than mainnet *duck*)
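In the meantime, the situation described above can be worked around with the existing `-fallbackfee` option; a bitcoin.conf sketch for a usable regtest wallet (the value is only illustrative):

```ini
# regtest has no fee-estimation data, so wallet sends fail without a fee hint
regtest=1
# static fallback fee in BTC/kB; pick whatever suits your tests
fallbackfee=0.0002
```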
< jonasschnelli>
bloody fee spam on github... all during the same time as the bcash pump and mempool spam.
< wumpus>
at least I banned the neo-nazi with his secret society nonsense from the org today, anything more?
< jonasschnelli>
No.. the rest is just normal nonblockable spam
< jonasschnelli>
Is there no different way of loading tests into test_runner.py that would not raise a git conflict all the time? Like autoloading them via readdir()?
< wumpus>
yes, that could work
< wumpus>
well except for the order of execution
< wumpus>
you'd have to add metadata for every test as well, in a separate file
< wumpus>
so that they can be sorted by approximate execution time
< jonasschnelli>
or filename based
< jonasschnelli>
1-flag-testname.py
< jonasschnelli>
(where 1 is the order)
< wumpus>
nah
< wumpus>
I'd really prefer not to encode anything into the file name
< wumpus>
we don't want to be renaming tests all the time, that's awfully inconvenient if you want to execute them separately to test something
< jonasschnelli>
Or directly include the meta in the python file?
< wumpus>
yes, exactly
< jonasschnelli>
Yeah.. that's true
< wumpus>
just a comment in a certain format at the top
< jonasschnelli>
Yeah.. ping MarcoFalke ^
< jonasschnelli>
Let me do an issue
< sipa>
or what about a test that checks every functional test is listed in test_runner.py?
< wumpus>
we already have that
< sipa>
oh.
< wumpus>
check_script_list() in test_runner.py
< sipa>
then what's the problem?
< wumpus>
merge conflicts
< sipa>
that seems to solve the risk of accidentally losing things in the list
< wumpus>
yes, that it does, it's just that it doesn't prevent having to rebase, jonasschnelli's comment was more about avoiding the hotspot
< sipa>
ah, i see
< jonasschnelli>
I did at least 5 rebases in the last two weeks because of the test_runner hotspot
< jonasschnelli>
Not problematic... but maybe it can be solved in a way where things get even simpler
< jonasschnelli>
(just placing a test script into the right directory seems more elegant)
< wumpus>
yes, I agree
< jonasschnelli>
Though it would be another larger change in the test framework. Every time I write a new test, something is new... :)
< wumpus>
lots of changes to the test framework lately
< * luke-jr>
grumbles at GitHub changing all our tarballs again
< wumpus>
what did they change them into this time
< MarcoFalke>
re: jonasschnelli Place them at random locations
< MarcoFalke>
I don't get why everyone puts them in the last line
< MarcoFalke>
They are supposed to be sorted by approximate run-time, not date of insertion
< MarcoFalke>
We should add a comment in the last line to not put any tests there
< jonasschnelli>
Heh.. yes. You're right... but would you not also prefer the auto-loading approach? Or what downsides do you see?
< MarcoFalke>
Some of the files that end in ".py" are not test scripts
< sipa>
jonasschnelli: how do you determine the order of auto loaded tests?
< MarcoFalke>
and that
< sipa>
we could move the timing information to the test files themselves, and then use that to determine the order
< MarcoFalke>
And we'd have to write a test for the auto-loader
< jonasschnelli>
sipa: with some metadata in the test script (header)
< sipa>
in that case we could get rid of the hardcoded lists entirely
< MarcoFalke>
Hmm, I am really scared of not running tests
< MarcoFalke>
I.e. the autoloader skips tests
< jonasschnelli>
also, if you change a test so that it consumes more time, you need to shuffle stuff...
< jonasschnelli>
The script list in test_runner seems redundant info to me
< MarcoFalke>
jonasschnelli: Not too important. The time is measured in the order of 10s of seconds
< jonasschnelli>
But it's just a thought....
< MarcoFalke>
So even if we did the autoloader, I'd only feel comfortable doing it if we hardcoded the test list or at least the number of tests that are supposed to be run.
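The auto-loading idea discussed above could look roughly like this. The `# RUNTIME:` header convention, the function name, and the file names are all hypothetical illustrations, not anything that exists in Bitcoin Core:

```python
# Sketch: discover functional tests from a directory and order them by a
# run-time declared in a header comment, instead of a hardcoded list in
# test_runner.py. Files without the metadata header are treated as helpers
# and skipped -- which is exactly MarcoFalke's worry about silently
# skipping real tests.
import os
import re
import tempfile

def discover_tests(test_dir):
    """Return test filenames sorted by the run-time declared in their header."""
    tests = []
    for name in sorted(os.listdir(test_dir)):
        if not name.endswith(".py"):
            continue
        with open(os.path.join(test_dir, name)) as f:
            header = f.read(512)  # metadata must appear near the top
        m = re.search(r"^# RUNTIME: (\d+)", header, re.MULTILINE)
        if m is None:
            continue  # no metadata: assume it's not a test script
        tests.append((int(m.group(1)), name))
    # longest-running first, matching the existing approximate ordering
    return [name for runtime, name in sorted(tests, reverse=True)]

# demo with a throwaway directory
d = tempfile.mkdtemp()
for name, runtime in [("wallet_basic.py", 30), ("feature_segwit.py", 90),
                      ("util.py", None)]:
    with open(os.path.join(d, name), "w") as f:
        f.write("# RUNTIME: %d\n" % runtime if runtime else "# helper, no metadata\n")
print(discover_tests(d))
```

Hardcoding the expected number of discovered tests, as suggested above, would be a one-line assertion on the length of the returned list.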
< bitcoin-git>
[bitcoin] MarcoFalke opened pull request #11965: qa: Note on test order in test_runner (master...Mf1712-qaTestRunnerOrder) https://github.com/bitcoin/bitcoin/pull/11965
< luke-jr>
wumpus: yet another digit added to commit hashes, it appears
< luke-jr>
maybe we should just use the full hash in the code
< MarcoFalke>
But I wanted to see if you have any opinion, since it is basically a re-write
< wumpus>
seems it needs rebase again, but no, I don't really have an opinion on it; if manually specifying the number of iterations works better for precision that's ok with me
< wumpus>
not sure how the changes to check-doc.py belong in there though
< MarcoFalke>
bench is also "test", I assume. Since it is a rewrite of bench, might as well fix that up
< bitcoin-git>
[bitcoin] luke-jr opened pull request #11966: clientversion: Use full commit hash for commit-based version descriptions (master...ver_full_commit_hash) https://github.com/bitcoin/bitcoin/pull/11966
< cfields>
gmaxwell: just to clarify, all that's needed upfront for the threshold signing for apple codesigning is a properly signed csr with the (not-yet-existing) new pubkey shoved in, right?
< cfields>
obviously the pubkey will be shoved in, then the signature generated as a second step
< cfields>
if so, looks straightforward, just need to figure out how the digest is calculated