< luke-jr> is it intentional that the test framework's addmultisigaddress substitute for descriptor wallets doesn't accept addresses?
< achow101> luke-jr: yes
< luke-jr> k, makes sense
< luke-jr> achow101: I can't get it to work at all :/
< luke-jr> test_framework.authproxy.JSONRPCException: Cannot import descriptor without private keys to a wallet with private keys enabled (-4)
< achow101> luke-jr: the error message is self explanatory. you need a wallet with private keys disabled
< luke-jr> achow101: but it's not supposed to be watch-only
< achow101> then it has to have at least one private key
< luke-jr> it does; addmultisigaddress still requires the pubkey to be passed, though…
< achow101> how to setup multisigs in descriptor wallets is still an unsolved problem
< luke-jr> :/
< achow101> what exactly are you trying to do?
< luke-jr> achow101: https://dpaste.com/5K7JHWDS4
< achow101> but why tho
< luke-jr> so it gets tested
< achow101> import a sortedmulti descriptor instead of adding more stuff to addmultisigaddress?
< luke-jr> this is from 2016
< luke-jr> and works with normal wallets
< achow101> since addmultisigaddress doesn't exist for descriptor wallets, I don't think it makes sense to try to test it for them
< luke-jr> fair enough
< achow101> the helper only exists for when we use addmultisigaddress to make a multisig to test other stuff
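A minimal sketch of the approach achow101 suggests, assuming a functional-test context: rather than extending addmultisigaddress, import a sortedmulti() descriptor into a separate watch-only descriptor wallet, which also sidesteps the -4 error quoted above. The node handle, the wallet name "ms", and the pubkey source are hypothetical placeholders, not the framework's actual helper.

```python
# Hedged sketch: set up a 2-of-3 multisig for a descriptor wallet by importing
# a sortedmulti() descriptor instead of calling addmultisigaddress.
pubkeys = [node.getaddressinfo(node.getnewaddress())["pubkey"] for _ in range(3)]
desc = "wsh(sortedmulti(2,{}))".format(",".join(pubkeys))
desc = node.getdescriptorinfo(desc)["descriptor"]        # append the checksum

# Import into a wallet with private keys disabled, so the watch-only
# descriptor is accepted (the -4 error above came from a wallet with keys).
node.createwallet(wallet_name="ms", disable_private_keys=True, descriptors=True)
ms = node.get_wallet_rpc("ms")
assert ms.importdescriptors([{"desc": desc, "timestamp": "now"}])[0]["success"]
multisig_addr = node.deriveaddresses(desc)[0]
```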
< CubicEarth> on the P2P side of things, is there a reason for the client to not look for and prioritize downloading block data from sources on the LAN?
< sipa> CubicEarth: if you -addnode them, it'll likely fetch most blocks from there
< sipa> there is no functionality for automatically detecting other bitcoin nodes on the local network
< CubicEarth> I use addnode or connect :) It seemed like such functionality could help make dissemination of the block data more user friendly. Totally understood that it would be a low priority in any case.
< CubicEarth> But is there a reason why such functionality would be bad?
< sipa> define more user friendly?
< sipa> i can see the use of detecting other nodes on a local network, but nobody has implemented that
< sipa> beyond that, i don't know what you're asking for
< sipa> is it behaving badly currently?
< CubicEarth> One assumption I am making: the IBD can be costly in terms of ISP data caps
< sipa> ah
< sipa> i'd suggest to have one gateway node in your network, and have the other nodes not make outgoing connections, if you're concerned about that
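For reference, a rough sketch of that setup as it can be done today, with no auto-discovery: one hypothetical gateway node at 192.168.1.10 syncs from the internet, and the other LAN machines are pointed at it so block data only crosses the WAN once. The URL, credentials and addresses below are made up; any JSON-RPC client works.

```python
# Hedged sketch: add a persistent LAN peer to a local node over JSON-RPC.
# For strictly no other outgoing connections, the node would instead be
# started with -connect=192.168.1.10.
from test_framework.authproxy import AuthServiceProxy

local = AuthServiceProxy("http://user:pass@127.0.0.1:8332")
local.addnode("192.168.1.10:8333", "add")          # prefer the LAN gateway
print([p["addr"] for p in local.getpeerinfo()])    # verify the LAN peer is there
```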
< CubicEarth> Yeah, it is easy enough for me to avoid pulling the data twice through my WAN
< CubicEarth> I was just thinking of making it happen with less user intervention
< sipa> doing it without user configuration is hard though
< sipa> you could have a rule, say, that causes a delay before fetching from external IPs, to give a local node a chance to announce it to you first
< sipa> but now you need to make sure there is one node in your network that doesn't have this delay, or you're just going to lag behind on all nodes
< CubicEarth> Isn't it trivial to know if a device is on the same ... subnet?
< CubicEarth> not sure if subnet is the right term... argh
< sipa> CubicEarth: that's not the problem!
< sipa> the problem is making sure that not everyone on the same network is slowing down blocks from the outside
< sipa> they're still supposed to come in quickly to one node from outside
< sipa> so you need some kind of "leader election"...
< CubicEarth> Are you talking about just keeping up with new blocks as they are issued? I am thinking about when one node is far behind in block height
< sipa> ah, yes
< sipa> for IBD it should Just Work(tm)
< sipa> as it autoselects for faster peers
< sipa> though probably not aggressively enough to get ~all blocks from just local nodes
< CubicEarth> Interesting. I haven't noticed this to be the case, but I can test it going forward and see how it behaves
< CubicEarth> And I know i
< CubicEarth> I've asked this before, but I forget the answer... how far ahead of the validation will it download blocks?
< sipa> 1024 blocks
< CubicEarth> the "sliding window"? thanks
< sipa> yes
< CubicEarth> "The main purpose of this is so that blocks that are near one another on the blockchain are most likely contained in the same .dat file (where the raw block data is stored on disk)."
< CubicEarth> Still the case?
< sipa> yes
< sipa> otherwise you can't effectively prune
< CubicEarth> I've never looked at, or asked about how the .dat files were structured, but I had assumed the blocks were more or less kept in order. But actually each .dat file is just filled with whatever the next blocks to come in are?
< sipa> yep, they're just appended in the order they arrive
< sipa> you wouldn't know where to put things if you want to keep them in order
< sipa> as you don't know the size of the blocks you still miss
< CubicEarth> couldn't you use the headers?
< sipa> headers don't contain the block's sizes
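To make the storage model concrete, here is a toy sketch (Python, not the actual C++ block-file code) of what sipa describes: blocks are appended to the current blk*.dat-style file in arrival order, and a separate index records where each block landed, so on-disk order never has to match chain order.

```python
# Toy model of append-order block storage. The 128 MiB cap mirrors the real
# per-file limit; everything else is simplified for illustration.
MAX_FILE_SIZE = 128 * 1024 * 1024

class BlockStore:
    def __init__(self):
        self.files = [bytearray()]   # stand-ins for blk00000.dat, blk00001.dat, ...
        self.index = {}              # block hash -> (file_no, offset, size)

    def write_block(self, block_hash, raw_block):
        if len(self.files[-1]) + len(raw_block) > MAX_FILE_SIZE:
            self.files.append(bytearray())          # start a new file
        file_no = len(self.files) - 1
        offset = len(self.files[file_no])
        self.files[file_no] += raw_block            # append in arrival order
        self.index[block_hash] = (file_no, offset, len(raw_block))

    def read_block(self, block_hash):
        file_no, offset, size = self.index[block_hash]
        return bytes(self.files[file_no][offset:offset + size])
```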
< CubicEarth> Got it. And the reason to tie the sliding window to the validation is to make sure that the sliding window never gets more than 1024 blocks ahead of blocks known to be good?
< CubicEarth> Because otherwise it would seem the sliding window could just be in relation to the highest block in unbroken sequence, and could therefore advance far faster than validation.
< CubicEarth> (meaning an attack could feed bad blocks, and then the blk.dat files would be more out of order)
< CubicEarth> sipa: You are always the person to answer my questions about the P2P stuff... and gmaxwell sometimes. Are you like the only person who knows all of the 'why' the p2p stuff is the way it is, or are you just the person willing to entertain my questions?
< sipa> CubicEarth: i mean... a block arrives, you don't have its parent(s) yet... where do you store it?
< sipa> CubicEarth: i've been around for a while :)
< sipa> and in this case... i wrote a significant part of the block fetching logic
< CubicEarth> so is this about avoiding disk thrashing?
< sipa> compared to which alternative? you haven't given any
< sipa> a possibility is just storing every block in a separate file
< sipa> that's great as it means you can delete whatever block at any time
< sipa> unfortunately most filesystems perform terribly with huge numbers of files
< sipa> another possibility is constantly rewriting block files
< sipa> to keep the blocks in order
< CubicEarth> I guess the question becomes how bad is it if some blocks are way out of order. On the surface, it seems desirable for the block download to be independent of the cpu intensive validation.
< sipa> it is
< CubicEarth> Coupled with the LAN prioritization I was musing about, imagine that for some short period of time, you have a high bandwidth connection. It would be advantageous to be able to download all of the blocks right then
< sipa> it doesn't matter where blocks are stored for validation
< CubicEarth> and then let the cpu chew through it later
< CubicEarth> well that is good
< sipa> yeah, that sounds vaguely useful
< CubicEarth> You are saying those processes are independent, but at the moment, they are tied together with the 1024 block window?
< sipa> ah yes
< sipa> but it doesn't matter where things are stored for validation
< sipa> it's just restricted to make sure pruning is possible
< CubicEarth> interesting. Yeah, with pruning, especially if pruning is on because there just isn't *that much* disk space available, it all makes perfect sense. There is basically no advantage to what I am thinking.
< sipa> that's the only reason
< sipa> otherwise you could end up with 1000 block files, and each contains blocks from all over the chain
< sipa> so you can't delete any of them
< sipa> because they all also contain very recent blocks
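A toy illustration of that pruning argument (not the real pruning code): pruning deletes whole block files, so a file can only be deleted once every block in it is below the prune point. With a bounded download window the heights in one file are clustered and old files free up; with unbounded out-of-order download every file tends to hold a recent block and nothing can be deleted.

```python
def prunable_files(file_to_heights, prune_below_height):
    """file_to_heights: {file_no: [heights of blocks stored in that file]}."""
    return [f for f, heights in file_to_heights.items()
            if heights and max(heights) < prune_below_height]

# Bounded window: heights cluster per file, so old files can be deleted.
clustered = {0: [0, 1, 2, 3], 1: [4, 5, 6], 2: [7, 8, 9]}
print(prunable_files(clustered, prune_below_height=7))   # -> [0, 1]

# Unbounded out-of-order download: every file holds some recent block.
scattered = {0: [0, 9, 3], 1: [1, 8, 4], 2: [2, 7, 5, 6]}
print(prunable_files(scattered, prune_below_height=7))   # -> []
```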
< CubicEarth> I hear that, but aren't we mixing two concepts here? because couldn't you still have the sliding window, but just have it be decoupled from validation?
< CubicEarth> or does that expose an attack vector?
< sipa> it's not tied to validation, except implicitly
< sipa> it's tied to the moving window
< sipa> which moves along the chain as far as it can while it has all blocks
< sipa> that happens to be identical to what validation needs
< CubicEarth> too bad validation can't get ahead of the block download ;)
< sipa> you see what i'm saying?
< sipa> we validate blocks as soon as we have all its parents (and those parents are validated)
< CubicEarth> I get that, and that fits my long held understanding
< sipa> and that's also when the download window moves (because limiting out-of-orderness has the exact same requirement)
< CubicEarth> that's why I was asking about the attack consideration, because otherwise you wouldn't need to validate the blocks to limit out-of-orderness, right? You would just need to make sure you didn't download too crazily
< sipa> oh i see
< sipa> you're talking about making validating independent of block download entirely
< sipa> that's a whole other can of worms
< CubicEarth> yeah
< sipa> right now we always make sure that the best chain we know about is the actively validated one
< sipa> if you drop that, the window could move ahead of validation
< CubicEarth> It seems like a feature that could add convenience, along with some inherent vulnerabilities. But it seems the consequences of those vulnerabilities being exploited would more or less be reduced to the loss of the convenience, along with perhaps some additional nuisance
< CubicEarth> Meaning, if there was an option to allow the node to let the window move ahead, independent of the state of validation, the benefit would be the possibility that validation could continue on in absence of good network connectivity
< CubicEarth> And the risk would be that it wasn't the right chain in the end
< CubicEarth> And so downloading would need to happen again
< sipa> seems reasonable
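To summarize the mechanism sipa describes above, a simplified sketch of the moving download window (the 1024-block constant is the one he quotes; the rest is an illustrative model, not net_processing.cpp): the window starts at the first block we don't have yet, which is also the next block validation needs, so the two advance together even though they are separate mechanisms.

```python
BLOCK_DOWNLOAD_WINDOW = 1024   # matches the constant mentioned above

def heights_to_request(have_block, best_header_height):
    """have_block: set of heights already stored on disk (toy model)."""
    window_start = 0
    while window_start in have_block:            # first height we still miss
        window_start += 1
    window_end = min(window_start + BLOCK_DOWNLOAD_WINDOW, best_header_height + 1)
    return [h for h in range(window_start, window_end) if h not in have_block]

# Example: we hold blocks 0..9 plus a few out-of-order ones further ahead.
print(heights_to_request(set(range(10)) | {12, 15}, best_header_height=5000)[:5])
# -> [10, 11, 13, 14, 16]
```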
< wumpus> sipa: to link dynamically against qt you need libQt5Core.so, only libQt5Core.so.5 won't do, it should be created as a symlink though... sounds more like a weird linker path issue
< wumpus> it's in qtbase5-dev
< bitcoin-git> [bitcoin] fanquake opened pull request #20504: build: use more legible (q)make commands in qt package (master...legible_qt_config_cmds) https://github.com/bitcoin/bitcoin/pull/20504
< wumpus> ubuntu does not install statically linkable qt libraries, only dynamically linked ones, but that's fine if you're only building locally and not for distribution
< bitcoin-git> [bitcoin] dergoegge opened pull request #20505: [backport] build: Avoid secp256k1.h include from system (0.21...21_backport) https://github.com/bitcoin/bitcoin/pull/20505
< sipa> wumpus: bizarre, i installed and reinstalled all the packages
< sipa> i'll retry and symlink it myself
< sipa> thanks
< wumpus> so you have /usr/lib/x86_64-linux-gnu/libQt5Core.so but it can't find it during linking?
< sipa> only .so.5
< hebasto> sipa: waiting for my groovy installation complete to check your link issue
< wumpus> does "dpkg -S dpkg -L qtbase5-dev|grep libQt5Core.so" show it should be in there?
< wumpus> eh just "dpkg -L qtbase5-dev|grep libQt5Core.so"
< wumpus> you really shouldn't have to make the symlink yourself
< wumpus> maybe the file is somewhere else on ubuntu 20.10 I haven't used that yet
< sipa> wumpus: ok false alarm, i reinstalled it and now the .so file is there
< sipa> i must have reinstalled another package before
< wumpus> phew, still wonder how it came to be erased but good to know!
< sipa> yeah, no idea how this happened
< sipa> i haven't done many weird things in this install
< wumpus> could be a bug I mean that version of ubuntu is still very new
< sipa> wumpus: i remember years ago that a -l argument always needed a .a, was i wrong, or is that since outdated?
< wumpus> that's true on windows, which has import libraries (.lib IIRC) to tell what is in a dll, but not (and never was) on linux; there the linker links directly to the .so, and linking to an .a is static linking
< wumpus> (there's also .la files with library metadata that are used by libtool, but these are not required for anything, and it doesn't look like qt uses that system)
< sipa> wumpus: hmm, maybe i conflated windows stuff earlier then
< wumpus> i think windows' .lib system is pretty nice, it allows for linking against a library without having the compiled library (the .lib is just a list of symbols), right now in the depends system we need to build some libraries (such as freetype) just to link against the system copy of it... that could be avoided in that case
< wumpus> though there is work on this under the name of 'interface stubs', but only in clang afaik, https://phoronix.com/scan.php?page=news_item&px=Clang-Interface-Stubs
< fanquake> also tbd stubs heh
< wumpus> fanquake: those are the macos specific kind, aren't they?
< fanquake> yea
< wumpus> would be so nice to just have (deterministically generated) stubs and headers for OS dependencies, this could even replace the symbol check because too new symbols would just not be in the stubs
< wumpus> fanquake: now for ELF :)
< vasild_> sipa: https://bpa.st/VU2Q makes sense?
< wumpus> fanquake: but yes that's certainly the idea, it's nice that they have a text based format too
< wumpus> "-interface-stub-version=experimental-yaml-elf-v1" hmm let's see if my clang is new enough
< wumpus> "error: invalid value 'Invalid interface stub format: experimental-yaml-elf-v1 is deprecated.' " loool oh already deprecated
< vasild> serialization is ok, but deserialization could produce strange results indeed - the value stored in nVersion here `int nVersion = s.GetVersion();` will be overwritten 2 lines below by `READWRITE(nVersion);` with what comes in from disk. That overwritten nVersion will be used to make the decision about compactsize services, but will not be used by READWRITEAS(CService, obj);, which will use s.GetVersion()
< wumpus> experimental-ifs-v2 works but doesn't look like there is a text format anymore
< wumpus> otoh we're only linking against a few system C libraries, not C++ libraries, which makes things way simpler
< wumpus> e.g. ELF library -> text file with symbols and metadata -> ELF library with only symbols and metadata for linking
< wumpus> the only thing is that you'd end up with a file per architecture, as there might be symbol differences, but as we have only a limited number of architectures that's not too bad
< hebasto> sipa: on fresh minimal ubuntu 20.10 `configure --with-incompatible-bdb && make` works flawlessly
< wumpus> oh would have been more on-topic in #bitcoin-builds i guess sorry
< jonatack> MarcoFalke: thanks for fixing that stray line I added in the release note wiki. Just updated it to add the new fee_rate sat/vB changes and adjust the PR 11413 fee rate info.
< jonatack> The wallet RPC changes are a bit spread out ATM and should probably be regrouped in one place under the Wallet section, Updated RPCs
< jonatack> I didn't do that, other than moving the PR 11413 entries down to just after the new RPC send entry so it makes more sense.
< jonatack> As the 11413 entry now mentions RPC send.
< hebasto> wumpus: may I remind to post rc2 binaries to https://bitcoincore.org/bin/ ?
< wumpus> hebasto: already working on that
< hebasto> thanks!
< bitcoin-git> [bitcoin] hebasto closed pull request #19832: p2p: Put disconnecting logs into BCLog::NET category (master...200829-log) https://github.com/bitcoin/bitcoin/pull/19832
< fanquake> 🚀
< bitcoin-git> [bitcoin] sipsorcery opened pull request #20506: WIP: AppVeyor CI fixes in preparation for next image bump (master...msvc-no-optimise) https://github.com/bitcoin/bitcoin/pull/20506
< bitcoin-git> [bitcoin] vasild opened pull request #20507: sync: print proper lock order location when double lock is detected (master...double_lock_print_location) https://github.com/bitcoin/bitcoin/pull/20507
< bitcoin-git> [bitcoin] sipsorcery opened pull request #20508: IGNORE: Testing appveyor CI build with pre-built dependencies (master...msvc-vcpkg-prebuilt) https://github.com/bitcoin/bitcoin/pull/20508
< bitcoin-git> [bitcoin] vasild opened pull request #20509: net: CAddress deser: use stream's version, not what's coming from disk (master...caddress_deser_version) https://github.com/bitcoin/bitcoin/pull/20509
< vasild> sipa: ^
< luke-jr> any idea why bitcoind crashes at startup in Valgrind? :/
< luke-jr> with UBSan*
< bitcoin-git> [bitcoin] jonatack opened pull request #20510: [backport] wallet: allow zero-fee fundrawtransaction/walletcreatefundedpsbt and other fixes (0.21...backport-fee_rate-follow-ups) https://github.com/bitcoin/bitcoin/pull/20510
< RickMortir22> Anyone can help? My Bitcoin Core doesn't work anymore, it stopped syncing 19 hours ago and won't sync anymore, it shows "19 hours ago" forever, and the status bar shows "connecting..."
< wumpus> luke-jr: no idea, I'm sure valgrind gives some kind of error?
< wumpus> there's sanitizer suppressions for ubsan in fuzz/test/sanitizer_suppressions/ubsan
< MarcoFalke> can valgrind run with sanitizers on top, even?
< wumpus> that's a good question
< sipa> afaik no
< sipa> vasild: yes, that works
< glozow> #proposedmeetingtopic package validation design question
< sipa> i was working on a fix as part of rebasing the serialization parameters
< glozow> is that how u do it
< jnewbery> that's exactly how you do it!
< glozow> whew, thanks jnewbery
< ajonas> #proposedmeetingtopic 2019-20 Coredev survey summary
< sipa> ugh, anchors.dat stores CAddress objects on disk, without file versioning
< sipa> this means it doesn't support v2 addresses
< jonatack> good catch
< sipa> going to open an issue
< sipa> glozow: wth is peter wheel? ;)
< glozow> sipa: bitcoin gandalf, roams Middle Earth with the hobbitcoins
< hebasto> as anchors.dat is a new feature, could it use only v2 addresses?
< wumpus> #startmeeting
< core-meetingbot> Meeting started Thu Nov 26 19:00:49 2020 UTC. The chair is wumpus. Information about MeetBot at https://bitcoin.jonasschnelli.ch/ircmeetings.
< core-meetingbot> Available commands: action commands idea info link nick
< sipa> hebasto: yes, but what when we introduce v3?
< jonasschnelli> hi
< wumpus> #bitcoin-core-dev Meeting: achow101 aj amiti ariard bluematt cfields Chris_Stewart_5 digi_james dongcarl elichai2 emilengler fanquake fjahr gleb gmaxwell gwillen hebasto instagibbs jamesob jb55 jeremyrubin jl2012 jnewbery jonasschnelli jonatack jtimon kallewoof kanzure kvaciral lightlike luke-jr maaku marcofalke meshcollider michagogo moneyball morcos nehan NicolasDorier paveljanik
< kanzure> hi
< wumpus> petertodd phantomcircuit promag provoostenator ryanofsky sdaftuar sipa vasild wumpus
< hebasto> hi
< achow101> today's a us holiday so there may be fewer people
< jonatack> hola
< glozow> hi
< wumpus> congrats on rc2 everyone !
< fjahr> hi
< jb55> hi
< wumpus> achow101: yes, might be a short meeting
< sipa> hi
< ajonas> hi
< michaelfolkson> Happy Thanksgiving US peeps
< wumpus> we do have two proposed meeting topics for today: package validation design question (glozow), 2019-20 Coredev survey summary (ajonas)
< wumpus> any others if people have last-minute proposals
< wumpus> #topic High priority for review
< core-meetingbot> topic: High priority for review
< wumpus> https://github.com/bitcoin/bitcoin/projects/8 9 blockers, 2 chasing concept ACK
< jnewbery> hi
< wumpus> anything to add/remove or that is ready for merge?
< fjahr> Can we add #19055 to blockers again?
< gribble> https://github.com/bitcoin/bitcoin/issues/19055 | Add MuHash3072 implementation by fjahr · Pull Request #19055 · bitcoin/bitcoin · GitHub
< wumpus> fjahr: sure
< fjahr> thx
< sipa> #20207 should be ready... not really a blocker for me, but would be nice to get in
< gribble> https://github.com/bitcoin/bitcoin/issues/20207 | Follow-up extra comments on taproot code and tests by sipa · Pull Request #20207 · bitcoin/bitcoin · GitHub
< wumpus> you have nothing else on the list so will add it
< wumpus> that concludes the topic I think
< jonatack> not as a blocker, but maybe #20483 for backport to 0.21 to avoid the feeRate / fee_rate options being a potential source of confusion/footgun
< gribble> https://github.com/bitcoin/bitcoin/issues/20483 | wallet: deprecate feeRate in fundrawtransaction/walletcreatefundedpsbt by jonatack · Pull Request #20483 · bitcoin/bitcoin · GitHub
< wumpus> ok
< jonatack> if not, otherwise it can wait
< wumpus> for 0.21.1 I guess? if it's not a bugfix I don't think we should merge it between rcs
< jonatack> (feeRate is in BTC/kB, fee_rate is in sat/vB)
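A quick worked conversion between the two units, since they look similar but differ by a factor of 100,000 (assuming 1 BTC = 100,000,000 sat and 1 kvB = 1,000 vB):

```python
from decimal import Decimal

def btc_per_kvb_to_sat_per_vb(fee_rate):
    # 1 BTC = 100,000,000 sat; 1 kvB = 1,000 vB
    return Decimal(fee_rate) * 100_000_000 / 1_000

assert btc_per_kvb_to_sat_per_vb("0.00001000") == 1    # 1 sat/vB
assert btc_per_kvb_to_sat_per_vb("0.00020000") == 20   # 20 sat/vB
```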
< jonatack> wumpus: as you think best
< wumpus> in any case it's in the high prio list now
< jonatack> 👌
< wumpus> oh it's 8 files changed, of which 7 are tests and only small changes in a cpp
< wumpus> could still be in a rc i guess
< wumpus> #topic package validation design question (glozow)
< core-meetingbot> topic: package validation design question (glozow)
< glozow> So I’m in the process of doing package mempool acceptance logic. I have a design question that boils down to “if a transaction in a package fails, should we reject the whole package immediately?”
< glozow> If it’s a package from p2p (i.e. through package relay) then it seems most logical to fail as early as possible. my draft implementation does all PreChecks first for this reason, and then we don’t do script checks.
< glozow> If we’re doing a testmempoolaccept for a package, however, it seems appropriate to fully validate each one so clients have more helpful information.
< glozow> Wondering if people have thoughts/opinions?
< sipa> you're basically asking if the whole package would fail, should the behavior sort of automatically retry with subpackages?
< sipa> (or be equivalent to that)
< glozow> sipa: yeah
< sipa> is there a need for that?
< jnewbery> That sounds reasonable to me. package acceptance over p2p should be atomic. If you're doing a testmempoolaccept it's helpful to return more granular acceptance information
< sipa> the idea is initially just having this as RPC?
< wumpus> yes, P2P clients don't get helpful information back anyway
< jnewbery> If there was a sendrawtransactionpackage we'd probably want that to be atomic I think
< glozow> yeah, i want to do (1) testmempoolaccept, (2) submit packages through rpc, (3) package relay
< sipa> no reason why the RPC can't be initially just only entire-package, and if there is a use for more granular acceptance, add RPC support for that later
< sipa> (if ever)
< wumpus> for RPC it's useful to at least get detailed information on which transaction failed, but it still makes sense for it to be atomic
< sipa> i could imagine that on P2P maybe there are reasons why you'd want to be able to accept subpackages, e.g. so that one bad transaction added to a package by an attacker can't prevent the entire package from being relayed
< sipa> but i think those are longer term questions
< jnewbery> I think if you submit a package (tx A -> tx B -> tx C) to testmempoolaccept it's helpful to the user to know whether A or B or C failed
< glozow> yes, i'm leaning towards running full checks for each tx until they pass/fail in a testaccept
< michaelfolkson> But surely something has gone badly wrong if you can't figure out which transaction in your package was the reason for the rejection?
< wumpus> sipa: right, shouldn't cache all the transactions individually as rejected
< glozow> yah good point
< michaelfolkson> I guess it depends on whether there are weird rejection policies out there
< glozow> michaelfolkson: what kinds of weird rejection policies?
< jnewbery> Do we ever need to sort the txs in a package (from user or over P2P), or do we expect them always to be sorted?
< glozow> jnewbery: i think we should sort it ourselves
< sipa> jnewbery: i'd expect to use dependency ordering
< sipa> (first sort by how many parents in the package it has, then by txid)
< sipa> on either the sender or the receiver side
< michaelfolkson> glozow: I'm trying to think of reasons for a package rejection that isn't clear to the sender
< sipa> michaelfolkson: the obvious one is a new standardness rule on the network (e.g. softfork that enables new scripts, which the sender knows about but the receiver doesn't)
< sipa> if that happens in a leaf of a package
< jnewbery> sipa: how many parents or how many ancestors?
< sipa> jnewbery: either works
< sipa> presumably the receiver still wants the package without that leaf
< glozow> would you know how many ancestors without doing a few checks? uh oh
< sipa> glozow: that's trivial, i think?
< sipa> we do that sorting everywhere
< sipa> inside tx messages e.g.
< jnewbery> huh? A tx's parent can have more parents in the package than the child, no?
< glozow> ah okay cool
< sipa> jnewbery: oh sorry i mean you can either use ancestors, or depth
< sipa> not # of parents
< jnewbery> right
< sipa> but i think these are all questions for later
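A rough sketch of the dependency ordering sipa suggests, assuming hypothetical tx objects with a txid and a list of inputs naming their prevout txids (illustration only, not Core's ancestor tracking): sort by the number of in-package ancestors, tie-broken by txid, which always puts parents before children.

```python
def sort_package(txs):
    """Order package transactions so every parent precedes its children."""
    in_package = {tx.txid for tx in txs}
    parents = {tx.txid: {vin.prevout_txid for vin in tx.vin} & in_package
               for tx in txs}

    def ancestor_count(txid, seen):
        count = 0
        for p in parents[txid]:
            if p not in seen:
                seen.add(p)
                count += 1 + ancestor_count(p, seen)
        return count

    # A child always has strictly more in-package ancestors than any of its
    # parents, so this is a valid topological order; txid breaks ties.
    return sorted(txs, key=lambda tx: (ancestor_count(tx.txid, set()), tx.txid))
```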
< jonatack> glozow: is your concern about a trade-off between resource use/ddos and full atomic validation?
< sipa> for now i think, as an RPC it's fine to just try atomically accepting the whole package
< michaelfolkson> sipa: Is it possible that, post-Taproot activation, an unupgraded node would reject Taproot transactions in a package?
< sipa> michaelfolkson: they should
< sipa> (except there won't be pre-taproot post-package relay nodes :p)
< michaelfolkson> sipa hopes
< michaelfolkson> haha
< sipa> regardless of whether it activates or not
< glozow> jonatack: yeah, even if not a dos concern, it at least seems unproductive to run PolicyScriptChecks for any tx if we're going to reject the whole package, since those aren't even cached.
< sipa> policy checks are cheap though
< glozow> PolicyScriptChecks?
< jnewbery> sipa: PolicyScriptChecks are script/sig validation
< sipa> oh, ok
< sipa> not familiar with that function
< michaelfolkson> Why "they should"? What do they have to lose by accepting a anyonecanspend transaction as part of the package?
< sipa> michaelfolkson: soft fork safety relies on using transactions the unupgraded network considers nonstandard
< jnewbery> sipa: it's a step in AcceptToMemoryPool
< jnewbery> added by sdaftuar when he split it up to make it possible to implement package acceptance, I believe
< sipa> ah
< glozow> yeah added in #16400
< gribble> https://github.com/bitcoin/bitcoin/issues/16400 | refactor: Rewrite AcceptToMemoryPoolWorker() using smaller parts by sdaftuar · Pull Request #16400 · bitcoin/bitcoin · GitHub
< jnewbery> sipa: so you think the node should always dependency sort packages, both from p2p and rpc?
< jnewbery> seems to me that the sender/client should do that?
< sipa> jnewbery: yeah, no opinion on whose responsibility it should be
< sipa> in P2P, i think it may make sense to force the burden on the sender, because they already know this information anyway
< glozow> sipa: so if you receive a package you don't check to see if it's sorted properly, you just reject if it doesn't work?
< jnewbery> I can see the argument for rpc to accept just a bag of txs and sort it
< sipa> glozow: yeah
< sipa> jnewbery: yeah
< glozow> (also so i don't take up too much time, just making sure, for RPC, it's ok if i do full validation checks for each tx in a testmempoolaccept package, and atomic for real package submissions?)
< sipa> glozow: i'd more think the other way around
< wumpus> there's one other, probably short, topic, so no you're not taking up too much time :)
< jnewbery> I think sipa is saying make both RPC and P2P atomic
< sipa> i'm saying, for now, only do RPC, and make it atomic
< sipa> because there is no need for anything else unless someone comes up with a use case
< sipa> for P2P package relay... i feel there may be a need for accepting subpackages, but that's a question we can discuss later
< jnewbery> I still think there's value to make RPC granular if it doesn't add too much complication, because providing the extra information to the user can be helpful
< sipa> extra information and being atomic aren't in conflict with each other
< sipa> you can report "the package failed... and this is the first problem i encountered, in tx A"
< wumpus> right
< jnewbery> ah, maybe we haven't described the problem well
< sipa> being non-atomic would be a result of the form "the subset {A,B,C,...} of transactions from your bag is acceptable, but D failed because Y"
< jnewbery> there are many steps in AcceptToMemoryPool: Prechecks, policy checks, consensus checks
< sipa> at least that's how i interpreted it
< michaelfolkson> Non-atomic would be accepting a subset of the package right? Rather than rejecting it outright and providing info why
< jnewbery> we should first do prechecks for all txs, then policy checks for all txs, then consensus checks for all txs
< sipa> jnewbery: agree
< glozow> so the question is, if A and B passed prechecks but C didn't, should we still run script checks for A and B?
< jnewbery> but if tx C fails prechecks, it might still be helpful to do policy checks for tx A and tx B and consensus checks for tx A and tx B so we can say to the user "tx A and tx B are good, but tx C fails in prechecks"
< sipa> i'd say no - for now - because the package in its entirety isn't acceptable
< michaelfolkson> I guess the concern is that there is inefficiency. The sender keeps coming back with a smaller package that still gets rejected
< sipa> michaelfolkson: in RPC there is no "sender"
< glozow> what if there's a script error in A, and a prechecks error in C (and C depends on B which depends on A), it'd be more helpful to the RPC client to know how to fix the error in A first?
< sipa> and if the user wants that kind of functionality, we should add it
< sipa> glozow: is that a big design question that needs to be known up front?
< sipa> otherwise i'd just say do the simplest thing first, and iterate
< glozow> it does affect the testmempoolaccept API, i'd like as helpful a response as possible if the package fails
< sipa> i feel that adding more detailed error reporting later can probably be done in a backward compatible manner
< MarcoFalke> I'd also say to do the simpler thing first. Literally any progress on teaching ATMP about packages is a nice feature
< glozow> ok, so atomic for now
< sipa> if the RPC is "your package is acceptable in its entirety: yes/no. here is a list of problems i found: []"
< MarcoFalke> Also, we can't keep policy error messages identical between releases anyway
< sipa> MarcoFalke: right
< glozow> got it, thanks everyone
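Pulling the thread together, a hedged sketch of the atomic behaviour agreed on above (the stage functions are stand-ins, not Bitcoin Core's actual PreChecks/PolicyScriptChecks): run the cheap checks for every transaction first, stop at the first failure, and report a single package verdict plus the problems found so far, roughly the result shape sipa proposed.

```python
def precheck(tx):
    # stand-in for the cheap per-tx checks (fees, standardness, conflicts, ...)
    return True, ""

def script_check(tx):
    # stand-in for the expensive script/signature validation
    return True, ""

def test_package_accept(package):
    result = {"package_ok": True, "errors": []}
    for stage in (precheck, script_check):       # cheap stage for all txs first
        for tx in package:
            ok, reason = stage(tx)
            if not ok:
                result["package_ok"] = False
                result["errors"].append({"txid": tx.txid, "error": reason})
                return result                    # atomic: first problem fails all
    return result
```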
< wumpus> #topic 2019-20 Coredev survey summary (ajonas)
< core-meetingbot> topic: 2019-20 Coredev survey summary (ajonas)
< MarcoFalke> glozow: Thanks for working on this!
< sipa> absolutely
< ajonas> In January, jnewbery sent out a survey that he planned to present the results of at the Coredev in March. Given that the meeting never happened, I put together a presentation/post summarizing the responses. Both can be found at https://adamjonas.com/bitcoin/coredev/retro/coredev-2019-retro/.
< ajonas> No action items. Just want to call it to people's attention.
< wumpus> ajonas: awesome!
< jnewbery> ajonas: thank you for picking this up
< michaelfolkson> For the summary watch the video I guess?
< ajonas> the post and the video are about the same
< sipa> short note: i just opened #20511, which i think is something to address for 0.21
< gribble> https://github.com/bitcoin/bitcoin/issues/20511 | anchors.dat doesnt support V2 addresses · Issue #20511 · bitcoin/bitcoin · GitHub
< jonatack> ajonas: wow! thanks!
< glozow> ajonas: wow cool!!!
< sipa> ajonas: thanks for that, will read (and maybe watch)
< jonasschnelli> nice!
< ajonas> thanks to jnewbery for sending it out and I hope we can do another round in early 2021.
< fjahr> ajonas: cool!
< wumpus> unless anyone wants to discuss a specific thing from that overview, I think this concludes the meeting
< jnewbery> I agree that there's much more value in these things if we repeat them periodically. Everyone who answered talked about their hopes for the project and their contributions in 2020, so looking back at those should be interesting for people
< wumpus> yes it's interesting I agree with most of the comments
< wumpus> #endmeeting
< core-meetingbot> topic: Bitcoin Core development discussion and commit log | Feel free to watch, but please take commentary and usage questions to #bitcoin | Channel logs: http://www.erisian.com.au/bitcoin-core-dev/, http://gnusha.org/bitcoin-core-dev/ | Meeting topics http://gnusha.org/bitcoin-core-dev/proposedmeetingtopics.txt / http://gnusha.org/bitcoin-core-dev/proposedwalletmeetingtopics.txt
< core-meetingbot> Meeting ended Thu Nov 26 19:51:25 2020 UTC.
< michaelfolkson> I'm guessing there will be no Core dev meeting until it can be done in person? People aren't keen on video-conference-like things
< MarcoFalke> michaelfolkson: IRC works better than video
< sipa> i'm personally not particularly interested in non-in-person coredev
< michaelfolkson> Fair enough
< michaelfolkson> I think it is IRC < Video < In person for engagement. But IRC definitely most convenient
< jonatack> same for me as MarcoFalke and sipa ^
< hebasto> ^ same
< michaelfolkson> To be clear I wasn't proposing video instead of IRC for the weekly meetings lol. Only as a possible substitute to the in person Core dev meeting until Covid is over
< michaelfolkson> Great video ajonas. Interested in how next year's will compare
< michaelfolkson> I'm guessing at least some of this has improved. Signet and BIP 157 have made progress. Coin selection less so presumably
< michaelfolkson> And process separation and fuzzing PRs struggle for review
< bitcoin-git> [bitcoin] emilengler opened pull request #20512: doc: Add bash as an OpenBSD dependency (master...2020-11-doc-build-openbsd-bash-dependency) https://github.com/bitcoin/bitcoin/pull/20512
< bitcoin-git> [bitcoin] bitcoinhodler opened pull request #20513: Release notes: remove mention of default wallet creation (master...no-default-wallet) https://github.com/bitcoin/bitcoin/pull/20513
< aj> wumpus: any thoughts on "bitcoin-util grind" ? another option might be to add "bitcoin-cli -grind=HEADER" with the idea being to move it into bitcoin-util in a later PR that also includes other useful stuff, like psbt commands?
< bitcoin-git> [bitcoin] bitcoinhodler closed pull request #20513: Release notes: remove mention of default wallet creation (0.21...no-default-wallet) https://github.com/bitcoin/bitcoin/pull/20513
< bitcoin-git> [bitcoin] sipa opened pull request #20514: Use addrv2 serialization in anchors.dat (master...202011_v2_anchors) https://github.com/bitcoin/bitcoin/pull/20514
< luke-jr> wumpus: not a sensible one https://dpaste.com/8KGS42GH5
< luke-jr> I mean, if this were correct, it wouldn't matter whether it was run in Valgrind or not XD
< luke-jr> oh lovely, it goes away if I LogPrintf the pointer -.-
< bitcoin-git> [bitcoin] sipsorcery closed pull request #20508: IGNORE: Testing appveyor CI build with pre-built dependencies (master...msvc-vcpkg-prebuilt) https://github.com/bitcoin/bitcoin/pull/20508
< michaelfolkson> Don't know why the trolls are targeting you today luke-jr. On behalf of humanity I apologize
< luke-jr> michaelfolkson: me either :/
< sipa> wumpus: qt compiles now, "apt install --reinstall qtbase5-dev" was enough to fix it
< luke-jr> O.o
< sipa> luke-jr: somehow i was missing libQt5Core.so (i only had .so.5)
< luke-jr> weird
< luke-jr> hrm, -O0 doesn't crash in Valgrind
< luke-jr> not sure this tells me anything
< sipa> luke-jr: are you mixing sanitizers and valgrind?
< luke-jr> sipa: just ubsan
< sipa> that's certainly the least invasive
< luke-jr> I suppose I could rebuild without it just to see
< luke-jr> but if it fails to reproduce, I'm not sure UBSan is at fault even then
< luke-jr> LogPrintfs down to FastRandomContext::rand64 leave the crash intact and show non-null ptr
< luke-jr> LogPrintfs in FastRandomContext::FillByteBuffer or ChaCha20::Keystream show the same non-null ptr and fix the crash
< sipa> luke-jr: i mean... if it were actually trying to write 4 bytes to nullptr, your process would segfault
< sipa> so if it works by just running, and not inside valgrind (both with ubsan enabled), it's a problem with the combination of the two
< luke-jr> sipa: it does segfault :P
< luke-jr> inside valgrind*
< luke-jr> but not without UBSan
< luke-jr> strange
< sipa> luke-jr: but does it segfault *with* ubsan, but outside valgrind?
< luke-jr> sipa: no
< luke-jr> but that could just indicate a timing issue
< sipa> luke-jr: could be, indeed
< luke-jr> in the meantime, the actual issue I was *trying* to debug doesn't occur in Valgrind x.x