< meshcollider>
would it make more sense to have the rpc cookie file stored in the "files" argument section or the "rpc" argument section
< karanlearns>
hi, i cloned from github and installed on my online computer. i also added a few commits to create a segwit address
< karanlearns>
i then copied the bitcoin-qt file from bin folder to my other offline computer.
< karanlearns>
i get this error when i try to run latest source bitcoin-qt from offline computer
< karanlearns>
error while loading shared libraries: libboost_system.so.1.63.0: cannot open shared object file: No such file or directory
< rafalcpp>
karanlearns: move this question to #bitcoin imo. However, the problem seems to be that you do not have libboost_system installed (system-wide) on the target offline computer
< karanlearns>
rafalcpp: this was working fine when i copied the 0.15.1 release bin file to offline computer
< karanlearns>
earlier - i had copied bitcoin-qt from bin/ of the release 0.15.1 on my offline computer and it worked fine.
< karanlearns>
now when i cloned from github, added one commit, ran make and make install, and then copied the bitcoin-qt file from bin to the offline computer - i got an error
< karanlearns>
"error while loading shared libraries: libboost_system.so.1.63.0: cannot open shared object file: No such file or directory"
< rafalcpp>
karanlearns: you mean the officially released binaries worked?
< karanlearns>
rafalcpp: yes
< karanlearns>
then i needed one commit not present in released binary. so i cloned, added the commit, make, install
< karanlearns>
and then copied the bitcoin-qt file from src folder first, then tried with bitcoin-qt under bin folder
< karanlearns>
copied to offline computer and bitcoin-qt gives error
< rafalcpp>
karanlearns: official binaries are from Gitian. It could be building things differently than a normal make, e.g. statically linking some libraries like boost_system, while your normal ./configure + make does not
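A quick way to see which shared libraries the copied binary expects at runtime is ldd, run on the offline machine (the path below is just wherever the binary was copied to):
    ldd ./bitcoin-qt
    # an entry like "libboost_system.so.1.63.0 => not found" means that library
    # must either be installed system-wide or be linked statically into the binary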
< rafalcpp>
karanlearns: target computer is 64 bit PC right?
< karanlearns>
ok
< karanlearns>
yes
< karanlearns>
tails os = linux 64 bit
< rafalcpp>
karanlearns: and you build on what, also 64 bit PC, 64 bit linux?
< karanlearns>
so i just need to do make host = <target>
< karanlearns>
instead of make
< karanlearns>
yes
< karanlearns>
rafalcpp: yes
< arubi>
it's more than that
< arubi>
you have to build the stuff in depends too
< rafalcpp>
if you build and run on same architecture (64 bit intel/amd, 64 bit linux os) then the host=... option is not needed, skip it. Continue to the paragraph below about installing dependencies
< rafalcpp>
arubi: I'm not sure if that README alone addresses his issue. Doesn't he need to either 1) install libboost on the target OS, or 2) use options to make the build be a [partially] static one?
< karanlearns>
everything works great on my online computer already from cloned,modified source
< rafalcpp>
karanlearns: easiest would be imo to install libboost package (not -dev, just the regular one) on the target offline machine
< arubi>
iirc, it's cd into the depends dir, run make (with host set or not), go back to the root dir, run configure with the prefix flag set to the depends build dir, run make
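A minimal sketch of the depends flow arubi describes, assuming an x86_64 Linux host; the HOST triplet and prefix path are illustrative:
    cd depends
    make HOST=x86_64-pc-linux-gnu   # or plain `make` to build for the current machine
    cd ..
    ./autogen.sh
    ./configure --prefix=$PWD/depends/x86_64-pc-linux-gnu
    make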
< karanlearns>
now i need to build this for my target system so that target system doesnt look for libboost_system.so
< arubi>
it's not that the target is wrong, the binary you built is linked dynamically
< arubi>
you'll want to use the depends system to build statically with the proper versions of the libs
< rafalcpp>
karanlearns: yeap try that method with cd and make in deps first as above; let me know if it worked :)
< karanlearns>
arubi: thanks - i am trying this out.
< karanlearns>
rafalcpp: thanks, i shall report back.
< sipa>
karanlearns: you can do a depends build if you want release-like binaries without the overhead of gitian's deterministic build system
< arubi>
already linked ^
< sipa>
ok, cool, i didn't read backlog
< rafalcpp>
why does Bitcoin choose to link libc dynamically? any pros/cons?
< contrapumpkin>
on macOS, it's effectively required to link it dynamically, to be well behaved. Not that everyone respects that...
< contrapumpkin>
are you concerned about something in particular?
< rafalcpp>
contrapumpkin: I'm generally learning how Bitcoin chooses to use static vs dynamic linking and why so
< contrapumpkin>
do you understand the pros and cons in other contexts?
< contrapumpkin>
I don't think it's all that different for bitcoin, unless you're concerned about someone injecting malicious code by swapping out a dynamically linked dependency. But if they're futzing with executable code on your computer, you're probably screwed anyway (they could do LD_PRELOAD, poke around in memory, and various other shenanigans to mess with your running node)
< sipa>
rafalcpp: release binaries have it statically, no?
< sipa>
or, as BlueMatt tells me irl "because otherwise resolv.conf doesn't work properly"
< contrapumpkin>
because some glibcs interpret nsswitch.conf differently?
< rafalcpp>
sipa: no, the released (Gitian) binaries dynamically link libc, also librt, libgcc, libpthread (and ld), and libm in bitcoind but not in -cli (as ldd bitcoin-cli shows)
< contrapumpkin>
those are all pretty basic runtime libs
< rafalcpp>
indeed. Though some could be moved to static too, so I wondered about this reasoning
< contrapumpkin>
I take it you're only asking about the linux situation?
< rafalcpp>
contrapumpkin: nope, for all platforms too
< contrapumpkin>
on macOS, those libraries are all lumped into one called libSystem (minus libgcc/librt which are replaced with their LLVM counterparts), and you're expected to dynamically link against it, because Apple doesn't commit to a syscall ABI from the kernel
< contrapumpkin>
the big exception here is Go, who ignored the advice and wrote their own syscall wrappers, so go binaries compiled before 1.7 will break on recent macOS
< tyrick>
where is the new segwit UI code? I checked out sipa201709_segwitwallet2
< tyrick>
But not seeing new UI there
< sipa>
tyrick: it'll give you segwit addresses by default everywhere
< contrapumpkin>
oh, is it using bech32 now?
< sipa>
you can control the type of addresses with the -addresstype command line / config option
< sipa>
contrapumpkin: p2sh-p2wpkh by default
< sipa>
bech32 if you ask for it
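For reference, the option sipa mentions can be given on the command line or in bitcoin.conf; the exact value strings here are an assumption based on the description above:
    bitcoind -addresstype=bech32   # or a p2sh-wrapped segwit type for the default behaviour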
< contrapumpkin>
nice
< tyrick>
nice job!
< contrapumpkin>
this is to shut up all the people asking for segwit by default?
< contrapumpkin>
erm, I mean, to improve adoption
< sipa>
contrapumpkin: the interesting thing is that about 90% of the complication in that PR is dealing with backward compatibility
< contrapumpkin>
jb55: from a similar question I asked yesterday,
< contrapumpkin>
[13:31:53] <andytoshi> to the extent that they're talking about anything specific, i believe they're talking about the aggregate signature proposal that gmaxwell sipa and myself are working on
< jb55>
nice
< rafalcpp>
so cool, so there could then be services to recover dust for various people, and pay them out of band, e.g. by fiat, or by charging their LN?
< helpplx>
hi, i tried to send a tx from bitcoin core with too low a fee it seems, it shows as 0/not broadcasted yet. trying to use pushtx services shows the fee is too low. it deducted the balance from the bitcoin-core wallet.. what should i do? restart the wallet? if so will the funds return there? thanks, sorry
< helpplx>
its 25 btc
< helpplx>
:(
< helpplx>
" Error sending transaction: insufficient priority and fee for relay. " blockcypher push tx shows, and blockchain pushtx shows Validation Error: Insufficient fee. Please try again with a higher fee... bitcoin console showsTX decode failed (code -22) when broadcasting with sendrawtx
< alf1>
hi!
< alf1>
anyone dev here?
< alf1>
or support guy?
< contrapumpkin>
alf1: ask your question and if someone can answer, they will (or will tell you where to ask instead)
< contrapumpkin>
this is not a support channel though
< helpplx>
im sorry too for asking a support question but its a 25 btc transaction. sendrawtransaction fails, i dont understand if the btc are still in my wallet if i restart the bitcoin core wallet..only this..thanks guys
< helpplx>
..+
< jb55>
helpplx alf1: try asking on bitcoin.stackexchange.com
< helpplx>
thanks but its really a "yes" or "no" question, the devs here know this.. i used the bitcoin-core wallet, set the fee to low (24 hours) and this mess happened. balance is deducted but tx not broadcasted. do i need to restart btc core to get those 25 btc (390k $) back into the sender wallet? thanks for the work.
< provoostenator>
contrapumpkin: I made PR #11991 to add a bech32 checkbox in the GUI, which is on top of sipa's changes.
< alf1>
okay, i'll try: i have transferred btc from bitpanda.com to my bitcoin core wallet (0.15.1). (transaction ID 41354056bfe77f201d1aa098b2a2b34505aa9d4812935c44cf66a417abcde3ed). but i cant work with the btc, because its still pending. im still waiting for it to become "available". i tried -rescan, but it isnt working. can anybody help?
< helpplx>
is it this hard to get a yes or no from the official irc channel of the software i just used to send 390k$ and got issues? sorry but holy shit im stressed out
< rabidus>
calm down. official channel is #bitcoin
< contrapumpkin>
helpplx: short answer is that the core client never gets rid of your private keys unless you delete them yourself, so if the txn really wasn't broadcast, the money is still yours. If it was broadcast and the client is just being weird and not showing you that, then the money is going where you wanted it to. I don't see many chances for bad outcomes
< rafalcpp>
helpplx: this channel is only about C++/etc developers who code bitcoin core client. Ask in #bitcoin. And as others say, you can NOT lose your money by sending it with too low fee.
< helpplx>
thanks, tx shows as "Status: 0/offline, has not been successfully broadcast yet" on my wallet (sender).. so i can safely restart it? and funds will be returned..
< helpplx>
after restarting it shows the same..balance deducted..shit..
< provoostenator>
I might be witnessing #10646 live. Trying to gather some useful info. What aspect of catching up on the latest blocks uses gigabytes of disk I/O but <5% CPU and 350 MB memory, while hardly decreasing the number of blocks left for > 10 minutes?
< provoostenator>
Not pruned. I'm running a bitcoind instance in the background, but not on the same network. Also running btcd in the background. So there's probably a lot of disk I/O getting in the way from these competing things.
< provoostenator>
But the machine is very responsive otherwise.
< provoostenator>
Mmm, it also only has 2 peers, that's odd
< provoostenator>
It only received data from one of those peers, and just 5 MB
< helpplx>
im importing a full datadir from another client (same linux, diff laptop), and keeping only the original wallet.dat. will this return my 25 btc to the wallet? thanks
< sipa>
helpplx: this channel is not for support, take this elsewhere
< provoostenator>
getblockchaininfo takes 64 seconds to respond, but UI remains responsive.
< helpplx>
its an issue of the wallet, it didnt warn me. i selected "low fee, 24 hours", and it created a transaction that the network doesnt accept. its not my fault, at least add a warning. where to ask? #bitcoin is sandboxed, nobody replies, its a joke
< provoostenator>
Oh and as I said in the Github ticket, this is the SegWit wallet branch, though slightly outdated. I'll make sure to update it so I can get more useful info if it happens again.
< sipa>
provoostenator: is the I/O due to bitcoin core, or from other processes?
< provoostenator>
BitcoinQT
< provoostenator>
Bitcoin Core wrote 3.2 GB so far and read 4 GB, not sure how much bitcoind and btcd read/wrote in the same time period.
< sipa>
is it writing the whole time, or only reading, or writing in batches?
< sipa>
bitcoind == Bitcoin Core?
< provoostenator>
No, I meant "Bitcoin Core" when I said "BitcoinQT" (the name of the GUI process)
< provoostenator>
It seems to be reading and writing at the same time, 158 reads/sec, 61 writes/sec. Not sure how aggregated that is.
< provoostenator>
I can run top with your favorite arguments...
< sipa>
the expected behaviour is that it's reading pretty much the whole time, and only writing when flushing
< sipa>
you can see the flushing by seeing the size of the dbcache (in the UpdateTip log line) go down
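One way to watch those UpdateTip lines, assuming the default datadir on Linux (the exact log format varies by version):
    tail -f ~/.bitcoin/debug.log | grep UpdateTip
    # each UpdateTip line includes the current dbcache size; a drop back toward
    # zero after a long pause indicates a flush has just completed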
< provoostenator>
And I assume you'd see memory use drop during a flush? I don't see that. It's climbing slowly, maybe a couple of MB per minute, at about 400 MB now. Total system memory usage is only 9 GB out of 16.
< helpplx>
it still shows the deducted balance wtf
< sipa>
it should go up until it hits the limit, and then drop back to zero
< helpplx>
what can i do seriously? this is bad
< sipa>
helpplx: i'm sure this is a serious problem for you, but this is not the place to ask. people here are at work and have their own priorities. on stackexchange or other fora there are far more people who can help you
< provoostenator>
Right, that's not happening; and it really shouldn't hit the limit of 5 GB anytime soon. Do you mean debug.log or another file? I don't see any "tip" entries for this session.
< sipa>
debug.log should always contain those entries
< helpplx>
you understand thousands of people could suffer the same "bug" where the wallet creates an invalid tx? if 25 btc is not important i dont know what is lol. i just asked what to do, like "that tx will return to you in X hours, chill" or "do this or that, fine" is ok.
< sipa>
helpplx: i don't have time to look into all details of your situation, but i suspect it will just eventually either confirm, or be cancellable using abandontransaction
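For reference, the call sipa mentions can be issued from bitcoin-cli or the GUI debug console; the txid is a placeholder:
    bitcoin-cli abandontransaction "<txid>"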
< provoostenator>
Note the block index 369541ms on line 44, which I assume is where the UI was hanging
< sipa>
that's 6 minutes
< sipa>
that may just be the flush
< provoostenator>
Meanwhile disk I/O is now at 3.9 / 5.22 GB (from 3.2 / 4 GB 20 minutes ago)
< provoostenator>
(normally I would just kill everything in the background and things generally get better, but I'll leave those on now)
< helpplx>
Transaction not eligible for abandonment (code -5)
< sipa>
helpplx: that means the transaction is in the mempool and will confirm when a miner picks it up
< sipa>
helpplx: in any case, either the transaction goes through or it does not; you don't lose money
< sipa>
helpplx: now, please, go to the proper forums
< sipa>
provoostenator: i'm confused, what is the unexpected behaviour you're observing?
< sipa>
is it during a flush?
< sipa>
you'd expect it to do several GB of writes during a flush
< provoostenator>
Ok, so you're saying it's been flushing for the past 30 minutes? Flushing what exactly? Is there any way to know for sure?
< sipa>
are there new lines being produced in the log?
< sipa>
flushing the utxo changes
< provoostenator>
yes, every time it receives a block
< sipa>
flushing is blocking
< sipa>
so no
< sipa>
you wouldn't see anything during a flush
< provoostenator>
Right, so there's some other reason why it's slowly reading and writing gigabytes of data...
< sipa>
there shouldn't really be any writes at all
< sipa>
apart from writing the new block to disk
< sipa>
oh, with txindex enabled you'd see much more writes
< sipa>
is that turned on?
< provoostenator>
Oh yes, I probably should have mentioned that.
< sipa>
yes, that will kill performance
< provoostenator>
That's an understatement.
< sipa>
i don't believe actual progress is that much slowed down by it
< sipa>
but it's not optimized in any way, really
< provoostenator>
txindex is useful if you want to run Lightning against your own full node. So presumably more people will run into this, if performance really is a problem. Although I should compare against a scenario of not having other processes with disk i/o in the background.
< provoostenator>
Does it have to rewrite the index every time a tx comes in?
< provoostenator>
So does lnd I believe, although they currently don't support bitcoin core yet.
< provoostenator>
btcd is taking two weeks to just do an IBD, because it doesn't have any of Core's recent performance enhancement stuff. But I've never tried that without txindex=1
< sipa>
well it's ridiculous to assume end users can run with txindex imho
< sipa>
it's for debugging or running a rudimentary explorer-like thing
< provoostenator>
sipa: is it inherently resource intensive?
< jb55>
I want to write a standalone indexing daemon but I wonder how much internal state I would need...
< provoostenator>
lnd will let you connect to somebody else's node fairly easily, but that uses the still experimental neutrino protocol.
< provoostenator>
eclair needs RPC and ZMQ and sends and receive directly using the Core wallet, so hard to avoid the problem there.
< provoostenator>
I don't know what they need the index for, I'd have to dig a bit more and maybe file some Github tickets there to make them rethink that.
< molz>
lnd requires txindex in bitcoind too
< provoostenator>
molz: yes, but in *somebody else's* bitcoind :-)
< sipa>
provoostenator: just the idea that you'd need the entire history of the currency for any end user application is already ridiculous - it being fully indexed even more so
< provoostenator>
I'll ask the Eclair folks as well once I know a bit more why lnd needs this. Anyway, I'll let it run for a few more hours here... will let you know if anything interesting happens.
< alpha_red>
disconnect
< gmaxwell>
if something needs txindex that means that it is incompatible with pruning, which means that it is at least eventually incompatible with decentralized operation.
< provoostenator>
gmaxwell: it could at least use a txindex that starts at the block height of the oldest open channel I assume.
< provoostenator>
But as I said, I have no idea why they need it at all. Hopefully there's a better approach possible.
< provoostenator>
My guess would be that they're not tracking any state and just tell the node "hey, have you seen this specific tx id yet?". And if so, tell the user the channel just got closed / they got cheated.
< cluelessperson>
why use berkeleydb for wallet.dat ?
< luke-jr>
cluelessperson: only because nobody has done the work of replacing it, basically
< zelest>
lets migrate it to mysql! *trollface*
< * Randolf>
votes for PostgreSQL
< cluelessperson>
luke-jr: Ah. Well my intention isn't to disrespect. I just don't know why it would have been chosen. Perhaps there was some discussion about efficiency or large scaling for vendors, I dunno. I'm a HUGE fan of json, sqlite, postgres myself.
< zelest>
any relation database sounds a bit overkill tbh
< cluelessperson>
zelest: not for exchanges.
< zelest>
true
< zelest>
but for a single users' wallet.dat
< cluelessperson>
zelest: I've had cases where people used tens of thousands of addresses.
< cluelessperson>
has to scale
< cluelessperson>
luke-jr: Can you help me understand the skills I need to help? I'm planning on taking classes, but that's going to take me months-year to even start working in c++
< eck>
berkeleydb can handle tens of thousands of database entries too
< cluelessperson>
eck: it's just impossible for laymen to use/troubleshoot in this state.
< zelest>
i dunno, i might lack an understanding of the bitcoin protocol, but having a whole database because a few corner cases have to scale feels a bit bloated tbh :o
< cluelessperson>
outside of through bitcoin core itself
< cluelessperson>
Hell, I'm doing this in python
< Randolf>
For a Wallet application written in Java, I'd lean heavily to using the H2 database engine for a high-performance pure-Java SQL option (it has a built-in AES encryption option that encrypts the entire database file): http://www.h2database.com/
< cluelessperson>
for i in w.items(): print(i) and it's 1000 lines of human unreadable info. :/
< zelest>
this just reminds me of when people setup mail servers.. they demand a fully configured database server and yet they setup a single domain with 3 mail accounts.. that's it.. :P
< zelest>
but yeah, mayhaps have the option to use different database backends, if needed.. might be a bit much to support though?
< Randolf>
zelest: That's where you start getting into setting up a "driver" layer so that others can contribute support for their favourite databases in a modular fashion.
< * zelest>
nods
< luke-jr>
cluelessperson: BDB isn't a relational database; Satoshi chose it, so nobody will have a certain answer why
< sipa>
cluelessperson: the wallet loads all data in RAM anyway, the file format is pretty much irrelevant, it could be a text file
< sipa>
cluelessperson: please read the gist i linked to in the segwit pr
< gmaxwell>
cluelessperson: redesigning things you do not understand is a bad reflex.
< cluelessperson>
gmaxwell: I feel that's incredibly insulting and unhelpful. It's a wallet file that stores keys, what do I not understand about it?
< cluelessperson>
what's hard to understand about it?
< sipa>
cluelessperson: that's hardly the only thing it stores
< cluelessperson>
sipa: what do you feel I'm missing? transaction cache, block height last seen?, hash of the wallet file so if it's modified, it knows to rescan the blockchain ?
< cluelessperson>
or utxo
< sipa>
all of those and more
< sipa>
redeemscripts, public keys
< sipa>
pre generated keys / keypool
< sipa>
labels
< sipa>
timestamps, key birthdates
< phantomcircuit>
not to mention most of the data is binary and storing it in json is a huge blowup
< sipa>
cluelessperson: and no offense, but if you need to ask "what's so hard about it", you clearly haven't put much effort into understanding what it is doing now
< sipa>
cluelessperson: that may be fine if you'd want to design something from scratch, but we're stuck with pretty demanding compatibility requirements (you can still load a 0.2.10 wallet.dat file and it will work)
< cluelessperson>
sipa: I'm suggesting a framework to make life easier for users using bitcoin. Extracting and dealing with keys at this point is inevitable and currently painful. All those things you mentioned fit into this framework.
< sipa>
cluelessperson: that still doesn't give a helpful upgrade scenario
< sipa>
the difficulty is not in designing a storage scheme, but in how it fits into all current use cases
< cluelessperson>
This is more of an example of how I want to make life easier for users, by no means would it be exactly this if I seriously suggest a change. :/
< sipa>
well, please go read the gist i linked to in the segwit pr
< cluelessperson>
sipa: sorry, I don't see a link from you
< sipa>
it talks about how things work now, and how they can move towards something that's more like what you're describing
< sipa>
i'm on my phone in a train; it's linked in the segwit wallet pr
< cluelessperson>
ah, I'll find it then
< sipa>
also, no we're not seriously going to store wallets in JSON
< sipa>
though something like what you're describing may be useful as a dump/import format
< gmaxwell>
cluelessperson: sorry man, but when you batter your head against something, then don't understand it, and respond by offering a redesign, _THAT_ is insulting and disrespectful. It's sending a message that everyone else is such idiots that they couldn't manage to do it the simple way that you just tossed out. That is seldom the case; usually when someone doesn't understand something that's
< gmaxwell>
because they haven't fully internalized the requirements, so they have no idea what it's doing... and that is why attempting to redesign when you don't understand things is a bad reflex.
< cluelessperson>
gmaxwell: You're right, I'm sorry. It's not really my intention to throw out your expertise. I just deal with users day in and day out that have difficulty accessing their keys, which I feel is inevitable.
< sipa>
yes, there is no doubt that things can be improved a lot
< sipa>
but changing things in this context is also very hard
< cluelessperson>
agreed.
< gmaxwell>
"difficulty accessing their keys"--
< gmaxwell>
I have no freeking idea what that means.
< sipa>
letting users access their keys was never a design goal
< sipa>
letting them manage them at a high level however perhaps should be
< cluelessperson>
gmaxwell: Currently, the only method that seems reliable for exporting private keys, is to get the bitcoin core binary, start it, and use the console to dumpwallet.
< gmaxwell>
(1) there is a wallet dump which exports keys directly. (2) for HD wallets there probably should eventually be no private keys in the wallet except for master keys. (3) It is extraordinarily dangerous for users to work with keys directly, and it has resulted in losses of thousands of bitcoins; manually handling keys is not something we should encourage.
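The wallet dump gmaxwell refers to in (1) is the dumpwallet RPC; the output path below is a placeholder, and the resulting file contains plain-text private keys, so it must be handled with extreme care:
    bitcoin-cli dumpwallet /tmp/wallet-dump.txt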
< cluelessperson>
which often leads to misunderstanding like, "should I let it sync before I can export?" and things like that, there's a phenomenal amount of ignorance ;)
< gmaxwell>
cluelessperson: yes, and that is how it has to be ultimately; because the keys don't even need to be stored in the wallet files.
< gmaxwell>
some of that misunderstanding is due to the overly aggressive initial sync screen, which could be clarified some.
< cluelessperson>
gmaxwell: the problem is that the multiple softwares/wallets all have different methods of using/storing/importing keys. that's what's forcing users to use them directly.
< gmaxwell>
They shouldn't be doing that, they should transfer funds. manually mucking with keys is supremely dangerous and for unsophisticated users will reliably result in funds loss.
< gmaxwell>
not to mention that keys from one program may not even be compatible with anything else.
< cluelessperson>
Maybe I should write some procedure paper based on this topic.
< cluelessperson>
gmaxwell: Another reason is that users sometimes go through a lot of effort to memorize seeds. They don't want to send to another wallet, they want to keep it.
< cluelessperson>
that's what causes moving keys between softwares.
< gmaxwell>
you cannot use different 'seeds' in different wallets. They aren't compatible and cannot reasonably be because the functionality is different.
< Bosma>
Private keys have use now because they're used to claim fork funds.
< cluelessperson>
Bosma: paper wallets
< sipa>
Bosma: yeah &$#@!
< gmaxwell>
cluelessperson: which no one should ever be using.
< gmaxwell>
cluelessperson: and which are unrelated to bitcoin core in any case. (they cause a desire for _import_ which we support, not export)
< cluelessperson>
gmaxwell: Another idea comes to mind. Suppose I should focus on writing up a list of reasons *why* I felt I'd want to change the wallet.dat structure. Part of it was to make it portable, modifiable, and allow users to just insert new keys at whim, from Electrum, BIP39, xpriv, xpub, WIF, mini, etc.
< gmaxwell>
that cant work.
< cluelessperson>
Honestly, it's not the wallet.dat that's the problem; it's that I'm asking for more features, and I haven't clearly presented what those requested features are.
< cluelessperson>
I'm confused how that can't work. Electrum does that to an extent.
< jb55>
one of the first things I did was get my keys out of core, never felt comfortable having them in some arbitrary binary blob. I never recommend the core wallet to new people for that reason, I've seen so many lose their hd/etc. I force them all to get trezors now :P
< sipa>
i agree there
< sipa>
i don't think compatibility with other wallet software, or allowing inserting keys at a whim, is a goal
< sipa>
but there are benefits to a clearly understandable structure, which we currently really don't have
< cluelessperson>
sipa: The only reason I mention electrum is because they did derivation first, so I consider them a standard.
< sipa>
cluelessperson: and there's no reason to assume we'll want to be compatible - or if we do, that we can remain so
< jb55>
hopefully we can crunch out a clean wallet upgrade path for hd hww's <> core soon. It's so hairy though gahh
< sipa>
compatibility is nice when possible, but generally different wallets have such different ideas about what even constitutes a wallet that i don't think it's a worthwhile goal on its own
< sipa>
jb55: yes!
< cluelessperson>
Here's the thing
< cluelessperson>
Pretty much everything uses BIP32/44 | xpriv,xpub right?
< sipa>
no
< sipa>
bip32 yes
< jb55>
sipa no like 44
< cluelessperson>
so, we could just store that and allow importing functions in general
< sipa>
but electrum does not follow bip44 afaik
< sipa>
cluelessperson: have you read my gist?
< sipa>
(it suggests that)
< zelest>
I have a /usr/local/lib/db4 (4.6.21) and I try to configure with --with-incompatible-bdb, yet ./configure fails with "libdb_cxx headers missing". By looking at the config.log, it tries to include bdb4.8/db_cxx.h, how can I specify what directory to look in?
< sipa>
cluelessperson: but my gist also explains why doing so right now is very hard
< jb55>
zelest: try 4.8?
< sipa>
zelest: it needs at least 4.7 or 4.8
< sipa>
(i forgot which of the two is the minimum)
< zelest>
Ah, fair enough :)
< zelest>
then I give up completely on the ports version of bdb :)
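If a suitable BDB 4.8 is installed under a non-standard prefix, configure can be pointed at it explicitly; the /opt/db4 prefix here is only an example:
    ./configure BDB_CFLAGS="-I/opt/db4/include" BDB_LIBS="-L/opt/db4/lib -ldb_cxx-4.8"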
< zelest>
wumpus, around?
< zelest>
If I have a file I wish to change (quite a lot) and someone has already made a pull request touching the same file, how should I approach it? wumpus told me to "please base it on" the pull request. Anyone care to explain what that means or how I do that? Thanks
< goatpig>
zelest: pull the PR locally
< goatpig>
rebase your commits on top of it
< zelest>
Ah, and create a PR of that?
< goatpig>
im guessing you'd be submitting that way yes
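A sketch of the flow goatpig describes; the PR number and branch names are placeholders:
    # fetch the open pull request into a local branch
    git fetch origin pull/12345/head:pr-branch
    # replay your own commits on top of it
    git checkout my-changes
    git rebase pr-branch
    # then push the rebased branch to your fork and open a pull request from it
    git push -u myfork my-changes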