< dviola>
hello, the archlinux bitcoin-qt package maintainer builds bitcoin-qt using "--with-incompatible-bdb" and bdb 5.3.28, I have heard there could be potential problems by taking a wallet.dat that was created with newer bdb and then using it on a bitcoin-qt that was compiled with an older version of bdb, are there any tests I could run to know I won't run into issues?
< phantomcircuit>
dviola, 5.x has a different format than 4.8 which is not backwards compatible
< phantomcircuit>
if you open a wallet in 5.x it wont open in 4.x
< phantomcircuit>
so that's a one way trip
< dviola>
phantomcircuit: I see, so would it be better if I create the wallet using the bitcoin-qt builds from bitcoin.org?
< dviola>
I'm a little confused because I created a wallet.dat using Arch's build, then used the wallet.dat with bitcoin-qt from bitcoin.org, and it seems to have worked just fine, but I don't have funds in this wallet of course
< dviola>
I could still see the address and such
< dviola>
which was the same
< dviola>
just tested again
< luke-jr>
dviola: *using* the wallet.dat with bdb5 should break using it with bdb4.. no matter how it was created.
< dviola>
hrm
< dviola>
here's what I did: I installed bitcoin-qt with pacman, started a fresh ~/.bitcoin and wallet.dat, I created 5 addresses with names, etc. I copied the wallet.dat to my home dir, I downloaded bitcoin-qt from bitcoin.org, started that with the old wallet.dat and all the addresses were there
< dviola>
luke-jr: I understand, but isn't what I did the equivalent of what you said? I'm a little confused over this
< gmaxwell>
dviola: it should just fail to even open, always did in the past.
< gmaxwell>
maybe some time in BDB 5 they changed the format to be backwards compatible if shut down cleanly?
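A quick way to see what format a wallet.dat is in is to peek at the Berkeley DB metadata header. The sketch below is an illustrative assumption, not anything from the discussion above: it parses the btree magic (bytes 12..16) and on-disk format version (bytes 16..20) of a DB metadata page, byte order matching the creating host (little-endian on x86). Note that the 4.x/5.x incompatibility being discussed lives largely in the environment/log files, so matching .dat format versions alone does not prove compatibility.

```python
# Hedged sketch: read the magic and format version from a Berkeley DB
# metadata page. Offsets per the generic DB meta layout: magic at 12..16,
# version at 16..20 (assumed little-endian here).
import struct

BTREE_MAGIC = 0x00053162  # Berkeley DB btree magic number

def bdb_header_info(raw: bytes):
    """Return (magic, version) parsed from a DB metadata page."""
    magic, version = struct.unpack_from("<II", raw, 12)
    return magic, version

# Example against a synthetic header; a real check would read the first
# bytes of wallet.dat instead.
page = bytearray(32)
struct.pack_into("<II", page, 12, BTREE_MAGIC, 9)
magic, version = bdb_header_info(bytes(page))
print(hex(magic), version)  # 0x53162 9
```

On a real wallet.dat, a magic other than the btree value (or a hash magic, 0x00061561) would indicate a different access method, and the version field tracks the on-disk page format rather than the BDB release that wrote it.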
< phantomcircuit>
gmaxwell, does it have to handle reorgs correctly?
< phantomcircuit>
i dont think it would actually if you were saving address -> [block hash/tx index/out index]
< phantomcircuit>
but that's maybe slightly bigger than just address > txout
< phantomcircuit>
oh it's just for the utxo?
< phantomcircuit>
yeah i guess
< gmaxwell>
phantomcircuit: it's just for the utxo set, the PR is somewhat poorly named!
< CodeShark>
utxosbyaddress? :)
< CodeShark>
utxosbyscriptpubkey might even be more useful
< CodeShark>
I might take a look at it, it is a pretty nice feature
< gmaxwell>
assuming the database is constructed correctly (lets hope) the difference between by address vs by scriptpubkey is purely a RPC parameter one.
< CodeShark>
yeah, indeed :)
< CodeShark>
but we don't have address formats for all scriptpubkey types
< CodeShark>
how big is the extra index here?
< gmaxwell>
CodeShark: should be roughly similar in size to the utxo set.
< CodeShark>
it's something of a reverse lookup, no?
< CodeShark>
when validating we need to grab scriptpubkey from blockhash:txindex
< gmaxwell>
right.
< gmaxwell>
I don't recall what it implements, but
< CodeShark>
or txhash:outindex, rather ;)
< gmaxwell>
e.g. it could map(h(scriptpubkey)[64 bits]) -> [txid:vout,...] which would potentially be equal in size or smaller even though it's going to be more sparse.
< gmaxwell>
Well my right was responding to what you meant, of course.
< gmaxwell>
at least if I'd written this from scratch I would have done that kind of hash, probably with a per database salt to keep clowns from intentionally making your queries slow by causing collisions.
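The keying scheme gmaxwell sketches can be illustrated in a few lines. Everything below is an assumption for illustration: blake2b with a per-database key stands in for whatever keyed hash a real implementation would pick, and the in-memory dict stands in for the database.

```python
# Hedged sketch: index utxos under a salted 64-bit hash of the
# scriptPubKey, so an attacker cannot grind collisions to make queries
# slow. SpkIndex and its methods are hypothetical names.
import hashlib, os
from collections import defaultdict

class SpkIndex:
    def __init__(self):
        self.salt = os.urandom(16)          # per-database salt
        self.entries = defaultdict(list)    # h(spk)[64 bits] -> [(txid, vout), ...]

    def _key(self, script_pubkey: bytes) -> bytes:
        # 8-byte keyed hash: short keys keep the index roughly utxo-set sized
        return hashlib.blake2b(script_pubkey, digest_size=8, key=self.salt).digest()

    def add(self, script_pubkey: bytes, txid: bytes, vout: int):
        self.entries[self._key(script_pubkey)].append((txid, vout))

    def lookup(self, script_pubkey: bytes):
        # 64-bit keys can collide, so callers must re-check the returned
        # outpoints against the real utxo set.
        return self.entries.get(self._key(script_pubkey), [])

idx = SpkIndex()
idx.add(b"\x00\x14" + b"\xab" * 20, b"\x01" * 32, 0)
print(len(idx.lookup(b"\x00\x14" + b"\xab" * 20)))  # 1
```

This also shows why by-address vs by-scriptpubkey is purely an RPC-parameter distinction: both just derive a scriptPubKey before hashing.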
< CodeShark>
I'm a little ambivalent about using an in-process storage engine for things not required for full validation. I'd love to see an architecture with an external storage engine that can be configured for different kinds of queries. But I guess this index isn't too huge
< CodeShark>
pulling the consensus-critical stuff out to an external storage engine is perhaps a bit risky, though
< CodeShark>
and perhaps there are optimization issues as well here with the caching
< CodeShark>
I'm not too familiar with the cache stuff
< CodeShark>
the main advantage here is that different people could extend the functionality of the storage engine and queries supported without having to touch any consensus code at all
< CodeShark>
nor having to rebuild the validation engine
< CodeShark>
but we already have the existing RPC framework...which encourages just adding more in-process RPC calls
< CodeShark>
people always talk about keeping these indices external to bitcoind...but without a framework for it that people can easily extend everyone has to reinvent the wheel
< CodeShark>
...poorly :p
< gmaxwell>
You can always index whatever you want talking to things over the rpc. but there are plenty of uses for this just from the commandline.
< CodeShark>
no question it's useful. but there could be a bunch of other useful queries for app developers (or even for interactive use from commandline) - and having to merge another in-process RPC call every time is a bottleneck
< CodeShark>
or it encourages people to have to maintain their own forks of the entire project so they can customize it
< CodeShark>
as for an external index, we could get far better performance foregoing the RPC and using a different IPC
< CodeShark>
also, the indices could be built up during IBD
< CodeShark>
but these are longer term objectives - I still like this PR :)
< jonasschnelli>
Is it okay to use GPG's (v2.1+) Ed25519 or even secp256k1 keys for gitian signing?
< wumpus>
well, it's not forbidden, but I can't verify them right now (my gpg is at 1.4.20, ubuntu 16.04 lts)
< jonasschnelli>
Yes. It's probably too early.
< wumpus>
I can look at compiling my own for 0.14, but not for 0.13.2 today
< phantomcircuit>
wumpus, you can install gnupg 2.x
< phantomcircuit>
it's like gnupg-2 or something
< jonasschnelli>
gpg2
< wumpus>
phantomcircuit: oh!
< jonasschnelli>
I may start with signing with Ed25519, just to have some variety.
< gmaxwell>
jonasschnelli: distributions ship the stable version of gpg, mostly (exclusively?), so AFAICT relatively few people have it.
< wumpus>
phantomcircuit: jonasschnelli: that gets me 2.1.11
< jonasschnelli>
I guess for all ecdsa curves (including secp256k1) you need 2.1.16 (or 2.1.17?)
< wumpus>
gah, it doesn't use the debian alternatives system though, so I have to manually patch up scripts to use gpg2 instead of gpg
< jonasschnelli>
But watch out when using gpg2
< jonasschnelli>
It will upgrade your ~/.gnupg folder
< jonasschnelli>
And will no longer be backward comp.
< jonasschnelli>
So,... make a backup first
< wumpus>
oops
< jcorgan>
eww
< jonasschnelli>
maybe this was too late. :)
< CodeShark>
lol
< wumpus>
let's see :) I only looked at the help message, didn't encrypt anything yet
< wumpus>
good, v1 still works for verifying signatures
< wumpus>
I'll look at this for 0.14 okay
< jonasschnelli>
Yes. No hurry.
< jonasschnelli>
If secp256k1 is supported through GPG, this would allow signing over a hardware wallet without new curves added to them.
< wumpus>
yea that could be interesting, though it would have to assign a single purpose to keys, don't use keys for signing that are used for bitcoin
< jonasschnelli>
Yes. Indeed.
< jonasschnelli>
I'd also prefer Ed25519,... not supported by (all?) available hardware wallets I guess.
< wumpus>
ed25519 would have the advantage it can be used for ssh too
< jonasschnelli>
Oh. Yes.
< wumpus>
(even if you compile it without openssl :p)
< wumpus>
uploaded binaries and pushed 0.13.2 release announcement to the mailing lists, waiting for travis on bitcoin.org (https://github.com/bitcoin-dot-org/bitcoin.org/pull/1470), feel free to get the rest of the announcement cycle started
< jonasschnelli>
In a long shot, I could imagine BFD (filter digest / filter commitments) for filtering from untrusted peers, and BIP39 for filtering from trusted peers (once BIP150 has been established).
< jonasschnelli>
Even if BIP39 is not the most efficient way.
< sipa>
BIP39?
< sipa>
are you sure?
< jonasschnelli>
37! argh
< jonasschnelli>
Bloom Filter
< jonasschnelli>
But I think there is no solution for a non-BIP37 0-conf "filtering" without running into these privacy issues related to bip37
< gmaxwell>
jonasschnelli: for your work in 9238 just transfer all transactions, any filtering scheme will trash privacy, and transferring transactions is an aggregate of 14 kilobit/s.
< gmaxwell>
But displaying unconfirmed transactions in any non-validating wallet is just an invite to abuse.
< gmaxwell>
Many lite wallets that do this today are utterly crippled by it, because they also spend unconfirmed coins, so a single troublemaker giving them some fake payments makes them unusable. (or at least they did a half year ago.)
< jonasschnelli>
gmaxwell: Yes. Agree. But the user experience without displaying incoming funds will probably not be acceptable "in the field".
< gmaxwell>
please.
< jonasschnelli>
I don't know. I also don't like the 0-conf displaying. While working on the Core wallet's SPV mode, I just had the feeling that people will not use it because of the missing 0-conf display.
< gmaxwell>
I think it's a bad idea to release features that would require an emergency software update to disable them as soon as one bored person spends an hour making a few-line hack that sends out garbage transactions and spins up a bunch of sybil nodes.
< jonasschnelli>
0-conf / spv should probably only be possible in conjunction with BIP150 and a trusted peer (at home or somewhere)
< gmaxwell>
In any case, just send all the data. As mentioned it's 14kbit/sec. The history is really the only annoying part about that.
< jonasschnelli>
gmaxwell: do you mean send all the data if someone request a filtered mempool?
< jonasschnelli>
or just for tx invs?
< gmaxwell>
I mean don't do any tx filtering.
< gmaxwell>
Same as we do today. Please. Implementing spv mode in core doesn't mean implementing all the flaws and vulnerabilities in other software slavishly. That's pointless. If someone wants something that is going to blast out their addresses, validate nothing, etc.. they can already use bitcoinj based software.
< jonasschnelli>
Heh. True.
< gmaxwell>
:) at least the spv behavior in core can be private.
< jonasschnelli>
I have implemented the BFD relevant hooks. IsBlockRelevantToWallet(pindex).
< jonasschnelli>
gmaxwell: The only usability issues are: catching up a couple of weeks (144*<days> MB, takes a while) and the missing 0-conf. Maybe this is acceptable to preserve privacy.
< gmaxwell>
I think it's not bad, especially if we had better facilities for keeping it caught up in the background.
< jonasschnelli>
Though, I haven't fully understood the semi-trusted oracle idea for the BFD. I would expect to have the BFD in the coinbase...
< jonasschnelli>
But somehow the BFD is required for older blocks as well
< gmaxwell>
It's not really great to go from nothing to consensus rule if it can be avoided: No design survives first contact with users. Better to prove out the idea and refine it before its required, which is possible here.
< jonasschnelli>
From where would you get the BFD (maybe its not bloom filters in the end)? A new p2p command?
< gmaxwell>
just another type of element that you can getdata, perhaps more than one.
< jonasschnelli>
this would result in trusting the peer, right? The whole signing thing of BFDs would still be a thing? Or just compare BFDs from the data retrieved from other peers?
< jonasschnelli>
Probably easy to sybil you with fake data
< gmaxwell>
I thought you were referring to the filter itself and not the hash root... Unclear, multiple models are possible and _all_ are better than BIP37.
< jonasschnelli>
Right, tx withhold is also possible with BIP37.
< gmaxwell>
and asking multiple peers is very inefficient with BIP37.. needs n-times the bandwidth.
< gmaxwell>
jonasschnelli: but pulling data from peers is just one option, you could-- for example-- go get a hash root (root of a tree of digest hashes) from a website and pop that into a configuration.
< gmaxwell>
(and then use that for filtering the past-- then the future, past that root, you just fetch whole blocks)
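The "hash root from a website in your config" idea can be sketched with a plain Merkle tree over per-block filter digests: the node ships only the root, and any digest a peer hands over is checked against it with a branch. The tree shape, hashing, and function names below are illustrative assumptions, not a spec.

```python
# Hedged sketch: commit to per-block filter digests under one root,
# verify any single digest with a Merkle branch against that root.
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves):
    level = [h(x) for x in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last, bitcoin-style
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def verify(leaf, branch, index, root):
    acc = h(leaf)
    for sibling in branch:
        pair = sibling + acc if index & 1 else acc + sibling
        acc, index = h(pair), index >> 1
    return acc == root

digests = [b"filter-block-%d" % i for i in range(4)]
root = merkle_root(digests)
# Branch for leaf 2: sibling leaf 3's hash, then the hash of (leaf0, leaf1)
branch = [h(digests[3]), h(h(digests[0]) + h(digests[1]))]
print(verify(digests[2], branch, 2, root))  # True
```

With such a root in the configuration, a sybil peer serving fake digests fails verification, which is the property the fake-data concern above is about.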
< BlueMatt>
cfields: ohhh, static CNodeState::cs....missed the static part
< BlueMatt>
cfields: ok, I'll make it static in the pr and add a TODO: do something smarter
< BlueMatt>
cfields: oh, wait, no, really dont want to do that
< BlueMatt>
cfields: that would mean that #9375 + multithreaded processmessages wouldnt accomplish anything :/
< jonasschnelli>
Do you have any specification for that?
< BlueMatt>
cfields: hmmm, regarding #9441, do we really want to run the entire ProcessMessages loop (calling SendMessages potentially umpteen times) just to process multiple messages from the same node?
< bitcoin-git>
[bitcoin] laanwj opened pull request #9460: Fix a few typos in translated strings (master...2017_01_messages_update) https://github.com/bitcoin/bitcoin/pull/9460
< luke-jr>
wumpus: btcdrak: bad signature on sendy release ann
< wumpus>
luke-jr: are you checking the html or text attachment?
< wumpus>
in case of the html you should run gpg on the raw html file
< wumpus>
(both attachments verify OK here)
< luke-jr>
both
< luke-jr>
how do you run GPG on the raw HTML file, seeing as the HTML is outside the signature? :/
< bitcoin-git>
bitcoin/master a9d6151 Wladimir J. van der Laan: qt,wallet: Fix a few typos in messages...
< bitcoin-git>
bitcoin/master d45b21e Wladimir J. van der Laan: qt: Fill in English numerusforms...
< bitcoin-git>
bitcoin/master e0dc3d7 Wladimir J. van der Laan: Merge #9460: Fix a few typos in translated strings...
< btcdrak>
luke-jr: matches for me
< bitcoin-git>
[bitcoin] laanwj closed pull request #9460: Fix a few typos in translated strings (master...2017_01_messages_update) https://github.com/bitcoin/bitcoin/pull/9460
< wumpus>
that mail has a text/plain as well as a text/html mime attachment - just save them to a file and run gpg on them?
< luke-jr>
hmm, that works. I wonder what copy/paste is changing
< btcdrak>
Thunderbird also automatically validated it for me.
< jonasschnelli>
Same here
< luke-jr>
my mail client normally auto-validates, but it didn't like the formatting or something
< bitcoin-git>
[bitcoin] jonasschnelli opened pull request #9461: [Qt] Improve progress display during headers-sync and peer-finding (master...2017/01/qt_sync) https://github.com/bitcoin/bitcoin/pull/9461
< jonasschnelli>
luke-jr: I think squashing makes sense when one commit overwrites many lines from a former commit in the same PR.
< jonasschnelli>
But Okay for #8877
< gribble>
https://github.com/bitcoin/bitcoin/issues/8877 | Qt RPC console: history sensitive-data filter, and saving input line when browsing history by luke-jr · Pull Request #8877 · bitcoin/bitcoin · GitHub
< luke-jr>
jonasschnelli: there are some cases where I think it may make sense, yes
< jonasschnelli>
But you're right, squashing and losing "logical history" is not ideal
< cfields>
BlueMatt: yes, cs_vSend wasn't touched because of your PR. I've just been operating under the assumption that the lock around SendMessages will be removed by one PR or another.
< bitcoin-git>
bitcoin/master fc95daa Jonas Schnelli: Qt/RPCConsole: Save current command entry when browsing history...
< bitcoin-git>
bitcoin/master 9044908 Jonas Schnelli: Qt/RPCConsole: Don't store commands with potentially sensitive information in the history...
< bitcoin-git>
bitcoin/master de8980d Luke Dashjr: Bugfix: Do not add sensitive information to history for real...
< bitcoin-git>
[bitcoin] jonasschnelli closed pull request #8877: Qt RPC console: history sensitive-data filter, and saving input line when browsing history (master...qt_console_history_filter) https://github.com/bitcoin/bitcoin/pull/8877
< jtimon>
#9271 needs rebase but could still get some general feedback
< instagibbs>
can someone explain ProcessBlockAvailability? The comment for it is unclear: "/** Check whether the last unknown block a peer advertised is not yet known. */"
< instagibbs>
check whether an unknown block is unknown... ok!
< * luke-jr>
rebases #9152 to add history filter before he forgets ☺
< gribble>
https://github.com/bitcoin/bitcoin/issues/9152 | Wallet/RPC: sweepprivkeys method to scan UTXO set and send to local wallet by luke-jr · Pull Request #9152 · bitcoin/bitcoin · GitHub
< BlueMatt>
cfields: hmmm...yes, re: moving the loop into net_processing I figured that was just due to not complicating that pr too much, thanks
< cfields>
BlueMatt: np, thanks for review
< BlueMatt>
cfields: as for not having a per-node loop....hum...can we re-add it in net.cpp with two lines so that its not a (probably barely noticeable) regression?
< BlueMatt>
bool fMoreNodeWork = ... while (fMoreNodeWork) ProcessMessages
< cfields>
BlueMatt: I'm not sure there's any actual change from the current behavior
< sdaftuar>
instagibbs: ProcessBlockAvailability is for when you get block announcements via inv (before you have the header)
< BlueMatt>
cfields: you mean like from current master or from current pr?
< BlueMatt>
sdaftuar: we still have crap for that? can we just remove it and replace with "getheaders"
< cfields>
BlueMatt: oh.. it doesn't swallow all messages at once for fairness. I'm pretty sure emptying the node's full queue before moving on would be a DDoS vector
< BlueMatt>
cfields: I believe the improvements against master come largely from removing lock contention between the message processing loop and the read-from-wire loop?
< BlueMatt>
or do I have that backwards?
< BlueMatt>
cfields: re: swallowing from one node...that isnt how I read the old code?
< BlueMatt>
I mean certainly if we got a packet which only had one message we'd not process more than one message at a time, but if the loop is slow to go around (it appears to be sometimes due to various messages being slow) then we might have multiple to process per-peer
< cfields>
BlueMatt: sorry, by no change in current behavior, i meant the swallowing-one-message part. Obviously fixing the lock contention is a behavioural change :)
< BlueMatt>
and in that case I think we'd prefer to only call SendMessages less often (though, to be fair, doing them back-to-back is less likely to be slow than otherwise)
< cfields>
BlueMatt: take a look at the current code. I really don't think the new behavior is different, other than the case of corrupted messages
< BlueMatt>
cfields: to answer in a different way: why do we need to change from the current behavior
< BlueMatt>
cfields: whats the break you're referring to in that comment?
< BlueMatt>
(line numbers are confusing here)
< BlueMatt>
the !message.complete() one?
< cfields>
and yes, sendmessages needs to be broken up and fixed in lots of different ways, but imo not here
< cfields>
sec
< BlueMatt>
definitely agreed, not here
< cfields>
BlueMatt: whoops, that should be 2538
< BlueMatt>
just trying to figure out if re-adding a while somewhere in that pr is better or worse, since this is all we can hope for in 0.14 :)
< BlueMatt>
ohhhhhh, I hadnt seen /that/ break
< cfields>
yes. So the loop only continues in 2 cases, iirc.
< cfields>
bad hash, or uhmm.. bad start chars maybe?
< BlueMatt>
wait, so on current master we will continue the loop if the message is somehow corrupted, but otherwise wont? wtf.....
< BlueMatt>
oh, no, just bad hash
< BlueMatt>
yea, this makes no sense
< cfields>
BlueMatt: indeed, those are the cases that the node should be punished for, but instead they get a free pass.
< cfields>
go figure.
< BlueMatt>
well, i guess if your checksum is bad its not like we did any work for you anyway...
< cfields>
sure, but that should at least toss you to the back of the line
< cfields>
BlueMatt: so you agree now that the loop behavior isn't changed here significantly? (or, at least, only for the better?)
< BlueMatt>
yes, agreed
< cfields>
ok
< BlueMatt>
I suppose I wouldnt want to go audit for whether it was a dos vuln to change it in this pr
< BlueMatt>
so, good :)
< cfields>
heh
< cfields>
BlueMatt: as a follow-up for 0.14, if you're concerned about it, we could add a quick hack to quickly exit SendMessages if it hasn't been at least x msec since the last call
< cfields>
but i don't think there's any real need, since this has been the behavior all along
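The loop behavior BlueMatt and cfields settle on can be modeled in a few lines. This is a hedged Python model of the described semantics only (the real code is Bitcoin Core C++, and `process_pass` is a hypothetical name): each pass handles at most one good message per peer for fairness, and a corrupted (bad-checksum) message is skipped without ending that peer's turn.

```python
# Toy model of the ProcessMessages pass discussed above: one good message
# per peer per pass; a bad checksum gets a "free pass" and the loop
# continues to the next queued message.
from collections import deque

def process_pass(peers):
    """peers: {name: deque of (checksum_ok, payload)}; returns handled payloads."""
    handled = []
    for name, queue in peers.items():
        while queue:
            ok, payload = queue.popleft()
            if not ok:
                continue            # corrupted message: skip it, keep going
            handled.append((name, payload))
            break                   # fairness: only one good message per peer per pass
    return handled

peers = {"a": deque([(False, "junk"), (True, "tx1"), (True, "tx2")]),
         "b": deque([(True, "inv1")])}
print(process_pass(peers))  # [('a', 'tx1'), ('b', 'inv1')]
```

Draining a peer's whole queue in one pass instead of breaking after the first good message is the DoS vector cfields mentions: one chatty peer could starve the others.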
< BlueMatt>
yea, I'm less concerned about that...still hoping for parallel processmessages but....that seems stretchy now :/
< cfields>
:(
< BlueMatt>
depends on how many reviewers pop their heads up this week, I suppose
< cfields>
BlueMatt: other than the current PRs, how much remains there? I think that 9441 should cut down on the scary races a good bit?
< BlueMatt>
yea, I think after current PRs its just one or five std::atomics in CNode and then the one commit which fucks around with the ProcessMessages loop
< BlueMatt>
which would need rebased on your stuff, but probably wouldnt be too hard
< cfields>
these are atomics that fix more than just CNodeStats which has always been racy anyway?
< BlueMatt>
I havent looked as of the latest stuff so you'd know better than I
< BlueMatt>
I mean we should probably atomic-ify CNodeStats...
< BlueMatt>
easy enough
< cfields>
I'll check out your branch again
< BlueMatt>
cool
< BlueMatt>
I'll probably not get much done until thurs....early flight tomorrow so will likely be super tired during the day at rwc
< cfields>
yea, I was thinking about that yesterday. I think it may make sense to just actually keep the stats in CNodeStats with one lock. See nRecvBytes as an example
< cfields>
then when we need the stats, just do a quick lock and copy out
< BlueMatt>
iirc that like automagically ended up with lock inversions
< BlueMatt>
i think i looked into that, but dont remember now
< BlueMatt>
seems weird that it would, but....something something...
< cfields>
hmm, not sure how. I'll look again
< cfields>
BlueMatt: btw, you're good with the boost changes that 9441 include?
< BlueMatt>
aww fuck, got rebased, yea, I'm pretty sure nothing changed that I didnt think was reasonable or good...I'll try to take another look tomorrow, but I'm pretty sure that ack is still valid
< bitcoin-git>
[bitcoin] MarcoFalke opened pull request #9462: [qt] Do not translate tilde character (master...Mf1701-qtTransTilde) https://github.com/bitcoin/bitcoin/pull/9462
< cfields>
BlueMatt: np, thanks
< cfields>
BlueMatt: if i could fixup the racy stuff that would affect parallel processing, want me to PR? Or does that just complicate the current PR maze?
< BlueMatt>
cfields: up to you...I had assumed you were waiting on the current prs to clean up a bit, but if you can do it without stepping on your own toes feel free
< cfields>
ok. I'll at least try to get something ready on top of the queued-up changes. If it happens to not depend on them, I'll go ahead and submit it.
< BlueMatt>
sounds good, thanks
< BlueMatt>
I'll look into the same tomorrow on the flight if I'm not asleep the whole time
< cfields>
heh, ok
< BlueMatt>
ehh, by "the same" I meant redoing the parallel-processmessages pr
< BlueMatt>
that was super unclear....
< instagibbs>
is there a plan for multiwallet for 0.14, or is it too late
< gmaxwell>
we're not at the feature freeze yet, and it seems done +/- things that need to be fixed due to review.
< gmaxwell>
Several people have said they would review it.
< gmaxwell>
It would be really great if multiwallet and the utxo scriptpubkey index could make it in.
< BlueMatt>
luke-jr: good time to go rebase 8694 then :p
< luke-jr>
sure
< luke-jr>
oh, need #8775 still as well
< gribble>
https://github.com/bitcoin/bitcoin/issues/8775 | RPC refactoring: Access wallet using new GetWalletForJSONRPCRequest by luke-jr · Pull Request #8775 · bitcoin/bitcoin · GitHub
< luke-jr>
instagibbs: ^ could use review
< luke-jr>
(once those two get in, looks reasonable to split up 8694 further into base/Qt/RPC PRs)
< sdaftuar>
BlueMatt: if I understand 9375 correctly, I think that when there is a fork, and some nodes have tip A and others tip B, #9375 would cause all nodes to end up downloading both A and B.
< bitcoin-git>
bitcoin/master fb0c934 Luke Dashjr: Wallet: Let the interval-flushing thread figure out the filename
< bitcoin-git>
bitcoin/master 5394b39 Luke Dashjr: Wallet: Split main logic from InitLoadWallet into CreateWalletFromFile
< bitcoin-git>
bitcoin/master 2a524b8 Pieter Wuille: Merge #8776: Wallet refactoring leading up to multiwallet...
< gmaxwell>
\O/
< Chris_Stewart_5>
Can we have OP_1NEGATE used as a push op code for witness program versioning?
< sipa>
no, only OP_0, and OP_1..OP_16
< Chris_Stewart_5>
Interesting, thanks sipa
< Chris_Stewart_5>
Actually, is there any reason for this?
< sipa>
not really
< sipa>
it was considered sufficient :)
< luke-jr>
by some* maybe; IMO it was overlooked.
< Chris_Stewart_5>
Definitely a protocol 'quirk' since the BIP does say '1 byte push opcodes'
< luke-jr>
not worth the effort to fix at this point though
< sipa>
fix the bip :)
< sipa>
(to match the code)
< Chris_Stewart_5>
but it is so much easier to criticize! :P. I'll submit a pull req later..
< sipa>
thanks!
< sipa>
Chris_Stewart_5: the BIP clearly says "for 0 to 16", no?
< sipa>
(i missed that when suggesting you send a PR, sorry)
< Chris_Stewart_5>
sipa: What is '0 to 16'? Bytes? Technically OP_1 is 0x51
< Chris_Stewart_5>
I don't like the wording in that regard either.
< sipa>
the value being pushed
< Chris_Stewart_5>
Why not explicitly state what ops they are?
< sipa>
sure
< sipa>
i'm trying to find the misunderstanding
< Chris_Stewart_5>
instead having to do the conversion in your head?
< sipa>
if it isn't clear, reformulating helps
< Chris_Stewart_5>
I don't think I have a misunderstanding per se, but I think it can be clearer. Instead of having to infer the opcodes from the text I think they should just be explicitly stated
< Chris_Stewart_5>
Plus it falls more in line with the actual implementation IMO
< sipa>
fair enough
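Spelling out the opcode range being discussed: witness program versions are pushed with OP_0 (0x00) and OP_1 through OP_16 (0x51..0x60), so OP_1NEGATE (0x4f) is excluded even though it is also a one-byte push. A small sketch (the helper names are hypothetical):

```python
# Witness version <-> opcode mapping per the discussion above:
# version 0 uses OP_0, versions 1..16 use OP_1..OP_16; OP_1NEGATE does
# not qualify despite being a single-byte push opcode.
OP_0, OP_1, OP_16, OP_1NEGATE = 0x00, 0x51, 0x60, 0x4f

def witness_version_opcode(version: int) -> int:
    if version == 0:
        return OP_0
    if 1 <= version <= 16:
        return OP_1 + version - 1
    raise ValueError("witness versions are limited to 0..16")

def is_witness_version_push(opcode: int) -> bool:
    return opcode == OP_0 or OP_1 <= opcode <= OP_16

print(hex(witness_version_opcode(1)), is_witness_version_push(OP_1NEGATE))
# 0x51 False
```

Listing the opcodes explicitly like this, rather than "for 0 to 16", is exactly the wording change Chris_Stewart_5 is suggesting for the BIP.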
< fanquake>
sipa You shouldn't see significant speedup with #8610 if you have a large -dbcache set (like 2048), should you?