< bitcoin-git>
[bitcoin] NicolasDorier closed pull request #10947: [Wallet] Bare segwit scriptPubKey should not considered change by the wallet (master...importaddresssegwit) https://github.com/bitcoin/bitcoin/pull/10947
< bitcoin-git>
[bitcoin] jeffrade opened pull request #12187: [Docs] Updating benchmarkmarking.md with an updated sample output (master...benchmark_output) https://github.com/bitcoin/bitcoin/pull/12187
< CubicEarths>
I know there are more sophisticated UTXO plans in the works, but as a quick fix, why not have a soft fork where, for every 100th block to be valid, the miner would have to include two hashes: one of all the blocks up to a recent point, and another hash of the UTXO set as of that same recent point?
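For illustration, a naive version of the commitment CubicEarths describes might look like the sketch below. The serialization and hashing scheme is invented for the example (real proposals use incremental or rolling hashes); note that hashing the full UTXO set this way is exactly the gigabytes-of-hashing cost gmaxwell objects to later in the conversation.

```python
# Hypothetical sketch of a "commit to the chain and the UTXO set" rule.
# The encoding here is illustrative only, not any real proposal.
import hashlib

def utxo_set_hash(utxos):
    """utxos: iterable of (txid_hex, vout, amount_sats, script_hex)."""
    h = hashlib.sha256()
    for txid, vout, amount, script in sorted(utxos):
        h.update(bytes.fromhex(txid))
        h.update(vout.to_bytes(4, "little"))
        h.update(amount.to_bytes(8, "little"))
        h.update(bytes.fromhex(script))
    return h.hexdigest()

def chain_hash(block_hashes_hex):
    """Hash of all block hashes up to the chosen recent point."""
    h = hashlib.sha256()
    for bh in block_hashes_hex:
        h.update(bytes.fromhex(bh))
    return h.hexdigest()

def commitment_required(height):
    """Under the proposed rule, every 100th block must carry both hashes."""
    return height % 100 == 0
```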
< gmaxwell>
CubicEarths: to accomplish what goal?
< gmaxwell>
If you want spv security use a blinking SPV client.
< CubicEarths>
blinking?
< CubicEarths>
oh, you are not using that word in a technical sense.
< gmaxwell>
hah, yes.
< luke-jr>
CubicEarths: how do you know the UTXO hash is legit? (hint: you'd have to verify all the prior blocks..)
< CubicEarths>
The idea would be that since it would be a consensus rule, the miner and all others building on top of that block would verify that the UTXO set hashed to that value
< gmaxwell>
If you want to blindly trust miners, you can do that-- thats what SPV does.
< CubicEarths>
as well as all full nodes
< gmaxwell>
it's much more efficient than what it sounds like you're imagining, requires no new consensus rules, and doesn't create normative behavior that requires nodes to hash gigabytes of data to validate a block.
< gmaxwell>
which is why I opened with the question of what goal you hoped to accomplish.
< CubicEarths>
Blind trust in this context wouldn't be a good thing. It seems like there is middle ground between that and 'everybody checks everything all the time'.
< gmaxwell>
Where you trust miners to only produce valid blocks because someone else is checking is the SPV model; it's somewhat disproved in practice thanks to spy mining, but if you want to make that assumption you already can, super duper efficiently.
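The SPV model gmaxwell refers to amounts to checking only that headers link together and carry valid proof of work, while trusting miners for transaction validity. A minimal illustrative sketch (not taken from any real client):

```python
# Minimal SPV-style header check: verify linkage and proof of work only.
# Uses the standard 80-byte Bitcoin block header layout.
import hashlib

def sha256d(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def bits_to_target(bits):
    exponent = bits >> 24
    mantissa = bits & 0xFFFFFF
    return mantissa << (8 * (exponent - 3))

def spv_check_headers(headers):
    """headers: list of raw 80-byte block headers, oldest first."""
    prev_hash = None
    for raw in headers:
        assert len(raw) == 80
        if prev_hash is not None and raw[4:36] != prev_hash:
            return False  # chain linkage broken
        bits = int.from_bytes(raw[72:76], "little")
        block_hash = sha256d(raw)
        if int.from_bytes(block_hash, "little") > bits_to_target(bits):
            return False  # insufficient proof of work
        prev_hash = block_hash
    return True
```

Note what this does not check: whether any transaction in those blocks is valid, which is exactly the gap spy mining exposes.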
< CubicEarths>
I thought SPV mining was only an issue that could go back a few blocks at most?
< gmaxwell>
what? no.
< gmaxwell>
it means miners produce and extend chains without validating them, which means that spv wallets will see false confirmations.
< CubicEarths>
yes, but only a few false confirmations. They wouldn't go 20 blocks deep
< CubicEarths>
or not 100 at least :)
< gmaxwell>
no one is going to wait 100 blocks for confirmation, unfortunately. I know of only one commercial party that was doing it, and they abandoned it.
< gmaxwell>
But there is also no limit to the reorg depth that could be created by spy mining; it just depends on how much participation there is.
< CubicEarths>
Right. If all miners were mining garbage, then the block chain would just be junk
< gmaxwell>
at least the situation has improved in the last few months and miners no longer think they can force invalid blocks onto the network merely by mining them. :)
< gmaxwell>
For a while I worried that there would be some spymining incident with an invalid block, and we'd have miners saying "screw you, we're not going to reorg out our blocks".
< CubicEarths>
yeah, I could see that
< gmaxwell>
then there would be some long outage while the rest of the network forked off those miners.
< gmaxwell>
but I think that the outcome is now pretty clear.
< CubicEarths>
So, I guess the utxo hash idea is a way of the network helping nodes get online. The network enforces rules that make it easier for new nodes to join, and those nodes in turn enforce the rules themselves. It makes it easier to have a rotating pool of nodes who validate the miners.
< luke-jr>
CubicEarths: if only miners are verifying blocks, why *wouldn't* they do 100 and beyond?
< CubicEarths>
luke-jr: It wouldn't only be miners. I really don't understand the thinking behind why everyone needs to run a node for Bitcoin to be secure. It seems like a diminishing-returns situation. If 1 out of 100 people in the world ran a node, I just can't envision what could go wrong that would be made better by 99 out of 100 running nodes
< luke-jr>
CubicEarths: if 1% of the economy runs a node, then 99% of the economy moves on with an invalid chain and becomes dependent on the outcome of that chain
< luke-jr>
putting a huge economic pressure on the 1% rejecting that chain, to change their mind and accept it
< luke-jr>
that 1% can no longer pay the other 99% which won't accept their coins as valid
< luke-jr>
the 99% won't suddenly decide to lose out on their income because 1% says the chain is invalid
< CubicEarths>
I envision more like the 99% would back their clients with local full nodes that they trust, and they should even be able to tap into 10 or 100 full nodes that they trust (because they trust the operators), and query them simultaneously to look for signs of disagreement. Could what you are proposing work in the situation I described?
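A sketch of the cross-checking CubicEarths describes, over bitcoind's JSON-RPC interface (getblockhash is a real RPC; the node URLs and credentials are placeholders):

```python
# Ask several trusted nodes for the block hash at the same height and
# flag any disagreement among them. The node list is made up.
import requests

TRUSTED_NODES = [
    "http://user:pass@node1.example.com:8332",
    "http://user:pass@node2.example.com:8332",
    "http://user:pass@node3.example.com:8332",
]

def getblockhash(url, height):
    payload = {"jsonrpc": "1.0", "id": "xcheck",
               "method": "getblockhash", "params": [height]}
    return requests.post(url, json=payload, timeout=10).json()["result"]

def cross_check(height):
    answers = {url: getblockhash(url, height) for url in TRUSTED_NODES}
    if len(set(answers.values())) > 1:
        raise RuntimeError(f"nodes disagree at height {height}: {answers}")
    return next(iter(answers.values()))
```

As luke-jr points out below, this only helps while the operators are honest and the connections to them are secure.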
< luke-jr>
I'm not proposing anything?
< CubicEarths>
the risk you proposed
< luke-jr>
if they trust node operators, they're fine until those node operators are compromised
< luke-jr>
assuming a secure connection to them
< CubicEarths>
well, if the ability to connect to an arbitrary number of nodes and cross-check their answers exists, it seems safe to assume that they wouldn't all be compromised at once
< CubicEarths>
at least that hardly seems riskier than the chance that your own node would be compromised (if you had one)
< luke-jr>
CubicEarths: you can't interchange trusted nodes and arbitrary nodes..
< CubicEarths>
I meant an arbitrary number of trusted nodes
< luke-jr>
but now you're getting back to a fiat trust model
< CubicEarths>
Maybe I should sell my BTC :)
< CubicEarths>
I dunno, it seems different if: 1) There is PoW 2) Anyone can mine 3) Anyone can run a node to validate 4) I decide not to run a full node but instead connect to 100 nodes run by people and organizations that I trust
< CubicEarths>
It seems like quite a small difference between running a full node and doing some advanced SPV in that context
< CubicEarths>
at least in terms of reorgs and being double spent
< wumpus>
if I understand it correctly, we do the same using mock times, but it's certainly an interesting approach
< sipa>
s/unit/functional/
< sipa>
the advantage is that it works across processes, so you can sleep in python, but if flixcapacitor knows every process around is waiting for i/o or sleeping, it can speed up time
< sipa>
*flux
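For reference, the mock-time approach wumpus mentions looks roughly like this in Bitcoin Core's functional test framework; the test body is simplified for illustration, but setmocktime is a real RPC:

```python
# Skip "waiting" in a functional test by jumping the node's clock with
# setmocktime instead of sleeping. Skeleton simplified for illustration.
import time
from test_framework.test_framework import BitcoinTestFramework

class MocktimeExample(BitcoinTestFramework):
    def set_test_params(self):
        self.num_nodes = 1

    def run_test(self):
        node = self.nodes[0]
        now = int(time.time())
        node.setmocktime(now)        # pin the node's notion of time
        node.setmocktime(now + 600)  # "ten minutes later", instantly
        node.setmocktime(0)          # return to real time

if __name__ == '__main__':
    MocktimeExample().main()
```

Unlike the flux-capacitor scheme sipa describes, this only moves bitcoind's clock; a sleep in the Python test process still takes real time.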
< CubicEarths>
sipa: You are 33 ?!
< sipa>
seems so
< aj>
wumpus: i'm guessing #11796 will have to wait for 0.17 now?
< bitcoin-git>
[bitcoin] ericallam opened pull request #12189: [Qt] Display transaction fee with sat/vbyte value in SendCoinsDialog (master...sat_vbyte_fee_rate_in_qt) https://github.com/bitcoin/bitcoin/pull/12189
< wumpus>
aj: "INFO: 84 tests not meeting naming conventions (expected 77):" on master now
< wumpus>
eh, not master, but master + #12118, but that doesn't add any functional tests
< wumpus>
I haven't seen it fail recently on master at least
< aj>
wumpus: EXPECTED_VIOLATION_COUNT in test_runner.py is the change
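The mechanism behind the message wumpus quotes, roughly: test_runner.py counts functional-test filenames that don't start with an approved prefix and compares the count against a hard-coded EXPECTED_VIOLATION_COUNT, so renames can only ratchet the number down. The regex and count below are illustrative, not the exact code:

```python
# Simplified version of the naming-convention check in test_runner.py.
import re

EXPECTED_VIOLATION_COUNT = 77  # the value the discussion refers to
GOOD_PREFIXES_RE = re.compile(
    "(example|feature|interface|mempool|mining|p2p|rpc|wallet)_")

def check_script_prefixes(all_scripts):
    bad = [s for s in all_scripts if not GOOD_PREFIXES_RE.match(s)]
    if len(bad) != EXPECTED_VIOLATION_COUNT:
        print(f"INFO: {len(bad)} tests not meeting naming conventions "
              f"(expected {EXPECTED_VIOLATION_COUNT}):")
        for s in sorted(bad):
            print(f"  {s}")
```

So the change aj points to is keeping EXPECTED_VIOLATION_COUNT in sync with the tests a branch actually adds or renames.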
< bitcoin-git>
[bitcoin] fwolfst opened pull request #12192: Trivial: Issue 12190: Update http URL of MIT license to use https (master...12190-UPDATE_MIT_LINK_TO_HTTPS) https://github.com/bitcoin/bitcoin/pull/12192
< bitcoin-git>
[bitcoin] MarcoFalke opened pull request #12193: RPC: Consistently use UniValue.pushKV instead of push_back(Pair()) (karel-3d) (master...Mf1801-univalueDeprecatedPair) https://github.com/bitcoin/bitcoin/pull/12193
< promag>
is snake case the convention for rpc arguments?
< wumpus>
promag: yes, all of them follow that convention
< promag>
we have some camel case (object options)
< promag>
there are some like "maxtries"
< wumpus>
all the direct arguments do
< promag>
should be max_tries?
< wumpus>
if it were a new call, that should be the case; don't bother changing the interface now
< promag>
so even options should follow snake case
< wumpus>
for new calls, yes, but we don't change the existing API just to conform to that
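Concretely, the convention being described might be illustrated over raw JSON-RPC as below (the URL, credentials, transaction hex, and address are placeholders): newer direct arguments such as estimatesmartfee's are snake_case, while older names, including the camelCase keys in fundrawtransaction's options object, are left as they are.

```python
# Illustration of the RPC argument naming convention discussed above.
import requests

URL = "http://user:pass@127.0.0.1:8332"  # placeholder node credentials

def rpc(method, *params):
    payload = {"jsonrpc": "1.0", "id": "demo",
               "method": method, "params": list(params)}
    return requests.post(URL, json=payload, timeout=10).json()["result"]

raw_hex = "<raw transaction hex>"  # placeholder
change_addr = "<address>"          # placeholder

# Newer call: direct arguments use snake_case (conf_target, estimate_mode).
fee = rpc("estimatesmartfee", 6, "ECONOMICAL")

# Older call: the camelCase option keys predate the convention and stay.
funded = rpc("fundrawtransaction", raw_hex, {"changeAddress": change_addr})
```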
< bitcoin-git>
[bitcoin] promag opened pull request #12194: Add change type option to fundrawtransaction (master...2018-01-fundrawtransaction-changetype) https://github.com/bitcoin/bitcoin/pull/12194
< Tituzin>
Good afternoon Guys - I need help with a little problem I have - I opened my Core wallet from February 2014, and after syncing all the blocks, I found a sending transaction that had been "unconfirmed/not in memory pool" since 2014. I checked for the txid on the block explorer and there is no info about that transaction. I went ahead and "ABANDONED TRANSACTION". Will I ever get these btc back? I checked both sender and recipient wallets,
< Tituzin>
after hitting "abandon" - now the transaction only says Status: 0/unconfirmed, not in memory pool, abandoned / Output index: 1
< Tituzin>
what would be my next step to try and recover it?