< sipa> $ ./walletbackup.py
< sipa> INFO: Restoring using dumped wallet
< sipa> Unexpected exception caught during testing: BrokenPipeError(32, 'Broken pipe')
< sipa> Stopping nodes
< sipa> WARN: Unable to stop node: CannotSendRequest('Request-sent',)
< gmaxwell> sipa: patrick got to you?
< gmaxwell> the question is-- why doesn't travis see it as failing?
< sipa> gmaxwell, wumpus, jonasschnelli:
< gmaxwell> sipa, sipa, sipa
< sipa> gmaxwell: are you wumpus and jonasschnelli?
< sipa> i wanted to paste my rorx results, but they seem to have disappeared into my terminal's forgotten history
< gmaxwell> No, are you?
< sipa> both rorxes were similar, and fastest
< gmaxwell> well, see.. I saved them seconds of disappointment by wisely preempting your summons.
< gmaxwell> sipa: did you see the usenix paper I linked to you some indeterminable number of days ago?
< sipa> i skimmed it, not enough to actually find the xor trick you were referring to
< gmaxwell> are you familiar with cuckoo hashing?
< sipa> yes
< gmaxwell> So how do you support cuckoo hashing if the whole value can't be stored in the table? -- normally you rehash an entry to find its alternative location when you evict it.
< gmaxwell> They make the primary location H(full value) -> offset, and the secondary location H(short value) ^ location. So you only need the current location and the short value to swap an entry between slots.
< gmaxwell> (and you don't need to track which one it was in, since the same xor relates them)
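A toy model of that xor trick (pure Python; the table size, hash truncation, and tag derivation here are invented for illustration and are not from the paper):

```python
import hashlib

TABLE_BITS = 8            # 256 slots; power of two so XOR stays in range
TABLE_SIZE = 1 << TABLE_BITS

def h(data: bytes) -> int:
    """Toy hash: low bits of SHA-256 standing in for a real table hash."""
    return int.from_bytes(hashlib.sha256(data).digest()[:4], "little") % TABLE_SIZE

def primary_slot(full_value: bytes) -> int:
    # primary location: H(full value) -> offset
    return h(full_value)

def short_tag(full_value: bytes) -> bytes:
    # the short value actually stored in the table (tag width made up)
    return hashlib.sha256(b"tag" + full_value).digest()[:1]

def other_slot(slot: int, tag: bytes) -> int:
    # secondary location: H(short value) ^ location. XOR is its own
    # inverse, so the same function maps either slot to the other and
    # no flag is needed to record which slot an entry currently occupies.
    return slot ^ h(tag)

value = b"some full value"
tag = short_tag(value)
p = primary_slot(value)
s = other_slot(p, tag)
assert other_slot(s, tag) == p   # the same xor relates them both ways
```

The key point is the eviction step: a cuckoo table that only stores short tags cannot rehash the full value to find an entry's alternate slot, but `current_slot ^ h(tag)` needs nothing beyond what is already in the table.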
< sipa> gmaxwell, wumpus, jonasschnelli:
< sipa> SHA256_avx,175,0.00575656,0.0058105,0.00575941
< sipa> SHA256_basic,119,0.0088715,0.00893307,0.00887388
< sipa> SHA256_rorx,223,0.00482899,0.004861,0.00483046
< sipa> SHA256_rorx_x8ms,223,0.00475737,0.0047915,0.00476165
< sipa> SHA256_sse4,175,0.00574069,0.00577295,0.00574323
< sipa> i7-4800MQ, fixed at 2.6 GHz
< phantomcircuit> is the first column cycle count?
< sipa> iteration count
< gmaxwell> phantomcircuit: it's number of times the freaky bench innerloop ran.
< phantomcircuit> ah
< sipa> how many tests were done
< sipa> each doing 1 MB of data
< phantomcircuit> ok
< phantomcircuit> neat
< gmaxwell> really for our usage running with 32 and 64 bytes of data is much more interesting.
< gmaxwell> might be useful to see if that changes the numbers at all... perhaps gives AVX a purpose for existing? :)
< sipa> sure, but it's just a proxy for the number of sha256 compression function runs
< gmaxwell> ah duh right, it doesn't have a finalization, so it's not going to change.
< sipa> note, it's a mobile CPU; if i don't fix the clock speed, cpufreq increases my cpu speed somewhere during the rorx_x8ms run
< phantomcircuit> gmaxwell, loading into the right registers takes some amount of work
< sipa> making it look wildly better than rorx
< phantomcircuit> sipa, test on a build box?
< phantomcircuit> gmaxwell, any idea if those have turbo or not?
< sipa> turbo is easy to disable
< sipa> on our 56-core machine:
< sipa> SHA256_avx,255,0.00397131,0.00401497,0.00397456
< sipa> SHA256_basic,175,0.00599988,0.00604987,0.00600181
< sipa> SHA256_rorx,319,0.00334556,0.003407,0.00334662
< sipa> SHA256_rorx_x8ms,319,0.00328553,0.00332999,0.00328678
< sipa> SHA256_sse4,255,0.00395919,0.00401807,0.00396108
< sipa> gmaxwell: what cpu is it?
< sipa> i'm surprised that our C code is only 45% slower than intel's optimized asm code
< gmaxwell> the 56-core are broadwell-ep
< gmaxwell> but 'only' at 2.2GHz.
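As a sanity check on the "45% slower" figure, a quick calculation from the 56-core paste above (the constants are copied from that paste; "45%" matches when comparing throughputs rather than runtimes):

```python
basic = 0.00599988  # seconds per MB, SHA256_basic (56-core paste above)
rorx = 0.00334556   # seconds per MB, SHA256_rorx

speedup = basic / rorx             # rorx runs ~1.79x as fast
throughput_gap = 1 - rorx / basic  # C achieves ~44% less throughput

print(f"rorx is {speedup:.2f}x the C code")
print(f"C throughput is {100 * throughput_gap:.0f}% lower")
```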
< phantomcircuit> sipa, none of those are parallel calculation right?
< sipa> indeed, all single threaded
< gmaxwell> sipa: sha2 that computes four at once is a considerable additional speedup, but harder to use without more software changes.
< sipa> very doable inside merkle trees, though
< gmaxwell> Yes. though that's pretty much the only place where it's very doable.
< sipa> also the place where it probably matters most
< gmaxwell> I'm sure if you want to write the merkle tree function wumpus will happily do the work to give you a sha2x4 to call. :)
< gmaxwell> my vague recollection was "the asm was 2x faster than the C, and the 4-way was 3x faster than the C" I dunno how that generalizes with the rorx.
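For context, the tree shape that makes 4-way hashing attractive can be sketched serially (hashlib standing in for a SIMD kernel; `sha256d` is Bitcoin's double-SHA256):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_level(hashes):
    """One level of a Bitcoin-style merkle tree: hash adjacent pairs,
    duplicating the last element when the count is odd."""
    if len(hashes) % 2:
        hashes = hashes + [hashes[-1]]
    # Each pair is independent, so a 4-way SIMD sha256 could process
    # four pairs per call; here hashlib computes them one at a time.
    return [sha256d(hashes[i] + hashes[i + 1]) for i in range(0, len(hashes), 2)]

def merkle_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        level = merkle_level(level)
    return level[0]
```

This is why the merkle computation is the natural consumer of a sha2x4: it always has a batch of independent, same-length messages in flight, which most other call sites in the codebase do not.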
< GitHub130> [bitcoin] sipa opened pull request #8051: Fix walletbackup.py failure (master...fixwalletbackup) https://github.com/bitcoin/bitcoin/pull/8051
< gmaxwell> sipa: considering your PR comments might it be a bit strong to call that 'fix'? rather than 'mysteriously stir'? :)
< sipa> gmaxwell: well, it's reproducible :)
< sipa> maybe i should call it "seemingly fix"
< sipa> done
< gmaxwell> presumably it fails for you because your computer is fast and travis isn't.
< sipa> hmm, i started off by adding sleeps and that didn't help
< sipa> but let me try again, by adding a sleep exactly there
< sipa> gmaxwell: the sync blocks there causes a sleep of 1-2 seconds
< sipa> gmaxwell: an explicit sleep of 60s does not fix it
< gmaxwell> uh how does that make any sense?
< sipa> sense, it makes none
< jonasschnelli> sipa: I read something in the logs: is walletbackup.py fixed?
< sipa> jonasschnelli: for me it is
< jonasschnelli> sipa: I can't reproduce the issue... but your fix looks strange. :)
< sipa> without it, i just can't run rpc_tests.py
< sipa> it runs forever
< jonasschnelli> But you can run walletbackup.py independent?
< sipa> no
< sipa> see the paste in the PR
< jonasschnelli> hmm...
< jonasschnelli> Yes. Saw it...
< jonasschnelli> I'd like to debug it locally.
< sipa> maybe it depends on python version or something else
< sipa> but something very fishy is going on, as it's not just a race condition
< nub> would it be possible to shorten the block creation time from 10 minutes to say 1 minute with a hard fork obviously dividing the reward by a factor of 10 that way confrimations would be much faster i'd imagine
< sipa> yes, it would be possible with a hard fork
< sipa> a hard fork could also turn bitcoin into a frontend for paypal
< nub> could you explain that?
< nub> is that bad?
< sipa> a hard fork is replacing the system with another system, where all participants agree to switch to different software
< sipa> it can change anything
< nub> could a soft fork change block time?
< sipa> the question "is x possible with a hard fork?" always has "yes" as answer
< sipa> no, a soft fork can't do that
< sipa> it would also be a bad idea, as it would result in 10 times higher mining centralization pressure
< sipa> and confirmations that have less meaning
< gmaxwell> sipa: actually not 10 times higher, but likely much higher, since the relationship is non-linear. :)
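One first-order ingredient of that pressure is the stale (orphan) rate: the chance that someone else finds a block while yours is still propagating. This toy Poisson model is only that first-order piece, not gmaxwell's full non-linear argument, and the 10-second propagation delay is an assumed number:

```python
import math

def stale_rate(propagation_delay_s: float, block_interval_s: float) -> float:
    """P(another block is found while one propagates), assuming block
    discovery is a Poisson process with the given mean interval."""
    return 1 - math.exp(-propagation_delay_s / block_interval_s)

delay = 10.0  # assumed network-wide propagation delay, seconds
print(f"10 min blocks: {100 * stale_rate(delay, 600):.2f}% stale")
print(f" 1 min blocks: {100 * stale_rate(delay, 60):.2f}% stale")
```

Large, well-connected miners suffer fewer of these stales than small ones (they hear about, and build on, their own blocks instantly), which is where the centralization pressure compounds beyond this simple rate.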
< nub> how can we speed up transactions?
< nub> or confirmations
< sipa> nub: why do you want to do that?
< sipa> if you want them to be fast enough to work at a point of sale, you'll need different technology
< nub> i'm a store wanting to sell stuff accepting bitcoin but i don't want them to leave until the bitcoin is in my account....
< sipa> there is simply no way to get global consensus within seconds
< nub> that currently takes far too long
< sipa> 1 minute is also far too long
< sipa> you just can't use bitcoin blockchain transactions for that purpose
< nub> what if the purchaser uses the same hosted wallet platform as the seller
< nub> then it could be instant and no bitcoin would need to be sent
< sipa> that would be one example of another technology
< sipa> now you're using the database of the wallet provider rather than the blockchain
< nub> something like cassandra replicated to datacenters all around the world
< sipa> if you have a centrally trusted party running those datacenters, it's easy :)
< sipa> but that's not a luxury we have for bitcoin the base technology
< nub> whats the aim of bitcoin now?
< sipa> this discussion probably belongs in #bitcoin
< nub> wanna move to there?
< sipa> i'm going to sleep, but feel free :)
< nub> is it ok to discuss how a hard fork could work here?
< sipa> it's easy: make a change to the code that does what you want, and convince the whole world to switch to that code
< sipa> and no, the interblock time is not going to change
< nub> what if it was a slow transition say a client which works on both and at a certain (ntp time) it switches everyone
< nub> could put that feature in a year before the planned fork so everyone's updated to the client that supports that by then
< sipa> you'd still need to convince the entire world to switch before that flag date
< nub> make the app do it automatically
< nub> bitcoin core
< sipa> bitcoin core does not decide what the rules of the network are
< nub> fair enough
< sipa> it very intentionally does not have an auto update function
< sipa> as developers shouldn't be in charge of the rules
< nub> it should
< sipa> it absolutely should not
< nub> and if not updated you can't connect
< sipa> it would make developers central bankers
< nub> devs could add whatever they want
< nub> what if i were to hire them all?
< sipa> try me
< sipa> i'd find another job
< nub> a surcharge could be added to mining which goes to devs instead
< sipa> but even if you could, hopefully the ecosystem would protest and stop using bitcoin core
< nub> wouldn't a 1 minute block time be welcomed for faster transactions?
< sipa> no
< sipa> it's a misconception that that would be beneficial
< nub> it works for litecoin
< sipa> "it works" does not mean it is better
< nub> would bitcoin devs be interested in incorporating and being paid to develop?
< nub> in america
< nub> banks want their software to run on blockchain technology
< nub> we could be running the banks
< sipa> they don't need proof of work or miners
< nub> they dont
< nub> they need a special client which can create and destroy coins
< sipa> this is getting off topic, as you're not talking about developing bitcoin
< sipa> #bitcoin please
< nub> could a soft fork increase block rewards
< gmaxwell> nub: no, and please take further discussion elsewhere.
< GitHub189> [bitcoin] kazcw opened pull request #8052: rpc tests: increase http timeout (master...rpcwallet-test-timeout) https://github.com/bitcoin/bitcoin/pull/8052
< arubi> nub did remind me of a question I was meaning to ask. and I'm sorry if this is obvious. suppose we have a way to send an input entirely as miners' fee, then "provably" receive it as an output from a generation transaction in a block that the input->fee transaction was mined on. if everyone decides one day to use this feature, then could we discard any blockchain data up until the block that this happened, essentially making this new block
< arubi> the genesis block? this might even be a soft fork in its "forkiness". a new client might want to start syncing from the new genesis, and an old client won't care that it happened? this is very theoretical, I'm just looking to validate my understanding of how this could play out
< sipa> arubi: we can always declare a new block as the genesis block; just snapshot its utxo set
< arubi> sipa, is that the same? why not do it?
< arubi> well it's not the same. I will still need to verify the utxo set to believe that it follows the history since genesis
< sipa> arubi: you can do it. copy your chainstate directory from someone you trust
< arubi> but trust isn't needed if there's a mechanism like I suggested, right? (maybe not)
< sipa> of course you need trust in your model; you're relying on the assumption that everyone accepts that the new genesis block is in fact derived from the old one
< sipa> and there is no way to verify that
< sipa> we have no technology that can verify the correctness of the blockchain without seeing it :)
< arubi> hm. so you're saying that I'd still need to know the history up until that new genesis block, right? I understand the issue if so
< sipa> you don't need to know it if you trust that it's there
< sipa> but that's no different from copying a chainstate from someone
< arubi> right. thanks sipa.
< sipa> there are ideas about committing the hash of the UTXO set to the blockchain
< arubi> oh, so the utxo set could be shared?
< instagibbs> arubi, miners would commit to utxo set
< sipa> which would allow you to for example download/verify headers up to 1000 blocks in the past, then download a snapshot of the utxo set at that point from anyone, and verify that it matches the hash in the blockchain
< instagibbs> spv trust for utxo set in other words
< sipa> HOWEVER that still involves trust: you've now switched from a model of no trust in history to trusting that miners would not commit to an invalid history
< sipa> which in practice may very well be sufficient, but it's a very different security model
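The verification step sipa describes could look roughly like this (a sketch: the serialization and hash here are made up for illustration; a real commitment scheme would define a canonical or incremental/rolling hash):

```python
import hashlib

def utxo_set_hash(utxos) -> bytes:
    """Toy deterministic hash of a UTXO snapshot.
    `utxos` maps (txid, vout) -> (amount_satoshis, script)."""
    h = hashlib.sha256()
    for (txid, vout), (amount, script) in sorted(utxos.items()):
        h.update(txid + vout.to_bytes(4, "little"))
        h.update(amount.to_bytes(8, "little") + script)
    return h.digest()

def verify_snapshot(snapshot, committed_hash: bytes) -> bool:
    # `committed_hash` would come out of a block whose headers you
    # verified yourself; the snapshot itself can then be downloaded
    # from any untrusted peer and checked locally.
    return utxo_set_hash(snapshot) == committed_hash
```

The trust shift sipa points out lives entirely outside this function: the check proves the snapshot matches what miners committed to, not that the committed history was valid.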
< arubi> so I'd really be trusting the network to verify and relay the chain correctly to me?
< sipa> no
< sipa> you'd be trusting miners to not build a chain with invalid history
< arubi> but that's impossible now, if the network verifies
< sipa> that's true but irrelevant
< sipa> it's impossible because YOU verify, if you run a full node
< sipa> if you assume the network verifies for you, you're trusting them
< arubi> right, so how is an spv node different in verifying the blocks? is it about verifying the transactions themselves?
< sipa> an spv node assumes that miners will not produce an invalid chain
< sipa> (or that they're only connected to honest full nodes)
< arubi> I think I understand, like you mentioned that they verify the headers (and not the block itself?), they trust that what they're verifying is the actual chain verifying nodes use
< sipa> no, they assume that miners would not make an invalid chain
< sipa> (or that full nodes would filter out such an invalid chain for them)
< arubi> that's what I meant by "the actual chain..". sure assuming miners won't produce an invalid chain is understandable, but if you somehow only connect to honest nodes, then you will not get it, and I see where this requires trust
< sipa> well there is no "the actual chain", different nodes can have a different idea of what chain to accept
< arubi> true. the best chain used by the reference client is probably more specific, but also impossible to expect from just connecting to random nodes
< sipa> no no, not by the reference client
< sipa> every individual node, even if they are running the same software, could have a different idea of what the best chain is
< arubi> I know, but there is physically only one best chain
< sipa> no
< arubi> or maybe multiple "same height" chains.. that's bad in itself
< sipa> every individual node has an idea of what the best chain is
< sipa> there is no guarantee that other nodes have the same idea
< arubi> I get that, but really the chain with the most work will overtake the others quickly, no?
< sipa> that's an assumption :)
< arubi> it worked so far, even on the chaos that is testnet :)
< sipa> and the correctness of the system relies on it, but it's not a given
< sipa> it's the result of economic and technical properties
< sipa> and you change those if nodes in the network don't validate fully
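"Most work" here means total chainwork, not block count; a minimal sketch using Bitcoin's compact nBits encoding (simplified: it ignores the sign bit and exponents below 3):

```python
def bits_to_target(bits: int) -> int:
    """Decode Bitcoin's compact nBits encoding into the full 256-bit target."""
    exponent = bits >> 24
    mantissa = bits & 0x007FFFFF
    return mantissa << (8 * (exponent - 3))

def block_work(bits: int) -> int:
    # Expected number of hashes needed to find a block at this target.
    return 2**256 // (bits_to_target(bits) + 1)

def best_chain(chains):
    """Each node independently prefers the valid chain with most total work.
    `chains` is a list of chains, each a list of per-block nBits values."""
    return max(chains, key=lambda chain: sum(block_work(b) for b in chain))
```

Note that a shorter chain of higher-difficulty blocks beats a longer chain of easy ones, which is why "longest chain" is a misnomer for the rule nodes actually apply.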
< arubi> or forks intentionally, or fails due to a bug, right. correctness of the best chain that's advertised to my node is something I really took for granted until now
< arubi> well, not my fully verifying node
< Chris_Stewart_5> sipa: An example of a node not knowing what the longest chain could be is a sustained sybil attack correct?
< sipa> Chris_Stewart_5: or any normal fork
< sipa> Chris_Stewart_5: it's not so much "a node does not know what the best chain is", it is that there _is_ no best chain
< sipa> best chain is something local to your client
< sipa> and we build a system that aims to provide convergence: making sure that over time, blocks in the history of different nodes' best chains end up being the same
< Chris_Stewart_5> interesting. Thanks - it's easy to forget we have 5k individual computers running on this network that need to reach convergence on what is right
< kanzure> those sleeps in the tests are architecturally unfortunate :( shouldn't this be stuff that gets pinged/notified instead of waiting forever aimlessly? e.g. crash could have happened seconds ago.
< kanzure> or is that intentional etc...
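The pattern kanzure is asking for, waiting on a condition with a bounded timeout instead of a fixed sleep, is small to sketch (a hypothetical helper, not the actual API of the test framework):

```python
import time

def wait_until(predicate, timeout=60.0, poll_interval=0.5):
    """Poll `predicate` until it returns True; fail fast on timeout
    instead of sleeping a fixed, hopeful amount of time."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if predicate():
            return
        time.sleep(poll_interval)
    if predicate():  # last check in case it flipped during the final sleep
        return
    raise TimeoutError(f"condition not met within {timeout}s")
```

A test built on this returns the moment the condition holds (e.g. a node reaching a block height) and surfaces a crash within one poll interval, rather than blocking for the full sleep either way.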