< bitcoin-git>
bitcoin/master 5be0190 Matt Corallo: Delete some unused (and broken) functions in CConnman
< bitcoin-git>
bitcoin/master 3c37dc4 Matt Corallo: Ensure cs_vNodes is held when using the return value from FindNode
< bitcoin-git>
bitcoin/master 2366180 Matt Corallo: Do not add to vNodes until fOneShot/fFeeler/fAddNode have been set
< bitcoin-git>
[bitcoin] laanwj closed pull request #9626: Clean up a few CConnman cs_vNodes/CNode things (master...2017-01-remove-broken-unused-funcs) https://github.com/bitcoin/bitcoin/pull/9626
< bitcoin-git>
bitcoin/master 95f97f4 Luke Dashjr: Skip RAII event tests if libevent is built without event_set_mem_functions
< bitcoin-git>
bitcoin/master e99f0d7 Wladimir J. van der Laan: Merge #9647: Skip RAII event tests if libevent is built without event_set_mem_functions...
< bitcoin-git>
[bitcoin] laanwj closed pull request #9647: Skip RAII event tests if libevent is built without event_set_mem_functions (master...raii_tests_optional) https://github.com/bitcoin/bitcoin/pull/9647
< achow101>
what does safe mode do?
< wumpus>
achow101: it disables a few wallet commands concerned with sending funds IIRC
< wumpus>
(anything with okSafe==false in the dispatch table)
< achow101>
what causes safe mode?
< wumpus>
fLargeWorkForkFound or fLargeWorkInvalidChainFound
< wumpus>
more generally, when the client detects something fishy either with itself or the network
< achow101>
if a large work fork is found (>6 blocks deep) and has more work than the current tip, will it switch to that fork and warn the user?
< wumpus>
yes, safemode is a result of warnings
< wumpus>
IIRC it will follow a large work fork (given that it validates correctly, of course) but warn and go to safe mode
< achow101>
ok.
< wumpus>
but I don't know the exact logic deeply; you'll have to check the source code
< achow101>
safe mode seems kinda useless, I can't find anything about it that would affect what most users do with the GUI
< wumpus>
it doesn't affect the GUI, from what I remember that's on purpose: GUI is used manually so if there is a big warning visible and the user still wants to send, they can
< wumpus>
RPC is usually driven automatically so the only way to make the operator realize something is wrong is by blocking commands
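The safe-mode gate wumpus describes can be sketched as follows; this is an illustrative reconstruction, not the actual Bitcoin Core source, and the names (`CRPCCommand`, `okSafeMode`, `fSafeMode`, `DispatchSketch`) are assumptions for the sketch:

```cpp
#include <stdexcept>
#include <string>

// Sketch: RPC dispatch-table entries carry a flag saying whether the
// command is allowed in safe mode; when safe mode is active, commands
// with the flag set to false are rejected before execution.
struct CRPCCommand {
    std::string name;
    bool okSafeMode;  // may this command run while in safe mode?
};

bool fSafeMode = false;  // set when e.g. fLargeWorkForkFound trips

void DispatchSketch(const CRPCCommand& cmd)
{
    if (fSafeMode && !cmd.okSafeMode)
        throw std::runtime_error("Safe mode: " + cmd.name + " disabled");
    // ... otherwise execute the command ...
}
```

This matches the behavior discussed above: the GUI is untouched, but automated RPC callers hit an error and are forced to notice something is wrong.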
< achow101>
ah. I see. that makes sense
< achow101>
thanks
< sipa>
jl2012: the serialization code depends on char being 1 byte, short being 2, int being 4, and long being 8
< jl2012>
sipa: isn't this dependent on the architecture?
< sipa>
jl2012: while technically those are dependent on architecture, everything we remotely support has these sizes
< sipa>
wait, i believe long does differ
< jl2012>
so is it ok if I use GetSizeOfCompactSize in consensus code?
< sipa>
but long long does not
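The size assumptions sipa lists (with the `long` caveat) can be written down as compile-time checks; a minimal sketch, not taken from the actual Bitcoin source:

```cpp
#include <cstdint>

// Illustrative compile-time checks for the type sizes the serialization
// code relies on (sketch; not the actual Bitcoin Core source).
static_assert(sizeof(char) == 1, "char must be 1 byte");
static_assert(sizeof(short) == 2, "short must be 2 bytes");
static_assert(sizeof(int) == 4, "int must be 4 bytes");
// long is the exception: 4 bytes on LLP64 (64-bit Windows) but 8 bytes
// on LP64 (64-bit Linux/macOS), which is why long long is used instead.
static_assert(sizeof(long long) == 8, "long long must be 8 bytes");
```

Any platform where these fail would refuse to compile, which is a stronger guarantee than the unit-test failure mentioned later.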
< jl2012>
or is it already kind of consensus critical?
< sipa>
you already are using it in consensus code. it affects block size calculation
< sipa>
as that uses GetSerializeSize
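For context, `GetSizeOfCompactSize` returns the serialized byte length of Bitcoin's variable-length integer prefix. A self-contained sketch of that computation (hypothetical name to distinguish it from the real function in serialize.h, which it mirrors):

```cpp
#include <cstdint>

// Sketch of the CompactSize length computation: the variable-length
// integer prefix takes 1, 3, 5, or 9 serialized bytes by magnitude.
unsigned int GetSizeOfCompactSizeSketch(uint64_t nSize)
{
    if (nSize < 253)                return 1; // value fits in one raw byte
    else if (nSize <= 0xFFFFu)      return 3; // 0xFD marker + uint16_t
    else if (nSize <= 0xFFFFFFFFu)  return 5; // 0xFE marker + uint32_t
    else                            return 9; // 0xFF marker + uint64_t
}
```

Note the thresholds are pure numeric constants, which is the `sizeof` vs. literal question jl2012 raises next.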
< jl2012>
isn't it better to directly use a numeric constant, instead of sizeof(something)?
< sipa>
yes and no
< sipa>
we're using char/short/int/long long in serialization code anyway, though we're slowly moving away from those in favor of int16_t, int32_t, int64_t etc
< sipa>
so switching the size calculation on itself would be perhaps more future proof for that function itself
< sipa>
we should really get rid of the use of those data types in serialization code (and probably all of consensus code) in the first place
< jl2012>
but there is already a theoretical risk of consensus failure between architectures?
< sipa>
our unit tests would immediately fail when compiling in an environment where these sizes don't hold
< jl2012>
that's true
< jl2012>
but you know, not everyone tests their code before shipping
< jl2012>
by the way, are we going to fix the LOW_S special case I found earlier? Or just use the NULLFAIL rule (sig must be empty if failed) to cover that?
< sipa>
hmm what special case?
< jl2012>
if R is out of range, HIGH_S is allowed
< sipa>
ah, yes
< sipa>
i think we should just propose nullfail as a consensus rule
< jl2012>
the 2 rules combined should cover all edge cases, I guess
< sipa>
which 2 rules?
< jl2012>
lows and nullfail
< sipa>
yes, i believe so
< jl2012>
just wonder if nullfail might irrevocably invalidate some scripts
< sipa>
nullfail in particular simplifies things a lot... as it removes the distinction between valid encoding and valid signature
< jl2012>
and it eliminates unneeded validation
< sipa>
indeed, and storage
< jl2012>
is there any way to do aggregated validation in ECDSA?
< sipa>
yes, but it requires passing along the oddness of the R point's y coordinate
< sipa>
if you don't have that, there is an exponential blowup in combinations to test that is never worth it
< jl2012>
so we currently can't do that?
< sipa>
not with the current signature scheme, no
< jl2012>
it just requires one more bit to encode that?
< sipa>
yes
< sipa>
but switching to schnorr is easier :)
< jl2012>
actually, my original question is about cross-transaction aggregated validation. For example, validate all transactions in a block in one operation
< jl2012>
and just aggregated validation, not aggregated signature (no space saving)
< sipa>
yeah, with the extra bit you can do batch validation
< jl2012>
it could be nicely fit in a 64 bytes (512 bits) signature: R = 256 bits, R oddness = 1 bit, LOW_S = 255 bits
< jl2012>
how about Schnorr, would that require the extra bit?
< sipa>
the Schnorr scheme i proposed before avoids it by requiring that bit to be 0
< sipa>
implicitly
< sipa>
which would equally be possible in ECDSA, but again, requires a change
< jl2012>
so you just assume the bit is 0. If a signature has a non-0 bit, the wallet will find a different nonce to sign again?
< sipa>
all that requires is negating the nonce :)
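A one-line sketch of why negating the nonce suffices: over the curve's base field of odd prime characteristic $p$, a nonce $k$ gives

```latex
R = kG = (x, y) \quad\Longrightarrow\quad (-k)G = -R = (x,\; p - y)
```

Since $p$ is odd, exactly one of $y$ and $p - y$ is even, so the signer can always pick the sign of $k$ whose $R$ has an even $y$ coordinate, making the oddness bit implicitly zero as sipa describes.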
< jl2012>
sounds like possible with a softfork?
< sipa>
and changing all wallet software
< jl2012>
same as low_s
< sipa>
well as long as OP_CHECKSIG NOT is allowed, we can't do batch verification anyway
< sipa>
though i guess it could apply to OP_CHECKSIGVERIFY only