< jonasschnelli>
wumpus: awesome idea with the "bc-monitor"!
< jonasschnelli>
Would it only require connecting over ZMQ or also over RPC?
< wumpus>
thanks :)
< jonasschnelli>
The latter would probably require two stunnels?
< wumpus>
it requires both, eventually I want to make zmq optional
< jonasschnelli>
With RPC long poll? Or how would you update the stats?
< wumpus>
but I'm most interested in the zmq stats myself so that's the priority for what I'm currently building :)
< wumpus>
just poll, updates will be slower and some things will be missing (like the activity log)
< jonasschnelli>
Hmm... the polling is kinda lame/hacky and also I don't like the JSON/http overhead...
< wumpus>
so the eventual plan is: zmq is *nice*, rpc is required to get the base info
< jonasschnelli>
I had similar ideas for a new GUI (monitor).
< wumpus>
JSON and http overhead doesn't matter for things that never change (say the name and version of the node)
< jonasschnelli>
right..
< wumpus>
as well as the max size of the mempool
< wumpus>
etc
< wumpus>
also things can be updated asynchronously, you don't have to hang the GUI while an RPC request is underway
< jonasschnelli>
I think we should definitely add a sequence number to each zmq notification.
< jonasschnelli>
This comes at a tiny cost...
< wumpus>
yes I'm not sure why that wasn't done in the first place
< jonasschnelli>
Right. It would be an API break now.
< jonasschnelli>
But we might risk that...
< wumpus>
would need to be a sequence number per notifier, not a global one, you can't expect people to be listening for everything
< jonasschnelli>
or optionally allow enabling it.
< wumpus>
right.
< jonasschnelli>
Right. It should be per message type.
< jonasschnelli>
A uint64 per message type.
< wumpus>
nah a uint64 is not necessary
< wumpus>
you just need to check that next packet is prevpacket+1
< wumpus>
even a byte would do :)
< jonasschnelli>
Indeed. A "skip" > 256 would be extremely rare.
< wumpus>
hm you're right... well ok, 2 bytes then
< wumpus>
if you miss 65k packets, well, I'd say you're screwed anyway...
< jonasschnelli>
Hah. True.
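The check wumpus describes (next packet is prevpacket+1, tracked per notifier, with a small counter that wraps) could be sketched in Python roughly as follows. The 16-bit width and the topic names here are just the assumptions from this discussion, not an actual wire format:

```python
def missed(prev, cur, bits=16):
    """How many notifications were lost between two consecutively received
    sequence numbers, assuming a bits-wide counter that wraps around."""
    return (cur - prev - 1) % (1 << bits)

class SequenceChecker:
    """One counter per notification topic, as discussed above (not global)."""
    def __init__(self, bits=16):
        self.bits = bits
        self.last = {}  # topic -> last sequence number seen

    def check(self, topic, seq):
        """Record seq for topic; return the number of missed messages (0 = none)."""
        gap = 0
        if topic in self.last:
            gap = missed(self.last[topic], seq, self.bits)
        self.last[topic] = seq
        return gap
```

With 2 bytes the gap computation survives the 65535 -> 0 wraparound, and anything beyond 65k missed packets is, as noted, a lost cause anyway.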
< jonasschnelli>
Let me know when you have your bc-monitor ready for testing..
< wumpus>
sure!
< wumpus>
<jonasschnelli> Hmm... the polling is kinda lame/hacky and also I don't like the JSON/http overhead... <- what about encoding univalues as cbor (http://cbor.io/) instead *ducks*
< wumpus>
not actually convinced how much it would help, but for huge structures like the output of e.g. getrawmempool true it may save a few bytes
< jonasschnelli>
wumpus: hah! More hacky! But i like it. :)
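To make the CBOR aside concrete: a toy encoder covering just RFC 7049 major types 0 (unsigned int), 3 (text string) and 5 (map) is enough to compare sizes against compact JSON. This is an illustration, not a real CBOR library, and the field names in the sample object are made up:

```python
import json
import struct

def _head(major, n):
    """CBOR initial byte(s): 3-bit major type plus inline value or length."""
    if n < 24:
        return bytes([(major << 5) | n])
    if n < 0x100:
        return bytes([(major << 5) | 24, n])
    if n < 0x10000:
        return bytes([(major << 5) | 25]) + struct.pack(">H", n)
    return bytes([(major << 5) | 26]) + struct.pack(">I", n)

def cbor_encode(obj):
    if isinstance(obj, bool):
        raise TypeError("toy encoder: bools not handled")
    if isinstance(obj, int) and obj >= 0:
        return _head(0, obj)                       # major type 0: unsigned int
    if isinstance(obj, str):
        data = obj.encode("utf-8")
        return _head(3, len(data)) + data          # major type 3: text string
    if isinstance(obj, dict):
        out = _head(5, len(obj))                   # major type 5: map
        for k, v in obj.items():
            out += cbor_encode(k) + cbor_encode(v)
        return out
    raise TypeError("toy encoder: unsupported type %r" % type(obj))

# A mempool-entry-like object (hypothetical fields, for size comparison only):
entry = {"size": 3, "fee": 1000}
cbor = cbor_encode(entry)                          # 14 bytes
js = json.dumps(entry, separators=(",", ":")).encode()  # 21 bytes
```

For this entry CBOR comes in at 14 bytes against 21 for compact JSON, i.e. it saves a few bytes per entry, which only starts to matter for huge structures like getrawmempool true.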
< jonasschnelli>
The polling is just not in sync with the thread synchronization.
< jonasschnelli>
The notification approach would scale better IMO
< wumpus>
I do think json parsing can be made very fast too
< wumpus>
(e.g. univalue is already plenty faster than the boost thing)
< jonasschnelli>
I kinda like zmq and I have also thought about a channel in the other direction... but I'm also aware of the risks.
< wumpus>
sure, notification is good for things that change, especially at high frequency
< wumpus>
it's not a replacement for a command/response system
< jonasschnelli>
What would speak against accepting requests over zmq (bitcoind listens), processing them async and responding with some events (similar to p2p)? Bad design?
< jonasschnelli>
With requests I mean: "give me some information about connected peers",... just some basic stuff.
< jonasschnelli>
Auth would be a problem.
< wumpus>
why not use rest instead?
< wumpus>
we already have an unauthenticated information interface
< jonasschnelli>
I just don't like having multiple ports connected for one purpose (a monitor)
< wumpus>
(and rest doesn't need to be necessarily json)
< jonasschnelli>
It would be very difficult to connect a "monitor" app to your remote node.
< wumpus>
I'd like zmq events for some P2P events though
< wumpus>
like 'new node connected' 'node disconnected' etc
< wumpus>
jonasschnelli: I'm mostly aiming at local usage right now, that's why I use ncurses in the first place
< wumpus>
I could split it into two components, one that connects to a local RPC and ZMQ, the other that connects to the tool, and one TCP connection in between. But meh, that's not a priority :)
< jonasschnelli>
wumpus: Yes. This makes sense. I thought about a GUI tool that is also process-separated.
< jonasschnelli>
And this would require RPC and ZMQ.
< wumpus>
zmq is not very suitable for over-the-internet usage, if you're considering that
< jonasschnelli>
Maybe someone could write a proxy Python script that sits behind apache/nginx and bundles the channels, does the SSL, etc.
< jonasschnelli>
(bundles the channels = RPC & ZMQ)
< wumpus>
something like websocket notification would work better for that. Wouldn't be too difficult to roll a zmq-to-websocket adapter.
< jonasschnelli>
right.
< wumpus>
initially I'd have preferred using a streaming http socket for notifications, but zmq is more 'standard'
< jonasschnelli>
Yes.
< jonasschnelli>
I think if one wants to connect a monitor app over the internet, a CGI Python script would be sufficient. You could use SSL and HTTP digest auth from apache/httpd
< jonasschnelli>
But let's first focus on the locally connected monitor!
< wumpus>
if it's for over the internet a whole new world of authentication, security, etc opens up... scope creep etc :)
< wumpus>
I do agree it'd be nice though
< jonasschnelli>
I really dream of re-using an old computer as a "monitor center" for a couple of nodes... mempool, peers, blocks
< wumpus>
what about snmp support? *sorry, troll mode today*
< jonasschnelli>
Is that still alive? :)
< jonasschnelli>
But yes. It would allow attaching multiple tools.
< btcdrak>
OMG wumpus.
< wumpus>
I don't actually know, I played around with it >10 years ago when I did something with *gasp* servers
< jonasschnelli>
haha...
< btcdrak>
mind you, it's probably not a bad idea...
< wumpus>
the protocol is terrible and the implementations are even worse, brittle and old (think: openssl but worse)
< jonasschnelli>
I still use it to create some stats from my servers though. I guess "munin" still uses snmp.
< wumpus>
you could make a snmp-to-RPC adapter though, if you come up with enough OIDs and learn to speak ASN.1
< jonasschnelli>
lol
< wumpus>
jonasschnelli: sure, lots of enterprise equipment still uses it, and the big monitor tools support it. It's just not very internet-ready, unless you use it over a secure monitoring VPN or such.
< GitHub151>
bitcoin/0.12 a0cea89 Wladimir J. van der Laan: Merge #7741: [0.12] Mark p2p alert system as deprecated...
< sipa>
wumpus: if we need to fix the serialization for gettxoutsetinfo... maybe we can replace it with a merkleized version?
< wumpus>
sipa: you mean a 6-month masters project for someone? :) or isn't it that bad to do?
< sipa>
wumpus: i don't mean UTXO commitments
< wumpus>
ohh!
< wumpus>
sure, would be good to have a better format
< wumpus>
I'm anything but married to the current one, and apparently we already broke it once without anyone noticing, so (as long as we mention it in the release notes) I'm not against breaking it again
< sipa>
we can iterate over the utxo entries in order like now, but use an incremental merkle tree hasher (similar to the algorithm used by ComputeMerkleRoot and friends now)
< wumpus>
cool
< sipa>
the overhead would be the same, and you could make it answer queries for specific entries... and it could later just be converted to a commitment structure
< wumpus>
yes, after the memory improvement for the merkle tree hasher it doesn't have to store all the data in memory
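The incremental idea sipa describes can be sketched in Python: feed hashes one at a time, keep only one pending subtree hash per tree level (O(log n) memory instead of all the entries), and reproduce Bitcoin's merkle rule of pairing an odd node with itself. This illustrates the algorithm shape only, not any proposed gettxoutsetinfo format:

```python
import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    """Naive reference: Bitcoin-style merkle root, duplicating an odd last node."""
    if not leaves:
        return b"\x00" * 32
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

class IncrementalMerkle:
    """Streaming merkle root: stores at most one subtree hash per level."""
    def __init__(self):
        self.inner = []   # inner[level] = pending subtree hash at that level
        self.count = 0    # leaves added so far

    def add(self, leaf):
        self.count += 1
        h, level = leaf, 0
        # Each zero bit in count means a pair at that level just completed: merge up.
        while not (self.count & (1 << level)):
            h = dsha256(self.inner[level] + h)
            level += 1
        while len(self.inner) <= level:
            self.inner.append(None)
        self.inner[level] = h

    def root(self):
        if self.count == 0:
            return b"\x00" * 32
        count, level = self.count, 0
        while not (count & (1 << level)):
            level += 1
        h = self.inner[level]
        while count != (1 << level):
            h = dsha256(h + h)          # unmatched subtree: pair with itself
            count += 1 << level
            level += 1
            while not (count & (1 << level)):
                h = dsha256(self.inner[level] + h)
                level += 1
        return h
```

The streaming version produces the same root as the naive one for any number of entries, while holding only about log2(n) hashes at a time.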
< GitHub90>
[bitcoin] jonasschnelli opened pull request #7761: [Qt] remove trailing output-index from transaction-id (master...2016/03/ui_txid) https://github.com/bitcoin/bitcoin/pull/7761
< jonasschnelli>
wumpus: what about adding a github label "ZMQ"?
< jonasschnelli>
Maybe each I/O interface should have its own label
< wumpus>
hmm maybe, I generally prefer to keep anything that is about communicating with applications under 'RPC'
< jonasschnelli>
okay.
< wumpus>
but I'm not against it, we have some more zmq specific issues now
< wumpus>
there is a compromise somewhere between general enough labels for sorting, and specific enough
< MarcoFalke>
what about adding a github label 'easy'?
< wumpus>
not a bad idea
< GitHub188>
[bitcoin] jonasschnelli opened pull request #7762: [ZMQ] append a message sequence number to every ZMQ notification (master...2016/03/zmq_seq) https://github.com/bitcoin/bitcoin/pull/7762
< jonasschnelli>
MarcoFalke: "Easy" would stand for "trivial"? "Easy" to review?
< MarcoFalke>
Easy just for issues
< MarcoFalke>
So when someone wants to "try out" patching something in bitcoin, they can just grab an easy issue
< MarcoFalke>
But fixing an issue tagged with easy usually implies the review is easy as well.
< wumpus>
easy to implement, not so much 'trivial'
< wumpus>
trivial was used for comment and message changes
< wumpus>
this can be somewhat more beefy, but it doesn't require deep understanding of bitcoin core itself
< wumpus>
and indeed for PRs it makes no sense :)
< jonasschnelli>
maybe we could rename "RPC" to "RPC/ZMQ/REST"?
< jonasschnelli>
and remove "REST"
< jonasschnelli>
For "Easy" we could use "easy to implement" (so the interpretation is clearer)
< wumpus>
jonasschnelli: yes, let's do that
< wumpus>
done
< jonasschnelli>
Looks good!
< morcos>
wumpus: 7648 is actually relatively simple code-wise, I believe it has gotten a lot of review. I believe at least one of petertodd's comments was about testing of the consensus code in BIP68, which is tested fairly heavily by the new RPC test.
< GitHub158>
[bitcoin] laanwj opened pull request #7766: rpc: Register calls where they are defined (master...2016_03_separate_rpc_registration) https://github.com/bitcoin/bitcoin/pull/7766
< jonasschnelli>
Right? importaddress leads to ISMINE_WATCH_UNSOLVABLE? and importpubkey to ISMINE_WATCH_SOLVABLE? (at least for P2PKH outputs)
< sipax>
yes
< jonasschnelli>
Is there no internal ripemd(hash()) cache that could map a P2PKH output to a given private key?
< jonasschnelli>
Or why is importing an address "UNSOLVABLE"?
< jonasschnelli>
I guess privkey->pubkey->hash->hash160->base58check, then compare against the required scriptsig pkh and sign...
< sipax>
jonasschnelli: it's unsolvable because it doesn't know what scriptPubKey to construct.
< jonasschnelli>
sipax: but couldn't it look through all provided/available private keys, generate the pubkeys in a temp. keystore and do a lookup?
< sipax>
jonasschnelli: the point is that it shouldn't
< jonasschnelli>
hah. ok.
< sipax>
wallets should not try to guess how to spend coins
< sipax>
that only leads to confusing behaviour, as it makes you forget under what circumstances it works
< jonasschnelli>
okay. This makes sense.
< gmaxwell>
for example, say someone pays a 1 of 2, me and you... and you 'guess' correctly that you can spend it, show it in your wallet... then LOL 8 confirms in I claw the funds back.
< sipax>
it's like saying "how do you mean, you didn't get my money? oh, i took your pubkey and added 21/7 times the generator to it! that's easy, right? your wallet should have guessed that that was possible and looked for it!"
< sipax>
if you import an address, you're telling the wallet "treat this address as mine, regardless of whether you know how to spend it or not"
< sipax>
if you want it to be able to solve, you should tell it what it needs to be able to solve (by importing the keys and scripts)
< jonasschnelli>
sipax: Agreed.
< jonasschnelli>
I was just thinking about separating the keys from the wallet but still using the wallet's coin selection, etc.
< jonasschnelli>
You could do this by importing the addresses/pubkeys and using fundrawtransaction to create your transactions.
< sipax>
yes, then you need to import the keys, not just the addresses
< jonasschnelli>
Then sign it locally (in my case over the hardware wallet app), then send it back for broadcasting.
< jonasschnelli>
Right... I see now the point with the keys instead of the addresses.
< jonasschnelli>
First I thought "addresses could carry less risk to 'pass around'". But I agree with your statement "wallets should not try to guess how to spend coins".
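The last step in the privkey->pubkey->hash->hash160->base58check chain jonasschnelli sketches is the base58check encoding. A minimal sketch of just that final step, taking the 20-byte hash160 (RIPEMD160(SHA256(pubkey))) as given input:

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(version, payload):
    """Encode version byte + payload with a 4-byte double-SHA256 checksum.
    For a mainnet P2PKH address: version=0, payload = 20-byte hash160."""
    data = bytes([version]) + payload
    checksum = hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
    full = data + checksum
    n = int.from_bytes(full, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58_ALPHABET[r] + out
    # each leading zero byte is encoded as a literal '1'
    pad = 0
    for b in full:
        if b:
            break
        pad += 1
    return "1" * pad + out
```

For example, the all-zeros hash160 under version 0 yields the well-known burn address 1111111111111111111114oLvT2 (21 leading '1's for the 21 zero bytes, then the encoded checksum).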
< sdaftuar>
instagibbs: gmaxwell: i don't see any blocks mined from 12/1/15 to 3/13/16 that include transactions which fail MTP
< sdaftuar>
however there are very few transactions that use time based locktimes, and only a small fraction of those appear to use times that are remotely relevant
< sipax>
maybe we should try creating some?
< sdaftuar>
yeah i think that would be great
< gmaxwell>
Indeed, for MTP thats the next thing to do.. conduct the test.
< gmaxwell>
what we're trying to do is avoid a hot cut of MTP, causing block orphaning. (again, this doesn't block deployment; but if there are miners still mining these things they'll need to be actively nagged to stop mining them before the soft fork)
< cfields>
anyone happen to know why bitcoind always uses ATYP DOMAINNAME when connecting to a proxy, even when using ipv4/ipv6 ?
< MarcoFalke>
passing the result of getblock(., False) into mininode's deserialize should not fail?
< GitHub186>
[bitcoin] MarcoFalke opened pull request #7767: [qa] add test: Deserialize getblock result (master...Mf1603-qaMininodeDeser) https://github.com/bitcoin/bitcoin/pull/7767