< wumpus> there is a feature for manual pruning, isn't there?
< wumpus> I remember something like that was merged at some point, a command line flag to prevent automatic pruning as well as a RPC to prune
< wumpus> client software could use this to prune only what they no longer require
< wumpus> (I don't think any lightning daemons do this, at the moment, but would be useful)
< CubicEarth> wumpus: yeah, sipa was also saying there is a manual prune option
< CubicEarth> so that should be able to be used by lightning nodes to ensure the only blocks pruned are ones they don't need anymore
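(The manual pruning being discussed is bitcoind's -prune=1 startup mode combined with the pruneblockchain RPC. Below is a minimal sketch of a client driving it over JSON-RPC; the credentials, port, and target height are placeholders, not anything from the conversation.)

```python
# Sketch: asking a manually-pruned bitcoind to prune, over JSON-RPC.
# Assumes the node was started with -prune=1 (manual-pruning mode);
# RPC_URL and RPC_AUTH are placeholder connection details.
import requests

RPC_URL = "http://127.0.0.1:8332/"
RPC_AUTH = ("rpcuser", "rpcpassword")  # placeholder credentials

def rpc(method, *params):
    payload = {"jsonrpc": "1.0", "id": "prune-demo",
               "method": method, "params": list(params)}
    r = requests.post(RPC_URL, json=payload, auth=RPC_AUTH)
    r.raise_for_status()
    return r.json()["result"]

# A client that no longer needs anything below height 500000 asks the
# node to prune up to that height; the RPC returns the height of the
# last block actually pruned (pruning happens per block file).
print("pruned up to height", rpc("pruneblockchain", 500000))
```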
< gmaxwell> the design of manual prune is kinda not ideal if you have multiple potential things that need data. :(
< CubicEarth> if there was a DoNotPruneAbove, it could be set by different things, with the lowest one controlling
< gmaxwell> indeed but then you'd need interface elements to remove zombie DNPAs.
< CubicEarth> Yeah :)
< CubicEarth> If too many things want the data, not pruning at all might be favorable
< gmaxwell> it might be more useful if there were just a way of setting it up so you won't process blocks unless your lightning is running, or you've removed the binding.
< CubicEarth> In either case though, if blocks are to be retained until the pruning command is given by another service, it seems useful for bitcoind to have an option to not download more than some MBs or GBs worth of blocks ahead
< CubicEarth> gmaxwell: are we saying the same thing?
< wumpus> it's not ideal, but nothing is even using the current functionality as far as I know, so I'm kind of reluctant to discuss adding even more
< wumpus> once it works for the 1 bitcoind - 1 lightningd case, it'd be more fruitful to discuss extending it to other cases
< wumpus> instead of adding a lot of logic for something that no one will be using in practice
< wumpus> FWIW "DNPA" could be implemented by an external process using the current RPC interface
< wumpus> have it manage the list of requirements and manage manual pruning accordingly
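(The external process wumpus sketches here, multiplexing a DoNotPruneAbove-style policy onto the existing RPC, could look roughly like the following. The class, the consumer names, and the policy when no consumer is registered are all illustrative assumptions; only pruneblockchain is the real interface.)

```python
# Hypothetical external coordinator: tracks the lowest height each
# consumer still needs and only ever prunes below the minimum of them.
# Reuses an rpc(method, *params) helper like the one sketched earlier.

class PruneCoordinator:
    def __init__(self, rpc):
        self.rpc = rpc
        self.required = {}  # consumer name -> lowest height it still needs

    def set_requirement(self, consumer, height):
        self.required[consumer] = height
        self._maybe_prune()

    def remove_consumer(self, consumer):
        # removing a "zombie DNPA" (gmaxwell's concern) is just a delete here
        self.required.pop(consumer, None)
        self._maybe_prune()

    def _maybe_prune(self):
        if not self.required:
            return  # no consumers registered: this sketch chooses not to prune
        # the lowest requirement controls: never prune a block someone still needs
        target = min(self.required.values()) - 1
        if target > 0:
            self.rpc("pruneblockchain", target)

# e.g. coordinator.set_requirement("lightningd", 550000)
```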
< CubicEarth> Yeah. DNPA is mostly the same as manual pruning it seems.
< CubicEarth> And that makes sense about not adding complexity.
< CubicEarth> Maybe it would be simple to add a DoNotValidateBeyond?
< wumpus> what I mean is that manual pruning provides the required low-level interface, DNPA would be one strategy to manage it at a higher level with multiple data consumers, but likely not the only way (at some point you might want to kill off consumers if they're too slow, for example, depending on disk space etc)
< CubicEarth> I understood as such ^^ even if my reply suggested otherwise :)
< wumpus> something like that, though validation is not the problem in this case, block download is - you'd want to tell it to pause downloading new blocks
< wumpus> I think there's a command that more or less does that (pause networking), but it might be GUI-only, I don't remember exactly
< wumpus> "setnetworkactive"
< wumpus> it can continue validating the blocks it already has, that won't fill up disk space
< wumpus> with manual pruning, it's not the validation that triggers pruning anyhow
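(The pause/prune/resume cycle wumpus describes could be strung together like this; setnetworkactive and pruneblockchain are real RPCs, while the surrounding flow and the rpc() helper are assumptions carried over from the earlier sketch.)

```python
# Sketch: stop filling the disk while downstream clients catch up,
# then prune and resume. The waiting step is left abstract.
def pause_prune_resume(rpc, prune_height):
    rpc("setnetworkactive", False)        # pause all networking; no new blocks arrive
    # ... wait here for clients to catch up on the blocks already on disk ...
    rpc("pruneblockchain", prune_height)  # free the space they no longer need
    rpc("setnetworkactive", True)         # resume block download
```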
< CubicEarth> hmmm, but blocks are not downloaded sequentially anymore, right?
< CubicEarth> or non-sequentially within a sliding window?
< wumpus> I don't see how that is relevant here: stopping block download is stopping block download, whether blocks are downloaded sequentially or not; if you want it to stop filling up the disk, you pause block download
< wumpus> the blocks it downloads can be non-sequential in a consensus sense, but on disk they're all simply appended to the block files on a first-come, first-served basis
< CubicEarth> I was thinking it wouldn't be as helpful to have downloaded and be storing a bunch of blocks, and yet be missing an earlier one, at the point networking was shut off
< wumpus> why not?
< CubicEarth> validation couldn't proceed
< wumpus> that stops filling the disk; once it's turned on again, it will download the rest and connect them
< wumpus> of course validation cannot proceed
< wumpus> you're essentially shutting down your node without really shutting it down
< wumpus> so it will respond to RPC queries to get the blocks and other data already there
< wumpus> and to pruning commands
< wumpus> once the clients have caught up, and a pruning command has been issued, the networking can be woken up again, but maybe I'm missing something, I don't know...
< wumpus> and validation will continue as far as it can with the blocks that it had at the time of network shutdown, not further
< CubicEarth> I need to think about it more (not from the lightning perspective ... I am no lightning expert). What you are describing might be totally workable. I was just thinking that if blocks are not downloaded sequentially and networking was turned off, there would be storage taken up by blocks that can't be validated due to gaps. I wasn't thinking practically so much; just in theory, it would be nice to only be storing what could be validated.
< CubicEarth> Yes, those gaps would be filled as soon as networking was turned back on, and it would proceed.
< wumpus> yes I think it makes sense to get a clear idea of what you practically need, otherwise you'll by definition drag unrelated concerns into it
< wumpus> exactly
< wumpus> from the perspective of the lightning clients, they will see the block number stop increasing while validation is blocked, which will negatively affect their usefulness as lightning nodes: channel status updates will no longer be detected
< CubicEarth> the block download window is 1024, btw
< gmaxwell> CubicEarth: no, above I meant a something much simpler. A setting in bitcoind where it just won't run unless your lightning daemon is connected.
< gmaxwell> but as wumpus was saying, they aren't even bothering to use the existing functionality.
< CubicEarth> those lazy lightning devs!
< gmaxwell> more like they only recently graduated to not requiring -txindex...
< jl2012> if no becomes true in lines 78 to 81, yes must be false
< jl2012> why doesn't it return the result earlier?
< gmaxwell> Because there is no need to, it might well even be slower to do so.
< gmaxwell> it would also make the function non-constant-time if it terminates early.
< jl2012> oh, I think this is the real reason?
< gmaxwell> if that were the only reason we'd have two versions.
< gmaxwell> But as I said, I wouldn't be surprised if branching out earlier was slower.
< jl2012> thanks!
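(The point about early returns is the standard constant-time argument; a toy illustration of the two shapes, not the actual function jl2012 is reading:)

```python
# Toy illustration of why such checks avoid early returns: the branch-free
# version's running time is independent of where a mismatch occurs (the
# same idea as hmac.compare_digest), while the early-exit version leaks
# the position of the first differing byte through its timing.
def equal_const_time(a: bytes, b: bytes) -> bool:
    assert len(a) == len(b)
    diff = 0
    for x, y in zip(a, b):
        diff |= x ^ y      # keep scanning even after a mismatch is known
    return diff == 0       # single exit point, no data-dependent branch

def equal_early_exit(a: bytes, b: bytes) -> bool:
    for x, y in zip(a, b):
        if x != y:
            return False   # faster on paper, but timing-dependent
    return True
```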
< wumpus> CubicEarth: please, don't call anyone lazy just because they don't focus on your specific issue; if anything, everyone is overworked
< wumpus> btw, wow, lightningd's API to get the last log messages for a specific node is very nice
< wumpus> even works to get log messages at a lower logging level (say, debug) than are written to disk
< gmaxwell> I had aspirations of doing all our logging into a ringbuffer... with disk just taking a feed off it.
< gmaxwell> which would allow us to do things like no logs normally, but dump the ring on crash.
< wumpus> that's what they do, I think, well I don't know if it's implemented as a ringbuffer but it acts that way on the outside
< wumpus> but yes that would be neat to have
< wumpus> at least for log messages that don't cost a lot of overhead to format unnecessarily
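(The ring-buffer idea gmaxwell describes is easy to prototype; in Python, collections.deque with maxlen is the ring, and the crash hook below is an illustrative assumption rather than anything Bitcoin Core does.)

```python
# Sketch: keep the last N log lines in memory only, and dump the ring
# to disk when the process dies with an unhandled exception.
import collections
import sys
import traceback

LOG_RING = collections.deque(maxlen=4096)  # oldest lines fall off automatically

def log(msg):
    LOG_RING.append(msg)  # normal operation never touches the disk

def dump_ring_on_crash(exc_type, exc, tb):
    with open("crash.log", "w") as f:
        f.write("\n".join(LOG_RING) + "\n")
        traceback.print_exception(exc_type, exc, tb, file=f)

sys.excepthook = dump_ring_on_crash  # disk takes a feed only on failure
```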
< CubicEarth> wumpus: you are right. I meant it not seriously, but things don't always come across as intended on irc. Nothing but respect for lightning devs from me!
< wumpus> CubicEarth: okay, sorry for misreading you then
< gmaxwell> wumpus: yea, I wondered about that but IIRC other than the format strings themselves there are only ~two places in the code where we totally skip some processing according to the log flags.
< gmaxwell> and I seem to recall thinking that those could probably just go away.
< wumpus> gmaxwell: agree, my concern is not relevant to many places, and there we already skip the processing explicitly, so the ring-buffer thing would just work
< wumpus> (except for those messages, obviously)
< gmaxwell> I think in general, for most users, the logging provides no real value; it just wastes disk space and even becomes an attack vector (e.g. via fill-up-the-disk spam attacks). So as a result we've made default logging pretty conservative, but then useful stuff isn't there when something goes wrong. Perhaps more interesting is how any of this would interact with concurrency.
< wumpus> it'd have to synchronize around writes to the ringbuffer, instead of around disk writes like now
< wumpus> there is a PR that moves part of logging to a thread I suddenly remember, #13200, should look into it in more detail
< gribble> https://github.com/bitcoin/bitcoin/issues/13200 | Process logs in a separate thread by jamesob · Pull Request #13200 · bitcoin/bitcoin · GitHub
< gmaxwell> right, but if the ringbuffer is getting more logs into it because it logs more things than disk, that might be annoying. Of course we could have a separate buffer per thread(group) and only take locks for all of them to merge output for display.
< wumpus> yes, fair enough
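(gmaxwell's per-thread variant would keep the hot path lock-free and only take a lock when merging the rings for display; a rough sketch, with all names and the timestamp-based merge being assumptions, and leaning on CPython's GIL for the unsynchronized appends:)

```python
# Sketch: one ring per thread, merged on demand. Appends never contend;
# the lock only guards the registry of rings.
import collections
import heapq
import threading
import time

_local = threading.local()
_rings = []                     # (thread name, ring) pairs
_rings_lock = threading.Lock()  # guards the registry, not individual appends

def _my_ring():
    if not hasattr(_local, "ring"):
        _local.ring = collections.deque(maxlen=1024)
        with _rings_lock:
            _rings.append((threading.current_thread().name, _local.ring))
    return _local.ring

def log(msg):
    _my_ring().append((time.monotonic(), msg))  # lock-free on the hot path

def merged_log():
    with _rings_lock:
        snapshots = [list(ring) for _, ring in _rings]  # copy under the lock
    # each ring is already time-ordered, so a k-way merge yields a global view
    return [msg for _, msg in heapq.merge(*snapshots)]
```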
< phantomcircuit> it seems kind of awkward to me to use the new naming convention in the same (small) function as the old one
< sipa> phantomcircuit: if it's a small function, just change it all
< phantomcircuit> which is mostly move-only; changing it would change that
< phantomcircuit> but the nMicroTime variable is new