[c-lightning] Replicated backups

Christian Decker decker.christian at gmail.com
Thu May 30 19:24:37 AEST 2019


ZmnSCPxj <ZmnSCPxj at protonmail.com> writes:
> RAID-Z and mirroring.  `scrub` once a week to help the filesystem
> detect inconsistencies among mirrors.  Continuously monitor ZFS health
> and once you start getting high error rates on a component storage
> device, do a graceful shutdown of `lightningd`, replace the failing
> device, have ZFS recover, restart `lightningd`.
>
> This assumes all your hardware is in one place where ZFS can manage them.
> If you need remote backup, well... GlusterFS?
>
> A simpler alternative to ZFS is ChironFS, but I do not think it is
> quite as mature as ZFS, it no longer seems maintained, and it does not
> auto-heal: it simply keeps going if one replica is damaged or
> destroyed.  (I believe ChironFS could in theory use an NFS mount as a
> replica, but problems occur if the NFS mount is interrupted due to
> network connectivity issues, and since ChironFS does not auto-heal,
> the NFS replica will remain outdated afterwards.)

As mentioned before, I think the filesystem is the wrong level to
address this. The snapshot + journal approach is more flexible in that
the backup can be encrypted and stored wherever we want. If we build
services that accept this general format, it doesn't really matter that
we are taking a sqlite3 DB snapshot and then journalling SQL queries. If
we find a better, more compact format, the plugin can just switch to
that and we don't have to change all the backends.
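
To make this concrete, here is a rough sketch of what the plugin side
could look like, assuming we snapshot the sqlite3 file once and then
journal each transaction's queries tagged with a version counter (the
paths, helper names, and journal format are made up for illustration):

    import os
    import shutil
    import sqlite3

    DB_PATH = "lightningd.sqlite3"            # assumed DB location
    SNAPSHOT_PATH = "backup/snapshot.sqlite3"
    JOURNAL_PATH = "backup/journal.log"

    def snapshot_db():
        """Copy the current DB as the base for the journal."""
        os.makedirs("backup", exist_ok=True)
        # sqlite3's online backup API gives a consistent copy even
        # while the DB is in use.
        src = sqlite3.connect(DB_PATH)
        dst = sqlite3.connect(SNAPSHOT_PATH)
        src.backup(dst)
        dst.close()
        src.close()

    def append_to_journal(version, statements):
        """Append one transaction's queries to the journal.

        Sketch assumes single-line statements; a real format would
        need proper framing/escaping.
        """
        with open(JOURNAL_PATH, "a") as f:
            for stmt in statements:
                f.write("%d %s\n" % (version, stmt))
            f.flush()
            os.fsync(f.fileno())  # durable before we ACK lightningd

    def restore(target):
        """Rebuild the DB at `target` from snapshot + journal."""
        shutil.copy(SNAPSHOT_PATH, target)
        db = sqlite3.connect(target)
        with open(JOURNAL_PATH) as f:
            for line in f:
                _version, stmt = line.rstrip("\n").split(" ", 1)
                db.execute(stmt)
        db.commit()
        db.close()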

>> > > In the case that we fall out of sync, I believe we can just start
>> > > over with a fresh snapshot. It's not ideal, but should be robust?
>>
>> > How do we know we are out of sync?
>>
>> Christian had the idea of using an update state counter, so the
>> plugin could know if it's at the correct state.
>>
>> I guess the problem is if the drive died right after updating to the
>> latest state, and somehow the plugin crashed or failed to commit this
>> latest state to the remote server.
>
> This seems plausible.
> I would strongly recommend sending this statecounter in the `db_hook`.
>
> But in any case, regardless of location of where you are replicating
> (remote or local), this is still a form of RAID-1, and consistency
> issues with RAID-1 must be assumed to be possible here.

Effectively this is a classical 2-phase commit from distributed
computing theory :-) With the rollback ability that a trailing journal
gives the backup, we can recover from whatever situation arises here,
since we have the master driving all updates.
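
A rough sketch of that handshake, assuming a hypothetical db_write hook
payload that carries a monotonically increasing "data_version" counter
and a journal backend with rollback support (all the names here are
assumptions, not the actual interface):

    class BackupPlugin:
        def __init__(self, backend):
            self.backend = backend
            # Last version the backend has safely stored.
            self.version = backend.latest_version()

        def on_db_write(self, data_version, statements):
            if data_version == self.version + 1:
                # In sync: append this transaction to the journal.
                self.backend.append(data_version, statements)
            elif data_version <= self.version:
                # Backup is ahead: we ACKed, but lightningd crashed
                # before committing locally.  The trailing journal
                # lets us roll back and re-apply.
                self.backend.rollback_to(data_version - 1)
                self.backend.append(data_version, statements)
            else:
                # We missed at least one transaction; the journal has
                # a gap, so start over from a fresh snapshot.
                self.backend.new_snapshot()
            self.version = data_version
            # Only after this ACK does lightningd commit locally.
            return {"result": "continue"}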

> Also, you might not have seen this ninja edit on the github thread:
>
>> Edit: If you ***really*** want to continue here, I would suggest
>> rather the creation of a `channel_state_update` hook that focuses on
>> the only important DB update: revocation of old channel states. This
>> removes a lot of the risk and complexity of using the DB
>> statements. Then add a `restorechannel` command that requires the
>> same information as `channel_state_update` provides, with some checks
>> to ensure that we do not restore to a known-old channel state.
>
> Possibly you might also want a `getchannelstate` command that gives
> the same information as `channel_state_update` hook -- for example,
> after your plugin restarts, you might want to `getchannelstate` all
> live channels.  Attempting `restorechannel` on all channels we
> currently hold would also be doable at plugin startup.  This may be
> more useful than a remote backup of the entire database.
>
> Of course, loss of invoice data is bad, but presumably your shopping
> cart software also has a copy of any invoice it has issued.

I'd really prefer the blacklisting approach here, i.e. back up
everything by default and exclude only what we explicitly decide is
safe to lose: backing up more information than necessary is never a
security issue, but missing some information that we didn't consider
"important" is devastating.
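
For illustration, a blacklist on the journalled queries could be as
small as the following (the table names and the regex-based matching
are assumptions, not the real schema; everything not explicitly listed
gets backed up):

    import re

    # Tables we have explicitly decided are safe to lose (examples).
    BLACKLISTED_TABLES = {"ephemeral_cache"}

    def keep_statement(stmt):
        """Keep a statement unless it only touches blacklisted tables."""
        tables = set(re.findall(r"(?:INTO|UPDATE|FROM)\s+(\w+)", stmt, re.I))
        return not tables or not tables <= BLACKLISTED_TABLES

    def filter_journal(statements):
        return [s for s in statements if keep_statement(s)]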

Cheers,
Christian

