[c-lightning] Towards an autopilot

Rusty Russell rusty at rustcorp.com.au
Wed Apr 4 12:30:31 AEST 2018

ZmnSCPxj <ZmnSCPxj at protonmail.com> writes:
> Good morning Rusty,
> Thank you for your input!
>> ZmnSCPxj at protonmail.com writes:
>> > Good morning c-lightningers,
>> > 
>> > Ideally, a nontechnical human user will simply cause funds to be sent to their c-lightning node, and then some time after be able to send funds on Lightning.
>> > 
>> > A step towards that ideal, is an autopilot (similar to that of lnd) which manages onchain funds and opens channels to the network as it detects funds.
>> > 
>> > So I propose an autopilotd, of the below architecture
>> I think we want something much simpler.
>> 1.  Initial setup: get me a channel or two.
>> 2.  I want to make a payment. Figure it out.
> Unfortunately I do not think it is a good idea to make channels only when the human user makes a payment.  Making a channel can take several hours to confirm at a bad feerate, and we do not have RBF support yet to fix bad feerates after-the-fact.  Better, I think, to make channels as soon as we get onchain funds to make them.  In particular, if somebody else closes a channel where we have quite a bit of money, we would want to put it back on Lightning ASAP.

OK, rename (1) to "Setup and Maintenance".

But I think we should support RBF, too.

>>     The setup should use a heuristic like:
>>     a. If we would burn more than 1% of funds to open a channel, only open 1.
>>     b. Otherwise, open two (or three?).
> My planned heuristic is to have a minimum and maximum channel satoshi.  If the onchain funds exceed the minimum, then we open 1.  If the onchain funds exceed the maximum, then we open at least one channel (of at most the maximum size, but somewhere between minimum and maximum), then we re-check again later when the fundchannel succeeds (or if we fail to fund a channel).

I think 3 is a reasonable default maximum, though we should simulate to
see if higher numbers increase network robustness.  They can always ask
for more.
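Not c-lightning code, but the min/max heuristic above can be sketched as follows.  The constant values and the `propose_channels` helper are assumptions for illustration (16777215 sat is the current protocol funding cap):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical thresholds; real values would be configurable. */
#define MIN_CHANNEL_SAT 20000ULL
#define MAX_CHANNEL_SAT 16777215ULL /* 2^24 - 1: protocol funding cap. */

/* Split onchain funds into proposed channel sizes between min and max.
 * Returns the number of channels proposed, writing sizes into out[]. */
static size_t propose_channels(unsigned long long funds,
			       unsigned long long *out, size_t max_out)
{
	size_t n = 0;

	while (funds >= MIN_CHANNEL_SAT && n < max_out) {
		unsigned long long size = funds;
		if (size > MAX_CHANNEL_SAT)
			size = MAX_CHANNEL_SAT;
		out[n++] = size;
		funds -= size;
	}
	return n;
}
```

The re-check-later step would simply call this again with whatever funds remain once a fundchannel succeeds or fails.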

>>     We should agree on a proximity algorithm which is likely to create an
>>     efficient robust network (Cyclic Superhubs?). This needs more research!

Note that I wrote a v. rough simulator for this; it does way worse than
random[1], so I'd like to see more work on this.  I'm not convinced that
you can do better than random.

>>     In addition, we should avoid nodes which are too large, too new or too
>>     unreliable (how do we measure this?).
> More statistics for the c-lightning to keep, of course!

But most nodes have just come online for the first time, and know
nothing but the gossip messages.

> Too large - can already judge from routemap from `listchannels`.  We already record channel capacity, we just need to report this in `listchannels` too.  We can judge a node as too large by summing up the channel capacities of all its channels listed in `listchannels` command.

Well, it can simply use a heuristic that biases away from supernodes.
But that needs to be spelled out.  I believe biases are better than
absolute rules, for network health.
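As a sketch of what such a bias (rather than an absolute rule) might look like, one could weight each candidate inversely to its total channel capacity when sampling.  The weight function and scale constant here are invented for illustration:

```c
#include <assert.h>

/* Weight a candidate inversely to its total capacity, so supernodes are
 * picked less often but never outright banned.  Capacity in satoshis;
 * the scale constant (1 BTC) is a hypothetical tuning knob. */
static double node_weight(unsigned long long total_capacity_sat)
{
	const double scale = 100000000.0;
	return scale / (scale + (double)total_capacity_sat);
}
```

A weighted random selection over `listchannels` capacities using such a weight keeps supernodes reachable but steers new channels toward smaller peers.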

> Too new - we could store on DB the earliest time we saw a `node_announcement`.  However when a `node_announcement` is pruned we should take care to also prune it in the database.  We could add a new `listnodes` command that reports nodes, address hints for that node if any, and the earliest we heard about the node.

Not useful for new nodes.  But I suggest we use the channel depth as a
proxy for age; bias against nodes which have no channels older than 2
weeks, say.  The bias away from giant nodes should be stronger than
this, however.
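A minimal sketch of the channel-depth check, assuming we can read each channel's funding block height (e.g. from the short_channel_id); the helper name and the two-week block count are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define BLOCKS_PER_TWO_WEEKS 2016 /* ~10 minutes per block. */

/* A node is "too new" if none of its channels is ~2 weeks deep.
 * channel_heights[] holds the funding block height of each channel. */
static bool node_too_new(const unsigned int *channel_heights, size_t n,
			 unsigned int current_height)
{
	for (size_t i = 0; i < n; i++) {
		if (channel_heights[i] + BLOCKS_PER_TWO_WEEKS <= current_height)
			return false;
	}
	return true;
}
```

In a bias scheme this boolean would feed a weight reduction rather than exclude the node outright.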

> Too unreliable - when a payment fails, we know the erring node and we know the node after the erring node.  The node after the erring node is the one that is unreliable (the erring node is the one that reported the error, so we know it is reliably alive and responsive).  Keep a counter for each node that is just the number of times it was pointed at by the erring node as failing.  We get some noise due to channel balance issues, but this is better than nothing.  We could also increase the unreliability counter (or have a separate counter) if we connect to the node and fail.

We don't make enough payments to be informed, (especially at startup).
We can test for reachability, of course.  When we have a channel we can
make a test payment through the node, too.
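The blame counter ZmnSCPxj describes could be as simple as the following sketch (the struct and the scoring weights are hypothetical):

```c
#include <assert.h>

/* Per-node reliability stats: blame the node *after* the erring node
 * each time a payment fails through it, since the erring node proved
 * itself alive by reporting the error. */
struct node_stats {
	unsigned int payments_blamed;  /* Times blamed for a route failure. */
	unsigned int connect_failures; /* Times we failed to connect. */
};

/* Hypothetical scoring: higher means less trusted as a channel peer.
 * Connect failures weigh more, being unambiguous evidence of flakiness. */
static unsigned int unreliability(const struct node_stats *s)
{
	return s->payments_blamed + 2 * s->connect_failures;
}
```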

>>     This bias against new nodes
>>     should weaken for successive channels,
> Under my proposed architecture, you can simply apply another algorithm
> to be executed after the other algorithms.  That algorithm would check
> `listpeers` to know how many channels it has.  If it has none, it
> filters out too new/too large/too unreliable from the array of
> proposed fundees.  If it has a small number (one or two, or some
> tweakable setting) it can move them to the end of the array (so that
> they get funded last if ever), or flip a weighted coin (weighed for
> 100% removed if channels is 0, reducing greatly as the number of
> channels increases) to remove them from the array of proposed fundees.

Pluggable algorithms are great for researchers, but absolutely wrong for
normal users.  IOW, every option you add halves your userbase, *and*
they'll leave it on the default anyway!

And no user will ever be as informed about the nuances of their choices
as you are.  That means it's *our* responsibility to Make It Just Work,
and every option we add is an abrogation of that responsibility.

>>     The payment should use a heuristic like:
>>     a. Can I send like normal?
>>     b. Can I rebalance channels to send?
>>     c. Do I need to set up a new channel?
> Seems more appropriate for `lightningd/payalgo`.

Perhaps, but the other option is to have the pilot do it, and only
provide low-level controls?
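As an illustration, the three-step payment heuristic reads naturally as a decision cascade; the function and enum names here are made up:

```c
#include <assert.h>

enum pay_action { PAY_DIRECT, PAY_REBALANCE, PAY_NEW_CHANNEL };

/* a. Can one existing channel carry the payment?
 * b. Can our channels carry it in total, after rebalancing?
 * c. Otherwise, a new channel is needed.  (Illustration only.) */
static enum pay_action choose_action(unsigned long long amount,
				     unsigned long long max_single_channel,
				     unsigned long long total_spendable)
{
	if (amount <= max_single_channel)
		return PAY_DIRECT;
	if (amount <= total_spendable)
		return PAY_REBALANCE;
	return PAY_NEW_CHANNEL;
}
```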


/* First, start with a 32-bit number i = 0.

For each node, get hash = H(i || pubkey), where H is some standard hash
algorithm and pubkey is the public key of the node.  Also get our_hash = H(i
|| our_pubkey).

Perform successive filtering.  While the set is larger than 2 nodes,
successively compare high bits: if the highest bit of hash does not match the
highest bit of our_hash, remove that node from the set.  If the resulting set
is still larger than 2, match the next bit.  When the set is down to 2 or 1
nodes, back off by one bit and add back the most recently removed nodes.
This yields a set of at least 3 nodes.

Sort the nodes according to hash.

Identify where our node is in the sorted list.  Then our candidate is the next
node in the list, or if we are the last node, the first node in the list.

If the candidate already has a channel with us, or has no address info and
cannot be found by DNS seed or so on, or cannot be contacted, or refuses
incoming channels or some other error, then increment i and try finding
again. */

/* #define USE_RANDOM 1 */


#include <ccan/asort/asort.h>
#include <ccan/crypto/siphash24/siphash24.h>
#include <ccan/isaac/isaac64.h>
#include <ccan/short_types/short_types.h>
#include <stdio.h>
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Simulation parameters (the original post did not include the values;
 * these are assumptions). */
#define NEW_NODES_PER_ROUND 100
#define UNAVAILABLE_NODES_PER_ROUND 5
#define REMOVED_NODES_PER_ROUND 5

static isaac64_ctx isaac;

struct node {
	/* We use u64 as "pubkeys" to save bytes.  0 == removed. */
	u64 pubkey;

	/* Is this available? */
	bool available;

	/* For traversal test */
	bool visited;

	u32 num_connections;
	struct node **connections;
};

static struct node *random_node(struct node *nodes, size_t num_nodes)
{
	size_t n;
	do {
		n = isaac64_next_uint(&isaac, num_nodes);
	} while (nodes[n].pubkey == 0);

	return nodes + n;
}

static void init_node(struct node *nodes, size_t i)
{
	nodes[i].pubkey = isaac64_next_uint64(&isaac);
	nodes[i].num_connections = 0;
	nodes[i].connections = NULL;
	nodes[i].available = true;
}

/* We don't bother uniquifying connections */
static void add_connection(struct node *a, struct node *b)
{
	a->connections = realloc(a->connections, (a->num_connections+1)
				 * sizeof(a->connections[0]));
	a->connections[a->num_connections++] = b;
	b->connections = realloc(b->connections, (b->num_connections+1)
				 * sizeof(b->connections[0]));
	b->connections[b->num_connections++] = a;
}

static int nodehash_cmp(const size_t *a, const size_t *b, u64 *h)
{
	if (h[*a] > h[*b])
		return 1;
	else if (h[*a] < h[*b])
		return -1;
	return 0;
}

#ifdef USE_RANDOM
static void make_connection(struct node *nodes, size_t num_nodes, size_t i)
{
	add_connection(&nodes[i], random_node(nodes, num_nodes));
}
#else
static void make_connection(struct node *nodes, size_t num_nodes, size_t i)
{
	struct siphash_seed seed;
	struct node *target = NULL;

	/* We use seed as 'i' in the algo. */
	memset(&seed, 0, sizeof(seed));
	for (seed.u.u32[0] = 0;; seed.u.u32[0]++) {
		u64 our_hash = siphash24(&seed, &nodes[i].pubkey,
					 sizeof(nodes[i].pubkey));
		size_t *sorted;
		size_t num_similar = 0;
		u64 mask;
		u64 h[num_nodes];

		for (size_t n = 0; n < num_nodes; n++) {
			if (!nodes[n].pubkey)
				continue;
			h[n] = siphash24(&seed, &nodes[n].pubkey,
					 sizeof(nodes[n].pubkey));
		}

		/* How many in each group for each number of bits? */
		for (mask = -1ULL; mask; mask <<= 1) {
			num_similar = 0;
			for (size_t n = 0; n < num_nodes; n++) {
				if (!nodes[n].pubkey)
					continue;
				if (((our_hash ^ h[n]) & mask) == 0)
					num_similar++;
			}
			if (num_similar >= 3)
				break;
		}

		/* Sort */
		sorted = calloc(num_similar, sizeof(size_t));
		num_similar = 0;
		for (size_t n = 0; n < num_nodes; n++) {
			if (!nodes[n].pubkey)
				continue;
			if (((our_hash ^ h[n]) & mask) == 0)
				sorted[num_similar++] = n;
		}
		asort(sorted, num_similar, nodehash_cmp, h);

		/* Pick next (if no next, wrap around to 0) */
		target = &nodes[sorted[0]];
		for (size_t n = 0; n < num_similar; n++) {
			if (h[sorted[n]] > our_hash) {
				target = &nodes[sorted[n]];
				break;
			}
		}
		free(sorted);
		if (target->available)
			break;
	}
	add_connection(&nodes[i], target);
}
#endif /*! USE_RANDOM */

static void deactivate_node(struct node *node, size_t *num_available_nodes)
{
	if (node->available) {
		node->available = false;
		(*num_available_nodes)--;
	}
}

static void delete_node(struct node *node, size_t *num_available_nodes)
{
	deactivate_node(node, num_available_nodes);
	node->pubkey = 0;
}

static size_t visit_nodes(struct node *node)
{
	size_t num_visited;

	if (!node->available || node->visited)
		return 0;

	node->visited = true;
	num_visited = 1;
	for (size_t i = 0; i < node->num_connections; i++)
		num_visited += visit_nodes(node->connections[i]);

	return num_visited;
}

static void test_reachability(struct node *nodes,
			      size_t first_possible,
			      size_t num_nodes,
			      size_t num_available)
{
	size_t visited;

	/* Find an active node */
	for (;;) {
		if (first_possible == num_nodes)
			return;
		if (nodes[first_possible].available)
			break;
		first_possible++;
	}

	/* Clear traversal markers */
	for (size_t i = first_possible; i < num_nodes; i++)
		nodes[i].visited = false;

	visited = visit_nodes(&nodes[first_possible]);
	printf("%zu available, %zu reachable\n", num_available, visited);
}
int main(int argc, char *argv[])
{
	size_t num_rounds = argc > 1 ? atoi(argv[1]) : 100;
	size_t num_nodes = NEW_NODES_PER_ROUND, num_available_nodes = num_nodes;
	struct node *nodes = malloc(sizeof(*nodes) *
				    (1 + num_rounds) * NEW_NODES_PER_ROUND);

	isaac64_init(&isaac, argc > 2 ? (const u8 *)argv[2] : NULL,
		     argc > 2 ? strlen(argv[2]) : 0);

	/* Initialize seed nodes with single random connection. */
	for (size_t i = 0; i < num_nodes; i++)
		init_node(nodes, i);
	for (size_t i = 0; i < num_nodes; i++)
		add_connection(&nodes[i], random_node(nodes, num_nodes));

	for (size_t round = 0; round < num_rounds; round++) {
		size_t new_num = num_nodes + NEW_NODES_PER_ROUND;

		/* Add new nodes */
		for (size_t i = num_nodes; i < new_num; i++) {
			init_node(nodes, i);
			/* 50% have 2 connections, 25% have 3... */
			do {
				make_connection(nodes, num_nodes, i);
			} while (isaac64_next_uint(&isaac, 2));
		}
		num_nodes = new_num;
		num_available_nodes += NEW_NODES_PER_ROUND;

		/* Disable some nodes */
		for (size_t i = 0; i < UNAVAILABLE_NODES_PER_ROUND; i++)
			deactivate_node(random_node(nodes, num_nodes),
					&num_available_nodes);

		/* Remove some nodes */
		for (size_t i = 0; i < REMOVED_NODES_PER_ROUND; i++)
			delete_node(random_node(nodes, num_nodes),
				    &num_available_nodes);
	}

	/* Test reachability as we delete nodes. */
	for (size_t i = 0; i < num_nodes; i++) {
		if (!nodes[i].available)
			continue;
		deactivate_node(&nodes[i], &num_available_nodes);
		test_reachability(nodes, i+1, num_nodes, num_available_nodes);
	}
	return 0;
}
