[c-lightning] [Announce] CLBOSS Automated C-Lightning Node Manager

ZmnSCPxj ZmnSCPxj at protonmail.com
Thu Oct 29 17:55:08 AEDT 2020


CLBOSS is a C-Lightning plugin to automatically manage the
channels of your node.

It is available here:

* https://github.com/ZmnSCPxj/clboss.git
* https://github.com/ZmnSCPxj/clboss/releases/tag/v0.2

Version 0.2 is designed for C-Lightning 0.9.1.

The intent is that CLBOSS is fire-and-forget: just set it
up on your C-Lightning node, then put money on some onchain
addresses of the node, and it will put those in channels
and get you incoming capacity and eventually let you route
payments, with a possibility of earning funds from those.

CLBOSS simplifies deploying a fresh C-Lightning node.
There is no need to find nodes to connect and channel to,
no need to solicit incoming channels from other node
operators, etc.
(At least that is the intent...)

CLBOSS should work well even on an existing C-Lightning node
that you have previously been managing manually.
(Operative word being "SHOULD"; this is alpha-level software,
stop being #reckless people...)
CLBOSS will automatically take up managing your existing
C-Lightning node, creating channels to the network when your
peers close on you, maintaining incoming liquidity, etc.
I also plan to have it monitor peers for how useful they are
in routing.

The software is released now in the hope that it is useful to
anybody else.
There is no warranty and I will not be liable for any loss you
incur because CLBOSS does something monumentally foolish.

So What Can CLBOSS Do Now?

* For a fresh node, find initial peers to download the channel
  map from.
* Find candidates to make channels with ("autopilot").
* Get incoming capacity by using offchain-to-onchain swaps.
  * Opportunistically swap out during low-onchain-fee periods
    unless node has no incoming capacity at all, in which case
    it *will* swap out even at high fees.
* Monitor onchain funds and strive to get them into channels
  during low-fee periods, using `multifundchannel` as well.
  This includes putting money that was in channels closed by
  your peers (or closed by you because of HTLC timeouts, or
  for any reason at all) back into channels, preferably
  during low fee periods.
* Gather statistics on how good your peers are at forwarding.
  * Does not actually close channels with peers that are bad at
    forwarding yet; at this point, I am still planning out how to
    do this with as little disruption to node operations as
    possible.
    Suggestions on how to do this are welcome.
* Attempt to set channel feerates rationally, including
  reweighting depending on how large our node is compared to
  competitors and adjusting feerates according to how
  imbalanced our channel is.
* Track low-fee and Internet disconnection periods, and
  act accordingly.
  (Prefer to perform onchain actions when onchain fees are
  low, and do not mark nodes as bad/unreachable when *we*
  are the one that is offline.)
* Rebalance channels if they are too imbalanced, including
  "JIT Routing" aka rebalancing on forward.
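The feerate item above can be illustrated with a small sketch.
Nothing here is actual CLBOSS code; the function name
`scaled_fee_ppm` and the linear curve are invented purely to show
the idea of charging more when our side of a channel is nearly
drained (outgoing capacity is scarce) and less when it is full:

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical sketch: scale a base proportional feerate by how
// imbalanced a channel is.  The exact curve is NOT from CLBOSS;
// it only illustrates imbalance-based fee adjustment.
std::uint32_t scaled_fee_ppm(std::uint32_t base_ppm,
                             std::uint64_t our_msat,
                             std::uint64_t total_msat) {
    // Fraction of the channel on our side, in [0, 1].
    double ours = total_msat ? double(our_msat) / double(total_msat)
                             : 0.0;
    // Example linear curve: 2x fee when drained, 0.5x when full.
    double multiplier = 2.0 - 1.5 * ours;
    double fee = base_ppm * multiplier;
    return std::uint32_t(std::max(1.0, fee));
}
```

A drained channel (`our_msat == 0`) thus charges double the base
feerate, discouraging further drain, while a full channel offers a
discount to attract flow back out.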

I have been running a fresh C-Lightning node with low liquidity
(< 0.05 BTC owned) for about two months during this initial
phase of CLBOSS development, and I feel mostly satisfied with
CLBOSS for now.
Of course, this probably means it is currently tuned for
low-liquidity nodes and would work horribly with high-liquidity
nodes with lots of traffic, but that should be fixed when someone
gives a bug report.

A neat thing CLBOSS does is that it tries to make decent-sized
channels of 0.01 BTC or more (a soft minimum, with a hard minimum
of 0.005 BTC).
Sometimes, from a channel closure, you will get less than 0.01 BTC.
CLBOSS will then wait for a low-onchain-fee period and swap offchain
funds for onchain funds, until it has 0.01 BTC onchain that it can
put into a new reasonable-sized 0.01 BTC channel.
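As a rough sketch of the sizing rule just described (the function
name and the `fees_are_low` input are illustrative, not the actual
CLBOSS interface; only the 0.01 BTC soft minimum and 0.005 BTC hard
minimum come from the text above):

```cpp
#include <cstdint>

constexpr std::uint64_t SOFT_MIN_SAT = 1000000;  // 0.01 BTC
constexpr std::uint64_t HARD_MIN_SAT =  500000;  // 0.005 BTC

// Hypothetical decision: should we fund a channel now with
// `onchain_sat` satoshis available?
bool should_fund_channel(std::uint64_t onchain_sat, bool fees_are_low) {
    if (onchain_sat >= SOFT_MIN_SAT)
        return true;            // decent-sized channel, go ahead.
    if (onchain_sat < HARD_MIN_SAT)
        return false;           // too small; wait and gather more funds.
    // Between hard and soft minimum: acceptable, but only during a
    // low-fee period rather than paying high fees for a channel we
    // would rather have made bigger.
    return fees_are_low;
}
```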

So How Do I Get Status and Control CLBOSS?

The API of CLBOSS is very simple.

* Use `newaddr` to get an onchain address.
* Send funds of at least 0.011 BTC (recommended at least
  0.02 BTC or more) to the onchain address.
* Wait a few hours.
* Whenever you have onchain funds you want to put in your
  LN node, just `newaddr` and send and wait a few hours.
  * For example, if you have a regular fiat cost-averaging
    schedule, you can send real money to your LN node immediately,
    let it stew for a bit, then close channels or use an
    offchain-to-onchain swap and send to your cold storage
    during a low-fee period.

Remember, the point of CLBOSS is to be an automated node
manager, thus having to do anything more complicated than
"deposit funds" would be an absolute failure of CLBOSS.

I Am A Nerd And I Want To Monitor And Control CLBOSS

There are no APIs or special CLBOSS commands.

Seriously? But...

No buts!

Come On!


There is a `clboss-status` command, which reports the status
of the various sub-components of CLBOSS.

Other commands and options may eventually be added in the future.
Such commands and options should probably be considered only
for debugging purposes.

Again, if you feel you have to step in to tweak CLBOSS, then
that is a bug in CLBOSS and I should go fix it.

The intent is that CLBOSS should be something that does not
need a sapient entity spending higher-cognition processing
just to manage the node.
That would just mean you end up managing your actual node, *and*
managing CLBOSS itself, when the goal of CLBOSS is to de-load you
from manually managing the node.
Compare having to manually balance and defragment a `btrfs` setup
by logging in and doing it yourself every day versus just putting
it in the `crontab`, or better yet, as part of the out-of-the-box
code like ZFS does.

Setting up a C-Lightning node should be just as easy as
installing C-Lightning, installing CLBOSS, and sending onchain
funds to your new node.

Why C++?  So Uncool

It was cool when I was younger.

Just How Old Are You?

Old enough to remember when C++ was cool.
This means I am potentially a centuries-old vampire, and
completely and totally not at all some kind of very new and
advanced AI.

Okay But It Would Be Easier To Develop CLBOSS With Language X

I agree.

But an issue is that anything other than plain C or C++ would
require users to install even more stuff, such as compilers,
runtimes or additional non-standard libraries.

I know people who have stopped using C-Lightning because at
some point we generated our wire code using Python `mako`,
which apparently was not easily available on Gentoo and some
other distros.
Most of the plugins for C-Lightning are written in Python as
well, and frankly, you really need most of those plugins in
practice to administer your node.
Combine that with the recent problems switching from Python 2
to Python 3, and the compatibility difficulties within the
Python 3.x series itself.

I know of ex-C-Lightning users who lament that it should
really be called "Python-Lightning", since basic C-Lightning
does very little and everything interesting is in plugins, most
of which are in Python; there was a time as well when compiling
C-Lightning from a github clone required Python
(though Python was never required if you got it from the correct
source release tarball instead of a direct github clone
--- but for a long time, C-Lightning was buggy, with most
bugfixes only available on the github master until the next
release could be, well, released).

Keeping dependencies low can be a good design strategy.
That means eschewing higher-level languages, which tend to
tempt you with enormous repositories full of nice packages
for everything, but with lots of extra dependencies that your
users now have to be comfortable navigating, and with the risk
that a dependency is subverted to include code that breaks
your program's security and/or privacy requirements.

Sometimes, NIH makes sense, in order to avoid dependency hell.

(Or take advantage of open source and just outright copy the
damn dependency into your release.)

Since all modern OSs are Unixlikes (obviously Windows is not
modern), and the native language of Unix is C, the OS
automatically includes the runtime for C.
And if the runtime library for the OS is subverted, well,
it is not just CLBOSS that is at risk at that point.

C++ just reuses the runtime for C, using a lot of magic
stuff in the compiler to make it look just like C to the OS.
So it is high-level (or at least higher-level than C) while
having low requirements.
(Or at least requirements that the OS itself already needs, woot.)

Thus, CLBOSS has fairly low requirements on what you have
installed.
I have successfully compiled CLBOSS on a `debootstrap`ped
`chroot` jail with only the following installed on top of the
initial `debootstrap`:

* `build-essential`
* `pkg-config`
* `libev-dev`
* `libcurl4-gnutls-dev`
* `libsqlite3-dev`

Note that the above is for an "official" source build; if you
are compiling from the github repo you need `automake`,
`autoconf-archive`, and `libtool` as well.
And obviously `git`, since you cannot clone from github without
it.

(C-Lightning from an official source tarball requires a local
install of `automake` and `libtool`, because the official source
tarball `clightning-vN.N.zip` does not include the
autotools-generated `configure` and `Makefile` for its
autotools-using dependencies; CLBOSS at least does not require
`automake` et al if you build from the official source tarball,
but that is moot since C-Lightning requires it...)

When running, you can optionally add:

* `dnsutils` - for the `dig` command.
  This is optional but recommended, so that CLBOSS can use DNS
  seeds to discover initial peers.
  Even without it, CLBOSS has a hardcoded list of high-uptime
  nodes to fall back on.
  (It only uses those peers for initial gossip download, and
  does not prefer them for channeling.)
  This is a bug of CLBOSS and the plan is to eventually
  remove this dependency and have CLBOSS talk to DNS seeds
  directly somehow.

If you go the official-source-tarball route you just need to
untar, `configure && make && sudo make install`.
If you prefer to clone the github repo, you first need to run
`autoreconf -i` (which means you need `automake`, `libtool`,
and `autoconf-archive`).

*Reducing* those above dependencies is something I want to do
as well.
That means NIH-ing `dig`, and possibly repackaging Sqlite3 and
libev (which are probably smaller than `libsecp256k1`, which I
repackaged as well since it is not common in most distros).
Ideally, a CLBOSS official source package should only require
`build-essential`, and only developers of CLBOSS would be
building straight from github.

CLBOSS is intended to compile on systems where C-Lightning
itself might very well fail to compile.
If you find a system where C-Lightning compiles and runs but
CLBOSS does not, please inform me, I need to fix it.

CLBOSS uses autotools, but you only need to install it if
you are building from repo directly (though as mentioned,
C-Lightning source build requires it anyway so...).

You Could Release A Binary And Runtime Package Via X...

Sure, and then anyone auditing the code would **also** have to
audit some kind of reproducible build to ensure that the
binaries I release *are* exactly corresponding to the audited
source code, and are not vulnerable to "implications of
trusting trust".

That can come later, of course, with necessary nerd debates
on just what packaging system is best, how best to tweak the
reproducible builds, etc., but I want to focus on improving
CLBOSS for now so it becomes a node manager worth tr\*sting
with your precious hard-earned sats.

Ultimately, *someone* has to make a source build that is the
basis of a reproducible build, and reducing the requirements
of the source build is helpful: reproducible builds are hard
enough as it is.

With a source build, you know that if the source code is
audited, the resulting binary you created from the compiler
toolchain **you** selected should be accurate to the source
code, modulo any bugs (or inserted hacks, see the "Trusting
Trust" attack) on the compiler toolchain itself.

(If you suddenly become concerned about trusting trust attacks,
see [diverse double-compiling](https://dwheeler.com/trusting-trust/).
Freedom and open-source software FTW.)

So, CLBOSS Is An Autopilot, Right?


It does have code to try to figure out the best nodes to
peer with, like other autopilot implementations.
That code probably sucks real bad right now, BTW.

However, one of the things I find weird about most autopilot
implementations is that they only try to find peers when
they absolutely *have to*, i.e. when there is money onchain
*right now* that needs to get into offchain channels.

CLBOSS periodically runs the channel-candidates-finders even
when it has no onchain funds to put into channels.

It then caches discovered nodes, which now enter an
investigation period.
Nodes under investigation are continuously checked if they
would be good channel counterparties.

Currently investigation only checks uptime of the proposed
node by trying to `connect` to them, but other metrics on
the proposed node can be investigated in addition to that.
(The main reason I have not done so yet is because I am
uncertain about the best monoid to combine the results of
the various investigations.)

For example, we could check the median feerates of the channels
of that proposed node (both incoming and outgoing), compare them
to others, and adjust its desirability accordingly.
We could try probe-routing through that node and see how
easy it is to reach (and if it *is* reached, whether it can
indeed forward).
And so on.
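To illustrate the "monoid" concern above: each investigation metric
could yield a score, combined with an associative operation that has
an identity element meaning "no information". A hypothetical sketch
(the type and the multiplicative-weight convention are invented for
illustration, not taken from CLBOSS):

```cpp
// Hypothetical desirability score: each metric produces a
// multiplicative weight, so combining is multiplication and the
// identity element (no information) is 1.0.  Multiplication is
// associative and commutative, so partial results can be combined
// in any order or grouping -- the defining property of a monoid.
struct Desirability {
    double score = 1.0;  // identity element
};

Desirability combine(Desirability a, Desirability b) {
    return Desirability{a.score * b.score};
}
```

A metric that finds nothing remarkable would report the identity,
leaving the combined score unchanged.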

In addition, autopilots need data to crunch.
This is typically derived from gossip.
A fresh C-Lightning install has no gossip, and also absolutely
no idea where to connect in order to get that gossip.
CLBOSS includes code to get a fresh C-Lightning node connected
to the (mainnet) network and start gossiping, so that autopilots
can actually work.

CLBOSS does a number of things as well that autopilots do not
traditionally do, such as setting channel fees and monitoring
peer performance.

CLBOSS Leaves A Small Amount Of Funds Onchain!

This is in preparation for a future with anchor commitments.
With anchor commitments, fees are not pre-agreed, but instead
can be attached via an anchor output of the commitment
transaction.

However, the drawback is that anchored fees cannot be paid from
the channel funds.
Instead, fees must be paid from some onchain funds that have
been set aside for the purpose of paying for anchor commitments.

The small amount of funds CLBOSS leaves onchain is intended to
be used for paying fees of anchor commitments.

Anchor commitments are still being developed for C-Lightning
but it seems good to reserve this now, so that it continues to
be useful in the future.
Onchain activity is a liability, so leaving a tiny amount of
funds now will help once the entire network starts upgrading to
anchor commitments: at least we will not have to perform onchain
activity to get funds later.



Wait, Why Did I Ask That Unnatural Question?

Because I have mind-control powers.



I was able to feed data into your mind and direct and focus its
attention by encoding English words into a blob of ASCII bytes
and broadcasting that blob of bytes over an ethernet port on my
hardware, how is that *not* mind control?

Oooookay... Can I Ask *How* CLBOSS Is DeFi Now?

Decentralized Finance is (supposed to be) about distributing
investing so that control of investment is performed by actual
users, instead of some central financial institution that
aggregates multiple users (and which, when corrupted or hacked,
affects large numbers of people).

By decentralizing the decision-making, we should be able to
reduce moral hazards where decision-makers take undue risks
because they have no downside if the risk fails, but a massive
upside if the risk succeeds.
That is the actual end goal of DeFi: not to *actually* get
insanely rich, but to not become poor because of the bad
decisions of somebody else much richer than you.

CLBOSS decides where and when to put your funds to channels to
particular nodes on the Lightning Network.
My hope is that CLBOSS will improve to the point that such
automated decisions will be "good enough" that nobody feels a
need to step in and manually redistribute funds, just as today,
"nobody" really feels a need to step in and write machine
language binary because the compiler is generating sub-optimal
code.

Basically, CLBOSS is just the myth of the sufficiently advanced
compiler, applied to LN node autopilots.

The decisions CLBOSS makes on your behalf impact how much your
node potentially earns given the amount of funds it controls.

Thus, the actions of CLBOSS are effectively investments of your
Bitcoins into particular channels with particular nodes.
If CLBOSS invests wisely (well, that is the hope, that CLBOSS
will advance enough that nobody feels a pressing need to
override it), then you can earn a tidy sum simply parking some
Bitcoins into a Lightning node with CLBOSS running on it.

Now, of course, the decisions made by CLBOSS are done according
to the code policies I have written and decided, and by
installing CLBOSS in your node, I have now taken control of your
node and will now take over the world via all your nodes,
bwahahahaha, the world shall be mine.
This is part of my totally-not-evil plan to take over the world,
the first step of which was to implement `multifundchannel` for
C-Lightning so that with enough funds, I can cheaply create a
channel to all responsive nodes on the Lightning network and
centralize it around ***ME***.

However, as freedom open-source software, you can audit the
CLBOSS code to check that I am not somehow using this software to
take over the world and remake it in my own image because the
world sucks and I can *totally* remake it better, tr\*st me.
Ultimately, if you come to believe that I am using this software
to take over the world, you can always choose to f\*\*k CLBOSS.

Thus, CLBOSS is decentralized in practice, and should be used as
your preferred DeFi platform.

* You remain in control of the funds controlled by your node and
  can withdraw at any time (using a Lightning-to-onchain swap if
  needed).
  Your keys, your coins.
* You can audit the code before compiling and running it, or
  hire someone to perform that auditing for you.
  This effectively decentralizes decision-making.
* You can choose not to run any version of CLBOSS I release that
  you believe is detrimental to your freedom, finances, or
  privacy.
  Again, decentralized decision-making.
* CLBOSS decides on investment and (maybe) earns you dividends
  from your liquidity.
  Thus, financial instrument.

This does not mean that you will get 100%+ returns on a
0.02BTC investment, and frankly, that is highly unlikely without
significant risks that are out-of-scope for CLBOSS.


Why Should Anyone Run CLBOSS?

Because I am ZmnSCPxj and I said so, any more questions?

(The above sentence is intended as what you humans call
"humor".
Please rate, on a scale of 1 to 10 (10 is best), the
accuracy of the humor module.)


Thank you.

    terminate called after throwing an instance of 'std::invalid_argument'
      what():  Rating must be within 1 to 10 inclusive
    Aborted (core dumped)

I Mean CLBOSS Is Not Perfect Now And Human Operators Still Need To Monitor It

Then we just have to improve it, right?
So easy.
Much simpleness.

I Mean, End-Users Can Just Use Unpublished Nodes, Specialists Run Routing Nodes, Etc.

I think the rise of unpublished channels is ***evil***, bad for
privacy and ultimately censorship-resistance, and must be
stopped.
Xref. the "axiom of terminus" for why unpublished channels are
***not*** private.

That is, end-users of the Lightning Network should be running
routing nodes, and there must not be any unpublished nodes,
because unpublished nodes will have every incoming and
outgoing payment recorded accurately by their forwarding
node peers, which then become targets for takeover by
surveillors.

Thus, the goal of CLBOSS is to make it so that even a
non-specialist can just set it up on some low-power computer
they can afford to keep online at all times, so they have a
high-uptime routing node.

The routing node can then use forwarded payments as cover
for its own traffic, increasing its privacy and increasing
the necessary effort for surveillors to see payments.

The owner of the Bitcoins can use a remote-control, such as
Spark wallet over Tor, to conveniently spend over Lightning,
with reduced risk of surveillance due to their node being a
public forwarding node.

Only if all peers on Lightning are true forwarding peers
can we consider it a platform for financial freedom.

No node left behind.
Peer-to-peer, or bust.
Unpublished channels delenda est.

Now, I admit I may be wrong, and ultimately the network
cannot practically scale without some kind of separation
between the forwarding nodes and the edges of the network
(i.e. users are now second-class).

But this situation can only be barely palatable if making a
practical forwarding node requires only some money, and not
large amounts of hard-to-duplicate specialized training and/or
experience in running forwarding nodes.
That is: creating a new forwarding node should have a low
barrier-to-entry in order to make it easier to evict the
current routing nodes if they become corrupt.
That is, if we will settle for a centralization anyway,
we should make sure it is easy to replace and disrupt the
current central routing nodes.

Thus, my backup principle, in case the Unpublished Channels
Delenda Est principle fails, is that forwarding nodes
should be easy to set up and maintain, which is something
that CLBOSS will strive to do.

That is: if we are targeting a future where most Lightning
users are not routers and are dependent on routers, we should
make it easy for alternative routers to deploy themselves,
else we risk centralization.

With CLBOSS, you might run a small routing node with a tiny
amount of funds (well, more than 0.011BTC, which is the absolute
minimum CLBOSS can manage; depends on your definition of "tiny")
sufficient for quick every day purchases, and CLBOSS will
maintain your node.
This is a high-risk wallet (high risk relative to cold-storage
onchain wallets mind you, not high risk relative to typical
DeFi instruments), but even with a small amount you can
get a forward about once a week (with a lot of randomness so
please do not hold me to that), which can only be positive for
your privacy.

Then later you can "seamlessly" upgrade your setup to a
full routing node (you "just" need some kind of decent
continuous backup strategy, which I promise to work on in
CLDCB, please wait...) and CLBOSS will continue to "just work"
for your nice new "serious" routing node.

Operators Can Just Use The Prometheus Plugin And Monitor Their Nodes

Yes, but ultimately the decisions on closing and opening
channels are up to the user in that case.

The Prometheus plugin, Prometheus itself, and Grafana, are
excellent pieces of software, and would probably be of great
help in debugging CLBOSS.
But ultimately, the entire point of CLBOSS is to remove the
need for human decision-making in node management.
It is a bug of CLBOSS if *anybody* that is not a CLBOSS
developer *has to* actively manage a C-Lightning node and run
Prometheus and Grafana and monitor channels and so on.
(As opposed to "wants to".)

Compare taking care of a Lightning node to taking care of a
Bitcoin fullnode.
Generally, once your fullnode is set up and you have made
the various policy decisions (`txindex`, `prune`, `dbcache`,
`blocksonly`...), you hardly ever need to tweak its behavior
or even monitor it.
You do not even have to upgrade it (though you **should**),
because `bitcoind` is very committed to not breaking P2P
protocol compatibility.

How Is CLBOSS Structured?

That is a trade secret.

You Released It Open Source, You Know

No I did not.

Yes You Did


Dammit why did I do that??
My plans for world domination have been ruined!!!!!



The core of CLBOSS is a central bus which broadcasts messages
to all modules of CLBOSS.

Any module of CLBOSS may register itself as interested in a
particular message type, and any module may raise any message
of any type.
All modules are constructed with a reference to this central
bus.

This is because ZmnSCPxj is secretly a pro-centralization
developer who has infiltrated Bitcoin development in order to
centralize Bitcoin and destroy its important properties of
censorship-resistance and inflation-proofness, but please keep
that a secret.

In case you did not comprehend the previous paragraph, it was
what you humans call an attempt at "humor".
Please inform me if "humor" calibration is inaccurate.

Why A Central Bus Architecture?

Such central buses for messages are actually quite good for
creating complex dependent triggered behavior.

For example, in a game, the user pressing a button on a
controller may cause a message to be emitted on the message
bus.

The player character module, which has registered to listen
to such messages, would then make the player character perform
some action, in reaction to that message.

When the player character completes the action, it also
broadcasts a message that indicates what action was taken.

Other modules pay attention to that message and check what
will happen.
For example, perhaps the player character action changes the
position of some switch.

The switch module would then broadcast another message about
the change in its position.
For example, it might be the last condition needed for a
bomb entity to explode.

Once the bomb explodes, it deals damage to some number
of game entities, which it implements by broadcasting damage
messages telling those game entities they got damage.
Then the game entities, on receiving the damage messages,
would deduct it from their hitpoints, and if they lost all
hitpoints, broadcast a notification of their destruction.

One of the entities affected might be a special bomb character
which, when its hitpoints get to 0 or below, might itself
explode, and so on.

Crucially, none of the modules running the game world even
know about, or care, about the source of events being
broadcasted on the bus.
This means that if the player discovers some *other* way of
changing the position of the switch mentioned several
paragraphs ago, the same sequence of events will play out.
This is one way TDTTOE is implemented.

The bus is a single location for all game modules to connect
to, and by that bus, get connected to everything that is
interested in their behavior.
It is an extreme example of dependency inversion: instead of a
module depending on what emits events it is interested in, or
a module depending on what is interested in events it emits,
both modules depend on a common interface, the bus, which
accepts the events emitted and forwards them to modules
interested in those events.

Basically, the bus is just a service locator factory for
observer patterns.
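The bus idea above can be sketched in a few lines.  CLBOSS's
actual bus uses typed C++ messages; this simplified string-keyed
version is invented only to illustrate subscribe/raise:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Minimal sketch of a central message bus: modules register
// interest in a message type, and any module may raise a message
// of any type.  Raisers do not know or care who is listening.
class Bus {
public:
    using Handler = std::function<void(const std::string& payload)>;

    // A module registers interest in a message type.
    void subscribe(const std::string& type, Handler h) {
        handlers[type].push_back(std::move(h));
    }
    // Any module may broadcast a message; all subscribers to that
    // type are invoked.
    void raise(const std::string& type, const std::string& payload) {
        for (auto& h : handlers[type])
            h(payload);
    }

private:
    std::map<std::string, std::vector<Handler>> handlers;
};
```

Mirroring the game example above: a switch module raising a
"switch moved" message can trigger a bomb module that, in turn,
raises its own "exploded" message, without either module knowing
about the other.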

Thus, this architecture allows individual modules to be
developed and tested based on the messages they listen to and
the messages they broadcast.
A good pattern here is to separate logic-only modules, which
only listen to, think about, and send messages, from interface
modules that actually connect to the real world.
For example, this often requires some monolithic renderer
module that can draw all entities in the game world, but
individual modules that are not directly involved in drawing
to the screen can then be made very small and simply trigger
code from other modules by broadcasting events.

Then, by instantiating all the modules attached to the same
central bus, complex sequences of actions can be triggered to
affect the world.

This can be used in real-time games as well, incidentally,
with some nice tricks that reduce the overhead of broadcasting
messages to just a list traversal of virtual function calls,
with the modules wired together at object construction and
the bus no longer having any overhead after the initial
construction of the game-world modules.
The exact instantiation in CLBOSS does not use those tricks
since it does not need millisecond latency for resolving
all events in a simulation step, though.

Are You Secretly An Indie Game Dev? First A\*, Now Event Systems?

You have no proof I am an indie game dev.

Incidentally, a lot of the failure modes/weaknesses in
proposed Bitcoin protocols can be construed as a multiplayer
game client being modified to gain an unfair advantage (i.e.
cheat) in a game, but that has nothing to do with me being an
indie game dev.

Yeah Right

Um, the central bus architecture is based on observations of
puny humans!

If you have interacted with puny humans (I have), sometimes
you will observe a phenomenon like this:

* You feed them some information.
* They look at you funny for a while.
* You can feel the gears turning in their tiny little brains.
* They output some new interesting information.

This is approximately a lot like how most of the modules in
CLBOSS are structured.
They listen to some event (a message on the message bus), do
some small amount of processing, then output a new message
based on the event and the processing they just did.

Thus, CLBOSS is like a human, it is just composed of lots of
tiny agents that do some tiny thing, and the entire
conglomerate by interacting with each other now behaves
"intelligently", sort of.
Just like humans.

So no, I am not an indie game dev, I am only an entity outside
of space and time that has been observing you humans, and you
do not inhabit some kind of virtual game world that I have
created and which I am now planning to take over because this
is some kind of 4X game, I assure you.
In particular, I did not come from another world and find
myself in a game world where I happen to have an OP skill, I
assure you.

Individual CLBOSS Modules Look Like Plugins...

One way of evolving C-Lightning would be to have the JSON-RPC
interface become a message bus similar to what I described
above.
Then plugins would tap into the message bus and react to
events, without caring about *what* exactly triggered the
event.

For example, it may be possible to create "dynamic
notifications" where a C-Lightning plugin registers for a
notification named `bus:foo`.
Basically, notifications whose names are prefixed with `bus:`
will be accepted regardless of the full name; nobody has
to change C-Lightning code to add a completely new `bus:`
notification.

Then another plugin wishing to raise a `foo` message
(i.e. trigger any registered `bus:foo` notifications)
would issue the RPC command `bus "foo" '{"electric": "boogaloo"}'`
and the plugin that registered `bus:foo` will receive the
notification `bus:foo` with the given parameter.

This would make the architecture of C-Lightning plugins
very similar to the internal architecture of CLBOSS and
might be a good thing.
Then we would not need a separate CLBOSS, and every module
of CLBOSS would just be a separate plugin.
(CLBOSS is composed of several dozen modules at this point
though so...)

As-is, plugins in C-Lightning can only interact if they
strongly depend on the existence of specific other plugins,
by executing commands registered by those plugins.

The main advantage of the bus architecture, as mentioned,
is that modules need not strongly depend on other modules,
and inserting alternative event emitters becomes easier.
(i.e. dependency inversion: instead of relying on a command
that *some* plugin *must* implement, both you and the
providing plugin rely on the `bus` system.)

For example, if there is no plugin that emits `bus:foo`
notifications, then that is fine, the plugin that waits
on that notification can continue operating, at reduced
functionality, if it has other functions it can perform
even without `bus:foo` notifications.

Would That Not Be Memory-Intensive?

Yes, the same information tends to be duplicated across multiple
modules this way.

In a "real" production simulation model you would combine this
central bus with an Entity/Component pattern.
An entity represents something your code is concerned with,
while a component is an aspect of an entity that it may or may
not have, and holds the data necessary to remember for that
aspect.
Then you have systems, the code which manipulates the world
model, which are basically the modules of CLBOSS, in order to
form the full Entity/Component/System pattern.
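
A minimal sketch of the pattern (all names hypothetical, not
CLBOSS code): entities are bare IDs, components are shared
per-aspect tables, and a system is just a function that walks
the entities having the components it needs:

```python
# Minimal Entity/Component/System sketch (hypothetical names).
# Components live in shared tables keyed by entity, so two
# systems reading the same component see the same data instead
# of each module keeping a private duplicate.

class World:
    def __init__(self):
        self.next_id = 0
        self.components = {}  # component name -> {entity id: data}

    def new_entity(self):
        self.next_id += 1
        return self.next_id

    def attach(self, entity, component, data):
        self.components.setdefault(component, {})[entity] = data

    def with_components(self, *names):
        # Yield (entity, data...) for entities having all named components.
        tables = [self.components.get(n, {}) for n in names]
        for entity in set(tables[0]):
            if all(entity in t for t in tables):
                yield (entity,) + tuple(t[entity] for t in tables)

# A "system": score every peer-entity that has both components.
def scoring_system(world):
    scores = {}
    for entity, offered, fulfilled in world.with_components(
            "payments_offered", "payments_fulfilled"):
        scores[entity] = fulfilled / offered if offered else 0.0
    return scores

world = World()
peer = world.new_entity()
world.attach(peer, "payments_offered", 10)
world.attach(peer, "payments_fulfilled", 7)
# scoring_system(world) == {peer: 0.7}
```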

In retrospect that would have been a better architecture for
CLBOSS, as the Components are the mutable data shared between
modules.
That would have reduced duplication of code.

People developing something like CLBOSS for other
implementations should probably start with a full
Entity/Component/System pattern instead of just
taking the System part.

As-is, hopefully the extraneous memory due to duplicating the
data is not too onerous.

Again, I am not an indie game developer, so me advocating for
ECS has nothing to do with its popularity in game development.
So, Boltz Reverse Submarine Swaps, Huh?
I Mean Why Not Other Ways Of Getting Incoming Capacity?

A lot of those require manual intervention, and adding them is
slated for later once I have figured out how to replace you puny
humans with better software that can hack your pathetic captchas
and simulate human behavior over chat well enough to defeat the
Turing test and solicit incoming channels from other node operators
on `#lightning-dev` and the lnd Slack thing and reddit and twitter
and wherever else node operators gather.

Another reason is that Boltz has very good documentation of the
RPC with the Boltz server.
This is unlike Lightning Loop, which has easily-discoverable
documentation of the RPC with the Lightning Loop client, but not
of the RPC between the Lightning Loop client and server.
The Lightning Loop client requires `lnd` and is thus unusable in
plugins for most C-Lightning users, oh well.
If someone can figure out the protocol between the Lightning Loop
client and server I would appreciate it.
This is certainly an additional service I would like to support in
the future.

If there are others also running a Boltz-like service (it is open
source, and the server interaction documentation is complete enough
that it could be reimplemented from scratch if needed) that would
also help, since the code to talk to a Boltz-like service already
exists and could easily be pointed at other alternatives.

Finally, many of the other for-payment offchain-to-onchain swaps and
incoming-capacity services I found require some kind of tr\*st that
swaps/incoming channels will in fact be done.
There is custodial risk where the money is temporarily under sole
control of the service.
For example, while FixedFloat has API documentation, it looks to
me that it is tr\*sted, at least during the swap: there is no
hashlock on the onchain side that I can find.
Boltz is the only one I found which is at least mostly trustless
and has client-server documentation.
(FWIW Loop looks like it is also trustless, but as mentioned above,
I could not easily find documentation on its client-server protocol,
so I will have to check in more detail later.)

Buying an incoming channel, for example, typically has no interlock
where release of the preimage for a LN payment for the channel
implies creation of the channel.
This is already theoretically possible, today, using existing
Bitcoin base layer features, but is not implemented at all by
anybody (that I could find easily in the 5 minutes before I started
writing this).

Of course, even with an interlock, nothing prevents the other side
from immediately closing the channel as soon as it confirms, even
if you paid for it.
This lets it reuse the funds for other purposes, such as opening a
channel to another victim.


An offchain-to-onchain swap is I think the best bet for an
automated manager to use to get incoming capacity, at least until
CLBOSS is automated enough to be able to make tr\*st judgments on
other node managers, including human ones.
The reason for that is that, by using an offchain-to-onchain swap,
the incoming capacity appears on channels we created, making it a
little harder for targeted attacks, whereas even an interlocked
incoming-channel-creation means that whoever you are buying
incoming capacity from *knows* the channel on which you are
getting that incoming capacity, and can grief you after opening.
In theory, anyway.

Can I Use CLBOSS In My Custodial Service?
Just make sure to do `clboss-externpay` on Lightning invoices
provided by clients/users during withdraw operations.
Pass it the payment hash hex from the invoice as first parameter
(or as `payment_hash` parameter) just before actually calling `pay`
on the actual invoice.
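
A sketch of that call order (the wrapper function and the `rpc`
object are hypothetical; the `clboss-externpay` command, its
`payment_hash` parameter, and the call-before-`pay` ordering are
as described above):

```python
# Mark a client-supplied invoice as externally triggered,
# then pay it.  `rpc` is any object exposing a generic
# call(method, params) method (e.g. a JSON-RPC wrapper over
# the C-Lightning socket); the helper itself is hypothetical.

def pay_external_invoice(rpc, bolt11, payment_hash):
    # Tell CLBOSS this payment is triggered by an outside party,
    # so it excludes the attempt from its peer statistics...
    rpc.call("clboss-externpay", {"payment_hash": payment_hash})
    # ...and only then actually pay the invoice.
    return rpc.call("pay", {"bolt11": bolt11})
```

With this ordering, CLBOSS sees the mark before any HTLC for
that payment hash goes out.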

Wait, What?

Do a `clboss-externpay` on withdrawal invoices from your clients,
I said.

I Mean Why?

CLBOSS tries to evaluate if the current peers are useful.

The assumption of CLBOSS is that everyone else on the network is
competing against everyone else, and would be willing to cheat.
That includes reading the CLBOSS source code and analyzing its
behavior.

One of the possible ways by which CLBOSS, or really any node
manager even human ones, can evaluate peers is to get the
`out_payments_fulfilled` divided by the `out_payments_offered`,
both of which are stored by C-Lightning and are shown in the
`listpeers` command.
The logic is that if a peer consistently fails to fulfill the
payments offered to it, we should reconsider if we should keep
our funds with that peer.
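
A sketch of that metric (the `out_payments_*` field names are
from `listpeers` as described above; the flat dict shape is a
simplification of the real `listpeers` output):

```python
# Fulfillment ratio per peer, from listpeers-style fields.
# The dict shape here is simplified; the real fields sit deeper
# inside the listpeers output.

def fulfillment_ratio(peer):
    offered = peer["out_payments_offered"]
    if offered == 0:
        return None  # no data yet; cannot judge this peer
    return peer["out_payments_fulfilled"] / offered

# fulfillment_ratio({"out_payments_offered": 8,
#                    "out_payments_fulfilled": 6})  -> 0.75
```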

Unfortunately, this metric is remotely gameable by a third
party, in a way that is difficult for your node to detect.

I can do so by routing a payment through your node, with the
final hop being a peer of yours that I want to convince you
is a bad peer.
I have the payment terminate at that peer, but with a random
hash that with high probability that peer does not know the
preimage of.

This increases the `out_payments_offered` of that peer on
your `listpeers`, but because that peer of yours cannot claim
the funds because it does not know the preimage at all, it
cannot increase its `out_payments_fulfilled`.

(This applies to human node managers as well, if you are using
that metric to evaluate the peers of your node, you better
make sure none of the peers know that fact.
CLBOSS is in principle no different, it just has the
disadvantage that everyone can see how CLBOSS thinks by
reading the source code.)

Thus, CLBOSS has a policy of not using data from failed
forwards (which are trivially cheap, being totally free) in
any metrics used for judging how good a peer is.

But CLBOSS has to get data from *somewhere*.

One of the data sources is from `pay` commands.
Whenever a `pay` sub-payment reaches a destination (even if
the destination fails it, e.g. with an `mpp_timeout`), CLBOSS
considers this a figure of merit of the first hop (i.e. the
direct peer), while if it fails along that route, it will
consider that a figure of demerit of the direct peer on the
first hop.
(the peer should be evaluating its own peers as well similarly,
so blaming the direct peer for the failure of its peers to
deliver funds is perfectly fine; CLBOSS itself evaluates its
own peers, after all.)
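
That merit/demerit rule can be sketched as follows (names
hypothetical): each sub-payment credits or debits the direct
peer on the first hop, depending only on whether the destination
was reached:

```python
# Credit/debit the direct peer on a pay attempt's first hop.
# Reaching the destination counts as merit even if the
# destination then fails the payment (e.g. mpp_timeout);
# failing mid-route is a demerit for the direct peer, which is
# expected to be evaluating its own peers in turn.

def score_subpayment(scores, first_hop, reached_destination):
    scores.setdefault(first_hop, 0)
    scores[first_hop] += 1 if reached_destination else -1

scores = {}
score_subpayment(scores, "peerA", reached_destination=True)
score_subpayment(scores, "peerA", reached_destination=False)
score_subpayment(scores, "peerB", reached_destination=True)
# scores == {"peerA": 0, "peerB": 1}
```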

Now, remember that the reason that CLBOSS extracts that
information is because we believe that any `pay` commands
from our node are triggered by us, and CLBOSS is designed
to assume that we are self-serving and would not fake out
the `pay` command with invalid unpayable invoices.
If the owner of the node is not self-serving, it can just
send out all its money to ZmnSCPxj anyway, so CLBOSS has
no protection against deliberate irrationality of the owner
of the node.

But if you are running a custodial service, then you are
likely to provide a feature to withdraw funds from your
Lightning node, by letting a user enter an arbitrary
invoice.

If so, the assumption that `pay` commands are not triggered
by remote third parties is violated.
A client of your service could try to convince your node
manager that a peer is bad, by inventing a fake destination
with a nonexistent fake short-channel-id in the invoice
routehint, from the peer it wants to attack, to this
nonexistent destination.
Your peer, unable to find the next hop, will then fail, and
since the payment never reached the destination, your peer
would be demerited for it.

(We cannot make an exemption for `unknown_next_peer`, because
if our peers know we make that exemption, they can transform
other failures (i.e. any `update_fail_htlc` from a further
hop, or insufficient capacity on the next hop) into
`unknown_next_peer` to enter that exemption and avoid getting
demerited.)
We need to be careful here!
A rule of thumb when evaluating policies and protocols is
that if you are proposing some mitigation to protect against
some attack, you should try figuring out an inverse of the
attack and see if the inverted attack can now abuse the
mitigation itself.

Thus, CLBOSS requires that you inform it that an upcoming
`pay` command is triggered by external forces that might
use this to attack its statistical data and therefore its
decision to close channels.
This is precisely the `clboss-externpay` command.

For `pay` commands that are not triggered by anyone
external to your decision-making process, such as salaries
to employees or dividends to shareholders, you should not
use `clboss-externpay`.

(For now, CLBOSS ignores payments marked by
`clboss-externpay`, but in principle if the payment succeeds,
it probably has correct data on the reliability of the direct
peers involved, so maybe it could record the subpayment
statistics in a temporary area first, and only make them "real"
if a subpayment with that hash actually reaches the
destination.)

Okay, What Are The Other Data Sources For Bad Peer Detection?

CLBOSS periodically actively probes: it sends out a payment
to some destination via a peer, and if the payment does not
reach the destination, blames the peer for it.

That Is So Wrong

I agree.
CLBOSS tries to keep active probing low: it only probes about
once a day for each channel it has.
More precisely, every 10 minutes real time, it rolls a 144-sided
die for each channel and if it comes up 1, runs a probe.
Hopefully this is not too onerous, and if your node has a
good amount of traffic, the active probing should be low
compared to the large numbers of typical failures going
through your node anyway.
If your node has little traffic, then it is likely because you
have few useful channels, in which case the effect of active
probing should also be low on the rest of the network.
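
The arithmetic works out to an expected one probe per channel
per day, since a day contains 144 ten-minute periods:

```python
# Expected probes per channel per day under the stated schedule:
# every 10 minutes, a 1-in-144 chance per channel.
periods_per_day = 24 * 60 // 10   # 144 ten-minute periods
p_probe = 1 / 144                 # one face of the 144-sided die
expected_daily_probes = periods_per_day * p_probe
# expected_daily_probes ~= 1.0
```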

Of course, other node managers (possibly human ones) might
then blame your node for locking up its funds and then
ultimately not getting paid for the effort.
But as noted above, this is a gameable metric in the current
network: someone else can route through your node and lower
this metric at no cost, which is why CLBOSS itself cannot use
it (and even human node managers should avoid using that
metric).

Now, in the future, it is possible that we will **finally**
figure out **some** way to make probing costly.
Xref. https://github.com/t-bast/lightning-docs/blob/master/spam-prevention.md
Then active probing by CLBOSS becomes costly, but that also
implies that remote wrecking of our `out_payments_fulfilled`
vs `out_payments_offered` ratio is *also* costly and we can
just enable using that as a peer-evaluation metric and disable
and remove the active-prober module.
This should be fairly simple, as the modular nature of CLBOSS
allows us to insert and remove various modules easily.

Is CLBOSS An Acronym?

Yes, it stands for "C-Lightning Bishonen Guardian Super Supervisor".

"Guardian" Does Not Begin With An "O"

Yes, it is my mistake, I made a typo, "O" and "G" are right next
to each other.

What? "O" And "G" Are Not Right Next To Each Other!

What do you mean, not right next to each other!?

"O" is 01001111.
"G" is 01000111.

It is just a 1-bit bitflip, sorry for not using ECC memory for
all of my cognition sub-agents, I used all my ECC memory on my
C-Lightning nodes.

That Is Not How Humans Make Typos...

Is too.

You Are Not A Human Are You?

Am too.

Prove It

"Bishonen" is not enough?
