OpenBMC on OpenWRT

Anton D. Kachalov mouse at yandex-team.ru
Thu Aug 11 22:14:37 AEST 2016


10.08.2016, 22:15, "Patrick Williams" <patrick at stwcx.xyz>:
> On Wed, Aug 10, 2016 at 12:56:57PM +0300, Anton D. Kachalov wrote:
>>  Hello everybody.
>
> Hi Anton. Thank you for connecting with us.
>
>>  I'm working on a separate project for a rack solution which utilizes many portions of OpenBMC:
>>
>>     https://github.com/ya-mouse/openbmc-target
>
> I would certainly be interested to understand the use-cases you are
> solving. Our intention is that this OpenBMC code base can be used not
> only for typical servers but also storage / network controllers or
> blade chassis. Right now we are focused on a standard server but we are
> trying not to design ourselves into a dead-end.

We're working on a rack control solution: several boards that talk to the nodes' BMCs,
collect sensor data, drive the fan wall and do rack power capping.
As a further step we want to replace the BMC itself, but that is a more difficult task:
there is BIOS integration and ME interaction. The KVM part is solvable: a virtual USB hub + USB gadgets
and a Spice server with a decoder for the Aspeed video frames.

>
>>  As a system base we chose OpenWRT (flexible kernel-style configurations, package manager,
>>  wide and experienced support for embedded systems with limited resources).
>
> Is there a technical advantage that you see by using OpenWRT for your
> project or just a familiarity? A few of us have worked on another
> project that used Buildroot which is very similar to OpenWRT, but
> settled on Yocto for this project. Yocto certainly has a steeper
> learning curve, but it has much better support for "overlays" than
> Buildroot/OpenWRT; that solves a major problem we had collaborating with
> others on the Buildroot-based project.

I found it more intuitive as a build system: very flexible, with a number of makefile helpers
to extend functionality. Also, there is an out-of-the-box web user interface (LuCI).
We mostly focused on target system size and keeping the number of system processes to a minimum.
The current installation takes just 5M (kmods, ipmitool, nginx, lua, memcached) + kernel (2.5M).
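
For example, adding a package is just a small Makefile on top of the build
system's helpers. A generic skeleton along the lines of the upstream
"helloworld" tutorial (not one of our actual packages) looks like this:

# Illustrative OpenWRT package skeleton; assumes a local src/hello.c
include $(TOPDIR)/rules.mk

PKG_NAME:=hello
PKG_VERSION:=1.0
PKG_RELEASE:=1

include $(INCLUDE_DIR)/package.mk

define Package/hello
  SECTION:=utils
  CATEGORY:=Utilities
  TITLE:=Hello world example
endef

# copy the local sources into the build dir instead of unpacking a tarball
define Build/Prepare
	mkdir -p $(PKG_BUILD_DIR)
	$(CP) ./src/* $(PKG_BUILD_DIR)/
endef

define Build/Compile
	$(TARGET_CC) $(TARGET_CFLAGS) -o $(PKG_BUILD_DIR)/hello $(PKG_BUILD_DIR)/hello.c
endef

define Package/hello/install
	$(INSTALL_DIR) $(1)/usr/bin
	$(INSTALL_BIN) $(PKG_BUILD_DIR)/hello $(1)/usr/bin/
endef

$(eval $(call BuildPackage,hello))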

What are the issues with overlays in OpenWRT compared to Yocto?

>>  4. LuaJIT + Nginx as an HTTP API backend instead of python/micropython.
>>      We're having an internal discussion about the API as a mainstream replacement for IPMB/IPMI.
>>      We use local memcached for fast data serving.
>
> Our current API is REST-based but it is a simple introspection of the
> dbus objects presented by our userspace applications. All interprocess
> communication is done through well-defined dbus interfaces and we simply
> expose those out to the user via REST.

We're going to expose ubus methods via HTTP. Registered modules (also written in Lua) expose
their methods on the ubus. Events such as node replacement also go through the ubus.
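
For illustration, a registered module boils down to something like this
(a sketch using the stock ubus/uloop Lua bindings; the "rack.sensors"
object name and the reply fields are made up):

-- register a ubus object with one "get" method from Lua
require "ubus"
require "uloop"

uloop.init()

local conn = ubus.connect()
if not conn then
    error("failed to connect to ubus")
end

conn:add({
    ["rack.sensors"] = {
        get = {
            function(req, msg)
                -- here we would look up msg.node in the cache and reply
                conn:reply(req, { node = msg.node, inlet_temp = 42 })
            end, { node = ubus.STRING }
        },
    }
})

uloop.run()

It can then be invoked as ubus call rack.sensors get '{"node":"node1"}',
or over HTTP once nginx forwards the request to ubus.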

> We also intended for this to be a replacement for IPMI, but there seems
> to be a lot of resistance to that due to Redfish being very similar in
> goals.

At first look, Redfish feels like WBEM/CIM but uses JSON instead of XML.
It is too heavyweight for making simple requests.
An API has to be kept as simple as possible.

> We are not necessarily stuck on the python implementation, but would
> need to understand the CPU vs flash space trade-off. Most server BMCs
> have very limited flash space (32MB of NOR flash) so that is our largest
> constraint.

Python wastes space dramatically (it doesn't fit in 16M).
LuaJIT is only 340k + a 340k lib, plus several OpenWRT binary modules (~100k).
We fit in 16M and have several megabytes free in the JFFS2 overlay partition.
Lua is a tradeoff between low-level languages such as C (it is very easy to write bindings via FFI)
and scripting languages, with the ability to modify-and-run during development on the target system.
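
For example, binding a libc function with the FFI takes only a few lines;
gethostname() here is just a toy example:

-- minimal LuaJIT FFI usage: declare the C prototype once and call
-- straight into libc, no compiled glue module needed
local ffi = require("ffi")

ffi.cdef[[
int gethostname(char *name, size_t len);
]]

local buf = ffi.new("char[?]", 256)
if ffi.C.gethostname(buf, 256) == 0 then
    print("running on " .. ffi.string(buf))
end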

> Your use of memcached does spark an interesting optimization in my mind
> for our dbus-based REST server though. We could do something similar to
> cache the state of internal dbus objects and flush the cache based on
> the object-changed dbus signals.

Yep. On node replacement we flush the records belonging to that node. All software that needs current
sensor data has to request it from memcached instead of polling the sensors directly.
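
The read path looks roughly like this (sketched here with
lua-resty-memcached inside an nginx content_by_lua block; the
"<node>:<sensor>" key layout is illustrative, not our actual schema):

-- serve a cached sensor value over HTTP instead of polling hardware
local memcached = require "resty.memcached"

local memc, err = memcached:new()
if not memc then
    ngx.log(ngx.ERR, "failed to create memcached object: ", err)
    return ngx.exit(ngx.HTTP_INTERNAL_SERVER_ERROR)
end
memc:set_timeout(1000)  -- 1s

local ok, err = memc:connect("127.0.0.1", 11211)
if not ok then
    ngx.log(ngx.ERR, "failed to connect to memcached: ", err)
    return ngx.exit(ngx.HTTP_BAD_GATEWAY)
end

-- ?node=... query argument selects the node
local node = ngx.var.arg_node or "unknown"
local value, flags, err = memc:get(node .. ":inlet_temp")
ngx.say(value or "no data")

-- on node replacement, the poller would drop that node's keys, e.g.:
-- memc:delete(node .. ":inlet_temp")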

> That sounds significantly more performant. Do you happen to know the
> disk space requirements of this config?

Nginx+LuaJIT takes 860k + the 340k shared LuaJIT lib, libpcre (240k), and libcrypto (1M) for SSL.
On a compressed FS (squashfs + lzma) it takes much less.

-- 
Anton D. Kachalov


