OpenBMC on OpenWRT
Patrick Williams
patrick at stwcx.xyz
Thu Aug 11 05:15:13 AEST 2016
On Wed, Aug 10, 2016 at 12:56:57PM +0300, Anton D. Kachalov wrote:
> Hello everybody.
Hi Anton. Thank you for connecting with us.
> I'm working on a separate project for a rack solution which utilizes many portions of OpenBMC:
>
> https://github.com/ya-mouse/openbmc-target
I would certainly be interested to understand the use cases you are
solving. Our intention is that this OpenBMC code base can be used not
only for typical servers but also for storage / network controllers and
blade chassis. Right now we are focused on a standard server, but we
are trying not to design ourselves into a dead end.
> As a system base we chose OpenWRT (flexible kernel-style configurations, package manager,
> wide and mature support for embedded systems with limited resources).
Is there a technical advantage you see in using OpenWRT for your
project, or is it mainly familiarity? A few of us have worked on
another project that used Buildroot, which is very similar to OpenWRT,
but we settled on Yocto for this project. Yocto certainly has a steeper
learning curve, but it has much better support for "overlays" than
Buildroot/OpenWRT; that solves a major problem we had collaborating
with others on the Buildroot-based project.
> 4. LuaJIT + Nginx as an HTTP API backend instead of python/micropython.
> We're in internal discussion of the API as a mainstream replacement for IPMB/IPMI.
> We use a local memcached for serving data quickly.
Our current API is REST-based, but it is a simple introspection of the
dbus objects presented by our userspace applications. All interprocess
communication is done through well-defined dbus interfaces, and we
simply expose those to the user via REST.
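For illustration only (the host name, the /login endpoint, the
/org/openbmc path, and the JSON payload shape below are assumptions
about our REST server, not something confirmed in this thread), reading
one of those dbus objects from Python looks roughly like:

    # Minimal sketch of talking to the dbus-introspection REST server.
    # Assumes HTTPS on the BMC, a /login endpoint, and dbus objects
    # mirrored under /org/openbmc -- adjust for your build.
    import requests

    session = requests.Session()
    session.verify = False  # BMCs typically ship self-signed certs

    # Authenticate once; the server tracks the session via a cookie.
    session.post("https://bmc.example.com/login",
                 json={"data": ["root", "<password>"]})

    # Enumerate a dbus subtree and print the returned properties.
    resp = session.get("https://bmc.example.com/org/openbmc/sensors/enumerate")
    print(resp.json()["data"])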
We also intended for this to be a replacement for IPMI, but there seems
to be a lot of resistance to that due to Redfish being very similar in
goals.
We are not necessarily stuck on the python implementation, but we would
need to understand the CPU vs. flash space trade-off. Most server BMCs
have very limited flash space (32MB of NOR flash), so that is our
largest constraint.
Your use of memcached does spark an interesting optimization in my mind
for our dbus-based REST server, though. We could do something similar:
cache the state of internal dbus objects and flush the cache based on
the object-changed dbus signals.
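To sketch that idea (purely illustrative, using dbus-python rather than
whatever we would actually ship, and assuming the standard
org.freedesktop.DBus.Properties.PropertiesChanged signal):

    # Keep a per-object-path cache of dbus properties and flush an
    # entry whenever that object signals a property change.
    import dbus
    from dbus.mainloop.glib import DBusGMainLoop
    from gi.repository import GLib

    DBusGMainLoop(set_as_default=True)
    bus = dbus.SystemBus()

    cache = {}  # object path -> last-seen properties

    def properties_changed(interface, changed, invalidated, path=None):
        # Any change invalidates the cached copy; the REST server would
        # re-read and re-cache the object on the next request.
        cache.pop(path, None)

    bus.add_signal_receiver(properties_changed,
                            dbus_interface="org.freedesktop.DBus.Properties",
                            signal_name="PropertiesChanged",
                            path_keyword="path")

    GLib.MainLoop().run()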
> Just for reference.
> We've compared the performance of several dynamic HTTP configurations, such as
> python/micropython/lua (LuCI) and embedded lua within nginx, and discovered that the
> last one achieves nearly the RPS of static page serving.
> Static serving is about 350 RPS, while dynamic nginx+luajit with auth checking is at 220 RPS.
> A simple Python HTTP server does about 40 RPS. The LuCI variant is 4-6 RPS.
> The first request to the authentication service occurs via https and the result is stored in the
> local nginx shared memory storage for further use (configured for one minute).
That sounds significantly more performant. Do you happen to know the
disk space requirements of this config?
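As a rough Python rendering of that caching scheme, just to confirm I
follow it (the real thing lives in nginx shared memory, and the URL,
header, and field names below are invented for illustration):

    # Validate credentials over HTTPS once, then answer from an
    # in-memory cache for about a minute, as described above.
    import time
    import requests

    AUTH_URL = "https://auth.example.com/check"  # hypothetical endpoint
    TTL = 60  # seconds -- the one-minute window mentioned above

    _cache = {}  # token -> (is_valid, expiry)

    def is_authorized(token):
        entry = _cache.get(token)
        if entry and entry[1] > time.time():
            return entry[0]                       # served from cache
        resp = requests.get(AUTH_URL, timeout=5,
                            headers={"Authorization": token})
        ok = resp.status_code == 200
        _cache[token] = (ok, time.time() + TTL)   # store for further use
        return ok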
--
Patrick Williams