OpenBMC on the OpenWRT

Joel Stanley joel at jms.id.au
Wed Aug 10 21:06:00 AEST 2016


Hey Anton,

On Wed, Aug 10, 2016 at 7:26 PM, Anton D. Kachalov <mouse at yandex-team.ru> wrote:
> Hello everybody.
>
> I'm working on a separate project for a rack solution which utilizes many portions of OpenBMC:
>
>    https://github.com/ya-mouse/openbmc-target

Nice! I came across your repository the other day.

>
> As a system base we chose OpenWRT (flexible kernel-style configuration, a package manager,
> and wide, mature support for embedded systems with limited resources).

I'm not too familiar with OpenWRT aside from using it on my WiFi
router at home. I do have experience with buildroot, which uses
Kconfig and does a good job at creating small, configurable images.

Have you been happy with OpenWRT as a build system?

> We still use the old u-boot that came from the open-source release of the AMI SDK.
> We have plans to move to OpenBMC's version so we can start using DTS.

Great! Let me know how you go. I'm happy to apply patches in order to
support your configuration. Our goal is to upstream our patches to
remove the need to fork it altogether.

> And a few modifications over OpenBMC kernel tree:
>
> 1. I2C slave support for the aspeed adapter (it is a bit ugly).

Did you see the slave patch that Brendan submitted? Would it work for
your requirements?

  https://github.com/openbmc/linux/commit/090cb02e01906e7a7cf9c210237c4899972a9770
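
For anyone who hasn't looked at it, the in-kernel slave interface only asks the
backend to register a callback against the client; the bus driver supplies
->reg_slave()/->unreg_slave(). This is just a made-up sketch of the shape of a
backend (loosely modelled on drivers/i2c/i2c-slave-eeprom.c), not the ASPEED
driver itself -- the "demo" names and the 256-byte buffer are placeholders:

/* Hypothetical I2C slave backend sketch; not a real driver. */
#include <linux/i2c.h>
#include <linux/module.h>
#include <linux/slab.h>

struct demo_slave {
	u8 buf[256];
	unsigned int idx;
};

static int demo_slave_cb(struct i2c_client *client,
			 enum i2c_slave_event event, u8 *val)
{
	struct demo_slave *priv = i2c_get_clientdata(client);

	switch (event) {
	case I2C_SLAVE_WRITE_REQUESTED:
		priv->idx = 0;				/* master starts a write */
		break;
	case I2C_SLAVE_WRITE_RECEIVED:
		if (priv->idx < sizeof(priv->buf))
			priv->buf[priv->idx++] = *val;	/* byte from the master */
		break;
	case I2C_SLAVE_READ_REQUESTED:
	case I2C_SLAVE_READ_PROCESSED:
		*val = priv->buf[priv->idx % sizeof(priv->buf)];
		priv->idx++;				/* next byte handed to the master */
		break;
	case I2C_SLAVE_STOP:
		priv->idx = 0;
		break;
	}
	return 0;
}

static int demo_slave_probe(struct i2c_client *client,
			    const struct i2c_device_id *id)
{
	struct demo_slave *priv;

	priv = devm_kzalloc(&client->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;
	i2c_set_clientdata(client, priv);

	/* Fails unless the bus driver actually implements slave mode */
	return i2c_slave_register(client, demo_slave_cb);
}

static int demo_slave_remove(struct i2c_client *client)
{
	return i2c_slave_unregister(client);
}

static const struct i2c_device_id demo_slave_id[] = {
	{ "demo-slave", 0 },
	{ }
};
MODULE_DEVICE_TABLE(i2c, demo_slave_id);

static struct i2c_driver demo_slave_driver = {
	.driver		= { .name = "demo-slave" },
	.probe		= demo_slave_probe,
	.remove		= demo_slave_remove,
	.id_table	= demo_slave_id,
};
module_i2c_driver(demo_slave_driver);
MODULE_LICENSE("GPL");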

> 2. I2C slave support for MUX switch (pca954x).
> 3. DTS overlay support ported from Raspberry Pi with userspace dtoverlay/dtparam/dtmerge tools to manage configs.
> 4. LuaJIT + Nginx as an HTTP API backend instead of python/micropython.
>     We're in internal discussion about the API as a mainstream replacement for IPMB/IPMI.
>     We use a local memcached to serve data quickly.
> 5. Transparent master/slave I2C-based IPMB driver with an OpenIPMI interface
>     (works with regular ipmitool/freeipmi) to send requests to the nodes.
>     To work in daemon mode, the kernel part of ipmi_msghandler has to be modified a bit
>     to receive incoming requests and store them separately, with the user (the opener of /dev/ipmiX)
>     matched by the outgoing SeqId. For incoming packets that are unwanted (from the current
>     module's point of view) there is currently no way to store such messages properly.
>
> Just for reference.
> We've compared the performance of several dynamic HTTP configurations, such as
> python/micropython/lua (LuCI) and embedded Lua within nginx, and discovered that the last one comes
> close to the RPS of serving static pages.
> Static is about 350 RPS, while dynamic nginx+luajit with auth checking is at 220 RPS.
> A simple python HTTP server manages 40 RPS; the LuCI variant is 4-6 RPS.
> The first request to the authentication service goes via https and the result is stored in
> nginx's local shared memory storage for further use (configured to expire after one minute).

Thanks for sharing your results. How does nginx go with memory usage?

Cheers,

Joel

