[RFC v1 1/4] ipmi_bmc: framework for BT IPMI on BMCs

Brendan Higgins brendanhiggins at google.com
Wed Sep 6 19:56:51 AEST 2017


On Thu, Aug 24, 2017 at 6:01 AM, Corey Minyard <minyard at acm.org> wrote:
> On 08/23/2017 01:03 AM, Brendan Higgins wrote:
>
> <snip>
>>
>> ...
>>>>
>>>> As far as this being a complete design; I do not consider what I have
>>>> presented as being complete. I mentioned some things above that I would
>>>> like
>>>> to add and some people have already chimed in asking for some changes.
>>>> I just wanted to get some feedback before I went *too* far.
>>>
>>>
>>> I'm not sure this is well know, but the OpenIPMI library has a fully
>>> functional BMC
>>> that I wrote, originally for testing the library, but it has been
>>> deployed
>>> in a system
>>> and people use it with QEMU.  So I have some experience here.
>>
>> Is this the openipmi/lanserv?
>
>
> Yes.  That name is terrible, but it started out as a server that provided a
> LAN
> interface to an existing BMC over the local interface, primarily for my
> testing.
>
>
>>
>>> The biggest frustration* writing that BMC was that IPMI does not lend
>>> itself
>>> to a
>>> nice modular design.  Some parts are fairly modular (sensors, SDR, FRU
>>> data)
>>> but some are not so clean (channel management, firmware firewall, user
>>> management).  I would try to design things in nice independent pieces,
>>> and
>>> end up having to put ugly hooks in places.
>>
>> Yeah, I had started to notice this when I started working on our userland
>> IPMI
>> stack.
>>
>>> Firmware firewall, for instance, makes sense to implement in a single
>>> place
>>> that
>>> handles all incoming messages.  However, it also deals with subcommands
>>> (like making firewalls for each individual LAN parameter), so you either
>>> have
>>> to put knowledge of the individual command structure in the main
>>> firewall,
>>> or you have to embed pieces of the firewall in each command that has
>>> subcommands.  But if you put it in the commands, then the commands
>>> have to have knowledge of the communication channels.
>>>
>>> I ended up running into this all over the place.  In the end I just
>>> hacked
>>> in
>>> what I needed because what I designed was monolithic.  It seemed that
>>> designing it in a way that was modular was so complex that it wasn't
>>> worth
>>> the effort.  I've seen a few BMC designs, none were modular.
>>>
>>> In the design you are working on here, firmware firewall will be a bit of
>>> a
>>> challenge.
>>>
>>> Also, if you implement a LAN interface, you have to deal with a shared
>>> channel and privilege levels.  You will either have to have a context per
>>> LAN connection, with user and privilege attached to the context, or you
>>> will need a way to have user and privilege information in each message
>>> so that the message router can handle rejecting messages it doesn't
>>> have privilege to do, and the responses coming back will go to the
>>> right connection.
>>>
>>> There is also a strange situation where commands can come from a LAN
>>> (or other) interface, be routed to a system interface where the host
>>> picks up the command, handles it, send the response back to the
>>> BMC which routes it back to the LAN interface.  People actually use this.
>>
>> That sounds like fun :-P
>>
>>> Another problem I see with this design is that most services don't care
>>> about the interface the message comes from.  They won't want to have
>>> to discover and make individual connections to each connection, they
>>> will just want to say "Hook me up to everything, I don't care."
>>>
>>> Some services will care, though.  Event queues and interface flag
>>> handling
>>> will only want that on individual interfaces.  For these types of
>>> services,
>>> it would be easier if they could discover and identify the interfaces.
>>> If
>>> interfaces are added dynamically (or each LAN connection appears as
>>> a context) it needs a way to know when interfaces come and go.
>>>
>>> If you end up implementing all this, you will have a fairly complex
>>> piece of software in your message routing.  If you look at the message
>>> handler in the host driver, it's fairly complex, but it makes the user's
>>> job simple, and it makes the interfaces job simple(r).  IMHO that's a
>>> fair trade-off.  If you have to have complexity, keep it in one place.
>>
>> I think that is a reasonable point. My initial goal was not to move the
>> routing that we do in user land to kernel space and only provide basic
>> facilities that are enough for my use case, but it sounds like there might
>> be some wisdom in handling message routing and message filtering to
>> kernel space. This might also make the framework more platform agnostic,
>> and less tightly coupled to OpenBMC.
>>
>> Nevertheless, that substantially broadens the scope of what I am trying
>> to do.
>>
>> I think a good place to start is still to create a common interface for
>> hardware interfaces (BT, KCS, SSIF, and their varying implementations)
>> to implement, as I have done, and while we are working on the rest of the
>> stack on top of it, we have the device file interface that can be used in
>> the
>> meantime.
>>
>> Let me know what you think.
>
>
> If you just need the ability to catch a few commands in the kernel, what you
> have is fairly complicated.  I think a simple notifier called from every
> driver
> would provide what you needed with just a few lines of code.

What do you mean by a simple notifier called from every driver? I think what
I have here is pretty simple. I think regardless of how we do routing, having
a common framework for low level IPMI hardware interfaces to implement
is pretty useful, even if it is only used to make a common dev file interface.

>
> As far as moving all the message routing to the kernel, my (fairly
> uneducated)
> opinion would be that it's a bad idea.  It's a lot of complexity to put in
> there.
> I can see some advantages to putting it there: it's simpler to interact with
> than
> a userspace daemon and it gives consistent access to kernel and userspace
> users.  But I don't know your whole design.

How about we focus on getting a common framework for the hardware
interfaces to implement? This way at least all of the hardware interfaces
are exposed the same way (similar to what you did on the host side). We
then at least have something to build on.

>
> -corey
>
>> Thanks!
>
>
>


More information about the openbmc mailing list