[RFC v1 1/4] ipmi_bmc: framework for BT IPMI on BMCs
Brendan Higgins
brendanhiggins at google.com
Wed Aug 23 16:03:25 AEST 2017
Sorry for the delayed response.
>>> This piece of code takes a communication interface, called a bus, and
>>> muxes/demuxes messages on that bus to various users, called devices. The
>>> name "devices" confused me for a bit, because I was thinking they were
>>> physical devices, what Linux would call a device. I don't have a good
>>> suggestion for another name, though.
>>
>> We could maybe do "*_interface" instead of "*_bus" and "*_handler" instead
>> of "*_device"; admittedly, it is not the best name ever: handler has some
>> connotations.
>>
>
> I think the "_bus" name is ok, that's what I2C uses, and "communication bus"
> makes sense, at least to me. _handler is probably better than _device, but
> not that much. The IPMI host driver uses _user, but that's not great,
> either. Maybe _service? Naming is such a pain.
Actually, I like _service. The way I am doing it here, it is not necessarily
broken up by command, so _service makes sense.
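To make the naming concrete, here is a rough user-space sketch of what I have
in mind with the _bus/_service split. All of the names and signatures below
are illustrative, not the actual API in the patch: a bus delivers raw IPMI
messages, registered services claim them by netfn, and the core muxes between
the two.

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical sketch: a "bus" (BT, KCS, ...) delivers raw IPMI messages,
 * and registered "services" claim them by netfn. */

struct ipmi_msg {
	unsigned char netfn;
	unsigned char cmd;
	const unsigned char *data;
	size_t data_len;
};

struct ipmi_bmc_service {
	unsigned char netfn;                 /* netfn this service claims */
	int (*handle)(struct ipmi_msg *msg); /* returns 0 on success */
};

#define MAX_SERVICES 8
static struct ipmi_bmc_service *services[MAX_SERVICES];
static int num_services;

int ipmi_bmc_register_service(struct ipmi_bmc_service *svc)
{
	if (num_services >= MAX_SERVICES)
		return -1;
	services[num_services++] = svc;
	return 0;
}

/* Route an incoming message to the first service claiming its netfn. */
int ipmi_bmc_route(struct ipmi_msg *msg)
{
	int i;

	for (i = 0; i < num_services; i++)
		if (services[i]->netfn == msg->netfn)
			return services[i]->handle(msg);
	return -1; /* no service registered: caller sends an error response */
}

/* Example service claiming the Application netfn (0x06). */
static int seen_cmd = -1;
static int app_handle(struct ipmi_msg *msg)
{
	seen_cmd = msg->cmd;
	return 0;
}
```

The point is just that a service is not tied to a single command, which is
why _service reads better to me than _handler or _device.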
>
>
...
>>
>> As far as this being a complete design; I do not consider what I have
>> presented as being complete. I mentioned some things above that I would
>> like
>> to add and some people have already chimed in asking for some changes.
>> I just wanted to get some feedback before I went *too* far.
>
>
> I'm not sure this is well known, but the OpenIPMI library has a fully
> functional BMC that I wrote, originally for testing the library, but it has
> been deployed in a system and people use it with QEMU. So I have some
> experience here.
Is this the openipmi/lanserv?
>
> The biggest frustration writing that BMC was that IPMI does not lend itself
> to a nice modular design. Some parts are fairly modular (sensors, SDR, FRU
> data) but some are not so clean (channel management, firmware firewall,
> user management). I would try to design things in nice independent pieces,
> and end up having to put ugly hooks in places.
Yeah, I had started to notice this when I started working on our userland
IPMI stack.
>
> Firmware firewall, for instance, makes sense to implement in a single place
> that handles all incoming messages. However, it also deals with subcommands
> (like making firewalls for each individual LAN parameter), so you either
> have to put knowledge of the individual command structure in the main
> firewall, or you have to embed pieces of the firewall in each command that
> has subcommands. But if you put it in the commands, then the commands have
> to have knowledge of the communication channels.
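Just to make sure I follow the tension you describe: a central firewall can
filter on (netfn, cmd) cleanly, but subcommand granularity forces it to parse
request bodies. A rough sketch of the "format knowledge leaks into the
firewall" side of the trade-off, with a hypothetical policy blocking one LAN
parameter (the rule table and helper names are made up for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct fw_rule {
	unsigned char netfn;
	unsigned char cmd;
	/* NULL = allow the whole command; otherwise the firewall must parse
	 * the request body to extract a subcommand, leaking knowledge of the
	 * command's format into the central firewall. */
	bool (*sub_allowed)(const unsigned char *body, size_t len);
};

/* Set LAN Configuration Parameters carries the parameter selector in the
 * second request byte; block parameter 0x12 as a hypothetical policy. */
static bool lan_param_allowed(const unsigned char *body, size_t len)
{
	return len >= 2 && body[1] != 0x12;
}

static const struct fw_rule rules[] = {
	{ 0x0c, 0x01, lan_param_allowed }, /* Transport netfn, Set LAN Config */
};

bool firewall_allows(unsigned char netfn, unsigned char cmd,
		     const unsigned char *body, size_t len)
{
	size_t i;

	for (i = 0; i < sizeof(rules) / sizeof(rules[0]); i++) {
		if (rules[i].netfn != netfn || rules[i].cmd != cmd)
			continue;
		return rules[i].sub_allowed ?
			rules[i].sub_allowed(body, len) : true;
	}
	return true; /* default-allow for unlisted commands in this sketch */
}
```

The alternative, as you say, is pushing the sub_allowed checks into each
command handler, at the cost of the handlers learning about channels.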
>
> I ended up running into this all over the place. In the end I just hacked
> in what I needed because what I designed was monolithic. It seemed that
> designing it in a way that was modular was so complex that it wasn't worth
> the effort. I've seen a few BMC designs, none were modular.
>
> In the design you are working on here, firmware firewall will be a bit of
> a challenge.
>
> Also, if you implement a LAN interface, you have to deal with a shared
> channel and privilege levels. You will either have to have a context per
> LAN connection, with user and privilege attached to the context, or you
> will need a way to have user and privilege information in each message
> so that the message router can handle rejecting messages it doesn't
> have privilege to do, and the responses coming back will go to the
> right connection.
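The second option you mention, tagging each message with session context,
seems the more tractable one to me. A minimal sketch of what that context
might carry so the router can reject under-privileged commands centrally and
route responses back to the right connection (the privilege values follow
the IPMI spec; the struct and function names are hypothetical):

```c
#include <assert.h>
#include <stdbool.h>

/* Privilege levels as defined by the IPMI specification. */
enum ipmi_priv {
	IPMI_PRIV_CALLBACK = 1,
	IPMI_PRIV_USER     = 2,
	IPMI_PRIV_OPERATOR = 3,
	IPMI_PRIV_ADMIN    = 4,
};

/* Per-message context attached by the LAN interface on receipt; the router
 * uses session_id to send the response back to the right connection. */
struct ipmi_msg_ctx {
	int session_id;      /* identifies the LAN connection */
	int user_id;
	enum ipmi_priv priv; /* privilege negotiated for the session */
};

/* Router-side check: a command declares the minimum privilege it needs. */
bool router_permits(const struct ipmi_msg_ctx *ctx, enum ipmi_priv required)
{
	return ctx->priv >= required;
}
```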
>
> There is also a strange situation where commands can come from a LAN
> (or other) interface, be routed to a system interface where the host
> picks up the command, handles it, send the response back to the
> BMC which routes it back to the LAN interface. People actually use this.
That sounds like fun :-P
>
> Another problem I see with this design is that most services don't care
> about the interface the message comes from. They won't want to have to
> discover and make individual connections to each interface, they will just
> want to say "Hook me up to everything, I don't care."
>
> Some services will care, though. Event queues and interface flag handling
> will only want that on individual interfaces. For these types of services,
> it would be easier if they could discover and identify the interfaces. If
> interfaces are added dynamically (or each LAN connection appears as
> a context) it needs a way to know when interfaces come and go.
>
> If you end up implementing all this, you will have a fairly complex
> piece of software in your message routing. If you look at the message
> handler in the host driver, it's fairly complex, but it makes the user's
> job simple, and it makes the interface's job simple(r). IMHO that's a
> fair trade-off. If you have to have complexity, keep it in one place.
I think that is a reasonable point. My initial goal was not to move the
routing that we do in user land into kernel space, but only to provide basic
facilities that are enough for my use case; however, it sounds like there
might be some wisdom in moving message routing and message filtering into
kernel space. This might also make the framework more platform agnostic, and
less tightly coupled to OpenBMC.
Nevertheless, that substantially broadens the scope of what I am trying
to do.
I think a good place to start is still to create a common interface for
hardware interfaces (BT, KCS, SSIF, and their varying implementations) to
implement, as I have done; while we work on the rest of the stack on top of
it, the device file interface can be used in the meantime.
Let me know what you think.
Thanks!