MCTP/PLDM BMC-host communication design

Tung Nguyen OS tungnguyen at os.amperecomputing.com
Fri Jan 21 15:15:46 AEDT 2022


Dear Jeremy, Andrew,
Thank you for your comments. We are using the userspace MCTP stack and will consider moving to the in-kernel MCTP stack as suggested.
Because of our specific requirements, we are looking for a simpler approach. In our case, on-chip sensors and events live on both CPU sockets, and we must send PLDM commands to poll their data. If we used two interfaces to communicate with the host, I think sending to multiple sockets would become complex.
The points to consider are:
+ If a socket fails during runtime, does the MCTP/PLDM stack keep working?
+ If one or more sockets fail, can we reboot the whole system to recover?

When using one interface, I think:
+ On the host side, socket 0 (the master) should manage the other sockets, perhaps not via SMBus but over a faster inter-socket link. Of course, this means more work in the host firmware, as you have pointed out.
+ The BMC only needs to recover the system (via reboot) when socket 0 has an issue; otherwise it operates properly.

Do you see any further issues with the communication performance?
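
To make the polling idea concrete, below is a minimal sketch of the loop we have in mind, assuming the in-kernel AF_MCTP stack. The remote EIDs (9 and 10) and the GetTID request payload are illustrative only, not our final values:

/*
 * Minimal sketch: poll PLDM endpoints on both CPU sockets over the
 * in-kernel MCTP stack. EIDs and payload are illustrative.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <linux/mctp.h>

#ifndef AF_MCTP
#define AF_MCTP 45 /* not yet defined by older libc headers */
#endif

#define MCTP_TYPE_PLDM 0x01 /* PLDM over MCTP, per DSP0241 */

static ssize_t pldm_request(int sd, mctp_eid_t eid,
                            const void *req, size_t req_len,
                            void *rsp, size_t rsp_len)
{
    struct sockaddr_mctp addr = { 0 };
    socklen_t addrlen = sizeof(addr);

    addr.smctp_family = AF_MCTP;
    addr.smctp_network = MCTP_NET_ANY;
    addr.smctp_addr.s_addr = eid;
    addr.smctp_type = MCTP_TYPE_PLDM;
    addr.smctp_tag = MCTP_TAG_OWNER; /* kernel allocates the tag */

    if (sendto(sd, req, req_len, 0,
               (struct sockaddr *)&addr, sizeof(addr)) < 0)
        return -1;

    /* the response is matched back to this socket by (eid, tag, type) */
    return recvfrom(sd, rsp, rsp_len, 0,
                    (struct sockaddr *)&addr, &addrlen);
}

int main(void)
{
    /* one socket and one local EID are enough for both remote sockets */
    int sd = socket(AF_MCTP, SOCK_DGRAM, 0);
    mctp_eid_t eids[] = { 9, 10 }; /* illustrative: CPU socket 0 and 1 */
    unsigned char gettid[] = { 0x80, 0x00, 0x02 }; /* PLDM base GetTID */
    unsigned char rsp[64];

    if (sd < 0)
        return 1;

    for (size_t i = 0; i < sizeof(eids) / sizeof(eids[0]); i++) {
        ssize_t len = pldm_request(sd, eids[i], gettid, sizeof(gettid),
                                   rsp, sizeof(rsp));
        printf("EID %d: %zd response bytes\n", eids[i], len);
    }

    close(sd);
    return 0;
}

Note the application code is identical whether the two EIDs are reached over one interface or two; the kernel's MCTP routing decides which link each request takes.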

Thanks,

________________________________
From: Jeremy Kerr <jk at codeconstruct.com.au>
Sent: Thursday, January 20, 2022 7:53 AM
To: Tung Nguyen OS <tungnguyen at os.amperecomputing.com>; openbmc at lists.ozlabs.org <openbmc at lists.ozlabs.org>
Cc: Thu Nguyen OS <thu at os.amperecomputing.com>; Thang Nguyen OS <thang at os.amperecomputing.com>
Subject: Re: MCTP/PLDM BMC-host communication design

Hi Tung,

> We are using the community PLDM/MCTP code to design our MCTP/PLDM stack
> via SMBus on an aarch64 system. Basically, we have 2 CPU sockets
> corresponding to 2 SMBus addresses, and the MCTP/PLDM stack looks
> like this image:
>
> https://github.com/tungnguyen-ampere/images/blob/7dba355b4742d0ffab9cd39303bbb6e9c8a6f646/current_design.png

That looks good to me, but a couple of notes:

 - EID 0 and EID 1 are reserved addresses according to the spec; the
   usable range starts at 8

 - therefore, the *convention* so far for EID allocation is to assign
   EID 8 to the BMC, as the top-level bus owner, and allocate onwards
   from there. However, that's certainly not fixed if you require
   something different for your design.

 - you don't necessarily need two EIDs (0 and 1) for the BMC there.
   Even if there are two interfaces, you can use a single EID on the
   BMC, which simplifies things.
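
For example, with a single BMC EID and that convention, you'd end up with
something like this (numbering purely illustrative):

  EID 8  - BMC (one EID, used on both interfaces)
  EID 9  - host, socket 0
  EID 10 - host, socket 1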

> Because the discovery process is not supported, we are fixing the EIDs
> for the host.

As Andrew has mentioned, we have the in-kernel stack working, including
the EID discovery process using MCTP Control Protocol messaging.

If you'd like to experiment with this, we have a couple of backport
branches for 5.10 and 5.15 kernels, depending on which you're working
with:

 https://codeconstruct.com.au/docs/mctp-on-linux-introduction/#our-development-branches

It's still possible to use fixed EID(s) for remote endpoints though, if
your host MCTP stack does not support the control protocol. You'll just
need to set up (on the BMC) some static routes for the fixed remote
EIDs. I'm happy to help out with configuring that if you like.
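
As a rough sketch of that setup, using the userspace mctp tool covered in
the intro page linked above (the interface name, EIDs and hardware address
below are placeholders for your actual configuration):

  mctp link set mctpi2c0 up
  mctp addr add 8 dev mctpi2c0
  mctp route add 9 via mctpi2c0
  mctp neigh add 9 dev mctpi2c0 lladdr 0x1d

- plus the equivalent route and neighbour entries for the second link.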

> A new way that is considering is like the image:
> https://github.com/tungnguyen-ampere/images/blob/7dba355b4742d0ffab9cd39303bbb6e9c8a6f646/new_design.png

That looks like it has some considerable drawbacks, though:

 - you'll now need to implement MCTP bridging between the SMBus link
   (between host and socket 0) and whatever interface you're using to
   communicate between socket 0 and socket 1. This may then require you
   to implement more of the control protocol stack on the host (for
   example, as you'll need to allocate EID pools from the top-level bus
   owner, if you're doing dynamic addressing).

   That's all still possible, but requires more firmware that you'll
   need to implement.

 - if there's an issue with the socket 0 link (say, if the host
   has offlined the CPUs in socket 0), you might lose MCTP
   connectivity between the BMC and socket 1 too.

That said, it's still feasible, but I'd suggest your first design as a
simpler and more reliable solution.

Regards,


Jeremy
