[EXTERNAL] Re: Managing heterogeneous systems
vishwa
vishwa at linux.vnet.ibm.com
Thu Dec 12 17:59:08 AEDT 2019
On 12/10/19 3:20 PM, Neeraj Ladkani wrote:
>
> Great discussion.
>
> The problem is not the physical interface, as they can communicate over
> the LAN. The problem is entity binding, as one compute node can be
> connected to one or more storage nodes. How can we have one view of the
> system from an operational perspective? Power on/off, SEL logs, telemetry?
>
Correct. This is where I mentioned "Primary BMC acting as Point Of
Contact" for external requests.
Depending on how we want to service the request, we could either orchestrate
it via the PoC BMC, or tell the external requesters where they can get
the data so that they connect to those BMCs directly.
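
To make the two options concrete, here is a rough sketch; the resource map,
addresses, and helper below are hypothetical, not existing OpenBMC code:

    # Rough sketch only: made-up resource map and addresses.
    import requests

    # The PoC BMC's view of which peer BMC owns which resource.
    RESOURCE_OWNERS = {
        "/redfish/v1/Systems/ComputeNode": "https://10.0.0.11",
        "/redfish/v1/Systems/StorageNode": "https://10.0.0.12",
    }

    def handle_external_request(path, redirect=False):
        owner = RESOURCE_OWNERS[path]
        if redirect:
            # Option A: answer with a redirect so the requester talks to the
            # owning BMC directly.
            return 307, {"Location": owner + path}
        # Option B: orchestrate; the PoC BMC fetches the data from the peer
        # and returns it as if it were its own.
        resp = requests.get(owner + path, verify=False, timeout=5)
        return resp.status_code, resp.json()

Orchestrating keeps a single external endpoint but loads the PoC BMC;
redirecting keeps it light but requires the peer BMCs to be reachable (and
authenticated against) from outside.
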
!! Vishwa !!
> Some of the problems:
>
> 1. Power operations: power on/off and resets need to be coordinated
> across all nodes in a system.
> 2. Telemetry: the OS runs only on the head node, so telemetry requests
> should gather telemetry (SEL logs, sensor values) from all the nodes.
> 3. Firmware update.
> 4. RAS: memory errors are logged by UEFI SMM into the head node, but the
> corresponding DIMM temperature and inlet temperature are logged on a
> secondary node and are not mapped to the errors.
>
> I have been exploring a couple of routes:
>
> 1. LUN discovery and routing: this is similar to IPMI, but I am
> working on an architecture to extend it to support multiple LUNs
> and route them from the head node (we would need LUN routing over LAN).
> 2. Redfish hierarchy for systems, starting from the service root below
> (an aggregated view is also sketched after this list):
>
> "Systems": {
> "@odata.id": "/redfish/v1/Systems"
> },
> "Chassis": {
> "@odata.id": "/redfish/v1/Chassis"
> },
> "Managers": {
> "@odata.id": "/redfish/v1/Managers"
> },
> "AccountService": {
> "@odata.id": "/redfish/v1/AccountService"
> },
> "SessionService": {
> "@odata.id": "/redfish/v1/SessionService"
> },
> "Links": {
> "Sessions": {
> "@odata.id": "/redfish/v1/SessionService/Sessions"
> }
> 3. Custom messaging over LAN (PubSub)
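>
> For route 2, the head-node BMC could expose an aggregated Systems
> collection along these lines; this is a hypothetical sketch and the
> member names are invented for illustration:
>
>     # Hypothetical aggregated Systems collection on the head-node BMC.
>     AGGREGATED_SYSTEMS = {
>         "@odata.id": "/redfish/v1/Systems",
>         "@odata.type": "#ComputerSystemCollection.ComputerSystemCollection",
>         "Name": "Aggregated Computer System Collection",
>         "Members": [
>             {"@odata.id": "/redfish/v1/Systems/ComputeNode"},
>             {"@odata.id": "/redfish/v1/Systems/StorageNode1"},
>             {"@odata.id": "/redfish/v1/Systems/StorageNode2"},
>         ],
>         "Members@odata.count": 3,
>     }
>
> The open question is how the head node keeps those members in sync with
> the BMCs that actually own them: pull on demand, or push via something
> like the PubSub route in 3.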
>
> I am also working on a whitepaper in the same area. Happy to work with
> you all if you have any ideas on how we can standardize this.
>
> Neeraj
>
> *From:* vishwa <vishwa at linux.vnet.ibm.com>
> *Sent:* Tuesday, December 10, 2019 1:00 AM
> *To:* Richard Hanley <rhanley at google.com>; Neeraj Ladkani
> <neladk at microsoft.com>
> *Cc:* openbmc at lists.ozlabs.org; sgundura at in.ibm.com;
> kusripat at in.ibm.com; shahjsha at in.ibm.com; vikantan at in.ibm.com
> *Subject:* [EXTERNAL] Re: Managing heterogeneous systems
>
> Hi Richard / Neeraj,
>
> Thanks for bringing this up. It's one of the interesting topics for IBM.
>
> Some of the thoughts here.....
>
> When we have multiple BMCs as part of a single system, there are
> three main parts to it:
>
> 1/. Discovering the peer BMCs and assigning roles
> 2/. Monitoring the existence of peer BMCs - heartbeat
> 3/. In the event of losing the master, detecting that via #2 and then
> reassigning the role
>
> Depending on how we want to establish the roles, we could have
> Single-Master/Many-Slave or Multi-Master/Multi-Slave, etc.
>
> One of the teams here is trying to do a PoC for a multi-BMC architecture
> and is still at a very early stage.
> The team is currently studying/evaluating the available solutions:
> Corosync / Heartbeat / Pacemaker.
> Corosync works nicely with clusters, but we need to see if we can
> trim it down for the BMC.
>
> If we cannot use Corosync for some reason, then we need to see if we can
> do the discovery using PLDM (probably using the terminus IDs)
> and come up with custom rules for assigning master/slave roles.
>
> If we choose to have Single-Master and Many-Slave, we could have that
> single master act as the Point of Contact for external
> requests; it could then orchestrate with the needed BMCs internally to
> get the job done.
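>
> Just to make the heartbeat/role idea concrete, here is a minimal sketch;
> the peer list, timeout, and election rule are assumptions, not the PoC code:
>
>     # Minimal sketch, not the PoC code: the peer that is still alive and
>     # has the lowest priority value becomes master. PLDM terminus IDs or
>     # any other custom rule could be plugged in the same way.
>     import time
>
>     PEERS = {"bmc0": 0, "bmc1": 1, "bmc2": 2}   # assumed names/priorities
>     HEARTBEAT_TIMEOUT = 10                      # seconds, assumed
>     last_seen = {name: time.time() for name in PEERS}
>
>     def record_heartbeat(name):
>         """Called whenever a heartbeat arrives from a peer (#2 above)."""
>         last_seen[name] = time.time()
>
>     def elect_master():
>         """Re-run periodically; covers #3 (master loss, role reassignment)."""
>         now = time.time()
>         alive = [n for n in PEERS if now - last_seen[n] < HEARTBEAT_TIMEOUT]
>         return min(alive, key=lambda n: PEERS[n]) if alive else None
>
> Corosync/Pacemaker gives us this (plus quorum handling) off the shelf; the
> question is whether that footprint can be trimmed down enough for a BMC.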
>
> I will be happy to know if there are alternatives that suit a BMC kind
> of architecture.
>
> !! Vishwa !!
>
> On 12/10/19 4:32 AM, Richard Hanley wrote:
>
> Hi Neeraj,
>
> This is an open question that I've been looking into as well.
>
> For BMC-to-BMC communication, there are a few options.
>
> 1. If you have network connectivity, you can communicate using
> Redfish.
> 2. If you only have a PCIe connection, you'll have to use either
> the in-band connection or the side-band I2C*. PLDM and MCTP are
> protocols defined to handle this use case, although I'm
> not sure if the OpenBMC implementations have been used in
> production.
> 3. There is always IPMI, which has its own pros/cons.
>
> For taking several BMCs and aggregating them into a single logical
> interface that is exposed to the outside world, there are a few
> things happening on that front. DMTF has been working on an
> aggregation protocol for Redfish. However, it's my understanding
> that their proposal is more directed at the client level, as
> opposed to within a single "system".
>
> I just recently joined the community, but I've been thinking about
> how a proxy layer could merge two Redfish services together.
> Since Redfish is fairly strongly typed and has a well defined
> mechanism for OEM extensions, this should be pretty generally
> applicable. I am planning on having a white paper on the issue
> sometime after the holidays.
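>
> As a toy illustration of that kind of merging proxy (backend addresses
> and the merge rule are assumptions, not a concrete proposal):
>
>     # Toy illustration: fetch the same collection from each backend
>     # Redfish service and splice the Members lists together.
>     import requests
>
>     BACKENDS = ["https://bmc-compute.local", "https://bmc-storage.local"]
>
>     def merged_collection(path):
>         members = []
>         for backend in BACKENDS:
>             doc = requests.get(backend + path, verify=False, timeout=5).json()
>             # A real proxy would re-home these IDs so they resolve through
>             # the proxy itself; kept as-is here for brevity.
>             members.extend(doc.get("Members", []))
>         return {
>             "@odata.id": path,
>             "Members": members,
>             "Members@odata.count": len(members),
>         }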
>
> Another thing to note: DMTF recently released a spec for running a
> binary encoding of Redfish over PLDM, called RDE. That might be a
> useful way of tying all these concepts together.
>
> I'd be curious about your thoughts and use cases here. Would
> either PLDM or Redfish fit your use case?
>
> Regards,
>
> Richard
>
> *I've heard of some proposals that run a network interface over
> PCIe. I don't know enough about PCIe to know if this is a good idea.
>
> On Mon, Dec 9, 2019 at 1:27 PM Neeraj Ladkani
> <neladk at microsoft.com <mailto:neladk at microsoft.com>> wrote:
>
> Are there any standards for managing heterogeneous systems? For
> example, in a rack there may be a compute node (with its own
> BMC) and a storage node (with its own BMC) connected using a
> PCIe switch. How are these two BMCs represented as one system?
> Are there any standards for BMC-to-BMC communication?
>
> Neeraj
>