BMC redundancy

Brad Bishop bradleyb at fuzziesquirrel.com
Tue Jan 30 08:38:46 AEDT 2018


> On Jan 29, 2018, at 3:43 PM, Vernon Mauery <vernon.mauery at linux.intel.com> wrote:
> 
> On 29-Jan-2018 10:52 AM, Brad Bishop wrote:
>> I know we have a lot of work to do with the basics before tackling something
>> like supporting multiple BMCs in a single system, but it's never too early to
>> brainstorm.
>> 
>> Quick community poll:  Please share any thoughts you may have around supporting
>> systems with multiple BMCs.  Does your organization care?  Thoughts on how it
>> could/should be done?  System designs that are a non-starter for OpenBMC?
> 
> Intel has supported systems like this in the past, and it is likely that we will have need of multi-node/multi-bmc systems in the future.

Thanks Vernon - it's good to hear there might be room to collaborate on this.

> 
> Our systems in the past have been sled-based with four sleds (BMC+host) per chassis and the BMCs connected via I2C over the backplane.  One of the BMCs was elected (based on availability and ID) to be the master for the system to control the shared resources (power supplies and other common stuff). 
> Each BMC was individually accessible in the normal ways (KCS from host or RMCP/web over the network).
> 
> Is this the sort of stuff that you are looking for or were you thinking of BMC redundancy for a single system?

I think so, yes.  Very similar to what you describe, but with a single
SMP fabric across all the sleds.  I think in practice it doesn't
make much difference - the SMP fabric would just be another shared
resource amongst the sleds.
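
For anyone following along, here's a rough sketch (plain C++, not actual
OpenBMC code) of the kind of election rule Vernon describes: each BMC
advertises its ID and availability over the shared bus, and one available
BMC is chosen to own the shared resources.  The BmcStatus struct and
pickMaster() names are made up for illustration, and "lowest available ID
wins" is just one possible policy - Vernon only said the election was based
on availability and ID.

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <optional>
    #include <vector>

    struct BmcStatus
    {
        uint8_t id;      // slot/sled ID strapped on the backplane
        bool available;  // heartbeat recently seen on the shared bus
    };

    // Return the ID of the BMC that should own the shared resources
    // (power supplies, fans, the SMP fabric, ...), or nothing if no
    // BMC is currently available.
    std::optional<uint8_t> pickMaster(const std::vector<BmcStatus>& bmcs)
    {
        std::optional<uint8_t> master;
        for (const auto& bmc : bmcs)
        {
            if (bmc.available && (!master || bmc.id < *master))
            {
                master = bmc.id;
            }
        }
        return master;
    }

    int main()
    {
        // Four sleds; sled 0 has dropped off the bus.
        std::vector<BmcStatus> sleds = {
            {0, false}, {1, true}, {2, true}, {3, true}};

        if (auto master = pickMaster(sleds))
        {
            std::cout << "BMC " << int(*master)
                      << " owns the shared resources\n";
        }
    }

The interesting part in practice isn't the tie-break itself but how each
BMC learns the availability of its peers and how ownership is handed over
when the current master disappears.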

> 
> --Vernon

