Multiple BMCs in a system: IPMB? Redfish? MCTP?
Nancy Yuen
yuenn at google.com
Thu Apr 30 10:24:48 AEST 2020
Neeraj, I was not considering aggregation in this case, just having the
intermediate BMC "route".
Vijay, thanks. I was wondering what you were using IPMB for. What's the
rationale for using IPMB vs. something else? In your multi-host
system, does one BMC support multiple host CPUs? Are there also multiple BMCs?
----------
Nancy
On Wed, Apr 29, 2020 at 5:15 PM Vijay Khemka <vijaykhemka at fb.com> wrote:
> Hi Nancy,
>
> We are currently using (1) in our current multi host design. Option (3)
> also looks good.
>
>
>
> Regards
>
> -Vijay
>
>
>
> *From: *openbmc <openbmc-bounces+vijaykhemka=fb.com at lists.ozlabs.org> on
> behalf of Nancy Yuen <yuenn at google.com>
> *Date: *Wednesday, April 29, 2020 at 3:53 PM
> *To: *OpenBMC Maillist <openbmc at lists.ozlabs.org>
> *Subject: *Multiple BMCs in a system: IPMB? Redfish? MCTP?
>
>
>
> I talked with some people a while back (a long while back) about multiple
> BMCs in a system, either for redundancy or for managing separate parts of a
> system. I'm wondering what other people are thinking in this area, if at
> all.
>
>
>
> We are considering similar designs and I'm looking into options for
> BMC-BMC communications. Some BMCs may not be externally accessible. Here
> are some options that we've looked at:
>
> 1. i2c/IPMB
> 2. usbnet/Redfish
> 3. i2c/MCTP/PLDM or something else?
> 4. internal network via switch chip/Redfish or MCTP
>
> I'd like to reduce our use of IPMI so I want to avoid (1).
>
>
>
> ----------
> Nancy
>
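For anyone unfamiliar with option (1), the i2c/IPMB framing is simple enough to sketch. The following is a minimal illustration of how an IPMB request frame is laid out on the wire, per the IPMB v1.0 spec (two's-complement checksums over the header and body); the slave addresses used in the example are made up for illustration, not taken from any design discussed above.

```python
def ipmb_checksum(data: bytes) -> int:
    """Two's-complement checksum: sum of bytes plus checksum == 0 (mod 256)."""
    return (-sum(data)) & 0xFF


def build_ipmb_request(rs_sa: int, netfn: int, rq_sa: int, rq_seq: int,
                       cmd: int, data: bytes = b"",
                       rs_lun: int = 0, rq_lun: int = 0) -> bytes:
    """Build a raw IPMB request frame as written on the I2C bus.

    Layout: rsSA | netFn/rsLUN | check1 | rqSA | rqSeq/rqLUN | cmd | data | check2
    """
    header = bytes([rs_sa, (netfn << 2) | rs_lun])
    check1 = ipmb_checksum(header)
    body = bytes([rq_sa, (rq_seq << 2) | rq_lun, cmd]) + data
    check2 = ipmb_checksum(body)
    return header + bytes([check1]) + body + bytes([check2])


# Example: Get Device ID (NetFn 0x06, cmd 0x01) sent to a BMC at
# slave address 0x40 from a BMC at 0x20 (illustrative addresses).
frame = build_ipmb_request(rs_sa=0x40, netfn=0x06, rq_sa=0x20,
                           rq_seq=0x01, cmd=0x01)
print(frame.hex())  # -> 4018a8200401db
```

The per-hop checksums are one reason IPMB works for simple point-to-point BMC links but gets awkward for routing: an intermediate BMC has to rewrite addresses and recompute checksums to bridge a request onward.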