API Aggregator Data Model + OCP Summit

Richard Hanley rhanley at google.com
Fri Feb 21 10:16:10 AEDT 2020


Hi all,

Apologies for being radio silent on a BMC aggregator for the last few
weeks.  This email is a bit long, but I wanted to give everyone a quick
snapshot of what I've been thinking about in this space.

(As a quick aside, I mentioned in the last meeting that I will be giving a
talk at the OCP Summit in March
<https://www.opencompute.org/summit/global-summit/schedule>.  The talk
will summarize the discussions we've had here in OpenBMC and try to
raise interest in the problem.  Hopefully I'll get to meet some of you at
the summit.)

Anyway, the last few discussions about the aggregator have made it clear
that there is some conceptual work to be done on defining what exactly the
aggregator is and what services need to be created.

To that end, I think the most concise definition of the aggregator is this:
a way for services to register an API, plus consistent semantics for
frontends built on top of the registered APIs.

So from the aggregator's point of view, there is no difference between a
local resource and a remote resource.  This implies that any frontend built
on top of the aggregator wouldn't have to worry about "where" the request
gets handled, since that concept has been abstracted away.

Originally, I was thinking that this aggregation service would be done
using Redfish.  That has some problems for systems that want to use another
protocol, or some mixture of protocols (e.g. an out-of-band Redfish service
alongside an in-band IPMI host interface).

However, as a jumping-off point, I asked myself three questions:
  1) What is the minimum amount of information I would need to construct a
Redfish service?
  2) How reusable is that minimal data model for other protocols and use
cases?
  3) How well does it support our existing DBus usage and ecosystem?

From that, I think we can get a lot of traction by combining two core data
types: Resource Nodes and Edges (a rough sketch of both follows the
descriptions below).

A resource node would contain the following:
  Profile - This would be metadata about the resource, including schema,
cache policy, names, and ACLs.

  Supported Methods - Resources could implement any of the HTTP methods
(GET, PUT, POST, PATCH, DELETE).

  Supported Actions - Redfish makes a distinction between calls that
manipulate data and calls that manipulate the physical world.  I think that
separation makes a lot of sense in a general protocol.

  Event Dispatch - This would be the async method for resources to send
events up to any frontend that is listening.

  Task Monitor - Each resource may have tasks that are being run as part
of another request.  By giving each resource a task monitor, it can own
its tasks without needing to integrate into some global monitor.

Meanwhile, the edges would connect resources together and contain a list of
tags that describe the relationship (e.g. collection membership, contained
by, managed by).
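
To make the shape of these two types concrete, here is a rough Python
sketch.  Every name here is hypothetical and just for illustration;
nothing below is a proposed API:

    from dataclasses import dataclass
    from enum import Enum
    from typing import Callable

    class Method(Enum):
        GET = "GET"
        PUT = "PUT"
        POST = "POST"
        PATCH = "PATCH"
        DELETE = "DELETE"

    class TaskMonitor:
        """Tracks long-running tasks owned by a single resource."""
        def __init__(self):
            self.tasks = []

    @dataclass
    class Profile:
        # Metadata about the resource: schema, cache policy, names, ACLs.
        schema: str
        cache_policy: str
        names: list[str]
        acls: list[str]

    @dataclass
    class ResourceNode:
        profile: Profile
        # Which of the HTTP methods this resource implements.
        methods: set[Method]
        # Named actions that manipulate the physical world, kept separate
        # from the data-manipulating methods above.
        actions: dict[str, Callable]
        # Async hook for sending events up to any listening frontend.
        event_dispatch: Callable
        # Per-resource task monitor, so each resource owns its own tasks.
        task_monitor: TaskMonitor

    @dataclass
    class Edge:
        source: str  # resource id
        target: str  # resource id
        # Tags describing the relationship, e.g. "collection_member",
        # "contained_by", "managed_by".
        tags: set[str]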

One thing I like about this data model is that it lets us do some
meaningful work at the aggregation layer without having to know anything
about the data or methods that the resources are providing.
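
For example, continuing the sketch above, collection membership and method
checks fall out of the graph structure alone, with no knowledge of what
any resource actually contains:

    def members_of(edges, collection_id):
        # All resources whose edges tag them as members of this collection.
        return [e.source for e in edges
                if e.target == collection_id
                and "collection_member" in e.tags]

    def can_handle(node, method):
        # Reject unsupported methods before ever touching the backend.
        return method in node.methods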

When it comes to sociability with other protocols, I think it is relatively
lightweight.  The data model is a bit richer than what IPMI offers, but I
don't think it is so rich that writing wrappers would be especially hard.
It would also be a very useful component if the community wanted to support
RDE over PLDM.
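
To give a feel for how thin such a wrapper could be, here is a purely
illustrative IPMI example reusing the earlier sketch (the actual IPMI
command dispatch would live behind the aggregator, not in the node):

    def wrap_ipmi_sensor(sensor_num):
        # The node only describes what the resource supports; the real
        # Get Sensor Reading call happens elsewhere at request time.
        return ResourceNode(
            profile=Profile(schema="Sensor", cache_policy="volatile",
                            names=["sensor%d" % sensor_num], acls=[]),
            methods={Method.GET},
            actions={},
            event_dispatch=lambda event: None,  # plain sensors emit no events
            task_monitor=TaskMonitor(),
        )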

So, to close this email, I want to lay out how I imagine this aggregator
being used in practice.

Once the aggregator starts up, it would have a root resource.  This would
expose any important process metadata, a default entry point for browsing
registered resources, and a place for clients to listen for events.

Daemons could then register resources.  When they register a resource, they
would give the resource definition and the edges used to connect it into
the aggregator's resource graph.  The aggregator would send event messages
to any listeners whenever a resource is created or destroyed.
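
Roughly, that flow might look like this (hypothetical names again; the
real registration interface would presumably be something like a DBus
API):

    class Aggregator:
        def __init__(self, root_node):
            # The root resource created at startup: process metadata, a
            # default entry point, and a place to listen for events.
            self.nodes = {"/": root_node}
            self.edges = []
            self.listeners = []

        def register(self, resource_id, node, edges):
            # Called by a daemon to splice a resource into the graph.
            self.nodes[resource_id] = node
            self.edges.extend(edges)
            self._emit({"event": "ResourceCreated", "id": resource_id})

        def unregister(self, resource_id):
            del self.nodes[resource_id]
            self.edges = [e for e in self.edges
                          if resource_id not in (e.source, e.target)]
            self._emit({"event": "ResourceDestroyed", "id": resource_id})

        def _emit(self, event):
            for notify in self.listeners:
                notify(event)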

When it comes to presenting these resources to the outside world, a
frontend could keep an in-memory copy of the resource definitions and
edges (since those would be relatively stable) and query the aggregator
for a snapshot of resources at a given time.  The hope is that frontends
could be as stateless as possible.
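
In other words, a request handler in the frontend might be little more
than this (the snapshot() query is hypothetical and not defined above):

    def handle_get(aggregator, path):
        # Topology (nodes and edges) is cached in memory since it changes
        # rarely; only the data comes from a point-in-time snapshot.
        node = aggregator.nodes.get(path)
        if node is None or Method.GET not in node.methods:
            return 404, None
        return 200, aggregator.snapshot(path)  # hypothetical query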

There are some other topics I could add. In particular I think caching
becomes a very important subject once you start managing distributed BMCs.
However, this email has gotten long enough, so I think I will save that for
another day.

Thanks,
Richard