Redfish on OpenBMC

Michael E Brown Michael.E.Brown at dell.com
Tue Feb 13 13:52:04 AEDT 2018


On Thu, Feb 08, 2018 at 02:17:02PM +0000, Rapkiewicz, Pawel wrote:
> Hi Chris, Yugi,
> 
> Thanks for the benchmarking results you've attached; I'd like to understand the data better. I'm curious about the 10 concurrent requests for the Redfish service root: the average is ~5 seconds for uwsgi, and mostly no response for gevent.
> A few questions:
> - gevent and uwsgi, those are two independent redfish server implementations?
> - was there any other traffic while retrieving the root service?
> - Was it with Basic HTTP Auth, or without any?
> 
> I'm also a little confused about how we measure and present code size and performance, since it isn't normalized in any way and is biased by the measurement method. If the intent of gathering this data is to
> compare different Redfish approaches, we should agree on measurement criteria. Some suggestions:
> 
> Code size:
> - real code size, not on compressed file system
> - is it binary or script?
> - what's the size of dependent libraries (already existing in OpenBMC environment vs. the new one required only by given Redfish implementation)?
> 
> Performance:
> - We need common script for measurement
> - And a common method. I'd prefer running on a machine directly connected to the OpenBMC platform to minimize network latency, or running over the loopback interface on the platform itself, though that may affect context switching.
> - The performance will still be biased by the CPU the Redfish service runs on, but this gets us closer to comparable numbers.
> 
> Regards,
> Pawel
> 
> 
> -----Original Message-----
> From: openbmc [mailto:openbmc-bounces+pawel.rapkiewicz=intel.com at lists.ozlabs.org] On Behalf Of Chris Ong
> Sent: Wednesday, February 7, 2018 5:15 PM
> To: Tanous, Ed <ed.tanous at intel.com>; Yugi Mani <yupalani at microsoft.com>; Ali Larijani <alirhas at microsoft.com>; Paul.Vancil at dell.com; hramasub at in.ibm.com; Michael.E.Brown at dell.com
> Cc: rolfb at us.ibm.com; jwcarman at us.ibm.com; openbmc at lists.ozlabs.org; pradeep.kumar36 at tcs.com; bradleyb at fuzziesquirrel.com; Balaji.B.Rao at dell.com
> Subject: RE: Redfish on OpenBMC
> 
> Hi Ed,
> 
> On our AST2520 platform:
> Reading sensors using Redfish takes about 12-13 seconds for about 200 sensors. If 10 threads are requesting the read at the same time, the average time to return is about 70 seconds.
> 
> For reading logs, it takes about 100 seconds for 4000 log entries. The time scales linearly with the number of logs, e.g. for 2000 logs, it would take around 50s.
> 
> Chris


I'm basically ready to post some numbers for my Redfish stack. I've been busy beavering away making sure I had enough implemented to give a good idea of both the performance and the implementation difficulty.

Here are my stats:

Language: golang
Source: https://github.com/superchalupa/go-redfish
Runtime RSS: 11MB RAM at rest, 16MB under load.
Binary size: <3MB lz-compressed, 11MB uncompressed

I have a few different benchmark numbers. The script I'm using is https://github.com/superchalupa/go-redfish/blob/master/scripts/walk.sh; the time reported is just curl's total request time. My results are from running on the Nuvoton Poleg; I would expect times to be slower on an AST part. I've put build instructions in the readme.

First, plain HTTP (no compression) gives an average response time of 0.003 seconds.

For HTTPS, I have two results. The native golang SSL stack, which is poorly optimized for 32-bit ARM, averages about 0.250s per request. Using the golang "spacemonkeygo" openssl wrapper to serve via the better-optimized openssl libraries brings that down to about 0.150s per request.

And, finally, it's interesting to note what happens if you pipeline your requests. With curl, this just means putting multiple URLs on one command line so they share a connection. In that case the first request takes the full HTTPS time (0.250s or 0.150s, depending on the implementation), and each subsequent request comes in around 0.003s. This means an entire Redfish walk can be done in less than a second with my test implementation.

Notes on implementation:

I've implemented thermal using the OpenBMC D-Bus xyz.openbmc_project.Sensor.Value interface, using the mapper to dynamically discover the available thermal sensors. It looks like it ought to be quick and easy to do the same for fans. I've also implemented a few other D-Bus calls, such as the Manager.Reset action.
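To sketch how the mapper result drives the thermal collection: the ObjectMapper's GetSubTree replies with a map of object path -> owning service -> implemented interfaces, and a Sensor.Value reading follows the reading = Value x 10^Scale convention. The code below hard-codes a reply of that shape rather than making a live D-Bus call, and the helper names (scaled, thermalMembers) are invented for illustration, not taken from go-redfish:

```go
package main

import (
	"fmt"
	"math"
	"path"
	"sort"
)

// SubTree mirrors the shape of the ObjectMapper GetSubTree reply:
// object path -> owning service -> implemented interfaces.
type SubTree map[string]map[string][]string

// scaled converts a Sensor.Value property pair (Value, Scale) into a
// reading, following the OpenBMC convention reading = Value * 10^Scale.
func scaled(value, scale int64) float64 {
	return float64(value) * math.Pow10(int(scale))
}

// thermalMembers picks the sensor object paths out of a mapper reply and
// turns them into Redfish-style member names (the last path segment).
func thermalMembers(tree SubTree) []string {
	var members []string
	for objPath, services := range tree {
		for _, ifaces := range services {
			for _, iface := range ifaces {
				if iface == "xyz.openbmc_project.Sensor.Value" {
					members = append(members, path.Base(objPath))
				}
			}
		}
	}
	sort.Strings(members)
	return members
}

func main() {
	// Hard-coded stand-in for a live GetSubTree call on the system bus.
	tree := SubTree{
		"/xyz/openbmc_project/sensors/temperature/ambient": {
			"xyz.openbmc_project.Hwmon": {"xyz.openbmc_project.Sensor.Value"},
		},
		"/xyz/openbmc_project/sensors/temperature/cpu": {
			"xyz.openbmc_project.Hwmon": {"xyz.openbmc_project.Sensor.Value"},
		},
	}
	fmt.Println(thermalMembers(tree)) // [ambient cpu]
	fmt.Println(scaled(23125, -3), "degrees C")
}
```

The nice property of driving everything off the mapper is that adding a sensor on D-Bus makes it appear in the Redfish collection with no code change.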

The implementation mostly caches values. Based on information from Ed that subscribing to events might not yield nice system-wide properties, I currently refresh sensor values on a configurable timer (10s at the moment).

The more I've implemented, the more I like how things are shaking out, and I'm finally at the point where I can focus on getting some other people up to speed on the codebase rather than on getting the basics running.

Thoughts?

I'll post the actual test results shortly, I lost the machine I was using for benchmarking due to a network issue and cannot access it until tomorrow.
--
Michael

