Prioritizing URIs with tight performance requirements in OpenBMC with bmcweb

Rohit Pai ropai at nvidia.com
Wed May 24 19:35:12 AEST 2023


Hello All,

We have a requirement in our platform to serve a few specific URIs with a tight performance requirement on the turnaround time (latency).
One such example is the telemetry sensor metric URI, which carries power and thermal data and has a maximum allowed turnaround time of 500 ms.

The current bmcweb design uses only a single thread to serve all URI requests and responses.
If bmcweb is processing a large amount of data (which is common for aggregation URIs), other requests get blocked and their latency suffers.
Here I am referring to the time bmcweb takes to prepare the JSON response after it has received the data from the backend service.
On our platform, we see that the power/thermal metric URI can take more than 500 ms when it is requested in parallel with other aggregation URIs that have large response payloads.

To solve this problem, we thought of a couple of solutions.


  1.  Introduce multi-threading support in bmcweb.
Does anyone have experience or feedback on making this work?
Is there any strong reason not to add multi-threading support to bmcweb, beyond the general guideline to avoid threads?


  2.  Use a reverse proxy such as nginx as the front end to redirect a few URIs to a new application server.
The idea is to develop a new application server to serve the URIs that have strict latency requirements and route the rest of the URIs to bmcweb.
       Has anyone experienced any limitations with nginx on OpenBMC platforms (w.r.t. performance, memory footprint, etc.)?
       We also have a requirement to support SSE; is there any known limitation in making such a feature work behind nginx?
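To make the reverse-proxy option concrete, here is a minimal sketch of the kind of nginx configuration we have in mind. All ports, paths, and certificate locations are placeholders, not a tested OpenBMC setup; the SSE location shows the directives commonly needed so nginx does not buffer or close the event stream.

```
# Hypothetical routing sketch; backend ports and URI paths are placeholders.
server {
    listen 443 ssl;
    ssl_certificate     /etc/ssl/certs/bmc.pem;
    ssl_certificate_key /etc/ssl/private/bmc.key;

    # Fast path: latency-critical metric URIs go to the new application server.
    location /redfish/v1/TelemetryService/ {
        proxy_pass http://127.0.0.1:8081;
    }

    # SSE endpoint: disable response buffering so events are flushed as they
    # arrive, and keep the upstream connection open for long-lived streams.
    location /redfish/v1/EventService/SSE {
        proxy_pass http://127.0.0.1:18080;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        proxy_read_timeout 24h;
    }

    # Everything else goes to bmcweb.
    location / {
        proxy_pass http://127.0.0.1:18080;
    }
}
```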


Are there any other suggestions or solutions that would help us meet our performance requirements with bmcweb?
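For reference, the multi-threading option boils down to letting more than one worker pull requests off a shared queue, so a slow aggregation request no longer holds up a fast metrics request. The sketch below is generic C++ with a hypothetical worker pool and job names; it is not bmcweb code (bmcweb is built around a single-threaded Boost.Asio event loop, so any real change would also have to make the shared connection/route state thread-safe).

```cpp
#include <chrono>
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

// Minimal worker-pool sketch (illustrative only): jobs are queued and picked
// up by any idle worker, so a slow job cannot block a fast one.
class WorkerPool
{
  public:
    explicit WorkerPool(unsigned n)
    {
        for (unsigned i = 0; i < n; ++i)
        {
            workers.emplace_back([this] { run(); });
        }
    }
    ~WorkerPool()
    {
        {
            std::lock_guard<std::mutex> lk(m);
            done = true;
        }
        cv.notify_all();
        for (auto& t : workers)
        {
            t.join();
        }
    }
    void post(std::function<void()> job)
    {
        {
            std::lock_guard<std::mutex> lk(m);
            jobs.push(std::move(job));
        }
        cv.notify_one();
    }

  private:
    void run()
    {
        for (;;)
        {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lk(m);
                cv.wait(lk, [this] { return done || !jobs.empty(); });
                if (done && jobs.empty())
                {
                    return;
                }
                job = std::move(jobs.front());
                jobs.pop();
            }
            job(); // runs outside the lock, in parallel with other workers
        }
    }
    std::vector<std::thread> workers;
    std::queue<std::function<void()>> jobs;
    std::mutex m;
    std::condition_variable cv;
    bool done = false;
};

// Demo: a "slow" aggregation-style job and a "fast" metrics-style job are
// posted in that order; with two workers the fast job still finishes first.
std::vector<std::string> demo()
{
    std::vector<std::string> order;
    std::mutex om;
    {
        WorkerPool pool(2);
        pool.post([&] {
            std::this_thread::sleep_for(std::chrono::milliseconds(200));
            std::lock_guard<std::mutex> lk(om);
            order.push_back("slow");
        });
        pool.post([&] {
            std::lock_guard<std::mutex> lk(om);
            order.push_back("fast");
        });
    } // pool destructor drains the queue and joins both workers
    return order;
}
```

With a single worker the two jobs would complete strictly in submission order, which is exactly the head-of-line blocking we observe today.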


Thanks
Rohit PAI

