phosphor-hwmon bottleneck potential

Rick Altherr raltherr at google.com
Sat May 6 02:48:01 AEST 2017


I've chatted with Patrick V. separately about the driver.  AST2400/2500 fan
tach hardware measures only one fan at a time.  I think we can adjust the
driver settings to reduce the per-channel measurement time, but the total
poll time will still scale with the number of tachs being read.
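
Back-of-envelope view of that scaling (the per-channel window below is a
placeholder, not a measured number):

/* Rough model: the AST2400/2500 tach block measures one channel at a time,
 * so a full poll takes roughly channels * per-channel window when the
 * channels are read serially.  PER_CHANNEL_MS is hypothetical.
 */
#include <stdio.h>

#define PER_CHANNEL_MS 100   /* hypothetical measurement window per tach */

int main(void)
{
    for (int channels = 1; channels <= 8; channels++)
        printf("%d tachs -> ~%d ms per full poll\n",
               channels, channels * PER_CHANNEL_MS);
    return 0;
}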

On Fri, May 5, 2017 at 9:34 AM, Patrick Williams <patrick at stwcx.xyz> wrote:

> On Fri, May 05, 2017 at 01:07:45AM -0400, Brad Bishop wrote:
> > > The solution that comes to mind would be to simply parallelize sensor
> > > updates, such that phosphor-hwmon uses threads to update the sensor
> >
> > I think this is a great idea.  But I would vote for some kind of
> > non-blocking or async io rather than threads.  I don't know what support
> > for that sort of thing is available in the hwmon subsystem so I'm not
> > sure if its even possible, but it seems worth a look anyway.
>
> If the IO operation within the hwmon kernel driver is taking 1 second, I
> don't think multi-threading does anything to improve this, except
> perhaps if you have two threads: 1 for dbus and 1 for hwmon polling.
> Going to N threads or N processes for the hwmon polling would not be
> beneficial since there would only be a single driver queueing up the N
> threads anyhow.  Two threads just improves the dbus get-property
> response time for returning the cached value.
>
> Hopefully we can use sd-event with a non-blocking read on the hwmon
> sysfs entry to avoid having to resort to multi-threading.
>
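
Agreed. A minimal sketch of what a timer-driven poll with libsystemd's
sd-event could look like (the path, interval, and handler names are
placeholders, not the actual phosphor-hwmon code; note that even with
O_NONBLOCK a sysfs read can still block inside the driver while it talks
to the hardware):

/* Sketch: periodic hwmon poll driven by sd-event (libsystemd).
 * Path, interval, and names are illustrative only.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>
#include <systemd/sd-event.h>

#define POLL_USEC (1 * 1000000ULL)   /* hypothetical 1 s polling interval */

static const char *path = "/sys/class/hwmon/hwmon0/fan1_input";

static int poll_cb(sd_event_source *s, uint64_t usec, void *userdata)
{
    char buf[32];
    /* O_NONBLOCK is set, but the read may still block while the driver
     * performs the underlying hardware access. */
    int fd = open(path, O_RDONLY | O_NONBLOCK);
    if (fd >= 0) {
        ssize_t n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            printf("fan1_input = %s", buf);   /* cache + emit on dbus here */
        }
        close(fd);
    }

    /* Re-arm the one-shot timer for the next poll. */
    sd_event_source_set_time(s, usec + POLL_USEC);
    sd_event_source_set_enabled(s, SD_EVENT_ONESHOT);
    return 0;
}

int main(void)
{
    sd_event *loop = NULL;
    uint64_t now;

    sd_event_default(&loop);
    sd_event_now(loop, CLOCK_MONOTONIC, &now);
    sd_event_add_time(loop, NULL, CLOCK_MONOTONIC, now + POLL_USEC, 0,
                      poll_cb, NULL);
    return sd_event_loop(loop);
}

Everything stays on one thread; the dbus get-property path would just
return whatever value this loop last cached.
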
> I don't know which device you are interacting with that is taking so
> long or how the driver was written, but a very common optimization in
> other hwmon drivers is to read all N hwmon registers from the device
> when the user touches any of the N sysfs entries and then cache them for
> a polling interval.  This would hopefully take your 8 seconds down to 1
> second for 8 devices.  Still pretty horrible if you are having to take 1
> second of kernel time for IO every <2 seconds.
>
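
For reference, the caching pattern described above, roughly as it appears
in many in-tree hwmon drivers (struct, register, and macro names here are
made up for illustration; this is a driver fragment, not a complete module):

/* Sketch of the common hwmon caching pattern: refresh every register once
 * per polling interval, no matter which sysfs attribute the user touched.
 */
#include <linux/device.h>
#include <linux/i2c.h>
#include <linux/jiffies.h>
#include <linux/mutex.h>
#include <linux/types.h>

#define MY_NUM_REGS 8
#define MY_REG_BASE 0x10

struct my_data {
    struct i2c_client *client;
    struct mutex update_lock;
    unsigned long last_updated;   /* in jiffies */
    bool valid;
    int regs[MY_NUM_REGS];
};

static struct my_data *my_update_device(struct device *dev)
{
    struct my_data *data = dev_get_drvdata(dev);
    int i;

    mutex_lock(&data->update_lock);
    if (!data->valid || time_after(jiffies, data->last_updated + HZ)) {
        /* One pass over the hardware refreshes every attribute. */
        for (i = 0; i < MY_NUM_REGS; i++)
            data->regs[i] = i2c_smbus_read_byte_data(data->client,
                                                     MY_REG_BASE + i);
        data->last_updated = jiffies;
        data->valid = true;
    }
    mutex_unlock(&data->update_lock);

    return data;
}

Each _show() callback then formats a value out of data->regs[] instead of
touching the hardware, so reading all N attributes costs one hardware pass
per interval.
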
> --
> Patrick Williams
>