PECI API?
Brad Bishop
bradleyb at fuzziesquirrel.com
Tue Oct 31 06:21:48 AEDT 2017
> On Oct 23, 2017, at 1:03 PM, Jae Hyun Yoo <jae.hyun.yoo at linux.intel.com> wrote:
>
> Hi Dave,
>
> I’m currently working on a PECI kernel driver
I’m curious about the high-level structure, and I’m sure others are as well. Anything
you can share would be informative and appreciated!
A couple of questions that popped into my head:
- Would there be a new Linux bus type or core framework for this?
- How many drivers would there be for a full stack? Something like this, perhaps (see the sketch after this list)?
- client? (hwmon, for example)
- core? (common code)
- bmc specific implementation? (aspeed, nuvoton, emulated differences)
- Have you considered DT bindings, and if so, how might they look?
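To make the question concrete, here is a strawman of how the layers could fit
together, with a hypothetical peci-core owning an adapter abstraction
(analogous to i2c_adapter) and the bmc-specific drivers binding via DT
compatible strings. Every symbol and compatible string below is made up:

  /* Strawman only -- none of these symbols exist today. */

  #include <linux/device.h>
  #include <linux/module.h>
  #include <linux/of.h>
  #include <linux/platform_device.h>
  #include <linux/slab.h>
  #include <linux/types.h>

  /* peci-core would define the adapter abstraction that bmc-specific
   * drivers fill in, with clients (hwmon etc.) sitting on top of it. */
  struct peci_adapter {
          struct device *dev;
          int (*xfer)(struct peci_adapter *adapter, const u8 *tx,
                      size_t tx_len, u8 *rx, size_t rx_len);
  };

  /* Registration hook the hypothetical peci-core would export. */
  extern int peci_add_adapter(struct peci_adapter *adapter);

  /* The aspeed-specific piece then reduces to a thin platform driver
   * bound by a DT compatible string. */
  static int aspeed_peci_probe(struct platform_device *pdev)
  {
          struct peci_adapter *adapter;

          adapter = devm_kzalloc(&pdev->dev, sizeof(*adapter), GFP_KERNEL);
          if (!adapter)
                  return -ENOMEM;

          adapter->dev = &pdev->dev;
          /* adapter->xfer = aspeed_peci_xfer;  hardware-specific */

          return peci_add_adapter(adapter);
  }

  static const struct of_device_id aspeed_peci_of_match[] = {
          { .compatible = "aspeed,ast2500-peci" },  /* invented binding */
          { }
  };

  static struct platform_driver aspeed_peci_driver = {
          .probe = aspeed_peci_probe,
          .driver = {
                  .name = "aspeed-peci",
                  .of_match_table = aspeed_peci_of_match,
          },
  };
  module_platform_driver(aspeed_peci_driver);
  MODULE_LICENSE("GPL");

A nuvoton or emulated backend would be the same shape with a different
compatible string and xfer implementation.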
These questions are motivated by the recent upstreaming experience with FSI
(FRU Support Interface), where a similar structure was used. FSI on POWER
feels similar to PECI in terms of usage and features, so I thought I’d throw
this out there as a possible reference point to consider.
> and hwmon driver implementation. The kernel driver would provide these PECI commands as ioctls:
>
> - low-level PECI xfer command
Would a separate ‘dev’ driver similar to i2c-dev make sense here? Just
thinking out loud.
> - Ping()
> - GetDIB()
> - GetTemp()
> - RdPkgConfig()
> - WrPkgConfig()
> - RdIAMSR()
> - RdPCIConfigLocal()
> - WrPCIConfigLocal()
> - RdPCIConfig()
>
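A peci-dev style character device along the lines of i2c-dev seems like a
natural fit for the low-level xfer command. Purely illustratively -- the ioctl
number, struct layout, and device name here are all invented:

  /* Invented uapi sketch, loosely following i2c-dev's I2C_RDWR. */

  #include <linux/ioctl.h>
  #include <linux/types.h>

  struct peci_xfer_msg {
          __u8 addr;           /* PECI client address; 0x30 is CPU0 */
          __u8 tx_len;         /* bytes to write */
          __u8 rx_len;         /* bytes to read back */
          __u8 tx_buf[32];
          __u8 rx_buf[32];
  };

  #define PECI_IOC_XFER _IOWR(0xb7, 0, struct peci_xfer_msg)

  /* Userspace could then drive any raw PECI command, e.g. GetTemp()
   * (command code 0x01, one byte written, two read back):
   *
   *      struct peci_xfer_msg msg = {
   *              .addr   = 0x30,
   *              .tx_len = 1,
   *              .rx_len = 2,
   *              .tx_buf = { 0x01 },
   *      };
   *      int fd = open("/dev/peci-0", O_RDWR);
   *      ioctl(fd, PECI_IOC_XFER, &msg);
   */

That would keep the command-specific ioctls above as conveniences while still
allowing new commands without kernel changes.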
> Also, through the hwmon driver, these temperature monitoring features would be provided:
>
> - Core temperature
> - DTS thermal margin (hysteresis)
> - DDR DIMM temperature
> - etc.
Sweet!
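For the hwmon side, the newer hwmon_chip_info registration API looks like a
good fit. A rough sketch, assuming the PECI core grows some GetTemp() helper
to call from the read hook:

  /* Rough sketch: one read-only temperature channel via hwmon_chip_info. */

  #include <linux/hwmon.h>

  static umode_t peci_hwmon_is_visible(const void *data,
                                       enum hwmon_sensor_types type,
                                       u32 attr, int channel)
  {
          return 0444;    /* read-only sensors */
  }

  static int peci_hwmon_read(struct device *dev,
                             enum hwmon_sensor_types type,
                             u32 attr, int channel, long *val)
  {
          if (type != hwmon_temp || attr != hwmon_temp_input)
                  return -EOPNOTSUPP;

          /* Would issue GetTemp() over the PECI core and convert the
           * signed 1/64-degree reading to millidegrees Celsius. */
          *val = 0;       /* placeholder */
          return 0;
  }

  static const struct hwmon_ops peci_hwmon_ops = {
          .is_visible = peci_hwmon_is_visible,
          .read = peci_hwmon_read,
  };

  static const u32 peci_temp_config[] = { HWMON_T_INPUT, 0 };

  static const struct hwmon_channel_info peci_temp_info = {
          .type = hwmon_temp,
          .config = peci_temp_config,
  };

  static const struct hwmon_channel_info *peci_info[] = {
          &peci_temp_info,
          NULL
  };

  static const struct hwmon_chip_info peci_chip_info = {
          .ops = &peci_hwmon_ops,
          .info = peci_info,
  };

  /* In probe():
   *      devm_hwmon_device_register_with_info(dev, "peci_cputemp", priv,
   *                                           &peci_chip_info, NULL);
   */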
>
> Patches will be sent upstream when they are ready.
>
> Cheers,
> Jae
For completeness, a port of the Aspeed SDK PECI driver was proposed in 2016,
but it didn’t go anywhere:
https://lists.ozlabs.org/pipermail/openbmc/2016-August/004381.html
thx - brad