PECI API?

Jae Hyun Yoo jae.hyun.yoo at linux.intel.com
Wed Nov 1 03:26:40 AEDT 2017


>> On Oct 23, 2017, at 1:03 PM, Jae Hyun Yoo <jae.hyun.yoo at linux.intel.com> wrote:
>> 
>> Hi Dave,
>>  
>> I'm currently working on PECI kernel driver
>
> I'm curious about the high level structure.  I'm sure others are as well.
> Anything you can share would be informative and appreciated!
>
> A couple questions that popped into my head:
>
>  - Would there be a new Linux bus type or core framework for this?  
>  - How many drivers would there be for a full stack.  Something like this?
>      - client? (hwmon, for example)
>      - core? (common code)
>      - bmc specific implementation? (aspeed, nuvoton, emulated differences)
>  - Have you considered using DT bindings and/or how they would look?
>
> These questions are motivated by the recent upstreaming experience with
> FSI (flexible support interface) where a similar structure was used.
> FSI on POWER feels similar to PECI in terms of usage and features so I thought
> I'd just throw this out there as a possible reference point to consider.

PECI uses a single-wire interface that differs from other popular interfaces such
as I2C and MTD, so the kernel has no common core framework for it. I'm therefore
adding the main PECI control driver as a misc device, plus a separate driver in the
hwmon subsystem. The reason I split the implementation into two drivers is that
PECI can be used not only for temperature monitoring but also for platform
manageability, processor diagnostics, and failure analysis, so the misc control
driver will serve as a common PECI driver for all of those purposes, while the
hwmon driver will use the common PECI driver for temperature monitoring only.
These drivers will be a BMC-specific implementation supporting the Aspeed chipset
only. I did not consider Nuvoton support in my implementation because Nuvoton has
a different hardware and register scheme, and Nuvoton already has dedicated driver
implementations in the hwmon subsystem for each of its chipset variants
(nct6683.c, nct6775.c, nct7802.c).

>> and hwmon driver implementation. The kernel driver would provide these PECI commands as ioctls:
>>  
>> - low-level PECI xfer command
>
> Would a separate 'dev' driver similar to i2c-dev make sense here?  Just thinking out loud.
>

Yes, the drivers will be separated in two, but it's hard to say this is similar to
i2c-dev; it would take a somewhat different shape.

>> - Ping()
>> - GetDIB()
>> - GetTemp()
>> - RdPkgConfig()
>> - WrPkgConfig()
>> - RdIAMSR()
>> - RdPCIConfigLocal()
>> - WrPCIConfigLocal()
>> - RdPCIConfig()
>>  
>> Also, through the hwmon driver, these temperature monitoring features would be provided:
>>  
>> - Core temperature
>> - DTS thermal margin (hysteresis)
>> - DDR DIMM temperature
>> - etc.
>
> Sweet!
>
>>  
>> Patches will come in to upstream when it is ready.
>>  
>> Cheers,
>> Jae
>
> For completeness, a port of the Aspeed SDK PECI driver was proposed in 2016 but it didn't go anywhere:
>
> https://lists.ozlabs.org/pipermail/openbmc/2016-August/004381.html
>
> thx - brad
>

My implementation is also heavily based on the Aspeed SDK driver, but modified
significantly to provide functionality better suited to the OpenBMC project.
Hopefully it can be introduced soon.

thx,
Jae
