[PATCH linux-next v5 08/13] drivers/mfd: Add PECI client mfd driver
Jae Hyun Yoo
jae.hyun.yoo at linux.intel.com
Tue Jun 19 03:09:17 AEST 2018
On 6/17/2018 10:59 PM, Lee Jones wrote:
> On Thu, 14 Jun 2018, Jae Hyun Yoo wrote:
>
>> Thanks for the review. Please see my inline answers.
>>
>> On 6/12/2018 11:30 PM, Lee Jones wrote:
>>> On Fri, 01 Jun 2018, Jae Hyun Yoo wrote:
>>>
>>>> This commit adds a PECI client mfd driver.
>>>>
>>>> Signed-off-by: Jae Hyun Yoo <jae.hyun.yoo at linux.intel.com>
>>>> Cc: Andrew Jeffery <andrew at aj.id.au>
>>>> Cc: James Feist <james.feist at linux.intel.com>
>>>> Cc: Jason M Bills <jason.m.bills at linux.intel.com>
>>>> Cc: Joel Stanley <joel at jms.id.au>
>>>> Cc: Vernon Mauery <vernon.mauery at linux.intel.com>
>>>> ---
>>>> drivers/mfd/Kconfig | 11 ++
>>>> drivers/mfd/Makefile | 1 +
>>>> drivers/mfd/peci-client.c | 205 ++++++++++++++++++++++++++++++++
>>>> include/linux/mfd/peci-client.h | 60 ++++++++++
>>>> 4 files changed, 277 insertions(+)
>>>> create mode 100644 drivers/mfd/peci-client.c
>>>> create mode 100644 include/linux/mfd/peci-client.h
>
> [...]
>
>>> I thought you defined this device as a simple-mfd?
>>>
>>> This leaves me wondering why you have a driver as well?
>>
>> Then what is the proper setting for it instead of a simple-mfd?
>
> Drivers described as "simple-mfd" have a special function. If you
> don't know what that function is, I suggest that you do not need
> it. :)
>
> "simple-mfd" devices do not usually have drivers.
>
> [...]
>
Got it. I'll remove "simple-mfd" from the DT setting.
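For clarity, the node would then look roughly like this. This is only a
sketch; the controller compatible, node names, and client address below
are placeholders from my test setup, not final bindings:

	peci0: peci-bus@0 {
		compatible = "aspeed,ast2500-peci";
		...

		peci-client@30 {
			compatible = "intel,peci-client";
			reg = <0x30>;
		};
	};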
>>>> +/*
>>>> + * Architectures other than x86 cannot include the header file, so define
>>>> + * them here. They are needed for detecting the type of client x86 CPUs
>>>> + * behind a PECI connection.
>>>> + */
>>>> +#define INTEL_FAM6_HASWELL_X 0x3F
>>>> +#define INTEL_FAM6_BROADWELL_X 0x4F
>>>> +#define INTEL_FAM6_SKYLAKE_X 0x55
>>>> +#endif
>>>> +
>>>> +#define LOWER_NIBBLE_MASK GENMASK(3, 0)
>>>> +#define UPPER_NIBBLE_MASK GENMASK(7, 4)
>>>> +
>>>> +#define CPU_ID_MODEL_MASK GENMASK(7, 4)
>>>> +#define CPU_ID_FAMILY_MASK GENMASK(11, 8)
>>>> +#define CPU_ID_EXT_MODEL_MASK GENMASK(19, 16)
>>>> +#define CPU_ID_EXT_FAMILY_MASK GENMASK(27, 20)
>>>
>>> In the case of X86 based devices, are these being redefined?
>>>
>>
>> No, these are not defined even in an x86 arch build; the masks are
>> only for PECI use cases. Also, the CPUs in this implementation are
>> not local CPUs but remote CPUs connected through the PECI interface,
>> which means this code can run on a non-x86 kernel too.
>
> This is starting to sound like a 'remoteproc' driver, no?
>
This is a driver for remote Intel CPUs that sit behind a PECI
connection. It only collects sideband data from them; it does not load
firmware onto or manage a remote processor the way remoteproc does.
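To show how the masks above are used: the client driver reads the
remote CPU's CPUID value over PECI, then combines the family/model
fields to pick one of the CPU_GEN_* generations from the enum quoted
just below. A rough sketch of that decoding, with a placeholder
function name and using FIELD_GET() from <linux/bitfield.h> instead of
open-coded shifts:

#include <linux/bitfield.h>

/* Placeholder name; the real code decodes the CPUID value that the
 * driver reads from the remote CPU over PECI.
 */
static int peci_client_detect_gen(u32 cpu_id)
{
	u32 family, model;

	/* Full family = extended family + base family */
	family = FIELD_GET(CPU_ID_EXT_FAMILY_MASK, cpu_id) +
		 FIELD_GET(CPU_ID_FAMILY_MASK, cpu_id);

	/* Full model = (extended model << 4) | base model */
	model = (FIELD_GET(CPU_ID_EXT_MODEL_MASK, cpu_id) << 4) |
		FIELD_GET(CPU_ID_MODEL_MASK, cpu_id);

	if (family != 6)
		return -ENODEV;

	switch (model) {
	case INTEL_FAM6_HASWELL_X:
		return CPU_GEN_HSX;
	case INTEL_FAM6_BROADWELL_X:
		return CPU_GEN_BRX;
	case INTEL_FAM6_SKYLAKE_X:
		return CPU_GEN_SKX;
	default:
		return -ENODEV;
	}
}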
>>>> +enum cpu_gens {
>>>> + CPU_GEN_HSX = 0, /* Haswell Xeon */
>>>> + CPU_GEN_BRX, /* Broadwell Xeon */
>>>> + CPU_GEN_SKX, /* Skylake Xeon */
>>>> +};
>>>> +
>>>> +static struct mfd_cell peci_functions[] = {
>>>> + {
>>>> + .name = "peci-cputemp",
>>>> + .of_compatible = "intel,peci-cputemp",
>>>> + },
>>>> + {
>>>> + .name = "peci-dimmtemp",
>>>> + .of_compatible = "intel,peci-dimmtemp",
>>>> + },
>>>> +};
>>>
>>> A couple of temp sensors? This isn't looking very MFD-like.
>>>
>>> What makes this an MFD?
>>>
>>
>> Currently it exposes only a couple of temperature sensors, but that
>> is just one PECI function. Other PECI functions will be added later,
>> which is why this should be an MFD. Please see the following PECI
>> sideband functions that I introduced in the cover letter of this
>> patch set:
>>
>> * Processor and DRAM thermal management
>> * Platform Manageability
>> * Processor Interface Tuning and Diagnostics
>> * Failure Analysis
>
> Are these all devices in their own right?
>
> Will they each have drivers associated with them?
>
> Is there a datasheet for this PECI device?
>
The temperature monitoring drivers I'm trying to upstream belong in
the hwmon subsystem, but the other function drivers will be added as
platform drivers or something else. That is why I made this an MFD
driver: the function drivers are separate, yet they all use the same
device - the remote CPU. The datasheet is available through
http://www.intel.com/design/literature.htm under an NDA process; Intel
provides it to HW vendors.
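To make the sharing concrete: the probe path just instantiates the
cells on the one common parent device. A minimal sketch, assuming the
devm_mfd_add_devices() MFD helper (the peci_client probe signature is
abbreviated from this patch, not final):

static int peci_client_probe(struct peci_client *client)
{
	struct device *dev = &client->dev;

	/* Each mfd_cell becomes a child platform device with the same
	 * parent - the remote CPU behind PECI - so peci-cputemp and
	 * peci-dimmtemp can bind as independent function drivers.
	 */
	return devm_mfd_add_devices(dev, PLATFORM_DEVID_NONE,
				    peci_functions,
				    ARRAY_SIZE(peci_functions),
				    NULL, 0, NULL);
}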
> [...]
>
>>>> +static struct peci_driver peci_client_driver = {
>>>
>>> I'm pretty sure this will be NAKed by the platform Maintainer.
>>
>> Is there a problem with it? Can you please give me a hint?
>
> New bus types are usually only added for well defined, heavily used
> buses which AFAIK all have their own subsystems. Why can't you use
> one of the existing bus types? Platform is the most frequently used.
>
I implemented these PECI drivers as simple platform drivers in V1, but
I changed the design to follow other maintainers' suggestions. I
believe the driver core maintainers are reviewing the PECI subsystem
implementation in this patch set as well.
>>>> + .probe = peci_client_probe,
>>>> + .id_table = peci_client_ids,
>>>> + .driver = {
>>>> + .name = "peci-client",
>>>> + .of_match_table =
>>>> + of_match_ptr(peci_client_of_table),
>>>> + },
>>>> +};
>>>> +module_peci_driver(peci_client_driver);
>