[PATCH v11 03/14] peci: Add support for PECI bus driver core

Andy Shevchenko andriy.shevchenko at intel.com
Thu Dec 12 07:18:17 AEDT 2019


On Wed, Dec 11, 2019 at 11:46:13AM -0800, Jae Hyun Yoo wrote:
> This commit adds the PECI bus core driver implementation to the Linux
> driver framework.
> 
> PECI (Platform Environment Control Interface) is a one-wire bus interface
> that provides a communication channel from Intel processors and chipset
> components to external monitoring or control devices. PECI is designed to
> support the following sideband functions:
> 
> * Processor and DRAM thermal management
>   - Processor fan speed control is managed by comparing Digital Thermal
>     Sensor (DTS) thermal readings acquired via PECI against the
>     processor-specific fan speed control reference point, or TCONTROL. Both
>     TCONTROL and DTS thermal readings are accessible via the processor PECI
>     client. These variables are referenced to a common temperature, the TCC
>     activation point, and are both defined as negative offsets from that
>     reference (see the conversion sketch after this list).
>   - PECI based access to the processor package configuration space provides
>     a means for Baseboard Management Controllers (BMC) or other platform
>     management devices to actively manage the processor and memory power
>     and thermal features.
> 
> * Platform Manageability
>   - Platform manageability functions, including thermal, power, and error
>     monitoring. Note that platform 'power' management includes monitoring
>     and control for both the processor and DRAM subsystem to assist with
>     data center power limiting.
>   - PECI allows read access to certain error registers in the processor MSR
>     space and status monitoring registers in the PCI configuration space
>     within the processor and downstream devices.
>   - PECI permits writes to certain registers in the processor PCI
>     configuration space.
> 
> * Processor Interface Tuning and Diagnostics
>   - Processor interface tuning and diagnostics capabilities
>     (Intel Interconnect BIST). The processor's Intel Interconnect Built-In
>     Self Test (Intel IBIST) allows for in-field diagnostics of the Intel UPI
>     and memory controller interfaces. PECI provides a port to execute these
>     diagnostics via its PCI configuration read and write capabilities.
> 
> * Failure Analysis
>   - Output the state of the processor after a failure for analysis via
>     Crashdump.
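
To make the offset arithmetic above concrete, here is a minimal sketch
(the function name and the millidegree-Celsius unit are assumptions for
illustration, not taken from this patch):

/*
 * DTS readings and TCONTROL are negative offsets from the TCC
 * activation temperature, so an absolute temperature is recovered
 * by adding the (negative) offset to the TCC activation point.
 */
static int peci_abs_temp_mc(int tcc_activation_mc, int dts_offset_mc)
{
	/* dts_offset_mc is <= 0 by definition */
	return tcc_activation_mc + dts_offset_mc;
}
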
> 
> PECI uses a single wire for self-clocking and data transfer. The bus
> requires no additional control lines. The physical layer is a self-clocked
> one-wire bus that begins each bit with a driven, rising edge from an idle
> level near zero volts. The duration of the signal driven high depends on
> whether the bit value is a logic '0' or a logic '1'. PECI also supports a
> variable data transfer rate that is established with every message. In this
> way, the interface is highly flexible even though the underlying logic is
> simple.
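
As a rough illustration of that encoding, a receiver could recover a bit
from the measured timings; the 50% threshold below is an assumption made
for the sketch, not a value from the PECI specification:

/*
 * Each bit begins with a driven rising edge; the value is decided by
 * how long the line is held high relative to the total bit time.
 */
static u8 peci_decode_bit(u32 high_ns, u32 bit_time_ns)
{
	return (2 * high_ns > bit_time_ns) ? 1 : 0;
}
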
> 
> The interface design was optimized for interfacing between an Intel
> processor and chipset components in both single processor and multiple
> processor environments. The single wire interface provides low board
> routing overhead for the multiple load connections in the congested routing
> area near the processor and chipset components. Bus speed, error checking,
> and low protocol overhead provide adequate link bandwidth and reliability
> to transfer critical device operating conditions and configuration
> information.
> 
> This implementation provides the basic framework to add PECI extensions to
> the Linux bus and device models. A hardware-specific 'Adapter' driver can
> be attached to the PECI bus to provide the sideband functions described
> above. It is also possible to access all devices on an adapter from
> userspace through the /dev interface. A device-specific 'Client' driver can
> also be attached to the PECI bus so that each processor client's features
> can be supported through an adapter connection on the bus.
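
As a hypothetical usage sketch (the type, field, and function names below
are assumptions based on the model described here, not verified against
this patch), a hardware-specific adapter driver would fill in a transfer
callback and register itself with the core:

/* All names below are illustrative assumptions */
static int my_peci_xfer(struct peci_adapter *adapter,
			struct peci_xfer_msg *msg)
{
	/* Drive the single-wire bus: send the tx buffer, fill the rx buffer */
	return 0;
}

static int my_peci_probe(struct platform_device *pdev)
{
	struct peci_adapter *adapter;

	adapter = peci_alloc_adapter(&pdev->dev, 0);
	if (!adapter)
		return -ENOMEM;

	adapter->xfer = my_peci_xfer;

	return peci_add_adapter(adapter);
}
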

Nice, we have some drivers under drivers/hwmon. Are they using PECI? How will
they be integrated with this? Can this be part of drivers/hwmon?

> Changes since v10:

It's funny that I don't remember the previous version(s), but anyway I'll
comment on this later on -- it has at least several style issues /
inconveniences.

> - Split out peci-dev module from peci-core module.
> - Added PECI 4.0 command set support.
> - Refined 32-bit boundary alignment for all PECI ioctl command structs.
> - Added DMA safe command buffer handling in peci-core.
> - Refined Kconfig dependencies in the PECI subsystem.
> - Fixed minor bugs and style issues.
> - configfs support isn't added in this patch set. Will add that using a
>   separate patch set.

> +config PECI
> +	tristate "PECI support"
> +	select CRC8

> +	default n

For a start, this one is redundant, since 'n' is already the implicit
default. If you have more of these, drop them as well.
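
That is, since a tristate symbol already defaults to 'n', the entry can
simply be:

config PECI
	tristate "PECI support"
	select CRC8
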

> +#include <linux/bitfield.h>
> +#include <linux/crc8.h>
> +#include <linux/delay.h>
> +#include <linux/mm.h>
> +#include <linux/module.h>

> +#include <linux/of_device.h>

What about ACPI? Can you use the fwnode API instead?
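
For example, the firmware-agnostic device property accessors work with
both DT and ACPI; the property name here is only a placeholder:

#include <linux/property.h>

static int peci_get_clock_freq(struct device *dev, u32 *freq)
{
	/* Resolves from either a DT property or an ACPI _DSD entry */
	return device_property_read_u32(dev, "clock-frequency", freq);
}
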

> +#include <linux/peci.h>
> +#include <linux/pm_domain.h>
> +#include <linux/pm_runtime.h>
> +#include <linux/sched/task_stack.h>
> +#include <linux/slab.h>

-- 
With Best Regards,
Andy Shevchenko



