[PATCH v6 0/4] Add perf interface to expose nvdimm

kajoljain kjain at linux.ibm.com
Fri Feb 25 17:38:09 AEDT 2022



On 2/25/22 11:25, Nageswara Sastry wrote:
> 
> 
> On 17/02/22 10:03 pm, Kajol Jain wrote:
>> Patchset adds performance stats reporting support for nvdimm.
>> The added interface includes support for pmu register/unregister
>> functions. A structure called nvdimm_pmu is added to carry
>> arch/platform specific data such as the cpumask, the nvdimm device
>> pointer and pmu event functions like event_init/add/read/del.
>> Users can use the standard perf tool to access perf events
>> exposed via the pmu.
>>
>> The interface also defines the supported event list, config fields
>> for the event attributes and their corresponding bit values, which are
>> exported via sysfs. Patch 3 exposes IBM pseries platform nmem* device
>> performance stats using this interface.
>>
>> Result from a power9 pseries lpar with 2 nvdimm devices:
>>
>> Ex: List all events with perf list
>>
>> command:# perf list nmem
>>
>>    nmem0/cache_rh_cnt/                                [Kernel PMU event]
>>    nmem0/cache_wh_cnt/                                [Kernel PMU event]
>>    nmem0/cri_res_util/                                [Kernel PMU event]
>>    nmem0/ctl_res_cnt/                                 [Kernel PMU event]
>>    nmem0/ctl_res_tm/                                  [Kernel PMU event]
>>    nmem0/fast_w_cnt/                                  [Kernel PMU event]
>>    nmem0/host_l_cnt/                                  [Kernel PMU event]
>>    nmem0/host_l_dur/                                  [Kernel PMU event]
>>    nmem0/host_s_cnt/                                  [Kernel PMU event]
>>    nmem0/host_s_dur/                                  [Kernel PMU event]
>>    nmem0/med_r_cnt/                                   [Kernel PMU event]
>>    nmem0/med_r_dur/                                   [Kernel PMU event]
>>    nmem0/med_w_cnt/                                   [Kernel PMU event]
>>    nmem0/med_w_dur/                                   [Kernel PMU event]
>>    nmem0/mem_life/                                    [Kernel PMU event]
>>    nmem0/poweron_secs/                                [Kernel PMU event]
>>    ...
>>    nmem1/mem_life/                                    [Kernel PMU event]
>>    nmem1/poweron_secs/                                [Kernel PMU event]
>>
>> Patch1:
>>          Introduces the nvdimm_pmu structure
>> Patch2:
>>          Adds a common interface to add arch/platform specific data,
>>          including the nvdimm device pointer and pmu data, along with
>>          pmu event functions. It also defines the supported event list,
>>          adds attribute groups for format, events and cpumask, and
>>          adds code for cpu hotplug support.
>> Patch3:
>>          Adds code in arch/powerpc/platforms/pseries/papr_scm.c to expose
>>          the nmem* pmu. It fills in the nvdimm_pmu structure with the pmu
>>          name, capabilities, cpumask and event functions, and then
>>          registers the pmu by adding callbacks to register_nvdimm_pmu.
>> Patch4:
>>          Sysfs documentation patch
>>
>> Changelog
> 
> Tested these patches with the automated tests at
> avocado-misc-tests/perf/perf_nmem.py
> URL:
> https://github.com/avocado-framework-tests/avocado-misc-tests/blob/master/perf/perf_nmem.py
> 
> 
> 1. On the system where the target id and online id were different, no
> value was seen in 'cpumask' and those tests failed.
> 
> Example:
> Log from dmesg
> ...
> papr_scm ibm,persistent-memory:ibm,pmemory at 44100003: Region registered
> with target node 1 and online node 0
> ...

Hi Nageswara Sastry,
       Thanks for testing the patch set. Yes, you are right: this issue
can occur when the target node id and online node id differ, which
happens when the target node is not online. Thanks for pointing it
out.

The function dev_to_node() returns the node id for a given nvdimm
device, and that node can be offline in some scenarios. In that case we
should instead use the node id returned by numa_map_to_online_node(),
which, when the given node is offline, looks up the closest online node
and returns its id.

Can you try the change below and let me know whether you still see
this issue?

diff --git a/arch/powerpc/platforms/pseries/papr_scm.c b/arch/powerpc/platforms/pseries/papr_scm.c
index bdf2620db461..4dd513d7c029 100644
--- a/arch/powerpc/platforms/pseries/papr_scm.c
+++ b/arch/powerpc/platforms/pseries/papr_scm.c
@@ -536,7 +536,7 @@ static void papr_scm_pmu_register(struct papr_scm_priv *p)
                                PERF_PMU_CAP_NO_EXCLUDE;

        /*updating the cpumask variable */
-       nodeid = dev_to_node(&p->pdev->dev);
+       nodeid = numa_map_to_online_node(dev_to_node(&p->pdev->dev));
        nd_pmu->arch_cpumask = *cpumask_of_node(nodeid);

Thanks,
Kajol Jain

> 
> tests log:
>  (1/9) perf_nmem.py:perfNMEM.test_pmu_register_dmesg: PASS (1.13 s)
>  (2/9) perf_nmem.py:perfNMEM.test_sysfs: PASS (1.10 s)
>  (3/9) perf_nmem.py:perfNMEM.test_pmu_count: PASS (1.07 s)
>  (4/9) perf_nmem.py:perfNMEM.test_all_events: PASS (18.14 s)
>  (5/9) perf_nmem.py:perfNMEM.test_all_group_events: PASS (2.18 s)
>  (6/9) perf_nmem.py:perfNMEM.test_mixed_events: CANCEL: With single PMU
> mixed events test is not possible. (1.10 s)
>  (7/9) perf_nmem.py:perfNMEM.test_pmu_cpumask: ERROR: invalid literal
> for int() with base 10: '' (1.10 s)
>  (8/9) perf_nmem.py:perfNMEM.test_cpumask: ERROR: invalid literal for
> int() with base 10: '' (1.10 s)
>  (9/9) perf_nmem.py:perfNMEM.test_cpumask_cpu_off: ERROR: invalid
> literal for int() with base 10: '' (1.07 s)
> 
> 2. On the system where the target id and online id were the same, a
> value was seen in 'cpumask' and those tests passed.
> 
> tests log:
>  (1/9) perf_nmem.py:perfNMEM.test_pmu_register_dmesg: PASS (1.16 s)
>  (2/9) perf_nmem.py:perfNMEM.test_sysfs: PASS (1.10 s)
>  (3/9) perf_nmem.py:perfNMEM.test_pmu_count: PASS (1.12 s)
>  (4/9) perf_nmem.py:perfNMEM.test_all_events: PASS (18.10 s)
>  (5/9) perf_nmem.py:perfNMEM.test_all_group_events: PASS (2.23 s)
>  (6/9) perf_nmem.py:perfNMEM.test_mixed_events: CANCEL: With single PMU
> mixed events test is not possible. (1.13 s)
>  (7/9) perf_nmem.py:perfNMEM.test_pmu_cpumask: PASS (1.08 s)
>  (8/9) perf_nmem.py:perfNMEM.test_cpumask: PASS (1.09 s)
>  (9/9) perf_nmem.py:perfNMEM.test_cpumask_cpu_off: PASS (1.62 s)
> 
>> ---
>> Resend v5 -> v6
>> - No logic change, just a rebase to latest upstream and
>>    tested the patchset.
>>
>> - Link to the patchset Resend v5: https://lkml.org/lkml/2021/11/15/3979
>>
>> v5 -> Resend v5
>> - Resend the patchset
>>
>> - Link to the patchset v5: https://lkml.org/lkml/2021/9/28/643
>>
>> v4 -> v5:
>> - Remove multiple variables defined in the nvdimm_pmu structure,
>>    including name and the pmu functions (event_init/add/del/read), as
>>    they were only used to copy them again into the pmu variable. We now
>>    do this step directly in arch specific code, as suggested by Dan Williams.
>>
>> - Remove the attribute group field from the nvdimm pmu structure and
>>    define these attribute groups in the common interface, which
>>    includes format and the event list along with cpumask, as suggested
>>    by Dan Williams.
>>    Since we added static definitions for the attribute groups needed
>>    in the common interface, remove the corresponding code from papr.
>>
>> - Add nvdimm pmu event list with event codes in the common interface.
>>
>> - Remove Acked-by/Reviewed-by/Tested-by tags as code is refactored
>>    to handle review comments from Dan.
>>
>> - Make the nvdimm_pmu_free_hotplug_memory function static, as reported
>>    by the kernel test robot, and add the corresponding Reported-by tag.
>>
>> - Link to the patchset v4: https://lkml.org/lkml/2021/9/3/45
>>
>> v3 -> v4
>> - Rebase code on top of current papr_scm code without any logical
>>    changes.
>>
>> - Added Acked-by tag from Peter Zijlstra and Reviewed-by tag
>>    from Madhavan Srinivasan.
>>
>> - Link to the patchset v3: https://lkml.org/lkml/2021/6/17/605
>>
>> v2 -> v3
>> - Added Tested-by tag.
>>
>> - Fix nvdimm mailing list in the ABI Documentation.
>>
>> - Link to the patchset v2: https://lkml.org/lkml/2021/6/14/25
>>
>> v1 -> v2
>> - Fix hotplug code by adding a pmu migration call
>>    in case the current designated cpu goes offline, as
>>    pointed out by Peter Zijlstra.
>>
>> - Removed the return -1 part from the cpu hotplug offline
>>    function.
>>
>> - Link to the patchset v1: https://lkml.org/lkml/2021/6/8/500
>>
>> Kajol Jain (4):
>>    drivers/nvdimm: Add nvdimm pmu structure
>>    drivers/nvdimm: Add perf interface to expose nvdimm performance stats
>>    powerpc/papr_scm: Add perf interface support
>>    docs: ABI: sysfs-bus-nvdimm: Document sysfs event format entries for
>>      nvdimm pmu
>>
>>   Documentation/ABI/testing/sysfs-bus-nvdimm |  35 +++
>>   arch/powerpc/include/asm/device.h          |   5 +
>>   arch/powerpc/platforms/pseries/papr_scm.c  | 225 ++++++++++++++
>>   drivers/nvdimm/Makefile                    |   1 +
>>   drivers/nvdimm/nd_perf.c                   | 328 +++++++++++++++++++++
>>   include/linux/nd.h                         |  41 +++
>>   6 files changed, 635 insertions(+)
>>   create mode 100644 drivers/nvdimm/nd_perf.c
>>
> 

