[Skiboot] [RFC PATCH] core/opal: Add OPAL call statistics
Cédric Le Goater
clg at kaod.org
Wed Apr 1 20:02:20 AEDT 2020
Hello,
On 3/30/20 9:12 AM, Oliver O'Halloran wrote:
> On Sat, Mar 14, 2020 at 5:14 AM Cédric Le Goater <clg at kaod.org> wrote:
>>
>> On 3/12/20 3:09 PM, Naveen N. Rao wrote:
>>> Cédric Le Goater wrote:
>>>> On 2/29/20 10:27 AM, Nicholas Piggin wrote:
>>>>> Cédric Le Goater's on February 29, 2020 4:34 am:
>>>>>> Here is a proposal to collect OPAL call statistics, counts and duration,
>>>>>> and track areas we could possibly improve.
>>>>>>
>>>>>> With a small Linux driver to dump the stats in debugfs, here is what
>>>>>> we get on a P9 after boot:
>>>>>
>>>>> Seems interesting... you could just do it all on the Linux side though.
>>>>
>>>> I thought we might collect more data from OPAL in opal_exit.
>>>
>>> As Nick points out, this can be done from Linux through the use of tracepoints. We already have similar statistics for hcalls through a perf script. A similar script should be able to support OPAL calls.
>>>
>>> See:
>>> https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/tools/perf/scripts/python/powerpc-hcalls.py
>>
>> Thanks,
>>
>>
>> I need to go a little deeper to collect the statistics I am
>> interested in. Some low-level HW procedures do polling.
>>
>> I have cooked a set of routines to collect statistics on
>> function calls in skiboot:
>>
>> struct stat_range {
>>         uint64_t count;
>>         uint64_t sum;
>>         uint64_t min;
>>         uint64_t max;
>> };
>>
>> struct stat {
>>         const char *name;
>>         uint64_t nr_ranges;
>>         uint64_t count;
>>         struct stat_range all;
>>         struct stat_range ranges[STAT_NR_RANGES];
>> };
>>
>> The stat structure addresses are exported in the DT under
>> "ibm,opal/stats" and the values are exposed to user space
>> using a generic sysfs driver.
>>
>> It's simple and good enough for my needs.
>
> If you're going to do this then put it in /ibm,opal/exports/ with
> hdat_map and friends. That'll work out of the box with existing
> kernels.
It's a bit more complex.
Here is how the DT looks on a boston:
root@boss01:~# ls /proc/device-tree/ibm,opal/stats/
'#address-cells'  phandle        stat@0  stat@2  stat@4  stat@6  stat@8  stat@a  stat@c
name              '#size-cells'  stat@1  stat@3  stat@5  stat@7  stat@9  stat@b  stat@d
root@boss01:~# ls /proc/device-tree/ibm\,opal/stats/stat@0/
addr  compatible  ibm,chip-id  label  name  phandle
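Each stat@N node just publishes the runtime address of its stat
structure. On the skiboot side this boils down to a handful of
device-tree calls, roughly along the lines of the sketch below (not
the actual patch: it assumes the dt_new()/dt_new_addr()/
dt_add_property_*() helpers and the opal_node global, and the
"ibm,opal-stat" compatible and function name are illustrative):

static void stat_export_dt(struct stat *s, uint32_t chip_id)
{
        static struct dt_node *parent;
        static uint64_t idx;
        struct dt_node *node;

        if (!parent) {
                parent = dt_new(opal_node, "stats");
                dt_add_property_cells(parent, "#address-cells", 1);
                dt_add_property_cells(parent, "#size-cells", 0);
        }

        /* dt_new_addr() produces the stat@0 ... stat@d names above */
        node = dt_new_addr(parent, "stat", idx++);
        dt_add_property_string(node, "compatible", "ibm,opal-stat");
        dt_add_property_string(node, "label", s->name);
        dt_add_property_cells(node, "ibm,chip-id", chip_id);
        /* runtime address of the counters, for the kernel to map */
        dt_add_property_u64(node, "addr", (uint64_t)s);
}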
And the sysfs files:
root@boss01:~# ls /sys/firmware/opal/stats/
XIVE_EQC_SCRUB-0 XIVE_IVC_SCRUB-0 XIVE_PC_CACHE_KILL-0 XIVE_SYNC-0 XIVE_VC_CACHE_KILL-0 XIVE_VPC_SCRUB-0 XIVE_VPC_SCRUB_CLEAN-0
XIVE_EQC_SCRUB-8 XIVE_IVC_SCRUB-8 XIVE_PC_CACHE_KILL-8 XIVE_SYNC-8 XIVE_VC_CACHE_KILL-8 XIVE_VPC_SCRUB-8 XIVE_VPC_SCRUB_CLEAN-8
root@boss01:~# cat /sys/firmware/opal/stats/*
XIVE_IVC_SCRUB-0: #1601 0/0/4 - #200 1/1/2 - #200 0/0/2 - #200 0/0/2 - #200 0/1/4 - #200 0/0/1 - #200 0/0/3 - #200 0/0/1 - #200 0/0/1 - #1 0/0/0 - #0 0/0/0 -
XIVE_IVC_SCRUB-8: #551 0/1/5 - #200 1/1/2 - #200 0/1/5 - #151 0/0/1 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 -
XIVE_PC_CACHE_KILL-0: #3 0/0/0 - #3 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 -
XIVE_PC_CACHE_KILL-8: #3 0/0/0 - #3 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 -
XIVE_SYNC-0: #2628 0/9/1457 - #200 0/0/0 - #200 0/0/0 - #200 0/0/0 - #28 0/0/0 - #200 0/0/1 - #200 0/0/0 - #200 0/128/1457 - #200 0/0/1 - #200 0/0/1 - #200 0/0/1 -
XIVE_SYNC-8: #536 0/36/1458 - #200 1/1/1 - #200 1/95/1458 - #136 0/0/1 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 -
XIVE_VC_CACHE_KILL-0: #3 0/0/0 - #3 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 -
XIVE_VC_CACHE_KILL-8: #3 0/0/0 - #3 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 -
XIVE_VPC_SCRUB_CLEAN-0: #64 0/3/11 - #64 0/3/11 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 -
XIVE_VPC_SCRUB_CLEAN-8: #64 1/3/8 - #64 1/3/8 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 - #0 0/0/0 -
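Each line shows the aggregate range followed by the per-bucket
ranges (the triplets presumably being min/avg/max). Behind these
numbers, every timed call is folded into the aggregate plus the
current bucket of consecutive samples; the #200 columns come from
fixed-size buckets. The update path is tiny, something like this
sketch (STAT_RANGE_SIZE and the helper names are illustrative, the
bucket size is inferred from the dumps above):

#define STAT_RANGE_SIZE 200     /* samples per bucket, per the dumps above */

static void stat_range_update(struct stat_range *r, uint64_t val)
{
        if (!r->count || val < r->min)
                r->min = val;
        if (val > r->max)
                r->max = val;
        r->sum += val;
        r->count++;
}

static void stat_update(struct stat *s, uint64_t duration)
{
        uint64_t i = s->count / STAT_RANGE_SIZE;

        /* once all buckets are used, samples pile into the last one */
        if (i >= s->nr_ranges)
                i = s->nr_ranges - 1;

        stat_range_update(&s->all, duration);
        stat_range_update(&s->ranges[i], duration);
        s->count++;
}

Callers wrap the polled section with mftb() and feed the converted
delta to stat_update().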
Nothing unexpected in the figures above yet.
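The Linux side is a thin generic driver: walk /ibm,opal/stats,
memremap() each "addr" and register one read-only file per stat
under /sys/firmware/opal/stats. Below is a rough sketch rather than
the actual driver: it assumes the opal_kobj kobject exported by the
powernv platform code, big-endian counters (skiboot runs BE) and a
minimal mirror of the firmware structure, and it prints the
aggregate range only.

#include <linux/init.h>
#include <linux/kobject.h>
#include <linux/of.h>
#include <linux/io.h>
#include <linux/slab.h>
#include <asm/opal.h>

struct opal_stat_range {
        __be64 count, sum, min, max;
};

struct opal_stat {
        __be64 name;            /* pointer in OPAL space, unused here */
        __be64 nr_ranges;
        __be64 count;
        struct opal_stat_range all;
        /* per-bucket ranges follow */
};

struct opal_stat_attr {
        struct kobj_attribute attr;
        struct opal_stat *stat;         /* memremap()ed firmware memory */
};

static ssize_t opal_stat_show(struct kobject *kobj,
                              struct kobj_attribute *attr, char *buf)
{
        struct opal_stat_attr *a =
                container_of(attr, struct opal_stat_attr, attr);
        u64 count = be64_to_cpu(a->stat->all.count);
        u64 sum = be64_to_cpu(a->stat->all.sum);

        /* aggregate range only: #count min/avg/max */
        return scnprintf(buf, PAGE_SIZE, "#%llu %llu/%llu/%llu\n", count,
                         be64_to_cpu(a->stat->all.min),
                         count ? sum / count : 0,
                         be64_to_cpu(a->stat->all.max));
}

static int __init opal_stats_init(void)
{
        struct device_node *np, *stats;
        struct kobject *kobj;

        stats = of_find_node_by_path("/ibm,opal/stats");
        if (!stats)
                return 0;

        kobj = kobject_create_and_add("stats", opal_kobj);
        if (!kobj)
                return -ENOMEM;

        for_each_child_of_node(stats, np) {
                struct opal_stat_attr *a;
                const char *label;
                u32 chip = 0;
                u64 addr;

                if (of_property_read_u64(np, "addr", &addr) ||
                    of_property_read_string(np, "label", &label))
                        continue;
                of_property_read_u32(np, "ibm,chip-id", &chip);

                a = kzalloc(sizeof(*a), GFP_KERNEL);
                if (!a)
                        break;
                a->stat = memremap(addr, sizeof(*a->stat), MEMREMAP_WB);
                if (!a->stat) {
                        kfree(a);
                        continue;
                }

                sysfs_attr_init(&a->attr.attr);
                /* file names like XIVE_SYNC-0 / XIVE_SYNC-8 above */
                a->attr.attr.name = kasprintf(GFP_KERNEL, "%s-%x", label, chip);
                a->attr.attr.mode = 0444;
                a->attr.show = opal_stat_show;
                if (sysfs_create_file(kobj, &a->attr.attr))
                        pr_warn("opal: failed to add stat %s\n", label);
        }
        return 0;
}
device_initcall(opal_stats_init);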
I can send out patches if this is interesting.
C.