[RFC] [PATCH] Trace TLBIE's

Michael Ellerman mpe at ellerman.id.au
Wed Nov 23 21:15:30 AEDT 2016


Balbir Singh <bsingharora at gmail.com> writes:

> Just a quick patch to trace tlbie(l)'s. The idea being that it can be
> enabled when we suspect corruption or when we need to see if we are doing
> the right thing during a flush. I think the format can be enhanced to
> make it nicer (expand the RB/RS/IS/L cases in more detail). For now I am
> sharing the idea to get input.
>
> A typical trace might look like this
>
>
> <...>-5141  [062]  1354.486693: tlbie:                
> 	tlbie with lpid 0, local 0, rb=7b5d0ff874f11f1, rs=0, ric=0 prs=0 r=0
> systemd-udevd-2584  [018]  1354.486772: tlbie:
> 	tlbie with lpid 0, local 0, rb=17be1f421adc10c1, rs=0, ric=0 prs=0 r=0
> ...
>
> qemu-system-ppc-5371  [016]  1412.369519: tlbie:
> 	tlbie with lpid 0, local 1, rb=67bd8900174c11c1, rs=0, ric=0 prs=0 r=0
> qemu-system-ppc-5377  [056]  1421.687262: tlbie:
> 	tlbie with lpid 1, local 0, rb=5f04edffa00c11c1, rs=1, ric=0 prs=0 r=0

My first reaction is "why the hell do we have so many open-coded
tlbies". So step one might be to add a static inline helper, so that we
don't have to add the trace_tlbie() call in so many places.

Also, in some of them you call trace_tlbie() before the
eieio/tlbsync/ptesync, which may not be wrong, but looks worrying at
first glance.

But overall I guess it's OK. We'd want to do a quick benchmark to make
sure it's not adding any overhead.

cheers
