[PATCH] powerpc/kernel: Avoid redundancies on giveup_all

Cyril Bur cyrilbur at gmail.com
Fri Jun 23 16:03:12 AEST 2017


On Thu, 2017-06-22 at 17:27 -0300, Breno Leitao wrote:
> Currently giveup_all() calls __giveup_fpu(), __giveup_altivec(), and
> __giveup_vsx(). But __giveup_vsx() also calls __giveup_fpu() and
> __giveup_altivec() again, in a redundant manner.
> 
> Other than giving up FP and Altivec, __giveup_vsx() also clears
> MSR_VSX in the MSR, but that is already done by __giveup_{fpu,altivec}().
> Since VSX cannot be enabled in the MSR on its own (without FP and/or
> VEC also enabled), this is redundant as well. VSX is never enabled
> alone because every path that enables it, such as load_up_vsx() and
> restore_math(), enables FP at the same time.
> 
> This change improves giveup_all() by only about 3% on average, but
> since giveup_all() is called very frequently - around 8 times per CPU
> per second on an idle machine - the improvement may still be
> noticeable.
> 
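For reference, the call being removed boils down to roughly the following
(paraphrasing the changelog above rather than quoting the exact source):

static void __giveup_vsx(struct task_struct *tsk)
{
	unsigned long msr = tsk->thread.regs->msr;

	/* Both of these already clear MSR_VSX in the MSR ... */
	if (msr & MSR_FP)
		__giveup_fpu(tsk);
	if (msr & MSR_VEC)
		__giveup_altivec(tsk);
	/* ... so this final clear is redundant with the ones above. */
	tsk->thread.regs->msr &= ~MSR_VSX;
}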

So I totally agree, except this makes me quite nervous. I know we're
quite good at always disabling VSX when we disable FPU and ALTIVEC, and
at always turning VSX on when we enable FPU AND ALTIVEC. But still, if
we ever get that wrong...

I'm more interested in how this improves giveup_all() performance by so
much, but then hardware often surprises - I guess that's the cost of a
function call.

Perhaps caching thread.regs->msr isn't a good idea there. Could we
branch over the call in the common case, but still keep the call to the
function in case something goes horribly wrong?
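
Something along these lines is what I have in mind (untested, just a
sketch of the idea, not a concrete patch):

#ifdef CONFIG_VSX
	/*
	 * VSX should never be live without FP or VEC also live. If that
	 * invariant is ever broken, fall back to the full __giveup_vsx()
	 * path and make some noise so we notice.
	 */
	if (unlikely((usermsr & MSR_VSX) &&
		     !(usermsr & (MSR_FP | MSR_VEC)))) {
		WARN_ON_ONCE(1);
		__giveup_vsx(tsk);
	}
#endif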

> Signed-off-by: Breno Leitao <leitao at debian.org>
> Signed-off-by: Gustavo Romero <gromero at linux.vnet.ibm.com>
> ---
>  arch/powerpc/kernel/process.c | 4 ----
>  1 file changed, 4 deletions(-)
> 
> diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
> index 2ad725ef4368..5d6af58270e6 100644
> --- a/arch/powerpc/kernel/process.c
> +++ b/arch/powerpc/kernel/process.c
> @@ -494,10 +494,6 @@ void giveup_all(struct task_struct *tsk)
>  	if (usermsr & MSR_VEC)
>  		__giveup_altivec(tsk);
>  #endif
> -#ifdef CONFIG_VSX
> -	if (usermsr & MSR_VSX)
> -		__giveup_vsx(tsk);
> -#endif
>  #ifdef CONFIG_SPE
>  	if (usermsr & MSR_SPE)
>  		__giveup_spe(tsk);

