[PATCH] powerpc/kernel: Avoid redundancies on giveup_all

Breno Leitao <leitao@debian.org>
Fri Jun 23 06:27:23 AEST 2017


Currently giveup_all() calls __giveup_fpu(), __giveup_altivec(), and
__giveup_vsx(). But __giveup_vsx() itself calls __giveup_fpu() and
__giveup_altivec() again, in a redundant manner.
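
For reference, __giveup_vsx() is roughly shaped like this (a simplified
sketch of arch/powerpc/kernel/process.c, not the exact source):

	static void __giveup_vsx(struct task_struct *tsk)
	{
		/*
		 * VSX state lives in the FP and VMX register sets, so
		 * giving up VSX means giving up both of those.
		 */
		if (tsk->thread.regs->msr & MSR_FP)
			__giveup_fpu(tsk);
		if (tsk->thread.regs->msr & MSR_VEC)
			__giveup_altivec(tsk);
		tsk->thread.regs->msr &= ~MSR_VSX;
	}

By the time giveup_all() reaches __giveup_vsx(), that work has already
been done for any task with MSR_FP or MSR_VEC set.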

Besides giving up FP and Altivec, __giveup_vsx() also clears MSR_VSX in
the MSR, but this is already done by __giveup_{fpu,altivec}(). Since
VSX cannot be enabled alone in the MSR (without FP and/or VEC enabled),
this is redundant as well. VSX is never enabled on its own: every path
that enables it, such as load_up_vsx() and restore_math(), enables FP
at the same time.
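
__giveup_fpu() already drops MSR_VSX along with MSR_FP, again sketched
roughly (the MSR_VSX handling in __giveup_altivec() is analogous):

	static void __giveup_fpu(struct task_struct *tsk)
	{
		unsigned long msr;

		save_fpu(tsk);
		msr = tsk->thread.regs->msr;
		msr &= ~MSR_FP;
	#ifdef CONFIG_VSX
		/* VSX cannot stay enabled once FP is disabled */
		if (cpu_has_feature(CPU_FTR_VSX))
			msr &= ~MSR_VSX;
	#endif
		tsk->thread.regs->msr = msr;
	}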

This change improves giveup_all() by just 3% on average, but since
giveup_all() is called very frequently, around 8 times per CPU per
second on an idle machine, it may add up to a noticeable improvement.

Signed-off-by: Breno Leitao <leitao@debian.org>
Signed-off-by: Gustavo Romero <gromero@linux.vnet.ibm.com>
---
 arch/powerpc/kernel/process.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/arch/powerpc/kernel/process.c b/arch/powerpc/kernel/process.c
index 2ad725ef4368..5d6af58270e6 100644
--- a/arch/powerpc/kernel/process.c
+++ b/arch/powerpc/kernel/process.c
@@ -494,10 +494,6 @@ void giveup_all(struct task_struct *tsk)
 	if (usermsr & MSR_VEC)
 		__giveup_altivec(tsk);
 #endif
-#ifdef CONFIG_VSX
-	if (usermsr & MSR_VSX)
-		__giveup_vsx(tsk);
-#endif
 #ifdef CONFIG_SPE
 	if (usermsr & MSR_SPE)
 		__giveup_spe(tsk);
-- 
2.11.0


