[PATCH] powerpc64: use fixed lock token for !CONFIG_PPC_SPLPAR
Kevin Hao
haokexin at gmail.com
Sat Mar 7 22:19:43 AEDT 2015
It makes no sense to use a variable lock token on a platform which
doesn't support shared-processor logical partitions. We can also
eliminate a memory load on these platforms by using a fixed lock
token instead.
Signed-off-by: Kevin Hao <haokexin at gmail.com>
---
arch/powerpc/include/asm/spinlock.h | 8 ++++++--
1 file changed, 6 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 4dbe072eecbe..d303cdad2519 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -30,7 +30,7 @@
#define smp_mb__after_unlock_lock() smp_mb() /* Full ordering for lock. */
-#ifdef CONFIG_PPC64
+#ifdef CONFIG_PPC_SPLPAR
/* use 0x800000yy when locked, where yy == CPU number */
#ifdef __BIG_ENDIAN__
#define LOCK_TOKEN (*(u32 *)(&get_paca()->lock_token))
@@ -187,9 +187,13 @@ extern void arch_spin_unlock_wait(arch_spinlock_t *lock);
#ifdef CONFIG_PPC64
#define __DO_SIGN_EXTEND "extsw %0,%0\n"
-#define WRLOCK_TOKEN LOCK_TOKEN /* it's negative */
#else
#define __DO_SIGN_EXTEND
+#endif
+
+#ifdef CONFIG_PPC_SPLPAR
+#define WRLOCK_TOKEN LOCK_TOKEN /* it's negative */
+#else
#define WRLOCK_TOKEN (-1)
#endif
--
2.1.0