[PATCH v6 tip/core/locking 8/8] powerpc: Full barrier for smp_mb__after_unlock_lock()

Paul E. McKenney paulmck@linux.vnet.ibm.com
Thu Dec 12 08:59:11 EST 2013


From: "Paul E. McKenney" <paulmck at linux.vnet.ibm.com>

The powerpc lock acquisition sequence is as follows:

	lwarx; cmpwi; bne; stwcx.; lwsync;

Lock release is as follows:

	lwsync; stw;

If CPU 0 does a store (say, x=1) then a lock release, and CPU 1 does a
lock acquisition then a load (say, r1=y), then there is no guarantee of
a full memory barrier between the store to 'x' and the load from 'y'.
To see this, suppose that CPUs 0 and 1 are hardware threads in the same
core that share a store buffer, and that CPU 2 is in some other core,
and that CPU 2 does the following:

	y = 1; sync; r2 = x;

If 'x' and 'y' are both initially zero, then the lock acquisition and
release sequences above can result in r1 and r2 both being equal to
zero, which could not happen if unlock+lock were a full barrier.
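
Written out as kernel-style C, the scenario is as follows.  This is
a sketch only: 'lck', 'x', and 'y' are hypothetical names, and each
function runs on the indicated CPU.

	#include <linux/compiler.h>
	#include <linux/spinlock.h>

	static DEFINE_SPINLOCK(lck);	/* Hypothetical lock. */
	static int x, y;		/* Both initially zero. */

	static void cpu0(void)
	{
		spin_lock(&lck);	/* CPU 0 must hold the lock... */
		ACCESS_ONCE(x) = 1;	/* ... to store, then release it. */
		spin_unlock(&lck);	/* lwsync; stw */
	}

	static void cpu1(int *r1)
	{
		spin_lock(&lck);	/* lwarx; cmpwi; bne; stwcx.; lwsync */
		*r1 = ACCESS_ONCE(y);
		spin_unlock(&lck);
	}

	static void cpu2(int *r2)
	{
		ACCESS_ONCE(y) = 1;
		smp_mb();		/* sync */
		*r2 = ACCESS_ONCE(x);
	}

The shared store buffer lets CPU 1 observe CPU 0's lock release, and
thus enter its critical section, before the store to 'x' has become
visible to CPU 2, so nothing orders CPU 0's store against CPU 1's
load.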

This commit therefore makes powerpc's smp_mb__after_unlock_lock() be a
full barrier.
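
A caller that needs the unlock+lock pair to act as a full barrier
places the primitive immediately after the lock acquisition.  Again
a sketch, reusing the hypothetical names above:

	static void cpu1(int *r1)
	{
		spin_lock(&lck);
		smp_mb__after_unlock_lock();	/* sync on powerpc. */
		*r1 = ACCESS_ONCE(y);
		spin_unlock(&lck);
	}

With this in place, the outcome of r1 and r2 both being zero is
forbidden.  The generic definition introduced earlier in this series
remains a no-op, so the added cost is confined to powerpc.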

Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Benjamin Herrenschmidt <benh@kernel.crashing.org>
Cc: Paul Mackerras <paulus@samba.org>
Cc: linuxppc-dev@lists.ozlabs.org
---
 arch/powerpc/include/asm/spinlock.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
index 5f54a744dcc5..f6e78d63fb6a 100644
--- a/arch/powerpc/include/asm/spinlock.h
+++ b/arch/powerpc/include/asm/spinlock.h
@@ -28,6 +28,8 @@
 #include <asm/synch.h>
 #include <asm/ppc-opcode.h>
 
+#define smp_mb__after_unlock_lock()	smp_mb()  /* Full ordering for lock. */
+
 #define arch_spin_is_locked(x)		((x)->slock != 0)
 
 #ifdef CONFIG_PPC64
-- 
1.8.1.5


