[RESEND PATCH] documentation: memory-barriers: fix smp_mb__before_spinlock() semantics

Paul E. McKenney paulmck at linux.vnet.ibm.com
Thu Apr 2 02:26:05 AEDT 2015


On Tue, Mar 31, 2015 at 09:39:41AM +0100, Will Deacon wrote:
> Our current documentation claims that, when followed by an ACQUIRE,
> smp_mb__before_spinlock() orders prior loads against subsequent loads
> and stores, which isn't actually true.
> 
> Fix the documentation to state that this sequence orders only prior
> stores against subsequent loads and stores.
> 
> Cc: Oleg Nesterov <oleg at redhat.com>
> Cc: "Paul E. McKenney" <paulmck at linux.vnet.ibm.com>
> Cc: Peter Zijlstra <peterz at infradead.org>
> Signed-off-by: Will Deacon <will.deacon at arm.com>
> ---
> 
> Could somebody pick this up please? I guess I could route it via the arm64
> tree with an Ack, but I'd rather it went through Paul or -tip.

Queued for 4.2, along with a separate patch for PowerPC that makes it
actually behave as described below.  ;-)

							Thanx, Paul

>  Documentation/memory-barriers.txt | 7 +++----
>  1 file changed, 3 insertions(+), 4 deletions(-)
> 
> diff --git a/Documentation/memory-barriers.txt b/Documentation/memory-barriers.txt
> index ca2387ef27ab..fa28a0c1e2b1 100644
> --- a/Documentation/memory-barriers.txt
> +++ b/Documentation/memory-barriers.txt
> @@ -1768,10 +1768,9 @@ for each construct.  These operations all imply certain barriers:
> 
>       Memory operations issued before the ACQUIRE may be completed after
>       the ACQUIRE operation has completed.  An smp_mb__before_spinlock(),
> -     combined with a following ACQUIRE, orders prior loads against
> -     subsequent loads and stores and also orders prior stores against
> -     subsequent stores.  Note that this is weaker than smp_mb()!  The
> -     smp_mb__before_spinlock() primitive is free on many architectures.
> +     combined with a following ACQUIRE, orders prior stores against
> +     subsequent loads and stores. Note that this is weaker than smp_mb()!
> +     The smp_mb__before_spinlock() primitive is free on many architectures.
> 
>   (2) RELEASE operation implication:
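
To illustrate the corrected guarantee, here is a minimal sketch in
kernel-style C (the variable and function names are made up for
illustration; this is not code from the tree):

	static DEFINE_SPINLOCK(demo_lock);
	static int flag, data;

	void demo_writer(void)
	{
		WRITE_ONCE(flag, 1);		/* prior store      */
		smp_mb__before_spinlock();
		spin_lock(&demo_lock);		/* ACQUIRE          */
		(void)READ_ONCE(data);		/* subsequent load  */
		WRITE_ONCE(data, 2);		/* subsequent store */
		spin_unlock(&demo_lock);
	}

The store to "flag" cannot be reordered after the accesses to "data"
inside the critical section.  A *load* issued before
smp_mb__before_spinlock() gets no such guarantee, which is exactly
what the wording change above is about.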

------------------------------------------------------------------------

    powerpc: Fix smp_mb__before_spinlock()
    
    Currently, smp_mb__before_spinlock() is defined to be smp_wmb()
    in core code, but this is not sufficient on PowerPC.  This patch
    therefore supplies an override for the generic definition to
    strengthen smp_mb__before_spinlock() to smp_mb(), as is needed
    on PowerPC.
    
    Signed-off-by: Paul E. McKenney <paulmck at linux.vnet.ibm.com>
    Cc: <linuxppc-dev at lists.ozlabs.org>

diff --git a/arch/powerpc/include/asm/barrier.h b/arch/powerpc/include/asm/barrier.h
index a3bf5be111ff..1124f59b8df4 100644
--- a/arch/powerpc/include/asm/barrier.h
+++ b/arch/powerpc/include/asm/barrier.h
@@ -89,5 +89,6 @@ do {									\
 
 #define smp_mb__before_atomic()     smp_mb()
 #define smp_mb__after_atomic()      smp_mb()
+#define smp_mb__before_spinlock()   smp_mb()
 
 #endif /* _ASM_POWERPC_BARRIER_H */
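
For reference, the generic fallback that this override replaces lives
in include/linux/spinlock.h and looks roughly like the following
(quoted from memory, so check the tree for the exact wording):

	#ifndef smp_mb__before_spinlock
	#define smp_mb__before_spinlock()	smp_wmb()
	#endif

Defining the macro in the PowerPC barrier.h suppresses that fallback.
smp_wmb() is typically lwsync on PowerPC, which orders prior stores
against later stores but not against later loads, hence the promotion
to a full smp_mb() (sync) here.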


