[RFC] arch: Introduce new TSO memory barrier smp_tmb()

Will Deacon <will.deacon at arm.com>
Thu Nov 7 22:17:41 EST 2013


Hi Peter,

Couple of minor fixes on the arm64 side...

On Wed, Nov 06, 2013 at 01:57:36PM +0000, Peter Zijlstra wrote:
> --- a/arch/arm64/include/asm/barrier.h
> +++ b/arch/arm64/include/asm/barrier.h
> @@ -35,11 +35,59 @@
>  #define smp_mb()       barrier()
>  #define smp_rmb()      barrier()
>  #define smp_wmb()      barrier()
> +
> +#define smp_store_release(p, v)                                                \
> +do {                                                                   \
> +       compiletime_assert_atomic_type(*p);                             \
> +       smp_mb();                                                       \
> +       ACCESS_ONCE(*p) = (v);                                          \
> +} while (0)
> +
> +#define smp_load_acquire(p)                                            \
> +({                                                                     \
> +       typeof(*p) ___p1 = ACCESS_ONCE(*p);                             \
> +       compiletime_assert_atomic_type(*p);                             \
> +       smp_mb();                                                       \
> +       ___p1;                                                          \
> +})
> +
>  #else
> +
>  #define smp_mb()       asm volatile("dmb ish" : : : "memory")
>  #define smp_rmb()      asm volatile("dmb ishld" : : : "memory")
>  #define smp_wmb()      asm volatile("dmb ishst" : : : "memory")
> -#endif

Why are you getting rid of this #endif?

> +#define smp_store_release(p, v)                                                \
> +do {                                                                   \
> +       compiletime_assert_atomic_type(*p);                             \
> +       switch (sizeof(*p)) {                                           \
> +       case 4:                                                         \
> +               asm volatile ("stlr %w1, [%0]"                          \
> +                               : "=Q" (*p) : "r" (v) : "memory");      \
> +               break;                                                  \
> +       case 8:                                                         \
> +               asm volatile ("stlr %1, [%0]"                           \
> +                               : "=Q" (*p) : "r" (v) : "memory");      \
> +               break;                                                  \
> +       }                                                               \
> +} while (0)
> +
> +#define smp_load_acquire(p)                                            \
> +({                                                                     \
> +       typeof(*p) ___p1;                                               \
> +       compiletime_assert_atomic_type(*p);                             \
> +       switch (sizeof(*p)) {                                           \
> +       case 4:                                                         \
> +               asm volatile ("ldar %w0, [%1]"                          \
> +                       : "=r" (___p1) : "Q" (*p) : "memory");          \
> +               break;                                                  \
> +       case 8:                                                         \
> +               asm volatile ("ldar %0, [%1]"                           \
> +                       : "=r" (___p1) : "Q" (*p) : "memory");          \
> +               break;                                                  \
> +       }                                                               \
> +       ___p1;                                                          \
> +})

You don't need the square brackets when using the "Q" constraint (otherwise
it will expand to something like [[x0]], which gas won't accept).
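In other words, with the brackets dropped and the operand left to the constraint, the accessors would look something like this (a sketch of the suggested fix, showing only the 32-bit cases):

```
	case 4:
		asm volatile ("stlr %w1, %0"
				: "=Q" (*p) : "r" (v) : "memory");
		break;
	...
	case 4:
		asm volatile ("ldar %w0, %1"
			: "=r" (___p1) : "Q" (*p) : "memory");
		break;
```

The "Q" constraint already denotes a memory operand addressed by a single base register, so gas receives a bare `[x0]` without any extra brackets in the template.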

With those changes, for the general idea and arm/arm64 parts:

  Acked-by: Will Deacon <will.deacon at arm.com>

Will

