[RFC 3/5] powerpc: atomic: implement atomic{,64}_{add,sub}_return_* variants

Peter Zijlstra <peterz@infradead.org>
Fri Aug 28 20:48:54 AEST 2015


On Fri, Aug 28, 2015 at 10:48:17AM +0800, Boqun Feng wrote:
> +/*
> + * Since {add,sub}_return_relaxed and xchg_relaxed are implemented with
> + * a "bne-" instruction at the end, so an isync is enough as a acquire barrier
> + * on the platform without lwsync.
> + */
> +#ifdef CONFIG_SMP
> +#define smp_acquire_barrier__after_atomic() \
> +	__asm__ __volatile__(PPC_ACQUIRE_BARRIER : : : "memory")
> +#else
> +#define smp_acquire_barrier__after_atomic() barrier()
> +#endif
> +#define arch_atomic_op_acquire(op, args...)				\
> +({									\
> +	typeof(op##_relaxed(args)) __ret  = op##_relaxed(args);		\
> +	smp_acquire_barrier__after_atomic();				\
> +	__ret;								\
> +})
> +
> +#define arch_atomic_op_release(op, args...)				\
> +({									\
> +	smp_lwsync();							\
> +	op##_relaxed(args);						\
> +})
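
For context, a minimal sketch of how these wrappers are meant to be consumed:
the _acquire/_release variants are generated from the _relaxed implementation
plus the barriers above. The macro names below are illustrative, not taken
from the patch itself:

/*
 * Illustrative only: build the acquire/release variants of an atomic
 * RMW op from its _relaxed implementation.
 */
#define atomic_add_return_acquire(i, v)					\
	arch_atomic_op_acquire(atomic_add_return, i, v)

#define atomic_add_return_release(i, v)					\
	arch_atomic_op_release(atomic_add_return, i, v)

/*
 * arch_atomic_op_acquire() above expands to atomic_add_return_relaxed(i, v)
 * followed by smp_acquire_barrier__after_atomic(); the _release variant
 * issues smp_lwsync() before atomic_add_return_relaxed(i, v).
 */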

Urgh, so this is RCpc. We were trying to get rid of that if possible.
Let's wait until that's settled before introducing more of it.

lkml.kernel.org/r/20150820155604.GB24100@arm.com
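
In case the RCpc/RCsc distinction is unclear: with an lwsync-based release and
an isync- (or lwsync-) based acquire, a release followed by an acquire on the
same CPU does not act as a full barrier, because nothing orders the release
store against the later acquire load. A store-buffering litmus test, sketched
here with the kernel's generic release/acquire primitives purely for
illustration:

int x, y;	/* both initially 0 */
int r0, r1;

/* CPU 0 */
void cpu0(void)
{
	smp_store_release(&x, 1);	/* lwsync; store */
	r0 = smp_load_acquire(&y);	/* load; acquire barrier */
}

/* CPU 1 */
void cpu1(void)
{
	smp_store_release(&y, 1);
	r1 = smp_load_acquire(&x);
}

/*
 * RCsc release/acquire forbids the outcome r0 == 0 && r1 == 0; the
 * lwsync/isync scheme above allows it, since lwsync permits the
 * store->load reordering between the release and the acquire.  The same
 * reasoning applies to the atomic _release/_acquire variants built from
 * the wrappers in this patch.
 */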

