[PATCH v3] powerpc: spinlock: Fix spin_unlock_wait()

Peter Zijlstra peterz at infradead.org
Mon Jun 6 21:56:55 AEST 2016


On Mon, Jun 06, 2016 at 09:42:20PM +1000, Michael Ellerman wrote:
> +static inline void arch_spin_unlock_wait(arch_spinlock_t *lock)
> +{
> +	arch_spinlock_t lock_val;
> +
> +	smp_mb();
> +
> +	/*
> +	 * Atomically load and store back the lock value (unchanged). This
> +	 * ensures that our observation of the lock value is ordered with
> +	 * respect to other lock operations.
> +	 */
> +	__asm__ __volatile__(
> +"1:	" PPC_LWARX(%0, 0, %2, 0) "\n"
> +"	stwcx. %0, 0, %2\n"
> +"	bne- 1b\n"
> +	: "=&r" (lock_val), "+m" (*lock)
> +	: "r" (lock)
> +	: "cr0", "xer");
> +
> +	if (arch_spin_value_unlocked(lock_val))
> +		goto out;
> +
> +	while (!arch_spin_value_unlocked(*lock)) {
> +		HMT_low();
> +		if (SHARED_PROCESSOR)
> +			__spin_yield(lock);
> +	}
> +	HMT_medium();
> +
> +out:
> +	smp_mb();
> +}
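
For anyone reading along: the lwarx/stwcx. pair above is simply an atomic
read-modify-write that stores the observed value back unchanged, so the read
competes in the lock word's coherence order the same way a real lock/unlock
does. A rough C11 analogue (a sketch only, using <stdatomic.h> rather than the
kernel's primitives; the ordering in the patch comes from the surrounding
smp_mb() calls, so the CAS itself can stay relaxed):

#include <stdatomic.h>

/* Atomically load 'lock' and store the same value back, so the load
 * behaves like any other locked (RMW) access to the lock word. */
static unsigned int load_and_store_back(atomic_uint *lock)
{
	unsigned int val = atomic_load_explicit(lock, memory_order_relaxed);

	/* Retry until the unchanged value is stored back atomically;
	 * on failure the CAS reloads 'val' for us. */
	while (!atomic_compare_exchange_weak_explicit(lock, &val, val,
						      memory_order_relaxed,
						      memory_order_relaxed))
		;

	return val;
}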

Why the move to in-line this implementation? It looks like a fairly big
function.
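
If the concern is code size, the obvious alternative is to take it out of
line. A sketch only (the .c placement next to __spin_yield() is my assumption,
not something the patch proposes):

/* asm/spinlock.h -- keep just the declaration: */
extern void arch_spin_unlock_wait(arch_spinlock_t *lock);

/* arch/powerpc/lib/locks.c (assumed location) -- same body as quoted
 * above, just not inline, so every caller shares one copy: */
void arch_spin_unlock_wait(arch_spinlock_t *lock)
{
	/* ... body exactly as in the patch above ... */
}
EXPORT_SYMBOL(arch_spin_unlock_wait);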

