[PATCH 2/3] powerpc/spinlocks: Rename SPLPAR-only spinlocks
Andrew Donnellan
ajd at linux.ibm.com
Thu Aug 1 13:27:28 AEST 2019
On 28/7/19 10:54 pm, Christopher M. Riedl wrote:
> The __rw_yield and __spin_yield functions only pertain to SPLPAR mode.
> Rename them to make this relationship obvious.
>
> Signed-off-by: Christopher M. Riedl <cmr at informatik.wtf>
Reviewed-by: Andrew Donnellan <ajd at linux.ibm.com>
> ---
> arch/powerpc/include/asm/spinlock.h | 6 ++++--
> arch/powerpc/lib/locks.c | 6 +++---
> 2 files changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> index 8631b0b4e109..1e7721176f39 100644
> --- a/arch/powerpc/include/asm/spinlock.h
> +++ b/arch/powerpc/include/asm/spinlock.h
> @@ -101,8 +101,10 @@ static inline int arch_spin_trylock(arch_spinlock_t *lock)
>
> #if defined(CONFIG_PPC_SPLPAR)
> /* We only yield to the hypervisor if we are in shared processor mode */
> -extern void __spin_yield(arch_spinlock_t *lock);
> -extern void __rw_yield(arch_rwlock_t *lock);
> +void splpar_spin_yield(arch_spinlock_t *lock);
> +void splpar_rw_yield(arch_rwlock_t *lock);
> +#define __spin_yield(x) splpar_spin_yield(x)
> +#define __rw_yield(x) splpar_rw_yield(x)
> #else /* SPLPAR */
> #define __spin_yield(x) barrier()
> #define __rw_yield(x) barrier()
> diff --git a/arch/powerpc/lib/locks.c b/arch/powerpc/lib/locks.c
> index 6550b9e5ce5f..6440d5943c00 100644
> --- a/arch/powerpc/lib/locks.c
> +++ b/arch/powerpc/lib/locks.c
> @@ -18,7 +18,7 @@
> #include <asm/hvcall.h>
> #include <asm/smp.h>
>
> -void __spin_yield(arch_spinlock_t *lock)
> +void splpar_spin_yield(arch_spinlock_t *lock)
> {
> unsigned int lock_value, holder_cpu, yield_count;
>
> @@ -36,14 +36,14 @@ void __spin_yield(arch_spinlock_t *lock)
> plpar_hcall_norets(H_CONFER,
> get_hard_smp_processor_id(holder_cpu), yield_count);
> }
> -EXPORT_SYMBOL_GPL(__spin_yield);
> +EXPORT_SYMBOL_GPL(splpar_spin_yield);
>
> /*
> * Waiting for a read lock or a write lock on a rwlock...
> * This turns out to be the same for read and write locks, since
> * we only know the holder if it is write-locked.
> */
> -void __rw_yield(arch_rwlock_t *rw)
> +void splpar_rw_yield(arch_rwlock_t *rw)
> {
> int lock_value;
> unsigned int holder_cpu, yield_count;
>
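
For context, a rough sketch of the kind of call site these helpers sit
behind (not part of this patch; the SHARED_PROCESSOR check and the exact
loop structure are from memory and may differ from the current tree):

    static inline void arch_spin_lock(arch_spinlock_t *lock)
    {
            while (1) {
                    if (likely(__arch_spin_trylock(lock) == 0))
                            break;
                    do {
                            HMT_low();
                            /* __spin_yield() expands to splpar_spin_yield()
                             * under CONFIG_PPC_SPLPAR, barrier() otherwise */
                            if (SHARED_PROCESSOR)
                                    __spin_yield(lock);
                    } while (unlikely(lock->slock != 0));
                    HMT_medium();
            }
    }
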
--
Andrew Donnellan OzLabs, ADL Canberra
ajd at linux.ibm.com IBM Australia Limited