[PATCH 2/3] powerpc/spinlock: support vcpu preempted check

Boqun Feng boqun.feng at gmail.com
Tue Jun 28 00:58:32 AEST 2016


Hi Xinhui,

On Mon, Jun 27, 2016 at 01:41:29PM -0400, Pan Xinhui wrote:
> This is to fix some lock holder preemption issues. Spinning on a
> vcpu that is preempted is meaningless.
> 
> The kernel needs such an interface, so let's support it.
> 
> We should also support both the shared and the dedicated processor
> mode, so add a lppaca_dedicated_proc macro in lppaca.h.
> 
> Suggested-by: Boqun Feng <boqun.feng at gmail.com>
> Signed-off-by: Pan Xinhui <xinhui.pan at linux.vnet.ibm.com>
> ---
>  arch/powerpc/include/asm/lppaca.h   |  6 ++++++
>  arch/powerpc/include/asm/spinlock.h | 15 +++++++++++++++
>  2 files changed, 21 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/lppaca.h b/arch/powerpc/include/asm/lppaca.h
> index d0a2a2f..0a263d3 100644
> --- a/arch/powerpc/include/asm/lppaca.h
> +++ b/arch/powerpc/include/asm/lppaca.h
> @@ -111,12 +111,18 @@ extern struct lppaca lppaca[];
>   * we will have to transition to something better.
>   */
>  #define LPPACA_OLD_SHARED_PROC		2
> +#define LPPACA_OLD_DEDICATED_PROC      (1 << 6)
>  

I think you should say a bit more about this magic number here, i.e.
which document/specification says this should work, and how it works.
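Something along these lines would help (only a sketch -- the meaning
of bit 6 in the old VPA status byte is my reading of your patch, and
the exact PAPR section still needs to be filled in from the spec):

	/*
	 * Bit 6 of the old VPA "status" byte.  Set by the hypervisor
	 * when the partition runs on dedicated processors.
	 * XXX: cite the PAPR section that defines this bit.
	 */
	#define LPPACA_OLD_DEDICATED_PROC	(1 << 6)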

>  static inline bool lppaca_shared_proc(struct lppaca *l)
>  {
>  	return !!(l->__old_status & LPPACA_OLD_SHARED_PROC);
>  }
>  
> +static inline bool lppaca_dedicated_proc(struct lppaca *l)
> +{
> +	return !!(l->__old_status & LPPACA_OLD_DEDICATED_PROC);
> +}
> +
>  /*
>   * SLB shadow buffer structure as defined in the PAPR.  The save_area
>   * contains adjacent ESID and VSID pairs for each shadowed SLB.  The
> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
> index 523673d..ae938ee 100644
> --- a/arch/powerpc/include/asm/spinlock.h
> +++ b/arch/powerpc/include/asm/spinlock.h
> @@ -52,6 +52,21 @@
>  #define SYNC_IO
>  #endif
>  
> +/* For fixing some lock holder preemption issues in a guest:
> + * the kernel checks whether a vcpu is preempted during a spin loop,
> + * and we support that check here.
> + */
> +#define arch_vcpu_is_preempted arch_vcpu_is_preempted
> +static inline bool arch_vcpu_is_preempted(int cpu)

This function should be guarded by #ifdef CONFIG_PPC_PSERIES .. #endif,
right? Because if the kernel is not compiled with pseries guest
support, vcpu_is_preempted() should always return false.
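Something like this is what I have in mind (untested sketch; I am
assuming CONFIG_PPC_PSERIES is the right symbol to key off here):

	#ifdef CONFIG_PPC_PSERIES
	#define arch_vcpu_is_preempted arch_vcpu_is_preempted
	static inline bool arch_vcpu_is_preempted(int cpu)
	{
		struct lppaca *lp = &lppaca_of(cpu);

		/* Bare metal or unknown mode: never report "preempted". */
		if (unlikely(!(lppaca_shared_proc(lp) ||
				lppaca_dedicated_proc(lp))))
			return false;
		return !!(be32_to_cpu(lp->yield_count) & 1);
	}
	#endif /* CONFIG_PPC_PSERIES */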

> +{
> +	struct lppaca *lp = &lppaca_of(cpu);
> +
> +	if (unlikely(!(lppaca_shared_proc(lp) ||
> +			lppaca_dedicated_proc(lp))))

Do you want to detect whether we are running in a guest (i.e. a
pseries kernel) here? If so, I wonder whether machine_is(pseries)
would work.
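I.e. something like the following (again only a sketch; whether
machine_is() is usable from this header without include problems is
something I have not checked):

	static inline bool arch_vcpu_is_preempted(int cpu)
	{
		/* Detect a pseries guest directly instead of
		 * inferring it from the lppaca status bits. */
		if (!machine_is(pseries))
			return false;
		return !!(be32_to_cpu(lppaca_of(cpu).yield_count) & 1);
	}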

Regards,
Boqun

> +		return false;
> +	return !!(be32_to_cpu(lp->yield_count) & 1);
> +}
> +
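By the way, for anyone wondering about the "& 1": the hypervisor
increments yield_count each time it dispatches or preempts the vcpu,
so an odd value means the vcpu is currently preempted --
__spin_yield() in arch/powerpc/lib/locks.c relies on the same parity
trick. A hypothetical waiter loop showing the intended use (the
holder_cpu bookkeeping is an illustration, not part of this patch):

	/* Stop burning cycles once the lock holder's vcpu is gone. */
	while (!arch_spin_trylock(&lock)) {
		if (arch_vcpu_is_preempted(holder_cpu))
			break;	/* holder preempted: take the slow path */
		cpu_relax();
	}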
>  static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
>  {
>  	return lock.slock == 0;
> -- 
> 2.4.11
> 