[PATCH 2/3] powerpc/spinlock: support vcpu preempted check
xinhui
xinhui.pan at linux.vnet.ibm.com
Tue Jun 28 13:23:57 AEST 2016
On 06/27/2016 22:17, Peter Zijlstra wrote:
> On Mon, Jun 27, 2016 at 01:41:29PM -0400, Pan Xinhui wrote:
>> diff --git a/arch/powerpc/include/asm/spinlock.h b/arch/powerpc/include/asm/spinlock.h
>> index 523673d..ae938ee 100644
>> --- a/arch/powerpc/include/asm/spinlock.h
>> +++ b/arch/powerpc/include/asm/spinlock.h
>> @@ -52,6 +52,21 @@
>> #define SYNC_IO
>> #endif
>>
>> +/* For fixing some spinning issues in a guest.
>> + * kernel would check if vcpu is preempted during a spin loop.
>> + * we support that.
>> + */
>
> If you look around in that file you'll notice that the above comment
> style is inconsistent.
>
> Nor is the comment really clarifying things, for one you fail to mention
> the problem by its known name. You also forget to explain how this
> interface will help. How about something like this:
>
> /*
>  * In order to deal with various lock holder preemption issues, provide
>  * an interface to see if a vCPU is currently running or not.
>  *
>  * This allows us to terminate optimistic spin loops and block,
>  * analogous to the native optimistic spin heuristic of testing if the
>  * lock owner task is running or not.
>  */
thanks!!!
>
> Also, since you now have a useful comment, which is not architecture
> specific, I would place it with the common vcpu_is_preempted()
> definition in sched.h.
>
Agree with you. Will do that. I will also add a Suggested-by tag for you.
thanks
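
Something like the below is what I have in mind for include/linux/sched.h
(just a sketch; the #ifdef-based fallback and the exact placement are my
assumptions and may change in the next version):

/*
 * In order to deal with various lock holder preemption issues, provide
 * an interface to see if a vCPU is currently running or not.
 *
 * This allows us to terminate optimistic spin loops and block,
 * analogous to the native optimistic spin heuristic of testing if the
 * lock owner task is running or not.
 */
#ifdef arch_vcpu_is_preempted
static inline bool vcpu_is_preempted(int cpu)
{
	/* Architecture-specific check, e.g. the PPC lppaca yield count. */
	return arch_vcpu_is_preempted(cpu);
}
#else
static inline bool vcpu_is_preempted(int cpu)
{
	/* Without arch support, assume the vCPU is always running. */
	return false;
}
#endif
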
> Hmm?
>
>> +#define arch_vcpu_is_preempted arch_vcpu_is_preempted
>> +static inline bool arch_vcpu_is_preempted(int cpu)
>> +{
>> +	struct lppaca *lp = &lppaca_of(cpu);
>> +
>> +	if (unlikely(!(lppaca_shared_proc(lp) ||
>> +			lppaca_dedicated_proc(lp))))
>> +		return false;
>> +	return !!(be32_to_cpu(lp->yield_count) & 1);
>> +}
>> +
>>  static __always_inline int arch_spin_value_unlocked(arch_spinlock_t lock)
>>  {
>>  	return lock.slock == 0;
>> --
>> 2.4.11
>>
>
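
For reference, the caller side I have in mind looks roughly like the
simplified spin-on-owner loop below (illustrative only; the helper name
and its exact integration into the mutex/rwsem spinning code are not
final). On the PPC side, an odd lp->yield_count means the vCPU is
currently not dispatched by the hypervisor, which is what the arch
helper above reports.

/*
 * Illustrative sketch: spin while the lock owner is running on its CPU,
 * but give up and block once the owner's vCPU has been preempted by the
 * hypervisor, since further spinning cannot make progress.
 */
static bool spin_on_owner(struct task_struct *owner, int owner_cpu)
{
	while (READ_ONCE(owner->on_cpu)) {
		if (need_resched())
			return false;
		if (vcpu_is_preempted(owner_cpu))
			return false;
		cpu_relax();
	}
	return true;
}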