[PATCH 3/3] locking/osq: Drop the overload of osq_lock()
Peter Zijlstra
peterz at infradead.org
Tue Jun 28 00:21:55 AEST 2016
On Mon, Jun 27, 2016 at 01:41:30PM -0400, Pan Xinhui wrote:
> @@ -118,8 +123,17 @@ bool osq_lock(struct optimistic_spin_queue *lock)
> while (!READ_ONCE(node->locked)) {
> /*
> * If we need to reschedule bail... so we can block.
> + * An over-committed guest with more vCPUs than pCPUs
> + * might fall into this loop and cause a huge overload.
> + * This is because vCPU A (prev) holds the osq lock and yields
> + * out, while vCPU B (node) waits for ->locked to be set, IOW,
> + * it waits until vCPU A runs again and unlocks the osq lock.
> + * Such spinning is meaningless, so use vcpu_is_preempted() to
> + * detect this case. If the arch does not support the vcpu
> + * preempted check, vcpu_is_preempted() is a macro defined as
> + * false.
Or you could mention lock holder preemption and everybody will know what
you're talking about.
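
(For reference, the generic fallback the comment refers to could be as
simple as the sketch below: a stub that always reports "not preempted"
when the arch does not provide its own check. This is only a sketch;
the exact form in the tree may differ.)

	/*
	 * Sketch of a generic fallback: without arch support we
	 * conservatively assume the vCPU is never preempted.
	 */
	#ifndef vcpu_is_preempted
	#define vcpu_is_preempted(cpu)	false
	#endif
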
> */
> - if (need_resched())
> + if (need_resched() ||
> + vcpu_is_preempted(node_cpu(node->prev)))
Did you really need that linebreak?
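
IOW, something like the sketch below; not the exact patch, just one way
both remarks could be folded in (node_cpu() is the helper introduced by
this series):

		/*
		 * If we need to reschedule bail... so we can block.
		 * Also bail on lock holder preemption: spinning on
		 * ->locked while prev's vCPU is scheduled out gets
		 * us nowhere.
		 */
		if (need_resched() || vcpu_is_preempted(node_cpu(node->prev)))
			goto unqueue;
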
> goto unqueue;
>
> cpu_relax_lowlatency();
> --
> 2.4.11
>