[PATCH v3 0/4] implement vcpu preempted check

Pan Xinhui xinhui at linux.vnet.ibm.com
Fri Sep 30 18:52:57 AEST 2016


hi, Paolo
	thanks for your reply.

On 2016/9/30 14:58, Paolo Bonzini wrote:
>>>>> Please consider s390 and (x86/arm) KVM. Once we have a few, more can
>>>>> follow later, but I think its important to not only have PPC support for
>>>>> this.
>>>>
>>>> Actually the s390 preempted check via sigp sense running is available for
>>>> all hypervisors (z/VM, LPAR and KVM) which implies everywhere as you can
>>>> no longer buy s390 systems without LPAR.
>>>>
>>>> As Heiko already pointed out we could simply use a small inline function
>>>> that calls cpu_is_preempted from arch/s390/lib/spinlock (or
>>>> smp_vcpu_scheduled from smp.c)
>>>
>>> Sure, and I had vague memories of Heiko's email. This patch set however
>>> completely fails to do that trivial hooking up.
>>
>> sorry for that.
>> I will try to work it out on x86.
>
> x86 has no hypervisor support, and I'd like to understand the desired
> semantics first, so I don't think it should block this series.  In

Once a guest does a hypercall or something similar, IOW there is a kvm_guest_exit, we consider that lock holder preemption.
And PPC implements it in this way.

> particular, there are at least the following choices:
>
> 1) exit to userspace (5-10.000 clock cycles best case) counts as
> lock holder preemption
>
Just to avoid any misunderstanding: you are saying that the guest does an IO operation, for example, and then exits to QEMU, right?
Yes, in this scenario it is hard to guarantee that such an IO operation, or something like that, can be finished in time.

> 2) any time the vCPU thread not running counts as lock holder
> preemption
>
> To implement the latter you'd need a hypercall or MSR (at least as
> a slow path), because the KVM preempt notifier is only active
> during the KVM_RUN ioctl.
>
That seems a little expensive. :(
How many clock cycles might it cost?

I am still looking for a shared struct between KVM and the guest kernel on x86, so that every time kvm_guest_exit/enter is called we can store some info in it. Then the guest kernel can quickly check whether a vCPU is running or not.

thanks
xinhui

> Paolo
>


