[PATCH v2 1/4] KVM: PPC: e500mc: Revert "add load inst fixup"

Alexander Graf agraf at suse.de
Wed May 7 01:54:42 EST 2014

On 05/06/2014 05:48 PM, mihai.caraman at freescale.com wrote:
>> -----Original Message-----
>> From: Alexander Graf [mailto:agraf at suse.de]
>> Sent: Sunday, May 04, 2014 1:14 AM
>> To: Caraman Mihai Claudiu-B02008
>> Cc: kvm-ppc at vger.kernel.org; kvm at vger.kernel.org; linuxppc-dev at lists.ozlabs.org
>> Subject: Re: [PATCH v2 1/4] KVM: PPC: e500mc: Revert "add load inst
>> fixup"
>> On 03.05.2014 at 01:14, "mihai.caraman at freescale.com"
>> <mihai.caraman at freescale.com> wrote:
>>>> From: Alexander Graf <agraf at suse.de>
>>>> Sent: Friday, May 2, 2014 12:24 PM
>>> This was the first idea that sprang to my mind, inspired by how DO_KVM
>>> is hooked on PR. I actually did a simple POC for e500mc/e5500, but this
>>> will not work on e6500, which has shared IVORs between HW threads.
>> What if we combine the ideas? On read we flip the IVOR to a separate
>> handler that checks for a field in the PACA. Only if that field is set
>> do we treat the fault as a KVM fault; otherwise we jump into the normal
>> handler.
>> I suppose we'd also have to take a lock to make sure we don't race with
>> the other thread when it wants to read a guest instruction too, but you
>> get the idea.
> This might be a solution for TLB eviction but not for execute-but-not-read
> entries, which require access from host context.

Good point :).

>> I have no idea whether this would be any faster; it's more of a
>> brainstorming thing, really. But regardless, this patch set would be a
>> move in the right direction.
>> Btw, do we have any guarantees that we don't get scheduled away before we
>> run kvmppc_get_last_inst()? If we run on a different core we can't read
>> the inst anymore. Hrm.
> It was your suggestion to move the logic from the irq-disabled area of
> kvmppc_handle_exit() to kvmppc_get_last_inst():
> http://git.freescale.com/git/cgit.cgi/ppc/sdk/linux.git/tree/arch/powerpc/kvm/booke.c
> Still, what is wrong if we get scheduled on another core? We will emulate
> again and the guest will populate the TLB on the new core.

Yes, it means we have to get the EMULATE_AGAIN code paths correct :). It 
also means we lose some performance with preemptible kernel configurations.

