[PATCH v2] KVM: PPC: Book3S PR: only install valid SLBs during KVM_SET_SREGS
David Gibson
david at gibson.dropbear.id.au
Tue Oct 3 11:49:56 AEDT 2017
On Mon, Oct 02, 2017 at 10:40:22AM +0200, Greg Kurz wrote:
> Userland passes an array of 64 SLB descriptors to KVM_SET_SREGS,
> some of which are valid (i.e., SLB_ESID_V is set) and the rest are
> likely all-zeroes (with QEMU at least).
>
> Each of them is then passed to kvmppc_mmu_book3s_64_slbmte(), which
> expects to find the SLB index in the 3 lower bits of its rb argument.
> When passed zeroed arguments, it happily overwrites the 0th SLB entry
> with zeroes. This is exactly what happens while doing live migration
> with QEMU when the destination pushes the incoming SLB descriptors to
> KVM PR. When reloading the SLBs at the next synchronization, QEMU first
> clears its SLB array and only restores the valid ones, but the 0th one is
> now gone and we cannot access the corresponding memory anymore:
>
> (qemu) x/x $pc
> c0000000000b742c: Cannot access memory
>
> To avoid this, let's filter out non-valid SLB entries. While here, we
> also force a full SLB flush before installing new entries.
>
> Signed-off-by: Greg Kurz <groug at kaod.org>
Seems sensible to me.
Reviewed-by: David Gibson <david at gibson.dropbear.id.au>
> ---
> v2: - flush SLB before installing new entries
> ---
> arch/powerpc/kvm/book3s_pr.c | 10 ++++++++--
> 1 file changed, 8 insertions(+), 2 deletions(-)
>
> diff --git a/arch/powerpc/kvm/book3s_pr.c b/arch/powerpc/kvm/book3s_pr.c
> index 3beb4ff469d1..7cce08d610ae 100644
> --- a/arch/powerpc/kvm/book3s_pr.c
> +++ b/arch/powerpc/kvm/book3s_pr.c
> @@ -1327,9 +1327,15 @@ static int kvm_arch_vcpu_ioctl_set_sregs_pr(struct kvm_vcpu *vcpu,
>
> vcpu3s->sdr1 = sregs->u.s.sdr1;
> if (vcpu->arch.hflags & BOOK3S_HFLAG_SLB) {
> + /* Flush all SLB entries */
> + vcpu->arch.mmu.slbmte(vcpu, 0, 0);
> + vcpu->arch.mmu.slbia(vcpu);
> +
> for (i = 0; i < 64; i++) {
> - vcpu->arch.mmu.slbmte(vcpu, sregs->u.s.ppc64.slb[i].slbv,
> - sregs->u.s.ppc64.slb[i].slbe);
> + u64 rb = sregs->u.s.ppc64.slb[i].slbe;
> + u64 rs = sregs->u.s.ppc64.slb[i].slbv;
> + if (rb & SLB_ESID_V)
> + vcpu->arch.mmu.slbmte(vcpu, rs, rb);
> }
> } else {
> for (i = 0; i < 16; i++) {
>
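
For anyone following from the userland side, here's a rough, untested
sketch of the scenario the changelog describes: zero the whole 64-entry
SLB array, copy back only the descriptors that have SLB_ESID_V set, and
push the lot with KVM_SET_SREGS.  push_slb() and its arguments are made
up purely for illustration, and the SLB_ESID_V define just mirrors the
kernel one, since it isn't exported to userland headers.

#include <stdint.h>
#include <string.h>
#include <sys/ioctl.h>
#include <linux/kvm.h>

#define SLB_ESID_V	0x0000000008000000ULL	/* valid bit, as in mmu-hash.h */

/*
 * Hypothetical helper, loosely modelled on what QEMU does on the
 * destination of a migration: fetch the current sregs, clear the SLB
 * array, restore only the valid descriptors, then push everything back.
 * Without the patch above, every zeroed descriptor reaches slbmte() in
 * KVM PR and clobbers SLB entry 0.
 */
static int push_slb(int vcpu_fd, const uint64_t *slbe, const uint64_t *slbv)
{
	struct kvm_sregs sregs;
	int i;

	if (ioctl(vcpu_fd, KVM_GET_SREGS, &sregs) < 0)
		return -1;

	memset(sregs.u.s.ppc64.slb, 0, sizeof(sregs.u.s.ppc64.slb));

	for (i = 0; i < 64; i++) {
		if (!(slbe[i] & SLB_ESID_V))
			continue;
		sregs.u.s.ppc64.slb[i].slbe = slbe[i];
		sregs.u.s.ppc64.slb[i].slbv = slbv[i];
	}

	return ioctl(vcpu_fd, KVM_SET_SREGS, &sregs);
}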
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson