[v4 3/5] KVM: PPC: Book3S HV: in H_SVM_INIT_DONE, migrate remaining normal-GFNs to secure-GFNs.
Ram Pai
linuxram at us.ibm.com
Thu Jul 23 21:39:18 AEST 2020
On Thu, Jul 23, 2020 at 11:40:37AM +0530, Bharata B Rao wrote:
> On Fri, Jul 17, 2020 at 01:00:25AM -0700, Ram Pai wrote:
> >
> > +int kvmppc_uv_migrate_mem_slot(struct kvm *kvm,
> > + const struct kvm_memory_slot *memslot)
>
> Don't see any callers for this outside of this file, so why not static?
>
> > +{
> > + unsigned long gfn = memslot->base_gfn;
> > + struct vm_area_struct *vma;
> > + unsigned long start, end;
> > + int ret = 0;
> > +
> > + while (kvmppc_next_nontransitioned_gfn(memslot, kvm, &gfn)) {
>
> So you check the state of the gfn under uvmem_lock above, but then
> release the lock again.
>
> > +
> > + mmap_read_lock(kvm->mm);
> > + start = gfn_to_hva(kvm, gfn);
> > + if (kvm_is_error_hva(start)) {
> > + ret = H_STATE;
> > + goto next;
> > + }
> > +
> > + end = start + (1UL << PAGE_SHIFT);
> > + vma = find_vma_intersection(kvm->mm, start, end);
> > + if (!vma || vma->vm_start > start || vma->vm_end < end) {
> > + ret = H_STATE;
> > + goto next;
> > + }
> > +
> > + mutex_lock(&kvm->arch.uvmem_lock);
> > + ret = kvmppc_svm_migrate_page(vma, start, end,
> > + (gfn << PAGE_SHIFT), kvm, PAGE_SHIFT, false);
>
> What is the guarantee that the gfn is still in the same state when you
> do the migration here?
Are you worried about the case where some other thread sneaks in and
migrates the GFN, making this migration request a duplicate?
That is theoretically possible, though practically improbable. This
transition is attempted only when there is one vcpu active in the VM.
However, maybe we should not bake that assumption into this code.
Will remove that assumption.
RP