[PATCH 10/17] KVM: PPC: Add support for Book3S processors in hypervisor mode
Alexander Graf
agraf at suse.de
Sat Jul 2 05:12:28 EST 2011
On 01.07.2011, at 20:37, Dave Hansen wrote:
> On Wed, 2011-06-29 at 20:21 +1000, Paul Mackerras wrote:
>> +struct kvmppc_pginfo {
>> +	unsigned long pfn;
>> +	atomic_t refcnt;
>> +};
>
> I only see this refcnt inc'd in one spot and never decremented or read.
> Is the refcnt just the number of hptes we have for this particular page
> at the moment?
>
>> +long kvmppc_alloc_hpt(struct kvm *kvm)
>> +{
>> +	unsigned long hpt;
>> +	unsigned long lpid;
>> +
>> +	hpt = __get_free_pages(GFP_KERNEL|__GFP_ZERO|__GFP_REPEAT|__GFP_NOWARN,
>> +			       HPT_ORDER - PAGE_SHIFT);
>> +	if (!hpt) {
>> +		pr_err("kvm_alloc_hpt: Couldn't alloc HPT\n");
>> +		return -ENOMEM;
>> +	}
>> +	kvm->arch.hpt_virt = hpt;
>> +
>> +	do {
>> +		lpid = find_first_zero_bit(lpid_inuse, NR_LPIDS);
>> +		if (lpid >= NR_LPIDS) {
>> +			pr_err("kvm_alloc_hpt: No LPIDs free\n");
>> +			free_pages(hpt, HPT_ORDER - PAGE_SHIFT);
>> +			return -ENOMEM;
>> +		}
>> +	} while (test_and_set_bit(lpid, lpid_inuse));
>> +
>> +	kvm->arch.sdr1 = __pa(hpt) | (HPT_ORDER - 18);
>> +	kvm->arch.lpid = lpid;
>> +	kvm->arch.host_sdr1 = mfspr(SPRN_SDR1);
>> +	kvm->arch.host_lpid = mfspr(SPRN_LPID);
>> +	kvm->arch.host_lpcr = mfspr(SPRN_LPCR);
>> +
>> +	pr_info("KVM guest htab at %lx, LPID %lx\n", hpt, lpid);
>> +	return 0;
>> +}
>
>> +static unsigned long user_page_size(unsigned long addr)
>> +{
>> +	struct vm_area_struct *vma;
>> +	unsigned long size = PAGE_SIZE;
>> +
>> +	down_read(&current->mm->mmap_sem);
>> +	vma = find_vma(current->mm, addr);
>> +	if (vma)
>> +		size = vma_kernel_pagesize(vma);
>> +	up_read(&current->mm->mmap_sem);
>> +	return size;
>> +}
>
> That one looks pretty arch-independent and like it could use some
> consolidation with: virt/kvm/kvm_main.c::kvm_host_page_size()
Yep, though I'd deem that a cleanup for later. Good point, however! We have similar code in e500 KVM today.
>
>> +void kvmppc_map_vrma(struct kvm *kvm, struct kvm_userspace_memory_region *mem)
>> +{
>> +	unsigned long i;
>> +	unsigned long npages = kvm->arch.ram_npages;
>> +	unsigned long pfn;
>> +	unsigned long *hpte;
>> +	unsigned long hash;
>> +	struct kvmppc_pginfo *pginfo = kvm->arch.ram_pginfo;
>> +
>> +	if (!pginfo)
>> +		return;
>> +
>> +	/* VRMA can't be > 1TB */
>> +	if (npages > 1ul << (40 - kvm->arch.ram_porder))
>> +		npages = 1ul << (40 - kvm->arch.ram_porder);
>
> Is that because it can only be a single segment? Does that mean that we
> can't ever have guests larger than 1TB? Or just that they have to live
> with 1TB until they get their own page tables up?
The VRMA only matters while the guest runs in real mode, before it has set up its own page tables, so this limit doesn't cap overall guest memory and the code looks good. The RMA is usually a lot smaller than 1TB ;).
Alex