[PATCH] KVM: PPC: Add generic hpte management functions

Alexander Graf agraf at suse.de
Mon Jun 28 19:55:52 EST 2010


Avi Kivity wrote:
> On 06/28/2010 12:27 PM, Alexander Graf wrote:
>>> Am I looking at old code?
>>
>>
>> Apparently. Check book3s_mmu_*.c
>
> I don't have that pattern.

It's in this patch.

> +static void invalidate_pte(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
> +{
> +	dprintk_mmu("KVM: Flushing SPT: 0x%lx (0x%llx) -> 0x%llx\n",
> +		    pte->pte.eaddr, pte->pte.vpage, pte->host_va);
> +
> +	/* Different for 32 and 64 bit */
> +	kvmppc_mmu_invalidate_pte(vcpu, pte);
> +
> +	if (pte->pte.may_write)
> +		kvm_release_pfn_dirty(pte->pfn);
> +	else
> +		kvm_release_pfn_clean(pte->pfn);
> +
> +	list_del(&pte->list_pte);
> +	list_del(&pte->list_vpte);
> +	list_del(&pte->list_vpte_long);
> +	list_del(&pte->list_all);
> +
> +	kmem_cache_free(vcpu->arch.hpte_cache, pte);
> +}
> +

>>> (another difference is using struct hlist_head instead of list_head,
>>> which I recommend since it saves space)
>>
>> Hrm. I thought about this quite a bit before too, but that makes
>> invalidation more complicated, no? We always need to remember the
>> previous entry in a list.
>
> hlist_for_each_entry_safe() does that.

Oh - very nice. So all I need to do is pass the previous list entry to
invalidate_pte too and I'm good. I guess I'll give it a shot.

>>>>>> +int kvmppc_mmu_hpte_init(struct kvm_vcpu *vcpu)
>>>>>> +{
>>>>>> +    char kmem_name[128];
>>>>>> +
>>>>>> +    /* init hpte slab cache */
>>>>>> +    snprintf(kmem_name, 128, "kvm-spt-%p", vcpu);
>>>>>> +    vcpu->arch.hpte_cache = kmem_cache_create(kmem_name,
>>>>>> +        sizeof(struct hpte_cache), sizeof(struct hpte_cache), 0,
>>>>>> NULL);
>>>>>>
>>>>>>
>>>>> Why not one global cache?
>>>>>
>>>> You mean over all vcpus? Or over all VMs?
>>>
>>> Totally global.  As in 'static struct kmem_cache *kvm_hpte_cache;'.
>>
>> What would be the benefit?
>
> Less and simpler code, better reporting through slabtop, less wastage
> of partially allocated slab pages.

But it also means that one VM can spill the global slab cache and kill
another VM's mm performance, no?


Alex



More information about the Linuxppc-dev mailing list