[PATCH] KVM: PPC: Add generic hpte management functions

Alexander Graf agraf at suse.de
Mon Jun 28 23:32:53 EST 2010


Avi Kivity wrote:
> On 06/28/2010 04:25 PM, Alexander Graf wrote:
>>>
>>>>> Less and simpler code, better reporting through slabtop, less wastage
>>>>> of partially allocated slab pages.
>>>>>
>>>> But it also means that one VM can spill the global slab cache and kill
>>>> another VM's mm performance, no?
>>>>
>>> What do you mean by spill?
>>>
>
> Well?

I was thinking of a global cap, but I guess I could still do it
per-vcpu. So yes, a global slab doesn't hurt.

>
>>> btw, in the midst of the nit-picking frenzy I forgot to ask how the
>>> individual hash chain lengths as well as the per-vm allocation were
>>> limited.
>>>
>>> On x86 we have a per-vm limit and we allow the mm shrinker to reduce
>>> shadow mmu data structures dynamically.
>>>
>> Very simple. I keep an int with the number of allocated entries around
>> and if that hits a define'd threshold, I flush all shadow pages.
>>
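The counting scheme described above is simple enough to sketch. This is only
an illustration of the idea, not the actual kvmppc code: the names
(shadow_cache, HPTEG_CACHE_MAX, shadow_cache_add) and the threshold value are
made up for the example.

```c
#include <assert.h>

/* Illustrative threshold; the real value would be a tuning decision. */
#define HPTEG_CACHE_MAX 1024

struct shadow_cache {
	int nr_entries;		/* number of live shadow entries */
};

/* Drop every shadow entry at once (stand-in for the real flush). */
static void flush_all_shadow_pages(struct shadow_cache *c)
{
	c->nr_entries = 0;
}

/*
 * Called for every new shadow entry: if the counter has hit the
 * threshold, flush everything first, then account the new entry.
 * The count can therefore never exceed HPTEG_CACHE_MAX.
 */
static void shadow_cache_add(struct shadow_cache *c)
{
	if (c->nr_entries >= HPTEG_CACHE_MAX)
		flush_all_shadow_pages(c);
	c->nr_entries++;
}
```

The attraction is that a single int bounds total memory use with no per-entry
bookkeeping; the cost is that hitting the threshold throws away all mappings,
not just cold ones.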
>
> A truly nefarious guest will make all ptes hash to the same chain,
> making some operations very long (O(n^2) in the x86 mmu, don't know
> about ppc) under a spinlock.  So we had to limit hash chains, not just
> the number of entries.
>
> But your mmu is per-cpu, no?  In that case, no spinlock, and any
> damage the guest does is limited to itself.

Yes, it is. No locking. The vcpu can kill its own performance, but I
don't care about that.
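The pathological case Avi raises, a guest forcing every pte onto one hash
chain, can be bounded by capping the per-chain length and evicting when a
chain is full. A minimal sketch of that idea follows; the names and the
eviction policy (drop the newest-but-one, i.e. the current head) are
hypothetical, not taken from either mmu implementation.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative per-chain limit. */
#define MAX_CHAIN_LEN 16

struct hpte_entry {
	struct hpte_entry *next;
};

struct hash_bucket {
	struct hpte_entry *head;
	int len;
};

/*
 * Insert at the head of the chain.  If the chain is already at
 * MAX_CHAIN_LEN, evict the current head first (a real implementation
 * would invalidate/free it), so no chain ever grows past the cap and
 * lookups stay O(MAX_CHAIN_LEN) even under adversarial hashing.
 */
static void bucket_insert(struct hash_bucket *b, struct hpte_entry *e)
{
	if (b->len >= MAX_CHAIN_LEN) {
		b->head = b->head->next;
		b->len--;
	}
	e->next = b->head;
	b->head = e;
	b->len++;
}
```

With a per-chain cap like this the worst a colliding guest can do is evict
its own entries, which matches the point above: without a shared spinlock,
the damage stays confined to the misbehaving vcpu.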


Alex



More information about the Linuxppc-dev mailing list