[PATCH 4/7] KVM: PPC: Add book3s_32 tlbie flush acceleration
Alexander Graf
agraf at suse.de
Mon Aug 2 06:20:37 EST 2010
On 01.08.2010, at 16:08, Avi Kivity wrote:
> On 07/29/2010 04:04 PM, Alexander Graf wrote:
>> On Book3s_32 the tlbie instruction flushes effective addresses by the mask
>> 0x0ffff000. This is pretty hard to reflect with a hash that hashes ~0xfff, so
>> to speed up that target we should also keep a special hash around for it.
>>
>>
>> static inline u64 kvmppc_mmu_hash_vpte(u64 vpage)
>> {
>> 	return hash_64(vpage & 0xfffffffffULL, HPTEG_HASH_BITS_VPTE);
>> @@ -66,6 +72,11 @@ void kvmppc_mmu_hpte_cache_map(struct kvm_vcpu *vcpu, struct hpte_cache *pte)
>>  	index = kvmppc_mmu_hash_pte(pte->pte.eaddr);
>>  	hlist_add_head_rcu(&pte->list_pte, &vcpu->arch.hpte_hash_pte[index]);
>>
>> +	/* Add to ePTE_long list */
>> +	index = kvmppc_mmu_hash_pte_long(pte->pte.eaddr);
>> +	hlist_add_head_rcu(&pte->list_pte_long,
>> +			   &vcpu->arch.hpte_hash_pte_long[index]);
>> +
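(For context, the hunk above references kvmppc_mmu_hash_pte_long(), which
isn't visible in the quote. Roughly, it hashes only the bits a Book3s_32
tlbie can flush by - treat the exact constant names below as a sketch of
this series, not a verbatim copy:

static inline u64 kvmppc_mmu_hash_pte_long(u64 eaddr)
{
	/* hash the 0x0ffff000 window so a tlbie flush walks one bucket */
	return hash_64((eaddr & 0x0ffff000) >> 12,
		       HPTEG_HASH_BITS_PTE_LONG);
}

That way the flush path only walks the matching bucket instead of every
cached shadow PTE.)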
>
> Isn't it better to make operations on this list conditional on Book3s_32? Hashes are expensive since they usually cost cache misses.
Yes, and the same goes for vpte_long and vpte - book3s_32 guests don't need those except for the flush-all case. The tricky part is that this is not host- but guest-dependent, so I'd need different structs for book3s_32 and book3s_64 guests. That isn't a big issue, but it does complicate the code.
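The rough shape I have in mind for the conditional - note that the
mmu_is_book3s_32 flag below is made up purely for illustration, it would
have to come from whatever the guest MMU setup code records:

	/* hypothetical flag, set when the book3s_32 guest MMU is initialized */
	if (vcpu->arch.mmu_is_book3s_32) {
		/* Add to ePTE_long list - only 32-bit guests flush by 0x0ffff000 */
		index = kvmppc_mmu_hash_pte_long(pte->pte.eaddr);
		hlist_add_head_rcu(&pte->list_pte_long,
				   &vcpu->arch.hpte_hash_pte_long[index]);
	}

and similarly the vPTE hashes would only be maintained for book3s_64
guests, with the flush-all case handled separately.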
> Can of course be done later as an optimization.
Yes, that was the plan. Great to see you got the same feeling there though :). To be honest, I even started a book3s_32 host optimization patch and threw it away because it made the code less readable. So yes, this is on my radar.
Alex