[PATCH v7 00/11] Speedup mremap on ppc64

Nicholas Piggin npiggin at gmail.com
Tue Jun 8 15:03:22 AEST 2021


Excerpts from Aneesh Kumar K.V's message of June 8, 2021 2:39 pm:
> On 6/7/21 3:40 PM, Nick Piggin wrote:
>> On Monday, 7 June 2021, Aneesh Kumar K.V <aneesh.kumar at linux.ibm.com 
>> <mailto:aneesh.kumar at linux.ibm.com>> wrote:
>> 
>> 
>>     This patchset enables MOVE_PMD/MOVE_PUD support on power. This requires
>>     the platform to support updating higher-level page tables without
>>     updating page table entries. This also needs to invalidate the Page Walk
>>     Cache on architecture supporting the same.
>> 
>>     Changes from v6:
>>     * Update ppc64 flush_tlb_range to invalidate page walk cache.
>> 
>> 
>> I'd really rather not do this; I'm not sure a microbenchmark captures 
>> everything.
>> 
>> Page tables coming from L2/L3 probably aren't the primary purpose or 
>> biggest benefit of intermediate level caches.
>> 
>> The situation on POWER with the nest MMU (coherent accelerators) is 
>> magnified. They have huge page walk caches to make up for the fact that 
>> they don't have data caches for walking page tables, which makes the 
>> invalidation more painful in terms of subsequent misses, but also in 
>> latency to invalidate (it can be on the order of microseconds, whereas a 
>> page invalidate is a couple of orders of magnitude faster).
>> 
> 
> If we are using the NestMMU, we already upgrade that flush to invalidate 
> the page walk cache, right? i.e. if we have a > PMD_SIZE range, we would 
> upgrade the invalidate to a pid flush via
> 
> flush_pid = nr_pages > tlb_single_page_flush_ceiling;

Not that we've tuned that parameter in a long time, and certainly not with 
the nMMU. Quite possibly it should be higher for the nMMU because of 
the big TLBs they have. (And what about == PMD_SIZE?)

> and if it is a PID flush and we are using the NestMMU, we already 
> upgrade RIC_FLUSH_TLB to RIC_FLUSH_ALL?

Does P10 still have that bug?

At any rate, I think the core MMU still has the same issues, just less
pronounced. PWC invalidates take longer, and the PWC should have the most
benefit when the CPU data caches are heavily used and aren't filled with
page table entries.

Thanks,
Nick


More information about the Linuxppc-dev mailing list