[PATCH v6 2/2] arm64: support batched/deferred tlb shootdown during page reclamation

Nadav Amit namit at vmware.com
Wed Nov 16 12:56:17 AEDT 2022


On Nov 15, 2022, at 5:50 PM, Yicong Yang <yangyicong at huawei.com> wrote:

> On 2022/11/16 7:38, Nadav Amit wrote:
>> On Nov 14, 2022, at 7:14 PM, Yicong Yang <yangyicong at huawei.com> wrote:
>> 
>>> diff --git a/arch/x86/include/asm/tlbflush.h b/arch/x86/include/asm/tlbflush.h
>>> index 8a497d902c16..5bd78ae55cd4 100644
>>> --- a/arch/x86/include/asm/tlbflush.h
>>> +++ b/arch/x86/include/asm/tlbflush.h
>>> @@ -264,7 +264,8 @@ static inline u64 inc_mm_tlb_gen(struct mm_struct *mm)
>>> }
>>> 
>>> static inline void arch_tlbbatch_add_mm(struct arch_tlbflush_unmap_batch *batch,
>>> -                                    struct mm_struct *mm)
>>> +                                    struct mm_struct *mm,
>>> +                                    unsigned long uaddr)
>> 
>> Logic-wise it looks fine. I notice the "v6", and it should not be blocking,
>> but I would note that the name "arch_tlbbatch_add_mm()" does not make much
>> sense once the function also takes an address.
> 
> OK, the "add_mm" name still fits x86, since the address is not used there,
> but it doesn't fit arm64.
> 
>> It could’ve been something like arch_set_tlb_ubc_flush_pending() but that’s
>> too long. I’m not very good with naming, but the current name is not great.
> 
> What about arch_tlbbatch_add_pending()? Considering that x86 makes the flush
> operation pending while arm64 makes the synchronization operation pending,
> arch_tlbbatch_add_pending() should make sense for both.

Sounds reasonable. Thanks.
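
To make the rename concrete, a rough sketch of what the two definitions could
end up looking like (illustrative only, not the actual patch; the arm64 body
assumes a __flush_tlb_page_nosync() helper that issues the per-page TLBI
without the trailing DSB, as this series proposes):

/*
 * x86: the uaddr argument is accepted but unused, since x86 batches
 * per-mm; the deferred work is the IPI-based flush itself.
 */
static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
					     struct mm_struct *mm,
					     unsigned long uaddr)
{
	inc_mm_tlb_gen(mm);
	cpumask_or(&batch->cpumask, &batch->cpumask, mm_cpumask(mm));
}

/*
 * arm64 (assumed): issue the per-page TLB invalidation now, without
 * the synchronizing DSB, and defer that barrier to
 * arch_tlbbatch_flush().
 */
static inline void arch_tlbbatch_add_pending(struct arch_tlbflush_unmap_batch *batch,
					     struct mm_struct *mm,
					     unsigned long uaddr)
{
	__flush_tlb_page_nosync(mm, uaddr);
}

The x86 body would be unchanged apart from the extra, ignored parameter; the
point of the rename is just that "pending" describes both the x86 deferred
flush and the arm64 deferred synchronization.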



