[PATCH v4 08/12] arm64: mm: replace TIF_LAZY_MMU with in_lazy_mmu_mode()
Kevin Brodsky
kevin.brodsky at arm.com
Tue Nov 4 05:25:41 AEDT 2025
On 03/11/2025 16:03, David Hildenbrand wrote:
> On 29.10.25 11:09, Kevin Brodsky wrote:
>> The generic lazy_mmu layer now tracks whether a task is in lazy MMU
>> mode. As a result we no longer need a TIF flag for that purpose -
>> let's use the new in_lazy_mmu_mode() helper instead.
>>
>> Signed-off-by: Kevin Brodsky <kevin.brodsky at arm.com>
>> ---
>> arch/arm64/include/asm/pgtable.h | 16 +++-------------
>> arch/arm64/include/asm/thread_info.h | 3 +--
>> 2 files changed, 4 insertions(+), 15 deletions(-)
>>
>> diff --git a/arch/arm64/include/asm/pgtable.h
>> b/arch/arm64/include/asm/pgtable.h
>> index 535435248923..61ca88f94551 100644
>> --- a/arch/arm64/include/asm/pgtable.h
>> +++ b/arch/arm64/include/asm/pgtable.h
>> @@ -62,30 +62,21 @@ static inline void emit_pte_barriers(void)
>> static inline void queue_pte_barriers(void)
>> {
>> - unsigned long flags;
>> -
>> if (in_interrupt()) {
>> emit_pte_barriers();
>> return;
>> }
>> - flags = read_thread_flags();
>> -
>> - if (flags & BIT(TIF_LAZY_MMU)) {
>> - /* Avoid the atomic op if already set. */
>> - if (!(flags & BIT(TIF_LAZY_MMU_PENDING)))
>> - set_thread_flag(TIF_LAZY_MMU_PENDING);
>> - } else {
>> + if (in_lazy_mmu_mode())
>> + test_and_set_thread_flag(TIF_LAZY_MMU_PENDING);
>
> You likely don't want a test_and_set here, which would do a
> test_and_set_bit() -- an atomic rmw.
Ah yes, good point: the new version would do an atomic RMW in all cases.
Simpler code, but also slower :/
>
> You only want to avoid the atomic write if already set.
>
> So keep the current
>
> /* Avoid the atomic op if already set. */
> if (!(flags & BIT(TIF_LAZY_MMU_PENDING)))
> set_thread_flag(TIF_LAZY_MMU_PENDING);
Pretty much. Since we're now only considering one flag, we can simplify
it to:

    if (!test_thread_flag(TIF_LAZY_MMU_PENDING))
        set_thread_flag(TIF_LAZY_MMU_PENDING);
- Kevin