[PATCH 1/3] powerpc/64s: Disable preemption in hash lazy mmu mode
Christophe Leroy
christophe.leroy at csgroup.eu
Fri Oct 14 02:29:16 AEDT 2022
On 13/10/2022 at 17:16, Nicholas Piggin wrote:
> apply_to_page_range on kernel pages does not disable preemption, which
> is a requirement for hash's lazy mmu mode, which keeps track of the
> TLBs to flush with a per-cpu array.
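(For reference, here is a minimal sketch of the kind of caller that hits
this path. The touch_pte() callback and walk_kernel_range() wrapper are
hypothetical names for illustration; apply_to_page_range() and init_mm
are the real kernel symbols.)

	#include <linux/mm.h>

	/*
	 * Hypothetical per-PTE callback: PTE updates made while the
	 * walk is in lazy MMU mode are queued in the per-cpu
	 * ppc64_tlb_batch on hash.
	 */
	static int touch_pte(pte_t *pte, unsigned long addr, void *data)
	{
		return 0;
	}

	/*
	 * Walking kernel page tables: apply_to_page_range() wraps the
	 * walk in arch_enter/leave_lazy_mmu_mode(), but nothing on this
	 * path disables preemption, so the task can migrate CPUs while
	 * the per-cpu batch is active.
	 */
	static void walk_kernel_range(unsigned long addr, unsigned long size)
	{
		apply_to_page_range(&init_mm, addr, size, touch_pte, NULL);
	}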
>
> Reported-by: Guenter Roeck <linux at roeck-us.net>
> Signed-off-by: Nicholas Piggin <npiggin at gmail.com>
> ---
> arch/powerpc/include/asm/book3s/64/tlbflush-hash.h | 6 ++++++
> 1 file changed, 6 insertions(+)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> index fab8332fe1ad..751921f6db46 100644
> --- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> +++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> @@ -32,6 +32,11 @@ static inline void arch_enter_lazy_mmu_mode(void)
>
> if (radix_enabled())
> return;
> + /*
> + * apply_to_page_range can call us with preemption enabled when
> + * operating on kernel page tables.
> + */
> + preempt_disable();
> batch = this_cpu_ptr(&ppc64_tlb_batch);
> batch->active = 1;
> }
> @@ -47,6 +52,7 @@ static inline void arch_leave_lazy_mmu_mode(void)
> if (batch->index)
> __flush_tlb_pending(batch);
> batch->active = 0;
> + preempt_enable();
You'll schedule() here. Is that acceptable in terms of performance?
Otherwise you have preempt_enable_no_resched().
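i.e. something along these lines (an untested sketch of the suggestion,
assuming the rest of the function stays as in your patch):

	static inline void arch_leave_lazy_mmu_mode(void)
	{
		struct ppc64_tlb_batch *batch;

		if (radix_enabled())
			return;
		batch = this_cpu_ptr(&ppc64_tlb_batch);

		if (batch->index)
			__flush_tlb_pending(batch);
		batch->active = 0;
		/* drop the preempt count without an immediate resched point */
		preempt_enable_no_resched();
	}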
> }
>
> #define arch_flush_lazy_mmu_mode() do {} while (0)
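For completeness, the failure mode without the preempt_disable() is the
usual per-cpu race (the interleaving below is illustrative, not taken
from a trace):

	arch_enter_lazy_mmu_mode()
	  batch = this_cpu_ptr(&ppc64_tlb_batch);   /* CPU0's batch */
	  batch->active = 1;
		<task preempted, migrates to CPU1>
	hpte_need_flush()
	  batch = this_cpu_ptr(&ppc64_tlb_batch);   /* CPU1's batch! */
	  /* flushes queue up on CPU1 while CPU0's batch stays active */
	arch_leave_lazy_mmu_mode()
	  /* may run on yet another CPU and flush the wrong batch */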