[PATCH 1/3] powerpc/64s: Disable preemption in hash lazy mmu mode

Guenter Roeck linux at roeck-us.net
Fri Oct 14 11:17:42 AEDT 2022


On Fri, Oct 14, 2022 at 01:16:45AM +1000, Nicholas Piggin wrote:
> apply_to_page_range on kernel pages does not disable preemption, but
> hash's lazy mmu mode requires it, because that mode keeps track of the
> TLB entries to flush with a per-cpu array.
> 
> Reported-by: Guenter Roeck <linux at roeck-us.net>
> Signed-off-by: Nicholas Piggin <npiggin at gmail.com>

Tested-by: Guenter Roeck <linux at roeck-us.net>

> ---
>  arch/powerpc/include/asm/book3s/64/tlbflush-hash.h | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> index fab8332fe1ad..751921f6db46 100644
> --- a/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> +++ b/arch/powerpc/include/asm/book3s/64/tlbflush-hash.h
> @@ -32,6 +32,11 @@ static inline void arch_enter_lazy_mmu_mode(void)
>  
>  	if (radix_enabled())
>  		return;
> +	/*
> +	 * apply_to_page_range can call us with preempt enabled when
> +	 * operating on kernel page tables.
> +	 */
> +	preempt_disable();
>  	batch = this_cpu_ptr(&ppc64_tlb_batch);
>  	batch->active = 1;
>  }
> @@ -47,6 +52,7 @@ static inline void arch_leave_lazy_mmu_mode(void)
>  	if (batch->index)
>  		__flush_tlb_pending(batch);
>  	batch->active = 0;
> +	preempt_enable();
>  }
>  
>  #define arch_flush_lazy_mmu_mode()      do {} while (0)
> -- 
> 2.37.2
> 
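For context on the call path being hardened: apply_to_page_range()
enters lazy MMU mode from its PTE walker even for kernel mappings,
where no caller-held lock disables preemption. A hedged sketch of such
a path follows; the callback and function names below are hypothetical
illustrations, not code from the patch.

/*
 * Hypothetical illustration of the hazard the patch closes.
 */
#include <linux/mm.h>
#include <linux/pgtable.h>

static int touch_pte(pte_t *ptep, unsigned long addr, void *data)
{
	/*
	 * Runs between arch_enter_lazy_mmu_mode() and
	 * arch_leave_lazy_mmu_mode(), where the hash MMU queues TLB
	 * invalidations in the per-cpu ppc64_tlb_batch. A migration
	 * between enter and leave would flush the wrong CPU's batch.
	 */
	return 0;
}

static void walk_kernel_range(unsigned long addr, unsigned long size)
{
	/*
	 * On kernel page tables (&init_mm) nothing else disables
	 * preemption around the lazy MMU section, hence the
	 * preempt_disable() added in arch_enter_lazy_mmu_mode().
	 */
	apply_to_page_range(&init_mm, addr, size, touch_pte, NULL);
}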

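The underlying rule the patch restores is the usual per-cpu
discipline: a pointer obtained with this_cpu_ptr() is only stable
while preemption is off, so the enter/leave pair must bracket the
whole lazy MMU section. A minimal generic sketch, with hypothetical
names (example_batch, enter_example, leave_example):

#include <linux/percpu.h>
#include <linux/preempt.h>

struct example_batch {
	unsigned int index;
	unsigned int active;
};
static DEFINE_PER_CPU(struct example_batch, example_batch);

static void enter_example(void)
{
	struct example_batch *batch;

	preempt_disable();	/* pin to this CPU while the batch is live */
	batch = this_cpu_ptr(&example_batch);
	batch->active = 1;
}

static void leave_example(void)
{
	struct example_batch *batch = this_cpu_ptr(&example_batch);

	/* flush batch->index pending entries here, as the real code does */
	batch->active = 0;
	preempt_enable();	/* only now may the task migrate */
}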
