[RFC v1 05/10] powerpc/64s: Move serialize_against_pte_lookup() to hash_pgtable.c
Christophe Leroy (CS GROUP)
chleroy at kernel.org
Wed Mar 4 20:00:03 AEDT 2026
On 25/02/2026 at 12:04, Ritesh Harjani (IBM) wrote:
> Originally, commit fa4531f753f1 ("powerpc/mm: Don't send IPI to all
> cpus on THP updates") introduced the serialize_against_pte_lookup()
> call for both Radix and Hash.
>
> However, commit 70cbc3cc78a9 ("mm: gup: fix the fast GUP race against
> THP collapse") fixed the race for Radix, and commit bedf03416913
> ("powerpc/64s/radix: don't need to broadcast IPI for radix pmd
> collapse flush") subsequently removed the serialize_against_pte_lookup()
> call from radix_pgtable.c.
>
> Since serialize_against_pte_lookup() is now only called from
> hash__pmdp_collapse_flush(), move the related functions to
> hash_pgtable.c.
>
> Hence this patch:
> - moves serialize_against_pte_lookup() from radix_pgtable.c to hash_pgtable.c
> - removes the Radix-specific calls from do_serialize()
> - renames do_serialize() to do_nothing().
>
> No functional change intended.
>
> Signed-off-by: Ritesh Harjani (IBM) <ritesh.list at gmail.com>
Reviewed-by: Christophe Leroy (CS GROUP) <chleroy at kernel.org>
> ---
> arch/powerpc/include/asm/book3s/64/pgtable.h | 1 -
> arch/powerpc/mm/book3s64/hash_pgtable.c | 21 ++++++++++++++++
> arch/powerpc/mm/book3s64/pgtable.c | 25 --------------------
> 3 files changed, 21 insertions(+), 26 deletions(-)
>
> diff --git a/arch/powerpc/include/asm/book3s/64/pgtable.h b/arch/powerpc/include/asm/book3s/64/pgtable.h
> index 1a91762b455d..ff264d930fe8 100644
> --- a/arch/powerpc/include/asm/book3s/64/pgtable.h
> +++ b/arch/powerpc/include/asm/book3s/64/pgtable.h
> @@ -1400,7 +1400,6 @@ static inline bool arch_needs_pgtable_deposit(void)
> return false;
> return true;
> }
> -extern void serialize_against_pte_lookup(struct mm_struct *mm);
>
> #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
> diff --git a/arch/powerpc/mm/book3s64/hash_pgtable.c b/arch/powerpc/mm/book3s64/hash_pgtable.c
> index ac2a24d15d2e..d9b5b751d7b7 100644
> --- a/arch/powerpc/mm/book3s64/hash_pgtable.c
> +++ b/arch/powerpc/mm/book3s64/hash_pgtable.c
> @@ -221,6 +221,27 @@ unsigned long hash__pmd_hugepage_update(struct mm_struct *mm, unsigned long addr
> return old;
> }
>
> +static void do_nothing(void *arg)
> +{
> +
> +}
> +
> +/*
> + * Serialize against __find_linux_pte() which does lock-less
> + * lookup in page tables with local interrupts disabled. For huge pages
> + * it casts pmd_t to pte_t. Since format of pte_t is different from
> + * pmd_t we want to prevent transit from pmd pointing to page table
> + * to pmd pointing to huge page (and back) while interrupts are disabled.
> + * We clear pmd to possibly replace it with page table pointer in
> + * different code paths. So make sure we wait for the parallel
> + * __find_linux_pte() to finish.
> + */
> +static void serialize_against_pte_lookup(struct mm_struct *mm)
> +{
> + smp_mb();
> + smp_call_function_many(mm_cpumask(mm), do_nothing, mm, 1);
> +}
> +
> pmd_t hash__pmdp_collapse_flush(struct vm_area_struct *vma, unsigned long address,
> pmd_t *pmdp)
> {
> diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
> index 359092001670..84284dff650a 100644
> --- a/arch/powerpc/mm/book3s64/pgtable.c
> +++ b/arch/powerpc/mm/book3s64/pgtable.c
> @@ -150,31 +150,6 @@ void set_pud_at(struct mm_struct *mm, unsigned long addr,
> return set_pte_at_unchecked(mm, addr, pudp_ptep(pudp), pud_pte(pud));
> }
>
> -static void do_serialize(void *arg)
> -{
> - /* We've taken the IPI, so try to trim the mask while here */
> - if (radix_enabled()) {
> - struct mm_struct *mm = arg;
> - exit_lazy_flush_tlb(mm, false);
> - }
> -}
> -
> -/*
> - * Serialize against __find_linux_pte() which does lock-less
> - * lookup in page tables with local interrupts disabled. For huge pages
> - * it casts pmd_t to pte_t. Since format of pte_t is different from
> - * pmd_t we want to prevent transit from pmd pointing to page table
> - * to pmd pointing to huge page (and back) while interrupts are disabled.
> - * We clear pmd to possibly replace it with page table pointer in
> - * different code paths. So make sure we wait for the parallel
> - * __find_linux_pte() to finish.
> - */
> -void serialize_against_pte_lookup(struct mm_struct *mm)
> -{
> - smp_mb();
> - smp_call_function_many(mm_cpumask(mm), do_serialize, mm, 1);
> -}
> -
> /*
> * We use this to invalidate a pmdp entry before switching from a
> * hugepte to regular pmd entry.