[PATCH v3 8/8] mm/mprotect: fix dax pud handlings

James Houghton jthoughton at google.com
Fri Jul 26 04:29:49 AEST 2024


On Mon, Jul 15, 2024 at 12:22 PM Peter Xu <peterx at redhat.com> wrote:
>
> This is only relevant to the two archs that support PUD dax, i.e. x86_64
> and ppc64.  PUD THPs do not yet exist elsewhere, and hugetlb PUDs do not
> count in this case.
>
> DAX has had PUD mappings for years, but the change protection path never
> worked.  Whenever the path is triggered (a simple reproducer is to call
> mprotect() on a 1G dev_dax mapping), the kernel reports "bad pud".  This
> patch fixes that.
>
> The new change_huge_pud() tries to keep everything simple.  For example,
> it doesn't optimize the write bit, as that would need even more PUD
> helpers.  Taking one extra write fault in the worst case isn't too bad
> when it happens once per 1G range; it would matter more if it happened
> once per PAGE_SIZE.  Neither does it support userfault-wp bits, as no such
> PUD mapping is supported; file mappings always need a split there.
>
> The same goes for TLB shootdown: the pmd path (which was x86 only) has
> the trick of using the _ad() version of pmdp_invalidate*(), which can
> avoid one redundant TLB flush, but let's leave that for later as well.
> Again, the larger the mapping, the smaller such an effect.
>
> Another thing worth mentioning is that this path needs to handle the
> "retry" case of change_huge_pud() carefully (where it can return 0): it
> isn't like change_huge_pmd(), where the pmd version is safe because all
> conditions are handled later in change_pte_range(), thanks to Hugh's new
> pte_offset_map_lock().  In short, change_pte_range() is simply smarter
> than change_pmd_range() now, after the shmem thp collapse rework.  For
> that reason, change_pud_range() needs a proper retry whenever it races
> with something else and the huge PUD changes from under us.
>
> Cc: Dan Williams <dan.j.williams at intel.com>
> Cc: Matthew Wilcox <willy at infradead.org>
> Cc: Dave Jiang <dave.jiang at intel.com>
> Cc: Hugh Dickins <hughd at google.com>
> Cc: Kirill A. Shutemov <kirill at shutemov.name>
> Cc: Vlastimil Babka <vbabka at suse.cz>
> Cc: Thomas Gleixner <tglx at linutronix.de>
> Cc: Ingo Molnar <mingo at redhat.com>
> Cc: Borislav Petkov <bp at alien8.de>
> Cc: Dave Hansen <dave.hansen at linux.intel.com>
> Cc: Michael Ellerman <mpe at ellerman.id.au>
> Cc: Aneesh Kumar K.V <aneesh.kumar at linux.ibm.com>
> Cc: Oscar Salvador <osalvador at suse.de>
> Cc: x86 at kernel.org
> Cc: linuxppc-dev at lists.ozlabs.org
> Fixes: a00cc7d9dd93 ("mm, x86: add support for PUD-sized transparent hugepages")
> Fixes: 27af67f35631 ("powerpc/book3s64/mm: enable transparent pud hugepage")
> Signed-off-by: Peter Xu <peterx at redhat.com>
> ---
>  include/linux/huge_mm.h | 24 +++++++++++++++++++
>  mm/huge_memory.c        | 52 +++++++++++++++++++++++++++++++++++++++++
>  mm/mprotect.c           | 34 ++++++++++++++++++++++-----
>  3 files changed, 104 insertions(+), 6 deletions(-)
>
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index cff002be83eb..6e742680590a 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -336,6 +336,17 @@ void split_huge_pmd_address(struct vm_area_struct *vma, unsigned long address,
>  void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
>                 unsigned long address);
>
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
> +                   pud_t *pudp, unsigned long addr, pgprot_t newprot,
> +                   unsigned long cp_flags);
> +#else
> +static inline int
> +change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
> +               pud_t *pudp, unsigned long addr, pgprot_t newprot,
> +               unsigned long cp_flags) { return 0; }
> +#endif
> +
>  #define split_huge_pud(__vma, __pud, __address)                                \
>         do {                                                            \
>                 pud_t *____pud = (__pud);                               \
> @@ -579,6 +590,19 @@ static inline int next_order(unsigned long *orders, int prev)
>  {
>         return 0;
>  }
> +
> +static inline void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
> +                                   unsigned long address)
> +{
> +}
> +
> +static inline int change_huge_pud(struct mmu_gather *tlb,
> +                                 struct vm_area_struct *vma, pud_t *pudp,
> +                                 unsigned long addr, pgprot_t newprot,
> +                                 unsigned long cp_flags)
> +{
> +       return 0;
> +}
>  #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
>
>  static inline int split_folio_to_list_to_order(struct folio *folio,
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index c10247bef08a..9a00c5955c0c 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -2112,6 +2112,53 @@ int change_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
>         return ret;
>  }
>
> +/*
> + * Returns:
> + *
> + * - 0: if pud leaf changed from under us
> + * - 1: if pud can be skipped
> + * - HPAGE_PUD_NR: if pud was successfully processed
> + */
> +#ifdef CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD
> +int change_huge_pud(struct mmu_gather *tlb, struct vm_area_struct *vma,
> +                   pud_t *pudp, unsigned long addr, pgprot_t newprot,
> +                   unsigned long cp_flags)
> +{
> +       struct mm_struct *mm = vma->vm_mm;
> +       pud_t oldpud, entry;
> +       spinlock_t *ptl;
> +
> +       tlb_change_page_size(tlb, HPAGE_PUD_SIZE);
> +
> +       /* NUMA balancing doesn't apply to dax */
> +       if (cp_flags & MM_CP_PROT_NUMA)
> +               return 1;
> +
> +       /*
> +        * Huge entries on userfault-wp only works with anonymous, while we
> +        * don't have anonymous PUDs yet.
> +        */
> +       if (WARN_ON_ONCE(cp_flags & MM_CP_UFFD_WP_ALL))
> +               return 1;
> +
> +       ptl = __pud_trans_huge_lock(pudp, vma);
> +       if (!ptl)
> +               return 0;
> +
> +       /*
> +        * Can't clear PUD or it can race with concurrent zapping.  See
> +        * change_huge_pmd().
> +        */
> +       oldpud = pudp_invalidate(vma, addr, pudp);
> +       entry = pud_modify(oldpud, newprot);
> +       set_pud_at(mm, addr, pudp, entry);
> +       tlb_flush_pud_range(tlb, addr, HPAGE_PUD_SIZE);
> +
> +       spin_unlock(ptl);
> +       return HPAGE_PUD_NR;
> +}
> +#endif
> +
>  #ifdef CONFIG_USERFAULTFD
>  /*
>   * The PT lock for src_pmd and dst_vma/src_vma (for reading) are locked by
> @@ -2342,6 +2389,11 @@ void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
>         spin_unlock(ptl);
>         mmu_notifier_invalidate_range_end(&range);
>  }
> +#else
> +void __split_huge_pud(struct vm_area_struct *vma, pud_t *pud,
> +               unsigned long address)
> +{
> +}
>  #endif /* CONFIG_HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD */
>
>  static void __split_huge_zero_page_pmd(struct vm_area_struct *vma,
> diff --git a/mm/mprotect.c b/mm/mprotect.c
> index 2a81060b603d..694f13b83864 100644
> --- a/mm/mprotect.c
> +++ b/mm/mprotect.c
> @@ -425,31 +425,53 @@ static inline long change_pud_range(struct mmu_gather *tlb,
>                 unsigned long end, pgprot_t newprot, unsigned long cp_flags)
>  {
>         struct mmu_notifier_range range;
> -       pud_t *pud;
> +       pud_t *pudp, pud;
>         unsigned long next;
>         long pages = 0, ret;
>
>         range.start = 0;
>
> -       pud = pud_offset(p4d, addr);
> +       pudp = pud_offset(p4d, addr);
>         do {
> +again:
>                 next = pud_addr_end(addr, end);
> -               ret = change_prepare(vma, pud, pmd, addr, cp_flags);
> +               ret = change_prepare(vma, pudp, pmd, addr, cp_flags);
>                 if (ret) {
>                         pages = ret;
>                         break;
>                 }
> -               if (pud_none_or_clear_bad(pud))
> +
> +               pud = READ_ONCE(*pudp);
> +               if (pud_none(pud))
>                         continue;
> +
>                 if (!range.start) {
>                         mmu_notifier_range_init(&range,
>                                                 MMU_NOTIFY_PROTECTION_VMA, 0,
>                                                 vma->vm_mm, addr, end);
>                         mmu_notifier_invalidate_range_start(&range);
>                 }
> -               pages += change_pmd_range(tlb, vma, pud, addr, next, newprot,
> +
> +               if (pud_leaf(pud)) {
> +                       if ((next - addr != PUD_SIZE) ||
> +                           pgtable_split_needed(vma, cp_flags)) {
> +                               __split_huge_pud(vma, pudp, addr);
> +                               goto again;

IIUC, most of the time, we're just going to end up clearing the PUD in
this case. __split_huge_pud() will just clear it, then we goto again
and `continue` to the next pudp. Is that ok?

(I think it's ok as long as anonymous pages are never mapped with a PUD,
and uffd-wp is not usable with non-hugetlb PUD mappings of user memory
(which I think means only DAX?). So it seems ok today...?)

Also, does the comment in pgtable_split_needed() need updating?

Somewhat related question: change_huge_pmd() is very careful not to
clear the PMD before writing the new value. Yet change_pmd_range(),
when it calls into __split_huge_pmd(), will totally clear the PMD and
then populate the PTEs underneath (in some cases at least), seemingly
reintroducing the MADV_DONTNEED concern. But your PUD version, because
it never re-populates the PUD (or the PMDs/PTEs underneath), does not
have this issue. WDYT?
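
To spell out the contrast I mean, schematically (the first pattern is the
hypothetical racy one; the second is what change_huge_pmd(), and your
change_huge_pud() above, actually do):

	pmd_t old;

	/*
	 * Racy: the entry is transiently pmd_none(), so a concurrent
	 * MADV_DONTNEED zap can see nothing, skip it, and then the modified
	 * entry is written back, losing the zap.
	 */
	old = pmdp_huge_get_and_clear(mm, addr, pmdp);
	set_pmd_at(mm, addr, pmdp, pmd_modify(old, newprot));

	/*
	 * Safe: the entry is invalidated but never becomes none, so the
	 * concurrent zap still sees it and clears it.
	 */
	old = pmdp_invalidate(vma, addr, pmdp);
	set_pmd_at(mm, addr, pmdp, pmd_modify(old, newprot));
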

Thanks for this series!

> +                       } else {
> +                               ret = change_huge_pud(tlb, vma, pudp,
> +                                                     addr, newprot, cp_flags);
> +                               if (ret == 0)
> +                                       goto again;
> +                               /* huge pud was handled */
> +                               if (ret == HPAGE_PUD_NR)
> +                                       pages += HPAGE_PUD_NR;
> +                               continue;
> +                       }
> +               }
> +
> +               pages += change_pmd_range(tlb, vma, pudp, addr, next, newprot,
>                                           cp_flags);
> -       } while (pud++, addr = next, addr != end);
> +       } while (pudp++, addr = next, addr != end);
>
>         if (range.start)
>                 mmu_notifier_invalidate_range_end(&range);
> --
> 2.45.0
>
>

