[PATCH v2 01/14] mm: Convert pXd_devmap checks to vma_is_dax

Alistair Popple apopple at nvidia.com
Thu Jun 19 18:40:24 AEST 2025


On Tue, Jun 17, 2025 at 11:19:34AM +0200, David Hildenbrand wrote:
> On 16.06.25 13:58, Alistair Popple wrote:
> > Currently dax is the only user of pmd and pud mapped ZONE_DEVICE
> > pages. Therefore page walkers that want to exclude DAX pages can check
> > pmd_devmap or pud_devmap. However soon dax will no longer set PFN_DEV,
> > meaning dax pages are mapped as normal pages.
> > 
> > Ensure page walkers that currently use pXd_devmap to skip DAX pages
> > continue to do so by adding explicit checks of the VMA instead.
> > 
> > Signed-off-by: Alistair Popple <apopple at nvidia.com>
> > Reviewed-by: Jason Gunthorpe <jgg at nvidia.com>
> > Reviewed-by: Dan Williams <dan.j.williams at intel.com>
> > 
> > ---
> > 
> > Changes from v1:
> > 
> >   - Remove vma_is_dax() check from mm/userfaultfd.c as
> >     validate_move_areas() will already skip DAX VMA's on account of them
> >     not being anonymous.
> 
> This should be documented in the patch description above.

Ok.
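
For the record, the reasoning is that validate_move_areas() only accepts
anonymous VMAs, and a DAX VMA is always file-backed, so a move involving a DAX
mapping is rejected long before any pmd_devmap()/vma_is_dax() test could
matter. Roughly (simplified illustration only, not the actual
validate_move_areas() code):

/*
 * Simplified illustration -- not the real validate_move_areas(). A DAX
 * mapping is backed by a file on a DAX-capable filesystem, so
 * vma_is_anonymous() is false for it and the move is refused here.
 */
static bool move_areas_are_anonymous(struct vm_area_struct *src_vma,
				     struct vm_area_struct *dst_vma)
{
	if (!vma_is_anonymous(src_vma) || !vma_is_anonymous(dst_vma))
		return false;	/* covers DAX (file-backed) VMAs */
	return true;
}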

> > ---
> >   fs/userfaultfd.c | 2 +-
> >   mm/hmm.c         | 2 +-
> >   mm/userfaultfd.c | 6 ------
> >   3 files changed, 2 insertions(+), 8 deletions(-)
> > 
> > diff --git a/fs/userfaultfd.c b/fs/userfaultfd.c
> > index ef054b3..a886750 100644
> > --- a/fs/userfaultfd.c
> > +++ b/fs/userfaultfd.c
> > @@ -304,7 +304,7 @@ static inline bool userfaultfd_must_wait(struct userfaultfd_ctx *ctx,
> >   		goto out;
> >   	ret = false;
> > -	if (!pmd_present(_pmd) || pmd_devmap(_pmd))
> > +	if (!pmd_present(_pmd) || vma_is_dax(vmf->vma))
> >   		goto out;
> 
> VMA checks should be done before doing any page table walk.

Actually upon review I think this check was always redundant as well -
vma_can_userfault() already limits userfaultfd registration to anon/hugetlb/shmem
VMAs. Boy we sure have a lot of these "normal vma" checks around the place ... at
least for certain definitions of normal.

Anyway I will remove this check entirely and add a note to the commit message
(and apologies for not catching this last time, as I think you may have already
mentioned this, or at least the general concept).

> >   	if (pmd_trans_huge(_pmd)) {
> > diff --git a/mm/hmm.c b/mm/hmm.c
> > index feac861..5311753 100644
> > --- a/mm/hmm.c
> > +++ b/mm/hmm.c
> > @@ -441,7 +441,7 @@ static int hmm_vma_walk_pud(pud_t *pudp, unsigned long start, unsigned long end,
> >   		return hmm_vma_walk_hole(start, end, -1, walk);
> >   	}
> > -	if (pud_leaf(pud) && pud_devmap(pud)) {
> > +	if (pud_leaf(pud) && vma_is_dax(walk->vma)) {
> >   		unsigned long i, npages, pfn;
> >   		unsigned int required_fault;
> >   		unsigned long *hmm_pfns;
> 
> Ditto.

Actually I see little reason to restrict this to DAX only for HMM ... we don't
end up doing that for the equivalent PMD path, so I will drop this check as well.
Thanks!
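
In other words, the idea would be to make the PUD path mirror the PMD path and
handle any huge leaf entry rather than special-casing DAX. A rough sketch of
the direction (not the final patch -- hmm_pfns and out_unlock refer to the
surrounding hmm_vma_walk_pud() context):

	/*
	 * Rough sketch: handle any huge PUD leaf, mirroring what the PMD
	 * path does, instead of gating the fast path on vma_is_dax().
	 */
	if (pud_leaf(pud)) {
		unsigned long i, npages, pfn;

		npages = (end - start) >> PAGE_SHIFT;
		pfn = pud_pfn(pud) + ((start & ~PUD_MASK) >> PAGE_SHIFT);
		for (i = 0; i < npages; i++)
			hmm_pfns[i] = pfn + i;	/* plus permission flags in the real code */
		goto out_unlock;
	}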

> > diff --git a/mm/userfaultfd.c b/mm/userfaultfd.c
> > index 58b3ad6..8395db2 100644
> > --- a/mm/userfaultfd.c
> > +++ b/mm/userfaultfd.c
> > @@ -1818,12 +1818,6 @@ ssize_t move_pages(struct userfaultfd_ctx *ctx, unsigned long dst_start,
> >   		ptl = pmd_trans_huge_lock(src_pmd, src_vma);
> >   		if (ptl) {
> > -			if (pmd_devmap(*src_pmd)) {
> > -				spin_unlock(ptl);
> > -				err = -ENOENT;
> > -				break;
> > -			}
> > -
> >   			/* Check if we can move the pmd without splitting it. */
> >   			if (move_splits_huge_pmd(dst_addr, src_addr, src_start + len) ||
> >   			    !pmd_none(dst_pmdval)) {
> 
> 
> -- 
> Cheers,
> 
> David / dhildenb
> 
> 

