[PATCH v6 updated 9/11] mm/mremap: Fix race between mremap and pageout

Linus Torvalds torvalds at linux-foundation.org
Tue May 25 03:16:43 AEST 2021


On Mon, May 24, 2021 at 3:38 AM Aneesh Kumar K.V
<aneesh.kumar at linux.ibm.com> wrote:
>
> Avoid the above race with MOVE_PMD by holding pte ptl in mremap and waiting for
> parallel pagetable walk to finish operating on pte before updating new_pmd

Ack on the concept.

However, not so much on the patch.

Odd whitespace change:

> @@ -254,6 +254,7 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
>         if (WARN_ON_ONCE(!pmd_none(*new_pmd)))
>                 return false;
>
> +
>         /*
>          * We don't have to worry about the ordering of src and dst
>          * ptlocks because exclusive mmap_lock prevents deadlock.

And a new optimization for the empty-pmd case, which seems unrelated to the
change and should presumably be a separate patch:

> @@ -263,6 +264,10 @@ static bool move_normal_pmd(struct vm_area_struct *vma, unsigned long old_addr,
>         if (new_ptl != old_ptl)
>                 spin_lock_nested(new_ptl, SINGLE_DEPTH_NESTING);
>
> +       if (pmd_none(*old_pmd))
> +               goto unlock_out;
> +
> +       pte_ptl = pte_lockptr(mm, old_pmd);
>         /* Clear the pmd */
>         pmd = *old_pmd;
>         pmd_clear(old_pmd);

And also, why does the above assign 'pte_ptl' without using it, when
the actual use is ten lines further down?
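For illustration only, a hypothetical restructuring (not from the actual patch, and not compilable on its own): since the local `pmd` variable holds a copy of the old entry, the lock pointer could plausibly be derived from that copy at the point of use, after `*old_pmd` has already been cleared:

```c
	/* Clear the pmd */
	pmd = *old_pmd;
	pmd_clear(old_pmd);

	/* ... at the point of actual use, ten lines later ... */

	/*
	 * Hypothetical: derive the pte ptl from the saved pmd value
	 * (the entry in *old_pmd is already cleared), then take and
	 * drop it so any parallel page-table walk still holding the
	 * lock finishes before the entry is installed in new_pmd.
	 */
	pte_ptl = pte_lockptr(mm, &pmd);
	spin_lock(pte_ptl);
	spin_unlock(pte_ptl);
```

Whether `pte_lockptr()` on a saved copy is acceptable here depends on the split-ptlock configuration, so this is only a sketch of how the assignment and the use could be kept adjacent.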

So I think this patch needs some cleanup.

              Linus

