[PATCH v2 1/2] mm/migrate_device.c: Copy pte dirty bit to page

Peter Xu peterx at redhat.com
Thu Aug 25 06:25:44 AEST 2022


On Wed, Aug 24, 2022 at 11:56:25AM +1000, Alistair Popple wrote:
> >> Still I don't know whether there'll be any side effect of having stale
> >> TLBs for !present ptes because I'm not familiar enough with the private
> >> dev swap migration code.  But I think having the flushes will be safe,
> >> even if redundant.
> 
> What side-effect were you thinking of? I don't see any issue with not
> flushing stale device-private entries prior to the migration, because
> they're not accessible anyway and shouldn't be in any TLB.

Sorry to be misleading; I never meant we must add them.  As I said, it's
just that I don't know this code well enough to know whether it's safe to
leave them out.

IIUC it's about whether a stale system-RAM TLB entry on another processor
would matter here.  E.g. some none pte that this code collected (bumping
both "cpages" and "npages" for a none pte) could still have a stale TLB
entry on other cores that makes the page writable there.
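
To make the worry concrete, the interleaving I have in mind is roughly the
following (possibly bogus, since I haven't verified it against the code):

	CPU0 (migration path)              CPU1 (another thread)
	---------------------              ---------------------
	migrate_vma_collect_pmd()
	  sees pte_none(), collects it
	  (cpages++, npages++); no TLB
	  flush issued for this entry      still holds a stale writable
	                                   TLB entry for that address and
	                                   writes through it
	migrate_vma_pages()
	  installs the new page            -> is that write lost?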

When I said I'm not familiar with the code, it's mainly about one thing I
never figured out myself: migrate_vma_collect_pmd() has this optimization
to trylock the page and collect it only if the trylock succeeds:

  /*
   * Optimize for the common case where page is only mapped once
   * in one process. If we can lock the page, then we can safely
   * set up a special migration page table entry now.
   */
  if (trylock_page(page)) {
         ...
  } else {
         put_page(page);
         mpfn = 0;
  }

But it's kind of against a pure "optimization" in that if the trylock
fails, we clear mpfn so src[i] will end up zero.  Do we then give up on
this page for good, or will we try lock_page() again somewhere?
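
For reference, my reading is that the cleared mpfn is what lands in the
array at the end of each iteration (paraphrasing the tail of the pte loop
from memory, so the exact code may differ slightly):

	next:
		migrate->dst[migrate->npages] = 0;
		migrate->src[migrate->npages++] = mpfn;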

The subsequent unmap op is also gated on "cpages", not "npages":

	if (args->cpages)
		migrate_vma_unmap(args);
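
For completeness, here is how I understand the two counters diverge in
migrate_vma_collect_pmd(), again paraphrased from memory so details may
be off; the none-pte case is the one I mentioned above:

	/* a none pte in an anonymous vma is still "collected" */
	if (pte_none(pte)) {
		if (vma_is_anonymous(vma)) {
			mpfn = MIGRATE_PFN_MIGRATE;
			migrate->cpages++;
		}
		goto next;
	}

whereas "npages" is bumped for every pte scanned via the "next:" path
quoted above, collected or not.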

So I never figured out how this code really works.  It'd be great if you
could shed some light on it.

Thanks,

-- 
Peter Xu
