[PATCH v2 1/2] mm/migrate_device.c: Copy pte dirty bit to page
Peter Xu
peterx at redhat.com
Fri Aug 19 00:44:46 AEST 2022
On Thu, Aug 18, 2022 at 02:34:45PM +0800, Huang, Ying wrote:
> > In this specific case, the only way to do safe tlb batching in my mind is:
> >
> >         pte_offset_map_lock();
> >         arch_enter_lazy_mmu_mode();
> >         // If any pending tlb, do it now
> >         if (mm_tlb_flush_pending())
> >                 flush_tlb_range(vma, start, end);
> >         else
> >                 flush_tlb_batched_pending();
>
> I don't think we need the above 4 lines, because we will flush the TLB
> before we access the pages.
Could you elaborate?
> Can you find any issue if we don't use the above 4 lines?
It seems okay to me to leave stale TLB entries around, at least within the
scope of this function. It only collects present ptes and flushes properly
for them. I don't quickly see any implication for the other, untouched ptes
- unlike e.g. mprotect(), where there's a strong barrier that no further
writes are allowed after mprotect() returns.
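
To make that concrete, the shape I have in mind is roughly the following
(a heavily simplified sketch, not the actual migrate_vma_collect_pmd(); the
function name and locals are illustrative only):

static void collect_sketch(struct vm_area_struct *vma, pmd_t *pmdp,
			   unsigned long start, unsigned long end)
{
	struct mm_struct *mm = vma->vm_mm;
	unsigned long addr, unmapped = 0;
	spinlock_t *ptl;
	pte_t *ptep;

	ptep = pte_offset_map_lock(mm, pmdp, start, &ptl);
	arch_enter_lazy_mmu_mode();

	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
		pte_t pte = *ptep;

		/* !present ptes are skipped, so no stale tlb is created for them */
		if (!pte_present(pte))
			continue;

		/* clear the pte; the real code installs a migration entry here */
		ptep_get_and_clear(mm, addr, ptep);
		unmapped++;
	}

	/* every present pte cleared above gets flushed before the lock drops */
	if (unmapped)
		flush_tlb_range(vma, start, end);

	arch_leave_lazy_mmu_mode();
	pte_unmap_unlock(ptep - 1, ptl);
}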
Still, I don't know whether there will be any side effect of having stale
TLB entries for !present ptes, because I'm not familiar enough with the
private dev swap migration code. But I think having them will be safe, even
if redundant.
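
For clarity, this is roughly where the four quoted lines would sit in such
a walk, with the arguments my shorthand above left out filled in (mm being
vma->vm_mm); again only a sketch, not the actual patch:

	ptep = pte_offset_map_lock(mm, pmdp, start, &ptl);
	arch_enter_lazy_mmu_mode();

	/* If any pending tlb, do it now, before touching any pte */
	if (mm_tlb_flush_pending(mm))
		flush_tlb_range(vma, start, end);
	else
		flush_tlb_batched_pending(mm);

	/* ... then the collection loop as sketched above ... */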
Thanks,
--
Peter Xu