[PATCH 4/4] mm: numa: Slow PTE scan rate if migration failures occur

Dave Chinner david at fromorbit.com
Fri Mar 20 15:13:57 AEDT 2015


On Thu, Mar 19, 2015 at 06:29:47PM -0700, Linus Torvalds wrote:
> On Thu, Mar 19, 2015 at 5:23 PM, Dave Chinner <david at fromorbit.com> wrote:
> >
> > Bit more variance there than the pte checking, but runtime
> > difference is in the noise - 5m4s vs 4m54s - and profiles are
> > identical to the pte checking version.
> 
> Ahh, so that "!(vma->vm_flags & VM_WRITE)" test works _almost_ as well
> as the original !pte_write() test.
> 
> Now, can you check that on top of rc4? If I've gotten everything
> right, we now have:
> 
>  - plain 3.19 (pte_write): 4m54s
>  - 3.19 with vm_flags & VM_WRITE: 5m4s
>  - 3.19 with pte_dirty: 5m20s

*nod*

> so the pte_dirty version seems to have been a bad choice indeed.
> 
> For 4.0-rc4, (which uses pte_dirty) you had 7m50s, so it's still
> _much_ worse, but I'm wondering whether that VM_WRITE test will at
> least shrink the difference like it does for 3.19.

Testing now. It's a bit faster - three runs gave 7m35s, 7m20s and
7m36s. IOWs, it's a bit better, but not significantly. Page migrations
are pretty much unchanged, too:

	   558,632      migrate:mm_migrate_pages ( +-  6.38% )

> And the VM_WRITE test should be stable and not have any subtle
> interaction with the other changes that the numa pte things
> introduced. It would be good to see if the profiles then pop something
> *else* up as the performance difference (which I'm sure will remain,
> since the 7m50s was so far off).

No, nothing new pops up in the kernel profiles. All the system CPU
time is still being spent sending IPIs on the tlb flush path.

Cheers,

Dave.
-- 
Dave Chinner
david at fromorbit.com