[PATCH next] mm: make swapoff more robust against soft dirty
Hugh Dickins
hughd at google.com
Sun Jan 10 11:59:42 AEDT 2016

Both s390 and powerpc have hit the issue of swapoff hanging, when
CONFIG_HAVE_ARCH_SOFT_DIRTY and CONFIG_MEM_SOFT_DIRTY ifdefs were
not quite as x86_64 had them. I think it would be much clearer if
HAVE_ARCH_SOFT_DIRTY were just a Kconfig option set by architectures
to determine whether the MEM_SOFT_DIRTY option should be offered,
and the actual code depended upon CONFIG_MEM_SOFT_DIRTY alone.

But I won't embark on that change myself: instead, make swapoff more
robust by using pte_swp_clear_soft_dirty() on each pte it encounters,
without an explicit #ifdef CONFIG_MEM_SOFT_DIRTY. That call is a
no-op unless soft dirty is fully turned on: whether the bit in
question is defined as 0 or the asm-generic fallback is used,
clearing it leaves the pte unchanged.
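
For reference, the no-op comes from one of two places; a rough sketch
(guards and definitions from my reading of the 4.4-era tree, so treat
the exact forms as an assumption rather than a quotation):

/*
 * Case 1: an architecture without CONFIG_HAVE_ARCH_SOFT_DIRTY uses
 * the asm-generic fallback, which returns the pte unchanged:
 */
static inline pte_t pte_swp_clear_soft_dirty(pte_t pte)
{
	return pte;
}

/*
 * Case 2: x86_64 with CONFIG_MEM_SOFT_DIRTY=n defines the swap
 * soft dirty bit itself as 0, so clearing it changes nothing:
 */
#define _PAGE_SWP_SOFT_DIRTY	(_AT(pteval_t, 0))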
Why "maybe" in maybe_same_pte()? Rename it pte_same_as_swp().
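
To convince myself that the single normalised comparison covers both
arms of the old #ifdef, a small userspace model (pte reduced to an
unsigned long with a made-up soft dirty bit; everything here is a toy
for illustration, not kernel API):

#include <assert.h>
#include <stdio.h>

#define SWP_SOFT_DIRTY	0x1UL	/* made-up bit position for the model */

typedef unsigned long pte_t;

static int pte_same(pte_t a, pte_t b) { return a == b; }
static pte_t pte_swp_mksoft_dirty(pte_t pte) { return pte | SWP_SOFT_DIRTY; }
static pte_t pte_swp_clear_soft_dirty(pte_t pte) { return pte & ~SWP_SOFT_DIRTY; }

/* Old form: compare against both the clean and the soft-dirty variant. */
static int maybe_same_pte(pte_t pte, pte_t swp_pte)
{
	return pte_same(pte, swp_pte) ||
	       pte_same(pte, pte_swp_mksoft_dirty(swp_pte));
}

/* New form: normalise the pte first, then compare once. */
static int pte_same_as_swp(pte_t pte, pte_t swp_pte)
{
	return pte_same(pte_swp_clear_soft_dirty(pte), swp_pte);
}

int main(void)
{
	pte_t swp_pte = 0x1000;	/* swap pte: soft dirty bit clear */
	pte_t pte;

	/* Both forms agree, whether or not pte picked up soft dirty. */
	for (pte = 0xffe; pte <= 0x1002; pte++)
		assert(maybe_same_pte(pte, swp_pte) ==
		       pte_same_as_swp(pte, swp_pte));
	printf("old and new comparisons agree\n");
	return 0;
}
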
Signed-off-by: Hugh Dickins <hughd at google.com>
---
mm/swapfile.c | 18 ++++--------------
1 file changed, 4 insertions(+), 14 deletions(-)
--- 4.4-next/mm/swapfile.c 2016-01-06 11:54:46.327006983 -0800
+++ linux/mm/swapfile.c 2016-01-09 13:39:19.632872694 -0800
@@ -1109,19 +1109,9 @@ unsigned int count_swap_pages(int type,
 }
 #endif /* CONFIG_HIBERNATION */
 
-static inline int maybe_same_pte(pte_t pte, pte_t swp_pte)
+static inline int pte_same_as_swp(pte_t pte, pte_t swp_pte)
 {
-#ifdef CONFIG_MEM_SOFT_DIRTY
-	/*
-	 * When pte keeps soft dirty bit the pte generated
-	 * from swap entry does not has it, still it's same
-	 * pte from logical point of view.
-	 */
-	pte_t swp_pte_dirty = pte_swp_mksoft_dirty(swp_pte);
-	return pte_same(pte, swp_pte) || pte_same(pte, swp_pte_dirty);
-#else
-	return pte_same(pte, swp_pte);
-#endif
+	return pte_same(pte_swp_clear_soft_dirty(pte), swp_pte);
 }
 
 /*
@@ -1150,7 +1140,7 @@ static int unuse_pte(struct vm_area_stru
 	}
 
 	pte = pte_offset_map_lock(vma->vm_mm, pmd, addr, &ptl);
-	if (unlikely(!maybe_same_pte(*pte, swp_entry_to_pte(entry)))) {
+	if (unlikely(!pte_same_as_swp(*pte, swp_entry_to_pte(entry)))) {
 		mem_cgroup_cancel_charge(page, memcg, false);
 		ret = 0;
 		goto out;
@@ -1208,7 +1198,7 @@ static int unuse_pte_range(struct vm_are
 		 * swapoff spends a _lot_ of time in this loop!
 		 * Test inline before going to call unuse_pte.
 		 */
-		if (unlikely(maybe_same_pte(*pte, swp_pte))) {
+		if (unlikely(pte_same_as_swp(*pte, swp_pte))) {
			pte_unmap(pte);
 			ret = unuse_pte(vma, pmd, addr, entry, page);
 			if (ret)