[PATCH v2 00/13] mm/debug_vm_pgtable fixes
Aneesh Kumar K.V
aneesh.kumar at linux.ibm.com
Wed Aug 19 23:45:41 AEST 2020
"Aneesh Kumar K.V" <aneesh.kumar at linux.ibm.com> writes:
> This patch series includes fixes for the debug_vm_pgtable test code so that
> it follows the page table update rules correctly. The first two patches introduce
> changes w.r.t. ppc64 and are included in this series for completeness. We can
> merge them via the ppc64 tree if required.
>
> The hugetlb test is disabled on ppc64 because it needs a larger change to satisfy
> the page table update rules.
>
> Changes from V1:
> * Address review feedback
> * drop test specific pfn_pte and pfn_pmd.
> * Update ppc64 page table helper to add _PAGE_PTE
>
> Aneesh Kumar K.V (13):
> powerpc/mm: Add DEBUG_VM WARN for pmd_clear
> powerpc/mm: Move setting pte specific flags to pfn_pte
> mm/debug_vm_pgtable/ppc64: Avoid setting top bits in random value
> mm/debug_vm_pgtables/hugevmap: Use the arch helper to identify huge
> vmap support.
> mm/debug_vm_pgtable/savedwrite: Enable savedwrite test with
> CONFIG_NUMA_BALANCING
> mm/debug_vm_pgtable/THP: Mark the pte entry huge before using
> set_pmd/pud_at
> mm/debug_vm_pgtable/set_pte/pmd/pud: Don't use set_*_at to update an
> existing pte entry
> mm/debug_vm_pgtable/thp: Use page table deposit/withdraw with THP
> mm/debug_vm_pgtable/locks: Move non page table modifying test together
> mm/debug_vm_pgtable/locks: Take correct page table lock
> mm/debug_vm_pgtable/pmd_clear: Don't use pmd/pud_clear on pte entries
> mm/debug_vm_pgtable/hugetlb: Disable hugetlb test on ppc64
> mm/debug_vm_pgtable: populate a pte entry before fetching it
>
> arch/powerpc/include/asm/book3s/64/pgtable.h | 29 +++-
> arch/powerpc/include/asm/nohash/pgtable.h | 5 -
> arch/powerpc/mm/book3s64/pgtable.c | 2 +-
> arch/powerpc/mm/pgtable.c | 5 -
> include/linux/io.h | 12 ++
> mm/debug_vm_pgtable.c | 151 +++++++++++--------
> 6 files changed, 127 insertions(+), 77 deletions(-)
>
BTW I picked the wrong branch when sending this. Attaching the diff
against what I intended to send: pfn_pmd() no longer sets _PAGE_PTE,
because that is now handled by pmd_mkhuge().
diff --git a/arch/powerpc/mm/book3s64/pgtable.c b/arch/powerpc/mm/book3s64/pgtable.c
index 3b4da7c63e28..e18ae50a275c 100644
--- a/arch/powerpc/mm/book3s64/pgtable.c
+++ b/arch/powerpc/mm/book3s64/pgtable.c
@@ -141,7 +141,7 @@ pmd_t pfn_pmd(unsigned long pfn, pgprot_t pgprot)
 	unsigned long pmdv;
 
 	pmdv = (pfn << PAGE_SHIFT) & PTE_RPN_MASK;
-	return __pmd(pmdv | pgprot_val(pgprot) | _PAGE_PTE);
+	return pmd_set_protbits(__pmd(pmdv), pgprot);
 }
 
 pmd_t mk_pmd(struct page *page, pgprot_t pgprot)
diff --git a/mm/debug_vm_pgtable.c b/mm/debug_vm_pgtable.c
index 7d9f8e1d790f..cad61d22f33a 100644
--- a/mm/debug_vm_pgtable.c
+++ b/mm/debug_vm_pgtable.c
@@ -229,7 +229,7 @@ static void __init pmd_huge_tests(pmd_t *pmdp, unsigned long pfn, pgprot_t prot)
 
 static void __init pmd_savedwrite_tests(unsigned long pfn, pgprot_t prot)
 {
-	pmd_t pmd = pfn_pmd(pfn, prot);
+	pmd_t pmd = pmd_mkhuge(pfn_pmd(pfn, prot));
 
 	if (!IS_ENABLED(CONFIG_NUMA_BALANCING))
 		return;
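
To make the intended usage concrete, here is a rough sketch (illustrative
only, not code from the series; the function name example_huge_pmd_test is
made up) of how a test is expected to build a huge pmd entry after this
change: pfn_pmd() fills in just the pfn and protection bits, and
pmd_mkhuge() marks the entry huge, which on ppc64 is where _PAGE_PTE now
gets added.

static void __init example_huge_pmd_test(unsigned long pfn, pgprot_t prot)
{
	pmd_t pmd;

	if (!has_transparent_hugepage())
		return;

	/*
	 * pfn_pmd() alone is no longer enough for a huge entry: it only
	 * encodes the pfn and the protection bits.
	 */
	pmd = pfn_pmd(pfn, prot);

	/*
	 * pmd_mkhuge() marks the entry huge; on ppc64 this is where
	 * _PAGE_PTE is set.
	 */
	pmd = pmd_mkhuge(pmd);

	WARN_ON(!pmd_trans_huge(pmd));
}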