[PATCH V3] powerpc/mm/hash64: memset the pagetable pages on allocation.

Aneesh Kumar K.V aneesh.kumar at linux.vnet.ibm.com
Tue Feb 13 22:09:33 AEDT 2018


On powerpc we allocate page table pages from slab caches of different
sizes. Currently we have a constructor that zeroes out the objects when
we allocate them for the first time, and we expect the objects to be
zeroed out when we free them back to the slab cache. This happens in
the unmap path; for hugetlb pages we call huge_pte_get_and_clear() to
do that. With the current configuration of page table sizes, both pud
and pgd level tables are allocated from the same slab cache. At the pud
level, we use the second half of the table to store slot information,
but we never clear it when unmapping. When such a freed object gets
allocated at the pgd level, part of the page table page is not
initialized correctly, which results in a kernel crash.

Simplify this by zeroing the object right after kmem_cache_alloc()
instead of relying on the slab constructor.

Signed-off-by: Aneesh Kumar K.V <aneesh.kumar at linux.vnet.ibm.com>
---
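Note (below the cut line, not for the changelog): the same
zero-on-allocation pattern could in principle be applied to the pud
path as well. A minimal sketch only, assuming the pud table is
allocated from PGT_CACHE(PUD_INDEX_SIZE) and spans PUD_TABLE_SIZE
bytes, mirroring the pgd change in the diff below:

	/*
	 * Sketch only, not part of this patch: zero the pud page on
	 * allocation so that stale slot information left in the second
	 * half of a recycled object can never leak into a new table.
	 * PGT_CACHE(PUD_INDEX_SIZE)/PUD_TABLE_SIZE are assumptions here.
	 */
	static inline pud_t *pud_alloc_one(struct mm_struct *mm,
					   unsigned long addr)
	{
		pud_t *pud;

		pud = kmem_cache_alloc(PGT_CACHE(PUD_INDEX_SIZE),
				       pgtable_gfp_flags(mm, GFP_KERNEL));
		if (!pud)
			return NULL;
		memset(pud, 0, PUD_TABLE_SIZE);
		return pud;
	}

As with pgd_alloc() below, the memset() replaces reliance on the slab
constructor, which runs only when an object is first created, not on
every allocation.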
 arch/powerpc/include/asm/book3s/64/pgalloc.h | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/pgalloc.h b/arch/powerpc/include/asm/book3s/64/pgalloc.h
index 53df86d3cfce..e4d154a4d114 100644
--- a/arch/powerpc/include/asm/book3s/64/pgalloc.h
+++ b/arch/powerpc/include/asm/book3s/64/pgalloc.h
@@ -73,10 +73,13 @@ static inline void radix__pgd_free(struct mm_struct *mm, pgd_t *pgd)
 
 static inline pgd_t *pgd_alloc(struct mm_struct *mm)
 {
+	pgd_t *pgd;
 	if (radix_enabled())
 		return radix__pgd_alloc(mm);
-	return kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
-		pgtable_gfp_flags(mm, GFP_KERNEL));
+	pgd = kmem_cache_alloc(PGT_CACHE(PGD_INDEX_SIZE),
+			       pgtable_gfp_flags(mm, GFP_KERNEL));
+	memset(pgd, 0, PGD_TABLE_SIZE);
+	return pgd;
 }
 
 static inline void pgd_free(struct mm_struct *mm, pgd_t *pgd)
-- 
2.14.3


