[PATCH] powerpc/mm: Fix potential access to freed pages when using hugetlbfs

Benjamin Herrenschmidt benh at kernel.crashing.org
Tue Jun 16 12:53:43 EST 2009


When using 64k page sizes, our PTE pages are split into two halves,
the second half containing the "extension" used to keep track of
individual 4k pages when not using HW 64k pages.
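
For reference, that second half sits at a fixed offset from the PTE
pointer, roughly as in the sketch below (pte_second_half() is a made-up
name for illustration; only pte_val() and PTRS_PER_PTE come from the
actual code):

	/* Sketch only, not the exact kernel code: read the per-4k-subpage
	 * "hidx" word that lives in the second half of a 64k PTE page,
	 * PTRS_PER_PTE entries past the PTE itself.
	 */
	static inline unsigned long pte_second_half(pte_t *ptep)
	{
		return pte_val(*(ptep + PTRS_PER_PTE));
	}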

However, our page tables used for hugetlb have a slightly different
format and don't carry that "second half".

Our code that batches PTEs to be invalidated unconditionally reads
the "second half" (to put it into the batch), which means that when
called to invalidate hugetlb PTEs, it will access unrelated memory.
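
Roughly, the batching code builds the real PTE like this (a simplified
sketch of the hpte_need_flush() path; the surrounding code is elided):

	/* Simplified sketch of the flush-batching path: the real PTE is
	 * assembled from both halves before being queued for invalidation.
	 */
	real_pte_t rpte = __real_pte(__pte(pte), ptep);
	/* With the old __real_pte(), the load at ptep + PTRS_PER_PTE
	 * happens even when ptep points into a hugetlb page table, which
	 * has no second half, so the access lands in whatever memory
	 * happens to follow that table.
	 */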

This is an access to potentially freed memory, and it breaks visibly
when CONFIG_DEBUG_PAGEALLOC is enabled, since that option unmaps freed
pages from the kernel linear mapping.

This fixes it by only accessing the second half when the _PAGE_COMBO
bit is set in the first half, which indicates that we are dealing with
a "combo" page representing 16x4k subpages. Anything else shouldn't
have this bit set and thus doesn't require loading from the second half.
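
The consumers only look at that cached value for combo pages anyway, as
__rpte_to_hidx() (visible in the diff context below) shows, so a sketch
of a read-out stays correct with 0 stored for the non-combo case (rpte
and index are illustrative names):

	/* Sketch: fetch the hash slot index for 4k subpage 'index'.
	 * A combo page uses the cached second half; anything else
	 * (e.g. a hugetlb PTE) takes bits 12..15 of the PTE value,
	 * so the 0 now stored by __real_pte() is never consumed.
	 */
	unsigned long hidx = __rpte_to_hidx(rpte, index);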

Signed-off-by: Benjamin Herrenschmidt <benh at kernel.crashing.org>
---


 arch/powerpc/include/asm/pte-hash64-64k.h |    3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

--- linux-work.orig/arch/powerpc/include/asm/pte-hash64-64k.h	2009-06-16 11:27:05.000000000 +1000
+++ linux-work/arch/powerpc/include/asm/pte-hash64-64k.h	2009-06-16 12:03:29.000000000 +1000
@@ -47,7 +47,8 @@
  * generic accessors and iterators here
  */
 #define __real_pte(e,p) 	((real_pte_t) { \
-	(e), pte_val(*((p) + PTRS_PER_PTE)) })
+			(e), ((e) & _PAGE_COMBO) ? \
+				(pte_val(*((p) + PTRS_PER_PTE))) : 0 })
 #define __rpte_to_hidx(r,index)	((pte_val((r).pte) & _PAGE_COMBO) ? \
         (((r).hidx >> ((index)<<2)) & 0xf) : ((pte_val((r).pte) >> 12) & 0xf))
 #define __rpte_to_pte(r)	((r).pte)

