2.6.18-rc3->rc4 hugetlbfs regression
Adam Litke
agl at us.ibm.com
Thu Aug 17 01:00:04 EST 2006
On Tue, 2006-08-15 at 08:22 -0700, Dave Hansen wrote:
> kernel BUG in cache_free_debugcheck at mm/slab.c:2748!
Alright, this one is only triggered when slab debugging is enabled. Hugepte
tables allocated from the slab are assumed to be aligned on a
HUGEPTE_TABLE_SIZE boundary, and the free path relies on that assumption: it
uses the pointer's lowest nibble to pass around an index into an array of
kmem_cache pointers. With slab debugging turned on, the slab itself is still
aligned, but the "working" object pointer is not, so a full nibble is no
longer available for the PGF_CACHENUM_MASK.

The following patch reduces PGF_CACHENUM_MASK to cover only the two least
significant bits, which is enough to encode the current four pgtable cache
types, and then uses that constant to mask out the cache-number bits of the
huge pte pointer.
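For readers not steeped in this code: the underlying trick is ordinary low-bit
pointer tagging. Below is a minimal standalone sketch of the idea (the macro
and function names here are illustrative, not the kernel's): an object aligned
to at least four bytes has two zero low bits, so a two-bit cache number can
ride along in them and be masked back out on the free side.

	/* Standalone sketch of low-bit pointer tagging; names are
	 * illustrative, not taken from the kernel source. */
	#include <assert.h>
	#include <stdio.h>

	#define CACHENUM_MASK	0x3UL	/* two bits: four cache types */

	static unsigned long pack(void *obj, unsigned long cachenum)
	{
		/* Only valid if the object really is aligned; with slab
		 * debugging the handed-out pointer may not be. */
		assert(((unsigned long)obj & CACHENUM_MASK) == 0);
		assert(cachenum <= CACHENUM_MASK);
		return (unsigned long)obj | cachenum;
	}

	static void *unpack_obj(unsigned long val)
	{
		return (void *)(val & ~CACHENUM_MASK);
	}

	static unsigned long unpack_cachenum(unsigned long val)
	{
		return val & CACHENUM_MASK;
	}

	int main(void)
	{
		static unsigned long table[8] __attribute__((aligned(16)));
		unsigned long val = pack(table, 2);

		printf("obj=%p cachenum=%lu\n", unpack_obj(val),
		       unpack_cachenum(val));
		return 0;
	}

The failure mode here is the same one the first assert guards against: once
redzone padding shifts the object pointer, a mask that claims more low bits
than the alignment actually guarantees will pull object-offset bits into the
cache number. Narrowing the mask to the two bits actually needed avoids that.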
Signed-off-by: Adam Litke <agl at us.ibm.com>
---
 arch/powerpc/mm/hugetlbpage.c |    2 +-
 include/asm-powerpc/pgalloc.h |    2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
diff -upN reference/arch/powerpc/mm/hugetlbpage.c current/arch/powerpc/mm/hugetlbpage.c
--- reference/arch/powerpc/mm/hugetlbpage.c
+++ current/arch/powerpc/mm/hugetlbpage.c
@@ -153,7 +153,7 @@ static void free_hugepte_range(struct mm
 	hpdp->pd = 0;
 	tlb->need_flush = 1;
 	pgtable_free_tlb(tlb, pgtable_free_cache(hugepte, HUGEPTE_CACHE_NUM,
-						 HUGEPTE_TABLE_SIZE-1));
+						 PGF_CACHENUM_MASK));
 }
 
 #ifdef CONFIG_PPC_64K_PAGES
diff -upN reference/include/asm-powerpc/pgalloc.h current/include/asm-powerpc/pgalloc.h
--- reference/include/asm-powerpc/pgalloc.h
+++ current/include/asm-powerpc/pgalloc.h
@@ -117,7 +117,7 @@ static inline void pte_free(struct page
 	pte_free_kernel(page_address(ptepage));
 }
 
-#define PGF_CACHENUM_MASK	0xf
+#define PGF_CACHENUM_MASK	0x3
 
 typedef struct pgtable_free {
 	unsigned long val;
--
Adam Litke - (agl at us.ibm.com)
IBM Linux Technology Center