[PATCH] PPC40x: Limit Allocable RAM During Early Mapping
Grant Erickson
gerickson at nuovations.com
Thu Oct 30 08:41:14 EST 2008
If the size of RAM is not an exact power of two, we may not have
covered RAM in its entirety with large 16 and 4 MiB
pages. Consequently, restrict the top end of RAM currently allocable
by updating '__initial_memory_limit_addr' so that calls to the LMB to
allocate PTEs for "tail" coverage with normal-sized pages (or other
reasons) do not attempt to allocate outside the allowed range.
Signed-off-by: Grant Erickson <gerickson at nuovations.com>
---
This bug was discovered in the course of working on CONFIG_LOGBUFFER support
(see http://ozlabs.org/pipermail/linuxppc-dev/2008-October/064685.html).
However, the bug is triggered quite easily independent of that feature
by placing a memory limit via the 'mem=' kernel command line that results in
a memory size that is not equal to an exact power of two.
For example, on the AMCC PowerPC 405EXr "Haleakala" board with 256 MiB
of RAM, mmu_mapin_ram() normally covers RAM with precisely sixteen 16 MiB
large pages. However, if a memory limit of 256 MiB - 20 KiB (as might
be the case for CONFIG_LOGBUFFER) is put in place with
"mem=268414976", then large pages only cover (16 MiB * 15) + (4 MiB *
3) = 252 MiB with a 4 MiB - 20 KiB "tail" to cover with normal, 4 KiB
pages via map_page().
Unfortunately, if __initial_memory_limit_addr is not updated from its
initial value of 0x10000000 (256 MiB) to reflect what was actually
mapped via mmu_mapin_ram(), the following happens during the "tail"
mapping when the first PTE is allocated at 0x0FFFA000 (rather than the
desired 0x0FBFF000):
  mapin_ram
    mmu_mapin_ram
    map_page
      pte_alloc_kernel
        pte_alloc_one_kernel
          early_get_page
            lmb_alloc_base
          clear_page
            clear_pages
              dcbz 0,page    <-- BOOM!

a non-recoverable page fault.
arch/powerpc/mm/40x_mmu.c | 16 ++++++++++++++--
1 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/mm/40x_mmu.c b/arch/powerpc/mm/40x_mmu.c
index cecbbc7..29954dc 100644
--- a/arch/powerpc/mm/40x_mmu.c
+++ b/arch/powerpc/mm/40x_mmu.c
@@ -93,7 +93,7 @@ void __init MMU_init_hw(void)
unsigned long __init mmu_mapin_ram(void)
{
- unsigned long v, s;
+ unsigned long v, s, mapped;
phys_addr_t p;
v = KERNELBASE;
@@ -130,5 +130,17 @@ unsigned long __init mmu_mapin_ram(void)
s -= LARGE_PAGE_SIZE_4M;
}
- return total_lowmem - s;
+ mapped = total_lowmem - s;
+
+ /* If the size of RAM is not an exact power of two, we may not
+ * have covered RAM in its entirety with 16 and 4 MiB
+ * pages. Consequently, restrict the top end of RAM currently
+ * allocable so that calls to the LMB to allocate PTEs for "tail"
+ * coverage with normal-sized pages (or other reasons) do not
+ * attempt to allocate outside the allowed range.
+ */
+
+ __initial_memory_limit_addr = memstart_addr + mapped;
+
+ return mapped;
}
--
1.6.0.1