[PATCH] powerpc/mm: Ensure Huge-page memory is free before allocation

Vaibhav Jain vaibhav at linux.ibm.com
Tue Jun 18 14:46:09 AEST 2019


We recently discovered a bug where physical memory meant for the
allocation of huge pages was inadvertently allocated by another component
during early boot. The behavior of memblock_reserve(), which won't
indicate whether an existing reserved block overlaps with the
requested reservation, makes such bugs hard to investigate.

Hence this patch adds a memblock reservation check in
htab_dt_scan_hugepage_blocks() just before the call to memblock_reserve()
to ensure that the physical memory being reserved is not
already reserved by someone else. If it is, we panic the
kernel to ensure that the user of this huge page doesn't accidentally
stomp on memory allocated to someone else.

Signed-off-by: Vaibhav Jain <vaibhav at linux.ibm.com>
---
 arch/powerpc/mm/book3s64/hash_utils.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/arch/powerpc/mm/book3s64/hash_utils.c b/arch/powerpc/mm/book3s64/hash_utils.c
index 28ced26f2a00..a05be3adb8c9 100644
--- a/arch/powerpc/mm/book3s64/hash_utils.c
+++ b/arch/powerpc/mm/book3s64/hash_utils.c
@@ -516,6 +516,11 @@ static int __init htab_dt_scan_hugepage_blocks(unsigned long node,
 	printk(KERN_INFO "Huge page(16GB) memory: "
 			"addr = 0x%lX size = 0x%lX pages = %d\n",
 			phys_addr, block_size, expected_pages);
+
+	/* Ensure no one else has reserved memory for huge pages before */
+	BUG_ON(memblock_is_region_reserved(phys_addr,
+					   block_size * expected_pages));
+
 	if (phys_addr + block_size * expected_pages <= memblock_end_of_DRAM()) {
 		memblock_reserve(phys_addr, block_size * expected_pages);
 		pseries_add_gpage(phys_addr, block_size, expected_pages);
-- 
2.21.0


