[PATCH 43/49] powerpc/mm: rely on generic vmemmap_can_optimize() to simplify code

Muchun Song songmuchun at bytedance.com
Sun Apr 5 22:52:34 AEST 2026


The goal of this patch is to simplify the code by removing an unnecessary
architecture-specific override.

After unifying the DAX and HugeTLB vmemmap optimizations, we can rely on
the generic rule in vmemmap_can_optimize() instead of keeping an
architecture-specific override of it.

In radix__vmemmap_populate(), we can rely directly on
section_vmemmap_optimizable(__pfn_to_section(pfn)), because the upper
layer (sparse_add_section()) has already set the section order correctly
when the optimization condition was met.

In the Hash MMU fallback case (!radix_enabled()) inside vmemmap_populate(),
we reset the section order to 0. This is necessary because sparse_add_section()
may have optimistically set the section order assuming the optimization could
be enabled, but Hash MMU does not support it. Resetting the order ensures
that section_vmemmap_pages() computes the unoptimized page count.

Signed-off-by: Muchun Song <songmuchun at bytedance.com>
---
 arch/powerpc/include/asm/book3s/64/radix.h |  5 -----
 arch/powerpc/mm/book3s64/radix_pgtable.c   | 12 +-----------
 arch/powerpc/mm/init_64.c                  |  1 +
 3 files changed, 2 insertions(+), 16 deletions(-)

diff --git a/arch/powerpc/include/asm/book3s/64/radix.h b/arch/powerpc/include/asm/book3s/64/radix.h
index 2600defa2dc2..18e28deba255 100644
--- a/arch/powerpc/include/asm/book3s/64/radix.h
+++ b/arch/powerpc/include/asm/book3s/64/radix.h
@@ -352,10 +352,5 @@ int radix__create_section_mapping(unsigned long start, unsigned long end,
 				  int nid, pgprot_t prot);
 int radix__remove_section_mapping(unsigned long start, unsigned long end);
 #endif /* CONFIG_MEMORY_HOTPLUG */
-
-#ifdef CONFIG_ARCH_WANT_OPTIMIZE_DAX_VMEMMAP
-#define vmemmap_can_optimize vmemmap_can_optimize
-bool vmemmap_can_optimize(struct vmem_altmap *altmap, struct dev_pagemap *pgmap);
-#endif
 #endif /* __ASSEMBLER__ */
 #endif
diff --git a/arch/powerpc/mm/book3s64/radix_pgtable.c b/arch/powerpc/mm/book3s64/radix_pgtable.c
index 714d5cdc10ec..36a69589fae4 100644
--- a/arch/powerpc/mm/book3s64/radix_pgtable.c
+++ b/arch/powerpc/mm/book3s64/radix_pgtable.c
@@ -977,16 +977,6 @@ int __meminit radix__vmemmap_create_mapping(unsigned long start,
 	return 0;
 }
 
-#ifdef CONFIG_ARCH_WANT_OPTIMIZE_DAX_VMEMMAP
-bool vmemmap_can_optimize(struct vmem_altmap *altmap, struct dev_pagemap *pgmap)
-{
-	if (radix_enabled())
-		return __vmemmap_can_optimize(altmap, pgmap);
-
-	return false;
-}
-#endif
-
 int __meminit vmemmap_check_pmd(pmd_t *pmdp, int node,
 				unsigned long addr, unsigned long next)
 {
@@ -1126,7 +1116,7 @@ int __meminit radix__vmemmap_populate(unsigned long start, unsigned long end, in
 	pte_t *pte;
 	unsigned long pfn = page_to_pfn((struct page *)start);
 
-	if (vmemmap_can_optimize(altmap, pgmap) && section_vmemmap_optimizable(__pfn_to_section(pfn)))
+	if (section_vmemmap_optimizable(__pfn_to_section(pfn)))
 		return vmemmap_populate_compound_pages(pfn, start, end, node, pgmap);
 	/*
 	 * If altmap is present, Make sure we align the start vmemmap addr
diff --git a/arch/powerpc/mm/init_64.c b/arch/powerpc/mm/init_64.c
index 8f4aa5b32186..56cbea89d304 100644
--- a/arch/powerpc/mm/init_64.c
+++ b/arch/powerpc/mm/init_64.c
@@ -283,6 +283,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 		return radix__vmemmap_populate(start, end, node, altmap, pgmap);
 #endif
 
+	section_set_order(__pfn_to_section(page_to_pfn((struct page *)start)), 0);
 	return __vmemmap_populate(start, end, node, altmap);
 }
 
-- 
2.20.1
