[PATCH v4 5/5] mm/mm_init: Fix uninitialized struct pages for ZONE_DEVICE
David Hildenbrand (Arm)
david at kernel.org
Thu Apr 23 05:12:13 AEST 2026
On 4/22/26 10:14, Muchun Song wrote:
> If DAX memory is hotplugged into an unoccupied subsection of an early
> section, section_activate() reuses the unoptimized boot memmap.
> However, compound_nr_pages() still assumes that vmemmap optimization is
> in effect and initializes only the reduced number of struct pages. As a
> result, the remaining tail struct pages are left uninitialized, which
> can later lead to unexpected behavior or crashes.
>
> Fix this by treating early sections as unoptimized when calculating how
> many struct pages to initialize.
>
> Fixes: 6fd3620b3428 ("mm/page_alloc: reuse tail struct pages for compound devmaps")
> Signed-off-by: Muchun Song <songmuchun at bytedance.com>
> ---
> mm/mm_init.c | 13 ++++++++++---
> 1 file changed, 10 insertions(+), 3 deletions(-)
>
> diff --git a/mm/mm_init.c b/mm/mm_init.c
> index 9d0fe79a94de..3d5af40d0943 100644
> --- a/mm/mm_init.c
> +++ b/mm/mm_init.c
> @@ -1056,10 +1056,17 @@ static void __ref __init_zone_device_page(struct page *page, unsigned long pfn,
> * of how the sparse_vmemmap internals handle compound pages in the lack
> * of an altmap. See vmemmap_populate_compound_pages().
> */
> -static inline unsigned long compound_nr_pages(struct vmem_altmap *altmap,
> +static inline unsigned long compound_nr_pages(unsigned long pfn,
> + struct vmem_altmap *altmap,
> struct dev_pagemap *pgmap)
> {
> - if (!vmemmap_can_optimize(altmap, pgmap))
> + /*
> + * If DAX memory is hot-plugged into an unoccupied subsection
> + * of an early section, the unoptimized boot memmap is reused.
> + * See section_activate().
> + */
> + if (early_section(__pfn_to_section(pfn)) ||
> + !vmemmap_can_optimize(altmap, pgmap))
> return pgmap_vmemmap_nr(pgmap);
>
> return VMEMMAP_RESERVE_NR * (PAGE_SIZE / sizeof(struct page));
> @@ -1129,7 +1136,7 @@ void __ref memmap_init_zone_device(struct zone *zone,
> continue;
>
> memmap_init_compound(page, pfn, zone_idx, nid, pgmap,
> - compound_nr_pages(altmap, pgmap));
> + compound_nr_pages(pfn, altmap, pgmap));
> }
>
> /*
Nasty, but yes, we cannot really optimize in that case.
Acked-by: David Hildenbrand (Arm) <david at kernel.org>
--
Cheers,
David