[PATCH v4 2/5] mm/sparse-vmemmap: Pass @pgmap argument to memory deactivation paths

Muchun Song muchun.song at linux.dev
Thu Apr 23 12:14:16 AEST 2026



> On Apr 23, 2026, at 02:50, David Hildenbrand (Arm) <david at kernel.org> wrote:
> 
> On 4/22/26 10:14, Muchun Song wrote:
>> Currently, the memory hot-remove call chain -- arch_remove_memory(),
>> __remove_pages(), sparse_remove_section() and section_deactivate() --
>> does not carry the struct dev_pagemap pointer. This prevents the lower
>> levels from knowing whether the section was originally populated with
>> vmemmap optimizations (e.g., DAX with vmemmap optimization enabled).
>> 
>> Without this information, we cannot call vmemmap_can_optimize() to
>> determine if the vmemmap pages were optimized. As a result, the vmemmap
>> page accounting during teardown will mistakenly assume a non-optimized
>> allocation, leading to incorrect memmap statistics.
>> 
>> To lay the groundwork for fixing the vmemmap page accounting, we need
>> to pass the @pgmap pointer down to the deactivation location. Plumb the
>> @pgmap argument through the APIs of arch_remove_memory(), __remove_pages()
>> and sparse_remove_section(), mirroring the corresponding *_activate()
>> paths.
>> 
>> Signed-off-by: Muchun Song <songmuchun at bytedance.com>
>> Acked-by: Mike Rapoport (Microsoft) <rppt at kernel.org>
>> Reviewed-by: Oscar Salvador <osalvador at suse.de>
> 
> 
> [...]
> 
>> static void depopulate_section_memmap(unsigned long pfn, unsigned long nr_pages,
>> - 		struct vmem_altmap *altmap)
>> + 		struct vmem_altmap *altmap, struct dev_pagemap *pgmap)
>> {
>> 	unsigned long start = (unsigned long) pfn_to_page(pfn);
>> 	unsigned long end = start + nr_pages * sizeof(struct page);
>> @@ -746,7 +746,7 @@ static int fill_subsection_map(unsigned long pfn, unsigned long nr_pages)
>>  * usage map, but still need to free the vmemmap range.
>>  */
>> static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
>> - 		struct vmem_altmap *altmap)
>> + 		struct vmem_altmap *altmap, struct dev_pagemap *pgmap)
>> {
>> 	struct mem_section *ms = __pfn_to_section(pfn);
>> 	bool section_is_early = early_section(ms);
>> @@ -784,7 +784,7 @@ static void section_deactivate(unsigned long pfn, unsigned long nr_pages,
>>  * section_activate() and pfn_valid() .
>>  */
>> 	if (!section_is_early)
>> - 		depopulate_section_memmap(pfn, nr_pages, altmap);
>> + 		depopulate_section_memmap(pfn, nr_pages, altmap, pgmap);
>> 	else if (memmap)
>> 		free_map_bootmem(memmap);
>> 
>> @@ -828,7 +828,7 @@ static struct page * __meminit section_activate(int nid, unsigned long pfn,
>> 
>> 	memmap = populate_section_memmap(pfn, nr_pages, nid, altmap, pgmap);
>> 	if (!memmap) {
>> - 		section_deactivate(pfn, nr_pages, altmap);
>> + 		section_deactivate(pfn, nr_pages, altmap, pgmap);
>> 		return ERR_PTR(-ENOMEM);
>> 	}
>> 
>> @@ -889,13 +889,13 @@ int __meminit sparse_add_section(int nid, unsigned long start_pfn,
>> }
>> 
>> void sparse_remove_section(unsigned long pfn, unsigned long nr_pages,
>> -    			   struct vmem_altmap *altmap)
>> +    			   struct vmem_altmap *altmap, struct dev_pagemap *pgmap)
> 
> While at it, could switch to two-tab indent here as well.

OK.

> 
> Acked-by: David Hildenbrand (Arm) <david at kernel.org>

Thanks.

Muchun.

> 
> -- 
> Cheers,
> 
> David



