[PATCH v2 1/5] mm/hotplug: Embed vmem_altmap details in memory block

Aneesh Kumar K V aneesh.kumar at linux.ibm.com
Fri Jul 7 02:06:52 AEST 2023


On 7/6/23 6:29 PM, David Hildenbrand wrote:
> On 06.07.23 14:32, Aneesh Kumar K V wrote:
>> On 7/6/23 4:44 PM, David Hildenbrand wrote:
>>> On 06.07.23 11:36, Aneesh Kumar K V wrote:
>>>> On 7/6/23 2:48 PM, David Hildenbrand wrote:
>>>>> On 06.07.23 10:50, Aneesh Kumar K.V wrote:
>>>>>> With memmap on memory, some architectures need more details w.r.t. the altmap,
>>>>>> such as base_pfn, end_pfn, etc., to unmap vmemmap memory.
>>>>>
>>>>> Can you elaborate why ppc64 needs that and x86-64 + aarch64 don't?
>>>>>
>>>>> IOW, why can't ppc64 simply allocate the vmemmap from the start of the memblock (-> base_pfn) and use the stored number of vmemmap pages to calculate the end_pfn?
>>>>>
>>>>> To rephrase: if the vmemmap is not at the beginning and doesn't cover full pageblocks, memory onlining/offlining would be broken.
>>>>>
>>>>> [...]
>>>>
>>>>
>>>> With ppc64, a 64K page size and different memory block sizes, we can end up allocating vmemmap backing memory from outside the altmap, because
>>>> a single vmemmap page can cover 1024 pages (64 * 1024 / sizeof(struct page)), and those can point to pages outside the dev_pagemap range.
>>>> So on free we check
>>>
>>> So you end up with a mixture of altmap and ordinarily-allocated vmemmap pages? That sounds wrong (and is counter-intuitive to the feature in general, where we *don't* want to allocate the vmemmap from outside the altmap).
>>>
>>> (64 * 1024) / sizeof(struct page) -> 1024 pages
>>>
>>> 1024 pages * 64k = 64 MiB.
>>>
>>> What's the memory block size on these systems? If it's >= 64 MiB the vmemmap of a single memory block fits into a single page and we should be fine.
>>>
>>> Smells like you want to disable the feature on a 64k system.
>>>
>>
>> But that part of vmemmap_free is common to dax, dax kmem and the new memmap on memory feature. i.e., the ppc64 vmemmap_free has checks which require
>> a full altmap structure with all the details in it. So for memmap on memory to work on ppc64 we do require a similar altmap struct. Hence the idea
>> of adding vmemmap_altmap to struct memory_block.
> 
> I'd suggest making sure that for the memmap_on_memory case you really *always* allocate from the altmap (that's what the feature is about after all), and otherwise block the feature (i.e., arch_mhp_supports_... should reject it).
>

Sure. How about the following?

bool mhp_supports_memmap_on_memory(unsigned long size)
{
	unsigned long nr_pages = size >> PAGE_SHIFT;
	unsigned long vmemmap_size = nr_pages * sizeof(struct page);

	if (!radix_enabled())
		return false;
	/*
	 * memmap on memory is only supported when adding/removing
	 * whole memory blocks.
	 */
	if (size != memory_block_size_bytes())
		return false;
	/*
	 * Also make sure the vmemmap allocation is fully contained,
	 * so that we always allocate vmemmap memory from the altmap area.
	 */
	if (!IS_ALIGNED(vmemmap_size, PAGE_SIZE))
		return false;
	/*
	 * The pageblock alignment requirement is met by using
	 * reserve blocks in the altmap.
	 */
	return true;
}
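
To make the alignment check concrete, here is a quick standalone illustration
(userspace, not kernel code; it assumes a 64K PAGE_SIZE and a 64-byte
struct page, so the numbers are indicative only):

#include <stdio.h>

/*
 * Illustration only: how the IS_ALIGNED(vmemmap_size, PAGE_SIZE) check
 * above behaves for a few memory block sizes, assuming 64K pages and a
 * 64-byte struct page.
 */
int main(void)
{
	const unsigned long page_size = 64 * 1024;	/* assumed 64K PAGE_SIZE */
	const unsigned long struct_page_size = 64;	/* assumed sizeof(struct page) */
	const unsigned long blocks_mb[] = { 16, 64, 256, 1024 };

	for (int i = 0; i < 4; i++) {
		unsigned long size = blocks_mb[i] << 20;
		unsigned long nr_pages = size / page_size;
		unsigned long vmemmap_size = nr_pages * struct_page_size;

		printf("%4lu MiB block: vmemmap %5lu KiB -> %s\n",
		       blocks_mb[i], vmemmap_size >> 10,
		       (vmemmap_size % page_size) ?
		       "not page aligned, reject" : "whole pages, OK");
	}
	return 0;
}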



 
> Then, you can reconstruct the altmap layout trivially
> 
> base_pfn: start of the range to unplug
> end_pfn: base_pfn + nr_vmemmap_pages
> 
> and pass that to the removal code, which will do the right thing, no?
> 
> 
> Sure, remembering the altmap might be a potential cleanup (eventually?), but the basic reasoning why this is required as patch #1 IMHO is wrong: if you say you support memmap_on_memory for a configuration, then you should also properly support it (allocate from the hotplugged memory), not silently fall back to something else.

I guess you want to keep the altmap introduction as a later patch in the series and not as the preparatory patch? Or are you ok with just adding the additional check I mentioned above w.r.t. the size value and keeping this patch as patch 1, as a generic cleanup (avoiding
the recomputation of altmap->alloc/base_pfn/end_pfn)?
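
For reference, the reconstruction you describe would look roughly like the
below on the removal path (only a sketch; it assumes nr_vmemmap_pages is
still remembered in the memory block and that start is the base of the
range being removed):

	/*
	 * Sketch, not the actual patch: rebuild the altmap from the range
	 * being removed plus the remembered number of vmemmap pages,
	 * instead of storing the whole struct vmem_altmap in the memory block.
	 */
	struct vmem_altmap mhp_altmap = {
		.base_pfn = PHYS_PFN(start),
		.end_pfn  = PHYS_PFN(start) + nr_vmemmap_pages,
		.alloc    = nr_vmemmap_pages,
	};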

-aneesh
 


