(bisected) [PATCH v2 08/37] mm/hugetlb: check for unreasonable folio sizes when registering hstate

Christophe Leroy christophe.leroy at csgroup.eu
Thu Oct 9 20:16:52 AEDT 2025



On 09/10/2025 at 10:14, David Hildenbrand wrote:
> On 09.10.25 10:04, Christophe Leroy wrote:
>>
>>
>> On 09/10/2025 at 09:22, David Hildenbrand wrote:
>>> On 09.10.25 09:14, Christophe Leroy wrote:
>>>> Hi David,
>>>>
>>>> On 01/09/2025 at 17:03, David Hildenbrand wrote:
>>>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>>>> index 1e777cc51ad04..d3542e92a712e 100644
>>>>> --- a/mm/hugetlb.c
>>>>> +++ b/mm/hugetlb.c
>>>>> @@ -4657,6 +4657,7 @@ static int __init hugetlb_init(void)
>>>>>         BUILD_BUG_ON(sizeof_field(struct page, private) * BITS_PER_BYTE <
>>>>>                 __NR_HPAGEFLAGS);
>>>>> +    BUILD_BUG_ON_INVALID(HUGETLB_PAGE_ORDER > MAX_FOLIO_ORDER);
>>>>>         if (!hugepages_supported()) {
>>>>>             if (hugetlb_max_hstate || default_hstate_max_huge_pages)
>>>>> @@ -4740,6 +4741,7 @@ void __init hugetlb_add_hstate(unsigned int order)
>>>>>         }
>>>>>         BUG_ON(hugetlb_max_hstate >= HUGE_MAX_HSTATE);
>>>>>         BUG_ON(order < order_base_2(__NR_USED_SUBPAGE));
>>>>> +    WARN_ON(order > MAX_FOLIO_ORDER);
>>>>>         h = &hstates[hugetlb_max_hstate++];
>>>>>         __mutex_init(&h->resize_lock, "resize mutex", &h->resize_key);
>>>>>         h->order = order;
>>>
>>> We end up registering hugetlb folios whose order is larger than
>>> MAX_FOLIO_ORDER, so we have to figure out how a config can trigger that
>>> (and whether we have to support it).
>>>
>>
>> MAX_FOLIO_ORDER is defined as:
>>
>> #ifdef CONFIG_ARCH_HAS_GIGANTIC_PAGE
>> #define MAX_FOLIO_ORDER        PUD_ORDER
>> #else
>> #define MAX_FOLIO_ORDER        MAX_PAGE_ORDER
>> #endif
>>
>> MAX_PAGE_ORDER is the limit for dynamic creation of hugepages via
>> /sys/kernel/mm/hugepages/, but bigger pages can be created at boot time
>> with kernel boot parameters, even without CONFIG_ARCH_HAS_GIGANTIC_PAGE:
>>
>>     hugepagesz=64m hugepages=1 hugepagesz=256m hugepages=1
>>
>> Gives:
>>
>> HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
>> HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
>> HugeTLB: registered 64.0 MiB page size, pre-allocated 1 pages
>> HugeTLB: 0 KiB vmemmap can be freed for a 64.0 MiB page
>> HugeTLB: registered 256 MiB page size, pre-allocated 1 pages
>> HugeTLB: 0 KiB vmemmap can be freed for a 256 MiB page
>> HugeTLB: registered 4.00 MiB page size, pre-allocated 0 pages
>> HugeTLB: 0 KiB vmemmap can be freed for a 4.00 MiB page
>> HugeTLB: registered 16.0 MiB page size, pre-allocated 0 pages
>> HugeTLB: 0 KiB vmemmap can be freed for a 16.0 MiB page
> 
> I think it's a violation of CONFIG_ARCH_HAS_GIGANTIC_PAGE. The existing 
> folio_dump() code would not handle it correctly either.

I'm trying to dig into the history, and looking at commit 4eb0716e868e 
("hugetlb: allow to free gigantic pages regardless of the 
configuration"), my understanding is that CONFIG_ARCH_HAS_GIGANTIC_PAGE 
is only needed to allocate gigantic pages at runtime. It is not needed 
to reserve gigantic pages at boot time.

What am I missing?
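
For concreteness, here is the arithmetic I have in mind, as a standalone 
userspace sketch (my own illustration, assuming 4 KiB base pages and the 
default MAX_PAGE_ORDER of 10; the exact values depend on the config, so 
treat the numbers as an example rather than the precise platform above):

/*
 * Standalone sketch, not kernel code: compute the hstate order for the
 * huge page sizes from the dmesg output above, assuming 4 KiB base pages
 * (PAGE_SHIFT == 12) and the default MAX_PAGE_ORDER of 10.
 */
#include <stdio.h>

#define PAGE_SHIFT	12	/* assumption: 4 KiB base pages */
#define MAX_PAGE_ORDER	10	/* assumption: default buddy limit */

/* Equivalent to ilog2(size) - PAGE_SHIFT for power-of-two sizes. */
static unsigned int size_to_order(unsigned long long size)
{
	unsigned int order = 0;

	while ((1ULL << (order + PAGE_SHIFT)) < size)
		order++;
	return order;
}

int main(void)
{
	const unsigned long long mib = 1ULL << 20;
	const unsigned long long sizes[] = {
		4 * mib, 16 * mib, 64 * mib, 256 * mib, 1024 * mib,
	};

	for (unsigned int i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++) {
		unsigned int order = size_to_order(sizes[i]);

		printf("%4llu MiB -> order %2u%s\n", sizes[i] / mib, order,
		       order > MAX_PAGE_ORDER ? "  (> MAX_PAGE_ORDER)" : "");
	}

	return 0;
}

Under those assumptions everything above 4 MiB (order 10) ends up with an 
order larger than MAX_PAGE_ORDER, and since MAX_FOLIO_ORDER == 
MAX_PAGE_ORDER when CONFIG_ARCH_HAS_GIGANTIC_PAGE is not set, every 
hstate above 4 MiB registered in the dmesg output above would hit the 
new WARN_ON(order > MAX_FOLIO_ORDER) in hugetlb_add_hstate().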

> 
> See how snapshot_page() uses MAX_FOLIO_NR_PAGES.
> 


