[PATCH v3 06/13] selftest/mm: adjust hugepage-mremap test size for large huge pages

Sayali Patil sayalip at linux.ibm.com
Thu Apr 2 07:45:48 AEDT 2026



On 01/04/26 19:40, David Hildenbrand (Arm) wrote:
> On 3/27/26 08:16, Sayali Patil wrote:
>> The hugepage-mremap selftest uses a default size of 10MB, which is
>> sufficient for small huge page sizes. However, when the huge page size
>> is large (e.g. 1GB), 10MB is smaller than a single huge page.
>> As a result, the test does not trigger PMD sharing and the
>> corresponding unshare path in mremap(), causing the
>> test to fail (mremap succeeds where a failure is expected).
>>
>> Update run_vmtest.sh to use twice the huge page size when the huge page
>> size exceeds 10MB, while retaining the 10MB default for smaller huge
>> pages. This ensures the test exercises the intended PMD sharing and
>> unsharing paths for larger huge page sizes.
>>
>> Before patch:
>>   running ./hugepage-mremap
>>   ------------------------------
>>   TAP version 13
>>   1..1
>>    Map haddr: Returned address is 0x7eaa40000000
>>    Map daddr: Returned address is 0x7daa40000000
>>    Map vaddr: Returned address is 0x7faa40000000
>>    Address returned by mmap() = 0x7fffaa600000
>>    Mremap: Returned address is 0x7faa40000000
>>    First hex is 0
>>    First hex is 3020100
>>   Bail out! mremap: Expected failure, but call succeeded
>>   Planned tests != run tests (1 != 0)
>>   Totals: pass:0 fail:0 xfail:0 xpass:0 skip:0 error:0
>>   [FAIL]
>>   not ok 1 hugepage-mremap # exit=1
>>
>> Before patch:
>>   running ./hugepage-mremap
>>   ------------------------------
>>   TAP version 13
>>   1..1
>>    Map haddr: Returned address is 0x7eaa40000000
>>    Map daddr: Returned address is 0x7daa40000000
>>    Map vaddr: Returned address is 0x7faa40000000
>>    Address returned by mmap() = 0x7fffaa600000
>>    Mremap: Returned address is 0x7faa40000000
>>    First hex is 0
>>    First hex is 3020100
>>   Bail out! mremap: Expected failure, but call succeeded
>>   Planned tests != run tests (1 != 0)
>>   Totals: pass:0 fail:0 xfail:0 xpass:0 skip:0 error:0
>>   [FAIL]
>>   not ok 1 hugepage-mremap # exit=1
>>
> 
> Why are there two "Before patch" in here?
Thanks for pointing that out. Let me fix it in the next version.
> 
>> After patch:
>>   running ./hugepage-mremap 2048
>>   ------------------------------
>>   TAP version 13
>>   1..1
>>    Map haddr: Returned address is 0x7eaa40000000
>>    Map daddr: Returned address is 0x7daa40000000
>>    Map vaddr: Returned address is 0x7faa40000000
>>    Address returned by mmap() = 0x7fff13000000
>>    Mremap: Returned address is 0x7faa40000000
>>    First hex is 0
>>    First hex is 3020100
>>    ok 1 Read same data
>>   Totals: pass:1 fail:0 xfail:0 xpass:0 skip:0 error:0
>>   [PASS]
>>   ok 1 hugepage-mremap 2048
>>
>> Fixes: f77a286de48c ("mm, hugepages: make memory size variable in hugepage-mremap selftest")
>> Acked-by: Zi Yan <ziy at nvidia.com>
>> Tested-by: Venkat Rao Bagalkote <venkat88 at linux.ibm.com>
>> Signed-off-by: Sayali Patil <sayalip at linux.ibm.com>
>> ---
>>   tools/testing/selftests/mm/run_vmtests.sh | 13 ++++++++++++-
>>   1 file changed, 12 insertions(+), 1 deletion(-)
>>
>> diff --git a/tools/testing/selftests/mm/run_vmtests.sh b/tools/testing/selftests/mm/run_vmtests.sh
>> index afdcfd0d7cef..eecec0b6eb13 100755
>> --- a/tools/testing/selftests/mm/run_vmtests.sh
>> +++ b/tools/testing/selftests/mm/run_vmtests.sh
>> @@ -293,7 +293,18 @@ echo "$shmmax" > /proc/sys/kernel/shmmax
>>   echo "$shmall" > /proc/sys/kernel/shmall
>>   
>>   CATEGORY="hugetlb" run_test ./map_hugetlb
>> -CATEGORY="hugetlb" run_test ./hugepage-mremap
>> +
>> +# If the huge page size is larger than 10MB, increase the test memory size
>> +# to twice the huge page size (in MB) to ensure the test exercises PMD sharing
>> +# and the unshare path in hugepage-mremap. Otherwise, run the test with
>> +# the default 10MB memory size.
> 
> PMD sharing requires, on x86, a 1 GiB area with 2 MiB hugetlb folios.
> 
> How does doubling sort that out?
> 
> Also, why the magic value 10mb?
> 
> 
Hi David,
Yes, 1GB huge pages are mapped at the PUD level and are not involved in 
PMD sharing, as huge_pte_alloc() skips sharing for sizes other than 
PMD_SIZE.

The issue here is that the test's default memory size (10MB) is not
aligned to the 1GB huge page size. This causes munmap() to fail on the
unaligned length, so the subsequent mremap(), which is expected to
fail, unexpectedly succeeds.

Aligning the size to a multiple of 1GB avoids this failure, but it is 
not related to PMD sharing. I will update the description in v4 to 
reflect this more accurately.

I will also update the test code directly to align the memory size to 
the huge page size, rather than modifying run_vmtests.sh.

Thanks,
Sayali