[PATCH v3 05/13] selftests/mm: size tmpfs according to PMD page size in split_huge_page_test

Sayali Patil sayalip at linux.ibm.com
Thu Apr 2 03:20:02 AEDT 2026


On 27/03/26 12:45, Sayali Patil wrote:
> The split_file_backed_thp() test mounts a tmpfs with a fixed size of
> "4m". This works on systems with smaller PMD page sizes,
> but fails on configurations where the PMD huge page size is
> larger (e.g. 16MB).
>
> On such systems, the fixed 4MB tmpfs is insufficient to allocate even
> a single PMD-sized THP, causing the test to fail.
>
> Fix this by sizing the tmpfs dynamically based on the runtime
> pmd_pagesize, allocating space for two PMD-sized pages.
>
> Before patch:
>    running ./split_huge_page_test /tmp/xfs_dir_YTrI5E
>    --------------------------------------------------
>    TAP version 13
>    1..55
>    ok 1 Split zero filled huge pages successful
>    ok 2 Split huge pages to order 0 successful
>    ok 3 Split huge pages to order 2 successful
>    ok 4 Split huge pages to order 3 successful
>    ok 5 Split huge pages to order 4 successful
>    ok 6 Split huge pages to order 5 successful
>    ok 7 Split huge pages to order 6 successful
>    ok 8 Split huge pages to order 7 successful
>    ok 9 Split PTE-mapped huge pages successful
>     Please enable pr_debug in split_huge_pages_in_file() for more info.
>     Failed to write data to testing file: Success (0)
>    Bail out! Error occurred
>     Planned tests != run tests (55 != 9)
>     Totals: pass:9 fail:0 xfail:0 xpass:0 skip:0 error:0
>   [FAIL]
>
> After patch:
>    --------------------------------------------------
>    running ./split_huge_page_test /tmp/xfs_dir_bMvj6o
>    --------------------------------------------------
>    TAP version 13
>    1..55
>    ok 1 Split zero filled huge pages successful
>    ok 2 Split huge pages to order 0 successful
>    ok 3 Split huge pages to order 2 successful
>    ok 4 Split huge pages to order 3 successful
>    ok 5 Split huge pages to order 4 successful
>    ok 6 Split huge pages to order 5 successful
>    ok 7 Split huge pages to order 6 successful
>    ok 8 Split huge pages to order 7 successful
>    ok 9 Split PTE-mapped huge pages successful
>     Please enable pr_debug in split_huge_pages_in_file() for more info.
>     Please check dmesg for more information
>    ok 10 File-backed THP split to order 0 test done
>     Please enable pr_debug in split_huge_pages_in_file() for more info.
>     Please check dmesg for more information
>    ok 11 File-backed THP split to order 1 test done
>     Please enable pr_debug in split_huge_pages_in_file() for more info.
>     Please check dmesg for more information
>    ok 12 File-backed THP split to order 2 test done
> ...
>    ok 55 Split PMD-mapped pagecache folio to order 7 at
>      in-folio offset 128 passed
>     Totals: pass:55 fail:0 xfail:0 xpass:0 skip:0 error:0
>     [PASS]
> ok 1 split_huge_page_test /tmp/xfs_dir_bMvj6o
>
> Fixes: fbe37501b252 ("mm: huge_memory: debugfs for file-backed THP split")
> Reviewed-by: Zi Yan <ziy at nvidia.com>
> Reviewed-by: David Hildenbrand (Arm) <david at kernel.org>
> Tested-by: Venkat Rao Bagalkote <venkat88 at linux.ibm.com>
> Signed-off-by: Sayali Patil <sayalip at linux.ibm.com>
> ---
>   tools/testing/selftests/mm/split_huge_page_test.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
>
> diff --git a/tools/testing/selftests/mm/split_huge_page_test.c b/tools/testing/selftests/mm/split_huge_page_test.c
> index e0167111bdd1..57e8a1c9647a 100644
> --- a/tools/testing/selftests/mm/split_huge_page_test.c
> +++ b/tools/testing/selftests/mm/split_huge_page_test.c
> @@ -484,6 +484,8 @@ static void split_file_backed_thp(int order)
>   	char tmpfs_template[] = "/tmp/thp_split_XXXXXX";
>   	const char *tmpfs_loc = mkdtemp(tmpfs_template);
>   	char testfile[INPUT_MAX];
> +	unsigned long size = 2 * pmd_pagesize;
> +	char opts[64];
>   	ssize_t num_written, num_read;
>   	char *file_buf1, *file_buf2;
>   	uint64_t pgoff_start = 0, pgoff_end = 1024;
> @@ -503,7 +505,8 @@ static void split_file_backed_thp(int order)
>   		file_buf1[i] = (char)i;
>   	memset(file_buf2, 0, pmd_pagesize);
>   
> -	status = mount("tmpfs", tmpfs_loc, "tmpfs", 0, "huge=always,size=4m");
> +	snprintf(opts, sizeof(opts), "huge=always,size=%lu", size);
> +	status = mount("tmpfs", tmpfs_loc, "tmpfs", 0, opts);
>   
>   	if (status)
>   		ksft_exit_fail_msg("Unable to create a tmpfs for testing\n");
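For reference, the pmd_pagesize used above is discovered at runtime
rather than hard-coded. A standalone sketch of where that value
ultimately comes from (assuming the standard THP sysfs knob; the
selftest itself reads it through its own vm_util helper, so this is
illustrative, not the selftest code) could look like:

#include <stdio.h>

/*
 * Standalone sketch: read the PMD huge page size from sysfs. This
 * mirrors what the selftest helper does, but is not that code.
 */
static unsigned long read_pmd_size(void)
{
	FILE *f = fopen("/sys/kernel/mm/transparent_hugepage/hpage_pmd_size", "r");
	unsigned long size = 0;

	if (!f)
		return 0;
	if (fscanf(f, "%lu", &size) != 1)
		size = 0;
	fclose(f);
	return size;
}

int main(void)
{
	unsigned long pmd_pagesize = read_pmd_size();

	/* Two PMD-sized pages, as in the patch */
	printf("tmpfs size: %lu bytes\n", 2 * pmd_pagesize);
	return 0;
}

On a configuration with 16MB PMD huge pages this yields a 32MB tmpfs,
which is exactly the case the fixed "size=4m" mount could not satisfy.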
An AI review of the patch raised the following question:
> Are file_buf1 and file_buf2 being leaked here?
> While reviewing the patch's changes to support larger PMD sizes, I noticed
> that earlier in this function file_buf1 and file_buf2 are dynamically
> allocated using pmd_pagesize:
>      file_buf1 = (char *)malloc(pmd_pagesize);
>      file_buf2 = (char *)malloc(pmd_pagesize);
> Looking at the end of the function, the success path returns without freeing
> either buffer:
>      ksft_print_msg("Please check dmesg for more information\n");
>      ksft_test_result_pass("File-backed THP split to order %d test done\n", order);
>      return;
> Since this function is called in a loop for each split order, could this
> cause a significant memory leak? On architectures with large PMD sizes
> (like arm64 with 64KB base pages, where the PMD size is 512MB), leaking
> these buffers repeatedly could exhaust system memory and trigger the OOM
> killer during the test run.
>
Yes, both file_buf1 and file_buf2 are leaked on the success path and on
some failure paths. Since this function is invoked in a loop once per
split order, the leak accumulates across iterations, and on systems
with large PMD sizes it can trigger the OOM killer during the test run.

This was likely not noticeable with smaller PMD sizes, but becomes
significant with larger configurations. It is a pre-existing issue
rather than something introduced by this patch. I will prepare a
separate fix to free both buffers on all exit paths; a rough sketch of
the intended pattern follows.
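As a minimal illustration (function name, labels, and sizes here are
placeholders, not the actual split_file_backed_thp() layout), the idea
is to funnel every exit through a single cleanup point, relying on
free(NULL) being a no-op:

#include <stdlib.h>

/*
 * Illustrative sketch only: names and structure are placeholders.
 * The point is the single exit path that frees both buffers
 * unconditionally.
 */
static int run_order_test(size_t pmd_pagesize)
{
	char *file_buf1 = malloc(pmd_pagesize);
	char *file_buf2 = malloc(pmd_pagesize);
	int err = -1;

	if (!file_buf1 || !file_buf2)
		goto out;

	/* ... mount tmpfs, write the file, split, verify ... */
	err = 0;

out:
	/* free(NULL) is a no-op, so this is safe on every path */
	free(file_buf1);
	free(file_buf2);
	return err;
}

int main(void)
{
	/* 2MB stands in for the runtime pmd_pagesize */
	return run_order_test(2UL << 20) ? EXIT_FAILURE : EXIT_SUCCESS;
}

The real fix would additionally need to unmount the tmpfs and close the
test file on the corresponding error paths before freeing the buffers.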

Thanks,
Sayali