[PATCH 2/4] arm64/hugetlb: Implement arm64 specific hugetlb_mask_last_page

Muchun Song songmuchun at bytedance.com
Fri Jun 17 18:26:23 AEST 2022


On Thu, Jun 16, 2022 at 02:05:16PM -0700, Mike Kravetz wrote:
> From: Baolin Wang <baolin.wang at linux.alibaba.com>
> 
> The HugeTLB address ranges are linearly scanned during fork, unmap and
> remap operations, and the linear scan can skip to the end of the range
> mapped by the page table page when it hits a non-present entry, which
> helps speed up the scanning of HugeTLB address ranges.
> 
> So hugetlb_mask_last_page() is introduced to update the address in the
> HugeTLB linear scanning loop to the last huge page mapped by the
> associated page table page [1] when a non-present entry is encountered.
> 
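
For context, the callers can use this mask roughly as sketched below.
This is only an illustration of the linear-scan skip; the identifier
names mirror the generic mm/hugetlb.c walkers and are not copied
verbatim from this series:

	unsigned long sz = huge_page_size(h);
	unsigned long last_addr_mask = hugetlb_mask_last_page(h);
	unsigned long address;

	for (address = start; address < end; address += sz) {
		pte_t *ptep = huge_pte_offset(mm, address, sz);

		if (!ptep) {
			/* Skip to the last huge page mapped by this table. */
			address |= last_addr_mask;
			continue;
		}
		/* ... handle the present mapping ... */
	}
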
> To cover the ARM64 specific cont-PTE/PMD HugeTLB sizes, this patch
> implements an ARM64 specific hugetlb_mask_last_page() to handle this case.
> 
> [1] https://lore.kernel.org/linux-mm/20220527225849.284839-1-mike.kravetz@oracle.com/
> 
> Signed-off-by: Baolin Wang <baolin.wang at linux.alibaba.com>
> Signed-off-by: Mike Kravetz <mike.kravetz at oracle.com>
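
Regarding the ARM64 specific hugetlb_mask_last_page() mentioned above:
for readers not looking at the diff, the cont-PTE/PMD handling boils
down to returning the distance from one huge page to the end of the
page table page that maps it. A rough sketch of that shape (not
necessarily the exact hunk in this patch) is:

	unsigned long hugetlb_mask_last_page(struct hstate *h)
	{
		unsigned long hp_size = huge_page_size(h);

		switch (hp_size) {
	#ifndef __PAGETABLE_PMD_FOLDED
		case PUD_SIZE:
			/* PMD leaf entries share one PGD-level table page. */
			return PGDIR_SIZE - PUD_SIZE;
	#endif
		case CONT_PMD_SIZE:
			return PUD_SIZE - CONT_PMD_SIZE;
		case PMD_SIZE:
			return PUD_SIZE - PMD_SIZE;
		case CONT_PTE_SIZE:
			return PMD_SIZE - CONT_PTE_SIZE;
		default:
			break;
		}

		/* 0 means no skipping is possible for this huge page size. */
		return 0UL;
	}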

Acked-by: Muchun Song <songmuchun at bytedance.com>

Thanks.

