[PATCH 2/3] mm: rmap: Fix CONT-PTE/PMD size hugetlb issue when migration

Mike Kravetz mike.kravetz at oracle.com
Fri May 6 09:53:35 AEST 2022


On 4/29/22 01:14, Baolin Wang wrote:
> On some architectures (like ARM64), CONT-PTE/PMD size hugetlb is
> supported, which means it can support not only PMD/PUD size hugetlb
> (2M and 1G), but also CONT-PTE/PMD sizes (64K and 32M) when a 4K
> base page size is specified.
<snip>
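
For reference, with a 4K granule those sizes come from the number of
contiguous entries per level (constant names paraphrased from
arch/arm64/include/asm/pgtable-hwdef.h, so treat them as approximate):

	CONT_PTES = 16  =>  CONT_PTE_SIZE = 16 * 4K = 64K
	CONT_PMDS = 16  =>  CONT_PMD_SIZE = 16 * 2M = 32M
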
> diff --git a/mm/rmap.c b/mm/rmap.c
> index 6fdd198..7cf2408 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -1924,13 +1924,15 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>  					break;
>  				}
>  			}
> +
> +			/* Nuke the hugetlb page table entry */
> +			pteval = huge_ptep_clear_flush(vma, address, pvmw.pte);
>  		} else {
>  			flush_cache_page(vma, address, pte_pfn(*pvmw.pte));
> +			/* Nuke the page table entry. */
> +			pteval = ptep_clear_flush(vma, address, pvmw.pte);
>  		}
>  

On arm64 with CONT-PTE/PMD, the pteval returned by huge_ptep_clear_flush()
will have dirty or young set if ANY of the contiguous PTE/PMDs had dirty
or young set.
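
For anyone following along, the arm64 contiguous path does roughly the
following (a simplified sketch of the helper behind huge_ptep_clear_flush()
in arch/arm64/mm/hugetlbpage.c; the name and details are approximate, not
the exact upstream code):

/*
 * Sketch: clear all 'ncontig' entries backing one contiguous hugetlb
 * mapping and fold their dirty/young bits into the single pte value
 * that is returned to the caller.
 */
static pte_t get_clear_contig_sketch(struct mm_struct *mm,
				     unsigned long addr, pte_t *ptep,
				     unsigned long pgsize,
				     unsigned long ncontig)
{
	pte_t orig_pte = huge_ptep_get(ptep);
	unsigned long i;

	for (i = 0; i < ncontig; i++, addr += pgsize, ptep++) {
		pte_t pte = ptep_get_and_clear(mm, addr, ptep);

		/*
		 * Hardware may have set dirty/accessed on any entry in
		 * the range, so accumulate them all.
		 */
		if (pte_dirty(pte))
			orig_pte = pte_mkdirty(orig_pte);
		if (pte_young(pte))
			orig_pte = pte_mkyoung(orig_pte);
	}

	/* TLB flush of the whole range omitted here. */
	return orig_pte;
}

So the returned pteval is effectively an OR of the per-entry dirty/young
state, not the state of any single entry.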

> -		/* Nuke the page table entry. */
> -		pteval = ptep_clear_flush(vma, address, pvmw.pte);
> -
>  		/* Set the dirty flag on the folio now the pte is gone. */
>  		if (pte_dirty(pteval))
>  			folio_mark_dirty(folio);
> @@ -2015,7 +2017,10 @@ static bool try_to_migrate_one(struct folio *folio, struct vm_area_struct *vma,
>  			pte_t swp_pte;
>  
>  			if (arch_unmap_one(mm, vma, address, pteval) < 0) {
> -				set_pte_at(mm, address, pvmw.pte, pteval);
> +				if (folio_test_hugetlb(folio))
> +					set_huge_pte_at(mm, address, pvmw.pte, pteval);

And we will use that pteval for ALL of the PTE/PMDs here when
arch_unmap_one() fails, so we would set the dirty or young bit in ALL of
the contiguous PTE/PMDs.
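
For context, the restore via set_huge_pte_at() on arm64 writes the same
attribute bits into every entry of the contiguous range, roughly like the
sketch below (non-present/swap handling trimmed; the name and details are
approximate, not the exact upstream code):

/*
 * Sketch: write one pte value across all 'ncontig' entries of a
 * contiguous hugetlb mapping.  Every entry gets the same attributes
 * (including dirty/young); only the pfn advances per entry.
 */
static void set_huge_pte_sketch(struct mm_struct *mm, unsigned long addr,
				pte_t *ptep, pte_t pte,
				unsigned long pgsize, unsigned long ncontig)
{
	unsigned long pfn = pte_pfn(pte);
	unsigned long dpfn = pgsize >> PAGE_SHIFT;
	pgprot_t hugeprot = pte_pgprot(pte);
	unsigned long i;

	for (i = 0; i < ncontig; i++, ptep++, addr += pgsize, pfn += dpfn)
		set_pte_at(mm, addr, ptep, pfn_pte(pfn, hugeprot));
}

So whatever dirty/young state ended up in the accumulated pteval gets
replicated to every entry on restore.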

Could that cause any issues?  Maybe more of a question for the arm64 people.
-- 
Mike Kravetz

