[PATCH 0/2] Allow nesting of lazy MMU mode

Erhard Furtner erhard_f at mailbox.org
Wed Oct 18 10:14:43 AEDT 2023


On Tue, 17 Oct 2023 11:34:23 +0530
"Aneesh Kumar K.V" <aneesh.kumar at linux.ibm.com> wrote:

> i.e. we can do something like below. The change also makes sure we call
> set_pte_filter() on all the ptes we are setting via set_ptes(). I haven't
> sent this as a proper patch because we are still not able to fix the
> issue Erhard reported.
> 
> diff --git a/arch/powerpc/mm/pgtable.c b/arch/powerpc/mm/pgtable.c
> index 3ba9fe411604..95ab20cca2da 100644
> --- a/arch/powerpc/mm/pgtable.c
> +++ b/arch/powerpc/mm/pgtable.c
> @@ -191,28 +191,35 @@ void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
>  		pte_t pte, unsigned int nr)
>  {
>  	/*
> -	 * Make sure hardware valid bit is not set. We don't do
> -	 * tlb flush for this update.
> +	 * We don't need to call arch_enter/leave_lazy_mmu_mode()
> +	 * because we expect set_ptes() to only be used on ptes that are
> +	 * not present and not hw_valid. Hence there is no translation
> +	 * cache flush involved that needs to be batched.
>  	 */
> -	VM_WARN_ON(pte_hw_valid(*ptep) && !pte_protnone(*ptep));
> +	for (;;) {
>  
> -	/* Note: mm->context.id might not yet have been assigned as
> -	 * this context might not have been activated yet when this
> -	 * is called.
> -	 */
> -	pte = set_pte_filter(pte);
> +		/*
> +		 * Make sure hardware valid bit is not set. We don't do
> +		 * tlb flush for this update.
> +		 */
> +		VM_WARN_ON(pte_hw_valid(*ptep) && !pte_protnone(*ptep));
>  
> -	/* Perform the setting of the PTE */
> -	arch_enter_lazy_mmu_mode();
> -	for (;;) {
> +		/* Note: mm->context.id might not yet have been assigned as
> +		 * this context might not have been activated yet when this
> +		 * is called.
> +		 */
> +		pte = set_pte_filter(pte);
> +
> +		/* Perform the setting of the PTE */
>  		__set_pte_at(mm, addr, ptep, pte, 0);
>  		if (--nr == 0)
>  			break;
>  		ptep++;
> -		pte = __pte(pte_val(pte) + (1UL << PTE_RPN_SHIFT));
>  		addr += PAGE_SIZE;
> +		/* increment the pfn */
> +		pte = __pte(pte_val(pte) + PAGE_SIZE);
> +
>  	}
> -	arch_leave_lazy_mmu_mode();
>  }
>  
>  void unmap_kernel_page(unsigned long va)

Was this a new version of the patch for me to test? Sorry for asking, but this was a bit unclear to me.

In any case, I tried it on top of v6.6-rc6, and it did not help with the issue I reported.
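
For reference, this is roughly how set_ptes() in arch/powerpc/mm/pgtable.c reads with the hunk above applied; it is reconstructed from the quoted diff, so whitespace and context lines may not match the tree exactly:

void set_ptes(struct mm_struct *mm, unsigned long addr, pte_t *ptep,
		pte_t pte, unsigned int nr)
{
	/*
	 * We don't need to call arch_enter/leave_lazy_mmu_mode()
	 * because we expect set_ptes() to only be used on ptes that are
	 * not present and not hw_valid. Hence there is no translation
	 * cache flush involved that needs to be batched.
	 */
	for (;;) {
		/*
		 * Make sure hardware valid bit is not set. We don't do
		 * tlb flush for this update.
		 */
		VM_WARN_ON(pte_hw_valid(*ptep) && !pte_protnone(*ptep));

		/* Note: mm->context.id might not yet have been assigned as
		 * this context might not have been activated yet when this
		 * is called.
		 */
		pte = set_pte_filter(pte);

		/* Perform the setting of the PTE */
		__set_pte_at(mm, addr, ptep, pte, 0);
		if (--nr == 0)
			break;
		ptep++;
		addr += PAGE_SIZE;
		/* increment the pfn */
		pte = __pte(pte_val(pte) + PAGE_SIZE);
	}
}

If I read the generic code right, set_pte_at() in v6.6 is just set_ptes(mm, addr, ptep, pte, 1), so single-PTE updates take the same path through this loop.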

Regards,
Erhard

