[PATCH] powerpc/mm: Fix set_memory_*() against concurrent accesses

Michael Ellerman <mpe@ellerman.id.au>
Wed Aug 18 17:46:42 AEST 2021


Fabiano Rosas <farosas@linux.ibm.com> writes:
> Michael Ellerman <mpe@ellerman.id.au> writes:
>
> Hi, I already mentioned these things in private, but I'll post here so
> everyone can see:
>
>> Because pte_update() takes the set of PTE bits to set and clear we can't
>> use our existing helpers, eg. pte_wrprotect() etc. and instead have to
>> open code the set of flags. We will clean that up somehow in a future
>> commit.
>
> I tested the following on P9 and it seems to work fine. Not sure if it
> works for CONFIG_PPC_8xx, though.
>
>
>  static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
>  {
>  	long action = (long)data;
>  	pte_t pte;
>  
>  	spin_lock(&init_mm.page_table_lock);
> -
> -	/* invalidate the PTE so it's safe to modify */
> -	pte = ptep_get_and_clear(&init_mm, addr, ptep);
> -	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
> +	pte = *ptep;
>  
>  	/* modify the PTE bits as desired, then apply */
>  	switch (action) {
> @@ -59,11 +42,9 @@ static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
>  		break;
>  	}
>  
> -	set_pte_at(&init_mm, addr, ptep, pte);
> +	pte_update(&init_mm, addr, ptep, ~0UL, pte_val(pte), 0);

I avoided that because the read-modify-write isn't atomic, but
pte_update() is not atomic on some platforms anyway. And for now at
least we still hold the page_table_lock as well.
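
For anyone following along, with a clear mask of ~0UL the pte_update()
call above amounts to replacing the whole PTE. A non-atomic sketch of
the logical effect (the real implementations do this atomically where
the platform requires it):

	/*
	 * pte_update(mm, addr, ptep, clr, set, huge) returns the old
	 * value and installs (old & ~clr) | set, roughly:
	 */
	unsigned long old = pte_val(*ptep);
	*ptep = __pte((old & ~clr) | set);

	/*
	 * So pte_update(&init_mm, addr, ptep, ~0UL, pte_val(pte), 0)
	 * discards every old bit and writes pte wholesale, similar to
	 * set_pte_at() but without its expectation that it is
	 * installing a PTE into an empty (invalid) slot.
	 */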

So you're right, that's a nicer way to do it.

And I'll use ptep_get() as Christophe suggested.
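
Putting those pieces together, the loop body would end up looking
something like this (a sketch only, untested; it reconstructs the
elided switch cases from the original patch, and keeps the existing
ptesync, see below):

static int change_page_attr(pte_t *ptep, unsigned long addr, void *data)
{
	long action = (long)data;
	pte_t pte;

	spin_lock(&init_mm.page_table_lock);

	/* No invalidation step; read the current PTE under the lock */
	pte = ptep_get(ptep);

	/* modify the PTE bits as desired, then apply */
	switch (action) {
	case SET_MEMORY_RO:
		pte = pte_wrprotect(pte);
		break;
	case SET_MEMORY_RW:
		pte = pte_mkwrite(pte_mkdirty(pte));
		break;
	case SET_MEMORY_NX:
		pte = pte_exprotect(pte);
		break;
	case SET_MEMORY_X:
		pte = pte_mkexec(pte);
		break;
	default:
		break;
	}

	/* clr == ~0UL, so this writes the new PTE value wholesale */
	pte_update(&init_mm, addr, ptep, ~0UL, pte_val(pte), 0);

	/* See ptesync comment in radix__set_pte_at() */
	if (radix_enabled())
		asm volatile("ptesync": : :"memory");

	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);

	spin_unlock(&init_mm.page_table_lock);

	return 0;
}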

> +	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
>  
> -	/* See ptesync comment in radix__set_pte_at() */
> -	if (radix_enabled())
> -		asm volatile("ptesync": : :"memory");

I deliberately didn't remove the ptesync, because I wanted to keep the
patch minimal. We can do that as a separate patch.

>  	spin_unlock(&init_mm.page_table_lock);
>  
>  	return 0;
> ---
>
> For reference, the full patch is here:
> https://github.com/farosas/linux/commit/923c95c84d7081d7be9503bf5b276dd93bd17036.patch
>
>>
>> [1]: https://lore.kernel.org/linuxppc-dev/87y318wp9r.fsf@linux.ibm.com/
>>
>> Fixes: 1f9ad21c3b38 ("powerpc/mm: Implement set_memory() routines")
>> Reported-by: Laurent Vivier <lvivier@redhat.com>
>> Signed-off-by: Michael Ellerman <mpe@ellerman.id.au>
>> ---
>
> ...
>
>> -	set_pte_at(&init_mm, addr, ptep, pte);
>> +	pte_update(&init_mm, addr, ptep, clear, set, 0);
>>  
>>  	/* See ptesync comment in radix__set_pte_at() */
>>  	if (radix_enabled())
>>  		asm volatile("ptesync": : :"memory");
>> +
>> +	flush_tlb_kernel_range(addr, addr + PAGE_SIZE);
>
> I think there's an optimization possible here, when relaxing access, to
> skip the TLB flush. Would still need the ptesync though. Similar to what
> Nick did in e5f7cb58c2b7 ("powerpc/64s/radix: do not flush TLB when
> relaxing access").

That commit is specific to Radix, whereas this code needs to work on all
platforms.

We'd need to verify that it's safe to skip the flush on all platforms
and CPU versions.

What I think we can do, and what would possibly be a more meaningful
optimisation, is to move the TLB flush out of the loop and up into
change_memory_attr(), so we do it once for the whole range rather than
once per page. But that too would be a separate patch.
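
Something along these lines (a sketch only, untested; it assumes the
per-page flush_tlb_kernel_range() call is dropped from
change_page_attr(), and it omits the range sanity checks the real
change_memory_attr() carries):

int change_memory_attr(unsigned long addr, int numpages, long action)
{
	unsigned long start = ALIGN_DOWN(addr, PAGE_SIZE);
	unsigned long size = numpages << PAGE_SHIFT;
	int ret;

	if (!numpages)
		return 0;

	ret = apply_to_existing_page_range(&init_mm, start, size,
					   change_page_attr, (void *)action);

	/* One flush for the whole range, instead of one per page */
	flush_tlb_kernel_range(start, start + size);

	return ret;
}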

cheers

