[PATCH v6 12/18] arm64/mm: Wire up PTE_CONT for user mappings

Ryan Roberts ryan.roberts at arm.com
Wed Feb 21 06:58:49 AEDT 2024


On 19/02/2024 15:18, Catalin Marinas wrote:
> On Fri, Feb 16, 2024 at 12:53:43PM +0000, Ryan Roberts wrote:
>> On 16/02/2024 12:25, Catalin Marinas wrote:
>>> On Thu, Feb 15, 2024 at 10:31:59AM +0000, Ryan Roberts wrote:
>>>> +pte_t contpte_ptep_get_lockless(pte_t *orig_ptep)
>>>> +{
>>>> +	/*
>>>> +	 * Gather access/dirty bits, which may be populated in any of the ptes
>>>> +	 * of the contig range. We may not be holding the PTL, so any contiguous
>>>> +	 * range may be unfolded/modified/refolded under our feet. Therefore we
>>>> +	 * ensure we read a _consistent_ contpte range by checking that all ptes
>>>> +	 * in the range are valid and have CONT_PTE set, that all pfns are
>>>> +	 * contiguous and that all pgprots are the same (ignoring access/dirty).
>>>> +	 * If we find a pte that is not consistent, then we must be racing with
>>>> +	 * an update so start again. If the target pte does not have CONT_PTE
>>>> +	 * set then that is considered consistent on its own because it is not
>>>> +	 * part of a contpte range.
>>>> +	 */
> [...]
>>> After writing the comments above, I think I figured out that the whole
>>> point of this loop is to check that the ptes in the contig range are
>>> still consistent and the only variation allowed is the dirty/young
>>> state to be passed to the orig_pte returned. The original pte may have
>>> been updated by the time this loop finishes but I don't think it
>>> matters, it wouldn't be any different than reading a single pte and
>>> returning it while it is being updated.
>>
>> Correct. The pte can be updated at any time: before, after or during the
>> reads. That was always the case. But now we also have to cope with a whole
>> contpte block being repainted while we are reading it. So we are just
>> checking that all the ptes we read from the contpte block are consistent
>> with each other, and therefore that the access/dirty bits we gathered can
>> be trusted.
> 
> I've been thinking a bit more about this - do any of the callers of
> ptep_get_lockless() check the dirty/access bits? The only one that seems
> to care is ptdump, but in that case I'd rather see the raw bits for
> debugging than have the dirty/access bits propagated to the rest of the
> contig range.
> 
> So with some clearer documentation on the requirements, I think we don't
> need an arm64-specific ptep_get_lockless() (unless I missed something).
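
For reference, the loop we're discussing does roughly the following. This is a
simplified sketch reconstructed from the comment quoted above rather than the
exact patch code; helpers such as __ptep_get(), pte_valid_cont(),
contpte_align_down() and CONT_PTES are assumed from elsewhere in the series:

pte_t contpte_ptep_get_lockless(pte_t *orig_ptep)
{
	pgprot_t orig_prot, prot;
	unsigned long pfn;
	pte_t orig_pte, pte;
	pte_t *ptep;
	int i;

retry:
	orig_pte = __ptep_get(orig_ptep);

	/* A pte without CONT_PTE set is consistent on its own. */
	if (!pte_valid_cont(orig_pte))
		return orig_pte;

	orig_prot = pte_pgprot(pte_mkold(pte_mkclean(orig_pte)));
	ptep = contpte_align_down(orig_ptep);
	pfn = pte_pfn(orig_pte) - (orig_ptep - ptep);

	for (i = 0; i < CONT_PTES; i++, ptep++, pfn++) {
		pte = __ptep_get(ptep);
		prot = pte_pgprot(pte_mkold(pte_mkclean(pte)));

		/* Any inconsistency means we raced with an update; retry. */
		if (!pte_valid_cont(pte) || pte_pfn(pte) != pfn ||
		    pgprot_val(prot) != pgprot_val(orig_prot))
			goto retry;

		/* Gather access/dirty from every pte in the block. */
		if (pte_dirty(pte))
			orig_pte = pte_mkdirty(orig_pte);
		if (pte_young(pte))
			orig_pte = pte_mkyoung(orig_pte);
	}

	return orig_pte;
}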

We've discussed something similar at [1], and I've posted an RFC series to
convert all ptep_get_lockless() callers to ptep_get_lockless_norecency() at
[2]. The current spec for ptep_get_lockless() is that it includes the access
and dirty bits. So we can't just read the single pte: if there is a TLB
eviction followed by re-population of the block, the access/dirty bits could
move to a different pte within it, and that would break the pte_same()
comparisons which are used in a number of places.

So the previous conclusion was that we are ok to put this arm64-specific
ptep_get_lockless() in for now, but look to simplify it by migrating to
ptep_get_lockless_norecency() in future. Are you ok with that approach?
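
To illustrate the pte_same()-style usage, the GUP-fast pattern is roughly the
following (paraphrased, not the exact mm/gup.c code):

	pte_t pte = ptep_get_lockless(ptep);

	/* ... speculatively take a reference on the page ... */

	/*
	 * Recheck that the pte hasn't changed underneath us. If the lockless
	 * read returned just the raw single pte, a TLB eviction plus repaint
	 * of the contpte block in between could move the access/dirty bits to
	 * a different pte in the block, and this comparison would fail
	 * spuriously even though nothing meaningful changed.
	 */
	if (pte_val(pte) != pte_val(ptep_get(ptep)))
		goto pte_unmap;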

[1]
https://lore.kernel.org/linux-mm/a91cfe1c-289e-4828-8cfc-be34eb69a71b@redhat.com/
[2] https://lore.kernel.org/linux-mm/20240215121756.2734131-1-ryan.roberts@arm.com/

Thanks,
Ryan


