[PATCH V2 1/2] mm: Update generic gup implementation to handle hugepage directory

Steve Capper steve.capper at linaro.org
Fri Oct 24 19:33:17 AEDT 2014


On 24 October 2014 00:40, Benjamin Herrenschmidt
<benh at kernel.crashing.org> wrote:
> On Thu, 2014-10-23 at 18:40 -0400, David Miller wrote:
>> Hey guys, was looking over the generic GUP while working on a sparc64
>> issue and I noticed that you guys do speculative page gets, and after
>> talking with Johannes Weiner (CC:'d) about this we don't see how it
>> could be necessary.
>>
>> If interrupts are disabled during the page table scan (which they
>> are), no IPI tlb flushes can arrive.  Therefore any removal from the
>> page tables is guarded by interrupts being re-enabled.  And as a
>> result, page counts of pages we see in the page tables must always
>> have a count > 0.
>>
>> x86 does direct atomic_add() on &page->_count because of this
>> invariant and I would rather see the generic version do this too.
>
> This is of course only true of archs who use IPIs for TLB flushes, so if
> we are going down the path of not being speculative, powerpc would have
> to go back to doing its own since our broadcast TLB flush means we
> aren't protected (we are only protected vs. the page tables themselves
> being freed since we do that via sched RCU).
>
> AFAIK, ARM also broadcasts TLB flushes...

Indeed, for most ARM cores we have hardware TLB broadcasts, thus we
need the speculative path.

>
> Another option would be to make the generic code use something defined
> by the arch to decide whether to use speculative get or
> not. I like the idea of keeping the bulk of that code generic...

It would be nice to have the code generalised further. In addition to
the speculative/atomic reference helpers, the implementation would need
to be renamed from GENERIC_RCU_GUP to GENERIC_GUP.
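
A rough sketch of the reference-grab helper, assuming a made-up config
symbol and helper name (nothing like this exists in the tree today):
archs whose TLB invalidation is IPI-based could take a plain reference,
while archs with hardware broadcast keep the speculative grab.

#include <linux/mm.h>
#include <linux/pagemap.h>

/*
 * Hypothetical: an arch would select this if it broadcasts TLB
 * invalidations in hardware rather than sending IPIs.
 */
#ifdef CONFIG_ARCH_BROADCAST_TLB_FLUSH

/* Pages can be freed under us, so the reference must be speculative. */
static inline int gup_take_page_ref(struct page *head)
{
	return page_cache_get_speculative(head);
}

#else /* IPI-based TLB flush */

/*
 * With interrupts disabled no flush IPI can complete, so any page we
 * see in the page tables has a non-zero refcount; a plain reference
 * is sufficient.
 */
static inline int gup_take_page_ref(struct page *head)
{
	get_page(head);
	return 1;
}

#endif

On the speculative side the caller in gup_pte_range() would still need
to re-read the PTE after a successful grab and drop the reference if it
changed, exactly as the RCU GUP does today.
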
The other noteworthy assumption made in the RCU GUP is that PTEs can
be read atomically. For x86 this isn't true when running with 64-bit
PTEs (i.e. a 32-bit kernel with PAE), so a helper would be needed
there too.
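
Something like the below would do for the PTE read; again only a
sketch, and the override name is a placeholder. The PAE loop mirrors
what arch/x86/mm/gup.c already does:

#include <linux/mm.h>

/*
 * Sketch: generic default, overridable per arch.  Most archs can read
 * a pte_t atomically with a single load.
 */
#ifndef gup_get_pte
static inline pte_t gup_get_pte(pte_t *ptep)
{
	return ACCESS_ONCE(*ptep);
}
#endif

#ifdef CONFIG_X86_PAE
/*
 * What the x86 override would look like: the PTE is 64-bit on a
 * 32-bit kernel, so read the two halves separately and retry if the
 * low word changed underneath us.
 */
static inline pte_t x86_pae_gup_get_pte(pte_t *ptep)
{
	pte_t pte;

	do {
		pte.pte_low = ptep->pte_low;
		smp_rmb();
		pte.pte_high = ptep->pte_high;
		smp_rmb();
	} while (unlikely(pte.pte_low != ptep->pte_low));

	return pte;
}
#endif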

Cheers,
-- 
Steve

