[PATCH 3/6] 8xx: get rid of _PAGE_HWWRITE dependency in MMU.
Joakim Tjernlund
joakim.tjernlund at transmode.se
Wed Oct 7 09:05:41 EST 2009
Benjamin Herrenschmidt <benh at kernel.crashing.org> wrote on 06/10/2009 02:34:15:
>
> On Tue, 2009-10-06 at 01:35 +0200, Joakim Tjernlund wrote:
> >
> > > Well, if the HW has the ability to enforce trap when store with !DIRTY,
> >
> > Yes, provided that the kernel invalidates the TLB too so the next access
> > will provoke a TLB Miss, which will then provoke a TLB error. The TLB
> > error routine checks VALID, RW and USER (if not a kernel access), then
> > sets ACCESSED & DIRTY and writes the TLB (RPN reg).
> >
> > Perhaps the missing invalidate is haunting us here?
>
> No, the kernel will invalidate when clearing dirty or accessed, I don't
> think that's our problem.
>
> This is still all inefficient, we end up basically with two traps.
>
> 8xx provides backup GPRs when doing TLB misses ? What does it cost to
> jump out of a TLB miss back into "normal" context ?
>
> IE. What I do on 440 is I set a mask of required bits, basically
> _PAGE_PRESENT | _PAGE_ACCESSED is the base. The DTLB miss also sticks
> in _PAGE_RW | _PAGE_DIRTY when it's a store fault.
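The 440-style check Ben describes could be sketched in C roughly like this (the flag values and the helper name are illustrative, not the real kernel definitions; the actual check lives in the TLB miss assembly):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative PTE flag values -- not the real ppc definitions. */
#define _PAGE_PRESENT  0x001
#define _PAGE_ACCESSED 0x002
#define _PAGE_RW       0x004
#define _PAGE_DIRTY    0x008

/* 440-style fast path: build the set of bits the PTE must already
 * have, then test them all in one comparison.  For a store fault the
 * mask also requires RW and DIRTY.  If any required bit is missing,
 * the miss handler bails out to the normal fault path, which sets
 * ACCESSED/DIRTY properly (and handles COW for stores). */
static bool tlb_miss_fast_path_ok(uint32_t pte, bool is_store)
{
	uint32_t required = _PAGE_PRESENT | _PAGE_ACCESSED;

	if (is_store)
		required |= _PAGE_RW | _PAGE_DIRTY;

	return (pte & required) == required;
}
```

So a load through a present, accessed page stays entirely in the fast path, while the first store to a clean page falls through to the fault handler exactly once.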
After some more thinking, I don't believe I handle TLB Miss/Error correctly yet.
The problem is ACCESSED. Since the TLB Miss handler doesn't know whether the
access is a load or a store, I must choose between:
 - Assume a load and do what you describe above. That will incorrectly
   set ACCESSED on store ops to pages mapped RO (plus whatever else I haven't thought of yet).
 - Trap to TLB Error and do it there. That sets ACCESSED correctly,
   but kernel space won't trap, so kernel pages remain as they are.
Is anything depending on ACCESSED for kernel pages?
If so, what if we set ACCESSED on all kernel pages when mapping them at boot?
Would swap or some other service (accounting?) object to that?
Finally, why do you need to include DIRTY on a store op?
Do you need to do COW before dirtying the page?
It seems to work for me to just set DIRTY in TLB Error if RW is also set,
without trapping to C.
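For illustration, the TLB Error fixup being described might look roughly like this in C (simplified flags and the helper name are assumptions; the real 8xx handler is assembly):

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative PTE flag values -- not the real ppc definitions. */
#define _PAGE_PRESENT  0x001
#define _PAGE_ACCESSED 0x002
#define _PAGE_RW       0x004
#define _PAGE_DIRTY    0x008

/* Sketch of the TLB Error fixup: on a store fault, if the PTE is
 * writable, set DIRTY (and ACCESSED) directly in the handler and
 * retry the access; otherwise fall through to the C page-fault path,
 * which handles COW and real protection faults.  Returns true if the
 * fault was fixed up without trapping to C. */
static bool tlb_error_fixup(uint32_t *pte, bool is_store)
{
	if (!(*pte & _PAGE_PRESENT))
		return false;		/* not mapped: go to C */

	if (is_store) {
		if (!(*pte & _PAGE_RW))
			return false;	/* RO mapping: COW/protection, go to C */
		*pte |= _PAGE_DIRTY;
	}
	*pte |= _PAGE_ACCESSED;
	return true;			/* fixed up in the handler, retry */
}
```

The point of the RW check is that a COW page is mapped RO, so a store to it still bounces to C where the copy happens; only genuinely writable pages get DIRTY set in the handler.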
More information about the Linuxppc-dev mailing list