[2.5] [PATCH] Don't loop looking for free SLB entries in do_slb_bolted

Anton Blanchard anton at samba.org
Fri Dec 12 06:40:05 EST 2003


Hi Olof,

> Anton and I have done some work in 2.4 to optimize the SLB reload
> path. One big time waster for a busy system is the search for a free
> entry. This is the corresponding patch for 2.5/2.6. It's smaller since
> we don't have to do slbie's on 2.6.

Here's what I'm testing at the moment. It's a fairly big patch, but I'm
really hoping to get our SLB reload overhead under control in 2.6 :)

- nop out some stuff that is POWER3/RS64 specific
- we were checking some bits in the DSISR in DataAccess_common that I
  can't find in the architecture manual, so I nuked them. (0xa4500000 ->
  0x04500000)
- put do_slb_bolted on a diet: don't search for empty entries,
  similar to Olof's patch. Use the POWER4 optimised mtcrf instruction.
- flush the kernel segment out of the SLB on context switch to avoid
  the race where the translation is in the ERAT but not in the SLB and
  it gets invalidated by another cpu doing tlbie at just the wrong time
  (eg exception exit after srr0/srr1 has been loaded)
- split segment handling and slb handling code apart.
- preload PC, SP and TASK_UNMAPPED_BASE segments on a context switch.
- create an SLB cache and only flush those segments if possible on a
  context switch.
- optimise switch_mm, we were flushing the stab/slb more often than we
  needed to (eg when switching between a user task and a kernel thread).

It's been soaking on a large box for a while now. It's completely untested
on POWER3, however :)

Anton
-------------- next part --------------
An embedded and charset-unspecified text was scrubbed...
Name: reload.patch
Url: http://ozlabs.org/pipermail/linuxppc64-dev/attachments/20031212/358726fc/attachment.txt 
