[Cbe-oss-dev] spufs: invalidate SLB translation before adding a new entry

Arnd Bergmann arnd at arndb.de
Tue Feb 26 17:01:56 EST 2008


When we replace an SLB entry in the MFC after using up all the
available entries, there is a short window in which an incorrect
entry is marked as valid.

The problem is that the 'valid' bit is stored in the ESID, which
is always written after the VSID. Overwriting the VSID first
will make the original ESID entry point to the new VSID, which
means that any concurrent DMA access falling into the old ESID's
segment is translated through the new VSID and ends up redirected
to the new virtual address.
A few cycles later, we write the new ESID and everything is fine
again.
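
For reference, this is the sequence as it stood before this patch
(visible as the unchanged lines in the diff below); the entry stays
marked valid under the old ESID while the VSID underneath it is
replaced:

	out_be64(&priv2->slb_index_W, slbe);
	/* race window opens: the old ESID now pairs with the new VSID */
	out_be64(&priv2->slb_vsid_RW, slb->vsid);
	/* race window closes: the new ESID and its valid bit are written */
	out_be64(&priv2->slb_esid_RW, slb->esid);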

That race can be closed by first writing a zero entry to the ESID,
which clears the valid bit and makes sure that the VSID is not
used until we write the new ESID.

Note that we don't actually need to invalidate the SLB entry
using the invalidation register, which would also flush any
ERAT entries for that segment, because the segment translation
does not become invalid but is only removed from the SLB cache.
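
For comparison only, a per-entry invalidation would look roughly
like the sketch below. This is not part of the patch, and it assumes
that the slb_invalidate_entry_W register in struct spu_priv2 takes
the ESID of the entry to drop; the point is that this heavier
operation, which also discards the ERAT translations for the
segment, is not required here:

	/* sketch only: full invalidation of one SLB entry, which
	 * would also flush the ERAT for that segment */
	out_be64(&priv2->slb_invalidate_entry_W, slb->esid);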

Signed-off-by: Arnd Bergmann <arnd at arndb.de>

---
Index: linux-2.6/arch/powerpc/platforms/cell/spu_base.c
===================================================================
--- linux-2.6.orig/arch/powerpc/platforms/cell/spu_base.c
+++ linux-2.6/arch/powerpc/platforms/cell/spu_base.c
@@ -150,7 +150,11 @@ static inline void spu_load_slb(struct s
 			__func__, slbe, slb->vsid, slb->esid);
 
 	out_be64(&priv2->slb_index_W, slbe);
+	/* set invalid before writing vsid */
+	out_be64(&priv2->slb_esid_RW, 0);
+	/* now it's safe to write the vsid */
 	out_be64(&priv2->slb_vsid_RW, slb->vsid);
+	/* setting the new esid makes the entry valid again */
 	out_be64(&priv2->slb_esid_RW, slb->esid);
 }
 


