[PATCH 3/5] ppc64: make mmiowb's io_sync preempt safe
Hugh Dickins
hugh at veritas.com
Wed Nov 1 05:41:51 EST 2006
If mmiowb() is always used prior to releasing a spinlock, as the Doc suggests,
then it's safe against preemption; but I'm not convinced that's always
the case. If preemption occurs between the sync and get_paca()->io_sync = 0,
I believe there's no problem. But in the unlikely event that gcc does
the store relative to a register other than r13 (as it did with current),
then there's a small danger of setting another cpu's io_sync to 0, after
it had just set it to 1. Rewrite the ppc64 mmiowb to prevent that.
The remaining io_sync assignments in io.h all get_paca()->io_sync = 1,
which is harmless even if preempted to the wrong cpu (the context switch
itself syncs); and those in spinlock.h are while preemption is disabled.
Signed-off-by: Hugh Dickins <hugh@veritas.com>
---
I'm clueless with powerpc and inline asm, this patch is likely to be
nonsense: more a placeholder to provoke whatever is the correct patch.
But I think the resulting patch should probably go into 2.6.19.
include/asm-powerpc/io.h | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
--- 2.6.19-rc4/include/asm-powerpc/io.h 2006-10-24 04:34:33.000000000 +0100
+++ linux/include/asm-powerpc/io.h 2006-10-30 19:27:05.000000000 +0000
@@ -163,8 +163,11 @@ extern void _outsl_ns(volatile u32 __iom
 static inline void mmiowb(void)
 {
-	__asm__ __volatile__ ("sync" : : : "memory");
-	get_paca()->io_sync = 0;
+	unsigned long tmp;
+
+	__asm__ __volatile__("sync; li %0,0; stb %0,%1(13)"
+	: "=&r" (tmp) : "i" (offsetof(struct paca_struct, io_sync))
+	: "memory");
 }
/*