[PATCH] Unwire set/get_robust_list
David Woodhouse
dwmw2 at infradead.org
Tue Sep 5 14:53:14 EST 2006
On Tue, 2006-09-05 at 09:58 +1000, Paul Mackerras wrote:
> Don't we need an access_ok() test in there? Otherwise, looks fine to
> me.
Oops, good point. Like this then...
Do we actually need the memory clobber, btw? We're not using this for
locking in kernel space.
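
For context, the clobber in question is the "memory" entry on the new
asm's clobber list. Roughly what it changes, as a stand-alone sketch
(illustrative only, not part of the patch; 'other', 'addr' and
'clobber_demo' are made-up names):

	/* With only the "+m" (*addr) operand, GCC assumes the asm
	 * touches nothing but *addr, so it may keep other memory
	 * values cached in registers across the asm.  Adding "memory"
	 * to the clobber list makes the asm a full compiler barrier:
	 * cached copies of other locations (like 'other' here) must be
	 * reloaded afterwards. */
	extern int other;

	int clobber_demo(int *addr)
	{
		int before = other;

		__asm__ __volatile__("" : "+m" (*addr) : : /* "memory" */);

		/* Without "memory" the compiler may fold this to zero;
		 * with it, 'other' is reloaded from memory first. */
		return other - before;
	}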
[PATCH] Implement PowerPC futex_atomic_cmpxchg_inatomic().
The sys_[gs]et_robust_list() syscalls were wired up on PowerPC but
didn't work correctly because futex_atomic_cmpxchg_inatomic() wasn't
implemented. Implement it, based on __cmpxchg_u32().
Signed-off-by: David Woodhouse <dwmw2 at infradead.org>
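
For reference, the semantics the asm below is meant to provide,
ignoring atomicity and the fault fixup, are roughly this (non-atomic
sketch only, not usable in atomic context; 'futex_cmpxchg_sketch' is a
made-up name):

	/* Non-atomic sketch: the real thing must do the compare and
	 * the store as a single atomic operation (lwarx/stwcx.) and
	 * must return -EFAULT from the fixup if the user access
	 * faults. */
	static inline int
	futex_cmpxchg_sketch(int __user *uaddr, int oldval, int newval)
	{
		int prev;

		if (get_user(prev, uaddr))
			return -EFAULT;
		if (prev == oldval && put_user(newval, uaddr))
			return -EFAULT;

		return prev;	/* caller checks prev against oldval */
	}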
diff --git a/include/asm-powerpc/futex.h b/include/asm-powerpc/futex.h
index f1b3c00..936422e 100644
--- a/include/asm-powerpc/futex.h
+++ b/include/asm-powerpc/futex.h
@@ -84,7 +84,33 @@ static inline int futex_atomic_op_inuser
static inline int
futex_atomic_cmpxchg_inatomic(int __user *uaddr, int oldval, int newval)
{
- return -ENOSYS;
+ int prev;
+
+ if (!access_ok(VERIFY_WRITE, uaddr, sizeof(int)))
+ return -EFAULT;
+
+ __asm__ __volatile__ (
+ LWSYNC_ON_SMP
+"1: lwarx %0,0,%2 # futex_atomic_cmpxchg_inatomic\n\
+ cmpw 0,%0,%3\n\
+ bne- 3f\n"
+ PPC405_ERR77(0,%2)
+"2: stwcx. %4,0,%2\n\
+ bne- 1b\n"
+ ISYNC_ON_SMP
+"3: .section .fixup,\"ax\"\n\
+4: li %0,%5\n\
+ b 3b\n\
+ .previous\n\
+ .section __ex_table,\"a\"\n\
+ .align 3\n\
+ " PPC_LONG "1b,4b,2b,4b\n\
+ .previous" \
+ : "=&r" (prev), "+m" (*uaddr)
+ : "r" (uaddr), "r" (oldval), "r" (newval), "i" (-EFAULT)
+ : "cc", "memory");
+
+ return prev;
}
#endif /* __KERNEL__ */
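
For completeness, this is roughly how the two syscalls get used from
user space; glibc normally registers the robust list head itself at
thread start, so the following is only an illustrative stand-alone
test (made-up names throughout, and it assumes headers recent enough
to define SYS_set_robust_list and struct robust_list_head):

	#include <linux/futex.h>
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <stdio.h>

	int main(void)
	{
		/* An empty robust list: the head's next pointer points
		 * back at the head itself. */
		static struct robust_list_head head = {
			.list            = { .next = &head.list },
			.futex_offset    = 0,
			.list_op_pending = NULL,
		};
		struct robust_list_head *got;
		size_t len;

		if (syscall(SYS_set_robust_list, &head, sizeof(head)))
			perror("set_robust_list");

		/* pid 0 means the calling thread. */
		if (syscall(SYS_get_robust_list, 0, &got, &len))
			perror("get_robust_list");
		else
			printf("robust list head %p, len %zu\n",
			       (void *)got, len);

		return 0;
	}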
--
dwmw2