possible deadlock in pipes
Anton Blanchard
anton at samba.org
Fri Aug 22 07:54:33 EST 2003
Hi Olaf,
> I see random deadlocks in pipe_wait() on POWER4 (p630, p650).
> I don't have a simple testcase to trigger it. The userland code is
> either 32-bit or 64-bit, new or (very) old.
> It happens with 2.4.19 and with 2.4.21.
>
> This patch seems to fix it, but maybe it is only a workaround. Is the 2.4
> pipe code supposed to work on POWER4 (enough sync across CPUs etc.)?
How hard is it to hit? I think we should be using set_task_state(),
which includes a memory barrier. I also updated the other open-coded
assignments to use __set_task_state(). Finally, I got rid of some redundant
smp_wmb()s and added unlikely() to force the slow path out of line.
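
For reference, the difference boils down to roughly this (paraphrased from
the 2.4-era headers, so the exact macro bodies may vary):

/* Rough paraphrase of the 2.4-era helpers, not a verbatim copy. */

/* Plain store: the CPU is free to reorder it against surrounding
 * accesses to other locations. */
#define __set_task_state(tsk, state_value)		\
	do { (tsk)->state = (state_value); } while (0)

/* Store plus a full memory barrier (via set_mb()), so the new state is
 * visible before anything the caller does next -- in particular before
 * __sem_update_count() re-reads sem->count in the __down() loop.  If the
 * plain store were delayed past that atomic, up() on another CPU could
 * issue its wake_up() before our store landed, the store would then
 * overwrite TASK_RUNNING, and the following schedule() would sleep with
 * nobody left to wake us. */
#define set_task_state(tsk, state_value)		\
	set_mb((tsk)->state, (state_value))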
Note that in 2.5 we have barriers on each side of our atomics that return
values (like atomic_dec_and_test()). It doesn't look like we have that in 2.4.
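
For comparison, this is roughly what the 2.5 ppc64 atomic_dec_return() looks
like (a sketch from memory; the EIEIO_ON_SMP/ISYNC_ON_SMP barrier macros and
exact asm may differ in detail):

/*
 * Sketch of a 2.5-style ppc64 atomic_dec_return(): the eieio before the
 * reservation loop orders earlier stores against the atomic, and the isync
 * after it orders the atomic against later loads, so the atomic itself
 * acts as a barrier in each direction.
 */
static __inline__ int atomic_dec_return(atomic_t *v)
{
	int t;

	__asm__ __volatile__(
	EIEIO_ON_SMP
"1:	lwarx	%0,0,%1		# atomic_dec_return\n\
	addic	%0,%0,-1\n\
	stwcx.	%0,0,%1\n\
	bne-	1b"
	ISYNC_ON_SMP
	: "=&r" (t)
	: "r" (&v->counter)
	: "cc", "memory");

	return t;
}

In 2.4 the ppc64 atomics have no such barriers, so the ordering has to come
from set_task_state() instead.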
Anton
===== arch/ppc64/kernel/semaphore.c 1.2 vs edited =====
--- 1.2/arch/ppc64/kernel/semaphore.c Mon Apr 8 15:56:10 2002
+++ edited/arch/ppc64/kernel/semaphore.c Fri Aug 22 07:11:26 2003
@@ -75,9 +75,8 @@
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- tsk->state = TASK_UNINTERRUPTIBLE;
+ __set_task_state(tsk, TASK_UNINTERRUPTIBLE);
add_wait_queue_exclusive(&sem->wait, &wait);
- smp_wmb();
/*
* Try to get the semaphore. If the count is > 0, then we've
@@ -87,10 +86,10 @@
*/
while (__sem_update_count(sem, -1) <= 0) {
schedule();
- tsk->state = TASK_UNINTERRUPTIBLE;
+ set_task_state(tsk, TASK_UNINTERRUPTIBLE);
}
remove_wait_queue(&sem->wait, &wait);
- tsk->state = TASK_RUNNING;
+ __set_task_state(tsk, TASK_RUNNING);
/*
* If there are any more sleepers, wake one of them up so
@@ -106,9 +105,8 @@
struct task_struct *tsk = current;
DECLARE_WAITQUEUE(wait, tsk);
- tsk->state = TASK_INTERRUPTIBLE;
+ __set_task_state(tsk, TASK_INTERRUPTIBLE);
add_wait_queue_exclusive(&sem->wait, &wait);
- smp_wmb();
while (__sem_update_count(sem, -1) <= 0) {
if (signal_pending(current)) {
@@ -122,10 +120,11 @@
break;
}
schedule();
- tsk->state = TASK_INTERRUPTIBLE;
+ set_task_state(tsk, TASK_INTERRUPTIBLE);
}
- tsk->state = TASK_RUNNING;
remove_wait_queue(&sem->wait, &wait);
+ __set_task_state(tsk, TASK_RUNNING);
+
wake_up(&sem->wait);
return retval;
}
===== include/asm-ppc64/semaphore.h 1.5 vs edited =====
--- 1.5/include/asm-ppc64/semaphore.h Mon Feb 17 14:22:35 2003
+++ edited/include/asm-ppc64/semaphore.h Fri Aug 22 07:26:44 2003
@@ -82,9 +82,8 @@
/*
* Try to get the semaphore, take the slow path if we fail.
*/
- if (atomic_dec_return(&sem->count) < 0)
+ if (unlikely(atomic_dec_return(&sem->count) < 0))
__down(sem);
- smp_wmb();
}
static inline int down_interruptible(struct semaphore * sem)
@@ -96,23 +95,18 @@
#endif
might_sleep();
- if (atomic_dec_return(&sem->count) < 0)
+ if (unlikely(atomic_dec_return(&sem->count) < 0))
ret = __down_interruptible(sem);
- smp_wmb();
return ret;
}
static inline int down_trylock(struct semaphore * sem)
{
- int ret;
-
#ifdef WAITQUEUE_DEBUG
CHECK_MAGIC(sem->__magic);
#endif
- ret = atomic_dec_if_positive(&sem->count) < 0;
- smp_wmb();
- return ret;
+ return atomic_dec_if_positive(&sem->count) < 0;
}
static inline void up(struct semaphore * sem)
@@ -121,8 +115,7 @@
CHECK_MAGIC(sem->__magic);
#endif
- smp_wmb();
- if (atomic_inc_return(&sem->count) <= 0)
+ if (unlikely(atomic_inc_return(&sem->count) <= 0))
__up(sem);
}