[mainline][BUG] Observed Workqueue lockups on offline CPUs.
Samir M
samir at linux.ibm.com
Mon Apr 27 20:02:35 AEST 2026
Hi Paul,
I've been testing the latest upstream kernel on a PowerPC system and
encountered workqueue lockup issues that I've bisected to commit
61bbcfb50514 ("srcu: Push srcu_node allocation to GP when non-preemptible").
After booting, I'm seeing workqueue lockup warnings for CPUs 81-96, all
of which are offline on my system. The affected pools remain stuck for
over 237 seconds:
[  243.309302][    C0] BUG: workqueue lockup - pool cpus=81 node=0 flags=0x4 nice=0 stuck for 237s!
[  243.309311][    C0] BUG: workqueue lockup - pool cpus=82 node=0 flags=0x4 nice=0 stuck for 237s!
[  243.309318][    C0] BUG: workqueue lockup - pool cpus=83 node=0 flags=0x4 nice=0 stuck for 237s!
[  243.309326][    C0] BUG: workqueue lockup - pool cpus=84 node=0 flags=0x4 nice=0 stuck for 237s!
[  243.309333][    C0] BUG: workqueue lockup - pool cpus=85 node=0 flags=0x4 nice=0 stuck for 237s!
[  243.309341][    C0] BUG: workqueue lockup - pool cpus=86 node=0 flags=0x4 nice=0 stuck for 237s!
[  243.309348][    C0] BUG: workqueue lockup - pool cpus=87 node=0 flags=0x4 nice=0 stuck for 237s!
[  243.309355][    C0] BUG: workqueue lockup - pool cpus=88 node=0 flags=0x4 nice=0 stuck for 237s!
[  243.309363][    C0] BUG: workqueue lockup - pool cpus=89 node=0 flags=0x4 nice=0 stuck for 237s!
[  243.309370][    C0] BUG: workqueue lockup - pool cpus=90 node=0 flags=0x4 nice=0 stuck for 237s!
[  243.309377][    C0] BUG: workqueue lockup - pool cpus=91 node=0 flags=0x4 nice=0 stuck for 237s!
[  243.309384][    C0] BUG: workqueue lockup - pool cpus=92 node=0 flags=0x4 nice=0 stuck for 237s!
[  243.309392][    C0] BUG: workqueue lockup - pool cpus=93 node=0 flags=0x4 nice=0 stuck for 237s!
[  243.309399][    C0] BUG: workqueue lockup - pool cpus=94 node=0 flags=0x4 nice=0 stuck for 237s!
[  243.309406][    C0] BUG: workqueue lockup - pool cpus=95 node=0 flags=0x4 nice=0 stuck for 237s!
[  243.309413][    C0] BUG: workqueue lockup - pool cpus=96 node=0 flags=0x4 nice=0 stuck for 237s!
Git bisect identified this as the first bad commit:
commit 61bbcfb50514a8a94e035a7349697a3790ab4783
Author: Paul E. McKenney <paulmck at kernel.org>
Date: Fri Mar 20 20:29:20 2026 -0700
srcu: Push srcu_node allocation to GP when non-preemptible
When the srcutree.convert_to_big and srcutree.big_cpu_lim kernel boot
parameters specify initialization-time allocation of the srcu_node
tree for statically allocated srcu_struct structures (for example, in
DEFINE_SRCU() at build time instead of init_srcu_struct() at runtime),
init_srcu_struct_nodes() will attempt to dynamically allocate this tree
at the first run-time update-side use of this srcu_struct structure,
but while holding a raw spinlock. Because the memory allocator can
acquire non-raw spinlocks, this can result in lockdep splats.
This commit therefore uses the same SRCU_SIZE_ALLOC trick that is used
when the first run-time update-side use of this srcu_struct structure
happens before srcu_init() is called. The actual allocation then takes
place from workqueue context at the ends of upcoming SRCU grace
periods.
[boqun: Adjust the sha1 of the Fixes tag]
Fixes: 175b45ed343a ("srcu: Use raw spinlocks so call_srcu() can be used under preempt_disable()")
Signed-off-by: Paul E. McKenney <paulmck at kernel.org>
Signed-off-by: Boqun Feng <boqun at kernel.org>
kernel/rcu/srcutree.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
Reverting this commit resolves the issue.
The problem appears to be that work is being queued to per-CPU pools
whose CPUs are offline, so it never gets a chance to run. The commit
moves SRCU node allocation to workqueue context to avoid lockdep issues
with memory allocation under raw spinlocks, which makes sense. However,
the workqueue scheduling in this new code path does not seem to account
for CPU online/offline state.
My test environment:
- Architecture: PowerPC
- Kernel version: Latest upstream (7.1-rc1)
- CPUs 81-96 are offline at boot time
I suspect the issue might be related to:
1. Workqueue not checking CPU online status before scheduling SRCU
allocation work
2. Missing CPU hotplug awareness in the new workqueue-based allocation path
3. Possible race condition with CPU hotplug events
Would it make sense to use queue_work_on() with explicit online CPU
selection, or add CPU hotplug handlers for this workqueue? I'm not
deeply familiar with the workqueue internals, so I might be missing
something.
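To make the suggestion concrete, here is a rough, untested sketch of the kind of fallback I had in mind. The helper name srcu_queue_alloc_work and the use of system_wq are purely illustrative, not the actual identifiers in kernel/rcu/srcutree.c; the point is only to route the work to an online CPU when the preferred one is down:

```c
/*
 * Illustrative sketch only, not a tested patch. If the preferred CPU
 * is offline, fall back to any online CPU so the work item is not
 * queued to a pool that will never run it.
 */
static void srcu_queue_alloc_work(int cpu, struct work_struct *work)
{
	if (!cpu_online(cpu))
		cpu = cpumask_any(cpu_online_mask);
	queue_work_on(cpu, system_wq, work);
}
```

Alternatively, if there is no real per-CPU affinity requirement here, simply using queue_work() on an unbound workqueue might sidestep the offline-CPU problem entirely, but I may be missing a reason the work needs to run on a specific CPU.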
Please let me know if you need any additional details or if you'd like
me to test any patches.
If you happen to fix this issue, please add the tag below:
Reported-by: Samir M <samir at linux.ibm.com>
Thanks,
Samir