[PATCH v2 2/2] powerpc/smp: Disable steal from updating CPU capacity
Srikar Dronamraju
srikar at linux.ibm.com
Wed Oct 29 17:07:57 AEDT 2025
In a shared LPAR with SMT enabled, it has been observed that steal time on
a CPU can trigger task migrations between sibling CPUs: an idle CPU pulls
a runnable task from the sibling that is impacted by steal, leaving the
previously busy CPU idle. This reversal can repeat continuously, resulting
in ping-pong behavior between SMT siblings.
To avoid migrations solely triggered by steal time, disable steal from
updating CPU capacity when running in shared processor mode.
lparstat:
System Configuration
type=Shared mode=Uncapped smt=8 lcpu=72 mem=2139693696 kB cpus=64 ent=24.00
Noise case: (Ebizzy on 2 LPARs with similar configuration as above)
nr-ebizzy-threads baseline std-deviation +patch std-deviation
36 1 (0.0345589) 1.01073 (0.0411082)
72 1 (0.0387066) 1.12867 (0.029486)
96 1 (0.013317) 1.05755 (0.0118292)
128 1 (0.028087) 1.04193 (0.027159)
144 1 (0.0103478) 1.07522 (0.0265476)
192 1 (0.0164666) 1.02177 (0.0164088)
256 1 (0.0241208) 0.977572 (0.0310648)
288 1 (0.0121516) 0.97529 (0.0263536)
384 1 (0.0128001) 0.967025 (0.0207603)
512 1 (0.0113173) 1.00975 (0.00753263)
576 1 (0.0126021) 1.01087 (0.0054196)
864 1 (0.0109194) 1.00369 (0.00987092)
1024 1 (0.0121474) 1.00338 (0.0122591)
1152 1 (0.013801) 1.0097 (0.0150391)
scaled perf stats for 72 thread case.
event baseline +patch
cycles 1 1.16993
instructions 1 1.14435
cs 1 0.913554
migrations 1 0.110884
faults 1 1.0005
cache-misses 1 1.68619
Observations:
- We see a drop in context switches and migrations, resulting in an
improvement in records per second.
No-noise case: (Ebizzy on 1 LPAR with the other LPAR being idle)
nr-ebizzy-threads baseline std-deviation +patch std-deviation
36 1 (0.0451482) 1.01243 (0.0434088)
72 1 (0.0308503) 1.06175 (0.0373877)
96 1 (0.0500514) 1.13143 (0.0718754)
128 1 (0.0602872) 1.09909 (0.0375227)
144 1 (0.0843502) 1.07494 (0.0240824)
192 1 (0.0255402) 0.992734 (0.0615166)
256 1 (0.00653372) 0.982841 (0.00751558)
288 1 (0.00318369) 0.99093 (0.00960287)
384 1 (0.00272681) 0.974312 (0.0112133)
512 1 (0.00528486) 0.981207 (0.0125443)
576 1 (0.00491385) 0.992027 (0.0104948)
864 1 (0.0087057) 0.994927 (0.0143434)
1024 1 (0.010002) 0.992463 (0.00429322)
1152 1 (0.00720965) 1.00393 (0.012553)
Signed-off-by: Srikar Dronamraju <srikar at linux.ibm.com>
---
Changelog v1->v2:
v1: https://lkml.kernel.org/r/20251028104255.1892485-2-srikar@linux.ibm.com
Peter suggested using a static branch instead of a sched feature
arch/powerpc/kernel/smp.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/kernel/smp.c b/arch/powerpc/kernel/smp.c
index 5ac7084eebc0..0f7fae0b4420 100644
--- a/arch/powerpc/kernel/smp.c
+++ b/arch/powerpc/kernel/smp.c
@@ -1694,8 +1694,11 @@ static void __init build_sched_topology(void)
 {
 	int i = 0;
 
-	if (is_shared_processor() && has_big_cores)
-		static_branch_enable(&splpar_asym_pack);
+	if (is_shared_processor()) {
+		if (has_big_cores)
+			static_branch_enable(&splpar_asym_pack);
+		sched_disable_steal_acct();
+	}
 
 #ifdef CONFIG_SCHED_SMT
 	if (has_big_cores) {
--
2.47.3