[PATCH v2 00/14] Introducing TIF_NOTIFY_IPI flag
K Prateek Nayak
kprateek.nayak at amd.com
Fri Jun 14 04:15:59 AEST 2024
Hello everyone,
Before jumping into the issue, let me clarify the Cc list. Everyone has
been cc'ed on Patch 0 through Patch 3. Respective arch maintainers,
reviewers, and committers returned by scripts/get_maintainer.pl have
been cc'ed on the respective arch side changes. Scheduler and CPU Idle
maintainers and reviewers have been included for the entire series. If I
have missed anyone, please do add them. If you would like to be dropped
from the cc list, wholly or partially, for future iterations, please
do let me know.
As long as the first three patches are applied in order, the arch
specific enablement can be applied independently and out of order, since
the TIF_NOTIFY_IPI flag is not used until Patch 3, and Patch 2 preps the
complete tree to handle a break out of the TIF_POLLING_NRFLAG state when
either TIF_NOTIFY_IPI or TIF_NEED_RESCHED is set.
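For reference, the polling break-out helper that Patch 2 introduces
boils down to something like the following (a minimal sketch based on
the patch title and the diagrams below; the exact definition in the
series may differ):

--
/*
 * Sketch of the need_resched_or_ipi() helper from Patch 2: a CPU in
 * TIF_POLLING_NRFLAG mode leaves the polling state when either
 * TIF_NEED_RESCHED or TIF_NOTIFY_IPI is set.
 */
static __always_inline bool need_resched_or_ipi(void)
{
	return unlikely(tif_need_resched() || tif_notify_ipi());
}
--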
Quick changelog and addressing concerns from v1
===============================================
v1: https://lore.kernel.org/lkml/20240220171457.703-1-kprateek.nayak@amd.com/
v1..v2:
o Rebased the series on latest tip:sched/core at commit c793a62823d1
("sched/core: Drop spinlocks on contention iff kernel is preemptible")
Fixed a conflict with commit edc8fc01f608 ("x86: Fix
CPUIDLE_FLAG_IRQ_ENABLE leaking timer reprogram") that touched
mwait_idle_with_hints() in arch/x86/include/asm/mwait.h
o Dropped the ARM results since I could not get my hands on the ARM64
system I used in my last testing. If I do manage to get access to it
again, I'll rerun the experiments and share the results on the thread.
To test the case where TIF_NOTIFY_IPI is not enabled for a particular
architecture, I applied the series only until Patch 3 and tested the
same on my x86 machine with a WARN_ON_ONCE() in do_idle() to check if
tif_notify_ipi() ever returns true, and then repeated the same with
Patch 4 applied.
o Updated benchmark results based on the latest base.
o Collected the Ack from Guo Ren for CSKY enablement.
o Dropped the RFC tag.
o Unfortunately, the series does not solve the issue highlighted by
Julia Lawall w.r.t. NUMA Balancing in [0], based on her testing of v1.
However, she also highlighted a possible regression last time around,
where compiling a single file took much longer with the series, but I
could not reproduce it on my end. For sanity, I reran the same
experiment this time around and could not see any difference.
Following are the numbers for:

    $ make clean
    $ time make kernel/sched/core.o

---> tip:sched/core

-j1
    real    0m32.734s
    user    0m25.158s
    sys     0m6.750s

-j256
    real    0m7.181s
    user    0m27.509s
    sys     0m7.876s

---> tip:sched/core + TIF_NOTIFY_IPI

-j1
    real    0m32.408s
    user    0m24.826s
    sys     0m6.767s

-j256
    real    0m7.187s
    user    0m27.556s
    sys     0m7.602s
[0] https://lore.kernel.org/lkml/alpine.DEB.2.22.394.2310032059060.3220@hadrien/
Individual patches have their own changelog to help with review.
With those details out of the way ...
Problem statement
=================
When measuring IPI throughput using a modified version of Anton
Blanchard's ipistorm benchmark [1], configured to measure the time taken
to perform a fixed number of smp_call_function_single() calls (with wait
set to 1), an increase in benchmark time was observed between v5.7 and
v6.7-rc6 (the latest upstream kernel at the time the issue was
encountered). The issue persists on v6.10-rc1 as well.
Bisection pointed to commit b2a02fc43a1f ("smp: Optimize
send_call_function_single_ipi()") as the reason behind this increase in
runtime.
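For context, the benchmark's hot loop boils down to a fixed number of
synchronous single-target cross calls. Following is a simplified sketch
of the kind of loop being timed (do_nothing() and ipi_bench_loop() are
illustrative names, not ipistorm's actual source):

--
#include <linux/smp.h>

/* Illustrative no-op callback executed on the target CPU */
static void do_nothing(void *unused)
{
}

/* Issue a fixed number of synchronous (wait = 1) cross-CPU calls */
static void ipi_bench_loop(int target_cpu, unsigned long numipi)
{
	unsigned long i;

	for (i = 0; i < numipi; i++)
		smp_call_function_single(target_cpu, do_nothing, NULL, 1);
}
--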
Experiments
===========
Since the commit cannot be cleanly reverted on top of the current
tip:sched/core, the effects of the optimizations were reverted by:
1. Removing the check for call_function_single_prep_ipi() in
   send_call_function_single_ipi(). With this change,
   send_call_function_single_ipi() always calls
   arch_send_call_function_single_ipi().

2. Removing the call to flush_smp_call_function_queue() in do_idle(),
   since, with (1.), every smp_call_function would unconditionally send
   an IPI to an idle CPU in TIF_POLLING mode.
Following is the diff of the above described changes, which will
henceforth be referred to as the "revert":
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 31231925f1ec..735184d98c0f 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -332,11 +332,6 @@ static void do_idle(void)
 		 */
 		smp_mb__after_atomic();

-		/*
-		 * RCU relies on this call to be done outside of an RCU read-side
-		 * critical section.
-		 */
-		flush_smp_call_function_queue();
 		schedule_idle();

 		if (unlikely(klp_patch_pending(current)))
diff --git a/kernel/smp.c b/kernel/smp.c
index f085ebcdf9e7..2ff100c41885 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -111,11 +111,9 @@ void __init call_function_init(void)
 static __always_inline void
 send_call_function_single_ipi(int cpu)
 {
-	if (call_function_single_prep_ipi(cpu)) {
-		trace_ipi_send_cpu(cpu, _RET_IP_,
-				   generic_smp_call_function_single_interrupt);
-		arch_send_call_function_single_ipi(cpu);
-	}
+	trace_ipi_send_cpu(cpu, _RET_IP_,
+			   generic_smp_call_function_single_interrupt);
+	arch_send_call_function_single_ipi(cpu);
 }

 static __always_inline void
--
With the revert, the time taken to complete a fixed set of IPIs using
ipistorm improves significantly. Following are the numbers from a dual
socket 3rd Generation EPYC system (2 x 64C/128T) (boost on, C2 disabled)
running ipistorm between CPU8 and CPU16:
cmdline: insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1
(tip:sched/core was at commit c793a62823d1 ("sched/core: Drop spinlocks
on contention iff kernel is preemptible") for all the test data
presented below)
==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel:                                 time [pct imp]
tip:sched/core                          1.00 [00.00]
tip:sched/core + revert                 0.41 [60.00]
Although the revert improves ipistorm performance, it also regresses
tbench and netperf, supporting the validity of the optimization.
Following are the tbench numbers from the same machine comparing vanilla
tip:sched/core and the revert applied on top:
==================================================================
Test : tbench
Units : Normalized throughput
Interpretation: Higher is better
Statistic : AMean
==================================================================
Clients:      tip (CV)      revert (CV)    [pct imp]
    1      1.00 (0.60)      0.90 (0.08)      [-10%]
    2      1.00 (0.27)      0.90 (0.76)      [-10%]
    4      1.00 (0.42)      0.90 (0.52)      [-10%]
    8      1.00 (0.78)      0.91 (0.54)      [ -9%]
   16      1.00 (1.70)      0.92 (0.39)      [ -8%]
   32      1.00 (1.73)      0.91 (1.39)      [ -9%]
   64      1.00 (1.09)      0.92 (1.60)      [ -8%]
  128      1.00 (1.45)      0.95 (0.52)      [ -5%]
  256      1.00 (0.96)      1.01 (0.28)      [  1%]
  512      1.00 (0.32)      1.01 (0.20)      [  1%]
 1024      1.00 (0.06)      1.01 (0.03)      [  1%]
Since a simple revert is not a viable solution, we delved deeper into
the changes in the execution path introduced by the
call_function_single_prep_ipi() check.
Effects of call_function_single_prep_ipi()
==========================================
To pull a TIF_POLLING thread out of idle to process an IPI, the sender
sets the TIF_NEED_RESCHED bit in the idle task's thread info in
call_function_single_prep_ipi() and avoids sending an actual IPI to the
target. As a result, the scheduler expects a task to be enqueued when
exiting the idle path. This is not the case with non-polling idle states
where the idle CPU exits the non-polling idle state to process the
interrupt, and since need_resched() returns false, soon goes back to
idle again.
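The sender-side mechanism is set_nr_if_polling() in kernel/sched/core.c;
lightly simplified, it looks like the following (comments added for
illustration):

--
/* Lightly simplified from set_nr_if_polling() in kernel/sched/core.c */
static bool set_nr_if_polling(struct task_struct *p)
{
	struct thread_info *ti = task_thread_info(p);
	typeof(ti->flags) val = READ_ONCE(ti->flags);

	do {
		/* Target is not polling: a real IPI must be sent */
		if (!(val & _TIF_POLLING_NRFLAG))
			return false;
		/* NEED_RESCHED is already set: the target will wake anyway */
		if (val & _TIF_NEED_RESCHED)
			return true;
	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));

	return true;
}
--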
When the TIF_NEED_RESCHED flag is set, do_idle() will call
schedule_idle(), a large part of which runs with local IRQs disabled. In
the case of ipistorm, when measuring IPI throughput, this large IRQ
disabled section delays the processing of IPIs. Further auditing
revealed that, in the absence of any runnable tasks,
pick_next_task_fair(), which is called from the pick_next_task() fast
path, will always call newidle_balance() in this scenario, further
increasing the time spent in the IRQ disabled section.
Following is a crude visualization of the problem, with the relevant
functions expanded:
--
       CPU0                                              CPU1
       ====                                              ====
                                                         do_idle() {
                                                                 __current_set_polling();
                                                                 ...
                                                                 monitor(addr);
                                                                 if (!need_resched())
                                                                         mwait() {
                                                                         /* Waiting */
       smp_call_function_single(CPU1, func, wait = 1) {          ...
               ...                                               ...
               set_nr_if_polling(CPU1) {                         ...
                       /* Realizes CPU1 is polling */            ...
                       try_cmpxchg(addr,                         ...
                                   &val,                         ...
                                   val | _TIF_NEED_RESCHED);     ...
               } /* Does not send an IPI */                      ...
               ...                                       } /* mwait exit due to write at addr */
               csd_lock_wait() {                         }
               /* Waiting */                             preempt_set_need_resched();
               ...                                       __current_clr_polling();
               ...                                       flush_smp_call_function_queue() {
               ...                                               func();
       } /* End of wait */                               }
       }                                                 schedule_idle() {
                                                                 ...
                                                                 local_irq_disable();
       smp_call_function_single(CPU1, func, wait = 1) {          ...
               ...                                               ...
               arch_send_call_function_single_ipi(CPU1);         ...
                                                  \              ...
                                                   \             newidle_balance() {
                                                    \                    ...
                                      /* Delay */    \                   ...
                                                      \          }
                                                       \         ...
                                                        \------> local_irq_enable();
                                                                 /* Processes the IPI */
--
Skipping newidle_balance()
==========================
In an earlier attempt to solve the challenge of the long IRQ disabled
section, newidle_balance() was skipped when a CPU waking up from idle
was found to have no runnable tasks, and was transitioning back to
idle [2]. Tim [3] and David [4] pointed out that newidle_balance()
may still be beneficial for CPUs that are idling with the tick enabled,
where newidle_balance() has the opportunity to pull tasks onto the idle
CPU. Vincent [5] pointed out a case where the idle load balance kick
will fail to run on an idle CPU, since the IPI handler launching the ILB
checks need_resched(). In such cases, the idle CPU relies on
newidle_balance() to pull tasks towards itself.

On the same thread, using an alternate flag instead of NEED_RESCHED to
indicate a pending IPI was suggested as the correct approach to solve
this problem.
Proposed solution: TIF_NOTIFY_IPI
=================================
Instead of reusing the TIF_NEED_RESCHED bit to pull a TIF_POLLING CPU
out of idle, TIF_NOTIFY_IPI is a newly introduced flag that
call_function_single_prep_ipi() sets on a target TIF_POLLING CPU to
indicate a pending IPI, which the idle CPU promises to process soon.

On architectures that do not support the TIF_NOTIFY_IPI flag (this
series adds support for x86, ARM, and the other architectures covered
by the per-arch patches below), call_function_single_prep_ipi() will
fall back to setting the TIF_NEED_RESCHED bit to pull the TIF_POLLING
CPU out of idle.
Since the pending IPI handlers are processed before the call to
schedule_idle() in do_idle(), schedule_idle() will only be called if the
IPI handler has woken / migrated a new task onto the idle CPU and set
the TIF_NEED_RESCHED bit to indicate the same. This avoids unnecessarily
running into the long IRQ disabled section in schedule_idle(), and any
need_resched() check within a call function will accurately indicate
whether a task is waiting for CPU time on the CPU handling the IPI.
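Following is a minimal sketch of the modified wakeup path under this
scheme (notify_ipi_if_polling() and _TIF_POLLING_WAKE_FLAG are
illustrative names, not the exact hunks from Patch 3):

--
/*
 * On architectures that define TIF_NOTIFY_IPI, a pending IPI is
 * indicated with _TIF_NOTIFY_IPI; others fall back to the old
 * _TIF_NEED_RESCHED behaviour.
 */
#ifdef _TIF_NOTIFY_IPI
#define _TIF_POLLING_WAKE_FLAG	_TIF_NOTIFY_IPI
#else
#define _TIF_POLLING_WAKE_FLAG	_TIF_NEED_RESCHED
#endif

static bool notify_ipi_if_polling(struct task_struct *p)
{
	struct thread_info *ti = task_thread_info(p);
	typeof(ti->flags) val = READ_ONCE(ti->flags);

	do {
		/* Target is not polling: a real IPI must be sent */
		if (!(val & _TIF_POLLING_NRFLAG))
			return false;
		/* A wakeup is already pending on the target */
		if (val & _TIF_POLLING_WAKE_FLAG)
			return true;
	} while (!try_cmpxchg(&ti->flags, &val, val | _TIF_POLLING_WAKE_FLAG));

	return true; /* Flag set; no IPI needed */
}
--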
Following is a crude visualization of how the situation changes with
the newly introduced TIF_NOTIFY_IPI flag:
--
       CPU0                                              CPU1
       ====                                              ====
                                                         do_idle() {
                                                                 __current_set_polling();
                                                                 ...
                                                                 monitor(addr);
                                                                 if (!need_resched_or_ipi())
                                                                         mwait() {
                                                                         /* Waiting */
       smp_call_function_single(CPU1, func, wait = 1) {          ...
               ...                                               ...
               set_nr_if_polling(CPU1) {                         ...
                       /* Realizes CPU1 is polling */            ...
                       try_cmpxchg(addr,                         ...
                                   &val,                         ...
                                   val | _TIF_NOTIFY_IPI);       ...
               } /* Does not send an IPI */                      ...
               ...                                       } /* mwait exit due to write at addr */
               csd_lock_wait() {                         ...
               /* Waiting */                             preempt_fold_need_resched(); /* fold if NEED_RESCHED */
               ...                                       __current_clr_polling();
               ...                                       flush_smp_call_function_queue() {
               ...                                               func(); /* Will set NEED_RESCHED if sched_ttwu_pending() */
       } /* End of wait */                               }
       }                                                 if (need_resched()) {
                                                                 schedule_idle();
       smp_call_function_single(CPU1, func, wait = 1) {  }
               ...                                       ... /* IRQs remain enabled */
               arch_send_call_function_single_ipi(CPU1); ------> /* Processes the IPI */
--
Results
=======
With TIF_NOTIFY_IPI, the time taken to complete a fixed set of IPIs
using ipistorm improves drastically and is close to the numbers seen
with the revert. Following are the numbers from the same dual socket 3rd
Generation EPYC system (2 x 64C/128T) (boost on, C2 disabled) running
ipistorm between CPU8 and CPU16:
cmdline: insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1
==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel:                                 time [pct imp]
tip:sched/core                          1.00 [baseline]
tip:sched/core + revert                 0.40 [60.26]
tip:sched/core + TIF_NOTIFY_IPI         0.46 [54.88]
netperf and tbench results with the patch match the results on tip on
the dual socket 3rd Generation EPYC system (2 x 64C/128T). Additionally,
hackbench, stream, and schbench were also tested, with results from the
patched kernel matching those of the tip.
Additional benefits
===================
Currently, the need_resched() check in nohz_csd_func() returns true when
an idle CPU in TIF_POLLING mode is woken up to do an idle load balance:
send_call_function_single_ipi() ends up setting the TIF_NEED_RESCHED
flag to pull the CPU out of idle, and the flag is not cleared until
__schedule() is called much later in the call path. As a result, the
idle load balance bails out early. With TIF_NOTIFY_IPI, this is no
longer the case, since TIF_NEED_RESCHED is only set when there is a
genuine need to call schedule(), and is no longer overloaded to notify
a pending IPI.
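For reference, the check in question lives in nohz_csd_func() in
kernel/sched/core.c; abridged, with comments added for illustration:

--
/* Abridged from nohz_csd_func() in kernel/sched/core.c */
static void nohz_csd_func(void *info)
{
	struct rq *rq = info;
	int cpu = cpu_of(rq);
	unsigned int flags;

	/* Release the rq::nohz_csd and collect the kick flags */
	flags = atomic_fetch_andnot(NOHZ_KICK_MASK | NOHZ_NEWILB_KICK,
				    nohz_flags(cpu));

	rq->idle_balance = idle_cpu(cpu);
	/*
	 * This bails out when need_resched() is true, which, prior to
	 * this series, included the case where NEED_RESCHED was set
	 * only to break the CPU out of the polling idle loop.
	 */
	if (rq->idle_balance && !need_resched()) {
		rq->nohz_idle_balance = flags;
		raise_softirq_irqoff(SCHED_SOFTIRQ);
	}
}
--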
Links
=====
[1] https://github.com/antonblanchard/ipistorm
[2] https://lore.kernel.org/lkml/20240119084548.2788-1-kprateek.nayak@amd.com/
[3] https://lore.kernel.org/lkml/b4f5ac150685456cf45a342e3bb1f28cdd557a53.camel@linux.intel.com/
[4] https://lore.kernel.org/lkml/20240123211756.GA221793@maniforge/
[5] https://lore.kernel.org/lkml/CAKfTPtC446Lo9CATPp7PExdkLhHQFoBuY-JMGC7agOHY4hs-Pw@mail.gmail.com/
This series is based on tip:sched/core at commit c793a62823d1
("sched/core: Drop spinlocks on contention iff kernel is preemptible")
--
Gautham R. Shenoy (4):
thread_info: Add helpers to test and clear TIF_NOTIFY_IPI
sched: Define a need_resched_or_ipi() helper and use it treewide
sched/core: Use TIF_NOTIFY_IPI to notify an idle CPU in TIF_POLLING
mode of pending IPI
x86/thread_info: Introduce TIF_NOTIFY_IPI flag
K Prateek Nayak (10):
arm/thread_info: Introduce TIF_NOTIFY_IPI flag
alpha/thread_info: Introduce TIF_NOTIFY_IPI flag
openrisc/thread_info: Introduce TIF_NOTIFY_IPI flag
powerpc/thread_info: Introduce TIF_NOTIFY_IPI flag
sh/thread_info: Introduce TIF_NOTIFY_IPI flag
sparc/thread_info: Introduce TIF_NOTIFY_IPI flag
csky/thread_info: Introduce TIF_NOTIFY_IPI flag
parisc/thread_info: Introduce TIF_NOTIFY_IPI flag
nios2/thread_info: Introduce TIF_NOTIFY_IPI flag
microblaze/thread_info: Introduce TIF_NOTIFY_IPI flag
--
Cc: Richard Henderson <richard.henderson at linaro.org>
Cc: Ivan Kokshaysky <ink at jurassic.park.msu.ru>
Cc: Matt Turner <mattst88 at gmail.com>
Cc: Russell King <linux at armlinux.org.uk>
Cc: Guo Ren <guoren at kernel.org>
Cc: Michal Simek <monstr at monstr.eu>
Cc: Dinh Nguyen <dinguyen at kernel.org>
Cc: Jonas Bonn <jonas at southpole.se>
Cc: Stefan Kristiansson <stefan.kristiansson at saunalahti.fi>
Cc: Stafford Horne <shorne at gmail.com>
Cc: "James E.J. Bottomley" <James.Bottomley at HansenPartnership.com>
Cc: Helge Deller <deller at gmx.de>
Cc: Michael Ellerman <mpe at ellerman.id.au>
Cc: Nicholas Piggin <npiggin at gmail.com>
Cc: Christophe Leroy <christophe.leroy at csgroup.eu>
Cc: "Naveen N. Rao" <naveen.n.rao at linux.ibm.com>
Cc: Yoshinori Sato <ysato at users.sourceforge.jp>
Cc: Rich Felker <dalias at libc.org>
Cc: John Paul Adrian Glaubitz <glaubitz at physik.fu-berlin.de>
Cc: "David S. Miller" <davem at davemloft.net>
Cc: Andreas Larsson <andreas at gaisler.com>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: Borislav Petkov <bp at alien8.de>
Cc: Dave Hansen <dave.hansen at linux.intel.com>
Cc: "H. Peter Anvin" <hpa at zytor.com>
Cc: "Rafael J. Wysocki" <rafael at kernel.org>
Cc: Daniel Lezcano <daniel.lezcano at linaro.org>
Cc: Peter Zijlstra <peterz at infradead.org>
Cc: Juri Lelli <juri.lelli at redhat.com>
Cc: Vincent Guittot <vincent.guittot at linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann at arm.com>
Cc: Steven Rostedt <rostedt at goodmis.org>
Cc: Ben Segall <bsegall at google.com>
Cc: Mel Gorman <mgorman at suse.de>
Cc: Daniel Bristot de Oliveira <bristot at redhat.com>
Cc: Valentin Schneider <vschneid at redhat.com>
Cc: Andrew Donnellan <ajd at linux.ibm.com>
Cc: Benjamin Gray <bgray at linux.ibm.com>
Cc: Frederic Weisbecker <frederic at kernel.org>
Cc: Xin Li <xin3.li at intel.com>
Cc: Kees Cook <keescook at chromium.org>
Cc: Rick Edgecombe <rick.p.edgecombe at intel.com>
Cc: Tony Battersby <tonyb at cybernetics.com>
Cc: Bjorn Helgaas <bhelgaas at google.com>
Cc: Brian Gerst <brgerst at gmail.com>
Cc: Leonardo Bras <leobras at redhat.com>
Cc: Imran Khan <imran.f.khan at oracle.com>
Cc: "Paul E. McKenney" <paulmck at kernel.org>
Cc: Rik van Riel <riel at surriel.com>
Cc: Tim Chen <tim.c.chen at linux.intel.com>
Cc: David Vernet <void at manifault.com>
Cc: Julia Lawall <julia.lawall at inria.fr>
Cc: linux-alpha at vger.kernel.org
Cc: linux-kernel at vger.kernel.org
Cc: linux-arm-kernel at lists.infradead.org
Cc: linux-csky at vger.kernel.org
Cc: linux-openrisc at vger.kernel.org
Cc: linux-parisc at vger.kernel.org
Cc: linuxppc-dev at lists.ozlabs.org
Cc: linux-sh at vger.kernel.org
Cc: sparclinux at vger.kernel.org
Cc: linux-pm at vger.kernel.org
Cc: x86 at kernel.org
--
 arch/alpha/include/asm/thread_info.h      |  2 ++
 arch/arm/include/asm/thread_info.h        |  3 ++
 arch/csky/include/asm/thread_info.h       |  2 ++
 arch/microblaze/include/asm/thread_info.h |  2 ++
 arch/nios2/include/asm/thread_info.h      |  2 ++
 arch/openrisc/include/asm/thread_info.h   |  2 ++
 arch/parisc/include/asm/thread_info.h     |  2 ++
 arch/powerpc/include/asm/thread_info.h    |  2 ++
 arch/sh/include/asm/thread_info.h         |  2 ++
 arch/sparc/include/asm/thread_info_32.h   |  2 ++
 arch/sparc/include/asm/thread_info_64.h   |  2 ++
 arch/x86/include/asm/mwait.h              |  2 +-
 arch/x86/include/asm/thread_info.h        |  2 ++
 arch/x86/kernel/process.c                 |  2 +-
 drivers/cpuidle/cpuidle-powernv.c         |  2 +-
 drivers/cpuidle/cpuidle-pseries.c         |  2 +-
 drivers/cpuidle/poll_state.c              |  2 +-
 include/linux/sched.h                     |  5 +++
 include/linux/sched/idle.h                | 12 +++----
 include/linux/thread_info.h               | 43 +++++++++++++++++++++++
 kernel/sched/core.c                       | 41 ++++++++++++++++-----
 kernel/sched/idle.c                       | 23 ++++++++----
 22 files changed, 133 insertions(+), 26 deletions(-)
--
2.34.1