[RFC PATCH 03/14] sched/core: Use TIF_NOTIFY_IPI to notify an idle CPU in TIF_POLLING mode of pending IPI
K Prateek Nayak
kprateek.nayak at amd.com
Wed Feb 21 04:14:46 AEDT 2024
From: "Gautham R. Shenoy" <gautham.shenoy at amd.com>
Problem statement
=================
When measuring IPI throughput using a modified version of Anton
Blanchard's ipistorm benchmark [1], configured to measure time taken to
perform a fixed number of smp_call_function_single() calls (with wait set to
1), an increase in benchmark time was observed between v5.7 and the
upstream kernel (v6.7-rc6).
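Each iteration of the benchmark essentially boils down to a synchronous
cross-CPU call of the following shape (an illustrative sketch, not the actual
ipistorm module code; do_nothing() is a stand-in callback):

    static void do_nothing(void *info)
    {
    }

    /* wait = 1: the caller spins until the callback has completed on the target */
    smp_call_function_single(target_cpu, do_nothing, NULL, 1);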
Bisection pointed to commit b2a02fc43a1f ("smp: Optimize
send_call_function_single_ipi()") as the reason behind this increase in
runtime. Reverting the optimization introduced by the above commit fixed
the regression in ipistorm, however benchmarks like tbench and netperf
regressed with the revert, supporting the validity of the optimization.
Following are the benchmark results on top of tip:sched/core with the
optimization reverted on a dual socket 3rd Generation AMD EPYC system
(2 x 64C/128T) running with boost enabled and C2 disabled:
(tip:sched/core at tag "sched-core-2024-01-08" for all the testing done
below)
==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
cmdline : insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1
==================================================================
kernel:                                 time [pct imp]
tip:sched/core                          1.00 [0.00]
tip:sched/core + revert                 0.81 [19.36]
==================================================================
Test : tbench
Units : Normalized throughput
Interpretation: Higher is better
Statistic : AMean
==================================================================
Clients:       tip[pct imp](CV)        revert[pct imp](CV)
    1     1.00 [  0.00]( 0.24)     0.91 [ -8.96]( 0.30)
    2     1.00 [  0.00]( 0.25)     0.92 [ -8.20]( 0.97)
    4     1.00 [  0.00]( 0.23)     0.91 [ -9.20]( 1.75)
    8     1.00 [  0.00]( 0.69)     0.91 [ -9.48]( 1.56)
   16     1.00 [  0.00]( 0.66)     0.92 [ -8.49]( 2.43)
   32     1.00 [  0.00]( 0.96)     0.89 [-11.13]( 0.96)
   64     1.00 [  0.00]( 1.06)     0.90 [ -9.72]( 2.49)
  128     1.00 [  0.00]( 0.70)     0.92 [ -8.36]( 1.26)
  256     1.00 [  0.00]( 0.72)     0.97 [ -3.30]( 1.10)
  512     1.00 [  0.00]( 0.42)     0.98 [ -1.73]( 0.37)
 1024     1.00 [  0.00]( 0.28)     0.99 [ -1.39]( 0.43)
==================================================================
Test : netperf
Units : Normalized Throughput
Interpretation: Higher is better
Statistic : AMean
==================================================================
Clients:            tip[pct imp](CV)        revert[pct imp](CV)
  1-clients     1.00 [  0.00]( 0.50)     0.89 [-10.51]( 0.20)
  2-clients     1.00 [  0.00]( 1.16)     0.89 [-11.10]( 0.59)
  4-clients     1.00 [  0.00]( 1.03)     0.89 [-10.68]( 0.38)
  8-clients     1.00 [  0.00]( 0.99)     0.89 [-10.54]( 0.50)
 16-clients     1.00 [  0.00]( 0.87)     0.89 [-10.92]( 0.95)
 32-clients     1.00 [  0.00]( 1.24)     0.89 [-10.85]( 0.63)
 64-clients     1.00 [  0.00]( 1.58)     0.90 [-10.11]( 1.18)
128-clients     1.00 [  0.00]( 0.87)     0.89 [-10.94]( 1.11)
256-clients     1.00 [  0.00]( 4.77)     1.00 [ -0.16]( 3.45)
512-clients     1.00 [  0.00](56.16)     1.02 [  2.10](56.05)
Since a simple revert is not a viable solution, the changes in the code
path of call_function_single_prep_ipi(), with and without the
optimization, were audited to better understand the effect of the commit.
Effects of call_function_single_prep_ipi()
==========================================
To pull a TIF_POLLING thread out of idle to process an IPI, the sender
sets the TIF_NEED_RESCHED bit in the idle task's thread info in
call_function_single_prep_ipi() and avoids sending an actual IPI to the
target. As a result, the scheduler expects a task to be enqueued when
exiting the idle path. This is not the case with non-polling idle states,
where the idle CPU exits the idle state to process the interrupt and,
since need_resched() returns false, soon goes back to idle again.
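For reference, the sender-side fast path before this patch looks roughly as
follows (a simplified sketch based on kernel/sched/core.c, not the exact
upstream code):

    bool call_function_single_prep_ipi(int cpu)
    {
            if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
                    /*
                     * Target is in a TIF_POLLING_NRFLAG idle loop: setting
                     * TIF_NEED_RESCHED in its thread_info is enough to make
                     * the polling loop exit, so no IPI is sent.
                     */
                    trace_sched_wake_idle_without_ipi(cpu);
                    return false;
            }
            return true;    /* caller must send a real IPI */
    }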
When the TIF_NEED_RESCHED flag is set, do_idle() will call schedule_idle(),
a large part of which runs with local IRQs disabled. In case of ipistorm,
when measuring IPI throughput, this large IRQ-disabled section delays
processing of IPIs. Further auditing revealed that in the absence of any
runnable tasks, pick_next_task_fair(), which is called from the
pick_next_task() fast path, will always call newidle_balance() in this
scenario, further increasing the time spent in the IRQ-disabled section.
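The pre-patch idle-exit sequence is roughly the following (simplified from
do_idle() in kernel/sched/idle.c; see also the hunk below):

    /* Fell out of the !need_resched() polling loop: NEED_RESCHED assumed set */
    preempt_set_need_resched();
    tick_nohz_idle_exit();
    __current_clr_polling();
    ...
    flush_smp_call_function_queue();    /* runs the pending IPI handlers */
    schedule_idle();                    /* unconditional: long IRQ-disabled
                                         * section, may call newidle_balance() */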
Following is the crude visualization of the problem with relevant
functions expanded:
--
CPU0                                                    CPU1
====                                                    ====
                                                        do_idle() {
                                                            __current_set_polling();
                                                            ...
                                                            monitor(addr);
                                                            if (!need_resched()) {
                                                                mwait() {
                                                                /* Waiting */
smp_call_function_single(CPU1, func, wait = 1) {                ...
    ...                                                         ...
    set_nr_if_polling(CPU1) {                                   ...
        /* Realizes CPU1 is polling */                          ...
        try_cmpxchg(addr,                                       ...
                    &val,                                       ...
                    val | _TIF_NEED_RESCHED);                   ...
    } /* Does not send an IPI */                                ...
    ...                                                     } /* mwait exit due to write at addr */
    csd_lock_wait() {                                       }
    /* Waiting */                                           preempt_set_need_resched();
    ...                                                     __current_clr_polling();
    ...                                                     flush_smp_call_function_queue() {
    ...                                                         func();
} /* End of wait */                                     }
}                                                       schedule_idle() {
                                                            ...
                                                            local_irq_disable();
smp_call_function_single(CPU1, func, wait = 1) {            ...
    ...                                                     ...
    arch_send_call_function_single_ipi(CPU1);               ...
                        \                                   ...
                         \                                  newidle_balance() {
                          \                                     ...
    /* Delay */                                                 ...
                           \                                }
                            \                               ...
                             \--------------> local_irq_enable();
                                               /* Processes the IPI */
--
Skipping newidle_balance()
==========================
In an earlier attempt to solve the challenge of the long IRQ disabled
section, newidle_balance() was skipped when a CPU waking up from idle
was found to have no runnable tasks, and was transitioning back to
idle [2]. Tim [3] and David [4] pointed out that newidle_balance()
may still be beneficial for CPUs that are idling with the tick enabled,
where newidle_balance() has the opportunity to pull tasks onto the idle CPU.
Vincent [5] pointed out a case where the idle load kick will fail to
run on an idle CPU since the IPI handler launching the ILB will check
for need_resched(). In such cases, the idle CPU relies on
newidle_balance() to pull tasks towards itself.
Using an alternate flag instead of NEED_RESCHED to indicate a pending
IPI was suggested as the correct approach to solve this problem on the
same thread.
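For context, the check Vincent referred to sits in the IPI handler that kicks
the idle load balancer; roughly (a simplified sketch of nohz_csd_func() in
kernel/sched/core.c, with details elided):

    static void nohz_csd_func(void *info)
    {
            struct rq *rq = info;

            ...
            rq->idle_balance = idle_cpu(cpu_of(rq));
            /*
             * The ILB kick is silently dropped when NEED_RESCHED is already
             * set, e.g. when the flag was only used to wake a polling CPU.
             */
            if (rq->idle_balance && !need_resched()) {
                    ...
                    raise_softirq_irqoff(SCHED_SOFTIRQ);
            }
    }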
Proposed solution: TIF_NOTIFY_IPI
=================================
Instead of reusing the TIF_NEED_RESCHED bit to pull a TIF_POLLING CPU out
of idle, TIF_NOTIFY_IPI is a newly introduced flag that
call_function_single_prep_ipi() sets on a target TIF_POLLING CPU to
indicate a pending IPI, which the idle CPU promises to process soon.
On architectures that do not support the TIF_NOTIFY_IPI flag,
call_function_single_prep_ipi() will fall back to setting the
TIF_NEED_RESCHED bit to pull the TIF_POLLING CPU out of idle.
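The idle-side check, introduced by the earlier patches in this series, is
then expected to look roughly like the following (an illustrative sketch;
the exact helpers are defined by those patches):

    static __always_inline bool need_resched_or_ipi(void)
    {
            return unlikely(tif_need_resched() ||
                            test_thread_flag(TIF_NOTIFY_IPI));
    }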
Since the pending IPI handlers are processed before the call to
schedule_idle() in do_idle(), schedule_idle() will only be called if the
IPI handlers have woken / migrated a new task onto the idle CPU and have
set the TIF_NEED_RESCHED bit to indicate the same. This avoids running
into the long IRQ-disabled section in schedule_idle() unnecessarily, and
any need_resched() check within a call function will accurately indicate
whether a task is waiting for CPU time on the CPU handling the IPI.
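In code, the idle-exit tail after this change becomes roughly the following
(a simplified view of the kernel/sched/idle.c hunk below):

    preempt_fold_need_resched();        /* fold only if NEED_RESCHED is set */
    tick_nohz_idle_exit();
    __current_clr_polling();
    ...
    current_clr_notify_ipi();           /* clear TIF_NOTIFY_IPI before flushing */
    flush_smp_call_function_queue();
    if (need_resched())
            schedule_idle();            /* skipped for an IPI-only wakeup */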
Following is the crude visualization of how the situation changes with
the newly introduced TIF_NOTIFY_IPI flag:
--
CPU0                                                    CPU1
====                                                    ====
                                                        do_idle() {
                                                            __current_set_polling();
                                                            ...
                                                            monitor(addr);
                                                            if (!need_resched_or_ipi()) {
                                                                mwait() {
                                                                /* Waiting */
smp_call_function_single(CPU1, func, wait = 1) {                ...
    ...                                                         ...
    set_nr_if_polling(CPU1) {                                   ...
        /* Realizes CPU1 is polling */                          ...
        try_cmpxchg(addr,                                       ...
                    &val,                                       ...
                    val | _TIF_NOTIFY_IPI);                     ...
    } /* Does not send an IPI */                                ...
    ...                                                     } /* mwait exit due to write at addr */
    csd_lock_wait() {                                       }
    /* Waiting */                                           preempt_fold_need_resched(); /* fold if NEED_RESCHED */
    ...                                                     __current_clr_polling();
    ...                                                     flush_smp_call_function_queue() {
    ...                                                         func(); /* Will set NEED_RESCHED if sched_ttwu_pending() */
} /* End of wait */                                     }
}                                                       if (need_resched()) {
                                                            schedule_idle();
smp_call_function_single(CPU1, func, wait = 1) {        }
    ...                                                 ... /* IRQs remain enabled */
    arch_send_call_function_single_ipi(CPU1); -----------> /* Processes the IPI */
--
Results
=======
With the TIF_NOTIFY_IPI flag, the time taken to complete a fixed set of IPIs
using ipistorm improves drastically. Following are the numbers from the
same dual socket 3rd Generation EPYC system (2 x 64C/128T) (boost on,
C2 disabled) running ipistorm between CPU8 and CPU16:
cmdline: insmod ipistorm.ko numipi=100000 single=1 offset=8 cpulist=8 wait=1
==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel:                                 time [pct imp]
tip:sched/core                          1.00 [0.00]
tip:sched/core + revert                 0.81 [19.36]
tip:sched/core + TIF_NOTIFY_IPI         0.20 [80.99]
The same experiment was repeated on a dual socket ARM server (2 x 64C),
which also saw a significant improvement in ipistorm performance:
==================================================================
Test : ipistorm (modified)
Units : Normalized runtime
Interpretation: Lower is better
Statistic : AMean
==================================================================
kernel:                                 time [pct imp]
tip:sched/core                          1.00 [0.00]
tip:sched/core + TIF_NOTIFY_IPI         0.41 [59.29]
netperf and tbench results with the patch match the results on tip on
the dual socket 3rd Generation AMD EPYC system (2 x 64C/128T). Additionally,
hackbench, stream, and schbench were also tested, with results from the
patched kernel matching those of tip.
[ prateek: Split the changes into a separate patch, added the
  TIF_NEED_RESCHED optimization in notify_ipi_if_polling(), the
  _TIF_WAKE_FLAG macro, and the commit log ]
Link: https://github.com/antonblanchard/ipistorm [1]
Link: https://lore.kernel.org/lkml/20240119084548.2788-1-kprateek.nayak@amd.com/ [2]
Link: https://lore.kernel.org/lkml/b4f5ac150685456cf45a342e3bb1f28cdd557a53.camel@linux.intel.com/ [3]
Link: https://lore.kernel.org/lkml/20240123211756.GA221793@maniforge/ [4]
Link: https://lore.kernel.org/lkml/CAKfTPtC446Lo9CATPp7PExdkLhHQFoBuY-JMGC7agOHY4hs-Pw@mail.gmail.com/ [5]
Cc: Richard Henderson <richard.henderson at linaro.org>
Cc: Ivan Kokshaysky <ink at jurassic.park.msu.ru>
Cc: Matt Turner <mattst88 at gmail.com>
Cc: Russell King <linux at armlinux.org.uk>
Cc: Guo Ren <guoren at kernel.org>
Cc: Michal Simek <monstr at monstr.eu>
Cc: Dinh Nguyen <dinguyen at kernel.org>
Cc: Jonas Bonn <jonas at southpole.se>
Cc: Stefan Kristiansson <stefan.kristiansson at saunalahti.fi>
Cc: Stafford Horne <shorne at gmail.com>
Cc: "James E.J. Bottomley" <James.Bottomley at HansenPartnership.com>
Cc: Helge Deller <deller at gmx.de>
Cc: Michael Ellerman <mpe at ellerman.id.au>
Cc: Nicholas Piggin <npiggin at gmail.com>
Cc: Christophe Leroy <christophe.leroy at csgroup.eu>
Cc: "Aneesh Kumar K.V" <aneesh.kumar at kernel.org>
Cc: "Naveen N. Rao" <naveen.n.rao at linux.ibm.com>
Cc: Yoshinori Sato <ysato at users.sourceforge.jp>
Cc: Rich Felker <dalias at libc.org>
Cc: John Paul Adrian Glaubitz <glaubitz at physik.fu-berlin.de>
Cc: "David S. Miller" <davem at davemloft.net>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: Borislav Petkov <bp at alien8.de>
Cc: Dave Hansen <dave.hansen at linux.intel.com>
Cc: "H. Peter Anvin" <hpa at zytor.com>
Cc: "Rafael J. Wysocki" <rafael at kernel.org>
Cc: Daniel Lezcano <daniel.lezcano at linaro.org>
Cc: Peter Zijlstra <peterz at infradead.org>
Cc: Juri Lelli <juri.lelli at redhat.com>
Cc: Vincent Guittot <vincent.guittot at linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann at arm.com>
Cc: Steven Rostedt <rostedt at goodmis.org>
Cc: Ben Segall <bsegall at google.com>
Cc: Mel Gorman <mgorman at suse.de>
Cc: Daniel Bristot de Oliveira <bristot at redhat.com>
Cc: Valentin Schneider <vschneid at redhat.com>
Cc: Al Viro <viro at zeniv.linux.org.uk>
Cc: Linus Walleij <linus.walleij at linaro.org>
Cc: Ard Biesheuvel <ardb at kernel.org>
Cc: Andrew Donnellan <ajd at linux.ibm.com>
Cc: Nicholas Miehlbradt <nicholas at linux.ibm.com>
Cc: Andrew Morton <akpm at linux-foundation.org>
Cc: Arnd Bergmann <arnd at arndb.de>
Cc: Josh Poimboeuf <jpoimboe at kernel.org>
Cc: "Kirill A. Shutemov" <kirill.shutemov at linux.intel.com>
Cc: Rick Edgecombe <rick.p.edgecombe at intel.com>
Cc: Tony Battersby <tonyb at cybernetics.com>
Cc: Brian Gerst <brgerst at gmail.com>
Cc: Tim Chen <tim.c.chen at linux.intel.com>
Cc: David Vernet <void at manifault.com>
Cc: x86 at kernel.org
Cc: linux-kernel at vger.kernel.org
Cc: linux-alpha at vger.kernel.org
Cc: linux-arm-kernel at lists.infradead.org
Cc: linux-csky at vger.kernel.org
Cc: linux-openrisc at vger.kernel.org
Cc: linux-parisc at vger.kernel.org
Cc: linuxppc-dev at lists.ozlabs.org
Cc: linux-sh at vger.kernel.org
Cc: sparclinux at vger.kernel.org
Cc: linux-pm at vger.kernel.org
Signed-off-by: Gautham R. Shenoy <gautham.shenoy at amd.com>
Co-developed-by: K Prateek Nayak <kprateek.nayak at amd.com>
Signed-off-by: K Prateek Nayak <kprateek.nayak at amd.com>
---
include/linux/sched/idle.h | 8 ++++----
kernel/sched/core.c | 41 ++++++++++++++++++++++++++++++--------
kernel/sched/idle.c | 16 +++++++++++----
3 files changed, 49 insertions(+), 16 deletions(-)
diff --git a/include/linux/sched/idle.h b/include/linux/sched/idle.h
index d739ab810e00..c22312087c30 100644
--- a/include/linux/sched/idle.h
+++ b/include/linux/sched/idle.h
@@ -58,8 +58,8 @@ static __always_inline bool __must_check current_set_polling_and_test(void)
__current_set_polling();
/*
- * Polling state must be visible before we test NEED_RESCHED,
- * paired by resched_curr()
+ * Polling state must be visible before we test NEED_RESCHED or
+ * NOTIFY_IPI paired by resched_curr() or notify_ipi_if_polling()
*/
smp_mb__after_atomic();
@@ -71,8 +71,8 @@ static __always_inline bool __must_check current_clr_polling_and_test(void)
__current_clr_polling();
/*
- * Polling state must be visible before we test NEED_RESCHED,
- * paired by resched_curr()
+ * Polling state must be visible before we test NEED_RESCHED or
+ * NOTIFY_IPI paired by resched_curr() or notify_ipi_if_polling()
*/
smp_mb__after_atomic();
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index db4be4921e7f..6fb6e5b75724 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -909,12 +909,30 @@ static inline bool set_nr_and_not_polling(struct task_struct *p)
}
/*
- * Atomically set TIF_NEED_RESCHED if TIF_POLLING_NRFLAG is set.
+ * Certain architectures that support TIF_POLLING_NRFLAG may not support
+ * TIF_NOTIFY_IPI to notify an idle CPU in TIF_POLLING mode of a pending
+ * IPI. On such architectures, set TIF_NEED_RESCHED instead to wake the
+ * idle CPU and process the pending IPI.
+ */
+#ifdef _TIF_NOTIFY_IPI
+#define _TIF_WAKE_FLAG _TIF_NOTIFY_IPI
+#else
+#define _TIF_WAKE_FLAG _TIF_NEED_RESCHED
+#endif
+
+/*
+ * Atomically set TIF_WAKE_FLAG when TIF_POLLING_NRFLAG is set.
+ *
+ * On architectures that define TIF_NOTIFY_IPI, the same is set in the
+ * idle task's thread_info to pull the CPU out of idle and process
+ * the pending interrupt. On architectures that don't support
+ * TIF_NOTIFY_IPI, TIF_NEED_RESCHED is set instead to notify the
+ * pending IPI.
*
- * If this returns true, then the idle task promises to call
- * sched_ttwu_pending() and reschedule soon.
+ * If this returns true, then the idle task promises to process the
+ * call function soon.
*/
-static bool set_nr_if_polling(struct task_struct *p)
+static bool notify_ipi_if_polling(struct task_struct *p)
{
struct thread_info *ti = task_thread_info(p);
typeof(ti->flags) val = READ_ONCE(ti->flags);
@@ -922,9 +940,16 @@ static bool set_nr_if_polling(struct task_struct *p)
do {
if (!(val & _TIF_POLLING_NRFLAG))
return false;
- if (val & _TIF_NEED_RESCHED)
+ /*
+ * If TIF_NEED_RESCHED flag is set in addition to
+ * TIF_POLLING_NRFLAG, the CPU will soon fall out of
+ * idle. Since flush_smp_call_function_queue() is called
+ * soon after the idle exit, setting TIF_WAKE_FLAG is
+ * not necessary.
+ */
+ if (val & (_TIF_NEED_RESCHED | _TIF_WAKE_FLAG))
return true;
- } while (!try_cmpxchg(&ti->flags, &val, val | _TIF_NEED_RESCHED));
+ } while (!try_cmpxchg(&ti->flags, &val, val | _TIF_WAKE_FLAG));
return true;
}
@@ -937,7 +962,7 @@ static inline bool set_nr_and_not_polling(struct task_struct *p)
}
#ifdef CONFIG_SMP
-static inline bool set_nr_if_polling(struct task_struct *p)
+static inline bool notify_ipi_if_polling(struct task_struct *p)
{
return false;
}
@@ -3918,7 +3943,7 @@ void sched_ttwu_pending(void *arg)
*/
bool call_function_single_prep_ipi(int cpu)
{
- if (set_nr_if_polling(cpu_rq(cpu)->idle)) {
+ if (notify_ipi_if_polling(cpu_rq(cpu)->idle)) {
trace_sched_wake_idle_without_ipi(cpu);
return false;
}
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index fcc734f45a2a..b91dc1f62a56 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -315,13 +315,13 @@ static void do_idle(void)
}
/*
- * Since we fell out of the loop above, we know TIF_NEED_RESCHED must
- * be set, propagate it into PREEMPT_NEED_RESCHED.
+ * Since we fell out of the loop above, TIF_NEED_RESCHED may be set.
+ * Propagate it into PREEMPT_NEED_RESCHED.
*
* This is required because for polling idle loops we will not have had
* an IPI to fold the state for us.
*/
- preempt_set_need_resched();
+ preempt_fold_need_resched();
tick_nohz_idle_exit();
__current_clr_polling();
@@ -338,7 +338,15 @@ static void do_idle(void)
*/
current_clr_notify_ipi();
flush_smp_call_function_queue();
- schedule_idle();
+
+ /*
+ * When NEED_RESCHED is set, the idle thread promises to call
+ * schedule_idle(). schedule_idle() can be skipped when an idle CPU
+ * was woken up to process an IPI that does not queue a task on the
+ * idle CPU, facilitating faster idle re-entry.
+ */
+ if (need_resched())
+ schedule_idle();
if (unlikely(klp_patch_pending(current)))
klp_update_patch_state(current);
--
2.34.1