[PATCH v2 02/14] sched: Define a need_resched_or_ipi() helper and use it treewide
K Prateek Nayak
kprateek.nayak at amd.com
Fri Jun 14 04:16:01 AEST 2024
From: "Gautham R. Shenoy" <gautham.shenoy at amd.com>
Currently TIF_NEED_RESCHED is overloaded to wake up an idle CPU in
TIF_POLLING mode so it can service an IPI, even when no new task is
being woken up on that CPU.
In preparation for a proper fix, introduce a new helper,
"need_resched_or_ipi()", which returns true if either the
TIF_NEED_RESCHED flag or the TIF_NOTIFY_IPI flag is set. Use this
helper in place of need_resched() in idle loops where
TIF_POLLING_NRFLAG is set.
To preserve bisectability and avoid unbreakable idle loops, all
need_resched() checks within TIF_POLLING_NRFLAG sections have been
replaced tree-wide with need_resched_or_ipi().
[ prateek: Replaced some missed occurrences of need_resched()
           within TIF_POLLING sections with need_resched_or_ipi() ]
Cc: Richard Henderson <richard.henderson at linaro.org>
Cc: Ivan Kokshaysky <ink at jurassic.park.msu.ru>
Cc: Matt Turner <mattst88 at gmail.com>
Cc: Russell King <linux at armlinux.org.uk>
Cc: Guo Ren <guoren at kernel.org>
Cc: Michal Simek <monstr at monstr.eu>
Cc: Dinh Nguyen <dinguyen at kernel.org>
Cc: Jonas Bonn <jonas at southpole.se>
Cc: Stefan Kristiansson <stefan.kristiansson at saunalahti.fi>
Cc: Stafford Horne <shorne at gmail.com>
Cc: "James E.J. Bottomley" <James.Bottomley at HansenPartnership.com>
Cc: Helge Deller <deller at gmx.de>
Cc: Michael Ellerman <mpe at ellerman.id.au>
Cc: Nicholas Piggin <npiggin at gmail.com>
Cc: Christophe Leroy <christophe.leroy at csgroup.eu>
Cc: "Naveen N. Rao" <naveen.n.rao at linux.ibm.com>
Cc: Yoshinori Sato <ysato at users.sourceforge.jp>
Cc: Rich Felker <dalias at libc.org>
Cc: John Paul Adrian Glaubitz <glaubitz at physik.fu-berlin.de>
Cc: "David S. Miller" <davem at davemloft.net>
Cc: Andreas Larsson <andreas at gaisler.com>
Cc: Thomas Gleixner <tglx at linutronix.de>
Cc: Ingo Molnar <mingo at redhat.com>
Cc: Borislav Petkov <bp at alien8.de>
Cc: Dave Hansen <dave.hansen at linux.intel.com>
Cc: "H. Peter Anvin" <hpa at zytor.com>
Cc: "Rafael J. Wysocki" <rafael at kernel.org>
Cc: Daniel Lezcano <daniel.lezcano at linaro.org>
Cc: Peter Zijlstra <peterz at infradead.org>
Cc: Juri Lelli <juri.lelli at redhat.com>
Cc: Vincent Guittot <vincent.guittot at linaro.org>
Cc: Dietmar Eggemann <dietmar.eggemann at arm.com>
Cc: Steven Rostedt <rostedt at goodmis.org>
Cc: Ben Segall <bsegall at google.com>
Cc: Mel Gorman <mgorman at suse.de>
Cc: Daniel Bristot de Oliveira <bristot at redhat.com>
Cc: Valentin Schneider <vschneid at redhat.com>
Cc: Andrew Donnellan <ajd at linux.ibm.com>
Cc: Benjamin Gray <bgray at linux.ibm.com>
Cc: Frederic Weisbecker <frederic at kernel.org>
Cc: Xin Li <xin3.li at intel.com>
Cc: Kees Cook <keescook at chromium.org>
Cc: Rick Edgecombe <rick.p.edgecombe at intel.com>
Cc: Tony Battersby <tonyb at cybernetics.com>
Cc: Bjorn Helgaas <bhelgaas at google.com>
Cc: Brian Gerst <brgerst at gmail.com>
Cc: Leonardo Bras <leobras at redhat.com>
Cc: Imran Khan <imran.f.khan at oracle.com>
Cc: "Paul E. McKenney" <paulmck at kernel.org>
Cc: Rik van Riel <riel at surriel.com>
Cc: Tim Chen <tim.c.chen at linux.intel.com>
Cc: David Vernet <void at manifault.com>
Cc: Julia Lawall <julia.lawall at inria.fr>
Cc: linux-alpha at vger.kernel.org
Cc: linux-kernel at vger.kernel.org
Cc: linux-arm-kernel at lists.infradead.org
Cc: linux-csky at vger.kernel.org
Cc: linux-openrisc at vger.kernel.org
Cc: linux-parisc at vger.kernel.org
Cc: linuxppc-dev at lists.ozlabs.org
Cc: linux-sh at vger.kernel.org
Cc: sparclinux at vger.kernel.org
Cc: linux-pm at vger.kernel.org
Cc: x86 at kernel.org
Signed-off-by: Gautham R. Shenoy <gautham.shenoy at amd.com>
Co-developed-by: K Prateek Nayak <kprateek.nayak at amd.com>
Signed-off-by: K Prateek Nayak <kprateek.nayak at amd.com>
---
v1..v2:
o Fixed a conflict with commit edc8fc01f608 ("x86: Fix
CPUIDLE_FLAG_IRQ_ENABLE leaking timer reprogram") that touched
mwait_idle_with_hints() in arch/x86/include/asm/mwait.h
---
arch/x86/include/asm/mwait.h | 2 +-
arch/x86/kernel/process.c | 2 +-
drivers/cpuidle/cpuidle-powernv.c | 2 +-
drivers/cpuidle/cpuidle-pseries.c | 2 +-
drivers/cpuidle/poll_state.c | 2 +-
include/linux/sched.h | 5 +++++
include/linux/sched/idle.h | 4 ++--
kernel/sched/idle.c | 7 ++++---
8 files changed, 16 insertions(+), 10 deletions(-)
diff --git a/arch/x86/include/asm/mwait.h b/arch/x86/include/asm/mwait.h
index 920426d691ce..3fa6f0bbd74f 100644
--- a/arch/x86/include/asm/mwait.h
+++ b/arch/x86/include/asm/mwait.h
@@ -125,7 +125,7 @@ static __always_inline void mwait_idle_with_hints(unsigned long eax, unsigned lo
__monitor((void *)&current_thread_info()->flags, 0, 0);
- if (!need_resched()) {
+ if (!need_resched_or_ipi()) {
if (ecx & 1) {
__mwait(eax, ecx);
} else {
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index b8441147eb5e..dd73cd6f735c 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -901,7 +901,7 @@ static __cpuidle void mwait_idle(void)
}
__monitor((void *)&current_thread_info()->flags, 0, 0);
- if (!need_resched()) {
+ if (!need_resched_or_ipi()) {
__sti_mwait(0, 0);
raw_local_irq_disable();
}
diff --git a/drivers/cpuidle/cpuidle-powernv.c b/drivers/cpuidle/cpuidle-powernv.c
index 9ebedd972df0..77c3bb371f56 100644
--- a/drivers/cpuidle/cpuidle-powernv.c
+++ b/drivers/cpuidle/cpuidle-powernv.c
@@ -79,7 +79,7 @@ static int snooze_loop(struct cpuidle_device *dev,
dev->poll_time_limit = false;
ppc64_runlatch_off();
HMT_very_low();
- while (!need_resched()) {
+ while (!need_resched_or_ipi()) {
if (likely(snooze_timeout_en) && get_tb() > snooze_exit_time) {
/*
* Task has not woken up but we are exiting the polling
diff --git a/drivers/cpuidle/cpuidle-pseries.c b/drivers/cpuidle/cpuidle-pseries.c
index 14db9b7d985d..4f2b490f8b73 100644
--- a/drivers/cpuidle/cpuidle-pseries.c
+++ b/drivers/cpuidle/cpuidle-pseries.c
@@ -46,7 +46,7 @@ int snooze_loop(struct cpuidle_device *dev, struct cpuidle_driver *drv,
snooze_exit_time = get_tb() + snooze_timeout;
dev->poll_time_limit = false;
- while (!need_resched()) {
+ while (!need_resched_or_ipi()) {
HMT_low();
HMT_very_low();
if (likely(snooze_timeout_en) && get_tb() > snooze_exit_time) {
diff --git a/drivers/cpuidle/poll_state.c b/drivers/cpuidle/poll_state.c
index 9b6d90a72601..225f37897e45 100644
--- a/drivers/cpuidle/poll_state.c
+++ b/drivers/cpuidle/poll_state.c
@@ -26,7 +26,7 @@ static int __cpuidle poll_idle(struct cpuidle_device *dev,
limit = cpuidle_poll_time(drv, dev);
- while (!need_resched()) {
+ while (!need_resched_or_ipi()) {
cpu_relax();
if (loop_count++ < POLL_IDLE_RELAX_COUNT)
continue;
diff --git a/include/linux/sched.h b/include/linux/sched.h
index 90691d99027e..e52cdd1298bf 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -2069,6 +2069,11 @@ static __always_inline bool need_resched(void)
return unlikely(tif_need_resched());
}
+static __always_inline bool need_resched_or_ipi(void)
+{
+ return unlikely(tif_need_resched() || tif_notify_ipi());
+}
+
/*
* Wrappers for p->thread_info->cpu access. No-op on UP.
*/
diff --git a/include/linux/sched/idle.h b/include/linux/sched/idle.h
index e670ac282333..497518b84e8d 100644
--- a/include/linux/sched/idle.h
+++ b/include/linux/sched/idle.h
@@ -63,7 +63,7 @@ static __always_inline bool __must_check current_set_polling_and_test(void)
*/
smp_mb__after_atomic();
- return unlikely(tif_need_resched());
+ return unlikely(need_resched_or_ipi());
}
static __always_inline bool __must_check current_clr_polling_and_test(void)
@@ -76,7 +76,7 @@ static __always_inline bool __must_check current_clr_polling_and_test(void)
*/
smp_mb__after_atomic();
- return unlikely(tif_need_resched());
+ return unlikely(need_resched_or_ipi());
}
#else
diff --git a/kernel/sched/idle.c b/kernel/sched/idle.c
index 6e78d071beb5..7de94df5d477 100644
--- a/kernel/sched/idle.c
+++ b/kernel/sched/idle.c
@@ -57,7 +57,7 @@ static noinline int __cpuidle cpu_idle_poll(void)
ct_cpuidle_enter();
raw_local_irq_enable();
- while (!tif_need_resched() &&
+ while (!need_resched_or_ipi() &&
(cpu_idle_force_poll || tick_check_broadcast_expired()))
cpu_relax();
raw_local_irq_disable();
@@ -174,7 +174,7 @@ static void cpuidle_idle_call(void)
* Check if the idle task must be rescheduled. If it is the
* case, exit the function after re-enabling the local IRQ.
*/
- if (need_resched()) {
+ if (need_resched_or_ipi()) {
local_irq_enable();
return;
}
@@ -270,7 +270,7 @@ static void do_idle(void)
__current_set_polling();
tick_nohz_idle_enter();
- while (!need_resched()) {
+ while (!need_resched_or_ipi()) {
rmb();
/*
@@ -350,6 +350,7 @@ static void do_idle(void)
* RCU relies on this call to be done outside of an RCU read-side
* critical section.
*/
+ current_clr_notify_ipi();
flush_smp_call_function_queue();
schedule_idle();
--
2.34.1