[PATCH v4 5/6] sched/fair: Carve out logic to mark a group for asymmetric packing
Ricardo Neri
ricardo.neri-calderon at linux.intel.com
Wed Aug 11 00:41:44 AEST 2021
Create a separate function, sched_asym(). A subsequent changeset will
introduce logic to deal with SMT in conjunction with asymmetric
packing. Such logic will need the statistics of the scheduling
group provided as an argument. Update those statistics before calling
sched_asym().
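
For context, a rough sketch of the kind of SMT-aware check that sched_asym()
is being prepared for is shown below. The helper asym_smt_can_pull_tasks()
and the group->flags field are illustrative assumptions here, not part of
this patch; the actual logic arrives in the subsequent changeset.

static inline bool
sched_asym(struct lb_env *env, struct sd_lb_stats *sds, struct sg_lb_stats *sgs,
	   struct sched_group *group)
{
	/*
	 * Illustrative sketch only: when either the local group or the
	 * candidate group contains SMT siblings, the decision could
	 * consult the group statistics (sds, sgs) via a helper such as
	 * the hypothetical asym_smt_can_pull_tasks().
	 */
	if ((sds->local->flags & SD_SHARE_CPUCAPACITY) ||
	    (group->flags & SD_SHARE_CPUCAPACITY))
		return asym_smt_can_pull_tasks(env->dst_cpu, sds, sgs, group);

	/* Otherwise fall back to the plain CPU priority comparison. */
	return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
}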
Cc: Aubrey Li <aubrey.li at intel.com>
Cc: Ben Segall <bsegall at google.com>
Cc: Daniel Bristot de Oliveira <bristot at redhat.com>
Cc: Dietmar Eggemann <dietmar.eggemann at arm.com>
Cc: Mel Gorman <mgorman at suse.de>
Cc: Quentin Perret <qperret at google.com>
Cc: Rafael J. Wysocki <rafael.j.wysocki at intel.com>
Cc: Srinivas Pandruvada <srinivas.pandruvada at linux.intel.com>
Cc: Steven Rostedt <rostedt at goodmis.org>
Cc: Tim Chen <tim.c.chen at linux.intel.com>
Reviewed-by: Joel Fernandes (Google) <joel at joelfernandes.org>
Reviewed-by: Len Brown <len.brown at intel.com>
Co-developed-by: Peter Zijlstra (Intel) <peterz at infradead.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz at infradead.org>
Signed-off-by: Ricardo Neri <ricardo.neri-calderon at linux.intel.com>
---
Changes since v3:
* Remove a redundant check for the local group in sched_asym().
(Dietmar)
* Reworded commit message for clarity. (Len)
Changes since v2:
* Introduced this patch.
Changes since v1:
* N/A
---
kernel/sched/fair.c | 20 +++++++++++++-------
1 file changed, 13 insertions(+), 7 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ae3d2bc59d8d..dd411cefb63f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8531,6 +8531,13 @@ group_type group_classify(unsigned int imbalance_pct,
return group_has_spare;
}
+static inline bool
+sched_asym(struct lb_env *env, struct sd_lb_stats *sds, struct sg_lb_stats *sgs,
+ struct sched_group *group)
+{
+ return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
+}
+
/**
* update_sg_lb_stats - Update sched_group's statistics for load balancing.
* @env: The load balancing environment.
@@ -8591,18 +8598,17 @@ static inline void update_sg_lb_stats(struct lb_env *env,
}
}
+ sgs->group_capacity = group->sgc->capacity;
+
+ sgs->group_weight = group->group_weight;
+
/* Check if dst CPU is idle and preferred to this group */
if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
- env->idle != CPU_NOT_IDLE &&
- sgs->sum_h_nr_running &&
- sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
+ env->idle != CPU_NOT_IDLE && sgs->sum_h_nr_running &&
+ sched_asym(env, sds, sgs, group)) {
sgs->group_asym_packing = 1;
}
- sgs->group_capacity = group->sgc->capacity;
-
- sgs->group_weight = group->group_weight;
-
sgs->group_type = group_classify(env->sd->imbalance_pct, group, sgs);
/* Computing avg_load makes sense only when group is overloaded */
--
2.17.1