[PATCH v5 5/6] sched/fair: Carve out logic to mark a group for asymmetric packing

Vincent Guittot vincent.guittot at linaro.org
Sat Sep 18 01:27:15 AEST 2021


On Sat, 11 Sept 2021 at 03:19, Ricardo Neri
<ricardo.neri-calderon at linux.intel.com> wrote:
>
> Create a separate function, sched_asym(). A subsequent changeset will
> introduce logic to deal with SMT in conjunction with asymmetric
> packing. Such logic will need the statistics of the scheduling
> group provided as argument. Update them before calling sched_asym().
>
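For reference, a rough sketch of how an SMT-aware check could consume the
sds/sgs arguments that sched_asym() now receives. This is only an
illustration of why the statistics must be updated before the call, not the
actual follow-up patch; the helper name asym_smt_can_pull_tasks() is a
placeholder here:

	static inline bool
	sched_asym(struct lb_env *env, struct sd_lb_stats *sds,
		   struct sg_lb_stats *sgs, struct sched_group *group)
	{
		/*
		 * Hypothetical: if either the local group or the candidate
		 * group has SMT siblings, decide based on the group
		 * statistics (e.g. sum_h_nr_running, group_weight) rather
		 * than CPU priority alone.
		 */
		if ((sds->local->flags & SD_SHARE_CPUCAPACITY) ||
		    (group->flags & SD_SHARE_CPUCAPACITY))
			return asym_smt_can_pull_tasks(env->dst_cpu, sds,
						       sgs, group);

		/* Non-SMT case: plain priority comparison, as in this patch. */
		return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
	}

Whatever shape the follow-up takes, moving the group_capacity and
group_weight updates ahead of the call (as the hunk below does) ensures any
such checks can rely on complete statistics.
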
> Cc: Aubrey Li <aubrey.li at intel.com>
> Cc: Ben Segall <bsegall at google.com>
> Cc: Daniel Bristot de Oliveira <bristot at redhat.com>
> Cc: Dietmar Eggemann <dietmar.eggemann at arm.com>
> Cc: Mel Gorman <mgorman at suse.de>
> Cc: Quentin Perret <qperret at google.com>
> Cc: Rafael J. Wysocki <rafael.j.wysocki at intel.com>
> Cc: Srinivas Pandruvada <srinivas.pandruvada at linux.intel.com>
> Cc: Steven Rostedt <rostedt at goodmis.org>
> Cc: Tim Chen <tim.c.chen at linux.intel.com>
> Reviewed-by: Joel Fernandes (Google) <joel at joelfernandes.org>
> Reviewed-by: Len Brown <len.brown at intel.com>
> Co-developed-by: Peter Zijlstra (Intel) <peterz at infradead.org>
> Signed-off-by: Peter Zijlstra (Intel) <peterz at infradead.org>
> Signed-off-by: Ricardo Neri <ricardo.neri-calderon at linux.intel.com>

Reviewed-by: Vincent Guittot <vincent.guittot at linaro.org>

> ---
> Changes since v4:
>   * None
>
> Changes since v3:
>   * Remove a redundant check for the local group in sched_asym().
>     (Dietmar)
>   * Reworded commit message for clarity. (Len)
>
> Changes since v2:
>   * Introduced this patch.
>
> Changes since v1:
>   * N/A
> ---
>  kernel/sched/fair.c | 20 +++++++++++++-------
>  1 file changed, 13 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index c5851260b4d8..26db017c14a3 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8597,6 +8597,13 @@ group_type group_classify(unsigned int imbalance_pct,
>         return group_has_spare;
>  }
>
> +static inline bool
> +sched_asym(struct lb_env *env, struct sd_lb_stats *sds,  struct sg_lb_stats *sgs,
> +          struct sched_group *group)
> +{
> +       return sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu);
> +}
> +
>  /**
>   * update_sg_lb_stats - Update sched_group's statistics for load balancing.
>   * @env: The load balancing environment.
> @@ -8657,18 +8664,17 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>                 }
>         }
>
> +       sgs->group_capacity = group->sgc->capacity;
> +
> +       sgs->group_weight = group->group_weight;
> +
>         /* Check if dst CPU is idle and preferred to this group */
>         if (!local_group && env->sd->flags & SD_ASYM_PACKING &&
> -           env->idle != CPU_NOT_IDLE &&
> -           sgs->sum_h_nr_running &&
> -           sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
> +           env->idle != CPU_NOT_IDLE && sgs->sum_h_nr_running &&
> +           sched_asym(env, sds, sgs, group)) {
>                 sgs->group_asym_packing = 1;
>         }
>
> -       sgs->group_capacity = group->sgc->capacity;
> -
> -       sgs->group_weight = group->group_weight;
> -
>         sgs->group_type = group_classify(env->sd->imbalance_pct, group, sgs);
>
>         /* Computing avg_load makes sense only when group is overloaded */
> --
> 2.17.1
>

