[PATCH 15/15] sched/cputime: Handle dyntick-idle steal time correctly

Frederic Weisbecker frederic at kernel.org
Wed Mar 25 01:53:01 AEDT 2026


On Tue, Mar 03, 2026 at 04:47:45PM +0530, Shrikanth Hegde wrote:
> 
> 
> On 2/6/26 7:52 PM, Frederic Weisbecker wrote:
> > The dyntick-idle steal time is currently accounted when the tick
> > restarts but the stolen idle time is not substracted from the idle time
> > that was already accounted. This is to avoid observing the idle time
> > going backward as the dyntick-idle cputime accessors can't reliably know
> > in advance the stolen idle time.
> > 
> > In order to maintain a forward progressing idle cputime while
> > substracting idle steal time from it, keep track of the previously
> > accounted idle stolen time and substract it from _later_ idle cputime
> > accounting.
> > 
> 
> s/substract/subtract ?

Right.

> 
> > Signed-off-by: Frederic Weisbecker <frederic at kernel.org>
> > ---
> >   include/linux/kernel_stat.h |  1 +
> >   kernel/sched/cputime.c      | 21 +++++++++++++++------
> >   2 files changed, 16 insertions(+), 6 deletions(-)
> > 
> > diff --git a/include/linux/kernel_stat.h b/include/linux/kernel_stat.h
> > index 512104b0ff49..24a54a6151ba 100644
> > --- a/include/linux/kernel_stat.h
> > +++ b/include/linux/kernel_stat.h
> > @@ -39,6 +39,7 @@ struct kernel_cpustat {
> >   	bool		idle_elapse;
> >   	seqcount_t	idle_sleeptime_seq;
> >   	u64		idle_entrytime;
> > +	u64		idle_stealtime;
> >   #endif
> >   	u64		cpustat[NR_STATS];
> >   };
> > diff --git a/kernel/sched/cputime.c b/kernel/sched/cputime.c
> > index 92fa2f037b6e..7e79288eb327 100644
> > --- a/kernel/sched/cputime.c
> > +++ b/kernel/sched/cputime.c
> > @@ -424,19 +424,25 @@ static inline void irqtime_account_process_tick(struct task_struct *p, int user_
> >   static void kcpustat_idle_stop(struct kernel_cpustat *kc, u64 now)
> >   {
> >   	u64 *cpustat = kc->cpustat;
> > -	u64 delta;
> > +	u64 delta, steal, steal_delta;
> >   	if (!kc->idle_elapse)
> >   		return;
> >   	delta = now - kc->idle_entrytime;
> > +	steal = steal_account_process_time(delta);
> >   	write_seqcount_begin(&kc->idle_sleeptime_seq);
> > +	steal_delta = min_t(u64, kc->idle_stealtime, delta);
> > +	delta -= steal_delta;
> 
> I didn't get this logic. Why do we need idle_stealtime?
> 
> Let's say 10ms was steal time and 50ms was delta. But idle_stealtime is the
> sum of past accumulated steal time. We only need to subtract steal time there, no?
> 
> Shouldn't this be delta -= steal ?

That would risk observing backward idle accounting:

Time        CPU 0                                  CPU 1
----        -----                                  -----
0 sec       kcpustat_idle_start()
            <#VMEXIT>
            ...
1 sec       </#VMEXIT>
            arch_cpu_idle()
2 sec                                              kcpustat_field(CPUTIME_IDLE, 0)
                                                   // returns 2
            kcpustat_idle_stop()
               cpustat[CPUTIME_IDLE] = 2 - 1
                                                   kcpustat_field(CPUTIME_IDLE, 0)
                                                   // returns 1

We could instead read the paravirt clock remotely, but then
steal_account_process_time() would always need to hold ->idle_sleeptime_seq,
though it might happen to work without it given the ordering.

Anyway, to avoid any surprises, I accumulate the steal time of an idle cycle
to be subtracted on the next idle cycle.

Thanks.

-- 
Frederic Weisbecker
SUSE Labs

