[RFC PATCH v3 07/10] sched/core: Push current task from paravirt CPU

Shrikanth Hegde sshegde at linux.ibm.com
Mon Nov 10 15:54:19 AEDT 2025


>> +
>> +static DEFINE_PER_CPU(struct cpu_stop_work, pv_push_task_work);
>> +
>> +static int paravirt_push_cpu_stop(void *arg)
>> +{
>> +	struct task_struct *p = arg;
> 
> Can we move all pushable tasks at once instead of just the rq->curr at
> the time of the tick? It can also avoid keeping the reference to "p"
> and only selectively pushing it. Thoughts?
> 
>> +	struct rq *rq = this_rq();
>> +	struct rq_flags rf;
>> +	int cpu;
>> +
>> +	raw_spin_lock_irq(&p->pi_lock);
>> +	rq_lock(rq, &rf);
>> +	rq->push_task_work_done = 0;
>> +
>> +	update_rq_clock(rq);
>> +
>> +	if (task_rq(p) == rq && task_on_rq_queued(p)) {
>> +		cpu = select_fallback_rq(rq->cpu, p);
>> +		rq = __migrate_task(rq, &rf, p, cpu);
>> +	}
>> +
>> +	rq_unlock(rq, &rf);
>> +	raw_spin_unlock_irq(&p->pi_lock);
>> +	put_task_struct(p);
>> +
>> +	return 0;
>> +}
>> +

Got it to work by using rt.pushable_tasks (RT) and rq->cfs_tasks (CFS).

I don't see any significant benefit from doing this. There is a slight improvement in the time
it takes to move the tasks out, which could help when there are very many tasks on the rq.
But most systems these days run with HZ=1000, i.e. a 1ms tick, so it shouldn't take very long
to push the current task out. Also, the rq lock likely needs to be held across the loop to
ensure the list isn't altered by an irq etc.

Given the complexity, I prefer the approach of pushing only the current task out.
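
For context, the enqueue side of the preferred approach looks roughly like below (a sketch,
not the exact hunk from this patch): a reference on the task is taken before queueing, since
paravirt_push_cpu_stop() drops it with put_task_struct().

         struct task_struct *push_task = rq->curr;

         /* matching reference for the put_task_struct() in the callback */
         get_task_struct(push_task);

         /* runs on the local CPU from the tick, hence this_cpu_ptr() */
         stop_one_cpu_nowait(cpu_of(rq), paravirt_push_cpu_stop, push_task,
                             this_cpu_ptr(&pv_push_task_work));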
---

         /* push the rt tasks out first */
         plist_for_each_entry_safe(p, tmp_p, &orig_rq->rt.pushable_tasks, pushable_tasks) {
                 rq = orig_rq;

                 if (kthread_is_per_cpu(p) || is_migration_disabled(p))
                         continue;

                 raw_spin_lock_irqsave(&p->pi_lock, flags);
                 rq_lock(rq, &rf);

                 update_rq_clock(rq);

                 if (task_rq(p) == rq && task_on_rq_queued(p)) {
                         cpu = select_fallback_rq(rq->cpu, p);
                         rq = __migrate_task(rq, &rf, p, cpu);
                 }

                 rq_unlock(rq, &rf);
                 raw_spin_unlock_irqrestore(&p->pi_lock, flags);
         }
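
The CFS side followed the same pattern over rq->cfs_tasks (a sketch; tasks sit on that list
via p->se.group_node, and it has the same caveat about the list changing while the rq lock
is dropped between iterations):

         /* then push the cfs tasks, same pattern over rq->cfs_tasks */
         list_for_each_entry_safe(p, tmp_p, &orig_rq->cfs_tasks, se.group_node) {
                 rq = orig_rq;

                 if (kthread_is_per_cpu(p) || is_migration_disabled(p))
                         continue;

                 raw_spin_lock_irqsave(&p->pi_lock, flags);
                 rq_lock(rq, &rf);

                 update_rq_clock(rq);

                 if (task_rq(p) == rq && task_on_rq_queued(p)) {
                         cpu = select_fallback_rq(rq->cpu, p);
                         rq = __migrate_task(rq, &rf, p, cpu);
                 }

                 rq_unlock(rq, &rf);
                 raw_spin_unlock_irqrestore(&p->pi_lock, flags);
         }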

