[RFC PATCH v3 07/10] sched/core: Push current task from paravirt CPU
Shrikanth Hegde
sshegde at linux.ibm.com
Fri Sep 12 15:22:00 AEST 2025
On 9/11/25 10:36 PM, K Prateek Nayak wrote:
> Hello Shrikanth,
>
> On 9/11/2025 10:22 PM, Shrikanth Hegde wrote:
>>>> + if (is_cpu_paravirt(cpu))
>>>> + push_current_from_paravirt_cpu(rq);
>>>
>>> Does this mean paravirt CPU is capable of handling an interrupt but may
>>> not be continuously available to run a task?
>>
>> When I run hackbench, which involves a fair bit of IRQ activity, the IRQs move out.
>>
>> For example,
>>
>> echo 600-710 > /sys/devices/system/cpu/paravirt
>>
>> 11:31:54 AM CPU %usr %nice %sys %iowait %irq %soft %steal %guest %gnice %idle
>> 11:31:57 AM 598 2.04 0.00 77.55 0.00 18.37 0.00 1.02 0.00 0.00 1.02
>> 11:31:57 AM 599 1.01 0.00 79.80 0.00 17.17 0.00 1.01 0.00 0.00 1.01
>> 11:31:57 AM 600 0.00 0.00 0.00 0.00 0.00 0.00 0.99 0.00 0.00 99.01
>> 11:31:57 AM 601 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
>> 11:31:57 AM 602 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 100.00
>>
>>
>> There could be some workloads where the IRQs don't move out; those would need an
>> irqbalance change. Looking into it.
>>
>>> Or is the VMM expected to set
>>> the CPU on the paravirt mask and give the vCPU sufficient time to move the
>>> task before yanking it away from the pCPU?
>>>
>>
>> If the vCPU is running something, it is going to run on a pCPU at some point; the
>> hypervisor will give cycles to this vCPU by preempting some other vCPU.
>>
>> The idea is that, with this infra in use, there should be nothing left running on a paravirt
>> vCPU. That way the VMM collectively gets only a limited number of requests for pCPUs, which
>> it can satisfy without vCPU preemption.
>
> Ack! Just wanted to understand the usage.
>
> P.S. I remember discussions during last LPC where we could communicate
> this unavailability via CPU capacity. Was that problematic for some
> reason? Sorry if I didn't follow this discussion earlier.
>
Thanks for that question; it gives an opportunity to retrospect.
Yes, that's where we started, but that approach has a lot of implementation challenges.
It is still an option, though.
History up to the current state:
1. At LPC24 we presented the problem statement and why existing approaches such as hotplug,
cpuset cgroups or taskset are not viable solutions. Hotplug would have come in handy if its cost
were low, but the overhead of the sched domain rebuild and the serial nature of hotplug make it
a non-viable option. One of the possible approaches discussed was CPU capacity.
2. Issues with the CPU capacity approach:
a. group_misfit_task would need to become the highest-priority group type. That alone would break
big.LITTLE, which relies on group misfit while group_overloaded keeps the higher priority there.
b. At high concurrency, tasks still moved to those CPUs with capacity = 1.
c. A lot of scheduler stats would need to be aware of the change in capacity, especially in
load balancing/wakeup.
d. In update_group_misfit, the misfit load would need to be set based on capacity; the current
code sets it to 0 because of the task_fits_cpu() logic.
e. More challenges in RT.
That's when Tobias introduced a new group type called group_parked:
https://lore.kernel.org/all/20241204112149.25872-2-huschle@linux.ibm.com/
It has a relatively cleaner implementation compared to the CPU capacity approach.
It had a few disadvantages too:
1. It used to take around 8-10 seconds for tasks to move out of those CPUs. That was the main
concern.
2. It needs a few stats-based changes in update_sg_lb_stats, which might be tricky in all scenarios.
That's when we explored how tasks move out when a CPU goes offline, which happens quite fast.
So we tried a similar mechanism, and this is where we are right now.
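For reference, the offline path we borrowed from does roughly the following. This is a
paraphrased sketch of balance_push() in kernel/sched/core.c, not the literal source; the
paravirt variant in this series is assumed to follow the same shape, with an
is_cpu_paravirt() style check in place of cpu_dying():

```
/*
 * Paraphrased sketch: how the offline path pushes the current
 * task off a dying CPU via the stopper.
 */
static void balance_push(struct rq *rq)
{
	struct task_struct *push_task = rq->curr;

	lockdep_assert_rq_held(rq);

	/* Only act when running on the dying CPU itself. */
	if (!cpu_dying(rq->cpu) || rq != this_rq())
		return;

	get_task_struct(push_task);
	/*
	 * Keep preemption disabled across the rq unlock so the
	 * stopper wakeup cannot race with hotplug (see commit
	 * f0498d2a54e79 mentioned below).
	 */
	preempt_disable();
	raw_spin_rq_unlock(rq);
	stop_one_cpu_nowait(rq->cpu, __balance_push_cpu_stop, push_task,
			    this_cpu_ptr(&push_work));
	preempt_enable();
	raw_spin_rq_lock(rq);
}
```

Since the stopper is the highest-priority class, the push happens as soon as the stopper
thread gets to run, which is why this is much faster than waiting for load balancing.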
> [..snip..]
>>>> + local_irq_save(flags);
>>>> + preempt_disable();
>>>
>>> Disabling IRQs implies preemption is disabled.
>>>
>>
>> In most places, stop_one_cpu_nowait() is called with preemption and IRQs disabled.
>> The stopper runs at the next possible opportunity.
>
> But is there any reason to do both local_irq_save() and
> preempt_disable()? include/linux/preempt.h defines preemptible() as:
>
> #define preemptible() (preempt_count() == 0 && !irqs_disabled())
>
> so disabling IRQs should be sufficient right or am I missing something?
>
Commit f0498d2a54e79 ("sched: Fix stop_one_cpu_nowait() vs hotplug") by Peter Zijlstra
could be the answer you are looking for.
>>
>> stop_one_cpu_nowait
>> ->queues the task into stopper list
>> -> wake_up_process(stopper)
>> -> set need_resched
>> -> stopper runs as early as possible.
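Put together, the caller-side pattern under discussion looks roughly like this. It is a
sketch only: push_current_from_paravirt_cpu() is the name from this series, while the use of
push_cpu_stop and rq->push_work here is an assumption for illustration, not necessarily what
the patch does:

```
/* Sketch of the caller pattern; helper wiring is assumed. */
static void push_current_from_paravirt_cpu(struct rq *rq)
{
	unsigned long flags;

	/*
	 * IRQs off alone already makes this context non-preemptible,
	 * but per f0498d2a54e79 the explicit preempt_disable() keeps
	 * the stopper wakeup safe vs hotplug once IRQs are re-enabled.
	 */
	local_irq_save(flags);
	preempt_disable();

	/* Queue the stopper; it runs at the next opportunity. */
	stop_one_cpu_nowait(cpu_of(rq), push_cpu_stop, rq->curr,
			    &rq->push_work);

	preempt_enable();
	local_irq_restore(flags);
}
```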
>>