[PATCH v2 0/2] send tlb_remove_table_smp_sync IPI only to necessary CPUs

ypodemsk at redhat.com
Thu Jun 22 23:11:32 AEST 2023

On Wed, 2023-06-21 at 09:43 +0200, Peter Zijlstra wrote:
> On Tue, Jun 20, 2023 at 05:46:16PM +0300, Yair Podemsky wrote:
> > Currently the tlb_remove_table_smp_sync IPI is sent to all CPUs
> > indiscriminately; this causes unnecessary work and delays, notably
> > in real-time use-cases and on isolated cpus. This series limits the
> > IPI to only the cpus referencing the affected mm. A config option
> > to differentiate architectures that support mm_cpumask from those
> > that don't allows safe usage of this feature.
> > 
> > Changes from v1:
> > - The previous version included a patch to only send the IPI to
> > CPUs with context_tracking in kernel space; this was removed due
> > to race condition concerns.
> > - For archs that do not maintain mm_cpumask, the mask used should
> > be cpu_online_mask (Peter Zijlstra).
> > 
> Would it not be much better to fix the root cause? As per the last
> time, there are patches that cure the thp abuse of this.
Hi Peter,
Thanks for your reply.
There are two code paths leading to this IPI: one is the thp path,
but the other is the failure to allocate a page in
tlb_remove_table(). It is the second path that we are most
interested in, as it was found to cause interference in a real-time
process for a client (that system did not have thp).
So while curing thp abuses is a good thing, it will unfortunately
not cure our root cause.
If you have any idea of how to remove the tlb_remove_table_sync_one()
call in the tlb_remove_table()->tlb_remove_table_one() call path --
the one that's relevant for us -- that would be great. As long as we
can't do that, I'm afraid all we can do is optimize for it to not
broadcast an IPI to all CPUs in the system, as done in this patch.
