[PATCH v2] sched/membarrier: Fix redundant load of membarrier_state
Segher Boessenkool
segher at kernel.crashing.org
Thu Oct 31 00:33:39 AEDT 2024
Hi!
On Tue, Oct 29, 2024 at 11:21:28AM +0530, Nysal Jan K.A. wrote:
> On architectures where ARCH_HAS_SYNC_CORE_BEFORE_USERMODE
> is not selected, sync_core_before_usermode() is a no-op.
> In membarrier_mm_sync_core_before_usermode() the compiler does not
> eliminate the redundant branches and the load of mm->membarrier_state
> in this case, as the atomic_read() cannot be optimized away.
>
> Here's a snippet of the code generated for finish_task_switch() on powerpc
> prior to this change:
>
> 1b786c: ld r26,2624(r30) # mm = rq->prev_mm;
> .......
> 1b78c8: cmpdi cr7,r26,0
> 1b78cc: beq cr7,1b78e4 <finish_task_switch+0xd0>
> 1b78d0: ld r9,2312(r13) # current
> 1b78d4: ld r9,1888(r9) # current->mm
> 1b78d8: cmpd cr7,r26,r9
> 1b78dc: beq cr7,1b7a70 <finish_task_switch+0x25c>
> 1b78e0: hwsync
> 1b78e4: cmplwi cr7,r27,128
> .......
> 1b7a70: lwz r9,176(r26) # atomic_read(&mm->membarrier_state)
> 1b7a74: b 1b78e0 <finish_task_switch+0xcc>
>
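For context, the pre-patch inline from include/linux/sched/mm.h that the
code above is compiled from looks roughly like this (a sketch; details
such as the exact membarrier state mask checked may differ):

static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
{
	/*
	 * Both the current->mm comparison and the atomic_read() below
	 * survive even when sync_core_before_usermode() expands to
	 * nothing, because the atomic_read() is a volatile access that
	 * the compiler has to keep, on exactly the paths it appears on.
	 */
	if (current->mm != mm)
		return;
	if (likely(!(atomic_read(&mm->membarrier_state) &
		     MEMBARRIER_STATE_PRIVATE_EXPEDITED_SYNC_CORE)))
		return;
	sync_core_before_usermode();	/* no-op without ARCH_HAS_SYNC_CORE_BEFORE_USERMODE */
}
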
> This was found while analyzing "perf c2c" reports on kernels prior
> to commit c1753fd02a00 ("mm: move mm_count into its own cache line"),
> where mm_count was false sharing with membarrier_state.
>
> There is a minor improvement in the size of finish_task_switch().
> The following are results from bloat-o-meter for ppc64le:
>
> GCC 7.5.0
> ---------
> add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-32 (-32)
> Function old new delta
> finish_task_switch 884 852 -32
>
> GCC 12.2.1
> ----------
> add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-32 (-32)
> Function old new delta
> finish_task_switch.isra 852 820 -32
>
> LLVM 17.0.6
> -----------
> add/remove: 0/0 grow/shrink: 0/2 up/down: 0/-36 (-36)
> Function old new delta
> rt_mutex_schedule 120 104 -16
> finish_task_switch 792 772 -20
>
> Results on aarch64:
>
> GCC 14.1.1
> ----------
> add/remove: 0/2 grow/shrink: 1/1 up/down: 4/-60 (-56)
> Function old new delta
> get_nohz_timer_target 352 356 +4
> e843419@0b02_0000d7e7_408 8 - -8
> e843419@01bb_000021d2_868 8 - -8
> finish_task_switch.isra 592 548 -44
>
> Signed-off-by: Nysal Jan K.A. <nysal@linux.ibm.com>
> ---
> V1 -> V2:
> - Add results for aarch64
> - Add a comment describing the changes
> ---
> include/linux/sched/mm.h | 7 +++++++
> 1 file changed, 7 insertions(+)
>
> diff --git a/include/linux/sched/mm.h b/include/linux/sched/mm.h
> index 928a626725e6..b13474825130 100644
> --- a/include/linux/sched/mm.h
> +++ b/include/linux/sched/mm.h
> @@ -531,6 +531,13 @@ enum {
>
> static inline void membarrier_mm_sync_core_before_usermode(struct mm_struct *mm)
> {
> + /*
> + * The atomic_read() below prevents CSE. The following should
> + * help the compiler generate more efficient code on architectures
> + * where sync_core_before_usermode() is a no-op.
> + */
> + if (!IS_ENABLED(CONFIG_ARCH_HAS_SYNC_CORE_BEFORE_USERMODE))
> + return;
> if (current->mm != mm)
> return;
> if (likely(!(atomic_read(&mm->membarrier_state) &

I'd say "CSE and similar transformations", but yeah, in this case CSE.

The point is that any access to a volatile object is a necessary side
effect, so it has to be performed on the actual machine just as on the
abstract machine (on all the same paths, and as often). It might be
nice to have an atomic_read (for PowerPC) that can generate better
machine code. Not a trivial task though!
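To make the volatile-access point concrete, here is a minimal standalone
sketch (not from the patch):

int plain_twice(int *p)
{
	return *p + *p;			/* one load; the reads may be merged */
}

int volatile_twice(int *p)
{
	return *(volatile int *)p +
	       *(volatile int *)p;	/* two loads; each access is a side effect */
}

On most architectures atomic_read() is a READ_ONCE() of the counter,
i.e. a volatile access of this kind, so the load of mm->membarrier_state
can only go away if the whole path is removed at compile time. That is
what the IS_ENABLED() early return achieves: it folds to a constant,
making everything after it dead code on architectures without
ARCH_HAS_SYNC_CORE_BEFORE_USERMODE.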

Reviewed-by: Segher Boessenkool <segher@kernel.crashing.org>
Segher