[PATCH v4 07/12] mm: enable lazy_mmu sections to nest
Ritesh Harjani (IBM)
ritesh.list at gmail.com
Fri Nov 7 03:32:39 AEDT 2025
Alexander Gordeev <agordeev at linux.ibm.com> writes:
> On Wed, Nov 05, 2025 at 02:19:03PM +0530, Ritesh Harjani wrote:
>> > + * in_lazy_mmu_mode() can be used to check whether the lazy MMU mode is
>> > + * currently enabled.
>> > */
>> > #ifdef CONFIG_ARCH_HAS_LAZY_MMU_MODE
>> > static inline void lazy_mmu_mode_enable(void)
>> > {
>> > - arch_enter_lazy_mmu_mode();
>> > + struct lazy_mmu_state *state = &current->lazy_mmu_state;
>> > +
>> > + VM_WARN_ON_ONCE(state->nesting_level == U8_MAX);
>> > + /* enable() must not be called while paused */
>> > + VM_WARN_ON(state->nesting_level > 0 && !state->active);
>> > +
>> > + if (state->nesting_level++ == 0) {
>> > + state->active = true;
>> > + arch_enter_lazy_mmu_mode();
>> > + }
>> > }
>>
>> Some architectures disable preemption in their
>> arch_enter_lazy_mmu_mode(). So shouldn't state->active = true be set
>> after arch_enter_lazy_mmu_mode() has disabled preemption? i.e.
>
> Do you have some scenario in mind that could cause an issue?
>
No, not really. It is a deviation from what the previous arch hooks were
expecting, but thinking it through, I don't have any use case where this
can be a problem.

Let me still revisit some of the lazy MMU code paths on ppc64...

Looking at the arch-specific use, I see we always go through
get_cpu_var() to access the per-cpu batch array, which disables
preemption before the per-cpu structure is touched. This per-cpu
structure is where we batch the pte updates.
For example:

  arch_enter_lazy_mmu_mode()
  ...
  hpte_need_flush()
      get_cpu_var()        // this takes care of preempt_disable()
      adds vpns to the per-cpu batch[i]
      put_cpu_var()        // preempt_enable()
  ...
  arch_leave_lazy_mmu_mode()
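
To make that concrete, here is a minimal, simplified sketch of the
pattern (the struct layout, field names and BATCH_NR below are
placeholders, not the actual ppc64 definitions from hash_tlb.c):

/*
 * Minimal sketch only -- names and layout are placeholders, not the
 * actual ppc64 structures.
 */
#include <linux/percpu.h>

#define BATCH_NR 192

struct tlb_batch {
        unsigned long vpn[BATCH_NR];
        unsigned long index;
};

static DEFINE_PER_CPU(struct tlb_batch, tlb_batch);

static void batch_one_vpn(unsigned long vpn)
{
        /*
         * get_cpu_var() disables preemption, so the task cannot migrate
         * while it touches the per-cpu batch. The lazy MMU state added
         * by this patch lives in current->lazy_mmu_state, i.e. it is
         * per-task and follows the task across CPUs.
         */
        struct tlb_batch *batch = &get_cpu_var(tlb_batch);

        batch->vpn[batch->index++] = vpn;
        if (batch->index == BATCH_NR)
                batch->index = 0;       /* real code would flush here */

        put_cpu_var(tlb_batch);         /* re-enables preemption */
}

So whichever CPU the task ends up on when hpte_need_flush() runs, the
batch it touches is protected by that get_cpu_var()/put_cpu_var() pair,
independent of when state->active was set.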
> IOW, what could go wrong if the process is scheduled to another
> CPU before preempt_disable() is called?
So from the above, I don't think your sequence of setting
state->active = true before calling the arch_enter hook should be a
problem.

Based on the above, this looks mostly ok to me.
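
And just to spell out my understanding of the nesting semantics from the
hunk quoted above (the disable side is not quoted here, so I am assuming
it mirrors enable; the caller name is made up):

static void some_caller(void)           /* hypothetical caller */
{
        lazy_mmu_mode_enable();         /* level 0 -> 1: arch_enter_lazy_mmu_mode() */

        /* a helper that opens its own lazy MMU section ... */
        lazy_mmu_mode_enable();         /* level 1 -> 2: no arch call */
        /* ... pte updates; hpte_need_flush() fills the per-cpu batch ... */
        lazy_mmu_mode_disable();        /* level 2 -> 1: assumed to stay in lazy mode */

        lazy_mmu_mode_disable();        /* level 1 -> 0: arch_leave_lazy_mmu_mode() */
}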
-ritesh