[RFC PATCH v1 5/6] mm: Avoid calling page allocator while in lazy mmu mode
Ryan Roberts
ryan.roberts at arm.com
Sat May 31 00:04:43 AEST 2025
Lazy mmu mode applies to the current task and permits pte modifications
to be deferred and applied later in a batch to improve performance.
tlb_next_batch() is called in lazy mmu mode as follows:
  zap_pte_range
    arch_enter_lazy_mmu_mode
    do_zap_pte_range
      zap_present_ptes
        zap_present_folio_ptes
          __tlb_remove_folio_pages
            __tlb_remove_folio_pages_size
              tlb_next_batch
    arch_leave_lazy_mmu_mode
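
For reference, the contract that matters here looks roughly like the
fragment below (a simplified sketch modelled on the zap path above, not
the actual zap_pte_range() code; surrounding declarations are omitted
and the loop body is illustrative):

  arch_enter_lazy_mmu_mode();		/* start deferring pte updates */

  do {
	/*
	 * pte modifications made via the usual helpers in here may be
	 * queued by the arch rather than written to the page table
	 * immediately.
	 */
	ptent = ptep_get_and_clear_full(mm, addr, pte, tlb->fullmm);
  } while (pte++, addr += PAGE_SIZE, addr != end);

  arch_leave_lazy_mmu_mode();		/* deferred updates are applied here */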
tlb_next_batch() may call into the page allocator, which is problematic
with CONFIG_DEBUG_PAGEALLOC because debug_pagealloc_[un]map_pages()
calls the arch implementation of __kernel_map_pages(), which must
modify the ptes for the linear map.
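
For reference, the generic side of that hook looks roughly like this
(paraphrased from include/linux/mm.h; the exact form varies by kernel
version):

  static inline void debug_pagealloc_map_pages(struct page *page, int numpages)
  {
	if (debug_pagealloc_enabled_static())
		__kernel_map_pages(page, numpages, 1);
  }

So the __get_free_page() call in tlb_next_batch() can end up modifying
linear map ptes via the arch's __kernel_map_pages() while the caller is
still in lazy mmu mode.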
There are two possibilities at this point:

  - If the arch implementation modifies the ptes directly without first
    entering lazy mmu mode, the pte modifications may get deferred until
    the existing lazy mmu mode is exited. This could result in taking
    spurious faults, for example.

  - If the arch implementation enters a nested lazy mmu mode before
    modifying the ptes (many arches use apply_to_page_range(); see the
    sketch below), then the linear map updates will definitely be
    applied upon leaving the inner lazy mmu mode. But because lazy mmu
    mode does not support nesting, the remainder of the outer user is no
    longer in lazy mmu mode and the optimization opportunity is lost.
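
To illustrate the second case, a hypothetical arch implementation built
on apply_to_page_range() might look like the sketch below (the callback
name is made up, and the actual pte manipulation, which is arch
specific, is elided):

  static int kernel_map_pte_cb(pte_t *ptep, unsigned long addr, void *data)
  {
	pte_t pte = ptep_get(ptep);

	/* arch specific: set/clear the valid bit of this linear map pte */

	set_pte_at(&init_mm, addr, ptep, pte);
	return 0;
  }

  void __kernel_map_pages(struct page *page, int numpages, int enable)
  {
	unsigned long addr = (unsigned long)page_address(page);

	/*
	 * apply_to_page_range() wraps the pte walk in its own
	 * arch_enter_lazy_mmu_mode()/arch_leave_lazy_mmu_mode() pair.
	 * Because lazy mmu mode does not nest, the inner leave also
	 * terminates the outer caller's lazy mmu section.
	 */
	apply_to_page_range(&init_mm, addr, numpages * PAGE_SIZE,
			    kernel_map_pte_cb, &enable);
  }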
So let's just ensure that the page allocator is never called from
within lazy mmu mode. Use the new arch_in_lazy_mmu_mode() API to check
whether we are in lazy mmu mode and, if so, temporarily leave it around
the call into the page allocator.
Given this new API, we can also add VM_WARN_ONs to check that we have
exited lazy mmu mode where required, ensuring the PTEs are actually
updated prior to TLB flushing.
Signed-off-by: Ryan Roberts <ryan.roberts at arm.com>
---
 include/asm-generic/tlb.h |  2 ++
 mm/mmu_gather.c           | 15 +++++++++++++++
 2 files changed, 17 insertions(+)

diff --git a/include/asm-generic/tlb.h b/include/asm-generic/tlb.h
index 88a42973fa47..84fb269b78a5 100644
--- a/include/asm-generic/tlb.h
+++ b/include/asm-generic/tlb.h
@@ -469,6 +469,8 @@ tlb_update_vma_flags(struct mmu_gather *tlb, struct vm_area_struct *vma)
 
 static inline void tlb_flush_mmu_tlbonly(struct mmu_gather *tlb)
 {
+	VM_WARN_ON(arch_in_lazy_mmu_mode());
+
 	/*
 	 * Anything calling __tlb_adjust_range() also sets at least one of
 	 * these bits.
diff --git a/mm/mmu_gather.c b/mm/mmu_gather.c
index db7ba4a725d6..0bd1e69b048b 100644
--- a/mm/mmu_gather.c
+++ b/mm/mmu_gather.c
@@ -18,6 +18,7 @@
 static bool tlb_next_batch(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
+	bool lazy_mmu;
 
 	/* Limit batching if we have delayed rmaps pending */
 	if (tlb->delayed_rmap && tlb->active != &tlb->local)
@@ -32,7 +33,15 @@ static bool tlb_next_batch(struct mmu_gather *tlb)
 	if (tlb->batch_count == MAX_GATHER_BATCH_COUNT)
 		return false;
 
+	lazy_mmu = arch_in_lazy_mmu_mode();
+	if (lazy_mmu)
+		arch_leave_lazy_mmu_mode();
+
 	batch = (void *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
+
+	if (lazy_mmu)
+		arch_enter_lazy_mmu_mode();
+
 	if (!batch)
 		return false;
 
@@ -145,6 +154,8 @@ static void tlb_batch_pages_flush(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch;
 
+	VM_WARN_ON(arch_in_lazy_mmu_mode());
+
 	for (batch = &tlb->local; batch && batch->nr; batch = batch->next)
 		__tlb_batch_free_encoded_pages(batch);
 	tlb->active = &tlb->local;
@@ -154,6 +165,8 @@ static void tlb_batch_list_free(struct mmu_gather *tlb)
 {
 	struct mmu_gather_batch *batch, *next;
 
+	VM_WARN_ON(arch_in_lazy_mmu_mode());
+
 	for (batch = tlb->local.next; batch; batch = next) {
 		next = batch->next;
 		free_pages((unsigned long)batch, 0);
@@ -363,6 +376,8 @@ void tlb_remove_table(struct mmu_gather *tlb, void *table)
 {
 	struct mmu_table_batch **batch = &tlb->batch;
 
+	VM_WARN_ON(arch_in_lazy_mmu_mode());
+
 	if (*batch == NULL) {
 		*batch = (struct mmu_table_batch *)__get_free_page(GFP_NOWAIT | __GFP_NOWARN);
 		if (*batch == NULL) {
--
2.43.0