[PATCH v2 2/2] powerpc/vmx: avoid KASAN instrumentation in enter_vmx_ops() for kexec

Sourabh Jain sourabhjain at linux.ibm.com
Sat Apr 4 06:01:16 AEDT 2026


The kexec sequence invokes enter_vmx_ops() via copy_page() with the MMU
disabled. In this context, code must not rely on normal virtual address
translations or trigger page faults.

With KASAN enabled, functions are instrumented and may access shadow
memory through regular address translation. When executed with the MMU
off, this can trigger page faults (bad_page_fault) from which the
kernel cannot recover in the kexec path, resulting in a hang.

The kexec path sets preempt_count to HARDIRQ_OFFSET before entering
the MMU-off copy sequence.

current_thread_info()->preempt_count = HARDIRQ_OFFSET
  kexec_sequence(..., copy_with_mmu_off = 1)
    -> kexec_copy_flush(image)
         copy_segments()
           -> copy_page(dest, addr)
                bl enter_vmx_ops()
                  if (in_interrupt())
                    return 0
                beq .Lnonvmx_copy

Since kexec sets preempt_count to HARDIRQ_OFFSET, in_interrupt()
evaluates to true and enter_vmx_ops() returns early.

Since in_interrupt() (and preempt_count()) are always inlined, marking
enter_vmx_ops() with __no_sanitize_address is sufficient to avoid KASAN
instrumentation and shadow memory accesses while the MMU is off,
allowing kexec to boot successfully with KASAN enabled.

Cc: Aditya Gupta <adityag at linux.ibm.com>
Cc: Daniel Axtens <dja at axtens.net>
Cc: Hari Bathini <hbathini at linux.ibm.com>
Cc: Madhavan Srinivasan <maddy at linux.ibm.com>
Cc: Mahesh Salgaonkar <mahesh at linux.ibm.com>
Cc: Michael Ellerman <mpe at ellerman.id.au>
Cc: Ritesh Harjani (IBM) <ritesh.list at gmail.com>
Cc: Shivang Upadhyay <shivangu at linux.ibm.com>
Cc: Venkat Rao Bagalkote <venkat88 at linux.ibm.com>
Reported-by: Aboorva Devarajan <aboorvad at linux.ibm.com>
Signed-off-by: Sourabh Jain <sourabhjain at linux.ibm.com>
---
Changelog:

v2:
- Remove __no_sanitize_address from exit_vmx_ops
- Add a comment explaining that marking only enter_vmx_ops
  with __no_sanitize_address is sufficient for kexec to
  function properly with KASAN enabled

v1:
https://lore.kernel.org/all/20260321053121.614022-1-sourabhjain@linux.ibm.com/
---
 arch/powerpc/lib/vmx-helper.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/arch/powerpc/lib/vmx-helper.c b/arch/powerpc/lib/vmx-helper.c
index 554b248002b4..57e897b60db8 100644
--- a/arch/powerpc/lib/vmx-helper.c
+++ b/arch/powerpc/lib/vmx-helper.c
@@ -52,7 +52,14 @@ int exit_vmx_usercopy(void)
 }
 EXPORT_SYMBOL(exit_vmx_usercopy);
 
-int enter_vmx_ops(void)
+/*
+ * Can be called from kexec copy_page() path with MMU off. The kexec
+ * code sets preempt_count to HARDIRQ_OFFSET so we return early here.
+ * Since in_interrupt() is always inlined, __no_sanitize_address on this
+ * function is sufficient to avoid KASAN shadow memory accesses in real
+ * mode.
+ */
+int __no_sanitize_address enter_vmx_ops(void)
 {
 	if (in_interrupt())
 		return 0;
-- 
2.52.0
