<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<div class="moz-cite-prefix">On 04/03/26 12:19, Christophe Leroy (CS
GROUP) wrote:<br>
</div>
<blockquote type="cite"
cite="mid:d3aa734c-fed7-4cec-9a39-e1c1d8f8c480@kernel.org">Hi
Sayali,
<br>
<br>
On 04/03/2026 at 06:35, Sayali Patil wrote:
<br>
<blockquote type="cite">On powerpc with PREEMPT_FULL or
PREEMPT_LAZY and function tracing enabled,
<br>
KUAP warnings can be triggered from the VMX usercopy path under
memory
<br>
stress workloads.
<br>
<br>
KUAP requires that no subfunctions are called once userspace
access has
<br>
been enabled. The existing VMX copy implementation violates this
<br>
requirement by invoking enter_vmx_usercopy() from the assembly
path after
<br>
userspace access has already been enabled. If preemption occurs
<br>
in this window, the AMR state may not be preserved correctly,
<br>
leading to unexpected userspace access state and resulting in
<br>
KUAP warnings.
<br>
<br>
Fix this by restructuring the VMX usercopy flow so that VMX
selection
<br>
and VMX state management are centralized in
raw_copy_tofrom_user(),
<br>
which is invoked by the raw_copy_{to,from,in}_user() wrappers.
<br>
<br>
The new flow is:
<br>
<br>
- raw_copy_{to,from,in}_user() calls raw_copy_tofrom_user()
<br>
- raw_copy_tofrom_user() decides whether to use the VMX path
<br>
based on size and CPU capability
<br>
- Call enter_vmx_usercopy() before enabling userspace access
<br>
- Enable userspace access as per the copy direction
<br>
and perform the VMX copy
<br>
- Disable userspace access as per the copy direction
<br>
- Call exit_vmx_usercopy()
<br>
- Fall back to the base copy routine if the VMX copy faults
<br>
<br>
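The steps above can be sketched as a hypothetical, self-contained C model. The stub helpers (enter_vmx_usercopy(), allow_user_access(), and the copy routines) only stand in for the real kernel functions here; nothing below performs actual VMX or KUAP operations, it just models the control flow the list describes:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

#define VMX_COPY_THRESHOLD 3328	/* from the patch: sizes above this use VMX */

/* Kernel-style direction flags, values illustrative only */
#define KUAP_READ	1
#define KUAP_WRITE	2
#define KUAP_READ_WRITE	(KUAP_READ | KUAP_WRITE)

static bool vmx_available = true;	/* stands in for CPU_FTR_VMX_COPY */
static bool vmx_copy_should_fault;	/* test knob: force the fallback path */

static bool will_use_vmx(unsigned long n)
{
	return vmx_available && n > VMX_COPY_THRESHOLD;
}

/* Stubbed helpers: in the kernel these manage VMX context and KUAP state */
static bool enter_vmx_usercopy(void) { return vmx_available; }
static void exit_vmx_usercopy(void) { }
static void allow_user_access(void *to, int dir) { (void)to; (void)dir; }
static void prevent_user_access(int dir) { (void)dir; }

/* Return value is the number of bytes NOT copied, as in the real routines */
static unsigned long copy_vmx(void *to, const void *from, unsigned long n)
{
	if (vmx_copy_should_fault)
		return n;		/* pretend the VMX copy faulted */
	memcpy(to, from, n);
	return 0;
}

static unsigned long copy_base(void *to, const void *from, unsigned long n)
{
	memcpy(to, from, n);
	return 0;
}

/* Centralized flow: VMX selection, per-direction KUAP windows, fallback */
static unsigned long
raw_copy_tofrom_user(void *to, const void *from, unsigned long n, int dir)
{
	unsigned long ret;

	if (will_use_vmx(n) && enter_vmx_usercopy()) {
		allow_user_access(to, dir);
		ret = copy_vmx(to, from, n);
		prevent_user_access(dir);
		exit_vmx_usercopy();

		if (ret) {		/* VMX copy faulted: fall back */
			allow_user_access(to, dir);
			ret = copy_base(to, from, n);
			prevent_user_access(dir);
		}
		return ret;
	}

	allow_user_access(to, dir);
	ret = copy_base(to, from, n);
	prevent_user_access(dir);
	return ret;
}
```

Note how every subfunction call (enter_vmx_usercopy(), exit_vmx_usercopy()) sits outside the allow/prevent window, which is the KUAP ordering requirement the commit message describes.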
With this change, the VMX assembly routines no longer perform
VMX state
<br>
management or call helper functions; they only implement the
<br>
copy operations.
<br>
The previous feature-section based VMX selection inside
<br>
__copy_tofrom_user_power7() is removed, and a dedicated
<br>
__copy_tofrom_user_power7_vmx() entry point is introduced.
<br>
<br>
This ensures correct KUAP ordering, avoids subfunction calls
<br>
while KUAP is unlocked, and eliminates the warnings while
preserving
<br>
the VMX fast path.
<br>
<br>
Fixes: de78a9c42a79 ("powerpc: Add a framework for Kernel
Userspace Access Protection")
<br>
Reported-by: Shrikanth Hegde <a class="moz-txt-link-rfc2396E" href="mailto:sshegde@linux.ibm.com"><sshegde@linux.ibm.com></a>
<br>
Closes:
<a class="moz-txt-link-freetext" href="https://lore.kernel.org/all/20260109064917.777587-2-sshegde@linux.ibm.com/">https://lore.kernel.org/all/20260109064917.777587-2-sshegde@linux.ibm.com/</a><br>
Suggested-by: Christophe Leroy <a class="moz-txt-link-rfc2396E" href="mailto:chleroy@kernel.org"><chleroy@kernel.org></a>
<br>
Co-developed-by: Aboorva Devarajan
<a class="moz-txt-link-rfc2396E" href="mailto:aboorvad@linux.ibm.com"><aboorvad@linux.ibm.com></a>
<br>
Signed-off-by: Aboorva Devarajan <a class="moz-txt-link-rfc2396E" href="mailto:aboorvad@linux.ibm.com"><aboorvad@linux.ibm.com></a>
<br>
Signed-off-by: Sayali Patil <a class="moz-txt-link-rfc2396E" href="mailto:sayalip@linux.ibm.com"><sayalip@linux.ibm.com></a>
<br>
</blockquote>
<br>
That looks almost good; some editorial comments below.
<br>
<br>
With those fixed, you can add Reviewed-by: Christophe Leroy (CS
GROUP) <a class="moz-txt-link-rfc2396E" href="mailto:chleroy@kernel.org"><chleroy@kernel.org></a>
<br>
<br>
<blockquote type="cite">---
<br>
<br>
v2->v3
<br>
- Addressed review feedback by removing the usercopy_mode enum
<br>
and using the copy direction directly for KUAP permission
handling.
<br>
- Integrated __copy_tofrom_user_vmx() functionality into
<br>
raw_copy_tofrom_user() in uaccess.h as a static
__always_inline
<br>
implementation.
<br>
- Exported enter_vmx_usercopy() and exit_vmx_usercopy()
<br>
to support VMX usercopy handling from the common path.
<br>
<br>
v2:
<a class="moz-txt-link-freetext" href="https://lore.kernel.org/all/20260228135319.238985-1-sayalip@linux.ibm.com/">https://lore.kernel.org/all/20260228135319.238985-1-sayalip@linux.ibm.com/</a><br>
<br>
---
<br>
arch/powerpc/include/asm/uaccess.h | 66
++++++++++++++++++++++--------
<br>
arch/powerpc/lib/copyuser_64.S | 1 +
<br>
arch/powerpc/lib/copyuser_power7.S | 45 +++++++-------------
<br>
arch/powerpc/lib/vmx-helper.c | 2 +
<br>
4 files changed, 66 insertions(+), 48 deletions(-)
<br>
<br>
diff --git a/arch/powerpc/include/asm/uaccess.h
b/arch/powerpc/include/asm/uaccess.h
<br>
index ba1d878c3f40..8fd412671025 100644
<br>
--- a/arch/powerpc/include/asm/uaccess.h
<br>
+++ b/arch/powerpc/include/asm/uaccess.h
<br>
@@ -15,6 +15,9 @@
<br>
#define TASK_SIZE_MAX TASK_SIZE_USER64
<br>
#endif
<br>
+/* Threshold above which VMX copy path is used */
<br>
+#define VMX_COPY_THRESHOLD 3328
<br>
+
<br>
#include <asm-generic/access_ok.h>
<br>
/*
<br>
@@ -326,40 +329,67 @@ do { \
<br>
extern unsigned long __copy_tofrom_user(void __user *to,
<br>
const void __user *from, unsigned long size);
<br>
-#ifdef __powerpc64__
<br>
-static inline unsigned long
<br>
-raw_copy_in_user(void __user *to, const void __user *from,
unsigned long n)
<br>
+unsigned long __copy_tofrom_user_base(void __user *to,
<br>
+ const void __user *from, unsigned long size);
<br>
+
<br>
+unsigned long __copy_tofrom_user_power7_vmx(void __user *to,
<br>
+ const void __user *from, unsigned long size);
<br>
+
<br>
+
<br>
</blockquote>
<br>
Remove one line.
<br>
<br>
<blockquote type="cite">+static __always_inline bool
will_use_vmx(unsigned long n)
<br>
+{
<br>
+ return IS_ENABLED(CONFIG_ALTIVEC) &&
<br>
+ cpu_has_feature(CPU_FTR_VMX_COPY) &&
<br>
+ n > VMX_COPY_THRESHOLD;
<br>
</blockquote>
<br>
Avoid too many lines when possible. Nowadays up to 100 chars per
line are allowed.
<br>
<br>
Take care of the alignment of the second line: it should start at
the same position as IS_ENABLED, meaning you have to insert seven
spaces instead of a tab.
<br>
<br>
<blockquote type="cite">+}
<br>
+
<br>
+static __always_inline unsigned long raw_copy_tofrom_user(void
__user *to,
<br>
+ const void __user *from, unsigned long n,
<br>
+ unsigned long dir)
<br>
</blockquote>
<br>
Subsequent lines should start at the same position as the ( of the
first line, so I'd suggest the following form instead:
<br>
<br>
static __always_inline unsigned long
<br>
raw_copy_tofrom_user(void __user *to, const void __user *from,
unsigned long n, unsigned long dir)
<br>
<br>
<blockquote type="cite"> {
<br>
unsigned long ret;
<br>
- barrier_nospec();
<br>
- allow_user_access(to, KUAP_READ_WRITE);
<br>
+ if (will_use_vmx(n) && enter_vmx_usercopy()) {
<br>
+ allow_user_access(to, dir);
<br>
+ ret = __copy_tofrom_user_power7_vmx(to, from, n);
<br>
+ prevent_user_access(dir);
<br>
+ exit_vmx_usercopy();
<br>
+
<br>
+ if (unlikely(ret)) {
<br>
+ allow_user_access(to, dir);
<br>
+ ret = __copy_tofrom_user_base(to, from, n);
<br>
+ prevent_user_access(dir);
<br>
+ }
<br>
+ return ret;
<br>
+ }
<br>
+
<br>
+ allow_user_access(to, dir);
<br>
ret = __copy_tofrom_user(to, from, n);
<br>
- prevent_user_access(KUAP_READ_WRITE);
<br>
+ prevent_user_access(dir);
<br>
return ret;
<br>
}
<br>
+
<br>
+#ifdef __powerpc64__
<br>
</blockquote>
<br>
I know it was already there before, but checkpatch is not happy
about __powerpc64__. It should be replaced by CONFIG_PPC64.
<br>
<br>
<blockquote type="cite">+static inline unsigned long
<br>
+raw_copy_in_user(void __user *to, const void __user *from,
unsigned long n)
<br>
+{
<br>
+ barrier_nospec();
<br>
+ return raw_copy_tofrom_user(to, from, n, KUAP_READ_WRITE);
<br>
+}
<br>
#endif /* __powerpc64__ */
<br>
static inline unsigned long raw_copy_from_user(void *to,
<br>
const void __user *from, unsigned long n)
<br>
</blockquote>
<br>
Same problem with the alignment of the second line. Prefer the form
used for raw_copy_in_user() or raw_copy_to_user(), i.e.:
<br>
<br>
static inline unsigned long
<br>
raw_copy_from_user(void *to, const void __user *from, unsigned
long n)
<br>
<br>
<blockquote type="cite"> {
<br>
- unsigned long ret;
<br>
-
<br>
- allow_user_access(NULL, KUAP_READ);
<br>
- ret = __copy_tofrom_user((__force void __user *)to, from,
n);
<br>
- prevent_user_access(KUAP_READ);
<br>
- return ret;
<br>
+ return raw_copy_tofrom_user((__force void __user *)to,
from,
<br>
+ n, KUAP_READ);
<br>
</blockquote>
<br>
100 chars are allowed per line; this should fit on a single line.
<br>
<br>
<blockquote type="cite"> }
<br>
static inline unsigned long
<br>
raw_copy_to_user(void __user *to, const void *from, unsigned
long n)
<br>
{
<br>
- unsigned long ret;
<br>
-
<br>
- allow_user_access(to, KUAP_WRITE);
<br>
- ret = __copy_tofrom_user(to, (__force const void __user
*)from, n);
<br>
- prevent_user_access(KUAP_WRITE);
<br>
- return ret;
<br>
+ return raw_copy_tofrom_user(to, (__force const void __user
*)from,
<br>
+ n, KUAP_WRITE);
<br>
</blockquote>
<br>
100 chars are allowed per line; this should fit on a single line.
<br>
<br>
<blockquote type="cite"> }
<br>
unsigned long __arch_clear_user(void __user *addr, unsigned
long size);
<br>
</blockquote>
<br>
<br>
Run checkpatch before submitting patches:
<br>
<br>
$ ./scripts/checkpatch.pl --strict -g HEAD~
<br>
CHECK: Alignment should match open parenthesis
<br>
#83: FILE: arch/powerpc/include/asm/uaccess.h:333:
<br>
+unsigned long __copy_tofrom_user_base(void __user *to,
<br>
+ const void __user *from, unsigned long size);
<br>
<br>
CHECK: Alignment should match open parenthesis
<br>
#86: FILE: arch/powerpc/include/asm/uaccess.h:336:
<br>
+unsigned long __copy_tofrom_user_power7_vmx(void __user *to,
<br>
+ const void __user *from, unsigned long size);
<br>
<br>
CHECK: Please don't use multiple blank lines
<br>
#88: FILE: arch/powerpc/include/asm/uaccess.h:338:
<br>
+
<br>
+
<br>
<br>
CHECK: Alignment should match open parenthesis
<br>
#97: FILE: arch/powerpc/include/asm/uaccess.h:347:
<br>
+static __always_inline unsigned long raw_copy_tofrom_user(void
__user *to,
<br>
+ const void __user *from, unsigned long n,
<br>
<br>
CHECK: architecture specific defines should be avoided
<br>
#125: FILE: arch/powerpc/include/asm/uaccess.h:372:
<br>
+#ifdef __powerpc64__
<br>
<br>
total: 0 errors, 0 warnings, 5 checks, 212 lines checked
<br>
<br>
NOTE: For some of the reported defects, checkpatch may be able to
<br>
mechanically convert to the typical style using --fix or
--fix-inplace.
<br>
<br>
Commit 3a44f6614d88 ("powerpc: fix KUAP warning in VMX usercopy
path") has style problems, please review.
<br>
<br>
NOTE: If any of the errors are false positives, please report
<br>
them to the maintainer, see CHECKPATCH in MAINTAINERS. <br>
<br>
</blockquote>
<font size="4" face="monospace">Thanks Christophe for the review.<br>
I have addressed the comments and incorporated the changes in v4.<br>
<br>
As suggested, I have added:<br>
Reviewed-by: Christophe Leroy (CS GROUP)
<a class="moz-txt-link-rfc2396E" href="mailto:chleroy@kernel.org"><chleroy@kernel.org></a><br>
<br>
v4:
<a class="moz-txt-link-freetext" href="https://lore.kernel.org/all/20260304122201.153049-1-sayalip@linux.ibm.com/">https://lore.kernel.org/all/20260304122201.153049-1-sayalip@linux.ibm.com/</a></font>
</body>
</html>