[PATCH] x86/uaccess: Avoid barrier_nospec() in copy_from_user()
Borislav Petkov
bp at alien8.de
Sun Oct 13 01:21:07 AEDT 2024
On Sat, Oct 12, 2024 at 09:09:23AM -0500, Josh Poimboeuf wrote:
> On Sat, Oct 12, 2024 at 09:48:57AM +0100, Andrew Cooper wrote:
> > On 12/10/2024 5:09 am, Josh Poimboeuf wrote:
> > > For x86-64, the barrier_nospec() in copy_from_user() is overkill and
> > > painfully slow. Instead, use pointer masking to force the user pointer
> > > to a non-kernel value even in speculative paths.
> > >
> > > Signed-off-by: Josh Poimboeuf <jpoimboe at kernel.org>
> >
> > You do realise mask_user_address() is unsafe under speculation on AMD
> > systems?
> >
> > Had the mask_user_address() patch been put for review, this feedback
> > would have been given then.
> >
> >
> > AMD needs to arrange for bit 47 (bit 56 with LA57) to be the one
> > saturated by shifting, not bit 63.
>
> Ok... why?
I've been working on a fix for this. Still WIP.
Author: Borislav Petkov (AMD) <bp at alien8.de>
Date: Sat Oct 12 00:18:10 2024 +0200
x86/uaccess: Fix user access masking on AMD
Commit
2865baf54077 ("x86: support user address masking instead of non-speculative conditional")
started applying a data-dependent mask to the user address in order to
speed up the fast unsafe user access patterns.
However, AMD CPUs use only 48 or 56 bits of the address (with 4- and
5-level paging, respectively) when doing transient TLB accesses, so a
non-canonical kernel address supplied by the user can still result in a
TLB hit and transient access to kernel data, leading to a potential
Spectre v1 leak; see the bulletin at the Link below.
Therefore, use the most significant implemented virtual address bit
(bit 47, or bit 56 with LA57) when doing the masking in order to
prevent such transient TLB hits.
Fixes: 2865baf54077 ("x86: support user address masking instead of non-speculative conditional")
Signed-off-by: Borislav Petkov (AMD) <bp at alien8.de>
Link: https://www.amd.com/en/resources/product-security/bulletin/amd-sb-1010.html
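To illustrate the difference between the two masks, here is a quick
userspace sketch (not kernel code; it assumes 4-level paging, i.e.
__VIRTUAL_MASK_SHIFT == 47, and the example addresses are made up).
mask_old() and mask_new() are local stand-ins for the old and new
mask_user_address() respectively:

#include <stdio.h>

#define __VIRTUAL_MASK_SHIFT	47	/* 4-level paging; 56 with LA57 */

/* Old mask: saturates on bit 63 only. */
static unsigned long mask_old(unsigned long x)
{
	return x | (unsigned long)((long)x >> 63);
}

/* New mask: saturates on bit __VIRTUAL_MASK_SHIFT (relies on an
 * arithmetic right shift, just like the kernel macro). */
static unsigned long mask_new(unsigned long x)
{
	return x | (unsigned long)((long)(x << (63 - __VIRTUAL_MASK_SHIFT)) >> 63);
}

int main(void)
{
	const unsigned long addrs[] = {
		0x00007fffdeadbeefUL,	/* canonical user address */
		0x0000888000000000UL,	/* non-canonical; low 48 bits alias
					 * a typical kernel direct-map address */
		0xffff888000000000UL,	/* canonical kernel address */
	};

	for (unsigned int i = 0; i < sizeof(addrs) / sizeof(addrs[0]); i++)
		printf("%016lx -> old %016lx  new %016lx\n",
		       addrs[i], mask_old(addrs[i]), mask_new(addrs[i]));

	return 0;
}

The second address is the interesting case: bit 63 is clear, so the old
mask passes it through unchanged, yet its lower 48 bits match a kernel
address, which is exactly what allows the transient TLB hit on AMD. The
new mask saturates it to all ones because bit 47 is set.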
diff --git a/arch/x86/include/asm/uaccess_64.h b/arch/x86/include/asm/uaccess_64.h
index afce8ee5d7b7..73c7a6bf1468 100644
--- a/arch/x86/include/asm/uaccess_64.h
+++ b/arch/x86/include/asm/uaccess_64.h
@@ -58,7 +58,9 @@ static inline unsigned long __untagged_addr_remote(struct mm_struct *mm,
* user_access_begin that can avoid the fencing. This only works
* for dense accesses starting at the address.
*/
-#define mask_user_address(x) ((typeof(x))((long)(x)|((long)(x)>>63)))
+#define mask_user_address(x) ((typeof(x)) \
+ ((long)(x) | ((long)(x) << (63 - __VIRTUAL_MASK_SHIFT) >> 63)))
+
#define masked_user_access_begin(x) ({ \
__auto_type __masked_ptr = (x); \
__masked_ptr = mask_user_address(__masked_ptr); \
--
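As a sanity check on the new mask's semantics: it is the identity for
every address with bit 47 clear (which covers the whole canonical user
range, and whose low 48 bits can only alias user space transiently),
and it saturates to all ones, an address the kernel never maps, as soon
as bit 47 is set. A small userspace check (again assuming 4-level
paging; the boundary values are just examples):

#include <assert.h>
#include <stdio.h>

#define __VIRTUAL_MASK_SHIFT	47

static unsigned long mask_new(unsigned long x)
{
	return x | (unsigned long)((long)(x << (63 - __VIRTUAL_MASK_SHIFT)) >> 63);
}

int main(void)
{
	/* Bit 47 clear: the mask must be a no-op. */
	const unsigned long pass[] = {
		0x0UL,
		0x00007fffffffffffUL,	/* top of the canonical user range */
		0x0100000000000000UL,	/* non-canonical, but its low 48 bits
					 * alias user space, so harmless */
	};
	/* Bit 47 set: the mask must saturate to all ones. */
	const unsigned long sat[] = {
		0x0000800000000000UL,	/* lowest address with bit 47 set */
		0xffff800000000000UL,	/* canonical kernel base */
		0xffffffffffffffffUL,
	};

	for (unsigned int i = 0; i < 3; i++) {
		assert(mask_new(pass[i]) == pass[i]);
		assert(mask_new(sat[i]) == ~0UL);
	}

	printf("identity below bit 47, saturated at or above\n");
	return 0;
}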
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette