[PATCH] x86/uaccess: Avoid barrier_nospec() in copy_from_user()
Josh Poimboeuf
jpoimboe at kernel.org
Mon Oct 21 10:11:12 AEDT 2024
On Sun, Oct 20, 2024 at 03:59:25PM -0700, Linus Torvalds wrote:
> On Sun, 20 Oct 2024 at 15:44, Josh Poimboeuf <jpoimboe at kernel.org> wrote:
> >
> > Anyway, I'd really like to make forward progress on getting rid of the
> > LFENCEs in copy_from_user() and __get_user(), so until if/when we hear
> > back from both vendors, how about we avoid noncanonical exceptions
> > altogether (along with the edge cases mentioned above) and do something
> > like the below?
>
> That doesn't work for LAM at _all_.
Argh, ok.
> So at a minimum, you need to then say "for LAM enabled CPU's we do the
> 'shift sign bit' trick".
Something like below to wipe out the LAM bits beforehand?
I'm probably overlooking something else as there are a lot of annoying
details here...
> Hopefully any LAM-capable CPU doesn't have this issue?
>
> And I still think that clac/stac has to serialize with surrounding
> memory operations, making this all moot.
Until it's s/think/know/, can we please put something in place?
/*
 * Wipe any LAM metadata bits before the range check: shift them out
 * the top and sign-extend back down from the paging mode's highest
 * linear-address bit so the address is canonical again.
 */
#define FORCE_CANONICAL							\
	ALTERNATIVE_2							\
		"shl $(64 - 48), %rax; sar $(64 - 48), %rax",		\
		"shl $(64 - 57), %rax; sar $(64 - 57), %rax", X86_FEATURE_LA57, \
		"", ALT_NOT(X86_FEATURE_LAM)
#ifdef CONFIG_X86_5LEVEL
#define LOAD_TASK_SIZE_MINUS_N(n) \
	ALTERNATIVE __stringify(mov $((1 << 47) - 4096 - (n)),%rdx), \
		    __stringify(mov $((1 << 56) - 4096 - (n)),%rdx), X86_FEATURE_LA57
#else
#define LOAD_TASK_SIZE_MINUS_N(n) \
	mov $(TASK_SIZE_MAX - (n)),%_ASM_DX
#endif
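
In case the magic immediates raise eyebrows: AFAICT they're just
TASK_SIZE_MAX spelled out per paging mode, since TASK_SIZE_MAX isn't a
compile-time constant when 5-level paging is selected at boot.
Roughly (illustration only):

/* User space ends one 4K guard page below the canonical hole. */
#define TASK_SIZE_MAX_LA48	((1UL << 47) - 4096)	/* 4-level paging */
#define TASK_SIZE_MAX_LA57	((1UL << 56) - 4096)	/* 5-level paging */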
.macro check_range size:req
.if IS_ENABLED(CONFIG_X86_64)
	FORCE_CANONICAL
	/* If above TASK_SIZE_MAX, convert the address to all 1's */
	LOAD_TASK_SIZE_MINUS_N(\size-1)
	cmp %rax, %rdx
	sbb %rdx, %rdx		/* all 1's if limit < addr */
	or %rdx, %rax
.endif
.endm
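
And the cmp/sbb/or tail in C, to spell out the branchless clamp
(illustration only; clamp_user_addr() is a made-up name):

/*
 * %rdx holds limit = TASK_SIZE_MAX - size + 1, %rax the address.
 * cmp sets CF iff limit < addr, sbb turns CF into an all-1's mask,
 * and or forces any out-of-range address to -1, which is canonical
 * and faults cleanly -- no branch for the CPU to speculate past, so
 * no LFENCE needed.
 */
static inline unsigned long clamp_user_addr(unsigned long addr, unsigned long limit)
{
	unsigned long mask = 0UL - (unsigned long)(limit < addr);

	return addr | mask;
}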