[PATCH v2 1/2] powerpc: fix KUAP warning in VMX usercopy path

Christophe Leroy (CS GROUP) chleroy at kernel.org
Wed Mar 4 02:17:45 AEDT 2026


Hi once more,

> On 03/03/2026 at 16:10, Christophe Leroy (CS GROUP) wrote:
> Hi Again,
> 
>> On 03/03/2026 at 15:57, Christophe Leroy (CS GROUP) wrote:
>> Hi,
>>
>> On 03/03/2026 at 10:19, Sayali Patil wrote:
>>>
>>> On 02/03/26 16:42, Christophe Leroy (CS GROUP) wrote:
>>>>
>>> Hi Christophe,
>>> Thanks for the review.
>>> With the suggested change, we are hitting a compilation error.
>>>
>>> The issue is related to how KUAP enforces the access direction.
>>> allow_user_access() contains:
>>>
>>> BUILD_BUG_ON(!__builtin_constant_p(dir));
>>>
>>> which requires that the access direction is a compile-time constant.
>>> If we pass a runtime value (for example, an unsigned long), the
>>> __builtin_constant_p() check fails and triggers the following build
>>> error.
>>>
>>> Error:
>>> In function 'allow_user_access', inlined from 
>>> '__copy_tofrom_user_vmx' at arch/powerpc/lib/vmx-helper.c:19:3:
>>> BUILD_BUG_ON failed: !__builtin_constant_p(dir) 706
>>>
>>>
>>> The previous implementation worked because allow_user_access() was 
>>> invoked with enum
>>> constants (READ, WRITE, READ_WRITE), which satisfied the 
>>> __builtin_constant_p() requirement.
>>> So in this case, the function must be called with a compile-time 
>>> constant to satisfy KUAP.
>>>
>>> Please let me know if you would prefer a different approach.
>>>
>>
>> Ah, right, I missed that. The problem should only be in vmx-helper.c
>>
> 
> Thinking about it once more, I realised that powerpc does not define 
> INLINE_COPY_FROM_USER nor INLINE_COPY_TO_USER.
> 
> This means that raw_copy_from_user() and raw_copy_to_user() will in 
> reality not be called much. Therefore __copy_tofrom_user_vmx() could 
> remain in uaccess.h as static __always_inline, although it requires 
> exporting enter_vmx_usercopy() and exit_vmx_usercopy().

That would result in something like:

static __always_inline bool will_use_vmx(unsigned long n)
{
	return IS_ENABLED(CONFIG_ALTIVEC) && cpu_has_feature(CPU_FTR_VMX_COPY) &&
	       n > VMX_COPY_THRESHOLD;
}

static __always_inline unsigned long
raw_copy_tofrom_user(void __user *to, const void __user *from,
		     unsigned long n, unsigned long dir)
{
	unsigned long ret;

	if (will_use_vmx(n) && enter_vmx_usercopy()) {
		allow_user_access(to, dir);
		ret = __copy_tofrom_user_power7_vmx(to, from, n);
		prevent_user_access(dir);
		exit_vmx_usercopy();

		if (unlikely(ret)) {
			allow_user_access(to, dir);
			ret = __copy_tofrom_user_base(to, from, n);
			prevent_user_access(dir);
		}
		return ret;
	}
	allow_user_access(to, dir);
	ret = __copy_tofrom_user(to, from, n);
	prevent_user_access(dir);
	return ret;
}


Christophe


More information about the Linuxppc-dev mailing list