[PATCH 2/8] powerpc/signal: Add unsafe_copy_{vsx,fpr}_from_user()

Christophe Leroy christophe.leroy at csgroup.eu
Sun Feb 7 21:12:01 AEDT 2021



Le 06/02/2021 à 18:39, Christopher M. Riedl a écrit :
> On Sat Feb 6, 2021 at 10:32 AM CST, Christophe Leroy wrote:
>>
>>
>> Le 20/10/2020 à 04:01, Christopher M. Riedl a écrit :
>>> On Fri Oct 16, 2020 at 10:48 AM CDT, Christophe Leroy wrote:
>>>>
>>>>
>>>> Le 15/10/2020 à 17:01, Christopher M. Riedl a écrit :
>>>>> Reuse the "safe" implementation from signal.c except for calling
>>>>> unsafe_copy_from_user() to copy into a local buffer. Unlike the
>>>>> unsafe_copy_{vsx,fpr}_to_user() functions the "copy from" functions
>>>>> cannot use unsafe_get_user() directly to bypass the local buffer since
>>>>> doing so significantly reduces signal handling performance.
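>>>>>
>>>>> The buffered variant ends up mirroring the existing safe
>>>>> copy_fpr_from_user(), just inside the user access window. A sketch
>>>>> (not verbatim from the patch):
>>>>>
>>>>> 	#define unsafe_copy_fpr_from_user(task, from, label)	do {	\
>>>>> 		struct task_struct *__t = task;				\
>>>>> 		u64 __user *__buf = (u64 __user *)from;			\
>>>>> 		u64 __fpr[ELF_NFPREG];					\
>>>>> 		int i;							\
>>>>> 									\
>>>>> 		/* one bulk copy instead of per-register loads */	\
>>>>> 		unsafe_copy_from_user(__fpr, __buf,			\
>>>>> 				ELF_NFPREG * sizeof(double), label);	\
>>>>> 		for (i = 0; i < ELF_NFPREG - 1; i++)			\
>>>>> 			__t->thread.TS_FPR(i) = __fpr[i];		\
>>>>> 		__t->thread.fp_state.fpscr = __fpr[i];			\
>>>>> 	} while (0)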
>>>>
>>>> Why can't the functions use unsafe_get_user()? Why does it significantly
>>>> reduce signal handling performance, and by how much? I would expect that
>>>> not going through an intermediate memory area would be more efficient.
>>>>
>>>
>>> Here is a comparison of signal handling throughput (higher is better);
>>> 'unsafe-signal64-regs' avoids the intermediate buffer:
>>>
>>> 	|                      | hash   | radix  |
>>> 	| -------------------- | ------ | ------ |
>>> 	| linuxppc/next        | 289014 | 158408 |
>>> 	| unsafe-signal64      | 298506 | 253053 |
>>> 	| unsafe-signal64-regs | 254898 | 220831 |
>>>
>>> I have not figured out the 'why' yet. As you mentioned in your series,
>>> technically calling __copy_tofrom_user() is overkill for these
>>> operations. The only obvious difference between unsafe_put_user() and
>>> unsafe_get_user() is that we don't have asm-goto for the 'get' variant.
>>> Instead we wrap with unsafe_op_wrap(), which inserts a conditional and
>>> then a goto to the label.
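>>>
>>> For reference, the powerpc uaccess.h definitions look roughly like this
>>> (from memory, so double-check the exact names):
>>>
>>> 	/* 'get' falls back to a conditional branch to the label... */
>>> 	#define unsafe_op_wrap(op, label)				\
>>> 		do { if (unlikely(op)) goto label; } while (0)
>>> 	#define unsafe_get_user(x, p, e)	unsafe_op_wrap(__get_user_allowed(x, p), e)
>>> 	/* ...while 'put' branches directly via asm-goto */
>>> 	#define unsafe_put_user(x, p, e)	__put_user_goto(x, p, e)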
>>>
>>> Implementations:
>>>
>>> 	#define unsafe_copy_fpr_from_user(task, from, label)   do {            \
>>> 	       struct task_struct *__t = task;                                 \
>>> 	       u64 __user *buf = (u64 __user *)from;                           \
>>> 	       int i;                                                          \
>>> 									       \
>>> 	       for (i = 0; i < ELF_NFPREG - 1; i++)                            \
>>> 		       unsafe_get_user(__t->thread.TS_FPR(i), &buf[i], label); \
>>> 	       unsafe_get_user(__t->thread.fp_state.fpscr, &buf[i], label);    \
>>> 	} while (0)
>>>
>>> 	#define unsafe_copy_vsx_from_user(task, from, label)   do {            \
>>> 	       struct task_struct *__t = task;                                 \
>>> 	       u64 __user *buf = (u64 __user *)from;                           \
>>> 	       int i;                                                          \
>>> 									       \
>>> 	       for (i = 0; i < ELF_NVSRHALFREG; i++)                           \
>>> 		       unsafe_get_user(__t->thread.fp_state.fpr[i][TS_VSRLOWOFFSET], \
>>> 				       &buf[i], label);                        \
>>> 	} while (0)
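>>>
>>> Callers would use either variant inside a user access window with a
>>> local error label, something like this (sketch; 'from' and the
>>> enclosing function are made up):
>>>
>>> 	if (!user_read_access_begin(from, ELF_NFPREG * sizeof(double)))
>>> 		return 1;
>>> 	unsafe_copy_fpr_from_user(current, from, failed);
>>> 	user_read_access_end();
>>> 	return 0;
>>>
>>> failed:
>>> 	user_read_access_end();
>>> 	return 1;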
>>>
>>
>> Do you have CONFIG_PROVE_LOCKING or CONFIG_DEBUG_ATOMIC_SLEEP enabled in
>> your config?
> 
> I don't have these set in my config (ppc64le_defconfig). I think I
> figured this out - the reason for the lower signal throughput is the
> barrier_nospec() in __get_user_nocheck(). When looping we incur that
> cost on every iteration. Commenting it out results in signal performance
> of ~316K w/ hash on the unsafe-signal64-regs branch. Obviously the
> barrier is there for a reason but it is quite costly.
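> 
> In sketch form (illustrative only; assumes the barrier sits at the top
> of each __get_user_nocheck() expansion):
> 
> 	/* Looped unsafe_get_user(): one barrier per register. */
> 	for (i = 0; i < ELF_NFPREG - 1; i++) {
> 		barrier_nospec();
> 		/* ...inline load from &buf[i] with exception-table fixup... */
> 	}
> 
> 	/* Buffered unsafe_copy_from_user(): one barrier for the whole block. */
> 	barrier_nospec();
> 	/* ...single __copy_tofrom_user() of ELF_NFPREG * sizeof(double) bytes... */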

Interesting.

Can you try with the patch I just sent out?
https://patchwork.ozlabs.org/project/linuxppc-dev/patch/c72f014730823b413528e90ab6c4d3bcb79f8497.1612692067.git.christophe.leroy@csgroup.eu/

Thanks
Christophe

