epoll_put_uevent() calls __put_user() twice, which expand into calls
to the out-of-line functions __put_user_nocheck_4() and
__put_user_nocheck_8().
Both functions wrap a single mov with a stac/clac pair to toggle
SMAP, which is expensive on an AMD EPYC 7B12 64-Core Processor
platform.
__put_user_nocheck_4 /proc/kcore [Percent: local period]
Percent │
89.91 │ stac
0.19 │ mov %eax,(%rcx)
0.15 │ xor %ecx,%ecx
9.69 │ clac
0.06 │ ← retq
This overhead stood out while testing neper/tcp_rr with 1000 flows
per thread.
Overhead Shared O Symbol
10.08% [kernel] [k] _copy_to_iter
7.12% [kernel] [k] ip6_output
6.40% [kernel] [k] sock_poll
5.71% [kernel] [k] move_addr_to_user
4.39% [kernel] [k] __put_user_nocheck_4
...
1.06% [kernel] [k] ep_try_send_events
... ^- epoll_put_uevent() was inlined
0.78% [kernel] [k] __put_user_nocheck_8
Patch 1 adds a new uaccess helper that is inlined to a bare stac,
without address masking or access_ok(), which is already done in
ep_check_params().
Patch 2 uses the helper and unsafe_put_user() in epoll_put_uevent().
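Roughly, the result looks like the sketch below (illustrative only;
the exact error labels and layout are in patch 2), so the two stores
share one stac/clac pair instead of paying for one pair each:

```c
static inline struct epoll_event __user *
epoll_put_uevent(__poll_t revents, __u64 data,
		 struct epoll_event __user *uevent)
{
	/* Bare stac; access_ok() was already done in ep_check_params(). */
	if (!__user_write_access_begin(uevent, sizeof(*uevent)))
		return NULL;

	/* Two stores inside a single stac/clac section. */
	unsafe_put_user(revents, &uevent->events, efault);
	unsafe_put_user(data, &uevent->data, efault);
	user_write_access_end();

	return uevent + 1;

efault:
	user_write_access_end();
	return NULL;
}
```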
Kuniyuki Iwashima (2):
uaccess: Add __user_write_access_begin().
epoll: Use __user_write_access_begin() and unsafe_put_user() in
epoll_put_uevent().
arch/arm64/include/asm/uaccess.h | 1 +
arch/powerpc/include/asm/uaccess.h | 13 ++++++++++---
arch/riscv/include/asm/uaccess.h | 1 +
arch/x86/include/asm/uaccess.h | 1 +
include/linux/eventpoll.h | 13 ++++++++-----
include/linux/uaccess.h | 1 +
6 files changed, 22 insertions(+), 8 deletions(-)
--
2.51.1.814.gb8fa24458f-goog