[PATCH bpf-next v4 1/2] powerpc64/bpf: Support internal-only MOV instruction to resolve per-CPU addrs
Hari Bathini
hbathini at linux.ibm.com
Wed Dec 17 16:11:17 AEDT 2025
On 10/12/25 12:20 pm, Saket Kumar Bhaskar wrote:
> With the introduction of commit 7bdbf7446305 ("bpf: add special
> internal-only MOV instruction to resolve per-CPU addrs"), a new
> BPF instruction, BPF_MOV64_PERCPU_REG, resolves the absolute
> address of per-CPU data from its per-CPU offset. Support for this
> instruction now needs to be enabled in the powerpc JIT compiler.
>
> As of commit 7a0268fa1a36 ("[PATCH] powerpc/64: per cpu data
> optimisations"), the per-CPU data offset for the CPU is stored in
> the paca.
>
> To support this BPF instruction in the powerpc JIT, the following
> powerpc instructions are emitted:
> if (IS_ENABLED(CONFIG_SMP))
> 	ld tmp1_reg, 48(13)		// Load per-CPU data offset from the paca (r13) into tmp1_reg.
> 	add dst_reg, src_reg, tmp1_reg	// Add the per-CPU offset to dst_reg.
> else if (src_reg != dst_reg)
> 	mr dst_reg, src_reg		// Move src_reg to dst_reg.
>
> To evaluate the performance improvements introduced by this change,
> the benchmark described in [1] was employed.
>
> Before Change:
> glob-arr-inc : 41.580 ± 0.034M/s
> arr-inc : 39.592 ± 0.055M/s
> hash-inc : 25.873 ± 0.012M/s
>
> After Change:
> glob-arr-inc : 42.024 ± 0.049M/s
> arr-inc : 55.447 ± 0.031M/s
> hash-inc : 26.565 ± 0.014M/s
>
> [1] https://github.com/anakryiko/linux/commit/8dec900975ef
>
Looks good to me.
Acked-by: Hari Bathini <hbathini at linux.ibm.com>
> Reviewed-by: Puranjay Mohan <puranjay at kernel.org>
> Signed-off-by: Saket Kumar Bhaskar <skb99 at linux.ibm.com>
> ---
> arch/powerpc/net/bpf_jit_comp.c | 5 +++++
> arch/powerpc/net/bpf_jit_comp64.c | 10 ++++++++++
> 2 files changed, 15 insertions(+)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 5e976730b2f5..d53e9cd7563f 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -466,6 +466,11 @@ bool bpf_jit_supports_insn(struct bpf_insn *insn, bool in_arena)
> return true;
> }
>
> +bool bpf_jit_supports_percpu_insn(void)
> +{
> + return IS_ENABLED(CONFIG_PPC64);
> +}
> +
> void *arch_alloc_bpf_trampoline(unsigned int size)
> {
> return bpf_prog_pack_alloc(size, bpf_jit_fill_ill_insns);
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 1fe37128c876..37723ee9344e 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -918,6 +918,16 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, u32 *fimage, struct code
> case BPF_ALU | BPF_MOV | BPF_X: /* (u32) dst = src */
> case BPF_ALU64 | BPF_MOV | BPF_X: /* dst = src */
>
> + if (insn_is_mov_percpu_addr(&insn[i])) {
> + if (IS_ENABLED(CONFIG_SMP)) {
> + EMIT(PPC_RAW_LD(tmp1_reg, _R13, offsetof(struct paca_struct, data_offset)));
> + EMIT(PPC_RAW_ADD(dst_reg, src_reg, tmp1_reg));
> + } else if (src_reg != dst_reg) {
> + EMIT(PPC_RAW_MR(dst_reg, src_reg));
> + }
> + break;
> + }
> +
> if (insn_is_cast_user(&insn[i])) {
> EMIT(PPC_RAW_RLDICL_DOT(tmp1_reg, src_reg, 0, 32));
> PPC_LI64(dst_reg, (ctx->user_vm_start & 0xffffffff00000000UL));