[PATCH v2 2/5] powerpc64/bpf: fix the address returned by bpf_get_func_ip
Venkat Rao Bagalkote
venkat88 at linux.ibm.com
Sat Feb 21 14:41:44 AEDT 2026
On 20/02/26 12:09 pm, Hari Bathini wrote:
> The bpf_get_func_ip() helper function returns the address of the traced
> function. It relies on the IP address stored at ctx - 16 by the bpf
> trampoline. On 64-bit powerpc, this address is recovered from the LR,
> accounting for the OOL trampoline. But the address stored there was off
> by 4 bytes. Ensure the address stored is the actual start of the traced
> function.
>
> Reported-by: Abhishek Dubey <adubey at linux.ibm.com>
> Fixes: d243b62b7bd3 ("powerpc64/bpf: Add support for bpf trampolines")
> Cc: stable at vger.kernel.org
> Signed-off-by: Hari Bathini <hbathini at linux.ibm.com>
> ---
>
> * No changes since v1.
>
>
> arch/powerpc/net/bpf_jit_comp.c | 21 +++++++++++++--------
> 1 file changed, 13 insertions(+), 8 deletions(-)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 987cd9fb0f37..fb6cc1f832a8 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -786,8 +786,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
> * [ reg argN ]
> * [ ... ]
> * regs_off [ reg_arg1 ] prog ctx context
> - * nregs_off [ args count ]
> - * ip_off [ traced function ]
> + * nregs_off [ args count ] ((u64 *)prog_ctx)[-1]
> + * ip_off [ traced function ] ((u64 *)prog_ctx)[-2]
> * [ ... ]
> * run_ctx_off [ bpf_tramp_run_ctx ]
> * [ reg argN ]
> @@ -895,7 +895,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
>
> bpf_trampoline_save_args(image, ctx, func_frame_offset, nr_regs, regs_off);
>
> - /* Save our return address */
> + /* Save our LR/return address */
> EMIT(PPC_RAW_MFLR(_R3));
> if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
> EMIT(PPC_RAW_STL(_R3, _R1, alt_lr_off));
> @@ -903,24 +903,29 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
> EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));
>
> /*
> - * Save ip address of the traced function.
> - * We could recover this from LR, but we will need to address for OOL trampoline,
> - * and optional GEP area.
> + * Get IP address of the traced function.
> + * In case of CONFIG_PPC_FTRACE_OUT_OF_LINE or BPF program, LR
> + * points to the instruction after the 'bl' instruction in the OOL stub.
> + * Refer to ftrace_init_ool_stub() and bpf_arch_text_poke() for OOL stub
> + * of kernel functions and bpf programs respectively.
> + * Recover kernel function/bpf program address from the unconditional
> + * branch instruction at the end of OOL stub.
> */
> if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE) || flags & BPF_TRAMP_F_IP_ARG) {
> EMIT(PPC_RAW_LWZ(_R4, _R3, 4));
> EMIT(PPC_RAW_SLWI(_R4, _R4, 6));
> EMIT(PPC_RAW_SRAWI(_R4, _R4, 6));
> EMIT(PPC_RAW_ADD(_R3, _R3, _R4));
> - EMIT(PPC_RAW_ADDI(_R3, _R3, 4));
> }
>
> if (flags & BPF_TRAMP_F_IP_ARG)
> EMIT(PPC_RAW_STL(_R3, _R1, ip_off));
>
> - if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
> + if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {
> /* Fake our LR for unwind */
> + EMIT(PPC_RAW_ADDI(_R3, _R3, 4));
> EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));
> + }
>
> /* Save function arg count -- see bpf_get_func_arg_cnt() */
> EMIT(PPC_RAW_LI(_R3, nr_regs));
Applied this patch and ran the BPF selftest:

./test_progs -t get_func_ip_test
#139 get_func_ip_test:OK
Summary: 1/0 PASSED, 0 SKIPPED, 0 FAILED
Tested-by: Venkat Rao Bagalkote <venkat88 at linux.ibm.com>
Regards,
Venkat.