[PATCH v2 2/5] powerpc64/bpf: fix the address returned by bpf_get_func_ip

adubey adubey at linux.ibm.com
Sun Feb 22 23:21:56 AEDT 2026


On 2026-02-20 12:09, Hari Bathini wrote:
> bpf_get_func_ip() helper function returns the address of the traced
> function. It relies on the IP address stored at ctx - 16 by the bpf
> trampoline. On 64-bit powerpc, this address is recovered from LR,
> accounting for the OOL trampoline. But the address stored there was
> off by 4 bytes. Ensure the address is the actual start of the traced
> function.
> 
> Reported-by: Abhishek Dubey <adubey at linux.ibm.com>
> Fixes: d243b62b7bd3 ("powerpc64/bpf: Add support for bpf trampolines")
> Cc: stable at vger.kernel.org
> Signed-off-by: Hari Bathini <hbathini at linux.ibm.com>
> ---
> 
> * No changes since v1.
> 
> 
>  arch/powerpc/net/bpf_jit_comp.c | 21 +++++++++++++--------
>  1 file changed, 13 insertions(+), 8 deletions(-)
> 
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 987cd9fb0f37..fb6cc1f832a8 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -786,8 +786,8 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
>  	 *                              [ reg argN          ]
>  	 *                              [ ...               ]
>  	 *       regs_off               [ reg_arg1          ] prog ctx context
s/prog ctx context/prog_ctx/ -- to keep it in sync with the tags below.
Please refer to s390's field tagging for an example.
> -	 *       nregs_off              [ args count        ]
> -	 *       ip_off                 [ traced function   ]
> +	 *       nregs_off              [ args count        ] ((u64 *)prog_ctx)[-1]
> +	 *       ip_off                 [ traced function   ] ((u64 *)prog_ctx)[-2]
>  	 *                              [ ...               ]
>  	 *       run_ctx_off            [ bpf_tramp_run_ctx ]
>  	 *                              [ reg argN          ]
> @@ -895,7 +895,7 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
> 
>  	bpf_trampoline_save_args(image, ctx, func_frame_offset, nr_regs, regs_off);
> 
> -	/* Save our return address */
> +	/* Save our LR/return address */
>  	EMIT(PPC_RAW_MFLR(_R3));
>  	if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
>  		EMIT(PPC_RAW_STL(_R3, _R1, alt_lr_off));
> @@ -903,24 +903,29 @@ static int __arch_prepare_bpf_trampoline(struct bpf_tramp_image *im, void *rw_im
>  		EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));
> 
>  	/*
> -	 * Save ip address of the traced function.
> -	 * We could recover this from LR, but we will need to address for OOL trampoline,
> -	 * and optional GEP area.
> +	 * Get IP address of the traced function.
s/Get/Derive/
> +	 * In case of CONFIG_PPC_FTRACE_OUT_OF_LINE or BPF program, LR
> +	 * points to the instruction after the 'bl' instruction in the OOL stub.
> +	 * Refer to ftrace_init_ool_stub() and bpf_arch_text_poke() for OOL stub
> +	 * of kernel functions and bpf programs respectively.
> +	 * Recover kernel function/bpf program address from the unconditional
> +	 * branch instruction at the end of OOL stub.
>  	 */
>  	if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE) || flags & BPF_TRAMP_F_IP_ARG) {
>  		EMIT(PPC_RAW_LWZ(_R4, _R3, 4));
Please add a comment describing what R4 holds here, for easier reference.
>  		EMIT(PPC_RAW_SLWI(_R4, _R4, 6));
>  		EMIT(PPC_RAW_SRAWI(_R4, _R4, 6));
>  		EMIT(PPC_RAW_ADD(_R3, _R3, _R4));
> -		EMIT(PPC_RAW_ADDI(_R3, _R3, 4));
>  	}
> 
>  	if (flags & BPF_TRAMP_F_IP_ARG)
>  		EMIT(PPC_RAW_STL(_R3, _R1, ip_off));
> 
> -	if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE))
> +	if (IS_ENABLED(CONFIG_PPC_FTRACE_OUT_OF_LINE)) {
>  		/* Fake our LR for unwind */
> +		EMIT(PPC_RAW_ADDI(_R3, _R3, 4));
>  		EMIT(PPC_RAW_STL(_R3, _R1, bpf_frame_size + PPC_LR_STKOFF));
> +	}
> 
>  	/* Save function arg count -- see bpf_get_func_arg_cnt() */
>  	EMIT(PPC_RAW_LI(_R3, nr_regs));
-Abhishek

