[PATCH v4 2/2] powerpc/bpf: enable kfunc call

Naveen N Rao naveen at kernel.org
Tue May 7 20:40:51 AEST 2024


On Thu, May 02, 2024 at 11:02:05PM GMT, Hari Bathini wrote:
> Currently, the powerpc BPF JIT assumes that all BPF functions and
> helpers are part of core kernel text. This does not hold for kfuncs,
> whose addresses may lie outside the core kernel text area. So, to
> enable kfunc support, also handle addresses that are not within core
> kernel text. Emit instructions based on whether the function address
> is within core kernel text or not, so that the optimized instruction
> sequence is retained where possible.
> 
> In case of PCREL, a BPF function that is not within core kernel text
> is likely to be out of range for addressing relative to the kernel
> base, so use PC-relative addressing instead. If that also goes out of
> range, load the full address with PPC_LI64().
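
Taken together, the two paragraphs above describe a decision that looks
roughly like the following. This is only an illustrative sketch, not the
exact emitted sequence: the order of the two relative forms, the use of
_R12 as the scratch register, and the +/-8GB reach assumed for the
34-bit signed offsets of prefixed instructions are all just for
illustration, and ctx->idx is assumed to index the next instruction
slot in fimage.

	if (core_kernel_text(func_addr)) {
		/* in core kernel text: keep the optimized sequence */
	} else if (IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) {
		long pcrel = func_addr - (unsigned long)(fimage + ctx->idx);

		reladdr = func_addr - local_paca->kernelbase;
		if (reladdr >= -(1L << 33) && reladdr < (1L << 33)) {
			/* reachable relative to the kernel base */
		} else if (pcrel >= -(1L << 33) && pcrel < (1L << 33)) {
			/* reachable relative to the PC */
		} else {
			/* both out of range: load the full 64-bit address */
			PPC_LI64(_R12, func_addr);
		}
	} else {
		/* not core kernel text and no PCREL: full 64-bit load */
		PPC_LI64(_R12, func_addr);
	}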
> 
> With addresses outside the core kernel text area now supported,
> override bpf_jit_supports_kfunc_call() to enable kfunc support. Also
> override bpf_jit_supports_far_kfunc_call() to enable 64-bit pointers,
> as an address offset can be longer than 32 bits on PPC64.
> 
> Signed-off-by: Hari Bathini <hbathini at linux.ibm.com>
> ---
> 
> * Changes in v4:
>   - Use either the kernel base or the PC for relative addressing, and
>     fall back to PPC_LI64() if both are out of range.
>   - Update r2 with the kernel TOC for ELFv1 too, as ELFv1 also uses the
>     optimized sequence, which expects r2 to hold the kernel TOC when the
>     function address is within core kernel text (see the sketch below).
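
For that last point, the r2 update presumably amounts to something like
the following ahead of the optimized call sequence (field and macro
names assumed from the existing JIT code, not copied from this patch):

	/* r13 is the PACA pointer; keep r2 pointing at the kernel TOC so
	 * the optimized sequence can rely on it */
	EMIT(PPC_RAW_LD(_R2, _R13, offsetof(struct paca_struct, kernel_toc)));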
> 
> * Changes in v3:
>   - Retained the optimized instruction sequence when the function
>     address is a core kernel address, as suggested by Naveen.
>   - Used the unoptimized instruction sequence for PCREL addressing to
>     avoid out-of-range errors for core kernel function addresses.
>   - Folded the patch that adds support for kfunc calls into the patch
>     that enables/advertises this support, as suggested by Naveen.
> 
> 
>  arch/powerpc/net/bpf_jit_comp.c   | 10 +++++
>  arch/powerpc/net/bpf_jit_comp64.c | 61 ++++++++++++++++++++++++++-----
>  2 files changed, 61 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/powerpc/net/bpf_jit_comp.c b/arch/powerpc/net/bpf_jit_comp.c
> index 0f9a21783329..984655419da5 100644
> --- a/arch/powerpc/net/bpf_jit_comp.c
> +++ b/arch/powerpc/net/bpf_jit_comp.c
> @@ -359,3 +359,13 @@ void bpf_jit_free(struct bpf_prog *fp)
>  
>  	bpf_prog_unlock_free(fp);
>  }
> +
> +bool bpf_jit_supports_kfunc_call(void)
> +{
> +	return true;
> +}
> +
> +bool bpf_jit_supports_far_kfunc_call(void)
> +{
> +	return IS_ENABLED(CONFIG_PPC64);
> +}
> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
> index 4de08e35e284..8afc14a4a125 100644
> --- a/arch/powerpc/net/bpf_jit_comp64.c
> +++ b/arch/powerpc/net/bpf_jit_comp64.c
> @@ -208,17 +208,13 @@ bpf_jit_emit_func_call_hlp(u32 *image, u32 *fimage, struct codegen_context *ctx,
>  	unsigned long func_addr = func ? ppc_function_entry((void *)func) : 0;
>  	long reladdr;
>  
> -	if (WARN_ON_ONCE(!core_kernel_text(func_addr)))
> +	if (WARN_ON_ONCE(!kernel_text_address(func_addr)))
>  		return -EINVAL;
>  
> -	if (IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) {
> -		reladdr = func_addr - local_paca->kernelbase;
> +#ifdef CONFIG_PPC_KERNEL_PCREL

It would be good to retain the use of IS_ENABLED() here rather than
switching to #ifdef.
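
Something along these lines, i.e. keeping the compile-time choice as a
C conditional (branch bodies elided here, just to illustrate the shape):

	if (IS_ENABLED(CONFIG_PPC_KERNEL_PCREL)) {
		reladdr = func_addr - local_paca->kernelbase;
		/* ... pcrel sequence, with the PPC_LI64() fallback ... */
	} else {
		/* ... existing TOC-based sequence ... */
	}

That way both branches continue to get compile coverage and the #ifdef
block is avoided.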
Reviewed-by: Naveen N Rao <naveen at kernel.org>


- Naveen


