[PATCH v3 8/8] bpf ppc32: Access only if addr is kernel address
Christophe Leroy
christophe.leroy at csgroup.eu
Wed Sep 22 00:27:13 AEST 2021
On 21/09/2021 at 15:29, Hari Bathini wrote:
> With KUAP enabled, any kernel code which wants to access userspace
> needs to be surrounded by disable-enable KUAP. But that is not
> happening for the BPF_PROBE_MEM load instruction. Though PPC32 does not
> support read protection, considering that PTR_TO_BTF_ID (which uses
> BPF_PROBE_MEM mode) could either be a valid kernel pointer or NULL but
> should never be a pointer to a userspace address, execute the
> BPF_PROBE_MEM load only if addr is a kernel address; otherwise set
> dst_reg=0 and move on.
>
> This will catch NULL and valid or invalid userspace pointers. Only bad
> kernel pointers will be handled by the BPF exception table.
>
> [Alexei suggested for x86]
> Suggested-by: Alexei Starovoitov <ast at kernel.org>
> Signed-off-by: Hari Bathini <hbathini at linux.ibm.com>
> ---
>
> Changes in v3:
> * Updated the jump target for PPC_BCC to always be the same, emitting
>   a NOP instruction when needed.
>
>
> arch/powerpc/net/bpf_jit_comp32.c | 35 +++++++++++++++++++++++++++++++
> 1 file changed, 35 insertions(+)
>
> diff --git a/arch/powerpc/net/bpf_jit_comp32.c b/arch/powerpc/net/bpf_jit_comp32.c
> index 1239643f532c..59849e1230d2 100644
> --- a/arch/powerpc/net/bpf_jit_comp32.c
> +++ b/arch/powerpc/net/bpf_jit_comp32.c
> @@ -825,6 +825,41 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
> case BPF_LDX | BPF_MEM | BPF_DW: /* dst = *(u64 *)(ul) (src + off) */
> fallthrough;
> case BPF_LDX | BPF_PROBE_MEM | BPF_DW:
> + /*
> + * As PTR_TO_BTF_ID that uses BPF_PROBE_MEM mode could either be a valid
> + * kernel pointer or NULL but not a userspace address, execute BPF_PROBE_MEM
> + * load only if addr is kernel address (see is_kernel_addr()), otherwise
> + * set dst_reg=0 and move on.
> + */
> + if (BPF_MODE(code) == BPF_PROBE_MEM) {
> + EMIT(PPC_RAW_ADDI(b2p[TMP_REG], src_reg, off));
> + PPC_LI32(_R0, TASK_SIZE);
> + EMIT(PPC_RAW_CMPLW(b2p[TMP_REG], _R0));
You may drop the ADDI and do:

	PPC_LI32(_R0, TASK_SIZE - off);
	EMIT(PPC_RAW_CMPLW(src_reg, _R0));

It will likely be the same number of instructions because now the
PPC_LI32 will generate two instructions, but it avoids the use of TMP_REG.
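i.e. the whole guard would become something like this (completely
untested sketch; the branch targets can stay as they are because they
are computed relative to ctx->idx at the point each PPC_BCC/PPC_JMP is
emitted, and nothing after the bcc changes):

	if (BPF_MODE(code) == BPF_PROBE_MEM) {
		/* src_reg > TASK_SIZE - off  <=>  src_reg + off > TASK_SIZE */
		PPC_LI32(_R0, TASK_SIZE - off);
		EMIT(PPC_RAW_CMPLW(src_reg, _R0));
		/* Kernel address: branch ahead to the real load */
		PPC_BCC(COND_GT, (ctx->idx + 5) * 4);
		EMIT(PPC_RAW_LI(dst_reg, 0));
		/* Clear the high word too for BPF_DW, unless the verifier zexts */
		if (size == BPF_DW && !fp->aux->verifier_zext)
			EMIT(PPC_RAW_LI(dst_reg_h, 0));
		else
			EMIT(PPC_RAW_NOP());
		/* Skip one load for BPF_B/H/W, two for BPF_DW */
		if (size == BPF_DW)
			PPC_JMP((ctx->idx + 3) * 4);
		else
			PPC_JMP((ctx->idx + 2) * 4);
	}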
> + PPC_BCC(COND_GT, (ctx->idx + 5) * 4);
> + EMIT(PPC_RAW_LI(dst_reg, 0));
> + /*
> + * For BPF_DW case, "li reg_h,0" would be needed when
> + * !fp->aux->verifier_zext. Emit NOP otherwise.
> + *
> + * Note that "li reg_h,0" is emitted for BPF_B/H/W case,
> + * if necessary. So, jump there instead of emitting an
> + * additional "li reg_h,0" instruction.
> + */
> + if (size == BPF_DW && !fp->aux->verifier_zext)
> + EMIT(PPC_RAW_LI(dst_reg_h, 0));
> + else
> + EMIT(PPC_RAW_NOP());
> + /*
> + * Need to jump two instructions instead of one for BPF_DW case
> + * as there are two load instructions for dst_reg_h & dst_reg
> + * respectively.
> + */
> + if (size == BPF_DW)
> + PPC_JMP((ctx->idx + 3) * 4);
> + else
> + PPC_JMP((ctx->idx + 2) * 4);
> + }
> +
> switch (size) {
> case BPF_B:
> EMIT(PPC_RAW_LBZ(dst_reg, src_reg, off));
>
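About the jump arithmetic in the comment above: the way I read it, with
PPC_BCC always occupying two instruction slots (the second being a nop
in the short-branch case), the emitted sequence lays out as below, so
the (ctx->idx + 5) * 4 target is the load itself (the bcc+N labels are
just mine):

	bcc+0:	conditional branch to bcc+5
	bcc+1:	nop (second PPC_BCC slot)
	bcc+2:	li dst_reg, 0
	bcc+3:	nop (or li dst_reg_h, 0 for BPF_DW without verifier zext)
	bcc+4:	b bcc+6 (b bcc+7 for BPF_DW, hence the extra instruction)
	bcc+5:	the load(s)

Dropping the ADDI as suggested above does not change any of this, since
those instructions sit before the bcc.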