[PATCH 6/9] powerpc/bpf: Fix BPF_SUB when imm == 0x80000000
Christophe Leroy
christophe.leroy at csgroup.eu
Tue Oct 5 16:40:58 AEDT 2021
On 04/10/2021 at 20:18, Naveen N. Rao wrote:
> Christophe Leroy wrote:
>>
>>
>> On 01/10/2021 at 23:14, Naveen N. Rao wrote:
>>> We aren't handling subtraction involving an immediate value of
>>> 0x80000000 properly. Fix the same.
>>>
>>> Fixes: 156d0e290e969c ("powerpc/ebpf/jit: Implement JIT compiler for extended BPF")
>>> Signed-off-by: Naveen N. Rao <naveen.n.rao at linux.vnet.ibm.com>
>>> ---
>>> arch/powerpc/net/bpf_jit_comp64.c | 16 ++++++++--------
>>> 1 file changed, 8 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
>>> index ffb7a2877a8469..4641a50e82d50d 100644
>>> --- a/arch/powerpc/net/bpf_jit_comp64.c
>>> +++ b/arch/powerpc/net/bpf_jit_comp64.c
>>> @@ -333,15 +333,15 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
>>>  		case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
>>>  		case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
>>>  		case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
>>> -			if (BPF_OP(code) == BPF_SUB)
>>> -				imm = -imm;
>>> -			if (imm) {
>>> -				if (imm >= -32768 && imm < 32768)
>>> -					EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
>>> -				else {
>>> -					PPC_LI32(b2p[TMP_REG_1], imm);
>>> +			if (imm > -32768 && imm < 32768) {
>>> +				EMIT(PPC_RAW_ADDI(dst_reg, dst_reg,
>>> +					BPF_OP(code) == BPF_SUB ? IMM_L(-imm) : IMM_L(imm)));
>>> +			} else {
>>> +				PPC_LI32(b2p[TMP_REG_1], imm);
>>> +				if (BPF_OP(code) == BPF_SUB)
>>> +					EMIT(PPC_RAW_SUB(dst_reg, dst_reg, b2p[TMP_REG_1]));
>>> +				else
>>>  					EMIT(PPC_RAW_ADD(dst_reg, dst_reg, b2p[TMP_REG_1]));
>>> -				}
>>>  			}
>>>  			goto bpf_alu32_trunc;
>>
>> There is now so little code common to BPF_ADD and BPF_SUB that you
>> should make them separate cases.
>>
>> While at it, why not also use ADDIS when imm doesn't fit in 16 bits?
>> That would be an ADDIS/ADDI sequence instead of LIS/ORI/ADD.
>
> Sure. I wanted to limit the change for this fix. We can do a separate
> patch to optimize code generation for BPF_ADD.
>
Sure, this second part was just a thought; I agree it should be a
separate patch.
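
For the record, what I had in mind for the ADD case is something like
this (completely untested sketch; I'm assuming the existing IMM_HA()
helper from bpf_jit.h and PPC_RAW_ADDIS() behave the way I expect):

	case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
		if (imm >= -32768 && imm < 32768) {
			EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
		} else {
			/* addis adds the @ha-adjusted high halfword, addi
			 * then adds the sign-extended low halfword, so no
			 * temporary register is needed.
			 */
			EMIT(PPC_RAW_ADDIS(dst_reg, dst_reg, IMM_HA(imm)));
			if (IMM_L(imm))
				EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
		}
		goto bpf_alu32_trunc;

That's two instructions instead of three, and TMP_REG_1 stays free.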
My main comment here is to split things up and make BPF_SUB its own
case. I don't think it makes the change much bigger, and IMO it is
easier to read:
diff --git a/arch/powerpc/net/bpf_jit_comp64.c b/arch/powerpc/net/bpf_jit_comp64.c
index ffb7a2877a84..39226d88c558 100644
--- a/arch/powerpc/net/bpf_jit_comp64.c
+++ b/arch/powerpc/net/bpf_jit_comp64.c
@@ -330,11 +330,7 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 			EMIT(PPC_RAW_SUB(dst_reg, dst_reg, src_reg));
 			goto bpf_alu32_trunc;
 		case BPF_ALU | BPF_ADD | BPF_K: /* (u32) dst += (u32) imm */
-		case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
 		case BPF_ALU64 | BPF_ADD | BPF_K: /* dst += imm */
-		case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
-			if (BPF_OP(code) == BPF_SUB)
-				imm = -imm;
 			if (imm) {
 				if (imm >= -32768 && imm < 32768)
 					EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(imm)));
@@ -344,6 +340,17 @@ int bpf_jit_build_body(struct bpf_prog *fp, u32 *image, struct codegen_context *
 				}
 			}
 			goto bpf_alu32_trunc;
+		case BPF_ALU | BPF_SUB | BPF_K: /* (u32) dst -= (u32) imm */
+		case BPF_ALU64 | BPF_SUB | BPF_K: /* dst -= imm */
+			if (imm) {
+				if (-imm >= -32768 && -imm < 32768) {
+					EMIT(PPC_RAW_ADDI(dst_reg, dst_reg, IMM_L(-imm)));
+				} else {
+					PPC_LI32(b2p[TMP_REG_1], imm);
+					EMIT(PPC_RAW_SUB(dst_reg, dst_reg, b2p[TMP_REG_1]));
+				}
+			}
+			goto bpf_alu32_trunc;
 		case BPF_ALU | BPF_MUL | BPF_X: /* (u32) dst *= (u32) src */
 		case BPF_ALU64 | BPF_MUL | BPF_X: /* dst *= src */
 			if (BPF_CLASS(code) == BPF_ALU)
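
And to make the original breakage concrete for anyone joining the
thread late, here is a small user-space illustration (my own sketch,
not kernel code) of what the old 'imm = -imm' path computes in the
64-bit case:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	int32_t imm = (int32_t)0x80000000;	/* s32 immediate from the BPF insn */
	int64_t dst = 0;

	/* What BPF semantics ask for: dst -= (s64)imm, i.e. dst + 2^31 */
	int64_t want = dst - (int64_t)imm;

	/* What the old code emitted: negate the s32 first, then add.
	 * -INT32_MIN wraps back to 0x80000000 on two's complement
	 * (strictly undefined in ISO C), so we add -2^31 instead of
	 * +2^31 and the ALU64 result is off by 2^32.
	 */
	int32_t neg = -imm;
	int64_t got = dst + (int64_t)neg;

	printf("want = 0x%016llx\n", (unsigned long long)want);	/* 0x0000000080000000 */
	printf("got  = 0x%016llx\n", (unsigned long long)got);	/* 0xffffffff80000000 */
	return 0;
}

The PPC_LI32() + SUB sequence never negates imm, which is why both
Naveen's fix and the variant above handle 0x80000000 correctly.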