[RFC][PATCH bpf] tools: bpftool: Fix tags for bpf-to-bpf calls

Naveen N. Rao naveen.n.rao at linux.vnet.ibm.com
Thu Mar 1 19:51:51 AEDT 2018


Daniel Borkmann wrote:
> On 02/27/2018 01:13 PM, Sandipan Das wrote:
>> With this patch, it will look like this:
>>    0: (85) call pc+2#bpf_prog_8f85936f29a7790a+3
> 
> (Note the +2 is the insn->off already.)
> 
>>    1: (b7) r0 = 1
>>    2: (95) exit
>>    3: (b7) r0 = 2
>>    4: (95) exit
>> 
>> where 8f85936f29a7790a is the tag of the bpf program and 3 is
>> the offset to the start of the subprog from the start of the
>> program.
> 
> The problem with this approach would be that right now the name is
> something like bpf_prog_5f76847930402518_F where the subprog tag is
> just a placeholder so in future, this may well adapt to e.g. the actual
> function name from the elf file. Note that when kallsyms is enabled
> then a name like bpf_prog_5f76847930402518_F will also appear in stack
> traces, perf records, etc, so for correlation/debugging it would really
> help to have them the same everywhere.
> 
> Worst case if there's nothing better, potentially what one could do in
> bpf_prog_get_info_by_fd() is to dump an array of full addresses and
> have the imm part as the index pointing to one of them, just unfortunate
> that it's likely only needed in ppc64.

Ok. We seem to have discussed a few different aspects in this thread.  
Let me summarize them:
1. Passing address of JIT'ed function to the JIT engines:
    Two approaches discussed:
    a. The existing approach, where the subprog address is encoded as an 
    offset from __bpf_call_base() in the imm32 field of the BPF call 
    instruction. This requires the JIT'ed function to be within 2GB of 
    __bpf_call_base(), which won't be true on ppc64, at the least. So, 
    this won't work on ppc64 (or on any other architecture where 
    vmalloc'ed (module_alloc()) memory comes from a different, far 
    address range).
    
    [As a side note, is it _actually_ guaranteed that JIT'ed functions 
    will be within 2GB (signed 32-bit...) on all other architectures 
    where BPF JIT is supported? I'm not quite sure how memory allocation 
    works on other architectures, but it looks like this can fail if 
    there are other larger allocations.]

    b. Pass the full 64-bit address of the call target in an auxiliary 
    field for the JIT engine to use (as implemented in this mail chain).  
    We can then use this to determine the call target if this is a 
    pseudo call.

    There is a third option we can consider:
    c. Convert BPF pseudo call instruction into a 2-instruction sequence 
    (similar to BPF_DW) and encode the full 64-bit call target in the 
    second bpf instruction. To distinguish this from other instruction 
    forms, we can set imm32 to -1.

    If we go with (b) or (c), we will need to decide whether to 
    implement this in the same manner across all architectures, or 
    whether ppc64 (and any other affected architectures) should work 
    differently from the rest.

    Furthermore, with (b), bpftool won't be able to derive the target 
    function's call address, whereas approaches (a) and (c) are fine.  
    More about that below...

2. Indicating target function in bpftool:
    In the existing approach, bpftool can determine the target address 
    since the offset is encoded in imm32, and it can then look up the 
    name from kallsyms, if enabled.

    If we go with approach (b) for ppc64, this won't work, and we will 
    at a minimum have to update bpftool to detect that the target 
    address is not available on ppc64.

    If we go with approach (c), the target address will be available and 
    we should be able to update bpftool to look that up.
 
    [As a side note, I suppose part of Sandipan's point with the 
    previous patch was to make the bpftool output consistent whether or 
    not JIT is enabled. It does look a bit weird that bpftool shows the 
    address of a JIT'ed function when asked to print the BPF bytecode.]

Thoughts?


- Naveen



