[PATCH v3] cxl: Export AFU error buffer via sysfs

trigg mr.triggtrigg at gmail.com
Thu May 21 13:36:43 AEST 2015


> On 21-May-2015, at 05:16, Michael Neuling <mikey at neuling.org> wrote:
> 
> + */
>> +ssize_t cxl_afu_read_err_buffer(struct cxl_afu *afu, char *buf,
>> +                loff_t off, size_t count)
>> +{
>> +    loff_t aligned_off;
>> +    size_t aligned_count;
>> +    const void __iomem *ebuf = afu->afu_desc_mmio + afu->eb_offset;
>> +
>> +    if (!afu->eb_len || count == 0 || off < 0)
> 
> if eb_len == 0 we don't even create the sysfs file.  So is this check
> needed?
This function is non-static, so it can potentially be called from elsewhere in the kernel.
A condition check of "if (count == 0 || off < 0 || (size_t)off >= afu->eb_len)"
should work, and it solves the problem you point out below as well.
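For instance, a sysfs binary-attribute read hook (names below are only
illustrative; the actual hook in the patch may differ) would pass a
caller-supplied off/count straight through to this helper:

    /* hypothetical sysfs hook, sketched here only to show the call path */
    static ssize_t afu_eb_read(struct file *filp, struct kobject *kobj,
                   struct bin_attribute *bin_attr, char *buf,
                   loff_t off, size_t count)
    {
        struct cxl_afu *afu = to_cxl_afu(container_of(kobj, struct device, kobj));

        /* off/count arrive from userspace via read(2) on the sysfs file */
        return cxl_afu_read_err_buffer(afu, buf, off, count);
    }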
> 
>> +        return 0;
>> +
>> +    /* calculate aligned read window */
>> +    count = min((size_t)(afu->eb_len - off), count);
> 
> What if count ends up being negative because off > afu->eb_len??
Agreed, thanks for catching this. Since size_t is unsigned, the subtraction
would wrap around to a large positive value.
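For example, with eb_len = 0x1000 and off = 0x2000 the current code computes:

    count = min((size_t)(afu->eb_len - off), count); /* (size_t)-0x1000: huge, not caught */

Checking off against eb_len before the subtraction (as above) avoids the wrap:

    if (count == 0 || off < 0 || (size_t)off >= afu->eb_len)
        return 0;
    count = min((size_t)(afu->eb_len - off), count);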

> 
>> +    aligned_off = round_down(off, 8);
>> +    aligned_count = round_up(off + count, 8) - aligned_off;
> 
> I kinda preferred the start end variables, and length was just end -
> start.
> 
> I though it was more readable. IMHO
> 
> How about:
> 
>   aligned_start = round_down(off, 8);
>   aligned_end = round_up(off + count, 8);
>   aligned_length = aligned_end - aligned_start;
Agreed on aligned_start and aligned_end, though aligned_end would not be
needed in the rest of the code.
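Something along the lines of:

    aligned_start  = round_down(off, 8);
    aligned_length = round_up(off + count, 8) - aligned_start;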
> 
>> +
>> +    /* fast path */
>> +    if ((aligned_off == off) && (aligned_count == count)) {
>> +        /* no need to use the bounce buffer */
>> +        _memcpy_fromio(buf, ebuf + off, count);
> 
> I would drop this, as the other code path should work fine.
> Premature optimisation.....
I am inclined to differ on this one. The code below uses a bounce buffer,
which needs more than double the number of loads and stores. If the caller
wants to speed things up, it can simply issue an aligned read that avoids
this overhead. This will be especially useful for large reads.
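For example, a (hypothetical) caller can get the fast path simply by aligning
its request:

    /* 8-byte aligned offset and length: copied straight into buf, no bounce page */
    ret = cxl_afu_read_err_buffer(afu, buf, 0, round_down(afu->eb_len, 8));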

>> +
>> +    } else {
>> +        /* use bounce buffer for copy */
>> +        void *tbuf = (void *)__get_free_page(GFP_TEMPORARY);
>> +
>> +        if (!tbuf)
>> +            return -ENOMEM;
>> +
>> +        /* max we can copy in one read is PAGE_SIZE */
>> +        aligned_count = min(aligned_count, PAGE_SIZE);
>> +        _memcpy_fromio(tbuf, ebuf + aligned_off, aligned_count);
>> +
>> +        count = min(count, aligned_count);
Thanks for catching this.

> 
> This doesn't seem right.  count will equal PAGE_SIZE if it's too big but
> it has to be smaller by (off & 7) in this case.
> 
> How about this?
> 
>       #define MAX_COPY_SIZE PAGE_SIZE
>   ...
> 
>   void *bbuf;
> 
>   /* Bounds check count with err buf length */
>   count = min((size_t)(afu->eb_len - off), count);
>   if ((off < 0) || (count < 0))
>       return 0;
> 
>   /* Create aligned bounce buffer to copy into */
>   aligned_start = round_down(off, 8);
>   aligned_end = round_up(off + count, 8);
>   aligned_length = aligned_end - aligned_start;
> 
>   if (aligned_length > MAX_COPY_SIZE) {
>       aligned_length = MAX_COPY_SIZE;
>       count = MAX_COPY_SIZE - (off & 0x7);
>       }
> 
>   bbuf = (void *)__get_free_page(GFP_TEMPORARY);
>   if (!bbuf)
>       return -ENOMEM;
> 
>   /* Use _memcpy_fromio() so the reads are aligned */
>   _memcpy_fromio(bbuf, ebuf + aligned_start, aligned_length);
>   memcpy(buf, bbuf + (off & 0x7), count);
> 
>   free_page(bbuf);
> 
> 
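Just to confirm I follow the suggestion: with, say, off = 5 and a very large
count, aligned_length gets capped to PAGE_SIZE and count to PAGE_SIZE - 5,
so the memcpy from bbuf + 5 stays within the bounce page.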
>> +        memcpy(buf, tbuf + (off & 0x7), count);
>> +
>> +        free_page((unsigned long)tbuf);
>> +    }
>> +
>> +    return count;
> 