[PATCH 11/16] byteorder: provide a linux/byteorder.h with {be,le}_to_cpu() and cpu_to_{be,le}() macros

Cody P Schafer dev at codyps.com
Thu May 29 08:11:49 EST 2014


On Wed, May 28, 2014 at 5:05 PM, Cody P Schafer <dev at codyps.com> wrote:
> On Wed, May 28, 2014 at 3:45 AM, David Laight <David.Laight at aculab.com> wrote:
>> From: Cody P Schafer
>>> Rather than manually specifying the size of the integer to be converted,
>>> key off of the type size. Reduces duplicated size info and the occurrence
>>> of certain types of bugs (using the wrong-sized conversion).
>> ...
>>> +#define be_to_cpu(v) \
>>> +     __builtin_choose_expr(sizeof(v) == sizeof(uint8_t) , v, \
>>> +     __builtin_choose_expr(sizeof(v) == sizeof(uint16_t), be16_to_cpu(v), \
>>> +     __builtin_choose_expr(sizeof(v) == sizeof(uint32_t), be32_to_cpu(v), \
>>> +     __builtin_choose_expr(sizeof(v) == sizeof(uint64_t), be64_to_cpu(v), \
>>> +             (void)0))))
>> ...
>>
>> I'm not at all sure that using the 'size' of the constant will reduce
>> the number of bugs - it just introduces a whole new category of bugs.
>
> Certainly, if you mis-size the argument (and thus have mis-sized one of
> the variables containing the be value, which is probably a bug anyhow),
> there will be problems.
>
> I put this interface together because of an actual bug I wrote into
> the initial code of the hv_24x7 driver (resized a struct member
> without adjusting the be*_to_cpu() sizing).
> Having this "auto-sizing" macro means I can avoid encoding the size of
> a struct field in multiple places.

To clarify, the point I'm making here is that this simply cuts out one
more place where we can screw up the endianness conversion sizing.
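
As a rough sketch of the failure mode (the struct and field names below
are invented for illustration; they are not from the actual hv_24x7
code):

	#include <linux/types.h>
	#include <asm/byteorder.h>	/* be32_to_cpu() and friends */

	struct result {
		__be32 count;		/* later shrunk to __be16 in a rework */
	};

	/* res points at big-endian data handed back by the hypervisor. */
	static u32 read_count(struct result *res)
	{
		/* Manually sized: still compiles after 'count' is shrunk,
		 * but then byte-swaps the wrong width and returns garbage:
		 *
		 *	return be32_to_cpu(res->count);
		 *
		 * Auto-sized: the conversion width follows
		 * sizeof(res->count), so resizing the struct member is the
		 * only change needed. */
		return be_to_cpu(res->count);
	}

Since __builtin_choose_expr is resolved at compile time, be_to_cpu()
should generate the same code as the explicitly sized call; the only
change is where the size information lives in the source.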

