[RFC PATCH v4 10/11] lib: vdso: Allow arches to override the ns shift operation

Christophe Leroy christophe.leroy at c-s.fr
Wed Jan 29 18:26:39 AEDT 2020



On 29/01/2020 at 08:14, Thomas Gleixner wrote:
> Andy Lutomirski <luto at kernel.org> writes:
> 
>> On Thu, Jan 16, 2020 at 11:57 AM Thomas Gleixner <tglx at linutronix.de> wrote:
>>>
>>> Andy Lutomirski <luto at kernel.org> writes:
>>>> On Thu, Jan 16, 2020 at 9:58 AM Christophe Leroy
>>>>
>>>> Would mul_u64_u64_shr() be a good alternative?  Could we adjust it to
>>>> assume the shift is less than 32?  That function exists to benefit
>>>> 32-bit arches.
>>>
>>> We'd want mul_u64_u32_shr() for this. The rules for mult and shift are:
>>>
>>
>> That's what I meant to type...
> 
> Just that it does not work. The math is:
> 
>       ns = d->nsecs;   // That's the nsec value shifted left by d->shift
> 
>       ns += ((cur - d->last) & d->mask) * mult;
> 
>       ns >>= d->shift;
> 
> So we cannot use mul_u64_u32_shr() because we need the addition there
> before shifting. And no, we can't drop the fractional part of
> d->nsecs. Been there, done that, got sporadic time going backwards
> problems as a reward. Need to look at that again as stuff has changed
> over time.
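[For reference, that sequence as it reads in the generic
lib/vdso/gettimeofday.c of that era, slightly abbreviated:

    /* Default delta helper; an arch may override it (#ifndef vdso_calc_delta) */
    static __always_inline
    u64 vdso_calc_delta(u64 cycles, u64 last, u64 mask, u32 mult)
    {
            return ((cycles - last) & mask) * mult;
    }

    /* In do_hres(): vdso_ts->nsec is still shifted left by vd->shift */
    ns = vdso_ts->nsec;
    ns += vdso_calc_delta(cycles, vd->cycle_last, vd->mask, vd->mult);
    ns >>= vd->shift;

Because mul_u64_u32_shr() fuses the multiply and the shift into a single
operation, there is no spot to slip the nsec addition in between, which
is exactly the point above.]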
> 
> On x86 we enforce that mask is 64bit, so the & operation is not there,
> but due to the nasties of TSC we have that conditional
> 
>      if (cur > last)
>         return (cur - last) * mult;
>      return 0;
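[That clamping lives in the x86 override of vdso_calc_delta()
(arch/x86/include/asm/vdso/gettimeofday.h), roughly:

    /*
     * x86 override: TSC deltas can transiently appear negative across
     * CPUs, so clamp to 0 instead of masking; mask is unused (64bit).
     */
    static __always_inline
    u64 vdso_calc_delta(u64 cycles, u64 last, u64 mask, u32 mult)
    {
            if (cycles > last)
                    return (cycles - last) * mult;
            return 0;
    }
    #define vdso_calc_delta vdso_calc_delta
]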
> 
> Christophe, on PPC the decrementer/RTC clocksource masks are 64bit as
> well, so you can spare that & operation there too.
> 

Yes, I did it already. It spares reading d->mask; that's the main advantage.
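
A minimal sketch of what such an override can look like when the mask is
known to be all ones (illustrative only, not necessarily the exact code
from this series):

    /* Hypothetical powerpc override: the decrementer/RTC masks are
     * always 64bit, so the & is pointless and vd->mask need never be
     * read at all. */
    static __always_inline
    u64 vdso_calc_delta(u64 cycles, u64 last, u64 mask, u32 mult)
    {
            return (cycles - last) * mult;  /* mask intentionally unused */
    }
    #define vdso_calc_delta vdso_calc_delta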

Christophe
