[SLOF] [PATCH v5 5/7] tpm: Add sha1 implementation

Segher Boessenkool segher at kernel.crashing.org
Sat Jan 11 22:20:09 AEDT 2020


On Fri, Jan 10, 2020 at 08:21:53PM -0500, Stefan Berger wrote:
> +static inline uint32_t rol(uint32_t data, uint8_t n)
> +{
> +	register uint32_t res;
> +
> +	/* rotlw a,b,c : a = rol(b, c) */
> +	__asm__ __volatile__ (
> +		"rotlw %0,%1,%2"
> +		: "=&r" (res)
> +		: "r" (data), "r" (n)
> +		: "cc"
> +	);
> +	return res;
> +}

Eww.

This asm doesn't have to be volatile.

Why the earlyclobber?

Why the clobber of cc (which is the same as cr0)?
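
If you keep the asm at all, it only needs to be something like this
(an untested sketch; rotlw uses only the low bits of the count and
does not touch cr0, so none of those constraints are needed):

===
#include <stdint.h>

static inline uint32_t rol(uint32_t data, uint8_t n)
{
	uint32_t res;

	/* rotlw a,b,c : a = rol(b, c); no volatile, no earlyclobber,
	   no "cc" clobber needed */
	__asm__ (
		"rotlw %0,%1,%2"
		: "=r" (res)
		: "r" (data), "r" (n)
	);
	return res;
}
===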

For a simpler way to do this, try something like:

===
unsigned int rot(unsigned int x, unsigned int n)
{
	return (x << (n & 31)) | (x >> (-n & 31));
}

unsigned int rot4(unsigned int x)
{
	return rot(x, 4);
}
===

(For rot itself the compiler doesn't realise it doesn't need to mask n,
but rot4 already compiles to optimal code).
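
With that, rot4 should come out as roughly just this (what GCC -O2 on
powerpc is expected to emit; shown for illustration, not taken from an
actual build):

===
rot4:
	rotlwi 3,3,4
	blr
===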

Oh, and since Power8 there are machine insns to do SHA2 operations.  Do
you really want people to use SHA1?  https://eprint.iacr.org/2020/014 .
Maybe you *have* to with TPM?
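
(For what it's worth, those are reachable from C via GCC's crypto
built-ins; a rough, untested sketch, needs -mcrypto / -mcpu=power8,
and the function name is mine:)

===
#include <altivec.h>

/* SHA-256 "big sigma 0" on four words at once, via the Power8
   vshasigmaw instruction.  The second operand selects big (1) vs.
   small (0) sigma, the third selects sigma0/sigma1 per element. */
static vector unsigned int sha256_Sigma0(vector unsigned int x)
{
	return __builtin_crypto_vshasigmaw(x, 1, 0);
}
===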


Segher

