inline assembly

David Howells dhowells at redhat.com
Thu Jun 5 20:44:51 EST 2008


Scott Wood <scottwood at freescale.com> wrote:

> int tmp;
> 
> asm volatile("addi %1, %2, -1;"
>              "andc %1, %2, %1;"
>              "cntlzw %1, %1;"
>              "subfic %0, %1, 31" : "=r" (j), "=&r" (tmp) : "r" (i));

Registers are usually assumed to be 'long' in size, so I'd recommend using
that rather than 'int' for tmp, though I suspect it'll make little difference
(except, perhaps on x86 where you can partially use registers).
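For concreteness, a minimal sketch of the quoted sequence with a 'long' temporary (the wrapper name and the non-powerpc fallback are mine, not from the thread; as written, the instruction sequence assumes a 32-bit word):

	#include <assert.h>

	/* Same sequence as quoted above, but with 'tmp' as 'long' so it
	 * matches the native register width.  Undefined for i == 0.  The
	 * 'volatile' from the original is dropped: the asm has no side
	 * effects beyond its outputs, so it isn't needed. */
	static inline long lowest_set_bit(unsigned long i)
	{
		long j;
	#if defined(__powerpc__)
		long tmp;
		asm("addi %1,%2,-1\n\t"	/* tmp = i - 1                     */
		    "andc %1,%2,%1\n\t"	/* tmp = i & ~tmp (lowest set bit) */
		    "cntlzw %1,%1\n\t"	/* tmp = leading zeros of tmp      */
		    "subfic %0,%1,31"	/* j = 31 - tmp                    */
		    : "=r" (j), "=&r" (tmp) : "r" (i));
	#else
		/* Portable stand-in so the sketch builds on other arches. */
		j = __builtin_ctzl(i);
	#endif
		return j;
	}

	int main(void)
	{
		assert(lowest_set_bit(0xf0) == 4);
		assert(lowest_set_bit(1) == 0);
		return 0;
	}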

> However, it'd be better to let the compiler do more, by just using the
> existing cntlzw() function.

Look in include/asm-powerpc/bitops.h.  There are examples of the things you're
trying to do:

	static __inline__ __attribute__((const))
	int __ilog2(unsigned long x)
	{
		int lz;

		asm (PPC_CNTLZL "%0,%1" : "=r" (lz) : "r" (x));
		return BITS_PER_LONG - 1 - lz;
	}

	static __inline__ int __ffs(unsigned long x)
	{
		return __ilog2(x & -x);
	}
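For comparison, here's a portable sketch of the same pair using GCC's __builtin_clzl(), which the compiler turns into cntlzw/cntlzd on powerpc — this is what letting the compiler do more looks like (function names here are illustrative, not kernel API):

	#include <assert.h>
	#include <limits.h>

	#define BITS_PER_LONG	((int)(sizeof(unsigned long) * CHAR_BIT))

	/* Undefined for x == 0, like the kernel originals. */
	static inline int ilog2_sketch(unsigned long x)
	{
		return BITS_PER_LONG - 1 - __builtin_clzl(x);
	}

	static inline int ffs_sketch(unsigned long x)
	{
		/* x & -x isolates the lowest set bit */
		return ilog2_sketch(x & -x);
	}

	int main(void)
	{
		assert(ilog2_sketch(1) == 0);
		assert(ilog2_sketch(0x80) == 7);
		assert(ffs_sketch(0xf0) == 4);
		return 0;
	}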

Where:

	asm-compat.h:79:#define PPC_CNTLZL	stringify_in_c(cntlzd)
	asm-compat.h:100:#define PPC_CNTLZL	stringify_in_c(cntlzw)

Depending on whether you're in 32-bit mode or 64-bit mode.
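stringify_in_c() just turns the bare mnemonic into a C string literal so the same #define can be shared between C and assembler source.  Roughly (reimplemented here for illustration, not copied verbatim from asm-compat.h):

	#include <stdio.h>
	#include <string.h>
	#include <assert.h>

	#define __stringify_1(...)	#__VA_ARGS__
	#define __stringify(...)	__stringify_1(__VA_ARGS__)
	#define stringify_in_c(...)	__stringify(__VA_ARGS__) " "

	#ifdef __powerpc64__
	#define PPC_CNTLZL	stringify_in_c(cntlzd)
	#else
	#define PPC_CNTLZL	stringify_in_c(cntlzw)
	#endif

	int main(void)
	{
		/* String literals concatenate, so PPC_CNTLZL "%0,%1"
		 * becomes one instruction string for the asm statement. */
		assert(strstr(PPC_CNTLZL "%0,%1", "cntlz") != NULL);
		printf("%s\n", PPC_CNTLZL "%0,%1");
		return 0;
	}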

David


