spinlocks
Anton Blanchard
anton at samba.org
Wed Jan 7 00:09:37 EST 2004
> I tend to think that our spinlocks are so big nowadays that it would
> probably be worth un-inlining them....
I prefer an out of line slowpath directly below the function rather than
one single out of line spinlock. It makes profiling much easier: while we
can backtrace out of the spinlock when doing readprofile profiling, with
hardware performance monitor profiling we only get an address that was
sampled at some point in time and can't do a backtrace.
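For contrast, the single out of line variant would look roughly like the
sketch below (my illustration; _raw_spin_trylock and cpu_relax stand in
for the usual primitives). Every caller branches into the one shared
function, so a PMU sample taken while spinning always lands in _spin_lock
and the contended call site is lost:

/*
 * One shared out of line lock function: fine for readprofile
 * (we can walk back to the caller), useless for PMU samples.
 */
void _spin_lock(spinlock_t *lock)
{
	while (!_raw_spin_trylock(lock)) {
		/* spin on a plain load so we do not hammer the reservation */
		while (lock->lock)
			cpu_relax();
	}
}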
We should give both methods a go, perhaps an SMP kernel on UP and something
larger like an 8-way. Other than SDET, is there a benchmark that will
really stress our spinlocks and isn't a real pain to run?
Here's my current idea for a spinlock:
static inline void _raw_spin_lock(spinlock_t *lock)
{
	unsigned long tmp;

	asm volatile(
"1:	ldarx	%0,0,%1		# spin_lock\n\
	cmpdi	0,%0,0\n\
	bne-	2f\n\
	stdcx.	13,0,%1		# store our paca pointer as the lock value\n\
	bne-	1b\n\
	isync\n\
	.subsection 1\n\
2:"
	HMT_LOW
BEGIN_FTR_SECTION
"	mflr	%0\n\
	bl	SPLPAR_spinlock_r%1\n\
	mtlr	%0\n"
END_FTR_SECTION_IFSET(CPU_FTR_SPLPAR)
"	ldx	%0,0,%1\n\
	cmpdi	0,%0,0\n\
	bne-	2b\n"
	HMT_MEDIUM
"	b	1b\n\
	.previous"
	: "=&r"(tmp)
	: "r"(&lock->lock)
	: "cr0", "memory");
}
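For reference, the unlock side needs no out of line path at all; a minimal
sketch, assuming the usual lwsync-then-store style of unlock:

static inline void _raw_spin_unlock(spinlock_t *lock)
{
	/* order all stores in the critical section before the release */
	asm volatile("lwsync	# spin_unlock" : : : "memory");
	/* clobber the owner's paca pointer that the lock path stored */
	lock->lock = 0;
}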
And below is the magic goo to bind it together, thanks to Alan Modra for
pointing out I can create dynamic function names in inline assembly :)
A sketch of what the slowpath expands to follows the instantiations below.
Anton
/*
 * The function that called us may have stored data in the 288 byte
 * red zone below its stack pointer, so allocate a full frame here
 * to avoid trampling it.
 */
#define STACKFRAMESIZE (288 + 3*8)
#define SAVE_R3 0
#define SAVE_R4 8
#define SAVE_R5 16
/* stand-ins for junk the kernel normally provides */
#if 1
#define GLOBAL(A) A
#define HVSC .long 0x44000022	/* "sc 1": hypervisor call */
#define r1 1
#define r3 3
#define r4 4
#define r5 5
#endif
/*
 * NOTE: This code relies on the vpa and the processor id being within
 * the paca, and on the lock word holding the owner's paca pointer
 * (which is what the stdcx. of r13 stores). Ugly stuff but it works
 * for now.
 */
#define SPLPAR_SPINLOCK(REG) \
SPLPAR_spinlock_r##REG: \
	stdu	r1,-STACKFRAMESIZE(r1); \
	std	r3,SAVE_R3(r1); \
	std	r4,SAVE_R4(r1); \
	std	r5,SAVE_R5(r1); \
	ld	r3,0(REG);	/* lock word -> owner's paca pointer */ \
	lwz	r5,0x280(r3);	/* owner's dispatch counter */ \
	andi.	r4,r5,1;	/* if even the owner is running, go back and spin */ \
	beq	1f; \
	lhz	r4,0x18(r3);	/* owner's processor number */ \
	li	r3,0xE4;	/* give up the cycles: H_CONFER */ \
	HVSC; \
1:	ld	r3,SAVE_R3(r1); \
	ld	r4,SAVE_R4(r1); \
	ld	r5,SAVE_R5(r1); \
	addi	r1,r1,STACKFRAMESIZE; \
	blr
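/*
 * One helper per register the compiler may pick for &lock->lock.
 * r1 (stack pointer), r2 (TOC) and r13 (paca) are fixed and can
 * never be chosen, so they are skipped below.
 */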
SPLPAR_SPINLOCK(0)
SPLPAR_SPINLOCK(3)
SPLPAR_SPINLOCK(4)
SPLPAR_SPINLOCK(5)
SPLPAR_SPINLOCK(6)
SPLPAR_SPINLOCK(7)
SPLPAR_SPINLOCK(8)
SPLPAR_SPINLOCK(9)
SPLPAR_SPINLOCK(10)
SPLPAR_SPINLOCK(11)
SPLPAR_SPINLOCK(12)
SPLPAR_SPINLOCK(14)
SPLPAR_SPINLOCK(15)
SPLPAR_SPINLOCK(16)
SPLPAR_SPINLOCK(17)
SPLPAR_SPINLOCK(18)
SPLPAR_SPINLOCK(19)
SPLPAR_SPINLOCK(20)
SPLPAR_SPINLOCK(21)
SPLPAR_SPINLOCK(22)
SPLPAR_SPINLOCK(23)
SPLPAR_SPINLOCK(24)
SPLPAR_SPINLOCK(25)
SPLPAR_SPINLOCK(26)
SPLPAR_SPINLOCK(27)
SPLPAR_SPINLOCK(28)
SPLPAR_SPINLOCK(29)
SPLPAR_SPINLOCK(30)
SPLPAR_SPINLOCK(31)
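To make the name trick concrete: assuming gcc picks r9 for &lock->lock
and r10 for tmp, and CPU_FTR_SPLPAR is active, the slowpath assembles to
roughly:

2:	or	1,1,1			# HMT_LOW
	mflr	10			# save lr in tmp
	bl	SPLPAR_spinlock_r9	# maybe confer our cycles to the holder
	mtlr	10
	ldx	10,0,9			# reload the lock word
	cmpdi	0,10,0
	bne-	2b			# still held, keep spinning out of line
	or	2,2,2			# HMT_MEDIUM
	b	1b			# looks free, retry the ldarx/stdcx.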