[Cbe-oss-dev] [PATCH 2/2] spu sched: static timeslicing for SCHED_RR contexts
Arnd Bergmann
arnd at arndb.de
Sat Feb 10 20:13:30 EST 2007
On Saturday 10 February 2007 07:06, Benjamin Herrenschmidt wrote:
>
> > Having a single workqueue like this is _probably_ enough,
> > but I wonder if we get better behaviour by using per numa
> > node work queues. Maybe someone from our performance
> > team wants to analyse this.
>
> How bad is the limitation of only running the SPU scheduler at task
> level ?
>
> Among other things, I've been thinking about the typical usage scenarios
> of in-kernel SPEs. While I want them to use normal SPU contexts, one of
> the things that typically comes to mind is the ability to fire them off at
> interrupt time. It would be annoying to take the latency of queuing a
> work item, especially if the context happens to already be present on an
> SPU which then only needs to be "run" (hasn't been replaced by something
> else). Something like a SPURS job model.
>
> One of the things that comes to mind right away to allow that sort of
> optimisation is to have a spinlock rather than a mutex to protect the
> SPU scheduler run queue.
We have a number of places in the context switch code right now where
we do a cond_resched() or something similar when we're waiting for
the SPU for too long. This requires some detailed analysis to see if
we ever wait for extended times in those places.
Also, the context switch itself can take some time, e.g. if there are
hash misses from the SPU involved during the DMA transfers. I'd rather
not think about scheduling in spu contexts at interrupt time.
Arnd <><