[Cbe-oss-dev] [RFC] [PATCH 0/8] SPU Gang Scheduling

Luke Browning lukebr at linux.vnet.ibm.com
Fri Mar 14 03:22:54 EST 2008


On Thu, 2008-03-13 at 04:06 +0100, Arnd Bergmann wrote:

> 
> Assuming we do it your way and move half-running gangs to the run queue,
> maybe we can simplify the logic and improve the fairness by determining
> the length of the time slice from the average of what any thread would
> like to run for based on its priority. Obviously if all threads are
> busy doing something else, that would be zero, so we don't schedule
> the gang at all.

Yes, that is what I was proposing: adjusting the length of the time
slice to account for fairness issues.
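The idea above (gang slice derived from the average of what each runnable thread would get from its priority, and zero when nothing is runnable) can be sketched roughly as follows. This is an illustrative toy, not the actual spufs scheduler code; the names (`toy_ctx`, `prio_to_slice`, `gang_time_slice`) and the tick scale are made up for the example.

```c
#include <assert.h>

#define MAX_GANG_CTXS 8

struct toy_ctx {
	int prio;      /* lower value = higher priority, kernel-style */
	int runnable;  /* nonzero if the context is inside spu_run() */
};

/* map a priority to a per-context slice in ticks (made-up scale) */
static int prio_to_slice(int prio)
{
	return 140 - prio;   /* e.g. prio 100 -> 40 ticks */
}

/* average slice over runnable contexts; 0 if none are runnable,
 * so a fully blocked gang is not scheduled at all */
static int gang_time_slice(const struct toy_ctx *ctxs, int n)
{
	int sum = 0, nrunnable = 0, i;

	for (i = 0; i < n; i++) {
		if (ctxs[i].runnable) {
			sum += prio_to_slice(ctxs[i].prio);
			nrunnable++;
		}
	}
	return nrunnable ? sum / nrunnable : 0;
}
```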

> 
> > > One simple but interesting question would be: what should the
> > > gang do if one context does a nanosleep() to wait for many seconds?
> > > I'd say we should suspend all threads in the gang after the end
> > > of the time slice, but I guess you disagree with that, because
> > > it disrupts the runtime behavior of the other contexts, right?
> > 

> I think it would still be good to have a change where we can block the
> gang immediately when the ppe blocks on a page fault or syscall. We
> already do that on a stop-and-signal callback to user space (we assume
> that any callback to user space is slow and blocks), and that is essential
> for multitasking performance. In the case of gangs, we just don't want to
> deschedule the gang when the first context blocks, but only when they
> are all blocked.
> 

I believe this is consistent with what I implemented.  When a
controlling thread runs, its SPU context is not in spu_run(), so I
decrement the nrunnable count.  Whatever subsequently happens to that
controlling thread is not important; it is the state of the other SPU
contexts that dictates whether the gang can be scheduled.  One or more
of them have to be in spu_run() for the gang to be scheduled.  If none
of them are in spu_run(), or if they are but the SPU is suspended
because it faulted, then the gang is not considered runnable.
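A minimal sketch of that counting scheme, assuming a per-gang counter of contexts inside spu_run() and a counter of those suspended on a fault. These names (`toy_gang`, `toy_gang_runnable`, etc.) are hypothetical and only illustrate the rule stated above, not the real spufs data structures:

```c
#include <assert.h>

struct toy_gang {
	int nrunnable;   /* contexts currently inside spu_run() */
	int nsuspended;  /* of those, suspended on a page fault */
};

static void toy_enter_spu_run(struct toy_gang *g) { g->nrunnable++; }
static void toy_leave_spu_run(struct toy_gang *g) { g->nrunnable--; }
static void toy_fault_suspend(struct toy_gang *g) { g->nsuspended++; }
static void toy_fault_resume(struct toy_gang *g)  { g->nsuspended--; }

/* the gang is schedulable only while at least one context is in
 * spu_run() and not suspended on a fault */
static int toy_gang_runnable(const struct toy_gang *g)
{
	return g->nrunnable - g->nsuspended > 0;
}
```

With this, a controlling thread leaving spu_run() decrements nrunnable, and the gang stops being considered runnable exactly when every remaining context is either out of spu_run() or fault-suspended.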

Luke



