[Cbe-oss-dev] [PATCH 3/6] spufs: fix starvation case with terminated spes

Arnd Bergmann arnd at arndb.de
Mon Feb 18 14:21:32 EST 2008


On Friday 15 February 2008, Luke Browning wrote:
> On Fri, 2008-02-15 at 10:41 -0200, Luke Browning wrote:
> > On Fri, 2008-02-15 at 12:55 +0100, Arnd Bergmann wrote:
> > >  
> > > Wouldn't it be sufficient to lower the priority of a nonrunning context
> > > to the minimum? We set it in __spu_update_sched_info() when entering
> > > spu_run, so it would be logical to reset the priority when leaving there.
> > > 
> > >     Arnd <><
> > 
> > That is worth prototyping.  I will try it out.
> 
> It's ugly.  We would have to add a priority level, since the scheduler
> doesn't preempt contexts at the same level.

Why doesn't it? I would think that the scheduler can preempt contexts
at any priority level if their time slice has expired.
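
To make that concrete, here is a rough sketch of the kind of tick
handler I have in mind. This is not the current sched.c code;
grab_runnable_context() and spu_unschedule() are only stand-ins for
whatever the scheduler actually uses to pick a waiting context and
take the SPU away from the running one:

/*
 * Rough sketch only, not a patch: let the scheduler tick preempt a
 * context whose time slice has run out even if the waiting context
 * has the same priority.
 */
static void spusched_tick_sketch(struct spu_context *ctx)
{
	struct spu_context *new;

	if (--ctx->time_slice > 0)
		return;		/* slice has not expired yet */

	/*
	 * Searching from ctx->prio rather than ctx->prio + 1 means a
	 * waiting context at the *same* priority level may also take
	 * over the SPU once the slice expires.
	 */
	new = grab_runnable_context(ctx->prio, ctx->spu->node);
	if (new) {
		spu_unschedule(ctx->spu, ctx);
		wake_up(&new->stop_wq);
	} else {
		ctx->time_slice = SPU_DEF_TIMESLICE;
	}
}

That would also avoid having to introduce an extra priority level
just for this case.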

> We also have the issue of what to do with the context once it is
> preempted.  The scheduler is currently coded to put preempted contexts
> back on the runqueue, but this breaks things, as the context cannot be
> on the runqueue while it is in user mode.  Not only do we hit the
> assert when we re-enter spu_run(), but more importantly the context
> could be scheduled while it is still in user mode, because it is on
> the runqueue.

That sounds like something that needs fixing anyway. We should never
put a context on the runqueue if it doesn't actually want to run;
that would be a horrible waste of resources.
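
As a very rough illustration of what I mean, and not a patch: the
code that re-queues a preempted context could simply check whether
the controlling thread is still inside spu_run() before putting it
back on the runqueue. The in_spu_run flag below is made up (it would
be set on entry to spu_run() and cleared on exit), and spu_add_to_rq()
stands for however the scheduler inserts into the runqueue today:

/*
 * Illustration only: re-queue a preempted context only while its
 * thread is still inside spu_run().
 */
static void spu_requeue_sketch(struct spu_context *ctx)
{
	/*
	 * A context whose thread has already returned to user space
	 * simply stays off the runqueue; it gets activated again on
	 * the next spu_run() call.
	 */
	if (ctx->in_spu_run)
		spu_add_to_rq(ctx);
}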

Imagine you have created 100 contexts in one thread and only occasionally
call spu_run on one of them when you need a specific function to be
executed. The way you describe it, we would continuously schedule all
100 contexts even in the current code, whereas the expected behaviour
would be that the last N contexts remain present on one of the SPUs
so they can start quickly when you call spu_run on them again.

	Arnd <><


