[Cbe-oss-dev] [RFC 4/5] spufs: add kernel support for spu task
Benjamin Herrenschmidt
benh at kernel.crashing.org
Thu Jun 14 08:08:58 EST 2007
(P.S. Keep the list CCed)
> I've been thinking about this. IMHO the scheduler should fire another
> ctx if there are SPUs available (not busy with userland for example) and
> if there is plenty of work to do. Right now, scheduler can't do this.
I was thinking about something along the lines of keventd... that is,
in addition to the ability for the kernel to create normal, long-lived
SPU kernel tasks, there could be a kernel context per SPU for short-lived
async jobs. Those would be bound physically to the SPUs, though still
schedulable, that is, they get scheduled in when an SPU is idle and
there's work to do. You could then fire jobs which get scheduled to
whatever SPU is first available to pick them up...
In fact, the more I think about it, the more it looks like a SPURS jobs
model :-) Well... with some refinements.
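
To make that a bit more concrete, the submission side could look roughly
like this (just a sketch, every name below is invented):

struct spu_kjob {
	struct list_head list;
	int (*entry)(void *data);	/* SPU-side entry point */
	void *data;			/* argument DMAed into local store */
};

/*
 * Hypothetical: queue a short-lived job; it runs on whichever of the
 * per-SPU kernel contexts is first scheduled in and picks it up.
 */
int spu_kjob_queue(struct spu_kjob *job);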
But you are right, it doesn't -have- to be like that. We could instead
have a threshold on the pending jobs queue that causes the code to
instantiate another kernel context without binding it, letting it be
scheduled naturally... (though it would make sense to actually have a
way to tell the kernel that there's no point in context switching
between two of these on a single SPU...)
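
The unbound variant could be as simple as checking the backlog at
submission time, something like this (again, purely hypothetical names
and numbers):

#define SPU_KJOB_SPAWN_THRESHOLD 8	/* made-up value */

static void spu_kjob_maybe_grow(struct spu_kjob_queue *q)
{
	/*
	 * Spawn an extra unbound kernel context once the backlog of
	 * pending jobs crosses the threshold; from there it gets
	 * scheduled like any other context.
	 */
	if (q->nr_pending > SPU_KJOB_SPAWN_THRESHOLD &&
	    q->nr_contexts < q->max_contexts)
		spu_kjob_spawn_context(q);
}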
> I tried to glue code together (having, for example, AES and CRC in the
> same SPU binary and SPE) rather than having two contexts and letting
> the scheduler switch between them when there aren't enough SPUs for
> everyone. A ctx switch is not as cheap as in user space (from what I
> know), so I try to avoid it if possible.
I think we want something akin to the kernel module loader, that is,
the SPU code used by short-lived kernel "jobs" like these would be
relocatable and linked together by the kernel itself.
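
Registration could then look vaguely like the module loader interface,
along these lines (hypothetical sketch, none of this exists today):

/*
 * A relocatable fragment of SPU code; the kernel resolves the
 * relocations and links the fragment into the shared job image,
 * much like the module loader does for host-side code.
 */
struct spu_kcode {
	const char *name;
	const void *text;		/* position-independent SPU text */
	size_t size;
	const struct spu_reloc *relocs;	/* hypothetical reloc table */
	unsigned int nr_relocs;
};

int spu_kcode_register(struct spu_kcode *code);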
> Once the load gets too high, and a second physical SPU would make sense,
> the load balancer should handle it :)
In any case, let's first see if we get any useful performance out of
this offloading :-)
Ben.