[Cbe-oss-dev] MARS on Cell blades ? Cell image / video codecs?

Arnd Bergmann arnd at arndb.de
Fri May 29 21:55:49 EST 2009


On Friday 29 May 2009, Werner Armingeon wrote:
> My feeling is that without such a compatible Cell programming standard 
> which guarantees interoperability, you cannot seriously develop libraries 
> for general use. For sure you would prefer MARS ;-), and from what I 
> learned from the presentation and documentation it would be a reasonable 
> choice. Are there even any others?

In principle, the rules for interoperable programming on the SPU are the
same as on any multithreaded system. Above all, you have to make sure
that you never sit on resources you don't need while other threads
could be using them.

The kernel will do scheduling of SPU contexts just fine in most scenarios,
but there are a few common pitfalls:

* If you are blocked on a channel read on the SPU, the kernel cannot
  tell that you are waiting and assumes that the SPU is busy. Every
  library and program you use on the SPU therefore needs to make sure
  that it cannot run into situations where it blocks on a channel
  for an extended time, e.g. waiting for user input. When the task
  would block, you have to do a stop-and-signal call to go back to
  user space and wait for the event there.

* For some reason, people forget everything they learned about
  synchronization methods when they move to SPU programming.
  'while (!condition) sched_yield();' is not a proper way to
  wait for an event. You have condition variables, mutexes, futexes,
  message queues and pipes for these, among others. Every use
  of sched_yield() is a bug, with or without SPUs.

* Directly accessing SPU registers is a hack that you can use for
  optimizing latency in communication between PPU and SPU, but it
  destroys the ability to schedule the context fairly. If you need
  direct register access, use SPU_CREATE_NOSCHED and leave that
  context out of your load balancing.
  In most cases, you should simply avoid the direct mapping entirely.

	Arnd <><
