Video vs Serial for BMC based host console

Tanous, Ed ed.tanous at intel.com
Tue Dec 19 11:49:30 AEDT 2017


I had spent some time trying to get video redirection over a websocket working on the AST, but got stalled recently working on other things.  I'm not sure what your thoughts are on how it should be architected, but here's how I did my proof of concept.

Driver: pulls framebuffers out of the hardware and defines a 10MB CMA region that can be allocated to the video hardware.  Video buffers are pulled one at a time by the web server, and the driver relies on the Aspeed interrupt scheme to signal when a compressed, JPEG-like stream is ready.
Webserver: ran VNC over an HTTPS websocket, queried the buffers asynchronously one at a time, and pushed them to the VNC websocket as they became available.  The code I had is available here and should be considered a toy at this point:
https://gerrit.openbmc-project.xyz/#/c/7786/1/bmcweb/include/web_kvm.hpp
Front end: connected via NoVNC (https://github.com/novnc/noVNC), which implements a pretty standard, run-of-the-mill VNC interface.  This package was used unmodified.
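
To make the flow above concrete, here is a minimal sketch of the framing the webserver ends up doing once a frame has been decoded to raw pixels: wrapping it in an RFB (VNC) FramebufferUpdate message with the Raw encoding before pushing it down the websocket to NoVNC.  The helper names are mine, and the assumption that the frame is already raw RGBX glosses over the Aspeed decompression problem discussed below; only the RFB message layout is taken from the protocol spec.

#include <cstdint>
#include <vector>

// Append a big-endian 16-bit value, as the RFB protocol requires.
static void put_u16(std::vector<uint8_t>& out, uint16_t v)
{
    out.push_back(static_cast<uint8_t>(v >> 8));
    out.push_back(static_cast<uint8_t>(v & 0xff));
}

// Build a FramebufferUpdate message (server-to-client message type 0)
// containing a single full-screen rectangle in Raw encoding (encoding 0).
// 'pixels' must already be in the pixel format negotiated with the client.
std::vector<uint8_t> make_framebuffer_update(uint16_t width, uint16_t height,
                                             const std::vector<uint8_t>& pixels)
{
    std::vector<uint8_t> msg;
    msg.push_back(0);   // message type: FramebufferUpdate
    msg.push_back(0);   // padding
    put_u16(msg, 1);    // number of rectangles
    put_u16(msg, 0);    // x position
    put_u16(msg, 0);    // y position
    put_u16(msg, width);
    put_u16(msg, height);
    msg.insert(msg.end(), {0, 0, 0, 0});  // encoding type 0 (Raw), s32 big-endian
    msg.insert(msg.end(), pixels.begin(), pixels.end());
    return msg;
}

The websocket handler then just sends the returned buffer whenever the driver hands it a new frame.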


Some issues I had:
1. CMA isn't terribly well documented and was relatively new at the time.  The "right" way to do video would be to allocate the buffers when the KVM client connects, and discard them once the client disconnects, so the memory is only used while KVM is in use.  CMA gives you a mechanism to do this reliably with large buffer sizes.
2. The Aspeed implementation of JPEG compression isn't fully compliant (i.e., you can't add the appropriate header and save the buffer as a JPEG file), so I had to implement a decompress-to-bitmap routine specific to the Aspeed hardware.  This is _very_ slow in both transfer time and decompression time.  On my list of things to do was to decipher the hieroglyphics that are the Aspeed datasheet and see whether there was a more efficient way to scan the buffer, simply patch in the required header fields to make it JPEG compliant, and send it directly as JPEG.
The other option was to "teach" the NoVnc implementation how to decode the Aspeed-specific JPEG format.  I didn't get very far down either path.
3. Frames were pulled one at a time, and used neither the Aspeed differential compression nor the double buffering available in hardware.  Both would have sped up processing significantly.
4. Requiring encryption is expensive, as it means that OpenSSL needs to encrypt every byte.  There is an AES engine in the AST part that might mitigate this, but my suspicion was that implementing the AST video differential algorithm (only sending changed bytes as a pixel stream; see the sketch after this list) would net more performance for a common user than wiring up the crypto engine.  Ideally, we'd have time for both if this is something that everyone really needs.
5. I wasn't able to get the CMA kernel command line option to reserve the appropriate amount of memory at boot.  Changing the default in the kernel build seemed to work much better (your mileage may vary).
6. The VNC specification itself defines a laughably insecure password hashing mechanism, so authentication needs to be handled in the websocket layer of the webserver.  My best stab at it is in the patchset linked above.
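
On point 4, here is an illustrative software analogue of the "only send what changed" idea: diffing two raw frames in fixed-size tiles and reporting the tiles that differ.  On the AST2400/2500 the hardware does this comparison for you, so this is only meant to show the shape of the data the webserver would forward instead of whole frames; none of it is Aspeed API.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <vector>

struct Rect
{
    uint16_t x, y, w, h;
};

// Compare two raw frames in 16x16 tiles and return the rectangles that
// changed, i.e. the only regions that would need to be re-sent.
std::vector<Rect> dirty_tiles(const uint8_t* prev, const uint8_t* cur,
                              uint16_t width, uint16_t height,
                              size_t bytes_per_pixel)
{
    constexpr uint16_t tile = 16;
    std::vector<Rect> dirty;
    const size_t stride = static_cast<size_t>(width) * bytes_per_pixel;
    for (uint16_t ty = 0; ty < height; ty += tile)
    {
        const uint16_t th = std::min<uint16_t>(tile, height - ty);
        for (uint16_t tx = 0; tx < width; tx += tile)
        {
            const uint16_t tw = std::min<uint16_t>(tile, width - tx);
            bool changed = false;
            for (uint16_t row = 0; row < th && !changed; ++row)
            {
                const size_t off = (static_cast<size_t>(ty) + row) * stride +
                                   static_cast<size_t>(tx) * bytes_per_pixel;
                changed = std::memcmp(prev + off, cur + off,
                                      tw * bytes_per_pixel) != 0;
            }
            if (changed)
            {
                dirty.push_back({tx, ty, tw, th});
            }
        }
    }
    return dirty;
}

Feeding the resulting rectangles into the FramebufferUpdate framing shown earlier (one rectangle per dirty tile) is roughly what the hardware differential mode buys you for free.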

> a) Is this in your plan to implement in the near future ? If so, when ?

Yes, I believe we have resources planned to implement this in Q3 (although we won't commit to an external release timeframe for obvious reasons).  If you guys are looking to implement this sooner, we should talk and see if either:
A. I can help you guys get a jump start on it, or
B. I (or someone on my team) can move up my timetable to collaborate with you and get this done.

> b) Has any progress made this implementation easier by someone else?

See above.

> c) If answer is no to above questions, can one of you, please let us
> know steps in which we should go through about adding this feature,
> from scratch?

Feel free to tell me I'm crazy, but here are the steps as I see them.
1. Grab some variant of the ast_video driver (either the one I linked on IRC or the SDK one; each has its advantages and pitfalls).
2. Get the driver ported into something that will build against 4.13.  I found several issues here, and got a little wrapped around the axle trying to get them fixed, but my Linux driver mojo is a little lacking these days.
	A. Solve the CMA problem, and get the appropriate build flags into a bitbake recipe so the large buffers can be allocated at runtime without collisions.
	B. Determine the required buffer lengths for your chosen video modes and set the config flags to change the default CMA size in the kernel build (I was not able to get these applied via kernel launch parameters, but you might have more luck).
	C. Do general housekeeping on the driver (it is not suitable for upstreaming at this point).
3. Decipher the AST2500 video compression scheme and decide on a path forward for when and where to decompress.  My plan was to start by decoding the buffer to a bitmap immediately, then slowly move the decompression up the stack as I solved the engineering problems (driver -> websocket server -> web/NoVnc) until it was being decoded client side in the browser, which, given some back-of-the-napkin calculations (see the rough numbers after this list), will be required for the solution to be useful.
The webserver I posted didn't get much love from a code review standpoint, and isn't standard.  Feel free to use it or roll your own if that's easier.   

4. Implement the NoVnc window in phosphor-webui.  Integrating my proof of concept with a non-phosphor, Angular-based UI was pretty trivial (less than a day), and I have some example code that might help get you started.
5. Implement a USB gadget driver for Aspeed, and connect up the emulation of both keyboard and mouse (a sketch of injecting a key press through a HID gadget follows this list).  Some work has been done by others here.
6. Debug, load test, and find corner cases.  In our experience, BMC KVM has a lot of rarely exercised corner cases that can cause deadlocks.
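
On the back-of-the-napkin point in step 3, the rough numbers below show why the decompression has to end up client side: pushing raw frames out of the BMC simply doesn't scale.  The resolution, colour depth, and frame rate here are illustrative assumptions, not measurements.

#include <cstdio>

int main()
{
    const double width = 1024, height = 768;  // assumed host video mode
    const double bytes_per_pixel = 4;         // 32-bit RGBX
    const double fps = 10;                    // modest refresh rate

    const double bytes_per_frame = width * height * bytes_per_pixel;  // ~3.1 MB
    const double mbit_per_sec = bytes_per_frame * fps * 8 / 1e6;      // ~252 Mbit/s

    std::printf("raw frame: %.1f MB, raw stream: %.0f Mbit/s\n",
                bytes_per_frame / 1e6, mbit_per_sec);
    // More than the BMC NIC or the AST CPU can sustain before TLS is even
    // considered, hence the need to keep frames compressed end to end.
    return 0;
}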
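
For step 5, once a USB HID gadget (boot-protocol keyboard) is configured on the Aspeed's device controller, the kernel exposes a character device the webserver can write reports into; the sketch below assumes that device shows up as /dev/hidg0.  Each boot keyboard report is 8 bytes (modifier bits, a reserved byte, then up to six key usage codes), and writing a report followed by an all-zero report presses and releases a key on the host.

#include <cstdint>
#include <cstdio>
#include <fcntl.h>
#include <unistd.h>

// Write one 8-byte boot keyboard report to the HID gadget device.
static bool send_report(int fd, const uint8_t (&report)[8])
{
    return write(fd, report, sizeof(report)) == 8;
}

int main()
{
    // Device node name is an assumption; it depends on how the gadget is set up.
    int fd = open("/dev/hidg0", O_WRONLY);
    if (fd < 0)
    {
        std::perror("open /dev/hidg0");
        return 1;
    }
    const uint8_t press_a[8] = {0, 0, 0x04, 0, 0, 0, 0, 0};  // usage 0x04 = 'a'
    const uint8_t release[8] = {0, 0, 0, 0, 0, 0, 0, 0};     // all keys up
    send_report(fd, press_a);
    send_report(fd, release);
    close(fd);
    return 0;
}

The mouse side works the same way with a different report descriptor; the hard part is wiring the NoVNC key and pointer events through the webserver to these writes.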

Let me know what you guys are planning to do, and at the very least I might be able to help you with some of the lessons I learned running through this.

Out of curiosity, are you guys using an AST2400 or AST2500, or something else?

-Ed

