MicroVNC (Revisited)

I’m picking the microVNC project back up after a 9-year hiatus.

The goal of this project is to enable software developers to build applications around a network-connected embedded device with an LCD display, without needing to write any firmware.

I’m using the VNC protocol because I demonstrated back in 2002 that a microcontroller with limited resources can use VNC to efficiently pull images from a remote server.

The system has three major components: the application, the Frame Server, and the microVNC device.

The major pieces under development are the Frame Server and the microVNC device. The Frame Server is a VNC server dedicated to reading an image out of shared memory and sending changes in that image to the microVNC device using the VNC protocol. The microVNC device just listens for new data from the Frame Server and writes the new pixels to the display. In the future, the microVNC device will also watch for user input from button or touchscreen presses and send those events back to the Frame Server.
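To make the device side concrete, here is a minimal C sketch of how a client might consume one VNC FramebufferUpdate message, which is the core of what the microVNC device does. The read_* and lcd_* helpers are hypothetical stand-ins for the serial link and display driver, not code from the actual firmware:

```c
#include <stdint.h>

/* Hypothetical helpers: read_u8/read_u16/read_s32 pull big-endian
 * (network order) fields from the serial link; lcd_set_window and
 * lcd_write_pixel drive the display controller. */
uint8_t  read_u8(void);
uint16_t read_u16(void);
int32_t  read_s32(void);
void     lcd_set_window(uint16_t x, uint16_t y, uint16_t w, uint16_t h);
void     lcd_write_pixel(uint16_t rgb565);

/* Handle one FramebufferUpdate, called after the message-type byte (0)
 * has been read. Assumes the server sends only raw-encoded rectangles
 * in the device's native big-endian 16bpp pixel format. */
void handle_framebuffer_update(void)
{
    read_u8();                          /* padding byte */
    uint16_t nrects = read_u16();       /* number of rectangles */
    while (nrects--) {
        uint16_t x = read_u16(), y = read_u16();
        uint16_t w = read_u16(), h = read_u16();
        int32_t encoding = read_s32();
        if (encoding != 0)              /* 0 = raw encoding */
            return;                     /* anything else is unsupported */
        lcd_set_window(x, y, w, h);     /* point the LCD at the rect */
        for (uint32_t n = (uint32_t)w * h; n > 0; n--)
            lcd_write_pixel(read_u16());
    }
}
```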

To send an image to the microVNC device, the application just needs to store and update an image in shared memory. If the application is interactive, it can listen for button or touchscreen events relayed by the Frame Server. In the future, the application will also be able to tell the Frame Server exactly which pixels it updated, so the Frame Server can send just those pixels to the microVNC device.
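As a sketch of how little the application needs to do, here is roughly what publishing a frame could look like in C, assuming the shared image is a plain file at a path both sides agree on. The path, dimensions, and pixel format here are illustrative, not the project’s actual interface:

```c
#include <stdint.h>
#include <stdio.h>

#define FRAME_WIDTH  128
#define FRAME_HEIGHT 160

/* Write one frame of raw 24-bit RGB pixels where the Frame Server
 * will look for it. "frame.raw" is a hypothetical agreed-on path. */
int publish_frame(const uint8_t *rgb)
{
    FILE *f = fopen("frame.raw", "wb");
    if (f == NULL)
        return -1;
    size_t wrote = fwrite(rgb, 3, (size_t)FRAME_WIDTH * FRAME_HEIGHT, f);
    fclose(f);
    return wrote == (size_t)FRAME_WIDTH * FRAME_HEIGHT ? 0 : -1;
}
```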

Demo

At the moment, I have a demo that uses a simple Processing sketch as the application, a Frame Server that can send changes in the image to a VNC client, and a microVNC device capable of drawing an image to a small color display. The microVNC device is connected to the Frame Server through a USB-serial connection.

Demo showing Processing sketch running on PC (top), shared with microVNC device.

(The LCD display looks much better in real life; the camera had a hard time capturing it.)

I chose Processing for the demo because it’s a development tool that is easy to pick up, it makes it easy to get at the pixels and write them to a file, and there are plenty of demo sketches available. The Processing sketches I used are mostly just modified demos provided by Processing, with an additional function I wrote that opens a file and writes the raw pixels to it. The sketch needs to call this function at the end of the draw() routine.

The Frame Server is implemented using LibVNCServer. It opens the file containing the image generated by Processing, compares it to the previous image it read, and, if the image changed, copies it into its internal frame buffer. It then finds a rectangular area within the image that contains all the changed pixels and sends every pixel in that rectangle to the microVNC device. To connect a TCP socket to the microVNC device’s USB-serial port, I used com2tcp.
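In outline, the Frame Server’s main loop looks something like the sketch below. The LibVNCServer calls (rfbGetScreen, rfbMarkRectAsModified, rfbProcessEvents) are the library’s real API, but read_frame_if_changed() is a hypothetical helper, and for brevity the whole screen is marked dirty instead of computing the changed-pixel bounding box:

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <rfb/rfb.h>

#define W 128
#define H 160

/* Hypothetical helper: reads the latest frame written by the
 * application into buf (32bpp), returning 1 only if it differs
 * from the previous frame. */
int read_frame_if_changed(uint8_t *buf);

int main(int argc, char **argv)
{
    /* 8 bits per sample, 3 samples per pixel, 4 bytes per pixel */
    rfbScreenInfoPtr screen = rfbGetScreen(&argc, argv, W, H, 8, 3, 4);
    screen->frameBuffer = malloc(W * H * 4);
    rfbInitServer(screen);

    static uint8_t frame[W * H * 4];
    while (rfbIsActive(screen)) {
        if (read_frame_if_changed(frame)) {
            memcpy(screen->frameBuffer, frame, sizeof frame);
            /* The real server computes the bounding box of the
             * changed pixels; marking the whole screen keeps the
             * sketch short. */
            rfbMarkRectAsModified(screen, 0, 0, W, H);
        }
        rfbProcessEvents(screen, 100000 /* usec */);
    }
    return 0;
}
```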

The microVNC device is simply an Adafruit ATmega32u4 Breakout+ connected through level shifters to a 128×160 LCD display purchased from eBay. The display was cheap, but the provided sample code had errors, and one of the two displays I purchased didn’t match the eBay listing, so I’m not going to recommend the seller. Adafruit sells a similar, if not identical, display.

microVNC device for Maker Faire Demo

The demo can draw static images, and animations that make small changes to the image, before the framerate suffers. The weak link is actually the interface to the display. I chose a display using an SPI bus to simplify the circuit for prototyping until I was ready to create a custom PCB. At the maximum SPI clock rate available on the microcontroller, I can write a 16-bit pixel every 4 µs, so updating the whole 128×160 display (20,480 pixels) takes over 81 ms, roughly 12 full-screen updates per second at best. With a display that has an 8-bit or 16-bit parallel bus, I expect the system to support much faster updates.
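For reference, on the AVR the per-pixel cost boils down to a loop like this over the hardware SPI port (a sketch assuming the standard SPDR/SPSR registers; the measured 4 µs per pixel includes the surrounding loop and chip-select overhead):

```c
#include <stdint.h>
#include <avr/io.h>

/* Clock one byte out of the AVR's hardware SPI port. */
static inline void spi_write(uint8_t b)
{
    SPDR = b;                       /* start the transfer */
    while (!(SPSR & _BV(SPIF)))     /* busy-wait until it completes */
        ;
}

/* Send one 16-bit RGB565 pixel, high byte first, as the display
 * controller expects it. */
static inline void lcd_write_pixel(uint16_t rgb565)
{
    spi_write((uint8_t)(rgb565 >> 8));
    spi_write((uint8_t)(rgb565 & 0xFF));
}
```

Assuming a 16 MHz part (SPI capped at F_CPU/2 = 8 MHz), even if the busy-wait were perfectly hidden, 20,480 pixels at 16 bits each is about 41 ms per full frame, so a parallel-bus display is the real fix rather than more SPI tuning.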

I don’t have any code or hardware designs ready to share yet. I was in the early stages of bringing my microVNC code up to date to support a color display, and of writing a simple server using LibVNCServer, when I decided to try to hack together the whole system to demo at the Maker Faire. While it works, there are enough bugs and enough rushed code that I’m not ready to put anything out right now, but I plan to release everything as Open Source Hardware in my GitHub repo.

Next Steps

Given the poor performance of the current display’s SPI interface, I want to switch to a display with a faster interface so I can get a better feel for what the rest of the system can do.

I can think of a number of applications that could take advantage of a USB-connected display, so I’m going to focus on USB connectivity for now. Later I will add WiFi support and battery power.