I’ve been interested in building a fully working “Portal Turret”-like device for a while now. My first attempt, a few years ago, used a Kinect. I basically unboxed the Kinect and realized that you need to stand in front of it before it can start tracking, which, in my opinion, defeats the whole point of having a turret. I decided to put the project on hold for a while.
A few months ago, I got my hands on a Coral AI USB Accelerator and decided to use it for the project.
I’ve built a prototype “pan-tilt” turret using a pan-tilt kit sold by RobotShop, a RaspberryPi 3b I had lying around with my old stuff, and the Camera Module V2. To avoid stressing the camera cable, I used Legos to build a frame that contains all the hardware.
The announced performance for this device is impressive: 4 tera-operations per second (on 8-bit integers), which is about half the throughput of a good old NVidia 1070 Ti (on 32-bit floats). Assuming the model is correctly quantized, one could naively conclude from those numbers that this small chip would perform like half a decent GPU, for a 2 W footprint!
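The naive back-of-envelope math looks like this (the ~8 TFLOPS figure for the 1070 Ti is my assumption, and the comparison is apples-to-oranges since one number counts 8-bit integer ops and the other 32-bit float ops):

```python
# Naive spec-sheet comparison (numbers are assumptions from the text,
# not measurements; int8 ops and fp32 ops are not directly comparable).
coral_tops_int8 = 4.0        # Coral USB Accelerator: 4 TOPS on int8
gtx1070ti_tflops_fp32 = 8.0  # GTX 1070 Ti: ~8 TFLOPS on fp32 (assumed)

ratio = coral_tops_int8 / gtx1070ti_tflops_fp32
print(f"Coral is naively {ratio:.0%} of a 1070 Ti")  # → 50%
```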
However, this is definitely not the right way to do the math: the real performance of the USB Accelerator on MobileNet SSD is about 10 frames per second on a RaspberryPI, which is lower than what we could expect from a desktop PC CPU, and thus clearly lower than what could be achieved on a GPU. It turns out that the small Jetson Nano GPU would be about 4 times faster than a similar USB Accelerator, according to NVidia’s benchmark.
Anyhow, 10 FPS is enough for real-time tracking, but if I were to restart the project, I would experiment with the Jetson Nano instead of pairing the USB Accelerator with a RaspberryPi, and would expect much smoother video output and tracking.
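For the curious, the tracking logic itself can be sketched as a simple proportional controller: take the detected bounding box, compute its offset from the frame center, and nudge the pan and tilt angles toward it. This is a minimal sketch under assumed values; the frame size, gain, and angle conventions are placeholders, and the real turret would feed these angles to the RobotShop kit’s servos:

```python
# Minimal proportional pan-tilt tracking sketch (assumed parameters).
FRAME_W, FRAME_H = 640, 480   # assumed camera preview resolution
GAIN = 0.05                   # proportional gain, degrees per pixel of error

def track_step(pan, tilt, bbox):
    """Return updated (pan, tilt) angles that nudge the camera toward
    the center of a detection bbox given as (x_min, y_min, x_max, y_max)."""
    cx = (bbox[0] + bbox[2]) / 2
    cy = (bbox[1] + bbox[3]) / 2
    err_x = cx - FRAME_W / 2   # positive: target is right of center
    err_y = cy - FRAME_H / 2   # positive: target is below center
    return pan + GAIN * err_x, tilt - GAIN * err_y

# A target already centered in the frame produces no correction:
pan, tilt = track_step(90.0, 90.0, (300, 220, 340, 260))  # → (90.0, 90.0)
```

At 10 FPS, each correction lands about every 100 ms, which is why the resulting motion is trackable but not perfectly smooth.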