
Autonomous Tracking Turret using Coral AI USB Accelerator

I’ve been interested in building a fully working “Portal Turret”-like device for a while now. My first attempt, a few years ago, used a Kinect. I basically unboxed the Kinect and realized that you need to go stand in front of it before it can start tracking, which, in my opinion, defeats the whole point of having a turret. I decided to put the project on hold for a while.

A few months ago, I got my hands on a Coral AI USB Accelerator and decided to use it for the project.

I’ve built a prototype “pan-tilt” turret using a pan-tilt kit sold by RobotShop, a Raspberry Pi 3B I had lying around with my old stuff, and the Camera Module V2. In order to avoid stressing the camera cable, I used Legos to build a frame that contains all the hardware.

The announced performance for this device is impressive: 4 tera-operations per second (on 8-bit integers), which is about half of the performance delivered by a good old NVIDIA 1070 Ti card (on 32-bit floats). Assuming the model is correctly quantized, from those numbers one could expect this small chip to perform like half a decent GPU, for a 2 W footprint!
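For the back-of-the-envelope math (the 1070 Ti figure is from memory, roughly 8 TFLOPS in FP32): 4 TOPS / 8 TFLOPS = 0.5, hence the “half a decent GPU” estimate, before accounting for precision, memory bandwidth, or the USB link.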

However, this is definitely not the right way to do the math, since the real performance of the USB accelerator on MobileNet SSD is about 10 frames per second on a Raspberry Pi, which is lower than what we could expect from a desktop PC CPU, and thus clearly lower than what could be achieved on a GPU. It turns out that the small Jetson Nano GPU would be about 4 times faster than a similar USB accelerator, according to NVIDIA’s benchmark.

Anyhow, 10 FPS is enough for real-time tracking, but if I were to restart the project, I would experiment with the Jetson Nano instead of pairing the USB Accelerator with a Raspberry Pi, and would expect much smoother video output and tracking.
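For context, the tracking itself boils down to a small control loop: run the detector on a frame, take the most confident “person” detection, and nudge the pan and tilt servos so the detection drifts toward the image centre. Below is a minimal sketch of that loop written against the pycoral and gpiozero APIs, with OpenCV for capture; the model filename, GPIO pins, and gain are placeholders, and this is not the code from the repository.

import time
import cv2
import numpy as np
from gpiozero import AngularServo
from pycoral.utils.edgetpu import make_interpreter
from pycoral.adapters import common, detect

# Placeholder pins and model path; adjust for the actual wiring and model.
pan = AngularServo(17, min_angle=-90, max_angle=90)
tilt = AngularServo(18, min_angle=-90, max_angle=90)
pan.angle, tilt.angle = 0, 0

interpreter = make_interpreter("ssd_mobilenet_v2_coco_quant_postprocess_edgetpu.tflite")
interpreter.allocate_tensors()
width, height = common.input_size(interpreter)

cap = cv2.VideoCapture(0)  # assumes the Pi camera shows up as /dev/video0
GAIN = 0.05                # degrees of servo correction per pixel of error

while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(cv2.resize(frame, (width, height)), cv2.COLOR_BGR2RGB)
    common.set_input(interpreter, rgb)
    interpreter.invoke()
    # Class 0 is "person" in the Coral COCO label file.
    people = [o for o in detect.get_objects(interpreter, score_threshold=0.5) if o.id == 0]
    if people:
        box = max(people, key=lambda o: o.score).bbox
        err_x = (box.xmin + box.xmax) / 2 - width / 2
        err_y = (box.ymin + box.ymax) / 2 - height / 2
        pan.angle = float(np.clip(pan.angle - GAIN * err_x, -90, 90))
        tilt.angle = float(np.clip(tilt.angle + GAIN * err_y, -90, 90))
    time.sleep(0.01)

A simple proportional correction like this is enough at 10 FPS; a smoother setup would add some filtering on the detection centre before driving the servos.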

The project repository is on my GitHub.


Debugging Python using VSCode in a running Docker container

I wanted to find a way to hook up a debugger to a running Docker container that was started by an external toolchain (the DuckieTown shell). However, the same method can be applied to any Python program running in a Docker container, regardless of the framework that started it.

The first step is to get the container started as usual. In my particular case:

dts exercises test --sim 

Once the container is started, I needed to attach a Visual Studio Code instance to it. For that to work, the Visual Studio Code Docker extension must be installed.
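To double-check which container to attach to, listing the running containers from a terminal helps; this is plain Docker CLI, nothing DuckieTown-specific.

docker ps --format "{{.Names}}\t{{.Image}}"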

Attaching to the container launches a new Visual Studio Code window and proceeds to install the Visual Studio Code server and the required extensions in the container. After that, I needed to select the Python interpreter and open a working folder. In my case, the code was available in the /code folder.

The next step is to attach a debugger to the running Python script. I’ve learned the hard way that, unlike with C and C++, there is no way to hook a debugger to an “unwilling” running process. However, it is quite simple to make the process willing to accept a debug session using the “debugpy” package from Microsoft. They document several ways of enabling debugging. On my part, I’ve added the following lines at the beginning of the source file.

import debugpy

# Open a debug adapter endpoint that VS Code can attach to later.
debugpy.listen(("localhost", 5678))
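One thing worth knowing: listen() does not pause the program, so breakpoints placed very early can be missed if the script races ahead before VS Code attaches. debugpy also provides wait_for_client() for that case; adding it is optional, and this is roughly what the top of the file looks like with it (a sketch, not the exact lines from my project).

import debugpy

debugpy.listen(("localhost", 5678))
debugpy.wait_for_client()  # optional: block here until VS Code attaches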

The debugpy package must also be installed in the container. This can be done by installing the package automatically from the startup script. In my case, the script was named “run_all.sh”.

pip3 install debugpy

A debugging configuration can first be created using the GUI. For the option to be available, make sure that the Python extension is also installed in the container; VS Code will handle it as soon as you open a Python file.

The default hostname proposed by VS Code is localhost, which is fine. The default port is 5678, which also matches the listen() call above.

For breakpoints to work, it is critical that the local workspace is properly mapped to the remote workspace. If this is not done properly, breakpoints will appear greyed out and will never be hit. Since we are attached inside the Docker container, both the localRoot and the remoteRoot should be the same.
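Put together, the generated configuration in .vscode/launch.json ends up looking roughly like this (the exact fields the GUI produces can vary slightly with the extension version; the important parts are the port matching the listen() call and the path mapping discussed above).

{
    "version": "0.2.0",
    "configurations": [
        {
            "name": "Python: Remote Attach",
            "type": "python",
            "request": "attach",
            "connect": { "host": "localhost", "port": 5678 },
            "pathMappings": [
                {
                    // Same path on both sides, since VS Code runs inside the container.
                    "localRoot": "${workspaceFolder}",
                    "remoteRoot": "${workspaceFolder}"
                }
            ]
        }
    ]
}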

At this point, you can start debugging using the remote attach configuration. Any symbol can be inspected and code can be written interactively in the debug console.

I hope this helps!