How to make an autonomous vehicle – the 2016 Droid Racing Challenge

The Droid Racing Challenge, recently hosted by the QUT Robotics Club, was an event designed to inspire students and the general public alike about the potential of robotic vision. Results and winners were announced, along with a selection of media from the event, in the Droid Racing Challenge Wrap-up. This article is a companion piece highlighting the technical challenges of building a racing droid.

There are a few core systems required to make a successful droid: mechanical systems for driving, sensor systems for data acquisition, and processors for turning that data into navigation decisions. The entire system had to come in at a total value of less than $1500. This placed hard limits on what parts could be used and, along with time limitations and other rules, caused a convergence of mechanical design among the teams. Every team present at the challenge this year chose to purchase and adapt the chassis from a remote control car. These are readily available, cheap, mechanically robust, and save the time of having to design a new chassis. Most have plenty of room for mounting extra components and come with motors, suspension, batteries, motor controllers and so on, which makes them a great choice for a competition like this. They also have the added advantage of a radio control system, which can be adapted for the required wireless start/stop mechanism.

[Image: All the droids at the challenge used a modified chassis from a hobby remote controlled car.]

Customisations included swapping in different motor drivers and wireless transceivers to work better with the other electrical equipment used by each team.

Data acquisition for robotic vision uses a camera. The camera and processor cannot be chosen independently; the type of camera that should be used depends heavily on what type of processing you plan to do and the processor you plan to do it on. There were three camera+processor combinations used in this year's challenge: wide angle Raspberry Pi cameras with Raspberry Pi 3B computers (QUT), webcams with Raspberry Pi 2B or 3B computers (QUT, UQ, and UNSW), and a Stereolabs ZED Stereo Camera combined with an Nvidia Jetson TX1 developer kit (Griffith).

The Raspberry Pi and its software ecosystem are familiar to most electrical and mechatronics engineering students, and it is a very popular platform among hobbyists as well. Raspberry Pi computers are cheap at around $50 AUD, small ("credit-card sized") so they can be easily mounted to a droid, and very well supported with software, programming languages and libraries. The Raspberry Pi camera module has a five megapixel sensor that supports 1080p30, 720p60 and VGA90 video modes. QUT teams used a version with a wide angle lens to capture more of the track. The camera sensor is not very high quality, but it is cheap at around $25–$30 AUD. Webcams are also supported through the USB interface, so higher quality and resolution cameras can be used (albeit at a lower frame rate). Each of the QUT droid builds came in well under the budget limit because of our use of Raspberry Pi computers and cameras. The Raspberry Pi has enough processing power to do some image processing (robotic vision) tasks. However, processing power is also the downside of the platform: even the low-spec Raspberry Pi camera supports frame rates and resolutions far beyond what the computer is capable of processing. These platforms were chosen mostly because of cost, ease of use and the teams' prior familiarity, not because they are the best choice for robotic vision. Teams had to design very efficient algorithms, as the main constraint on performance was the processing power of the Raspberry Pi.

The team from Griffith used a much more capable camera+processor combo, the ZED Stereo Camera and Jetson TX1, which were specifically designed for robotic vision applications. We’ve never used this platform, but were impressed to see it. Unfortunately, Griffith’s droid was plagued by mechanical and other issues so we never got to see what difference this would have made to their performance at the challenge. It seems to us that this system is a far better choice for robotic vision, but availability and familiarity are problematic. The camera and processor combined also used the majority of Griffith’s budget, with the total build only just under the limit. From our point of view, this system definitely warrants some investigation for next year’s challenge.

With parts selection done, the main aspect of the challenge was the vision software system, which needed to process images and video into meaningful information. In this case, the droids are looking for coloured lines on the ground (the track), coloured boxes on the track, and other droids. During testing we found that, due to time constraints and other issues that come with running an event for the first time, most teams could not avoid obstacles or other droids reliably. These requirements were dropped so that the challenge could go ahead.

OpenCV is a popular library for computer vision. Computer vision becomes robotic vision when the droid acts on the results of the image analysis, in this case by navigating around the track. The QUT teams used OpenCV to detect the track lines and navigation algorithms to stay between them while going around the track. Using the Python programming language and OpenCV, a system can be set up that grabs frames from the camera for analysis while video is being captured. The image below shows an example raw image from the camera:

[Image: Raw camera image. Note the distortion near the edges due to the wide angle lens.]
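As a rough illustration only (not our exact competition code), a frame-grabbing loop in Python with OpenCV might look like the sketch below; the camera index and resolution are assumptions.

import cv2

# Minimal frame-grabbing sketch. Assumes the camera is exposed as device 0;
# the Raspberry Pi camera module can also be driven through the picamera
# library instead of OpenCV's VideoCapture.
cap = cv2.VideoCapture(0)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)   # keep the resolution low so the
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)  # Raspberry Pi can keep up

while True:
    ok, frame = cap.read()               # latest frame as a BGR numpy array
    if not ok:
        break
    # ... run the vision pipeline on `frame` and update steering here ...

cap.release()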

There are several different algorithms that can be used to identify lines. Two QUT teams used colour thresholding techniques, where an image is filtered for a specific colour. The other team, which I was part of, used edge detection techniques to find where the contrast between pixels was high, indicating an edge. Below is a breakdown of the steps in the edge detection algorithm that picks out the track lines. The first step was to downscale and crop the image down to the region of interest; this reduces the resolution of the image, drastically improves processing time, and removes the droid itself and objects above the horizon from the image.

[Image: The cropped image.]
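A minimal sketch of that first step is below; the scale factor and crop fractions are placeholders rather than the values we actually used.

import cv2

def preprocess(frame, scale=0.5, horizon_frac=0.4, bonnet_frac=0.9):
    # Downscale to cut processing time, then keep only the region of
    # interest: everything above `horizon_frac` (sky, background) and below
    # `bonnet_frac` (the droid itself) is discarded. The fractions here are
    # illustrative only.
    small = cv2.resize(frame, None, fx=scale, fy=scale,
                       interpolation=cv2.INTER_AREA)
    h = small.shape[0]
    return small[int(h * horizon_frac):int(h * bonnet_frac), :]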

Next, the image is converted to chromaticity coordinates. This makes the intensity of each colour in a pixel relative to the total intensity of the pixel, removing some of the effect of brightness.

[Image: Chromaticity coordinates. Notice how the yellow stands out, but the blue is hard to see because of the glare in the previous image.]
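In code, the conversion is just each colour channel divided by the per-pixel sum of the channels; a sketch:

import numpy as np

def to_chromaticity(bgr):
    # Divide each channel by the total intensity at that pixel, which removes
    # much of the dependence on overall brightness. A small constant avoids
    # division by zero on black pixels.
    img = bgr.astype(np.float32)
    total = img.sum(axis=2, keepdims=True) + 1e-6
    return img / total   # each pixel's three values now sum to roughly 1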

In order to boost contrast and improve edge detection, we then squared the values of each pixel and divided by the maximum possible pixel value. This decreases overall image intensity, but increases contrast between pixels. See below:

[Image: Contrast-boosted chromaticity. The yellow line hasn't changed as much as the rest of the image; the blue is still difficult to make out.]
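This step is a one-liner; the sketch below assumes the chromaticity values have been scaled back to 8-bit (0–255) first.

import numpy as np

def boost_contrast(img_8bit):
    # Square each pixel and divide by the maximum possible value (255 for an
    # 8-bit image). Dark pixels are pushed down much harder than bright ones,
    # lowering overall intensity but increasing contrast between pixels.
    x = img_8bit.astype(np.float32)
    return (x * x / 255.0).astype(np.uint8)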

The next step is to run the edge detection algorithm. This is available as part of the OpenCV library, but needs to be calibrated. After some experimentation, we were able to get an image like this:

[Image: Edges of the track lines. The edge detection algorithm has been calibrated to detect the edges of the tape which marks the track.]
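OpenCV's Canny detector is one such algorithm; the thresholds below are placeholders and would need the same kind of calibration on real track images.

import cv2

def find_edges(channel, low=50, high=150):
    # Canny edge detection on a single 8-bit channel (an assumption; the
    # contrast-boosted image could be converted to greyscale first, or each
    # channel processed separately). A light blur suppresses sensor noise.
    blurred = cv2.GaussianBlur(channel, (5, 5), 0)
    return cv2.Canny(blurred, low, high)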

You can see that the output of the above algorithm correlates remarkably well with the track lines. Even with the glare on the blue line, it is still picked up; the yellow line is found perfectly, and there is very little noise. We tested this on a series of test images and decided that we needed some further noise reduction, because not all results were this clean; sometimes gaps between pavers were found as well. To do this, we dilated the image until the edges of the track lines merged into a single line, then eroded the image back down until the lines were thin again. Any other "noise" edges found in the image should then be eroded entirely, because they would not have a matching edge nearby to merge with. This can be seen in the images below:

[Image: Edges merged into a single line after dilation.]
[Image: First round of erosion, back to about the original width of the track line.]
[Image: Second round of erosion, with almost all noise removed.]
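A sketch of this dilate-then-erode clean-up; the kernel size and iteration counts are illustrative only.

import cv2
import numpy as np

def clean_edges(edges, ksize=5, dilate_iters=3, erode_iters=4):
    # Dilate until the two edges of each piece of tape merge into one thick
    # line, then erode slightly further than we dilated. Isolated noise edges
    # have no neighbour to merge with, so the extra erosion removes them,
    # while the merged track lines survive as thin lines.
    kernel = np.ones((ksize, ksize), np.uint8)
    thick = cv2.dilate(edges, kernel, iterations=dilate_iters)
    return cv2.erode(thick, kernel, iterations=erode_iters)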

Here is a better example of noise reduction:

[Image: Edges]
[Image: Dilation]
[Image: Erosion]
[Image: Final image after erosion]

You can see in the series of images above that the edge detection picks up a number of edges we are not interested in. Through the process described above, those edges can be removed so that in the final image there is almost no noise and the track lines are clear.

From here the navigation system takes over, and there are a few different options. You can measure the angles of the lines and use them to come up with a target steering angle, or simply find the centre point between the lines and steer towards that. More advanced methods could improve navigation, but unfortunately we didn't get to implement anything else, and, probably due to calibration issues and the variable brightness and glare out on the track, our algorithm didn't perform well on the day. The frame rate of the above pipeline was also very low on the Raspberry Pi, around 3–4 fps, so the droid would have to move very slowly for the vision system to keep up.
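As a rough, hypothetical sketch of the centre-point idea (not the code we actually ran): look along a scan row near the bottom of the cleaned-up edge image, take the outermost line pixels as the two track lines, and steer towards their midpoint.

import numpy as np

def centre_point_steering(cleaned, row_frac=0.8):
    # Hypothetical helper, for illustration only. Returns a steering value in
    # [-1, 1]: negative means steer left, positive means steer right.
    h, w = cleaned.shape[:2]
    row = cleaned[int(h * row_frac), :]
    cols = np.flatnonzero(row)        # columns containing a line pixel
    if cols.size < 2:
        return 0.0                    # lost the lines: hold course
    midpoint = (cols[0] + cols[-1]) / 2.0
    return (midpoint - w / 2.0) / (w / 2.0)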

[Image: Lindsay Watt (left), Lachlan Robinson (right) and our droid.]

The winning team, UNSW, also used edge detection methods, but clearly better calibrated than ours and paired with more advanced navigation. Their droid was quite slow around the track, about the same speed ours would have had to be. They did a great job developing an algorithm that reliably detected the track lines and navigated between them. It is also worth noting that they came prepared to avoid obstacles as well!

There are a few things that none of the teams were prepared for. One of these was the glare on the track and on the tape that marked the lines; the tape had a matte finish but could still produce significant glare from the droid's perspective, which sometimes made it difficult to pick up the lines. Teams improvised with UV camera filters or polarised lenses from sunglasses, but this brings up a larger point: the better the data you start with, the easier it is to process. Next time, teams should think more about filters, lenses, optics, and camera settings like white balance and exposure, because the right camera setup makes the software task a lot easier.
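On the Raspberry Pi camera, for example, the white balance and exposure can be locked with the picamera library so the image doesn't shift as the droid moves between sun and shade; the values below are placeholders, not tuned settings.

from picamera import PiCamera
import time

camera = PiCamera(resolution=(640, 480), framerate=30)
camera.iso = 100                     # placeholder; fixes the sensor gain
time.sleep(2)                        # let the automatic algorithms settle

camera.shutter_speed = camera.exposure_speed  # freeze exposure where it is
camera.exposure_mode = 'off'

gains = camera.awb_gains             # capture the current white balance...
camera.awb_mode = 'off'              # ...then freeze it
camera.awb_gains = gains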

The variable brightness and angle of the sun throughout the day also made things difficult, but this was meant to be part of the challenge. We used chromaticity coordinates to overcome it. However, the blue line was still harder to detect than the yellow line, and our use of edge detection meant that we weren't distinguishing the lines by colour, only by angle or by which side of the image they were found on. Colour could be used in combination with edge detection for robustness, but some consideration should also go towards whether different colours are necessary and, if so, which colours should be used.
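Colour thresholding of the kind the other QUT teams used is straightforward with OpenCV's inRange, and its output could be combined with the edge image to reject edges of the wrong colour; the HSV ranges below are placeholders that would need tuning on real track images.

import cv2
import numpy as np

# Placeholder HSV ranges for the two tape colours (OpenCV hue runs 0-179).
YELLOW_LO, YELLOW_HI = (20, 80, 80), (35, 255, 255)
BLUE_LO, BLUE_HI = (100, 80, 80), (130, 255, 255)

def colour_masks(bgr):
    # Binary masks for the yellow and blue tape. These could be dilated and
    # ANDed with the edge image so that only edges of the right colour (and
    # not, say, gaps between pavers) are kept.
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    yellow = cv2.inRange(hsv, np.array(YELLOW_LO, np.uint8),
                         np.array(YELLOW_HI, np.uint8))
    blue = cv2.inRange(hsv, np.array(BLUE_LO, np.uint8),
                       np.array(BLUE_HI, np.uint8))
    return yellow, blue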

Finally, as is always the case with these sorts of competitions, every single team needed to do much more testing beforehand. Testing outdoors, with similar tape, in conditions like those found on the day, is crucial for a positive result. Future events will hopefully have a bit more lead time, and with the experience the teams gained this year, I’m confident we’ll see some more great droids next time.

Thanks to Lindsay Watt who did the majority of the work on the droid while I was organising the event and agreed to share our secret methods in this article :)

