Top 5% in Autonomous Drone Challenge of Lockheed Martin

At Oakwood, we believe that one of the biggest technology transformations will be driven by autonomous Unmanned Aerial Vehicles (UAVs), also known as drones. Autonomous drones can make a difference in many fields: agriculture, security inspections, surveillance, rescue operations, wildlife protection, and more.

Lockheed Martin and the Drone Racing League partnered to organize the AlphaPilot challenge.

Out of almost 2,400 participants, our computer vision algorithms ranked in the top 5% of entries.

The primary goal of the computer vision challenge was to develop algorithms that could extract the flyable region of the racing gates used on drone racing courses. The assessment metrics were accuracy and speed, as latency is critical for any algorithm included in the control loop of an aerial vehicle.

We developed a custom image segmentation algorithm, a heavily modified combination of MobileNetV2 and the cutting-edge BiSeNet, which classifies each pixel of the image as belonging to a drone racing gate or not. Once every pixel is classified, we extract the shape of the racing gate and identify the flyable region using various custom algorithms and computer vision libraries such as OpenCV.

Our pixel classification algorithm is based on MobileNetV2 (for speed) and BiSeNet (for accuracy). The pixels in red mark the outline of the racing gate.

The classification model was created and trained in TensorFlow using over 10,000 images of drone racing gates captured at various angles, distances and lighting conditions. By applying data augmentation (varying rotation angle, flipping, and brightness and hue levels), we ended up with over 100,000 unique images. A sample of these images was labeled by hand to give the model a head start.

Example of a racing gate with flyable region as predicted by the trained model.

In the end, our model was able to identify a flyable region within 6 milliseconds in 99.5% of the test images, with an accuracy of over 80%.
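Segmentation accuracy of this kind is typically scored as intersection-over-union (IoU) between the predicted and ground-truth masks. The exact metric used by the challenge isn't detailed here, but a minimal IoU implementation looks like this:

```python
import numpy as np

def iou(pred, gt):
    """Intersection-over-union of two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0

gt = np.zeros((4, 4), dtype=np.uint8)
gt[1:3, 1:3] = 1          # 4-pixel ground-truth region
pred = np.zeros((4, 4), dtype=np.uint8)
pred[1:3, 1:4] = 1        # prediction overshoots by one column (6 pixels)
print(iou(pred, gt))      # intersection 4, union 6 -> 0.666...
```

An IoU above a fixed threshold counts the prediction as a successful detection, which is how a per-image success rate like 99.5% is usually computed.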

A sample of the annotations our algorithms were able to perform.

Unfortunately, only the top 9 teams qualified for the final phase, and we did not make the cut. However, we learned a lot from participating in the challenge and working with cutting-edge technology, and we are confident that these skills are crucial for the AI projects we are working on now and in the future.

Would you like to learn more about our approach? Feel free to contact us at


March 11th, 2019 | Categories: Artificial Intelligence, Autonomous drones, Computer Vision, Machine Learning

About the Author:

Koen is co-founder of Oakwood and specializes in Artificial Intelligence, Machine Learning and Natural Language Processing. He's highly proficient with Python and NodeJS.
