Aerial surveillance video provides key information for carrying out missions successfully in the resource-constrained environments where humanitarian and military operations often take place. State-of-the-art image analysis, however, usually requires computing power that is unavailable on the equipment military and humanitarian-aid personnel carry in edge environments. To make these analyses available so that personnel at the edge get the best information possible, the SEI developed techniques that enable real-time, state-of-the-art object detection on constrained, low-power edge platforms.
Field personnel executing military and humanitarian missions often rely on images taken from a variety of cameras, including cameras on drones that provide surveillance and reconnaissance information, to conduct their operations effectively and safely. The images these devices capture can reveal the presence and movement of objects such as people, buildings, vehicles, and more. This information improves operational personnel's awareness of conditions on the ground, and it can drive the success of their missions.
Detecting objects accurately and quickly in this footage typically involves machine learning (ML) models that identify what is in each image, along with software that relays those findings in a form that is quick and easy to understand. This kind of analysis can surface a wealth of intelligence by parsing the imagery and communicating the most pressing details first. However, it relies on complex neural networks that traditionally run on powerful servers.
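To make the workflow concrete, the sketch below runs a single video frame through a pretrained detector. The model choice (torchvision's SSDLite with a MobileNetV3 backbone) and the 0.5 score threshold are illustrative assumptions, not the SEI's actual pipeline:

```python
# A minimal sketch of single-frame object detection, assuming PyTorch and
# torchvision are available. The model and threshold are illustrative choices.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

# Load a pretrained lightweight detector (trained on COCO classes).
model = torchvision.models.detection.ssdlite320_mobilenet_v3_large(weights="DEFAULT")
model.eval()

def detect(frame_rgb, score_threshold=0.5):
    """Return (boxes, labels, scores) for detections above the threshold."""
    with torch.no_grad():
        predictions = model([to_tensor(frame_rgb)])[0]
    keep = predictions["scores"] >= score_threshold
    return (predictions["boxes"][keep],
            predictions["labels"][keep],
            predictions["scores"][keep])
```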
At the tactical and humanitarian edge, personnel carry lightweight equipment and devices that may not have the processing power required to provide this kind of analysis in real time. This means that state-of-the-art object detection is difficult to deploy—and often simply unavailable—in the edge environments where military and field personnel work.
To enable object detection on the lightweight devices that operational personnel carry in edge environments, the SEI applied recent advances in algorithmic compression techniques. These techniques shrink image-analysis models so that they can run on devices with far less processing power and memory than the powerful servers usually needed for state-of-the-art object detection.
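As an illustration of the general idea, rather than the SEI's specific method, the sketch below applies one widely used compression technique: post-training dynamic quantization, which stores weights as 8-bit integers instead of 32-bit floats. Convolution-heavy detectors typically require static quantization or pruning instead, and the function and file names here are hypothetical:

```python
# A hedged sketch of one common model-compression technique (post-training
# dynamic quantization). Illustrative only; not the SEI's actual pipeline.
import os
import torch

def quantize_and_report(model, fp32_path="model_fp32.pt", int8_path="model_int8.pt"):
    """Dynamically quantize a model's linear layers and report the size savings."""
    quantized = torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )
    torch.save(model.state_dict(), fp32_path)
    torch.save(quantized.state_dict(), int8_path)
    ratio = os.path.getsize(fp32_path) / os.path.getsize(int8_path)
    print(f"Storage reduced by a factor of {ratio:.1f}")
    return quantized
```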
As a result of our work, the devices operational personnel carry at the edge can better analyze images and video captured by drones and other sources. The SEI's solutions aim to give these devices the capability to process images efficiently in the field and deliver crucial information in real time.
The SEI tested these compression techniques in laboratory settings with aerial drone imagery on devices that our Department of Defense partners use in the field, including NVIDIA Jetson Nano and TX2 edge devices. Our testing demonstrated a nine-fold reduction in storage size, which contributed to a six-fold increase in object-detection speed at a cost of only three percentage points of object-detection accuracy. We observed this performance in real time, with our object-detection model identifying objects in video at usable frame rates: 10 frames per second (fps) on the Jetson Nano and 20 fps on the TX2.
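For readers who want to reproduce this kind of measurement, the sketch below shows one way to estimate end-to-end throughput in fps on a Jetson-class device. It assumes a detect() function like the one sketched earlier; the warm-up count of 10 frames is an arbitrary illustrative choice:

```python
# A sketch of measuring end-to-end detection throughput (frames per second).
import time

def measure_fps(detect, frames, warmup=10):
    """Time the detector over a sequence of frames and return frames per second."""
    for frame in frames[:warmup]:
        detect(frame)  # warm-up passes stabilize clocks and caches
    start = time.perf_counter()
    for frame in frames:
        detect(frame)
    elapsed = time.perf_counter() - start
    return len(frames) / elapsed
```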
To collaborate with us on improving operations in edge environments using algorithmic solutions, or to get help with your edge computing problems, reach out to us today! With our expertise in artificial intelligence (AI) and edge computing, we can help you enable AI and ML techniques at the tactical and humanitarian edge.