Apply online to license this technology
The U.S. Navy is integrating unmanned aerial systems (UASs) into the mix of aircraft landing and taking off from the nation’s aircraft carriers, requiring a system to communicate with these remote-controlled devices on busy, high-tempo flight decks. The Naval Air Warfare Center Aircraft Division (NAWCAD) at Lakehurst, New Jersey, has developed such a system, which interprets standardized visual cues from an aircraft handler and provides an output the UAS’s onboard computer can use to move the vehicle as directed by the ground handler.
This technology improves on previous attempts at gesture recognition that relied on pixel-by-pixel processing. Those early systems generated millions of numbers per incoming frame and could not keep pace with the speed of a hand gesture. The Navy’s new system reduces each incoming frame to between 8 and 13 numbers, minimizing the computational power required to interpret hand and body gesture commands. The signaler’s gestures are marked by LEDs placed on the hands and head, and optical filters on the camera lens ensure that only the wavelengths emitted by the LEDs reach the sensor. The motion analysis algorithms developed for this system also account for common analytical problems associated with rotation, translation, and shift of the human signaler with respect to the camera.
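To illustrate the general idea, and not the patented algorithm itself, the sketch below shows one plausible way a handful of LED marker positions from a filtered camera frame could be reduced to a short feature vector that is normalized for translation, rotation, and scale. The function name, marker count, and normalization scheme (centroid centering, RMS scaling, principal-axis alignment) are all illustrative assumptions; with four markers the sketch yields 8 numbers per frame, consistent in spirit with the 8-to-13-number range described above.

```python
import numpy as np

def gesture_features(led_points):
    """Reduce LED marker positions from one filtered frame to a small
    feature vector normalized for translation, scale, and rotation.

    led_points: (N, 2) array of pixel coordinates for LEDs detected on
    the signaler's hands and head (N is small, e.g. 4 markers).
    NOTE: hypothetical sketch, not the Navy's patented method.
    """
    pts = np.asarray(led_points, dtype=float)
    # Translation invariance: express markers relative to their centroid.
    centered = pts - pts.mean(axis=0)
    # Scale invariance: normalize by RMS distance from the centroid.
    scale = np.sqrt((centered ** 2).sum(axis=1).mean())
    if scale > 0:
        centered = centered / scale
    # Rotation normalization: align the dominant axis of the marker
    # cloud with +x using the principal component of the point spread.
    cov = centered.T @ centered
    eigvals, eigvecs = np.linalg.eigh(cov)
    principal = eigvecs[:, np.argmax(eigvals)]
    angle = np.arctan2(principal[1], principal[0])
    c, s = np.cos(-angle), np.sin(-angle)
    rot = np.array([[c, -s], [s, c]])
    normalized = centered @ rot.T
    # Flatten to a short feature vector: 2 numbers per marker.
    return normalized.ravel()
```

A downstream classifier would then match this small vector, frame by frame, against templates for the standardized deck-handling signals, rather than processing millions of raw pixel values.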
- Easily manufactured from low-cost, readily available components
- Integrates easily into aircraft environment and protocols
- Significant reduction in processing power
- Numerous potential applications for many remote-controlled devices
- Proof of principle established with a working lab prototype
- US patent 8,005,257 available for license