When 3-D sensors such as light detection and ranging (LIDAR) are employed in identification and recognition of people from both ground and aerial platforms, the derived point clouds of body shape often comprise low-resolution, disjoint, and irregular patches of points resulting from self-occlusions and viewing angle variations.
Many existing 3-D shape descriptors designed for shape query and retrieval cannot work effectively with these degraded point clouds because they depend on dense, smooth, full-body scans.
To increase the fidelity and usefulness of LIDAR data for human image recognition, an Air Force computer scientist has developed a process that models 3-D point clouds directly, achieving recognition performance better than common 2-D depth-image analysis methods under varying viewing angles and at small scales.
This 3-D sensing and modeling method for LIDAR data could augment existing 2-D technologies, especially in air-to-ground target recognition, where the much longer standoff distance makes human targets appear far smaller than those in most ground-level public benchmark datasets.
- Ideal for low-resolution LIDAR data
- Outperforms contemporary methods such as the 3-D discrete Fourier transform (DFT) and is at least comparable to others such as the 3-D discrete wavelet transform (DWT)
- Validation experiments demonstrated that the method maintains consistent performance across different elevation angles, which may be particularly significant for aerial platforms
- US patent 9,934,590 available for license
- Potential for collaboration with Air Force researchers
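To make the comparison concrete, the sketch below illustrates the kind of baseline 3-D DFT shape descriptor the patented method is evaluated against: voxelize a sparse point cloud into an occupancy grid, then keep low-frequency magnitudes of its 3-D DFT as a fixed-length feature vector. This is a generic illustration, not the patented process; the function names, grid size, and frequency cutoff are assumptions for demonstration.

```python
# Hedged sketch of a baseline 3-D DFT shape descriptor (not the patented
# method): voxelize a point cloud, then use low-frequency 3-D DFT
# magnitudes as a fixed-length feature vector. Grid size and frequency
# cutoff are illustrative choices.
import numpy as np

def voxelize(points, grid=16):
    """Map an (N, 3) point cloud into a binary occupancy grid."""
    pts = np.asarray(points, dtype=float)
    lo, hi = pts.min(axis=0), pts.max(axis=0)
    span = np.where(hi > lo, hi - lo, 1.0)          # avoid divide-by-zero
    idx = ((pts - lo) / span * (grid - 1)).astype(int)
    vox = np.zeros((grid, grid, grid), dtype=float)
    vox[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0
    return vox

def dft3_descriptor(points, grid=16, keep=4):
    """Low-frequency 3-D DFT magnitudes as a keep**3-length descriptor."""
    spectrum = np.fft.fftn(voxelize(points, grid))
    return np.abs(spectrum[:keep, :keep, :keep]).ravel()

# Usage: descriptors from a rigidly translated copy of the same cloud
# match, because the bounding-box normalization removes the translation.
rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3))
d1 = dft3_descriptor(cloud)
d2 = dft3_descriptor(cloud + 5.0)   # same shape, shifted
assert np.allclose(d1, d2)
```

A descriptor like this degrades on the disjoint, low-resolution patches described above, which is the gap the patented direct point-cloud modeling approach is reported to address.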