Binaural rendering is a signal processing technique for creating stereo audio signals which, when delivered through headphones, are perceived by a user to originate from a real-world sound source at a specific location. This technique can create very realistic auditory virtual realities for entertainment, gaming, and e-tourism purposes, as well as for more serious applications like education and training, remote telepresence, and spatial situation awareness displays.
The key to this virtual audio illusion is the head-related transfer function (HRTF), a response that characterizes how an ear receives a sound from a point in space. The sound is filtered by the size and shape of the head, outer ears, and ear canal, as well as by the density of the head and the size and shape of the nasal and oral cavities, boosting some frequencies and attenuating others. These filtering effects can be measured for an individual and used to render highly convincing virtual sound sources. When a user-specific HRTF cannot be used, binaural rendering performance degrades for most users: large localization errors become more frequent, sound sources externalize poorly, and the sense of presence in the auditory virtual environment is diminished.
The measurement of an individualized HRTF is time- and cost-prohibitive for the average user of binaural rendering technologies. Present measurement techniques require complex equipment and hard-to-find acoustically treated anechoic environments, making widespread use of true individualized HRTFs impractical for most commercial applications.
To provide a better auditory virtual reality, Air Force scientists and engineers have developed an improved methodology for selecting an HRTF for the binaural rendering of audio signals that are perceived by a user to originate from a real-world spatial location.
The method provides personalized HRTF selection from a database of candidate HRTFs using an evaluation-based strategy built on multiple relational models. These models can relate candidate HRTFs to each other, and a particular user to other users, so that only a subset of the candidate HRTFs requires evaluation. Candidate HRTFs can be evaluated according to one or more selection policies, and the relational models can be updated based on a user's actual responses to virtual audio signals rendered with a candidate HRTF.
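The role of the relational models can be illustrated with a minimal sketch. The function, data structures, and similarity values below are assumptions for illustration only, not the patented method: it scores each candidate HRTF by how well it served prior users who are similar to the new user, crediting acoustically similar HRTFs as well, so that only the top-scoring candidates need to be evaluated.

```python
# Hypothetical sketch: use two relational models (HRTF-to-HRTF similarity
# and user-to-user similarity) to pick a small subset of candidate HRTFs
# worth evaluating for a new user. All names and values are illustrative.

def select_evaluation_subset(hrtf_similarity, user_similarity, user_best, k=3):
    """Return k candidate HRTF indices to evaluate for a new user.

    hrtf_similarity: dict mapping (i, j) -> similarity between HRTFs i and j
    user_similarity: dict mapping prior-user id -> similarity to the new user
    user_best: dict mapping prior-user id -> that user's best-performing HRTF
    """
    scores = {}
    for uid, sim in user_similarity.items():
        best = user_best[uid]
        # Credit the HRTF that worked best for a similar prior user.
        scores[best] = scores.get(best, 0.0) + sim
        # Also credit HRTFs acoustically close to that user's best HRTF.
        for (i, j), s in hrtf_similarity.items():
            if i == best:
                scores[j] = scores.get(j, 0.0) + sim * s
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:k]

# Toy example: three prior users, four candidate HRTFs (indices 0-3).
subset = select_evaluation_subset(
    hrtf_similarity={(0, 1): 0.9, (2, 3): 0.8},
    user_similarity={"u1": 0.7, "u2": 0.4, "u3": 0.1},
    user_best={"u1": 0, "u2": 2, "u3": 3},
    k=2,
)
print(subset)  # [0, 1]: HRTF 0 and its near neighbor 1 score highest
```

Restricting evaluation to such a subset is what makes the approach practical: the user only has to respond to stimuli from a handful of candidates rather than the whole database.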
Selection starts with accessing a database of candidate HRTFs. A virtual audio signal, generated from a candidate HRTF paired with a target spatial location, is presented to the user through an audio apparatus. User response data are collected, and a performance value is predicted for each candidate HRTF based on the response data and the location pairings. The HRTF with the best predicted performance is then selected to render binaural signals for that user.
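The evaluate-and-select loop above can be sketched as follows. The response data, error metric, and performance predictor here are simplifying assumptions (mean angular localization error over a few trials), not the specific selection policy in the patent application.

```python
# Hypothetical sketch of the evaluation loop: present stimuli rendered with
# each candidate HRTF at known target locations, record where the user
# localizes them, and select the candidate with the best predicted
# performance (here, lowest mean angular error).

def angular_error(target_deg, response_deg):
    """Smallest absolute difference between two azimuths, in degrees."""
    d = abs(target_deg - response_deg) % 360.0
    return min(d, 360.0 - d)

def select_hrtf(candidates, trials):
    """candidates: list of HRTF ids under evaluation.
    trials: dict mapping id -> list of (target_azimuth, response_azimuth)
    pairs collected from the user. Returns the id with the lowest mean
    localization error, a stand-in for the predicted performance value."""
    def mean_error(hid):
        errs = [angular_error(t, r) for t, r in trials[hid]]
        return sum(errs) / len(errs)
    return min(candidates, key=mean_error)

# Toy data: candidate "B" yields responses closer to the target locations.
trials = {
    "A": [(30, 60), (90, 150), (270, 240)],
    "B": [(30, 35), (90, 80), (270, 265)],
}
best = select_hrtf(["A", "B"], trials)
print(best)  # "B": mean error ~6.7 degrees vs. 40 degrees for "A"
```

In a real system the responses would come from the user pointing at or reporting perceived source locations, and the per-candidate errors would also feed back into the relational models to refine future predictions.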
This approach may be used prior to, or as part of, any HRTF-based binaural audio rendering system. HRTFs selected in this manner are then imported into the binaural rendering system for improved performance.
If the virtual audio signals are replaced with physical loudspeakers (or with virtual simulations of a device and its settings), the system may also be used to select appropriate, commercially available hearing aids, hearing protectors, communication headsets, or device settings.
- Enhanced auditory localization in a virtual auditory environment
- More cost-effective than true individualized HRTF measurement
- Businesses can license US Patent Application 20180310115 to obtain commercial rights and technical data for product development
- License fees are negotiable; contact TechLink for more information
- Potential for collaboration with Air Force researchers