ZAPlab studies human auditory perception generally, with a focus on sound localization and spatial hearing. Specific areas of interest include the perception of sound source distance, the contributions of echoes and reverberation to percepts of source direction and distance, and the effects of adaptation, training, and input from other sensory modalities on spatial hearing. Research in ZAPlab relies heavily on auditory display technology that effectively simulates a "virtual" auditory space (VAS) using information about the acoustical characteristics of listeners' external ears and the environment. This technology not only enables precise study of the psychophysical relationships between the acoustic signals at the two ears and human spatial hearing abilities, but also allows for simulations and scientifically relevant signal manipulations that would be impossible in real listening situations. Below is a list of current and recent past projects in the lab, all of which benefit from VAS technology.
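The binaural filtering at the heart of VAS rendering can be sketched in a few lines: a monaural source signal is convolved with a pair of head-related impulse responses (HRIRs) measured at the listener's two ears for the desired source location. The function name and the toy two-tap HRIRs below are illustrative assumptions, not the lab's actual display code; a real display would use full measured, listener- and environment-specific responses.

```python
import numpy as np

def render_vas(source, hrir_left, hrir_right):
    """Render a monaural source at a virtual location by convolving it
    with the left- and right-ear HRIRs for that location."""
    left = np.convolve(source, hrir_left)
    right = np.convolve(source, hrir_right)
    return np.stack([left, right])   # shape: (2, len(source) + len(hrir) - 1)

# Illustrative placeholder HRIRs (hypothetical values): a real VAS display
# uses measured responses for each listener's ears and the environment.
fs = 44100
source = np.random.randn(fs // 10)       # 100 ms noise burst
hrir_left = np.array([1.0, 0.3])         # toy 2-tap filters; right ear is
hrir_right = np.array([0.5, 0.15])       # attenuated, as for a leftward source

binaural = render_vas(source, hrir_left, hrir_right)
```

Played over headphones, the interaural level and timing differences imposed by the two filters are what create the externalized spatial percept.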

  1. Psychophysical research on auditory/visual space perception and navigation. Our normal everyday perception of 3-dimensional space, and our abilities to interact and navigate within that space, require integration of information from multiple sensory modalities. However, the integration of distance/depth information in the intermediate range (2–20 m), where visual and auditory modalities provide the primary inputs, is not well understood. The long-term goal of this project is a complete understanding of how auditory and visual information is integrated to form distance percepts that can support accurate orientation and navigation in both normal and sensory-impaired populations. The objective of this project is to test and refine an innovative conceptual framework that represents the integration processes in a normal-hearing, normal-vision population. The central hypothesis guiding this framework is that distance perception, unlike the perception of direction, requires additional contextual or background information about the environment beyond that provided by the object itself. This background representation can act as a frame of reference for coding distance. For multisensory distance input, object, contextual, and background information must be integrated across modalities. Because this information is not necessarily all available at the same time, memory must be involved in the integration process. The rationale underlying the proposed research is that once a conceptual framework for auditory/visual distance integration has been specified and validated for normal populations, new approaches can be applied to understanding and minimizing the impact of sensory impairments on spatial perception and navigation. This hypothesis will be tested by pursuing two specific aims: 1) reveal an integrated auditory and visual reference frame for distance perception based on the environmental background, and 2) determine the role of working memory in auditory/visual distance perception. These aims will be addressed by testing human distance judgment and navigation performance under conditions in which the contributions of contextual information, background information, or working memory are manipulated. Virtual and real stimulus manipulation techniques will allow novel pairings of auditory and visual information that will be used to evaluate and refine the proposed framework. Development and validation of this framework will be a significant contribution because it will provide a better understanding of how humans successfully integrate auditory and visual information to perform spatial tasks in the environment. Moreover, it will provide a vehicle for future studies to advance the field of multisensory space perception. The proposed research is relevant to public health because it will lead to a better understanding of how auditory or visual impairment affects multisensory space perception. Ultimately, this knowledge may inform the development of new strategies for assisting or enhancing degraded spatial information to improve orientation and navigation abilities in visually- and/or hearing-impaired populations. [Work supported by NIH-NEI R21EY023767; He/Zahorik, Multi-PI]
  2. Perceptual processing of indirect sound: Most everyday acoustic environments produce indirect sound (e.g., echoes, reflections, reverberation), and the processes by which we hear auditory events in such environments are complex and not well understood. Under certain circumstances, indirect sound can facilitate speech communication and certain aspects of sound localization. In other circumstances, however, indirect sound can produce deficits in these and other abilities that can be particularly large for individuals with hearing impairment. Recent results have demonstrated that some aspects of indirect-sound processing appear to be affected by previous exposure to the acoustic environment, which suggests a form of perceptual adaptation. Although these adaptation effects can be substantial in situations with a single echo, they have not been evaluated in more realistic acoustic environments with complex patterns of indirect sound resulting from multiple echoes and reverberation. The long-term goal of this project is a complete understanding of the mechanisms and the potentially adaptive processes that subserve auditory localization and communication in everyday acoustic environments with complex patterns of indirect sound, as well as the potential impact of hearing loss on these processes. This project uses state-of-the-art VAS technology to simulate and manipulate realistic echoic listening environments. Knowledge gained from these studies will lead to an improved understanding of a significant public health problem: the impairment of communication and localization in acoustically reflective or reverberant environments resulting from hearing loss. [Work supported by NIH-NIDCD R01DC008168; Zahorik, PI]
  3. Enhancing the utility of spatial auditory displays for military applications: This project seeks to determine and quantify ways of improving VAS displays, particularly under non-optimal simulation conditions, such as when the display is not customized acoustically for an individual operator. One potential way to improve sound localization performance under such display conditions is to provide targeted operator training. Here, training techniques for sound localization that use other sensory modalities are tested and evaluated. Important questions include the relative importance of the sensory modalities used in the training techniques, the amount of training that would be required, and how long the training effects would last. The project also examines ways in which a spatial auditory display might be used to enhance auditory distance perception, which is known to be relatively poor even in real auditory environments, and to enhance accurate judgment of moving sound source trajectories. Funding for this project is also supporting the construction of a state-of-the-art anechoic chamber facility at the University of Louisville. [Work supported by AFOSR & KY DEPSCoR FA9550-08-1-0234; Zahorik, PI]
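The single-echo situation studied in the indirect-sound project above (project 2) can be simulated in its simplest form by adding a delayed, attenuated copy of the direct signal. This is a deliberately minimal sketch under stated assumptions (the function name and parameter values are illustrative, not the lab's stimulus code); realistic VAS room simulations render far denser patterns of reflections and reverberation.

```python
import numpy as np

def add_echo(direct, fs, delay_s, gain):
    """Add one delayed, attenuated copy of the direct sound,
    simulating a single discrete echo (the simplest indirect sound)."""
    delay = int(round(delay_s * fs))
    out = np.zeros(len(direct) + delay)
    out[:len(direct)] += direct        # direct sound
    out[delay:] += gain * direct       # echo: delayed and attenuated copy
    return out

fs = 44100
click = np.zeros(fs // 100)
click[0] = 1.0                         # ~10 ms impulse "click" stimulus
echoic = add_echo(click, fs, delay_s=0.01, gain=0.5)   # 10 ms echo at half amplitude
```

Varying `delay_s` and `gain` across trials is one simple way such stimuli can probe how echo perception changes with exposure to a given delay/attenuation pattern.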