Research

Research is an important part of any university program. It stimulates students to pursue answers to academic or clinical questions using a scientific approach. Students have the opportunity to work hand in hand with the principal investigators to develop their research skills. Research also benefits the patients with hearing loss that we see in our clinics. The information that we learn from these research projects makes us better able to serve those patients coming to us for help with their communication problems.

Current Projects

Listening effort for word and emotion recognition

  • Principal investigator: Shae Morgan, AuD, PhD

The amount of cognitive effort required for speech perception can be measured through pupil dilation, with greater dilation corresponding to increased listening effort. We are examining differences in listening effort, as measured by pupil dilation, when the listener repeats the target sentence (sentence recognition), reports the perceived emotion (emotion recognition), or does both, using emotional and non-emotional stimuli. Trends show that emotional stimuli and talker emotion recognition tasks result in larger pupil dilations than non-emotional stimuli and word recognition tasks. Simultaneous processing and reporting of both tasks (emotion and sentence recognition) yielded the largest pupil dilations, indicating greater effort for simultaneous processing. From this study, we hope to demonstrate that listening requires effort to process how speech is spoken in addition to what is said. Future studies will examine the effects of aging and hearing loss on word and emotion recognition.
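
As an illustration of the kind of comparison this study involves, here is a minimal sketch of summarizing peak pupil dilation by task condition; the condition labels and all values are placeholders, not study data, and the actual analysis pipeline is not described here.

    # Hypothetical sketch: comparing mean peak pupil dilation across task conditions.
    # Condition names and dilation values are illustrative, not the study's dataset.
    import numpy as np

    conditions = {
        "sentence_recognition": np.array([0.21, 0.25, 0.19, 0.23]),  # peak dilation (mm) per trial
        "emotion_recognition":  np.array([0.28, 0.31, 0.27, 0.30]),
        "both_tasks":           np.array([0.35, 0.38, 0.33, 0.36]),
    }

    for name, dilations in conditions.items():
        print(f"{name}: mean = {dilations.mean():.3f} mm, sd = {dilations.std(ddof=1):.3f} mm")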

Auditory emotion recognition

  • Principal investigator: Shae Morgan, AuD, PhD

Test parameters and stimuli optimization: This study aims to determine optimal testing parameters and stimuli to use for auditory emotion recognition testing. Specifically, we are looking at how stimuli influence listeners’ ability to recognize emotions in speech. We are measuring emotion recognition with different numbers of stimuli, different numbers of talkers, longer and shorter stimuli, and different numbers of emotion categories. This research project ultimately aims to develop an appropriate emotion recognition test to include as part of the clinical audiometric test battery for further assessment of a listener’s social and pragmatic communication beyond word recognition.
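
As a rough illustration of how emotion recognition might be scored across such conditions, here is a minimal sketch that computes percent correct and confusion counts for one listener; the emotion categories and trial data are assumptions for illustration only.

    # Hypothetical sketch: percent-correct scoring and confusion counts for an
    # emotion recognition test. Categories and responses are made up.
    from collections import Counter

    emotions = ["happy", "sad", "angry", "neutral"]

    # (presented, responded) pairs for a single listener (illustrative data)
    trials = [("happy", "happy"), ("sad", "sad"), ("angry", "happy"),
              ("neutral", "neutral"), ("sad", "neutral"), ("angry", "angry")]

    correct = sum(1 for presented, responded in trials if presented == responded)
    print(f"Percent correct: {100 * correct / len(trials):.1f}%")

    # Confusion counts: how often each presented emotion drew each response
    confusions = Counter(trials)
    for presented in emotions:
        print(presented, [confusions[(presented, responded)] for responded in emotions])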

Cochlear implantation and emotion recognition

  • Principal Investigator: Shae Morgan, AuD, PhD

This project assesses whether the placement (i.e., the depth) of a cochlear implant electrode array influences word and/or emotion recognition. In conjunction with funding from MEDEL, we will measure the insertion angle of implants post-operatively and correlate this measure with outcomes such as word and emotion recognition scores.
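
As a rough illustration of the planned correlation between insertion angle and recognition outcomes, here is a minimal sketch; the angles and scores are placeholders, not study data.

    # Hypothetical sketch: correlating post-operative electrode insertion angle
    # with emotion recognition scores. All values are illustrative.
    from scipy.stats import pearsonr

    insertion_angle_deg = [360, 420, 540, 600, 450, 510]   # made-up insertion angles
    emotion_recognition_pct = [55, 60, 48, 42, 58, 50]     # made-up scores

    r, p = pearsonr(insertion_angle_deg, emotion_recognition_pct)
    print(f"Pearson r = {r:.2f}, p = {p:.3f}")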

Otitis Media Light Treatment (OMeLiT)

  • Principal Investigator: Shae Morgan, AuD, PhD

This study investigates the use of high-energy visible light as a potential treatment option for Otitis Media. With collaborators in Engineering and Biology, we’re measuring the effect of the light on different strains of bacteria and viruses. We’re comparing our data against antibiotics to evaluate light as a potential alternative treatment for this common disease.

Will an Internet-Based Self-Management Program Increase the Uptake of Audiology Services in Adults with Unaddressed Hearing Impairment? A Feasibility Study. The Oticon Foundation (awarded to Dr. Jill Preminger, 2017)

  • Principal Investigator: Laura Galloway, AuD

The aim of this project is to develop a proof-of-concept internet-based intervention designed to increase the percentage of adults who visit an audiologist after failing a hearing screening. We developed a program called “iManage (my hearing loss)” using principles of the Health Belief Model, Participatory Design, and Decision Coaching. We performed usability testing to ensure the program is easy to navigate and understandable for a wide range of audiences. We are ready to begin feasibility testing of the program once in-person hearing screenings in the community can resume.

Does the Screening Experience Influence the Uptake of a Decision Coaching Guide for Adults with Unaddressed Hearing Impairment? An Effectiveness Study. The Retirement Research Foundation (awarded April 2020)

  • Principal Investigator: Laura Galloway, AuD

The objective of this project is to conduct an effectiveness study, designed using the RE-AIM framework, to determine the type of screening experience that triggers uptake of an intermediary step, a decision coaching guide called iManage (my hearing loss), following a failed hearing screening. We are recruiting subjects from four arms that vary in terms of trust, cues to action, and accessibility to determine how these factors affect uptake of the iManage program. We hypothesize that hearing screenings performed at locations that are trustworthy, provide cues to action, and are easily accessible will yield higher uptake not only of the decision coaching guide but also of audiology services and hearing loss management options (e.g., hearing aids), compared with screenings performed at locations that are less trustworthy, offer fewer cues to action, and are less accessible.
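
As an illustration of how uptake rates might be compared across the four arms, here is a minimal sketch using a chi-square test of independence; the arm labels and counts are placeholders, not study data.

    # Hypothetical sketch: comparing iManage uptake proportions across four
    # screening arms. Counts are illustrative only.
    from scipy.stats import chi2_contingency

    arms = ["trusted/accessible", "trusted/less accessible",
            "less trusted/accessible", "less trusted/less accessible"]
    # Rows = arms, columns = [took up iManage, did not take up iManage]
    counts = [[30, 20],
              [22, 28],
              [18, 32],
              [12, 38]]

    chi2, p, dof, expected = chi2_contingency(counts)
    for arm, (yes, no) in zip(arms, counts):
        print(f"{arm}: uptake rate = {yes / (yes + no):.0%}")
    print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")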

Development of the Hearing Loss Support Scale

  • Principal Investigator: Laura Galloway, AuD

Refining wideband acoustic immittance testing in newborns

  • Principal investigator: Hammam AlMakadma, AuD, PhD
  • Collaborator: Beth Rosen, AuD Student, University of Louisville

In the first two days of life, outer ear vernix and residual middle ear mesenchymal tissue obstruct the conductive pathways of sound. The presence of these naturally occurring substances impacts the outcomes of newborn hearing screening tests and contributes to delays in diagnosis of permanent congenital hearing loss. Assessment of the conductive pathways of newborns at birth can improve the timeliness of diagnosis/intervention for newborns with hearing loss.

The goal of this line of research is to improve the assessment of the conductive pathway in newborns using wideband acoustic immittance (WAI). Testing with WAI provides a more comprehensive and realistic view of the function of the conductive pathway than single-tone admittance testing with tympanometry.

  • Current areas of investigation include:
    • Development and validation of criteria for proper probe-tip fit
      • Proper fitting of the probe tip in the ear canal is important for obtaining uncontaminated measurements. Given the small dimensions of newborn ears and frequent head movement, it is important for clinicians/hearing screening technicians to determine whether measurements are contaminated by improper fitting or probe-tip slippage.
      • The goal of this project is to develop and validate WAI-based criteria that alert the tester when measurements are affected by an improper probe-tip fit (a minimal sketch of such a criterion check follows this list).
    • Characterization of normal WAI responses
      • One issue that affects the sensitivity and specificity of WAI measures in detecting alterations of the conductive pathway is the large normative range.
      • The goal of this project is to investigate various approaches by which normal responses can be detected and assessed. Identification of normal responses based on unique characteristics may circumvent the issue of large norms.
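
The sketch below illustrates the kind of criterion check this project is working toward; the absorbance values, frequency range, and cutoff are assumptions for illustration, not validated clinical criteria.

    # Hypothetical sketch: flagging a possibly poor probe-tip fit from a wideband
    # absorbance measurement. The low-frequency criterion and cutoff are assumed.
    import numpy as np

    frequencies = np.array([250, 500, 1000, 2000, 4000, 8000])   # Hz
    absorbance = np.array([0.65, 0.55, 0.45, 0.50, 0.60, 0.55])  # made-up measurement

    # Unusually high low-frequency absorbance can suggest a leak around the probe
    # tip; here we compare the mean absorbance below 1000 Hz against a cutoff.
    low_freq = frequencies < 1000
    leak_suspected = absorbance[low_freq].mean() > 0.60

    print("Possible probe-tip leak" if leak_suspected else "Probe fit looks acceptable")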

Use of wideband acoustic immittance in differential diagnosis of otologic diseases

  • Principal investigator: Hammam AlMakadma, AuD, PhD
  • Collaborator: Jerry Lin, MD, PhD

Assessment of the middle ear over a wide range of frequencies provides a more complete picture of its function and its resonant properties than testing at a single discrete frequency. This is one reason WAI testing can be superior to traditional tympanometry in the diagnosis of otologic diseases. WAI measures are sensitive to subtle and etiology-specific changes in middle ear function.

The overarching goal of this line of research is to develop clinical protocols, including WAI testing, that would improve pre-surgical diagnosis of otologic pathology.

  • Areas of investigation include:
    • Diagnosis of ossicular pathologies in adult patients undergoing surgical intervention
    • Diagnosis of middle ear fluid, middle ear infection, or negative middle ear pressure in pediatric patients
    • Assessment of vestibular diseases of cochlear origin, including endolymphatic hydrops, superior canal dehiscence, and perilymphatic fistula

Big data analytics in the management of hearing impairments

  • Principal investigator: Yonghee Oh, PhD

Hearing loss (HL) is the most common sensory deficit and one of the most prevalent chronic diseases, affecting over 5% of the world's population. Effective management of HL requires appropriate onsite audiological evaluations and ongoing treatment services such as HL check-ups, hearing device (i.e., hearing aid, cochlear implant) adjustments, and provision of related rehabilitation services. The purpose of this study is to analyze heterogeneous data, including hearing device usage, noise episodes causing threshold shifts, and audiological, physiological, cognitive, clinical, medication, personal, behavioral, lifestyle, occupational, and environmental data. Analyzing these data with big data analytic techniques can enable better HL management and support investigation of whether HL relates to other comorbidities and contextual factors, and of the patterns of such relations. This project will be a collaborative work with audiologists in the University of Louisville Physicians – Hearing & Balance Clinic and the Heuser Hearing Clinic.
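
As a toy illustration of joining two of the data sources named above, here is a minimal sketch; the table names, columns, and values are assumptions, and the real project would operate at a far larger scale.

    # Hypothetical sketch: merging hearing device usage logs with clinical records
    # and summarizing daily use by comorbidity status. All data are illustrative.
    import pandas as pd

    device_usage = pd.DataFrame({
        "patient_id": [1, 2, 3, 4],
        "daily_use_hours": [9.5, 3.0, 12.0, 6.5],
    })
    clinical = pd.DataFrame({
        "patient_id": [1, 2, 3, 4],
        "has_comorbidity": [True, False, True, False],
    })

    merged = device_usage.merge(clinical, on="patient_id")
    print(merged.groupby("has_comorbidity")["daily_use_hours"].mean())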

Perceptual cues and their interaction in auditory grouping and stream segregation

  • Principal investigator: Yonghee Oh, PhD

In multi-talker listening environments, the combination of different voice streams can distort each source’s individual message, causing deficits in comprehension. Voice characteristics, such as pitch, timbre, and loudness, are major dimensions of auditory perception and play a vital role in grouping and segregating incoming sounds based on their acoustic properties. The purpose of the current study is to investigate how perceptual cues such as pitch, timbre, and loudness affect perceptual integration and segregation of complex-tone sequences within an auditory streaming paradigm in normal-hearing listeners, and how these effects differ in hearing-impaired listeners, including hearing aid and/or cochlear implant users. Additionally, this project aims to further quantify the boundaries between grouping and segregation (the fusion-fission boundaries) in all three perceptual domains. This project will be conducted in Dr. Oh’s laboratory located in the Heuser Hearing Institute.
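
As an illustration of how a fusion-fission boundary might be quantified, here is a minimal sketch that fits a logistic psychometric function to "two streams" responses as a function of pitch difference; the response proportions are placeholders, not study data.

    # Hypothetical sketch: estimating a fusion-fission boundary from the pitch
    # difference at which listeners switch to reporting two streams. Data are made up.
    import numpy as np
    from scipy.optimize import curve_fit

    def logistic(x, midpoint, slope):
        """Proportion of 'two streams' responses as a function of cue difference."""
        return 1.0 / (1.0 + np.exp(-slope * (x - midpoint)))

    pitch_diff_semitones = np.array([0, 1, 2, 4, 6, 8, 12])
    prop_two_streams = np.array([0.05, 0.10, 0.25, 0.55, 0.80, 0.92, 0.98])

    (midpoint, slope), _ = curve_fit(logistic, pitch_diff_semitones,
                                     prop_two_streams, p0=[4.0, 1.0])
    print(f"Estimated fusion-fission boundary: {midpoint:.1f} semitones")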

Interaction between voice-gender difference and spatial separation in release from masking in multi-talker listening environments

  • Principal investigator: Yonghee Oh, PhD
  • Collaborator: Pavel Zahorik, PhD

In multi-talker listening situations, there are two major acoustic cues that can enhance speech segregation performance: 1) differences in voice characteristics between talkers (e.g., male versus female talkers); and 2) spatial separation between talkers (e.g., co-located versus spatially separated talkers). This enhancement is referred to as release from masking. The purpose of this study is to systematically investigate potential interactions between voice-gender difference and spatial separation cues to explore how they influence the relative magnitude of masking release in normal-hearing listeners, and how they differ in hearing-impaired listeners, including hearing aid and/or cochlear implant users. The project will be conducted in Dr. Oh’s laboratory located in the Heuser Hearing Institute and in Dr. Zahorik’s laboratory, a state-of-the-art anechoic chamber facility, located on the UofL Belknap Campus.
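
As an illustration of how masking release is typically quantified, here is a minimal sketch that expresses it as the improvement in speech reception threshold (SRT) relative to a baseline condition; the SRT values are placeholders, not measured data.

    # Hypothetical sketch: masking release as the SRT improvement (in dB) relative
    # to a same-gender, co-located baseline. All values are illustrative.
    baseline_srt = -2.0   # dB SNR, same-gender and co-located talkers (made up)
    conditions = {
        "voice-gender difference only": -6.0,
        "spatial separation only":      -7.5,
        "both cues combined":          -12.0,
    }

    for name, srt in conditions.items():
        release_db = baseline_srt - srt  # lower (more negative) SRT = better performance
        print(f"{name}: masking release = {release_db:.1f} dB")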

Multisensory benefits for speech perception in complex listening environments

  • Principal investigator: Yonghee Oh, PhD

Speech perception is a complex and multidimensional process. The inputs delivered to different sensory organs provide us with complementary speech information about the environment. The overall goal of this study is to establish which multisensory characteristics can facilitate speech perception and to describe the neural biomarkers associated with this benefit. The central hypothesis is that dynamic temporal visual/tactile information provides benefits for speech perception, independent of articulation cues. This hypothesis will be tested by tracking temporal cues of visual/tactile speech synchronized with auditory speech, cues that can play a key role in speech perception. This research will increase our understanding of how multi-sensory inputs affect speech perception in noisy environments. The findings may be applied to future rehabilitation approaches using auditory training programs to enhance speech perception in noise, and have implications for potential technological enhancements to speech perception with hearing devices, in particular the integration of a non-acoustic signal. The project will be conducted in Dr. Oh’s laboratory located in the Heuser Hearing Institute.
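
As an illustration of how a temporal cue might be derived from auditory speech to drive a synchronized visual or tactile signal, here is a minimal sketch that extracts and smooths an amplitude envelope; the synthetic signal and filter settings are assumptions for illustration.

    # Hypothetical sketch: extracting the slow amplitude envelope of a speech-like
    # signal, which could then be presented as a visual or tactile cue. The signal
    # here is synthetic; a real study would use recorded speech.
    import numpy as np
    from scipy.signal import hilbert, butter, filtfilt

    fs = 16000                                  # sample rate (Hz)
    t = np.arange(0, 1.0, 1 / fs)
    speech_like = np.sin(2 * np.pi * 200 * t) * (1 + np.sin(2 * np.pi * 4 * t))

    envelope = np.abs(hilbert(speech_like))     # instantaneous amplitude envelope

    # Low-pass the envelope (~8 Hz cutoff, roughly the syllable rate of speech)
    b, a = butter(4, 8 / (fs / 2), btype="low")
    slow_envelope = filtfilt(b, a, envelope)

    print(f"Envelope range: {slow_envelope.min():.2f} to {slow_envelope.max():.2f}")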