Research

Research at dB SPL is motivated by the problems that hearing-impaired listeners encounter in daily life, with the ultimate goal of proposing solutions to these problems. To achieve this goal, we conduct basic science research to understand the mechanisms of hearing and its failures, as well as clinical research with patients. Our research is multidisciplinary, involving behavioral and cognitive sciences as well as engineering approaches. We work closely with users of hearing aids and cochlear implants, the manufacturers of these devices, and local and international collaborators.

A selection of our projects is described below; for a more comprehensive list, please see the People section. Both lab members and non-members can find useful tools and other materials on our dB SPL Lab website.

Ongoing Projects

Development of voice, speech, and language processing in normal-hearing and cochlear-implanted children of school age

PhD student: L. Nagels
Co-promotor: Prof. P. Hendriks (RUG Semantics)
Collaborators: Dr. E. Gaudrain (CNRS Lyon), Dr. D. Vickers (UCL)
Past students: BCN ReMa student J. Libert; students E. Ibiza, I. van Bommel
Funded by the Faculty of Arts, a VICI grant (PI: Hendriks), and VICI and VIDI grants (PI: Başkent)
Web: https://www.picka-onderzoek.nl/

The PICKA project focuses on the perception of voice and indexical cues in children and adults with normal, impaired, or cochlear-implant (CI) hearing. Voice cues are an important component of speech and language processing, complementing the semantic content (the meaning of the words), for example by conveying vocal emotion. Voice cues can also enhance speech segregation and comprehension in cocktail-party listening, the situations in which hearing-impaired listeners have the most difficulty.

Our previous research has shown that the perception of vocal-tract cues, i.e., the distribution of formants, which is determined by the speaker's size, is very limited in CI users. To better understand the nature of this limitation, we will explore the developmental trajectory of voice pitch and vocal-tract length perception in normal-hearing children and children with CIs. The performance of the normal-hearing children will serve as a baseline for the normal developmental trajectory of voice perception, against which individual children with CIs can be compared. This allows us to identify any developmental differences or delays in children with CIs. In addition to their perceptual abilities, we will also assess children's ability to use voice cues in three speech-related tasks: identifying vocal emotion, categorizing voice gender, and perceiving speech in the presence of competing background speech. As voice pitch is the most dominant cue in infant-directed speech, perceptual sensitivity to this cue is expected to develop early in life. Sensitivity to vocal-tract length cues requires exposure to multiple talkers and is thus expected to develop at a later age. Taken together, this implies a hierarchy in the development of these voice characteristics. If the vocal-tract length information available to children with CIs is too distorted to be used in speech-related tasks, these children may benefit from specific training.

Perception of voice and speech in cochlear implants and hearing impairment

PhD students: N. El-Boghdady (completed), F. Arts
Audiologist in training: M. Blom
Visiting PhD student: B. Zobel (completed)
Researchers, co-supervisors: Dr. A. Wagner, Dr. T. Tamati, Dr. T. Koelewijn, Dr. L. Rachman
Collaborators: Dr. E. Gaudrain (co-supervisor; CNRS Lyon), Dr. W. Nogueira (Medical School Hannover)
Funded by VIDI and VICI grants, with partial funding from Advanced Bionics and GSMS

Like a fingerprint, each person's voice is characteristic of that individual and can be used for identification. Unlike fingerprints, however, the voice is involved in speech communication, and listeners use this information to identify a speaker or to infer characteristics of the person who is talking. This is particularly useful when many talkers speak at the same time, as in a crowded environment, or when no visual information is available, as on the phone. These two situations are particularly difficult for cochlear implant users. Recent work from our group has shown that cochlear implant users have a clear deficit in the perception of some vocal characteristics, which certainly contributes to the difficulties described above. Our task now is to understand the origins of this deficit and to explore new techniques to restore appropriate perception of voices.

There are a number of vocal characteristics that can be used to identify a speaker. However, two of them appear to be most important: one is the pitch of the voice, and the other is linked to the size of the speaker. A violin and a cello can play the same note while having bodies of different sizes, producing two sounds that can be easily distinguished despite having the same pitch. The difference in timbre between two such sounds originates from the difference in the size of the resonating bodies of the two instruments. In the human speech production system, the size of the resonating body is characterized by the length of the vocal tract, which similarly affects the timbre of the voice.
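To make the size analogy concrete: in the textbook uniform-tube approximation of the vocal tract, the resonances (the formants) are inversely proportional to vocal-tract length, so shortening the tract scales the whole spectral envelope upward by a common factor. A minimal sketch of this physics (an illustration only, not the analysis used in the project):

```python
# Uniform-tube ("quarter-wavelength") approximation of vocal-tract resonances:
# formant n of a tube of length L, closed at one end, is F_n = (2n - 1) * c / (4L),
# so all formants scale inversely with vocal-tract length.

C = 35000.0  # approximate speed of sound in warm, humid air (cm/s)

def formants(vocal_tract_cm, n_formants=3):
    """First few resonance frequencies (Hz) of a uniform tube of the given length."""
    return [(2 * n - 1) * C / (4 * vocal_tract_cm) for n in range(1, n_formants + 1)]

print(formants(17.5))  # ~17.5 cm (typical adult male): about 500, 1500, 2500 Hz
print(formants(14.5))  # shorter tract: every formant shifted up by the factor 17.5/14.5
```

It is this common scaling factor, rather than any single formant, that carries the vocal-tract length (speaker size) cue.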

The way these two dimensions, pitch and vocal-tract length, are coded in the acoustic signal is relatively well known. Furthermore, the reduced ability of cochlear implants to deliver pitch information has been well documented. However, very little is known about the perception of size (or vocal-tract length) information in cochlear implant users. The first stage of our research thus consists of evaluating the sensitivity of CI users to this voice characteristic. Our results indicate that while voice pitch is sufficiently preserved in the implant to allow gender categorization, vocal-tract length information is not available to cochlear implant users. As shown in some of our previous research, this can lead to mistaking a male speaker for a female one, and causes difficulties in separating concurrent voices. We are undertaking a thorough exploration of the parameter space of pitch, vocal-tract length, and signal-to-noise ratio (SNR) to determine what degree of separation in voice characteristics listeners require to pick out a single voice in a multispeaker environment, and how this is affected by hearing loss.

We have also been exploring methods to improve the perception of these vocal characteristics in cochlear implant users, focusing in particular on vocal-tract length, as this dimension seems to be the most problematic. Because the perception of vocal-tract length is based solely on spectral information (whereas pitch information can also be conveyed temporally), the accuracy of the spectral information is of primary importance for this dimension. Consequently, our first approach explored how fine-tuning the frequency allocation map, i.e., the way the spectral information is distributed along the electrode array in the cochlea, could improve vocal-tract length perception. Our results indicate that the frequency allocation map has an effect on vocal-tract length discrimination in implant simulations. In addition, we have tested experimental coding strategies, such as Spectral Contrast Enhancement (in collaboration with the Advanced Bionics European Research Center and Dr. Nogueira, University Medical Center Hannover), to assess whether they could improve vocal-tract length discrimination. Related techniques, such as current focusing combined with current steering, while showing little benefit for the perception of speech in silence, may also provide an advantage for vocal-tract length perception, and thus for voice identification.
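As an illustration of what a frequency allocation map is, the sketch below pairs log-spaced analysis bands with electrode positions converted to characteristic frequencies via the published Greenwood (1990) place-frequency function. The 12-electrode array, its cochlear positions, and the 200-8000 Hz band edges are hypothetical values chosen for the example, not the maps tested in the project:

```python
import numpy as np

def greenwood_hz(x):
    """Greenwood (1990) place-frequency function for the human cochlea.
    x: relative distance from the apex (0 = apex, 1 = base)."""
    A, a, k = 165.4, 2.1, 0.88
    return A * (10.0 ** (a * x) - k)

n_electrodes = 12                             # hypothetical electrode array
places = np.linspace(0.4, 0.9, n_electrodes)  # assumed cochlear positions
centers = greenwood_hz(places)                # characteristic frequency per electrode place

# One possible allocation: log-spaced analysis bands assigned from apical (low)
# to basal (high) electrodes. The mismatch between each band and the
# characteristic frequency at its electrode's place is what fine-tuning the
# frequency allocation map manipulates.
edges = np.geomspace(200.0, 8000.0, n_electrodes + 1)
for e in range(n_electrodes):
    print(f"electrode {e + 1:2d}: band {edges[e]:6.0f}-{edges[e + 1]:6.0f} Hz, "
          f"place CF ~ {centers[e]:6.0f} Hz")
```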

Music and cochlear implants

Researchers: Dr. E. Harding, Dr. C.D. Fuller
Collaborators: Dr. R.H. Free, Ir. A. Maat, Ms. G. Nijhof, Dr. R. Harris (Prince Claus Conservatory), Dr. E. Gaudrain (CNRS Lyon), Prof. Dr. B. Tillmann (CNRS Lyon), Mr. B. Dijkstra (NHL Stenden), Mr. S. de Rooij (NHL Stenden)
Funded by VICI grant and Dorhout Mees Foundation

In cochlear implants, improvements in device design have produced good speech understanding in quiet, but speech perception in noise and the enjoyment of music are still not satisfactory. Cochlear-implant users rank music as the second most important acoustical stimulus in their lives, after speech. Improving music enjoyment and perception, together with speech perception in noise, could therefore significantly improve quality of life for cochlear-implant recipients, and the field is accordingly focusing on improving music perception.

Music is also relevant for CI users because music training has recently been shown to improve the perception of speech in noise as well. This has been explained by a transfer of learning from music training to speech perception, likely resulting from overlapping neural networks specialized for music and for speech. In this project, we have explored both music perception and appreciation among our CI users and the potential benefits of music training on the perception of music and speech, testing CI users and normal-hearing (NH) listeners presented with CI simulations, with both groups further divided into musically trained and untrained participants.

In this intervention study for postlingually deafened adult (and older adult) CI users, we will conduct a randomized controlled trial with a music-lesson intervention, a computer-game control intervention, and a no-intervention control. The main hypothesis is that the music intervention, learning to play a musical instrument using GAME (guided audiomotor exploration), an improvisation-based audiomotor approach, can improve cochlear implantation outcomes related to speech and music perception, music enjoyment, and quality of life. In a novel approach, our Serious Gaming collaborators (NHL Stenden) are designing a control intervention that matches the format of the GAME piano lessons (such as weekly lessons with an instructor) while teaching the serious game 'Minecraft' (building a virtual world). In this way, we hope to isolate the specific impact of music on speech and music perception, as well as on quality of life. We have started piloting both the GAME and Minecraft interventions and hope to start clinical trials in January 2021.

AURORA: Auditory Robotics for Research Applications

PhD student: L. Meyer
Researchers, co-supervisors: Dr. G. Araiza Illan, Dr. L. Rachman
Funded by VICI grant (Başkent), GSMS, Kolff Institute, Heinsius Houbolt Foundation.
More info: Facebook, Instagram

The Auditory Robotics for Research Applications (AURORA) team, based at the University Medical Center Groningen, Netherlands, explores the use of humanoid robots as a rehabilitative, diagnostic and testing interface for auditory perception research involving children and adults.

Luke Meyer's PhD project, R2D2 for KNO: Use of a Humanoid Robot for Rehabilitation of Deaf Individuals, addresses three primary objectives: how a humanoid robot can be used as 1) an effective interface for testing auditory perception using the pre-existing PICKA test battery, 2) a rehabilitative platform for cochlear implant users, supporting existing rehabilitative procedures, and 3) an emotional support platform through various autonomous interactions.

Perception of realistic forms of speech with cochlear implants

Researchers: Dr. T. Tamati, Dr. T. Koelewijn
Collaborator: Dr. E. Janse (MPI)
Funded by VICI, VIDI grants (Başkent) and VENI grant (Tamati), Rosalind Franklin Fellowship.

Speech communication is an important part of daily life, providing us with a way to connect with other people and the surrounding world. Yet everyday, real-life listening conditions can be very challenging. Listeners must deal with a great deal of natural speech variability, often in the presence of background noise and competing talkers. For example, the pronunciation of a word differs across talkers and social groups, as well as across environmental and social contexts. For normal-hearing listeners, speech understanding is successful and robust despite this variability: their highly flexible perceptual systems allow them to adapt to and learn differences in talkers' voices, regional or foreign accents, and speaking styles to maintain robust communication.

While cochlear implants (CIs) are successful in restoring hearing to profoundly deaf individuals, implant users rely on input signals that are heavily reduced in acoustic-phonetic detail. As a result, the adverse listening conditions commonly encountered in daily life appear to be particularly detrimental to successful speech understanding in these users. However, most current clinical and research approaches assess the speech recognition abilities of patients with ideal speech, i.e., speech carefully produced by a single talker with no discernible accent. In contrast to ideal speech, highly variable, real-life speech imposes greater perceptual and cognitive demands on listeners, resulting in more challenging or effortful speech recognition, measurable as lower accuracy scores, increased response times, or an increase in pupil size. As a consequence, and in the absence of sensitive clinical tools, the effects of talker variability on this population are still largely unknown.
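As a concrete example of the pupillometry measure mentioned above, listening effort is commonly summarized as the baseline-corrected peak pupil dilation within a trial. A minimal sketch, in which the one-second baseline window and the choice of the peak (rather than mean) statistic are illustrative assumptions, not this project's exact analysis:

```python
import numpy as np

def peak_pupil_dilation(trace, fs, baseline_s=1.0):
    """Baseline-corrected peak pupil dilation for one trial.

    trace: pupil diameter samples, with the pre-stimulus baseline period first.
    fs: sampling rate of the eye tracker in Hz.
    """
    n_base = int(baseline_s * fs)
    baseline = np.mean(trace[:n_base])        # mean diameter before speech onset
    return np.max(trace[n_base:] - baseline)  # peak dilation relative to baseline

# Larger dilation at the same intelligibility score suggests greater listening effort.
```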

We urgently need to achieve better outcomes for implant recipients, but our current lack of knowledge of real-life challenges, outside the lab or clinic, remains a critical barrier to the development of new clinical tools and interventions. To fill the current knowledge gap and overcome the resulting clinical barrier, the overall aim of this project is to systematically investigate the effects of talker variability on speech understanding and listening effort by cochlear implant users.

Voice and speech perception in early-deafened late-implanted cochlear-implant users

Researcher: Dr. C.D. Fuller
Collaborator: Dr. R.H. Free
Funded by Mandema, VICI

Individuals who are deafened early in life but not implanted within a short period after the onset of deafness are a relatively new cochlear-implant (CI) population, for whom new clinical protocols are being implemented and evaluated. For research, on the other hand, this population presents an excellent opportunity to investigate voice and speech perception after a long duration of auditory deprivation, as well as perceptual re-learning following implantation.

The aim of the project is two-fold: on the one hand, this implant user group will serve as a model of auditory deprivation, providing scientific knowledge on voice and speech perception; on the other hand, the project will provide evidence for extending implantation to a new population, supporting clinical practice. Some research indirectly indicates that this early-deafened, late-implanted (EDLI) group may benefit from cochlear implantation, but research to date remains relatively limited. Based on previous research, we expect positive outcomes both in hearing abilities, such as voice and speech perception, and in psychological factors. It remains unknown, however, how this population performs compared to traditionally implanted users; such knowledge would provide strong scientific evidence for extending the inclusion criteria to atypical patients, which could eventually help more deaf individuals.

Perception of L2 prosody in cochlear implant simulations

PhD student: Drs. M.K. Everhardt
Co-promotors: Prof. dr. W. Lowie (RUG Applied Linguistics), Prof. dr. D. Başkent (UMCG ENT)
Co-supervisor: Dr. A. Sarampalis (RUG Psychology)
Collaborator: Dr. M. Coler (RUG Campus Fryslân)
Funded by RUG Faculty of Arts

This PhD project explores how the perception of prosody in a non-native language is influenced by a cochlear implant (CI) simulation. Linguistically speaking, prosody (i.e., suprasegmental speech elements involving variation in fundamental frequency (f0), intensity, duration, and spectral characteristics) is an important source of information about the syntactic and semantic properties of speech, which is especially important when learning a second language (L2). It contributes to a listener's ability to determine boundaries between syllables and words, and it affects the interpretation and comprehension of speech through, for instance, (word) stress or sentence type. Degradation of fine spectrotemporal detail complicates the perception of prosody and can consequently lead to errors in the comprehension and processing of utterances; CI users and CI-simulation listeners are therefore at a disadvantage compared to NH listeners when processing prosody. In this project, we investigate the influence of CI simulations on the perception of L2 prosody at the word and sentence level. That is, we investigate how accurately and efficiently young native Dutch learners of English perceive prosody in spoken English words and sentences degraded by a CI simulation, compared to non-CI-simulated words and sentences. Furthermore, we investigate how the accuracy and efficiency of the non-native listeners compare to those of native listeners.
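The CI simulations used in studies like this are typically acoustic vocoders, which remove fine spectrotemporal detail while retaining slow amplitude-envelope cues. Below is a minimal sketch of a generic noise-band vocoder; the channel count, filter orders, band edges, and envelope cutoff are illustrative assumptions rather than this project's exact parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocoder(signal, fs, n_channels=8, f_lo=200.0, f_hi=7000.0, env_cutoff=50.0):
    """Generic noise-band vocoder, a common acoustic simulation of CI hearing."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # log-spaced analysis bands
    env_sos = butter(4, env_cutoff, btype="low", fs=fs, output="sos")
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal))
    for k in range(n_channels):
        band_sos = butter(4, [edges[k], edges[k + 1]], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)           # analysis filtering
        env = sosfiltfilt(env_sos, np.abs(band))       # envelope: rectify + low-pass
        noise = sosfiltfilt(band_sos, rng.standard_normal(len(signal)))
        out += np.clip(env, 0.0, None) * noise         # envelope-modulated noise band
    return out / (np.max(np.abs(out)) + 1e-12)         # peak-normalize
```

Discarding the temporal fine structure and smearing the spectrum in this way degrades exactly the f0 and spectral cues on which prosody perception relies.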

ENRICH: Enriched communication across the lifespan

PhD students: E. Kaplan, J. Kirwan
Researcher, co-supervisor: Dr. A. Wagner
Funded by H2020 MSCA ITN, VICI grant.
Web: http://www.enrich-etn.eu/

Emotion recognition by normal-hearing (NH) listeners has been explored in research; however, emotion perception by individuals with hearing loss has received little attention. This project aims to fill this gap in research knowledge on emotion recognition in hearing-impaired (HI) adults, so that effective hearing-rehabilitation programs can be developed for this population.

The project investigates the perception of emotional information in speech by NH and HI listeners, with the goal of understanding individual variation in emotion recognition and in cognitive compensatory mechanisms, producing knowledge that can enable new individualized methods for hearing rehabilitation. Emotion recognition in NH and HI populations will be tested in pupillometry and behavioral studies to determine how HI adults recognize emotions in speech and which methods can best be applied to train HI listeners to enrich their speech processing for emotion perception, ultimately developing training paradigms for hearing rehabilitation. In collaboration with partners in the ENRICH network, ways of enriching speech processing for emotion perception will be investigated, including the development of visual and auditory means of enrichment. These methods will subsequently be tested in visual-world paradigms and behavioral studies, and eventually applied to train hearing-impaired listeners.

Emotion processing in audio-visual impairment

PhD student: M. de Boer
Collaborator, co-supervisor: Prof. F. Cornelissen (UMCG Ophthalmology)
Funded by GSMS BCN-Brain, VICI, Uitzicht.

Communication is the cornerstone of human interaction. It is important not only for conveying ideas, but also for connecting with other people, and thus for overall well-being. Communication takes place through speech, but often also involves facial expressions and body language. Although the lexical content of speech is essential, one must also be able to recognize a sender's emotion in order to truly understand their intentions. Emotion is carried by cues in both the auditory and the visual domain. While the two modalities are known to interact, both in communication in general and in the perception of emotion in communication (ECP) in particular, the underlying mechanisms remain unknown. Moreover, little is known about the effects of visual impairment (VI) and hearing impairment (HI) on the perception of emotion in communication. Based on what is known about multisensory perception, these interactive effects are not expected to be entirely linear or predictable.

This project aims to clarify the interactive effects of:

  1. visual and auditory components in ECP, and
  2. VI and HI on ECP.

This will be done by revealing the underlying mechanisms of audio-visual emotion perception in observers with simulated and actual sensory impairments. We will additionally look for specific eye-movement strategies that define optimal performance. If such strategies can be identified, we will explore the possibility of using them to train patients to enhance their performance.

Completed projects

The effects of aging on temporal integration in vision and hearing
PhD student: J. Saija
Collaborators: Dr. E. Akyürek (RUG Psychology), Dr. T. Andringa (RUG AI)
Funded by RUG and UMCG Faculties, NWO Aspasia grant.

Mental and auditory representations in cochlear-implant users
Researcher: Dr. A. Wagner
Collaborators: Prof. dr. N. Maurits (RUG Neurology), Drs. B. Maat
Funded by Marie Curie Intra-European Fellowship (PI: Wagner, Host: Başkent), Med-El, VICI.

Perception of speech in complex listening environments in normal hearing, simulated hearing loss, and users of cochlear implants
PhD students: P. Bhargava, J. Clarke
Researcher: Dr. E. Gaudrain
Collaborators: Dr. M. Chatterjee (Boys Town, USA), Dr. R. Morse (Aston, UK), Dr. S. Holmes (Aston, UK)
Funded by VIDI and Aspasia grants from NWO, ZonMw, Rosalind Franklin Fellowship.

Geriatric cochlear-implant candidacy
Collaborators: Dr. R. Hofman, Dr. G. Izaks
GSMS student: N. Schubert

Listening effort with cochlear implants
PhD student: C. Pals, MSc; Co-supervisor: Dr. A. Sarampalis
Collaborators: Dr. A. Beynon (Radboud MC), Dr. H. van Rijn (RUG Psychology)
Partially funded by Cochlear Europe Ltd., Rosalind Franklin Fellowship, Dorhout Mees Foundation, Stichting Steun Gehoorgestoorde Kind, GSMS.

Musical experience, quality of life and speech intelligibility in normal hearing and cochlear-implant recipients
PhD student: C.D. Fuller; Co-supervisors: Dr. R.H. Free, Ir. A. Maat
Collaborators: Dr. E. Gaudrain, Dr. J. Galvin III (UCLA)
Partially funded by Advanced Bionics, Rosalind Franklin Fellowship, Heinsius Houbolt Foundation.

Single- and multi-channel pattern perception in electric hearing
Researcher: Drs. J. J. Galvin III (UCLA)
Partially funded by VIDI grant and Rosalind Franklin Fellowship (PI: Başkent), and NIH grants (PI: Prof. Qian-Jie Fu).

Interference from music, speech, and noise in middle age
Master's student: S. van Engelshoven; Collaborator: Dr. J. J. Galvin III (UCLA)
Partially funded by VIDI grant and Rosalind Franklin Fellowship.

Audiovisual integration in young and elderly listeners
Audiologist in training: ir. M. Stawicki
Collaborator: Dr. Piotr Majdak (ARI, Austria)
Partially funded by VIDI grant and Rosalind Franklin Fellowship.

Perceptual learning of interrupted speech
PhD student: M.R. Benard
Funded by VIDI grant (NWO) and Rosalind Franklin Fellowship

Behavioral diagnosis of tinnitus
Researcher: Dr. K. Boyen
Project leader: Prof. P. van Dijk
Funded by Action on Hearing Loss.

Second language learning in cochlear-implanted children and adolescents (not completed)
PhD student: Drs. E. Jung
Collaborators: Dr. A. Sarampalis (RUG Psychology), Dr. W. Lowie (RUG Linguistics)
Funded by VIDI grant and Rosalind Franklin Fellowship.