Category Archives: Auditory seminars

Auditory Seminar 25 May 2018: Dr. Edmund Lalor, Univ. Rochester

The effects of attention and visual input on noninvasive electrophysiological indices of natural speech processing at different hierarchical levels

Date: 25 May 2018, FRIDAY, 10:00 hr
Location: UMCG, Onderwijscentrum, Lokaal 04

Broadcasting link:

Dr. Edmund Lalor
University of Rochester

How the human brain extracts meaning from the dynamic patterns of sound that constitute speech remains poorly understood. This is especially true in natural environments where the speech signal has to be processed against a complex mixture of background sounds. In this talk I will outline efforts over the last few years to derive noninvasive indices of natural speech processing in the brain. I will discuss how these indices are affected by attention and visual input and how attentional selection and multisensory integration can be “decoded” from EEG data. I will outline work showing that EEG and MEG are sensitive not just to the low-level acoustic properties of speech, but also to higher-level linguistic aspects of this most important of signals. This will include demonstrating that these signals reflect processing at the level of phonetic features. And, based on our most recent work, it will also include evidence that EEG is exquisitely sensitive to the semantic processing of natural, running speech in a way that is very strongly affected by attention and intelligibility. While showcasing these findings, I will outline a number of paradigms and methodological approaches for eliciting noninvasive indices of speech-specific processing that should be useful in advancing our understanding of receptive speech processing in particular populations.
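
The kind of stimulus–response mapping described above is often estimated with a linear temporal response function (TRF) fit by ridge regression. The following is a minimal illustrative sketch, not the speaker's actual pipeline: the "speech envelope" and "EEG" here are synthetic data generated from an assumed kernel, and the lag count and ridge parameter are arbitrary choices.

```python
import numpy as np

def lagged_design(stimulus, n_lags):
    """Time-lagged design matrix: column k holds the stimulus delayed by k samples."""
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]
    return X

def fit_trf(stimulus, response, n_lags=32, ridge=1.0):
    """Estimate a temporal response function by regularized least squares."""
    X = lagged_design(stimulus, n_lags)
    XtX = X.T @ X + ridge * np.eye(n_lags)
    return np.linalg.solve(XtX, X.T @ response)

# Synthetic example: an "EEG" channel generated from a known kernel plus noise.
rng = np.random.default_rng(0)
envelope = rng.standard_normal(2000)           # stand-in for a speech envelope
true_kernel = np.exp(-np.arange(32) / 8.0)     # assumed impulse response
eeg = lagged_design(envelope, 32) @ true_kernel + 0.1 * rng.standard_normal(2000)

trf = fit_trf(envelope, eeg)
r = np.corrcoef(lagged_design(envelope, 32) @ trf, eeg)[0, 1]
```

In practice the prediction accuracy `r` on held-out data, rather than the kernel itself, is typically the index of interest, and it is this quantity that attention and intelligibility have been reported to modulate.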

Combined Auditory/Epigenetics Seminar 5 April 2018: Prof. Marianne Rots, UMCG

Cellular Reprogramming by Epigenetic Engineering

Date: 5 April 2018, THURSDAY
Time: 13:00 hr (lecture)
Location: 3215.0165
(faculty building ADL1, entrance from Antonius Deusinglaan 1)

We have a combined auditory/epigenetics lecture next week, on Thursday 5 April, given by Prof. Marianne Rots from the UMCG. It will be an informal lecture, and anyone who wants to learn more about epigenetics is welcome to join!
As with other seminars, audiologists can receive credit for participation. Unlike other seminars, there will be no broadcast, and sign-up is required, so please contact me with a quick yes if you want to join (deadline 28 March)!
If time allows, a lab tour may follow!

Auditory Seminar 19 January 2018: Dr. Carlos Trenado, University Hospital Düsseldorf, Germany

Corticothalamic feedback dynamics for attention and habituation and its application in tinnitus decompensation

Date: 19 Jan 2018, FRIDAY, 14:00 hr
Location: UMCG, Onderwijscentrum, Lokaal 13

Dr. Carlos Trenado
Institute of Clinical Neuroscience and Medical Psychology, University Hospital Düsseldorf & Dept. of Psychology and Neurosciences, Leibniz Research Centre for Working Environment and Human Factors, Technical University Dortmund, Germany

6 October 2017: Dr. David Ryugo, Garvan Institute of Medical Research, Australia

The Auditory Nerve: Structure, Function, and Plasticity

Date: 6 October 2017, FRIDAY, 14:00 hr
Location: 3215.0165

Prof. Dr. David Ryugo
Hearing Research
Garvan Institute of Medical Research
Sydney, Australia

All sound in the environment accesses the brain by way of the auditory nerve. This nerve is primarily composed of neurons with myelinated axons that innervate inner hair cells of the cochlea. In order to make sense of sound, neural activity must be closely linked in time to acoustic events. The auditory system has mechanisms to accomplish this task that will be discussed in this presentation. Each auditory nerve fiber forms a giant terminal in the brain with many synapses, and these terminals, called endbulbs of Held, have been observed in every land vertebrate examined to date. I will explore their specializations in hearing, their pathologic reactions to deafness, and their salvation by cochlear implants.

27 June 2017: Dr. Robert Harris, Prince Claus Conservatoire

Action-oriented predictive processing: grasping the aural world 

Date: 27 June 2017, 14:00
Location: UMCG, room P3.270 (near KNO Department)

Dr. Robert Harris
Lifelong Learning in Music
Hanze University of Applied Sciences
Prince Claus Conservatoire

Current models of brain function indicate not only that sensory input is processed in two anatomically and functionally separate pathways, but also that perception is the product of a predicting brain and not purely a representation of the input to which it has access. Sensory modalities are, furthermore, intertwined, making possible not only synesthesia in rare instances, but also the expropriation of neural resources, as in the SMARC effect. The use of instrumental music training to enhance the hearing of cochlear implant recipients builds on these models by promoting the implicit acquisition of ideomotor associations between musical pitch, tone color, volume, and hand movement.


23 June 2017: Dr. Lars Riecke, Univ. Maastricht

Neural entrainment to speech modulates speech intelligibility?

Date: 23 June 2017, 15:00
Location: UMCG, Panoramazaal, U4.123

Dr. Lars Riecke
Department of Cognitive Neuroscience
Faculty of Psychology and Neuroscience
University of Maastricht

Speech entrainment, the alignment of neural activity to the slow temporal fluctuations (envelope) of acoustic speech input, is ubiquitous in current theories of speech processing. Associations between speech entrainment and the acoustic speech signal, the behavioral listening task, and speech intelligibility have been observed repeatedly. However, a methodological bottleneck has prevented clarifying whether speech entrainment functionally contributes to speech intelligibility. Here, we addressed this issue by experimentally manipulating speech entrainment in the absence of systematic acoustic and task-related changes, using a novel approach that involves stimulating listeners with transcranial currents carrying speech-envelope information. Results from two experiments, involving a cocktail party-like scenario and a listening situation devoid of acoustic envelope information, consistently show an effect on listeners’ speech-recognition performance, demonstrating a causal role of speech entrainment for speech intelligibility. This finding supports entrainment-based theories of speech comprehension and suggests that transcranial stimulation with speech envelope-shaped currents can be utilized to modulate speech comprehension.
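
The speech envelope at the heart of this approach is commonly obtained as the magnitude of the analytic signal, low-pass filtered to the slow modulation rates implicated in entrainment. The sketch below illustrates that generic procedure on a synthetic amplitude-modulated tone; the `speech_envelope` function, the 8 Hz cutoff, and the test signal are illustrative assumptions, not the stimulation waveform used in the study.

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def speech_envelope(signal, fs, cutoff=8.0):
    """Broadband envelope: magnitude of the analytic signal, low-pass
    filtered to retain only the slow (< ~8 Hz) fluctuations."""
    env = np.abs(hilbert(signal))
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, env)  # zero-phase filtering preserves timing

# Synthetic example: a 4 Hz amplitude modulation on a 200 Hz carrier,
# standing in for the syllable-rate fluctuations of real speech.
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
modulation = 1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)
carrier = np.sin(2 * np.pi * 200 * t)
env = speech_envelope(modulation * carrier, fs)
```

For a clean AM signal like this, the recovered envelope tracks the 4 Hz modulation almost exactly; with real speech the same procedure yields the quasi-rhythmic envelope that entrainment studies align neural activity, or stimulation currents, to.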


9 June 2017: Prof. Dr. Hartmut Meister, Univ. of Cologne

Assessment of audiovisual speech recognition in cochlear implant recipients – why and how?

Date: 9 June 2017, 14:00
Location: P3.270 (near KNO Department)

Prof. Dr. Hartmut Meister
Head Audiology Research
Jean Uhrmacher Institute for Clinical ENT-Research
University of Cologne

In their early days, cochlear implants (CIs) served as aids to lip-reading. Owing to technical and medical progress and the development of elaborate rehabilitation programs, many CI users now show near-perfect speech understanding without visual cues. Nevertheless, audiovisual (AV) speech is still important, since visual cues are generally helpful in everyday communication. Thus, assessing different CI processing schemes or fittings using AV speech offers high ecological validity. Moreover, CI recipients typically show better lip-reading abilities than their normal-hearing peers, and AV integration might differ between these populations.

However, assessing AV speech recognition is not a simple matter, since validated speech material is scarce and establishing an AV speech corpus is costly and time-consuming. An alternative approach is to use common speech-audiometric material and supply the visual modality with an avatar.

I will give an overview of our experience with the use of avatars in AV speech assessment, discuss opportunities and limitations, and give examples of implementation in cochlear implant research.