Invited Speaker

Laura Gwilliams

Dr. Laura Gwilliams

Talk Schedule

April 18th (Fri), 9:30-10:30

Profile

Laura Gwilliams is jointly appointed between Stanford Psychology, the Wu Tsai Neurosciences Institute and Stanford Data Science. Her research aims to provide an algorithmically precise account of how the human brain transforms speech sounds into meaning. Such insight has the power to inform both neuroscience (understanding the human brain) and engineering (building intelligent machines). Her research program is organised around two overarching questions: (i) what representations does the brain derive from auditory input? and (ii) what computations does the brain apply to those representations? To address these questions, she combines insights from neuroscience, machine learning and linguistics, and works with neural measurements at different spatial scales: magnetoencephalography (MEG), electrocorticography (ECoG) and single-unit recordings.
Website

Title

Computational architecture of speech comprehension

Abstract

Humans understand speech with such speed and accuracy that it belies the complexity of transforming sound into meaning. The goal of my research is to develop a theoretically grounded, biologically constrained and computationally explicit account of how the human brain achieves this feat. In my talk, I will present a series of studies that examine neural responses at different spatial scales: from population ensembles measured with magnetoencephalography and electrocorticography, to the encoding of speech properties in individual neurons across the cortical depth measured with Neuropixels probes in humans. The results provide insight into (i) which auditory and linguistic representations serve to bridge between sound and meaning; (ii) which operations reconcile the speed of auditory input with neural processing time; and (iii) how information at different timescales is nested, in time and in space, to allow information exchange across hierarchical structures. My work showcases the utility of combining cognitive science, machine learning and neuroscience for developing neurally constrained computational models of spoken language understanding.


