About
The Cornell Phonetics Lab is a group of students and faculty who are curious about speech. We study patterns in speech — in both movement and sound. We do a variety of research — experiments, fieldwork, and corpus studies. We test theories and build models of the mechanisms that create patterns. Learn more about our Research. See below for information on our events and our facilities.
17th November 2025 12:20 PM
PhonDAWG - Phonetics Lab Data Analysis Working Group
Sam will give a Data Visualization Workshop, and we will review figures submitted by students.
Location: B11 Morrill Hall, 159 Central Avenue, Ithaca, NY 14853-4701, USA
18th November 2025 04:30 PM
ASL Linguistics Lecture Series: Dr. Ben Bahan
The Cornell Linguistics Department proudly presents Dr. Ben Bahan, Professor Emeritus at Gallaudet University. Dr. Bahan will deliver a lecture titled:
"Why are Deaf People on Earth? My Interview with a Conspiracy Theorist."
ASL/English Interpretation provided, light refreshments to follow.
Abstract:
In the interview Dr. Bahan explores the significance of gesture across cultures, time, and human development.
Through compelling examples and cross-cultural insights, he demonstrates how gestures are deeply embedded in our DNA, predating spoken language by tens of thousands of years and serving as an essential element in the survival of Homo sapiens.
Ben Bahan's presentation delves into the profound question: "Why are there Deaf people on Earth?" Through a blend of storytelling and historical insight, Dr. Bahan offers a narrative that explores Deaf identity and the significance of Deaf culture in the broader human experience.
Ben Bahan, Ph.D. is a leading scholar, educator, and storyteller in the field of Deaf Studies. He is widely recognized for advancing ASL literature and promoting Deaf identity, language rights, and cultural understanding through research, teaching, and performance. Ben Bahan is also known for his work in sensory orientation studies and how the senses impact space and design.
Some of Bahan’s prominent works in American Sign Language are “Bleeva,” “The Ball Story,” and “Birds of a Different Feather”. He also co-wrote the book A Journey into the Deaf-World (1996) with Robert J. Hoffmeister and Harlan Lane.
Bahan also co-wrote and co-directed the film Audism Unveiled (2008) with his colleague H-Dirksen L. Bauman.
A native ASL signer, Bahan holds a Ph.D. in Applied Linguistics (1996) from Boston University. Before joining Gallaudet University, he worked at the Salk Institute in La Jolla, California, where he researched ASL linguistics and language acquisition.
Location: 106 Morrill Hall, Cornell University, 159 Central Avenue, Ithaca, NY 14853-4701, USA
19th November 2025 12:20 PM
Phonetics Lab Meeting
Sam & Jennifer will lead a workshop on abstract writing, and we will review students' draft abstracts for upcoming conferences.
Location: B11 Morrill Hall, 159 Central Avenue, Ithaca, NY 14853-4701, USA
20th November 2025 04:30 PM
Colloquium Talk Series - Kathryn Davidson
The Cornell Linguistics Department is proud to present Dr. Kathryn Davidson, Professor of Linguistics at Harvard University, who will give a talk titled:
"What’s in a word (but not a picture): Insights from signed languages".
Abstract:
Recent trends in both formal semantics and in cognitive science have taken the logical/compositional structure of meaning in language to be a model for understanding meaning outside language, as in pictures, gestures, etc.
Sign languages are an ideal place to investigate this question, since both complex linguistic structure and complex depictive structure are highly productive with the same articulators/same modality and both are used extensively in signing contexts. We might then expect to see unconstrained semantic composition of linguistic and non-linguistic components in sign languages when freed from the articulatory distinction spoken languages face with their accompanying gestures.
However, I’ll argue that instead sign languages provide especially strong evidence in favor of distinguishing linguistic and non-linguistic meaning, given the highly constrained ways that non-linguistic meanings compose with complex linguistic structures, based on three case studies: negation, quantification, and anaphora.
Funded in part by the GPSAFC and open to the graduate community.
Location: 106 Morrill Hall, Cornell University, 159 Central Avenue, Ithaca, NY 14853-4701, USA
The Cornell Phonetics Laboratory (CPL) provides an integrated environment for the experimental study of speech and language, including its production, perception, and acquisition.
Located in Morrill Hall, the laboratory consists of six adjacent rooms and covers about 1,600 square feet. Its facilities include a variety of hardware and software for analyzing and editing speech, for running experiments, for synthesizing speech, and for developing and testing phonetic, phonological, and psycholinguistic models.
Web-Based Phonetics and Phonology Experiments with LabVanced
The Phonetics Lab licenses the LabVanced software for designing and conducting web-based experiments.
LabVanced is particularly valuable for phonetics and phonology experiments. Students and faculty are currently using it to design web experiments involving eye-tracking, audio recording, and perception studies.
Subjects are recruited via several online systems.
Computing Resources
The Phonetics Lab maintains two Linux servers located in the Rhodes Hall server farm.
In addition to the Phonetics Lab servers, students can request access to additional computing resources of the Computational Linguistics Lab.
These servers, in turn, are nodes in the G2 Computing Cluster, which currently consists of 195 servers (82 CPU-only and 113 GPU servers) with roughly 7,400 CPU cores and 698 GPUs.
The G2 Cluster uses the SLURM Workload Manager for submitting batch jobs, which can run on any available node or GPU in the cluster.
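For readers unfamiliar with SLURM, a batch job is described by a shell script whose #SBATCH comment lines request resources. The sketch below is a minimal, generic example; the job name, resource values, and log-file pattern are illustrative assumptions, not the G2 cluster's actual configuration or policies.

```shell
#!/bin/bash
# Minimal SLURM batch script (resource values here are hypothetical examples).
#SBATCH --job-name=speech-analysis   # name shown in the job queue
#SBATCH --ntasks=1                   # run a single task
#SBATCH --cpus-per-task=4            # CPU cores allocated to that task
#SBATCH --gres=gpu:1                 # request one GPU (omit for CPU-only jobs)
#SBATCH --mem=8G                     # memory per node
#SBATCH --time=02:00:00              # wall-clock limit (HH:MM:SS)
#SBATCH --output=%x-%j.out           # log file named jobname-jobid.out

# Commands below run on whichever cluster node SLURM assigns.
echo "Running on $(hostname)"
```

The script would typically be submitted with `sbatch job.sh` and monitored with `squeue -u $USER`; since the #SBATCH lines are ordinary comments, the same script also runs directly under bash for local testing.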
Articulate Instruments - Micro Speech Research Ultrasound System
We use the Articulate Instruments Micro Speech Research Ultrasound System to investigate how fine-grained variation in speech articulation connects to phonological structure.
The ultrasound system is portable and non-invasive, making it ideal for collecting articulatory data in the field.
BIOPAC MP-160 System
The Sound Booth Laboratory has a BIOPAC MP-160 system for physiological data collection. This system supports two BIOPAC Respiratory Effort Transducers and their associated interface modules.
Language Corpora
Speech Aerodynamics
Studies of the aerodynamics of speech production are conducted with our Glottal Enterprises oral and nasal airflow and pressure transducers.
Electroglottography
We use a Glottal Enterprises EG-2 electroglottograph for noninvasive measurement of vocal fold vibration.
Real-time vocal tract MRI
Our lab is part of the Cornell Speech Imaging Group (SIG), a cross-disciplinary team of researchers using real-time magnetic resonance imaging to study the dynamics of speech articulation.
Articulatory movement tracking
We use the Northern Digital Inc. Wave motion-capture system to study speech articulatory patterns and motor control.
Sound Booth
Our isolated sound recording booth serves a range of purposes, from basic recording to perceptual, psycholinguistic, and ultrasonic experimentation.
We also have the necessary software and audio interfaces to perform low-latency real-time auditory feedback experiments via MATLAB and Audapter.