

The Cornell Phonetics Lab is a group of students and faculty who are curious about speech. We study patterns in speech — in both movement and sound. We do a variety of research — experiments, fieldwork, and corpus studies. We test theories and build models of the mechanisms that create patterns. Learn more about our Research. See below for information on our events and our facilities.


Upcoming Events

  • 10th September 2020 12:00 PM

    Phonetic Data Analysis Working Group

    Location: Zoom
  • 2nd October 2020 09:55 AM

    Aurelie Herbelot, Modelling the acquisition of linguistic competences from small data

     There is currently much optimism in the field of Natural Language Processing (NLP): some basic linguistic tasks are considered 'solved', while others have tremendously benefited from the introduction of novel neural architectures. However, the data, training regimes and system architectures required to obtain top performance are often unrealistic from the point of view of human cognition. It is therefore questionable whether current NLP systems can ever earn the name of 'models' of language learning. In this talk, we will subject well-known algorithms to one specific constraint on human acquisition: limited input.


    The first part of the talk will focus on RNN architectures and analyse their level of grammatical competence when trained over 3 million tokens from child-directed language. The second part will investigate the issue of semantic competence, looking at the behaviour of word embedding systems with respect to three aspects of meaning: lexical knowledge, reference, distributional properties. We will conclude that NLP systems can actually adapt well to small data, but that their success may be highly dependent on the nature of the data they receive, as well as the underlying representations they learn from. (Work with Ludovica Pannitto)


    Bio: Aurelie is an assistant professor at the Center for Mind/Brain Sciences, University of Trento (Italy). Her research sits at the junction of computational semantics, cognitive science, and AI. She leads the Computational Approaches to Language and Meaning (CALM) group, which investigates the link between language and worlds (the real world and others). She is particularly interested in models of semantics that bridge formal and distributional representations of meaning.

  • 21st October 2020 12:40 PM

    PhonDAWG - Phonetics Lab Data Analysis Working Group - Part 4 of the Pitch Tracking Tutorial

    A weekly meeting of phonetics/phonology researchers, focused on algorithms, techniques & tools for spoken-word sound analysis.

  • 21st October 2020 12:40 PM

    PhonDAWG - Phonetics Lab Data Analysis Working Group

    This week we'll do a tutorial on statistical power



The Cornell Phonetics Laboratory (CPL) provides an integrated environment for the experimental study of speech and language, including its production, perception, and acquisition.

Located in Morrill Hall, the laboratory consists of six adjacent rooms and covers about 1,600 square feet. Its facilities include a variety of hardware and software for analyzing and editing speech, for running experiments, for synthesizing speech, and for developing and testing phonetic, phonological, and psycholinguistic models.

Computing Resources

The Phonetics Lab maintains two Linux servers that are located in the Rhodes Hall server farm:


  • Lingual - This web server hosts the Phonetics Lab Drupal websites, along with a number of event and faculty/grad student HTML/CSS websites.


  • Uvular - This dual-processor, 24-core, dual-GPU server is the computational workhorse of the Phonetics Lab, and is primarily used for deep-learning projects.


In addition to the Phonetics Lab servers, students can request access to additional computing resources of the Computational Linguistics lab:


  • Badjak - a Linux GPU-based compute server with eight NVIDIA GeForce RTX 2080Ti GPUs


  • Compute server #2 - a Linux GPU-based compute server with eight NVIDIA A5000 GPUs


  • Oelek - a Linux NFS storage server that supports Badjak.


These servers, in turn, are nodes in the G2 Computing Cluster, which uses the SLURM Workload Manager for submitting batch jobs that can run on any available node or GPU in the cluster. The G2 cluster currently contains 159 compute nodes and 81 GPUs.
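To sketch how work reaches the cluster: a SLURM batch job is an ordinary shell script whose `#SBATCH` comment lines declare the resources it needs, submitted with `sbatch`. The job name, resource amounts, log path, and script name below are hypothetical placeholders, not actual G2 configuration.

```shell
#!/bin/bash
# Minimal SLURM batch script -- all values here are illustrative
# placeholders, not actual G2 cluster settings.
#SBATCH --job-name=pitch-track        # name shown in the queue
#SBATCH --gres=gpu:1                  # request one GPU on any free node
#SBATCH --cpus-per-task=4             # CPU cores for the task
#SBATCH --mem=16G                     # memory for the job
#SBATCH --time=02:00:00               # wall-clock limit (HH:MM:SS)
#SBATCH --output=%x-%j.out            # log file (%x = job name, %j = job ID)

# The actual work; the script name is hypothetical.
python train_model.py
```

Submitting with `sbatch job.sh` returns a job ID, and `squeue -u $USER` shows the job's place in the queue. Because the directives request resources rather than a specific machine, SLURM is free to schedule the job on whichever node or GPU can satisfy them.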



Articulate Instruments - Micro Speech Research Ultrasound System

We use this Articulate Instruments Micro Speech Research Ultrasound System to investigate how fine-grained variation in speech articulation connects to phonological structure.


The ultrasound system is portable and non-invasive, making it ideal for collecting articulatory data in the field.



BIOPAC MP-160 System

The Sound Booth Laboratory has a BIOPAC MP-160 system for physiological data collection. This system supports two BIOPAC Respiratory Effort Transducers and their associated interface modules.

Language Corpora

  • The Cornell Linguistics Department has more than 880 language corpora from the Linguistic Data Consortium (LDC), consisting of high-quality text, audio, and video corpora in more than 60 languages. In addition, we receive three to four new language corpora per month under an LDC license maintained by the Cornell Library.



  • These and other corpora are available to Cornell students, staff, faculty, post-docs, and visiting scholars for research in the broad area of "natural language processing", which includes all ongoing Phonetics Lab research activities.


  • This Confluence wiki page - available only to Cornell faculty and students - outlines the corpora access procedures for faculty-supervised research.


Speech Aerodynamics

Studies of the aerodynamics of speech production are conducted with our Glottal Enterprises oral and nasal airflow and pressure transducers.


We use a Glottal Enterprises EG-2 electroglottograph for noninvasive measurement of vocal fold vibration.


Our GE LOGIQbook portable ultrasonic imaging system is used for studying vocal tract kinematics and dynamics.

Real-time vocal tract MRI

Our lab is part of the Cornell Speech Imaging Group (SIG), a cross-disciplinary team of researchers using real-time magnetic resonance imaging to study the dynamics of speech articulation.

Articulatory movement tracking

We use the Northern Digital Inc. Wave motion-capture system to study speech articulatory patterns and motor control.

Sound Booth

Our isolated sound recording booth serves a range of purposes, from basic recording to perceptual, psycholinguistic, and ultrasonic experimentation.


We also have the software and audio interfaces needed to run low-latency, real-time auditory feedback experiments via MATLAB and Audapter.