Will Styler

Associate Teaching Professor of Linguistics at UC San Diego

Director of UCSD's Computational Social Science Program

Publications, Awards, and CV

Here’s an up-to-date listing of my publications, along with PDF copies for most of them.

Curriculum Vitae

Please download my full CV (PDF) for additional details. To see citations in other people’s work, visit my Google Scholar page.

Honors and Awards

Recipient of the UC San Diego 2022 Legacy Lecture Award, based on a campus-wide popular vote in which students ‘choose their favorite professors’. As a result, I presented the UCSD 2022 Legacy Lecture on May 6th, 2022. Details and a recording are available at https://savethevowels.org/legacy

Recipient of the 2022 UCSD Linguistics Graduate Student ‘Recognition of Faculty Excellence’ award, recognizing my work with and for graduate students in our department, my teaching, and outreach across campus.

Refereed Publications

A. Coetzee, P.S. Beddor, W. Styler, S. Tobin, I. Bekker, D. Wissing. Producing and perceiving socially structured coarticulation: Coarticulatory nasalization in Afrikaans. Laboratory Phonology. 13(1):13. 2022. https://doi.org/10.16995/labphon.6450 - Download as a PDF

J. Krivokapic, W. Styler, D. Byrd. The role of speech planning in the articulation of pauses. Journal of the Acoustical Society of America. 151(1):402. 2022. https://asa.scitation.org/doi/10.1121/10.0009279 - Download as a PDF

J. Krivokapic, W. Styler, B. Parrell. Pause postures: The relationship between articulation and cognitive processes during pauses. Journal of Phonetics, 79, 2020. - https://doi.org/10.1016/j.wocn.2019.100953

A. Coetzee, P.S. Beddor, W. Styler, S. Tobin, I. Bekker, D. Wissing. Producing and Perceiving Socially Indexed Coarticulation in Afrikaans. In Proceedings of the 19th International Congress of Phonetic Sciences, pages 215-219, Melbourne, Aug. 2019. - Download as a PDF

P.S. Beddor, A. Coetzee, W. Styler, K. McGowan, J. Boland. The time course of individuals’ perception of coarticulatory information is linked to their production: Implications for sound change. Language, 94(4), 2018. - Download as a PDF

A. Coetzee, P.S. Beddor, K. Shedden, W. Styler, D. Wissing. Plosive voicing in Afrikaans: Differential cue weighting and tonogenesis. Journal of Phonetics, 66:185-216. 2018. - Download as a PDF

W. Styler. On the Acoustical Features of Vowel Nasality in English and French. Journal of the Acoustical Society of America. 142(4):2469-2482. Oct. 2017. - Download as a PDF

G. Savova, S. Pradhan, M. Palmer, W. Styler, W. Chapman, and N. Elhadad. Annotating the Clinical Text: MiPACQ, ShARe, SHARPn, and THYME Corpora. In Handbook of Linguistic Annotation. Eds. James Pustejovsky and Nancy Ide. Springer. 2017.

R. Scarborough, W. Styler, and L. Marques. Coarticulation and contrast: Neighborhood density conditioned phonetic variation in French. In Proceedings of the 18th International Congress of Phonetic Sciences, Glasgow, Aug. 2015. - Download as a PDF

W. Styler, S. Bethard, S. Finan, M. Palmer, S. Pradhan, P. C. De Groen, B. Erickson, T. Miller, C. Lin, G. K. Savova, and J. Pustejovsky. Temporal annotation in the clinical domain. Transactions of the Association for Computational Linguistics, 2, 2014. - Download as a PDF

R. Ikuta, W. Styler, M. Hamang, T. O’Gorman, and M. Palmer. Challenges of adding causation to Richer Event Descriptions. In Proceedings of the 2014 ACL EVENT Workshop. Association for Computational Linguistics, June 2014. - Download as a PDF

W.-T. Chen and W. Styler. Anafora: A web-based general purpose annotation tool. In Proceedings of the 2013 NAACL HLT Demonstration Session, pages 14-19, Atlanta, Georgia, June 2013. Association for Computational Linguistics. - Download as a PDF

D. Albright, A. Lanfranchi, A. Fredriksen, W. Styler, C. Warner, J. D. Hwang, J. D. Choi, D. Dligach, R. D. Nielsen, J. Martin, W. Ward, M. Palmer, and G. K. Savova. Towards comprehensive syntactic and semantic annotations of the clinical narrative. Journal of the American Medical Informatics Association, December 2012. - Download as a PDF

R. Scarborough, W. Styler, and G. Zellou. Nasal Coarticulation in Lexical Perception: The Role of Neighborhood-Conditioned Variation. In Proceedings of the 17th International Congress of Phonetic Sciences, pages 1-4, Hong Kong, Aug. 2011. - Download as a PDF

G. K. Savova, S. Bethard, W. Styler, J. Martin, and M. Palmer. Towards temporal relation discovery from the clinical narrative. In AMIA Annual Symposium Proceedings, page 445. AMIA, 2009. - Download as a PDF

Non-Refereed Publications

J. Zhu, W. Styler, I. Calloway. A CNN-based tool for automatic tongue contour tracking in ultrasound images. https://arxiv.org/abs/1907.10210 (Submitted to and accepted at Interspeech 2019, then withdrawn as we were unable to attend)

W. Styler. Using Unix for Linguistic Research. Published in January 2019, and continuously maintained at http://savethevowels.org/unix/.

W. Styler. Using Praat for Linguistic Research. Published in July 2011 for the 2011 LSA Linguistic Institute’s Praat Workshop, and continuously maintained at http://savethevowels.org/praat/.

Posters

See my posters page for full-size PDFs of all posters I’ve presented at conferences.

Dissertation: ‘On the Acoustical and Perceptual Features of Vowel Nasality’

Overview

Vowel nasality is, simply put, the difference in the vowel sound between the English words “pat” and “pant”, or between the French “beau” and “bon”. This phenomenon occurs in languages around the world, but it is relatively poorly understood from an acoustical standpoint: although we as human listeners can easily hear that a vowel is or isn’t nasalized, it’s quite difficult to measure or identify that nasality in a laboratory context.

The goal of my dissertation is to better understand vowel nasality in language by discovering not just what parts of the sound signal change in oral vs. nasal vowels, but which parts of the signal are actually used by listeners to perceive differences in nasality.

I’ve written up a summary aimed at a more general audience here, or you can read the abstract below.

Dissertation Abstract

Although much is known about the linguistic function of vowel nasality, either contrastive (as in French) or coarticulatory (as in English), less is known about its perception. This study uses careful examination of production patterns, along with data from both machine learning and human listeners to establish which acoustical features are useful (and used) for identifying vowel nasality.

A corpus of 4,778 oral and nasal or nasalized vowels in English and French was collected, and data for 29 potential perceptual features were extracted. A series of Linear Mixed-Effects Regressions identified 7 promising features with large oral-to-nasal differences, and highlighted some cross-linguistic differences in the relative importance of these features.
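For readers curious what this step looks like in practice, here is a minimal sketch in Python (not the dissertation’s actual code; the file name and the columns a1_p0, nasality, language, and speaker are all hypothetical) of a linear mixed-effects regression testing whether a single feature differs between oral and nasal tokens, with speaker as a random effect:

    # Minimal sketch of one mixed-effects regression, under assumed data:
    # a CSV with one row per vowel token and columns for the measured
    # feature (a1_p0), nasality, language, and speaker.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("vowel_features.csv")  # hypothetical filename

    # Fixed effects of nasality and language (plus their interaction),
    # with a random intercept for each speaker.
    model = smf.mixedlm("a1_p0 ~ nasality * language",
                        data=df, groups=df["speaker"])
    result = model.fit()

    # The nasality coefficient estimates the oral-to-nasal difference;
    # a large, reliable difference marks the feature as promising.
    print(result.summary())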

Two machine learning algorithms, Support Vector Machines and Random Forests, were trained on this data to identify the features or feature groupings most effective at predicting nasality token-by-token in each language. The list of promising features was thus narrowed to four: A1-P0, Vowel Duration, Spectral Tilt, and Formant Frequency/Bandwidth.
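Similarly, here is a hedged sketch of the classification step (again with hypothetical file and column names): training a Support Vector Machine and a Random Forest in scikit-learn to predict oral vs. nasal labels token-by-token, then ranking the candidate features by the forest’s importance scores:

    # Minimal sketch of the classification step, under assumed data.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    df = pd.read_csv("vowel_features.csv")  # hypothetical filename
    X = df[["a1_p0", "duration", "spectral_tilt", "f1_bandwidth"]]  # assumed columns
    y = df["nasality"]  # "oral" vs. "nasal" labels (assumed)

    svm = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    forest = RandomForestClassifier(n_estimators=500, random_state=0)

    # Cross-validated accuracy for each classifier.
    print("SVM accuracy:   ", cross_val_score(svm, X, y, cv=5).mean())
    print("Forest accuracy:", cross_val_score(forest, X, y, cv=5).mean())

    # Random Forests expose per-feature importances, one way to ask
    # which acoustic cues carry the most predictive value.
    forest.fit(X, y)
    for name, importance in zip(X.columns, forest.feature_importances_):
        print(f"{name}: {importance:.3f}")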

These four features were manipulated in vowels in oral and nasal contexts in English, adding nasal features to oral vowels and reducing nasal features in nasalized vowels, in an attempt to influence oral/nasal classification. These stimuli were presented to L1 English listeners in a lexical choice task with phoneme masking, measuring oral/nasal classification accuracy and reaction time. Only modifications to vowel formant structure caused any perceptual change for listeners, resulting in increased reaction times, as well as increased oral/nasal confusion in the oral-to-nasal (feature addition) stimuli. Classification of already-nasal vowels was not affected by any modifications, suggesting a perceptual role for other acoustical characteristics alongside nasality-specific cues. A Support Vector Machine trained on the same stimuli showed a similar pattern of sensitivity to the experimental modifications.

Thus, based on both the machine learning and human perception results, formant structure, particularly F1 bandwidth, appears to be the primary cue to the perception of nasality in English. This close relationship of nasal- and oral-cavity-derived acoustical cues leads to a strong perceptual role for both the oral and nasal aspects of nasal vowels.

Dissertation Details

Title: “On the Acoustical and Perceptual Features of Vowel Nasality”

Advisor: Dr. Rebecca Scarborough

Defense Date: March 18th, 2015

Download: PDF Copy (3.4 MB) - BibTeX Citation

Related Work: