Hi!
I'm a psycholinguist interested in natural language meaning, its acquisition, and how it interfaces with non-linguistic conceptual systems.
Lately, I’ve been focusing on the mental representations that serve as the meanings of quantificational expressions like each, every, and most. What are the formal properties of those representations? When do children have access to them? And how do learners figure out how to pair them with the right pronunciations? To get at these questions, I’ve used a range of methods including behavioral experiments with adults and children, habituation experiments with infants, psychophysical modeling, corpus analysis, and good old linguistic intuitions.
I'm currently a MindCORE Postdoctoral Research Fellow at the University of Pennsylvania, where I'm working with John Trueswell, Anna Papafragou, and Florian Schwarz. I earned my PhD in Linguistics in 2021 from the University of Maryland, where I was part of the Maryland Language Science Center. My dissertation -- The Psycho-logic of Universal Quantifiers -- was advised by Jeff Lidz and Paul Pietroski. Before that, I studied Cognitive Science at Johns Hopkins and managed Justin Halberda's Vision and Cognition Lab.
Research Projects
Universal Quantifiers:

Precursors of Quantification in Infancy:

Majority Quantifiers:

Compare the sentences in (1) and (2), which can describe the very same scenes:
(1) Most of the dots are blue.
(2) More of the dots are blue.
But while (1) calls for comparing the number of blue dots to the total number of dots, (2) calls for comparing the blue and yellow dots directly. My collaborators and I argue that these subtle differences in meaning influence how adults and children expect visual scenes to look (e.g., given (1) they create pictures like A, but given (2) they create pictures like B), what information they remember from those scenes (they encode only the set of blue dots given (1) but encode both blue and yellow given (2)), and how easily they can judge the sentence true (evaluating (2) is easier, since direct comparisons introduce less noise than proportional ones). Together, these effects demonstrate that more and most have discoverable, decompositional mental representations that are (at least largely) shared across speakers of English at a fine-grained level of detail. We've recently started to extend these predictions to Cantonese majority quantifiers as well.
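To give a concrete sense of why the direct comparison should be easier, here is a minimal simulation sketch of that last point, assuming a standard approximate-number-system model in which each cardinality estimate carries Gaussian noise proportional to the true number (a Weber fraction). Everything in it (the Weber fraction of 0.2, the dot counts, and the function names) is illustrative rather than drawn from our papers:

```python
import numpy as np

def ans_estimate(n, w, rng):
    """Noisy cardinality estimate: Gaussian with SD = w * n (Weber's law)."""
    return rng.normal(loc=n, scale=w * n)

def judge_more(n_blue, n_yellow, w, rng):
    """Direct strategy for (2): estimate blue and yellow separately and compare."""
    return ans_estimate(n_blue, w, rng) > ans_estimate(n_yellow, w, rng)

def judge_most(n_blue, n_yellow, w, rng):
    """Proportional strategy for (1): estimate blue and the total, then ask whether
    blue exceeds the remainder (total minus blue); the subtraction compounds noise."""
    blue = ans_estimate(n_blue, w, rng)
    total = ans_estimate(n_blue + n_yellow, w, rng)
    return blue > total - blue

def accuracy(judge, n_blue, n_yellow, w=0.2, trials=10_000, seed=0):
    """Proportion of trials on which the noisy judgment matches the true answer."""
    rng = np.random.default_rng(seed)
    truth = n_blue > n_yellow
    return sum(judge(n_blue, n_yellow, w, rng) == truth for _ in range(trials)) / trials

if __name__ == "__main__":
    # 11 blue vs. 9 yellow dots: the direct ("more") strategy should come out
    # more accurate than the proportional ("most") strategy.
    print("more:", accuracy(judge_more, 11, 9))
    print("most:", accuracy(judge_most, 11, 9))
```

In this toy model the "more" strategy is reliably more accurate for close ratios like 11 vs. 9, which is the qualitative pattern described above; nothing hinges on the particular parameter values.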
Event Concepts & Syntactic Bootstrapping:

Output
Papers
Dissertation
Talks
Posters (& talks)
Teaching
Courses instructed:
Spring 2020: Language and Thought (LING449T)
Does language shape cognition? Do the details of our native language(s) determine how we perceive the world? Can learning language give us access to new concepts? In this course, we’ll explore these questions through case studies, including color categorization, spatial frames of reference, navigation, theory of mind, event representations, and number. Along the way, we’ll discuss the nature of concepts as well as ways that linguists can leverage the relationship between language and thought to study natural language meaning.
Courses TAed:
Fall 2019: Grammar and Meaning (LING410; Instructor: Valentine Hacquard)
Spring 2019: Child Language Acquisition (LING444; Instructor: Jeffrey Lidz)
Fall 2018: Language and Mind (LING240; Instructor: Tonia Bleam)
Spring 2018: Introductory Linguistics (LING200; Instructor: Tonia Bleam)
Contact Info
Check out what my awesome cohort from UMD Ling is up to:
Sigwan Thivierge, Mina Hirzel, Anouk Dieuleveut, Aaron Doliana, and Rodrigo Ranero.