I'm a psycholinguist and cognitive scientist interested in natural language meaning, its acquisition, and how it interfaces with non-linguistic conceptual systems.

My research largely focuses on the mental representations that serve as the meanings of quantificational expressions like each, every, and most. What do these representations look like? What implications do details of representational format have for how they make contact with other areas of cognition? And when/how do learners figure out how to pair these representations with the right pronunciations? To get at these questions, I’ve used a range of methods including psychophysical experiments with adults, artificial language learning experiments with children, habituation experiments with infants, analysis of naturally occurring speech data, and good old linguistic intuitions.

I'm currently a postdoctoral researcher in the University of Delaware's Linguistics and Cognitive Science department, working with Alon Hafri. Before UD, I was a MindCORE Postdoctoral Research Fellow at the University of Pennsylvania, where I worked with John Trueswell, Anna Papafragou, and Florian Schwarz. I earned my PhD in Linguistics in 2021 from the University of Maryland, where I was part of their interdisciplinary Language Science Center. My dissertation -- The Psycho-logic of Universal Quantifiers -- was advised by Jeff Lidz and Paul Pietroski. Before that, I studied Cognitive Science at Johns Hopkins University and managed Justin Halberda's Vision and Cognition Lab.

Publications

[click underlined titles for DOIs, PDFs, and abstracts]

Book

J. Lidz and T. Knowlton (under review) A course in first language acquisition. Oxford University Press. 

Papers

J. Ongchoco, T. Knowlton, and A. Papafragou (under review) Language shifts the representation of auditory objects

Humans have the ability to represent objects as individuals or as members of a group. Language can shift these representations in the visual domain, such that the same visual objects can be represented as independent object-files or as a single ensemble collection depending on how they are described. Here, we ask whether the same is true for auditory objects. Building on recent semantic proposals, we hypothesize that describing tones with the expression “each sound” will lead participants to individuate those sounds, whereas describing the very same sequence of tones with the expression “every sound” will lead participants to instead mentally group them. We test this hypothesis by asking whether participants recall individual- or group-based properties about the tones depending on the way those tones were described. We find that differences in entity construal – representing auditory objects as independent individuals versus as members of an ensemble collection – can be modulated by quantifier use in language (“each” versus “every”). These results are among the first demonstrations that language can shift the representation of auditory objects. Furthermore, they speak to the generality of object-files and ensembles beyond the visual domain.


T. Knowlton, J. Trueswell, and A. Papafragou (under review) Non-conservative quantifiers are unlearnable

Linguistic universals have long been a cornerstone of linguistic theories. Perhaps the most well-known universal in the semantic domain is the observation that all quantificational determiners (e.g., “every”, “some”, “no”) have ‘conservative’ meanings: only the noun phrase with which the quantifier combines matters for the truth of the sentence. If it’s true that “every fish swims” then it’s true that “every fish is a fish that swims” (cf. “only fish swim”, which is not true in all the same situations as “only fish are fish that swim”). Accordingly, no language has a ‘non-conservative’ quantifier like “equi”, where “equi fish swims” means “the fish and the swimmers are numerically equivalent” (this quantifier fails to be conservative because the swimmers also matter). This robust cross-linguistic generalization has been argued to reflect a fundamental property of quantifier semantics, a linguistically-specific constraint. If conservativity is a genuine semantic universal, as opposed to the result of a historical accident, then non-conservative quantifiers should be unlearnable. Across seven experiments, we show that this prediction is borne out. Adult participants fail to learn three novel non-conservative meanings, even when explicitly taught, but succeed at learning their conservative counterparts. And since conservativity is a property of quantifier semantics, this effect disappears when an intended non-conservative meaning is instead paired with verbal syntax. These results suggest that the conservativity universal is tied to learnability, and support semantic theories on which conservativity reflects a deep fact about the human language faculty.
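For readers who want the textbook formalism behind this discussion (the notation here is my own gloss, not taken from the paper): in generalized-quantifier terms, a determiner meaning Q is conservative just in case restricting its second argument to its first never changes truth value,

\[
Q(A)(B) \iff Q(A)(A \cap B).
\]

So every passes, since every(fish)(swims) and every(fish)(fish that swim) always agree, while the hypothetical equi fails: the fish can be exactly as numerous as the fish that swim without being as numerous as the swimmers overall.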


L. Perkins, T. Knowlton, A. Williams, and J. Lidz (2024) Thematic content, not number matching, drives syntactic bootstrapping. Language Learning and Development.   

Children use correlations between the syntax of a clause and the meaning of its predicate to draw inferences about word meanings. On one proposal, these inferences are underwritten by a structural similarity between syntactic and semantic representations: learners expect that the number of clause arguments exactly matches the number of participant roles in the event concept under which its referent is viewed. We argue against this proposal, and in favor of a theory rooted in syntactic and semantic contents: in mappings from syntactic positions to thematic relations. We (i) provide evidence that infants view certain scenes under a concept with three participant relations (a girl taking a truck from a boy), and (ii) show that toddlers do not expect these representations to align numerically with clauses used to describe those scenes: they readily accept two-argument descriptions ("she pimmed the truck!"). This argues against syntactic bootstrapping theories underwritten by mappings between structural features of syntactic and semantic representations. Instead, our findings support bootstrapping based on grammatical and thematic content. Children’s earliest inferences may rely on the assumption that the syntactic asymmetry between subject and object correlates with a difference in how their referents relate to the event described by the sentence.


S.-Z. Huang, T. Knowlton, and F. Schwarz (2024) Cross-linguistic comparisons on distributive universal quantification: "each" vs. "every" vs. "mei". Proceedings of the LSA.   

This paper discusses differences between each and every with regard to (a) pair-list readings; (b) subject/object asymmetries seen with every but not with each; and (c) the long-held intuition that each is more individualistic whereas every is friendlier to groups. We propose that these phenomena can be captured by prior accounts of the Mandarin Chinese distributive universal quantifier mei. In particular, we consider the Double Variable Hypothesis (the idea that in DUQ, for every x, there must be a y) (S.-Z. Huang 1995; 1996), and the Skolemized Topicality Hypothesis (the idea that topical quantifiers are Skolemized, resulting in the required x-y pairings) (S.-Z. Huang 2022b). We argue that (a’) pair-list answers to questions with quantifiers are derivable from the Double Variable Hypothesis; (b’) the subject/object asymmetry seen in every is due to its positionally-varied association with the Double Variable Hypothesis, while each is always subject to Skolemized Topicality due to its inherent topicality; and (c’) the individualistic interpretation of each can be described as stemming from its intrinsically Skolemized topicality as well.


T. Knowlton and F. Schwarz (2024) "Every" provides an implicit comparison class when "each" does not. Proceedings of the 47th annual Penn Linguistics Conference.   

It’s long been observed that each and every, while both distributive universal quantifiers, differ in subtle ways. One recent proposal, outlined in Knowlton (2021), seeks to explain these differences by positing a semantic distinction: the mental representation that serves as the meaning of every has a semantic constituent that calls for grouping the things quantified over as a plurality; the representation that serves as the meaning of each lacks any such piece. A natural prediction of this view is that every NP should implicitly make available a plurality corresponding to "the NPs" in a way that each NP does not. We test this prediction in two forced choice judgment experiments, both involving sentence-internal elements that require anaphora to a plurality. As predicted, every NP is better able to provide the necessary plural comparison class to predicates involving same, and to serve as the antecedent of plural they.


D. Odic, T. Knowlton, A. Wellwood, P. Pietroski, J. Lidz, and J. Halberda (2024) Observers efficiently extract the min and max element in perceptual magnitude sets: evidence for a bipartite format. Psychological Science.

The mind represents abstract magnitude information, including time, space, and number, but in what format is this information stored? We show support for the bipartite format of perceptual magnitudes, in which the measured value on a dimension is scaled to the dynamic range of the input, leading to a privileged status for values at the lowest and highest end of the range. In six experiments with college undergraduates, we show that observers are faster and more accurate to find the endpoints (i.e., the minimum and maximum) than any of the inner values, even as the number of items increases beyond visual short-term memory limits. Our results show that length, size, and number are represented in a dynamic format that allows for comparison-free sorting, with endpoints represented with an immediately accessible status, consistent with the bipartite model of perceptual magnitudes. We discuss the implications for theories of visual search and ensemble perception.


T. Knowlton, J. Halberda, P. Pietroski, and J. Lidz (2023) Individuals versus ensembles and “each” versus “every”: linguistic framing affects performance in a change detection task. Glossa Psycholinguistics.   

Though each and every are both distributive universal quantifiers, a common theme in linguistic and psycholinguistic investigations into them has been that each is somehow more individualistic than every. We offer a novel explanation for this generalization: each has a first-order meaning which serves as an internalized instruction to cognition to build a thought that calls for representing the (restricted) domain as a series of individuals; by contrast, every has a second-order meaning which serves as an instruction to build a thought that calls for grouping the domain. In support of this view, we show that these distinct meanings invite the use of distinct verification strategies, using a novel paradigm. In two experiments, participants who had been asked to verify sentences like each/every circle is green were subsequently given a change detection task. Those who evaluated each-sentences were better able to detect the change, suggesting they encoded the individual circles' colors to a greater degree. Taken together with past work demonstrating that participants recall group properties after evaluating sentences with every better than after evaluating sentences with each, these results support the hypothesis that each and every call for treating the individuals that constitute their domain differently: as independent individuals (each) or as members of an ensemble collection (every). We situate our findings within a conception of linguistic meanings as instructions for thought building, on which the format of the resulting thought has consequences for how meanings interface with non-linguistic cognition.


T. Knowlton, P. Pietroski, A. Williams, J. Halberda, and J. Lidz (2023) Psycholinguistic evidence for restricted quantification. Natural Language Semantics.   

Quantificational determiners are often said to be devices for expressing relations. For example, the meaning of every is standardly described as the inclusion relation, with a sentence like every frog is green meaning roughly that the green things include the frogs. Here, we consider an older, non-relational alternative: determiners are tools for creating restricted quantifiers. On this view, determiners specify how many elements of a restricted domain (e.g., the frogs) satisfy a given condition (e.g., being green). One important difference concerns how the determiner treats its two grammatical arguments. On the relational view, the arguments are on a logical par as independent terms that specify the two relata. But on the restricted view, the arguments play distinct logical roles: specifying the limited domain versus supplying an additional condition on domain entities. We present psycholinguistic evidence suggesting that the restricted view better describes what speakers know when they know the meaning of a determiner. In particular, we find that when asked to evaluate sentences of the form every F is G, participants mentally group the Fs but not the Gs. Moreover, participants forego representing the group defined by the intersection of F and G. This tells against the idea that speakers understand every F is G as implying that the Fs bear relation (e.g., inclusion) to a second group.
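A compact way to see the contrast under investigation (my own schematic rendering, not the paper's formal proposal): the relational view treats every F is G as asserting a relation between two independently given sets, while the restricted view treats the first argument as fixing a domain over which a one-place quantifier then operates,

\[
\text{Relational:}\quad \textsc{every}(F)(G) \iff F \subseteq G
\qquad\quad
\text{Restricted:}\quad [\forall x : Fx]\, Gx.
\]

The two renderings are true in exactly the same situations; the experiments ask which format better matches what speakers mentally represent.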


T. Knowlton, J. Trueswell, and A. Papafragou (2023) Keeping quantifier meaning in mind: connecting semantics, cognition, and pragmatics. Cognitive Psychology.   

A complete theory of the meaning of linguistic expressions needs to explain their semantic properties, their links to non-linguistic cognition, and their use in communication. Even though in principle interconnected, these areas are generally not pursued in tandem. We present a novel take on the semantics-cognition-pragmatics interface. We propose that formal semantic differences in expressions’ meanings lead those meanings to activate distinct cognitive systems, which in turn have downstream effects on when speakers prefer to use those expressions. As a case study, we focus on the quantifiers "each" and "every", which can be used to talk about the same state of the world, but have been argued to differ in meaning. In particular, we adopt a mentalistic proposal about these quantifiers on which "each" has a purely individualistic meaning that interfaces with the psychological system for representing object-files, whereas "every" has a meaning that implicates a group and interfaces with the psychological system for representing ensembles. In seven experiments, we demonstrate that this account correctly predicts both known and newly-observed constraints on how "each" and "every" are pragmatically used. More generally, this integrated approach to semantics, cognition, and pragmatics suggests that canonical patterns of language use can be affected in predictable ways by fine-grained differences in semantic meanings and the cognitive systems to which those meanings connect.


J. Ongchoco, T. Knowlton, and A. Papafragou (2023) Language shifts the representation of sounds in time: from auditory individuals to auditory ensembles. Proceedings of CogSci.   

Objects can either be represented as independent individuals ("object-files") or as members of a collection (an "ensemble"). Work over the past 40 years has explored these representational systems, largely in the visual domain. Far less is known about auditory objects. Here, we show that a property characteristic of visual object representation – that it can be modulated by linguistic framing – also applies to auditory objects. In particular, we show that using the expression "each sound" versus "every sound" can bias auditory object construal in the same way that using "each circle" versus "every circle" can bias visual object construal. These findings support the idea that object-files and ensembles are not limited to the visual domain, but are representational formats found more generally throughout cognition.


T. Knowlton, J. Trueswell, and A. Papafragou (2022) New evidence for the unlearnability of non-conservative quantifiers. Proceedings of the 23rd Amsterdam Colloquium. 

T. Knowlton, J. Trueswell, and A. Papafragou (2022) A mentalistic semantics explains "each" and "every" quantifier use. Proceedings of CogSci.   

"Each" and "every" can be used to express the same truth-conditions but differ in their contexts of use. We adopt a particular psycho-semantic proposal about the meanings of these universal quantifiers: "each" has a meaning that interfaces with the psychological system for representing object-files whereas "every" has a meaning that interfaces with the psychological system for representing ensembles. In five experiments (n=798 total) we demonstrate that this mentalistic account correctly predicts newly-observed constraints on how "each" and "every" are pragmatically used. More generally, these results demonstrate that canonical patterns of language use are affected in predictable ways by fine-grained differences in semantic representations and the cognitive systems to which those representations connect. By treating the output of semantics as mental representations that are more finely articulated than truth-conditions -- and by taking seriously the relationship between linguistic meanings and non-linguistic cognitive systems -- we can explain otherwise puzzling patterns of language use.


T. Knowlton and V. Gomes (2022) Linguistic and non-linguistic cues to acquiring the strong distributivity of "each". Proceedings of the LSA.   

The universal quantifier each is more strongly distributive than its counterparts every and all. It forces predicates to apply to individuals, it more often supports pair-list readings, it’s unfriendly to genericity, and, in psycholinguistic tasks, it encourages encoding and remembering individual properties. But what information leads learners to acquire this aspect of each’s meaning? We explore the hypothesis that, because of its meaning, parents are more likely to use each in situations that independently promote representing the domain of quantification as a series of individuals (as opposed to a group). In line with this, we find that in child-directed speech, parents often use each to quantify over small numbers of physically present things. The same cannot be said of every and all. Because such situations are independently known to trigger object-files – the mind’s system for representing individuals – we argue that these cases are ideal for acquiring the individualistic aspect of each.


T. Knowlton, P. Pietroski, J. Halberda, and J. Lidz (2022) The mental representation of universal quantifiers. Linguistics and Philosophy.   

A sentence like every circle is blue might be understood in terms of individuals and their properties (e.g., for each thing that is a circle, it is blue) or in terms of a relation between groups (e.g., the blue things include the circles). Relatedly, theorists can specify the contents of universally quantified sentences in first-order or second-order terms. We offer new evidence that this logical first-order vs. second-order distinction corresponds to a psychologically robust individual vs. group distinction that has behavioral repercussions. Participants were shown displays of dots and asked to evaluate sentences with each, every, or all combined with a predicate (e.g., big dot). We find that participants are better at estimating how many things the predicate applied to after evaluating sentences in which universal quantification is indicated with every or all, as opposed to each. We argue that every and all are understood in second-order terms that encourage group representation, while each is understood in first-order terms that encourage individual representation. Since the sentences that participants evaluate are truth-conditionally equivalent, our results also bear on questions concerning how meanings are related to truth-conditions.
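As a rough illustration of the first-order/second-order contrast at issue (my formulation; the paper's own notation may differ): a first-order rendering of every circle is blue quantifies over circles one at a time, while a second-order rendering adverts to the circles as a group,

\[
\text{First-order:}\quad \forall x\,(\mathit{Circle}(x) \rightarrow \mathit{Blue}(x))
\qquad
\text{Second-order:}\quad \exists X\,[\forall x\,(Xx \leftrightarrow \mathit{Circle}(x)) \wedge \forall x\,(Xx \rightarrow \mathit{Blue}(x))].
\]

Setting aside empty domains, the two are truth-conditionally equivalent, which is why any behavioral difference between each and every/all is attributed to representational format rather than to truth conditions.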


T. Knowlton, T. Hunter, D. Odic, A. Wellwood, J. Halberda, P. Pietroski, and J. Lidz (2021) Linguistic meanings as cognitive instructions. Annals of the New York Academy of Sciences.   

Natural languages like English connect pronunciations with meanings. Linguistic pronunciations can be described in ways that relate them to our motor system (e.g., to the movement of our lips and tongue). But how do linguistic meanings relate to our nonlinguistic cognitive systems? As a case study, we defend an explicit proposal about the meaning of most by comparing it to the closely related more: whereas more expresses a comparison between two independent subsets, most expresses a subset–superset comparison. Six experiments with adults and children demonstrate that these subtle differences between their meanings influence how participants organize and interrogate their visual world. In otherwise identical situations, changing the word from most to more affects preferences for picture–sentence matching (experiments 1–2), scene creation (experiments 3–4), memory for visual features (experiment 5), and accuracy on speeded truth judgments (experiment 6). These effects support the idea that the meanings of more and most are mental representations that provide detailed instructions to conceptual systems.
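One concrete way to cash out the difference described above (a hedged paraphrase of mine, not the paper's official statement): for most/more of the dots are blue, the more-style comparison pits two independent subsets against each other, whereas the most-style comparison relates a subset to its superset,

\[
\text{more:}\quad |\mathit{Dot} \cap \mathit{Blue}| > |\mathit{Dot} \setminus \mathit{Blue}|
\qquad\quad
\text{most:}\quad |\mathit{Dot} \cap \mathit{Blue}| > |\mathit{Dot}| - |\mathit{Dot} \cap \mathit{Blue}|.
\]

The two conditions hold in the same situations, so the reported differences in matching, scene creation, memory, and verification are traced to the format of the comparison rather than to its truth conditions.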



T. Knowlton, P. Pietroski, A. Williams, J. Halberda, and J. Lidz (2021) Determiners are "conservative" because their meanings are not relations: evidence from verification. Proceedings of SALT.   

Quantificational determiners have meanings that are "conservative" in the following sense: in sentences, repeating a determiner's internal argument within its external argument is logically insignificant. Using a verification task to probe which sets (or properties) of entities are represented when participants evaluate sentences, we test the predictions of three potential explanations for the cross-linguistic yet substantive conservativity constraint. According to "lexical restriction" views, words like every express relations that are exhibited by pairs of sets, but only some of these relations can be expressed with determiners. An "interface filtering" view retains the relational conception of determiner meanings, while replacing appeal to lexical filters (on relations of the relevant type) with special rules for interpreting the combination of a quantificational expression (Det NP) with its syntactic context and a ban on meanings that lead to triviality. The contrasting idea of "ordered predication" is that determiners don't express genuine relations. Instead, the second argument provides the scope of a monadic quantifier, while the first argument selects the domain for that quantifier: the sequences with respect to which it is evaluated. On this view, a determiner's two arguments each have a different logical status, suggesting that they might have a different psychological status as well. We find evidence that this is the case: When evaluating sentences like every big circle is blue, participants mentally group the things specified by the determiner's first argument (e.g., the big circles) but not the things specified by the second argument (e.g., the blue things) or the intersection of both (e.g., the big blue circles). These results suggest that the phenomenon of conservativity is due to ordered predication.


Dissertation

T. Knowlton (2021) The psycho-logic of universal quantifiers. University of Maryland.   

A universally quantified sentence like every frog is green is standardly thought to express a two-place second-order relation (e.g., the set of frogs is a subset of the set of green things). This dissertation argues that as a psychological hypothesis about how speakers mentally represent universal quantifiers, this view is wrong in two respects. First, each, every, and all are not represented as two-place relations, but as one-place descriptions of how a predicate applies to a restricted domain (e.g., relative to the frogs, everything is green). Second, while every and all are represented in a second-order way that implicates a group, each is represented in a completely first-order way that does not involve grouping the satisfiers of a predicate together (e.g., relative to individual frogs, each one is green). These "psycho-logical" distinctions have consequences for how participants evaluate sentences like every circle is green in controlled settings. In particular, participants represent the extension of the determiner’s internal argument (the circles), but not the extension of its external argument (the green things). Moreover, the cognitive system they use to represent the internal argument differs depending on the determiner: Given every or all, participants show signatures of forming ensemble representations, but given each, they represent individual object-files. In addition to psychosemantic evidence, the proposed representations provide explanations for at least two semantic phenomena. The first is the "conservativity" universal: All determiners allow for duplicating their first argument in their second argument without a change in informational significance (e.g., every fish swims has the same truth-conditions as every fish is a fish that swims). This is a puzzling generalization if determiners express two-place relations, but it is a logical consequence if they are devices for forming one-place restricted quantifiers. The second is that every, but not each, naturally invites certain kinds of generic interpretations (e.g., gravity acts on every/#each object). This asymmetry can potentially be explained by details of the interfacing cognitive systems (ensemble and object-file representations). And given that the difference leads to lower-level concomitants in child-ambient speech (as revealed by a corpus investigation), children may be able to leverage it to acquire every’s second-order meaning. This case study on the universal quantifiers suggests that knowing the meaning of a word like every consists not just in understanding the informational contribution that it makes, but in representing that contribution in a particular format. And much like phonological representations provide instructions to the motor planning system, it supports the idea that meaning representations provide (sometimes surprisingly precise) instructions to conceptual systems.

Teaching

Instructor of Record:

Spring 2024 (Penn): Language, Cognition, and Culture
Spring 2020 (UMD): Language and Thought

Courses TAed:

Fall 2019: Grammar and Meaning (Instructor: Valentine Hacquard)
Spring 2019: Child Language Acquisition (Instructor: Jeffrey Lidz)
Fall 2018: Language and Mind (Instructor: Tonia Bleam)
Spring 2018: Introductory Linguistics (Instructor: Tonia Bleam)

Contact Info

  • Email: tzk@udel.edu
  • Address:
    Ewing Hall, 4th Floor
    15 Orchard Road
    Newark, DE 19716

Check out what my awesome cohort from UMD Ling is up to these days:
Sigwan Thivierge, Mina Hirzel, Anouk Dieuleveut, Aaron Doliana, and Rodrigo Ranero.