From the mouths of babes: Saying the (im)perceptible.

As a kid, I was an avid consumer of sci-fi/fantasy books, which I’d get in unremarkable hardcovers from the local library. When I later saw the dust jackets or movie versions of these fictional worlds, I’d often have a strong reaction along the lines of “that’s not what it’s like at all!” In my mind, the world and its characters looked or sounded different. Through language alone, I’d built mental representations of things I’d never directly experienced. If you have a prediction about how durian might taste but have never tasted it, you’ve encountered a similar phenomenon.

In a recent paper (Campbell et al., 2025), we wanted to look at the kind of tension highlighted above: how language and direct sensory experience interact. We were especially interested in how likely toddlers and preschoolers (roughly, 2- to 3-year-olds) were to produce words for things they could or couldn’t perceive. We zoomed in on sets of words that were highly visual (e.g., see, green), auditory (e.g., hear, meow), or abstract (e.g., good, pretend). We recruited 6 groups of young children, primarily paired into 3 sets of comparisons: blind and sighted children; deaf children with cochlear implants learning spoken English, and hearing children; and early and late learners of ASL (American Sign Language; early learners were exposed to ASL from birth, late learners not until around age 2). This let us get at several interesting comparisons: while none of the groups directly perceived the abstract words, the deaf groups (both ASL groups and the English learners) and the blind group had different access to the visual and auditory words than their hearing or sighted peers.

Critically, for blind children learning spoken English and deaf learners of ASL, the sense they are missing is not the one that carries the linguistic signal (i.e., blind children have intact hearing, and deaf ASL learners have intact vision). In contrast, deaf children with cochlear implants (i.e., children with severe to profound hearing loss) learning spoken English have no access to language until they receive their cochlear implant (around age 1), and even after implantation, the input they hear is degraded (a cochlear implant does not transmit all of the detailed auditory signal that the cochlea of a hearing child receives). These groups also let us explore the roles of early vs. late language access, for both spoken English and ASL. For all groups, parents filled out a vocabulary checklist indicating which words their child said (spoken English) or signed (ASL).

Distilling a complex set of findings (where we controlled for things like overall vocabulary size, age, and word frequency), we found 3 key results. (1) Blind children were just as likely to produce abstract and auditory words as sighted peers, but less likely to produce visual words. (2) In contrast, deaf children with cochlear implants learning spoken English were less likely to produce all three types of words than hearing peers, with the largest difference for abstract words. (3) Late vs. early ASL learners did not differ in their likelihood of producing signs for any of the three word types. In an exploratory follow-up analysis, we tried to home in on the role of language access by combining our groups into an early language access group (hearing + early ASL) and a late language access group (deaf spoken English learners + late ASL). When we did that, we found that later language access corresponded with lower word production.

So how do those results come together to help us better understand the link between sensory experience and early word production? They support an important idea sparked by the seminal work of Landau and Gleitman (1985): language itself carries a ton of information that we all use to represent our world (and fictional worlds as well). On the one hand, across the board and unsurprisingly, our young participants were more likely to produce words for things they could perceive than for things they couldn’t. That is, one key driver of word production was whether children could, given their sensory access, perceive what a word refers to.

However, the language access story is a bit more complicated: deaf spoken English learners weren’t uniquely less likely to produce auditory words, and early vs. late ASL learning didn’t seem to matter for rates of sign production here. And while later language access was linked to lower word production in our pooled analysis, this wasn’t uniquely or more strongly true for the abstract words, where we’d predicted language access might be particularly relevant. There is more work to be done to understand how sensory experience, language access, and learning come together.

It’s also worth highlighting, though, that our blind and deaf participants were producing highly visual and auditory words and signs by age 1.5: words like “green” and “hear” whose referents they don’t directly experience. So just like the alien planets I envisioned without visiting, the children in our studies learn and produce words for things they don’t directly perceive. Exactly how perceptual experiences leave their fingerprints on learning is another question in need of further study. But in short, blind babies and deaf babies provide a fascinating window into a learning process we all take part in: learning language from language. Direct perceptual input is not—in any sense—strictly required.

Coda: I’d be remiss not to mention the current funding climate in a post on work funded by federal grants in the U.S. The NSF CAREER grant that funded the work above ended at the end of April, just before all the grant cuts to Harvard. My NIH grant was not so lucky; it was terminated in May (as were virtually all DHHS grants to Harvard). Our work is not controversial or politically charged: it’s basic science asking how babies get better at understanding language. It was vetted through a rigorous and transparent review process at the NIH by specialists in language and communication, and supported by funds appropriated by Congress to the NIH. It is, in my view (and many others’), completely unethical and of extremely questionable legality for the NIH (at the President’s behest) to withhold these funds in the way it has. This termination is the subject of ongoing litigation between Harvard and the U.S. Government with, at this point, an unclear future.

Fortunately, our infant participants are not being harmed by this grant termination (which is more than can be said for terminated grants on drug trials, medical devices, etc.). But our trainees are: the future scientists of the U.S. are actively losing opportunities to engage in cutting-edge research, both in the practical sense of fewer funded opportunities and in the broader opportunity costs that arise when basic science falls victim to political arrow-slinging. I encourage interested scientists at any career stage to engage with efforts like Science Homecoming to help highlight the importance of science for everyone. Anyone who’s seen the wonders (or struggles!) of babies as they develop, think, and grow can appreciate the importance of better understanding this process. We would all benefit from being more babylike in our approach to scientific inquiry and human health: led, above all else, by a deep-seated curiosity about how the world works.

About the Author

Elika Bergelson

Harvard University

Elika Bergelson is the John L. Loeb Professor of the Social Sciences in the Psychology Department at Harvard University. Her lab studies how babies learn language from the world around them. Her work has been federally funded since 2008, and has been published in journals such as Infancy, PNAS, and Cognition. She’s received accolades from organizations such as ICIS, APS, CDS, FABBS, and Forbes Magazine. Her favorite food is blueberries.


