Using an eye-tracker to study attention, perception, learning, and memory in infancy: Be careful what you wish for!

by Richard Aslin and David Lewkowicz

 

Automated devices for recording where you are looking are now so common (even your smartphone can do it) that we forget how they work, what pitfalls arise when they are applied to infants, and how to interpret the massive amount of data such devices can spit out over a short testing session.  Here we provide some history, the principles of operation, an overview of options, and evidence that eye-trackers can answer unique questions that older methods cannot (see comprehensive reviews by Oakes, 2012, and Hessels & Hooge, 2019).

There are three basic ways to record eye movements: electrooculography (EOG), coils embedded in a contact lens and placed in a magnetic field, and video images of the eye along with reflections from its corneal surface.  EOG was employed first because it was easy to place two small electrodes on the face near the outer edge of each eye, and it provided highly precise recordings of horizontal eye position with excellent temporal resolution.  One of us published an EOG study of 1- and 2-month-olds as his second-year project in grad school (Aslin & Salapatek, 1975), and many other EOG studies have documented the development of horizontal, vertical, and binocular eye movements in young infants.  Unfortunately, for many applications (e.g., determining where on a face the infant is looking), EOG is not suitable unless you can hold the infant’s head in a fixed position (and we know how much infants like that constraint).

We don’t need to dwell on the contact lens method because infants will not tolerate (1) a large diameter contact lens in their eye, (2) an embedded coil of fine wires that extends off the contact lens to transmit electrical signals induced by a large magnetic field, and (3) the fine wires rubbing against their cornea every time they blink.  Anesthetic eye-drops are tolerated by aspiring PhD students but not by infants (or their parents).

The most commonly used eye-tracking method is based on an old principle called corneal reflection photography.  If you hold a small light source in front of the eye, it creates a reflection that is (approximately) in the center of the pupil.  Since that light source is distracting, you can make it invisible by using infrared light and still detect the image of the eye and the corneal reflection with film or a video camera that is sensitive to light in the infrared region of the spectrum.  This method was perfected by Salapatek & Kessen (1966) using infrared-sensitive film and then by Haith (1969) using newly introduced infrared-sensitive video cameras.  The position of the corneal reflection with respect to the center of the pupil does not vary linearly as the eye moves, but this can be mapped quite nicely by a calibration routine in which the infant is presented with a small target in several locations in the stimulus display.  The main advantage of the corneal reflection method is that (within limits) it is not affected by head movements because the position of the eye is not measured with respect to the head, but rather with respect to the fixed location in space where the light creating the corneal reflection is located.
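To make the calibration idea concrete, here is a minimal sketch in Python. It is not any vendor’s actual algorithm: the 3 × 3 calibration grid, the simulated pupil-to-corneal-reflection vectors, and the second-order polynomial fit are all assumptions chosen for illustration.

```python
import numpy as np

# Nine hypothetical calibration targets on a 3x3 grid, in degrees of visual angle
tx, ty = np.meshgrid([-10.0, 0.0, 10.0], [-10.0, 0.0, 10.0])
targets = np.column_stack([tx.ravel(), ty.ravel()])

# Simulated pupil-minus-corneal-reflection vectors (camera pixels) recorded
# while the infant fixates each target; made mildly nonlinear on purpose
rng = np.random.default_rng(0)
eye_vec = (0.3 * targets + 0.004 * targets * np.abs(targets)
           + rng.normal(0.0, 0.05, targets.shape))

def design(v):
    """Second-order polynomial terms that absorb the nonlinearity."""
    dx, dy = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(dx), dx, dy, dx * dy, dx ** 2, dy ** 2])

# Least-squares fit: one column of coefficients per screen axis
coeffs, *_ = np.linalg.lstsq(design(eye_vec), targets, rcond=None)

def gaze(dx, dy):
    """Map a new pupil-CR vector to an estimated point of gaze (degrees)."""
    return (design(np.array([[dx, dy]])) @ coeffs)[0]

print(gaze(*eye_vec[8]))  # should land near the (+10, +10) degree target
```

Commercial systems use their own proprietary mappings and vary in how many calibration points they require, but the underlying idea is the same: fit a smooth function from measurements of the eye image to known target locations, then apply it to every subsequent video frame.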

Once a commercial market emerged for eye-trackers, the “home brew” systems were quickly replaced by standard off-the-shelf instruments.  Initially, these instruments were very pricey (~$22,500 when one of us purchased his first system in 1977) and had trouble detecting small eye movements due to relatively low-resolution video sensors.  Moreover, the field-of-view of the camera had to be just slightly larger than the diameter of the pupil, which meant that even small head movements shifted the eye out of the field-of-view of the camera.  To address this problem, subsequent commercial systems introduced motor-driven cameras that could compensate automatically for small head movements.  Both of us had such a system (Applied Science Laboratories 504) in our respective labs in the 1980s and 1990s, but then switched to a more robust system, the EyeLink 1000 (https://www.sr-research.com/eyelink-1000-plus/).

Crucially, as video sensor technology improved and prices declined in the late 1990s, several eye-trackers, most notably Tobii (https://www.tobiipro.com/applications/scientific-research/), employed very high-resolution video sensors, thereby eliminating the need to move the camera to compensate for small head movements because the camera’s field-of-view was much larger (~6 inches instead of 0.6 inches).  The high cost of research-grade eye-trackers remains, although a $229 version for gamers, the Tobii Eye Tracker 5 (https://gaming.tobii.com/product/eye-tracker-5/), is an intriguing option.  Nevertheless, the primary limitation of corneal-reflection eye-trackers is that once the head moves outside the camera’s field-of-view, no data are recorded.  Most studies circumvented this problem by presenting stimuli on a fixed video display and positioning infants so that they were unlikely to look anywhere except at the screen.  This, of course, is not how infants engage their attention in the real world.

The solution to this fixed-screen constraint was to create head-mounted eye-trackers, in which a miniature camera on a “stalk” attached to a headband was pointed at the eye.  A second camera, also mounted on the headband, was pointed outward in the direction the participant was facing.  This provided a view of the scene in front of the participant, and calibration data mapped the position of the eye with respect to locations in the scene.  The natural progression of miniaturization of components in the 2000s eventually led to a head-mounted eye-tracker from Positive Science (https://www.positivescience.com/) that is small enough and light enough to be tolerated by young infants.  This has opened up data collection in natural contexts as infants engage in everyday activities, including crawling, walking, and reaching for objects.

Eye tracking is tailor-made for investigating the role of selective attention in perception and learning in infancy. For example, one of us has used it to investigate the emergence of lipreading in infancy and its link to speech and language development (Lewkowicz & Hansen-Tift, 2012). In these studies, infants watch videos of talking faces while we collect eye-gaze data. Using specific areas of interest (AOIs), such as the face, eyes, and mouth, we then export gaze measures such as first fixation, latency to first fixation, number of fixations, duration of individual fixations, and total fixation duration for each AOI.
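To give a rough sense of what those exported measures look like, the sketch below computes a few of them from a hand-made table of fixations. The fixation records, the AOI boxes, and the pixel coordinates are invented for illustration; a real analysis would start from the fixation report produced by the eye-tracker’s own software.

```python
import pandas as pd

# One row per detected fixation: onset/offset in ms, mean gaze position in pixels
fixations = pd.DataFrame({
    "onset":  [   0,  250,  620,  900, 1400],
    "offset": [ 200,  560,  850, 1350, 1800],
    "x":      [ 512,  500,  480,  505,  495],
    "y":      [ 300,  420,  415,  430,  300],
})

# Hypothetical AOIs on the talking-face video: (x_min, x_max, y_min, y_max)
aois = {"eyes":  (430, 590, 250, 350),
        "mouth": (430, 590, 380, 470)}

def aoi_summary(fix, box, stim_onset=0):
    """A few of the measures mentioned above, computed for one AOI."""
    x0, x1, y0, y1 = box
    inside = fix[fix["x"].between(x0, x1) & fix["y"].between(y0, y1)]
    if inside.empty:
        return {"n_fixations": 0, "latency_first": None, "total_duration": 0}
    durations = inside["offset"] - inside["onset"]
    return {"n_fixations": len(inside),
            "latency_first": int(inside["onset"].iloc[0] - stim_onset),  # ms
            "total_duration": int(durations.sum())}                      # ms

for name, box in aois.items():
    print(name, aoi_summary(fixations, box))
```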

The main advantage of eye-tracking data is that they provide novel insights into the selective-attention processes underlying perception, learning, and memory that traditional looking-time methods cannot. Aggregate measures of total looking time at specific aspects of a stimulus lack the temporal resolution to assess rapid changes in attention to objects referred to by language, or to determine with precision when in a sequence of visual events the infant looks at a stimulus or decides to terminate fixation.
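As a hypothetical illustration of that temporal-resolution point, the sketch below bins gaze samples relative to the onset of a spoken word and reports, bin by bin, the proportion of looking directed at the named object. The sampling rate, the sample stream, the AOI coding, and the bin width are all made up for the example.

```python
import numpy as np

sample_rate = 60          # Hz: one gaze sample roughly every 16.7 ms (assumed)
word_onset = 1.0          # the target word is assumed to begin 1 s into the trial
# Which object each sample fell on: 0 = distractor, 1 = named target (fabricated)
looks = np.array([0] * 60 + [0] * 20 + [1] * 70 + [1] * 30, dtype=float)
times = np.arange(looks.size) / sample_rate - word_onset  # time from word onset (s)

bin_width = 0.2           # 200-ms analysis bins
for lo in np.arange(-1.0, 2.0, bin_width):
    hi = lo + bin_width
    in_bin = looks[(times >= lo) & (times < hi)]
    if in_bin.size:
        print(f"{lo:+.1f} to {hi:+.1f} s: {in_bin.mean():.2f} proportion of target looking")
```

Averaged across trials and infants, curves like this show how quickly attention shifts to a named object after the word is heard, which a single total-looking-time score cannot reveal.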

A cautionary note: there is a tendency to employ a favored measure even when its use is not justified (like the proverbial hammer that is used even when no nail is in need of pounding).  Beware the tendency to expect that more detailed data collected from large samples of infants will automatically reveal insights about underlying cognitive processes.  Data-driven approaches are only as good as the hypotheses they are intended to test.  Always keep in mind that you must link the eye-movement measures you collect to a theory about how attention, perception, language, or learning develops.

———————

References

Aslin, R. N., & Salapatek, P. (1975). Saccadic localization of visual targets by the very young human infant. Perception & Psychophysics, 17, 293-302. 

Haith, M. M. (1969). Infrared television recording and measurement of ocular behavior in the human infant. American Psychologist, 24, 279.

Hessels, R. S., & Hooge, I. T. (2019). Eye tracking in developmental cognitive neuroscience–The good, the bad and the ugly. Developmental Cognitive Neuroscience, 40, 100710.

Lewkowicz, D. J., & Hansen-Tift, A. (2012). Infants deploy selective attention to the mouth of a talking face when learning speech. Proceedings of the National Academy of Sciences, 109, 1431-1436.

Oakes, L. M. (2012). Advances in eye tracking in infancy research. Infancy, 17, 1–8.

Salapatek, P., & Kessen, W. (1966). Visual scanning of triangles by the human newborn. Journal of Experimental Child Psychology, 3, 155-167.

About the Authors

Richard N. Aslin

Haskins Laboratories and Yale University

Richard N. Aslin is Distinguished Research Scientist at Haskins Laboratories and Clinical Professor at the Yale Child Study Center.  His research investigates language learning and development in infants and young children using behavioral (eye-tracking) and neural (fMRI, fNIRS, EEG) methods, with a particular emphasis on machine-learning approaches to neural decoding.

David J. Lewkowicz

Haskins Laboratories and Yale University

David J. Lewkowicz is a Senior Scientist at Haskins Laboratories and Professor Adjunct in the Yale Child Study Center. His research is concerned with the development of multisensory attention and perception and its role in the development of speech and language in infancy and beyond.

Copyright

© 2021 by the authors. Except as otherwise noted, the ICIS Baby Blog, including its text and figures, is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. To view a copy of this license, visit:
https://creativecommons.org/licenses/by-sa/4.0/legalcode
