ICIS Congress Program

Pre-Congress Workshops

Full Day Workshop: Insights from global collaboration: ManyBabies updates and new initiatives

Organisers and Presenters:

Sho Tsuji: University of Tokyo
Martin Zettersten: Princeton University
Judit Gervain: University of Padua
Jessica Kosie: Arizona State University
Tiffany Leung: University of Miami
Francis Yuen: University of British Columbia

Overview:

ManyBabies (MB) is a global consortium of developmental psychology labs working collaboratively on large-scale replications and best practices in developmental research. At last count, the ManyBabies community included over 600 researchers in 50 countries on six continents (manybabies.org/people).

The goals of this workshop are to share updates and results from many of MB’s collaborative projects, encourage community engagement and networking, and discuss ways to positively impact the future of developmental science. ManyBabies members, as well as non-contributors who are MB-curious and want to learn more, are all welcome to join. 

In the morning, leaders of MB projects will present updates (and in many cases, hot-off-the-presses data!) on the projects that hundreds of members of the MB (and ICIS!) community have been working on over the past several years. Following these research talks, attendees will gather for lunch, where they will have the chance to socialize with collaborators while participating in fun games. In the afternoon, attendees will actively participate in small group discussions on topics ranging from the methodological challenges faced by infant researchers to ways of building a more inclusive and equitable MB community and the role of ManyBabies in the future of infant research.

Our hope is that all attendees leave the workshop not only knowing about the latest findings from ManyBabies collaborative projects, but also with a renewed sense of purpose, optimism, and excitement for the work they are doing as members of the infant research community, both with ManyBabies and beyond.

Full Day Workshop: Measuring complex infant behaviours using machine learning and computer vision

Organisers and Presenters:

John Franchak: University of California, Riverside
Sam Wass: University of East London
Sinead Rocha: Goldsmiths, University of London
Trinh Nguyen: Italian Institute of Technology

Goals and Objectives:

‘Natural behaviour is the language of the brain’ (Miller et al., 2022). But most current attempts to study brain and cognitive development do not measure complex, messy, natural behaviours at all. Instead, we measure cognition while children watch identically repeated sequences of events on a screen or sleep in a scanner. Whilst motivated by a laudable desire to reduce measurement noise and isolate particular cognitive functions, this approach to studying cognitive development naturally brings with it a number of well-recognised theoretical and methodological limitations (Smith & Gasser, 2005; Wass & Goupil, 2023).

Over recent years, there has been a growing shift towards observing and analysing large corpora of unconstrained, real-world behaviour in complex, dynamic settings. Wireless, wearable sensors, including motion trackers, video cameras, microphones, GPS, proximity sensors, and physiological sensors, are making it straightforward to collect day-, week-, or month-long recordings. But one practical problem that researchers face is how to extract meaning from these data. Corpora of this size cannot realistically be analysed through labour-intensive manual video coding. Likewise, raw sensor signals (e.g., acceleration) do not directly translate into interpretable outcome behaviours (e.g., walking).

In this workshop, we present recent state-of-the-art advances from researchers who are measuring complex infant behaviours automatically. Three talks describe machine-learning techniques for classifying motor and social behaviours from continuously recorded motion-sensor data, captured by sensors either worn on infants or embedded in “intelligent toys”. Three talks discuss approaches for using computer vision algorithms to automatically detect infant and caregiver behaviours, such as facial expressions, affect, and dance, from naturalistic video.
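
To give a flavour of the first family of methods, here is a minimal sketch, not any presenter's actual pipeline, of how behaviours are commonly classified from wearable accelerometer data: segment the continuous signal into short windows, summarise each window with simple features, and train a supervised classifier on human-coded labels. The file name, column names, sampling rate, and features are all illustrative assumptions.

```python
# A minimal sketch, not any presenter's actual pipeline: classify behaviours
# from accelerometer data via fixed windows, simple features, and a
# supervised classifier. File name, columns, and rates are assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

FS = 50                  # assumed sampling rate (Hz)
WIN = 2 * FS             # 2-second analysis windows

df = pd.read_csv("infant_accel_labeled.csv")   # hypothetical: ax, ay, az, behaviour
acc = df[["ax", "ay", "az"]].to_numpy()
labels = df["behaviour"].to_numpy()

X, y = [], []
for start in range(0, len(acc) - WIN, WIN):
    w = acc[start:start + WIN]
    mag = np.linalg.norm(w, axis=1)            # acceleration magnitude per sample
    # per-window features: mean/std of magnitude plus per-axis means
    X.append(np.concatenate([[mag.mean(), mag.std()], w.mean(axis=0)]))
    # label the window by the majority human-coded behaviour within it
    vals, counts = np.unique(labels[start:start + WIN], return_counts=True)
    y.append(vals[counts.argmax()])

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("CV accuracy:", cross_val_score(clf, np.array(X), np.array(y), cv=5).mean())
```

In practice, windows from the same recording session should be kept within the same cross-validation fold; otherwise adjacent, highly correlated windows inflate accuracy estimates.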

In the afternoon, we will run a hands-on session, providing sample code and analyses for researchers wishing to learn these new, state-of-the-art techniques themselves.

Format:

The morning consists of six talks presenting recent advances from a diverse group of researchers (in geography, gender, and content area) using machine learning and computer vision methods. In addition to those six talks, we will invite a number of other researchers active in this field from around the world to present short ‘flash’ talks on their research, in order to showcase a diversity of perspectives.

The afternoon is divided into two interactive sections. First, participants are invited to rotate among practical demonstrations of a selection of the methods discussed in the morning talks (e.g. DeepLabCut, OpenPose), all of which will be shared with attendees. Current and potential users of these techniques will be guided through existing workflows, with the opportunity to ask questions and share experiences. Second, we will facilitate small group discussions by content area (e.g. perception, motor, social, music) that address how we map automatically extracted behaviours onto the constructs we aim to measure. We hope that these discussions will be a vital step in helping research communities move together to realise the potential of emerging automated technologies for infant studies.
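
As a concrete taste of the demonstrated workflow, the snippet below shows the kind of downstream analysis these tools enable. DeepLabCut saves per-video keypoint tables with a three-level column header (scorer / bodyparts / coords); the file and bodypart names here are hypothetical.

```python
# Illustrative only (hypothetical file and bodypart names): compute a crude
# movement measure from a DeepLabCut keypoint table with pandas.
import pandas as pd

poses = pd.read_csv("infant_video_DLC.csv", header=[0, 1, 2], index_col=0)
scorer = poses.columns.get_level_values(0)[0]
wrist = poses[scorer]["right_wrist"]           # columns: x, y, likelihood

# drop low-confidence detections before computing kinematics
wrist = wrist[wrist["likelihood"] > 0.9]

# frame-to-frame displacement of the wrist as a crude movement measure
speed = ((wrist[["x", "y"]].diff() ** 2).sum(axis=1)) ** 0.5
print(f"mean wrist displacement: {speed.mean():.2f} px/frame")
```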

Speakers will contribute example data and processing code to a collection of GitHub repositories that will serve as a resource for participants who want to start employing these methods in their own work. An additional repository will host a compiled directory of camera and sensor hardware recommendations.

Half Day (morning) Workshop: How to build an Observation Lab - What do you need to keep in mind? What are the Do’s and Don’ts? And what to do with the video footage?

Organisers and Presenters:

Reinhard Grassl: Mangold International GmbH

Overview:

Because live observation has several limitations, capturing video is essential for behavioral studies. But setting up an observation lab is far from easy, especially for someone with no experience in audio/video technology. It requires a professional approach and appropriate technical systems to make data acquisition and observation both efficient and effective, yielding more and better results.

The attendees will get general information about the room requirements:

  • How is this handled in most cases?
  • How does the room have to be equipped?
  • How to do the installation?

They will learn what equipment is available, relevant, and useful:

  • Audio-/Video-Equipment
  • Synchronized Recording
  • Integration of specialized solutions (e.g. eye tracking, physiology)

And they will get an overview of how to conduct a study and what to do with the recorded video footage:

  • How to record everything in sync?
  • How to set markers during the recording?
  • How to code and analyze the recorded videos? (basic overview)
  • How to use 3rd-party data (e.g. physiology data) and tools (e.g. Python)? (See the sketch below.)
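
As a taste of what such third-party integration can look like in practice, the sketch below aligns heart-rate samples with video frame times using simple interpolation; the file and column names are hypothetical.

```python
# A minimal sketch of one common third-party-data task: aligning physiology
# samples with video frames so both can be coded together. Assumes two
# hypothetical CSVs sharing a common clock.
import numpy as np
import pandas as pd

hr = pd.read_csv("heart_rate.csv")        # hypothetical columns: t (s), bpm
frames = pd.read_csv("frame_times.csv")   # hypothetical columns: frame, t (s)

# linearly interpolate heart rate onto each video frame's timestamp
frames["bpm"] = np.interp(frames["t"], hr["t"], hr["bpm"])
frames.to_csv("video_with_hr.csv", index=False)
```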

Without such knowledge, there are countless ways to waste time. Nevertheless, you want to remain a researcher, not become an audio/video professional.

So the goal is to keep everything as simple as possible.

Half Day (afternoon) Workshop: EgoActive: Technological and methodological advances for infant development research “in the wild”

Organisers and Presenters:
Elena Geangu: University of York
Astrid Priscilla Martinez Cedillo: University of Essex
Quoc Vuong: Newcastle University
William Smith: University of York
Harry Mason: University of York
Yisi Zhang: Tsinghua University

Goals and Objectives

During the past decade, pioneering efforts have shown that the field is ready to move rigorous investigations of infant development from the lab to the real world (and potentially back to the lab, with experimental paradigms informed by real-world big data). However, these efforts have also revealed that existing technological, data-quantification, and analytic approaches lag far behind the requirements for answering questions about infant development, given the complexity of real-world big data and the challenges of studying infants in the ‘wild’.

This workshop will introduce the newly developed EgoActive platform, which partially fills these gaps. Developed by scientists for scientists, it consists of wearable sensors that record infants’ and caregivers’ egocentric view (head-mounted cameras), synchronised in time with recordings of cardiac activity and body movement (ECG/accelerometer body sensors), during typical everyday situations. The wearable sensors are complemented by automated and semi-automated algorithms for processing large datasets of raw audio-video and physiological signals and for extracting multimodal measures relevant to characterising infants’ environments and the development of important cognitive functions (e.g., attention). In developing these research tools, particular emphasis has been placed on scientific rigour.
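
To give a sense of what processing raw physiological signals involves, here is a generic, textbook sketch of turning an ECG trace into heart rate. It is illustrative only, not the EgoActive processing code, and the file name and 250 Hz sampling rate are assumptions.

```python
# Illustrative only -- not the EgoActive algorithms. Band-pass filter to
# emphasise the QRS complex, detect R-peaks, then convert inter-beat
# intervals to beats per minute.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 250                                   # assumed ECG sampling rate (Hz)
ecg = np.loadtxt("infant_ecg.txt")         # hypothetical 1-D ECG trace

b, a = butter(3, [5 / (FS / 2), 20 / (FS / 2)], btype="band")
filtered = filtfilt(b, a, ecg)

# R-peaks: prominent maxima at least 0.25 s apart (caps detection at ~240 bpm)
peaks, _ = find_peaks(filtered, distance=int(0.25 * FS),
                      prominence=filtered.std())
ibi = np.diff(peaks) / FS                  # inter-beat intervals (s)
print(f"mean heart rate: {60 / ibi.mean():.1f} bpm")
```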

Format

The workshop consists of 15-20 minute presentations linked to practical demonstrations, showcasing a combination of expertise in infant development, vision neuroscience, computer vision, biomedical engineering, and advanced statistics.

The attendees will learn the key features of the new tools, including their empirical validation, as well as how they can be used for understanding infant development in complex real-world environments. Throughout the talks, we will present research findings generated with these methods.

During the practical demonstrations, attendees will have the opportunity to try first-hand how the EgoActive platform works and to acquire visual and heartbeat data throughout the duration of the workshop. There will also be practical demonstrations of data processing and analyses, based in part on recordings made in the natural environment with the EgoActive wearables from 6- to 16-month-old infants.

The presentations and demonstrations will be as follows: (1) an overview of the naturalistic approach to development; (2) an introduction to the EgoActive platform and how it works; (3) approaches to analyzing and characterizing infants’ visual environment (e.g., visual features and faces); (4) methods for analyzing and characterizing infants’ physiological responses in a naturalistic environment; and (5) how to use visual and heartbeat data to predict and characterize infants’ sustained attention to objects and events in a dynamic natural environment.
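
For a sense of how heartbeat data can index attention (item 5 above), the sketch below encodes one classic idea from the infant literature: sustained attention is accompanied by a prolonged heart-rate deceleration. This is a deliberately crude heuristic for illustration, not the presenters' method; the input file, thresholds, and frame rate are all invented.

```python
# A deliberately simplified illustration, not the presenters' algorithm:
# flag runs where heart rate stays several bpm below a rolling baseline
# as candidate sustained-attention epochs.
import pandas as pd

hr = pd.read_csv("infant_hr_per_frame.csv")  # hypothetical columns: frame, bpm

baseline = hr["bpm"].rolling(window=300, min_periods=1).median()
decelerated = hr["bpm"] < baseline - 5       # >= 5 bpm below baseline (invented)

# keep only deceleration runs lasting at least 75 frames (~3 s at 25 fps)
runs = (decelerated != decelerated.shift()).cumsum()
for _, run in hr[decelerated].groupby(runs[decelerated]):
    if len(run) >= 75:
        print(f"candidate sustained-attention epoch: frames "
              f"{run['frame'].iloc[0]}-{run['frame'].iloc[-1]}")
```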

As most of the work presented in this workshop is interdisciplinary, there will also be opportunities to discuss the inherent challenges of interdisciplinary approaches, how to overcome them, and how to embed them in infant development research.

Half Day (afternoon) Workshop: Baby FACS: Facial Action Coding System for Infants and Young Children: Methods and Research Strategies

Organisers and Presenters:

Harriet Oster: New York University
Marco Dondi: University of Ferrara

Our goal in this workshop is to provide a detailed introduction to Baby FACS and its use in infancy research. Baby FACS is an objective, anatomically based coding system adapted for infant facial morphology from the adult FACS (Ekman, Friesen, & Hager, 2002). Drs. Oster and Dondi will review the theoretical and methodological foundations of Baby FACS and its unique advantages for addressing questions about infant sensory, perceptual, and cognitive processes, engagement in social interaction, emotional expression, and emotion regulation. Because Baby FACS coding takes variations in facial morphology into account, it is uniquely suited to studying developmental changes and individual and cultural differences in facial expression in typically developing full-term and preterm infants, fetuses in utero, blind infants, and infants with facial anomalies.

Baby FACS modifications of the adult FACS are based on differences in facial morphology and in appearance changes produced by facial muscle actions. Photos and videos will illustrate discrete facial muscle Action Units (AUs), complex AU configurations, and the temporal dynamics of distinctive infant expressions, including positive and negative affect, focused attention, states of alertness, pain, hunger, and stereotyped facial behaviors such as yawns and rooting. 

Attendees will observe each other’s faces, mimicking infants’ facial movements. The presenters will discuss naturalistic and experimental research paradigms for studying infants’ facial expressions and their interactions with parents and other people, other infants and children, caregivers, animals, and toys.

Thank you to our 2024 Sponsors & Exhibitors!