ICIS Online Events
The International Congress of Infant Studies is pleased to announce a series of online events to review the present state of the field, address key issues critical to further progress, and promote the education and participation of young researchers.
Each webinar consists of presentations and Q&A time with the audience. A listing of upcoming events is available in the calendar below.
Webinar registration is FREE for members; complimentary registration is a benefit of ICIS membership. If you have not yet joined or renewed, please see our membership page for full membership details.
Member Video Resources
ICIS Online Event videos are available in the member only area of the website. If you are not a member, consider joining ICIS to access these resources.
Videos may not be recorded, downloaded, or distributed without the written consent of ICIS.
Events Calendar
A recording of all online events will be available to ICIS members in the member only area of the website within a few days of the presentation.
Beyond who’s speaking when: machine learning tools to extract rich multi-dimensional features from home audio recordings
19th June at 9am PST/5pm GMT/6pm CET

Gio Esposito
University of East London
Introductory overview – parsing vocalisations, arousal, acoustic features, and child-directed speech from home audio recordings
Giovanni Esposito is a doctoral researcher at the UEL BabyDevLab, where he works on the ERC-funded Oscillatory Neural and Autonomic Correlates of Social Attunedness (ONACSA) project. His thesis analyses the project’s home-recorded wearable data to investigate the relationship between infants’ autonomic arousal and their vocal behaviours, whether this initially tight coupling changes over development, and how it relates to vocal development and dyadic interactions with caregivers. Giovanni is interested in the potential of machine learning to further our understanding of naturalistic daylong recordings and has worked on speaker diarization, infant cry classification, and infant-directed speech classification.

Einari Vaaras
Tampere University
Analysing affect from acoustic speech signals
Einari Vaaras received his B.Sc. (Tech.) and M.Sc. (Tech.) degrees in electrical engineering from Tampere University, Finland, in 2019 and 2021, respectively. He is currently pursuing his D.Sc. (Tech.) degree at the same institution. Einari’s research interests include representation learning, active learning, paralinguistic speech processing, and the application of machine-learning models in clinical healthcare.

Pierre Labendzki
University of East London
Quantifying density and complexity in the semantic content of child-directed speech
Pierre is finishing his PhD under the supervision of Prof. Sam Wass and Dr Louise Goupil. His PhD explores how caregivers scaffold their behaviours to offer the best trade-off between familiarity and novelty for the multiple and changing needs of their infants: in particular, how nursery rhymes and infant-directed speech are designed for attention and comprehension, and how caregivers use multiple modalities and hierarchical levels to attract and maintain infant attention.

Abdellah Fourtassi
Aix-Marseille University
Quantifying semantic contingency in child-caregiver conversations
Abdellah Fourtassi is an Associate Professor of Computer and Cognitive Science at Aix-Marseille University. He leads the Computational Communicative Development (CoCoDev) team, which investigates how children acquire language and communication skills through social interaction. His research leverages AI techniques to study developmental processes in naturalistic environments. More information is available on his website: afourtassi.github.io

Manjunath Mulimani
Tampere University
Extracting auditory objects from home recordings
Manjunath Mulimani is a Postdoctoral Research Fellow at Tampere University, Finland. He received his PhD in Computer Science and Engineering from the National Institute of Technology Karnataka, Surathkal, India, in 2020. His research interests include acoustic signal processing, incremental learning, and sound event detection and classification.