Research

Studying development in context

The FIELD Project (Family-Infant Eye tracking and Language Development) brought wearable eye trackers into families' homes to record the daily experiences of toddlers and their caregivers. Studying children at home lets us capture the complexity of natural behavior and test whether hypotheses generated in the lab hold up in everyday life.

We sought to maximize ecological validity: no experimenter was present during data collection, other family members were free to be home, and caregivers were encouraged to do whatever they wanted and to speak in the language(s) most familiar to the child. The real world is noisy with people, places, and things to learn about and explore!

Below is a video showing what toddlers' everyday lives look like, taken from their perspective. The purple circle marks the location of the toddler's gaze.

For more, see: Schroer, S.E., Peters, R.E., Yarbrough, A., & Yu, C. (2022). Visual attention and language exposure during everyday activities: An at-home study of early word learning using wearable eye trackers. Proceedings of the 44th Annual Meeting of the Cognitive Science Society. [PDF]

[Video: video1_toddler-everyday-experiences.mp4]

First-person view of the infant (top) and the parent (bottom). Footage courtesy of the Developmental Intelligence Lab.


Real-time word learning

Within an interaction, what behaviors support in-the-moment word learning? How do those behaviors shape infants' field of view?

In my doctoral research, I studied parents and infants in free-flowing interactions in a home-like lab environment. Dyads were given unfamiliar objects to play with. Parents were not prompted to teach their children, so object naming occurred naturally in the interaction. I directly measured real-time patterns in parent and infant behavior using wearable eye trackers and motion sensors. 

I studied how infants’ actions change their visual experiences and elicit behavior from social partners, how dyads coordinate their behavior, and how all of this creates the landscape for early learning.
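To make one of these measures concrete, below is a minimal sketch, assuming frame-aligned gaze streams coded by region of interest (ROI), of how joint-attention episodes might be detected: stretches where both partners look at the same object for long enough to count. This is an illustrative Python example, not the studies' actual analysis pipeline; the function name, the 30 Hz frame rate, and the 0.5 s duration threshold are all hypothetical choices.

from itertools import groupby

FPS = 30               # assumed sampling rate of the gaze streams (Hz)
MIN_DURATION_S = 0.5   # assumed minimum episode length, in seconds

def joint_attention_episodes(infant_roi, parent_roi,
                             fps=FPS, min_duration_s=MIN_DURATION_S):
    """Return (start_frame, end_frame, roi) spans where both partners'
    gaze is coded to the same region of interest (ROI) for at least
    min_duration_s. Streams are frame-aligned lists of ROI labels,
    with None marking frames with no codable target."""
    assert len(infant_roi) == len(parent_roi), "streams must be aligned"
    # Frame-by-frame agreement: the shared ROI label, or None.
    shared = [i if (i == p and i is not None) else None
              for i, p in zip(infant_roi, parent_roi)]
    episodes, frame = [], 0
    for roi, run in groupby(shared):
        length = sum(1 for _ in run)
        if roi is not None and length / fps >= min_duration_s:
            episodes.append((frame, frame + length - 1, roi))
        frame += length
    return episodes

# Toy example: two 45-frame streams coded as 'ball', 'cup', or None.
infant = ['ball'] * 20 + [None] * 5 + ['cup'] * 20
parent = ['ball'] * 25 + ['cup'] * 20
print(joint_attention_episodes(infant, parent))
# -> [(0, 19, 'ball'), (25, 44, 'cup')]

Episode counts and durations of this kind are the sort of moment-by-moment quantities that can then be related to naming events and to which words infants ultimately learn.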

For more, see:

Schroer, S.E., & Yu, C. (2023). Looking is not enough: Multimodal attention supports the real-time learning of new words. Developmental Science. [PDF]

Schroer, S.E., & Yu, C. (2023). Embodied attention resolves visual ambiguity to support infants’ real-time word learning. Proceedings of the 45th Annual Meeting of the Cognitive Science Society. [PDF]

Schroer, S.E., & Yu, C. (2021). Multimodal attention creates the visual input for infant word learning. Proceedings of the 2021 IEEE International Conference on Development and Learning. [PDF]