Research

First-person views of children and an adult walking up steps on a narrow wall. The blue dot is the location of their gaze. Photos courtesy of the VR Lab at UT Austin.

Using vision to guide our actions

Where do children look while they're walking? As adults, we use visual information to plan our actions in response to changes in the environment, like slowing down on an icy patch of sidewalk. We are "expert walkers": we know where to look and how to adapt our gait to safely navigate different terrains. In my postdoctoral work, I am studying the development of visually guided walking.

We are collecting a dataset of children walking on different terrains, integrating eye tracking, body tracking, and 3D reconstructions of the environment. We will explore how children learn about the uncertainty of a terrain in the moment and how their gaze patterns develop over time.

For videos and a recent poster, see this Drive folder.

Studying development in context

For my dissertation, I brought wearable eye trackers to families' homes to record the daily experiences of toddlers and their caregivers. Studying children at home lets us capture the complexity of natural behavior and test the validity of hypotheses generated in the lab.

We sought to maximize ecological validity by removing experimenter presence during data collection, allowing other family members to be home, and encouraging caregivers to do whatever they wanted and to speak in the language(s) most familiar to the child. The real world is noisy with people, places, and things to learn about and explore!

Below is a video showing what toddlers' everyday lives look like, taken from their perspective. The purple circle is the location of their gaze.

For more see: Schroer, S.E., Peters, R.E., Yarbrough, A., & Yu, C. (2022). Visual attention and language exposure during everyday activities: An at-home study of early word learning using wearable eye trackers. Proceedings of the 44th Annual Meeting of the Cognitive Science Society. [PDF]

[Video: toddler-everyday-experiences.mp4]

First-person view of infant (top) and parent (bottom). Photos courtesy of the Developmental Intelligence Lab.


Real-time word learning

Within an interaction, what behaviors support in-the-moment word learning? How do those behaviors shape infants' field of view?

In my doctoral research, I studied parents and infants in free-flowing interactions in a home-like lab environment. Dyads were given unfamiliar objects to play with, and parents were not prompted to teach their children, so object naming occurred naturally in the interaction. I directly measured real-time patterns in parent and infant behavior using wearable eye trackers and motion sensors.

I studied how infants’ actions change their visual experiences and elicit behavior from social partners, how dyads coordinate their behavior, and how all of this creates the landscape for early learning.

For more see:

Schroer, S.E., & Yu, C. (2023). Looking is not enough: Multimodal attention supports the real-time learning of new words. Developmental Science. [PDF]

Schroer, S.E., & Yu, C. (2023). Embodied attention resolves visual ambiguity to support infants’ real-time word learning. To appear in the Proceedings of the 45th Annual Meeting of the Cognitive Science Society. [PDF]

Schroer, S.E., & Yu, C. (2021). Multimodal attention creates the visual input for infant word learning. Proceedings of the 2021 IEEE International Conference on Development and Learning. [PDF]