Just-in-time: gaze guidance in natural behavior
Publisher: Cold Spring Harbor Laboratory
Title: Just-in-time: gaze guidance in natural behavior
Description:
ABSTRACT
Natural eye movements have primarily been studied for over-learned activities such as tea-making, sandwich-making, and hand-washing, which have a fixed sequence of associated actions.
These studies indicate a sequential activation of low-level cognitive schemas facilitating task completion.
However, it is unclear if these action schemas are activated in the same pattern when a task is novel and a sequence of actions must be planned in the moment.
Here, we recorded gaze and body movements in a naturalistic task to study action-oriented gaze behavior.
In a virtual environment, subjects moved objects on a life-size shelf to achieve a given order.
In order to compel cognitive planning, we added complexity to the sorting tasks.
Fixations aligned with the action onset showed that gaze was tightly coupled with the action sequence, and task complexity moderately affected the proportion of fixations on task-relevant regions.
Our analysis further revealed that gaze was allocated to action-relevant targets just in time.
Planning behavior predominantly corresponded to a greater visual search for task-relevant objects before the action onset.
The results support the idea that natural behavior relies very little on working memory and that humans refrain from encoding objects in the environment to plan long-term actions.
Instead, they prefer just-in-time planning: searching for an action-relevant item in the moment, directing the body and hand to it, monitoring the action until it is terminated, and moving on to the next action.
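Purely as an illustration of the kind of analysis the abstract describes (aligning fixations to action onsets and measuring the share of fixations on task-relevant regions), here is a minimal sketch. The column names, region-of-interest labels, window, and bin width are assumptions for illustration, not the authors' reported pipeline.

```python
# Hypothetical sketch: align fixations to action (e.g. grasp) onsets and compute
# the proportion of fixations on each region of interest in time bins around the
# onset. Column names, bin width, and ROI labels are illustrative assumptions.
import numpy as np
import pandas as pd

def fixation_proportions(fixations: pd.DataFrame,
                         action_onsets: pd.Series,
                         window_s: float = 2.0,
                         bin_s: float = 0.25) -> pd.DataFrame:
    """fixations: one row per fixation with columns ['trial', 'start_s', 'roi'],
       where roi is e.g. 'target', 'other', 'background'.
       action_onsets: action-onset time (s) indexed by trial."""
    rows = []
    for trial, onset in action_onsets.items():
        fx = fixations[fixations.trial == trial].copy()
        # time of each fixation relative to the action onset
        fx["rel_t"] = fx.start_s - onset
        fx = fx[fx.rel_t.between(-window_s, window_s)]
        # assign each fixation to a time bin relative to onset
        fx["bin"] = (fx.rel_t // bin_s) * bin_s
        rows.append(fx)
    aligned = pd.concat(rows, ignore_index=True)
    # per time bin: fraction of fixations landing on each region of interest
    counts = aligned.groupby(["bin", "roi"]).size().unstack(fill_value=0)
    return counts.div(counts.sum(axis=1), axis=0)
```

Plotting the resulting proportions per time bin would show how gaze shifts toward action-relevant targets just before and during an action, which is the pattern the abstract summarizes as just-in-time allocation.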
Author Summary
Eye movements in the natural environment have primarily been studied for over-learned and habitual everyday activities (tea-making, sandwich-making, hand-washing) with a fixed sequence of associated actions.
These studies show eye movements correspond to a specific order of actions learned over time.
In this study, we were interested in how humans plan and execute actions for tasks that do not have an inherent action sequence.
To that end, we asked subjects to sort objects based on object features on a life-size shelf in a virtual environment as we recorded their eye and body movements.
We investigated the general characteristics of gaze behavior while acting under natural conditions.
Our paper provides a comprehensive approach to preprocess naturalistic gaze data in virtual reality.
Furthermore, we provide a data-driven method of analyzing the different action-oriented functions of gaze.
The results show that, bereft of a predefined action sequence, humans prefer to plan only their immediate actions: eye movements are used to search for the target object to act on next, then to guide the hand towards it, and to monitor the action until it is terminated.
Such a simple strategy suggests that humans favor sub-optimal, in-the-moment behavior over planning under a sustained cognitive load.
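The author summary mentions a comprehensive approach for preprocessing naturalistic gaze data in virtual reality but gives no implementation details here. As a loose, hypothetical sketch only (not the authors' method), a common first step is velocity-threshold (I-VT) fixation detection on world-space gaze directions; the threshold and minimum duration below are illustrative assumptions.

```python
# Hypothetical sketch of one common preprocessing step for VR gaze data:
# velocity-threshold (I-VT) fixation detection on world-space gaze directions.
# The threshold and minimum duration are illustrative, not values from the paper.
import numpy as np

def detect_fixations(gaze_dirs: np.ndarray, t: np.ndarray,
                     vel_thresh_deg_s: float = 30.0,
                     min_dur_s: float = 0.1):
    """gaze_dirs: (N, 3) unit gaze direction vectors in world coordinates
       (eye-in-head combined with head pose); t: (N,) timestamps in seconds.
       Returns a list of (start_time, end_time) fixation intervals."""
    # angular distance between consecutive gaze directions, in degrees
    dots = np.clip(np.sum(gaze_dirs[:-1] * gaze_dirs[1:], axis=1), -1.0, 1.0)
    ang = np.degrees(np.arccos(dots))
    vel = ang / np.diff(t)                      # angular velocity in deg/s
    is_fix = np.concatenate([[False], vel < vel_thresh_deg_s])

    # group consecutive below-threshold samples into fixation intervals
    fixations, start = [], None
    for i, f in enumerate(is_fix):
        if f and start is None:
            start = i
        elif not f and start is not None:
            if t[i - 1] - t[start] >= min_dur_s:
                fixations.append((t[start], t[i - 1]))
            start = None
    if start is not None and t[-1] - t[start] >= min_dur_s:
        fixations.append((t[start], t[-1]))
    return fixations
```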