Abstract
Previous research has shown that observers can predict the target object of a grasping action from early hand-preshaping cues. However, critical questions remain unexplored: how predictions adapt to the available kinematic information and how they evolve over the course of the movement. We address these gaps by combining kinematic analysis with machine-learning approaches. Using motion capture, we recorded reach-to-grasp actions toward large and small objects and had participants predict target size from hand kinematics at varying time points. Our analysis revealed that prediction performance not only improved with increasing information but, crucially, differed significantly between target-size choices. To provide insight into the participants' performance, we developed a comparative framework using two distinct machine-learning models: Support Vector Machines modeling kinematic information and convolutional–recurrent neural networks (CNN–RNNs) extracting visual patterns. This comparison indicates that predicting the target objects of observed actions adapts to the available kinematic information depending on the target object, with prediction changing over time accordingly. These findings advance our understanding of action prediction and have significant implications for social cognition and human–machine interaction.
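To make the classification setup concrete, the sketch below shows how target size (large vs. small) might be decoded from hand-kinematic features with a linear Support Vector Machine, one of the two model classes the abstract mentions. This is an illustrative example only, not the authors' code: the features (grip aperture, wrist velocity) and all numerical values are hypothetical synthetic data.

```python
# Illustrative sketch (not the authors' pipeline): decoding target size
# from hand kinematics at a single time point with a linear SVM.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic kinematic features per trial: grip aperture (mm) and wrist
# velocity (mm/s). Apertures for large targets are shifted upward,
# mimicking hand preshaping; the values are invented for illustration.
n = 200
small = np.column_stack([rng.normal(60, 8, n), rng.normal(900, 120, n)])
large = np.column_stack([rng.normal(85, 8, n), rng.normal(900, 120, n)])
X = np.vstack([small, large])
y = np.array([0] * n + [1] * n)  # 0 = small target, 1 = large target

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

In a time-resolved analysis of the kind described above, one such classifier would be trained at each sampled time point, so that decoding accuracy can be tracked as kinematic information accumulates over the movement.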
