Prediction of static and perturbed reach goals from movement kinematics
In Proceedings of the HumanE AI conference, 16.11.-16.11.2022, Stockholm, n.n., Nov/2022.
Actions are complex cognitive phenomena and can be described at different levels of abstraction,
from abstract action intentions to the description of the mechanistic properties of movements
(Jacob et al., 2005; Kilner, 2011; Urgesi et al., 2014). As social animals, humans act largely on
the basis of their interpretation of, and predictions about, others' actions and intentions, and these
intentions strongly modulate the kinematic parameters of reach-to-grasp movements. In fact, it has been
demonstrated that changes in both target properties (intrinsic/extrinsic features) and prior
intentions affect both the reach and the grasp components (Egmose & Køppe, 2018; Jeannerod et
al., 1995). The knowledge that different motor intentions modulate the kinematics of human
behaviour opens up the possibility that the observed kinematics may serve as a cue to predict action
intentions. Reliable predictions about the intention and outcome of an observed action can be made
because action execution is itself shaped by its final goal and/or by the physical properties of its
target. The key point is that, because the action is shaped by its final goal, information about the
goal is available to the observer before action execution is completed. This applies to a wide
spectrum of ecological motor behaviours, from simple actions such as reaching to grasp an object to more
complex ones such as throwing a stone to hit prey or playing an interactive ball game (Maselli et al.).
However, it is unknown whether, when the target of an ongoing individual reaching movement
unexpectedly shifts to a different direction or depth with respect to the body, the action intention
changes accordingly with the target modification, and what the temporal structure is of decoding
target goals that change their spatial position in two dimensions. In the
present study, we characterized the information embedded in the kinematics of reaching
movement towards targets located at different directions and depths with respect to the body in a
condition where the targets remained static for the entire duration of the movement and in a
condition where the targets shifted to another position during movement execution. We
designed our analysis to perform temporal decoding of the final goals with a classifier.
23 naïve volunteers (11 males and 12 females, mean age 22.6 ± 2.3 years) took part in the study:
12 participants performed Experiment 1 and 11 performed Experiment 2. In both experiments,
participants reached towards targets that either remained static for the entire duration of the
movement or shifted to another position during movement execution; the target shift could occur at
movement onset (Experiment 1) or 100 ms after movement onset (Experiment 2). We designed our
analysis to perform temporal decoding of the final goals with a recurrent neural network (RNN).
Specifically, to evaluate the temporal evolution of decoding accuracy in recognizing the target
positions, we estimated decoding performance using the x, y and z components of the index and
wrist markers in small time intervals as RNN input data.
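The windowing step described above can be sketched as follows; this is a minimal illustration, in which the window count, trial length, and channel layout are hypothetical assumptions (the abstract does not specify them), and the RNN architecture itself is not reproduced:

```python
import numpy as np

def make_windows(trial, n_windows=10):
    """Split one time-normalized trial into n_windows equal intervals.

    trial: array of shape (T, 6) holding the x, y, z components of the
    index and wrist markers at each time sample (channel layout assumed).
    Returns an array of shape (n_windows, T // n_windows, 6); each window
    would then serve as one input segment for the RNN classifier.
    """
    T = trial.shape[0]
    win_len = T // n_windows
    return np.stack([trial[i * win_len:(i + 1) * win_len]
                     for i in range(n_windows)])

# Hypothetical trial: 100 samples x 6 kinematic channels
rng = np.random.default_rng(0)
trial = rng.normal(size=(100, 6))
windows = make_windows(trial)
print(windows.shape)  # (10, 10, 6)
```

Decoding accuracy as a function of movement progression is then obtained by evaluating the classifier on successive windows; whether windows were used independently or cumulatively is not stated in the text.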
In Experiment 1, we found that, on average, classification performance progressively increased
from movement onset to movement end, exceeding the defined chance level (16.6%), in both the
direction and depth dimensions as well as in decoding perturbed visual targets.
However, classification accuracies in decoding targets along the direction and depth dimensions
differed in the maximum accuracy reached by the classifier in the final phase of the movement:
for targets along direction, the maximum accuracy was 0.94, whereas for targets along depth it
was 0.77. Moreover, over the entire course of the movement, the classification accuracy for
targets along direction was higher than that for targets along depth, and this difference reached
significance from 63% of reaching execution onwards (SPM1d with two-sample Hotelling's T²
test, p < 0.05).
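The pointwise statistic underlying such a comparison can be sketched in plain NumPy. This is only an illustration: the actual analysis used the SPM1d package, whose inference stage additionally applies random-field-theory thresholds not reproduced here, and the array shapes below (subjects, time points) are assumptions, not the study's data:

```python
import numpy as np

def hotellings_t2(A, B):
    """Two-sample Hotelling's T^2 statistic for groups A (n1 x p) and
    B (n2 x p) of p-dimensional observations."""
    n1, n2 = len(A), len(B)
    d = A.mean(axis=0) - B.mean(axis=0)
    # Pooled covariance of the two groups
    S = ((n1 - 1) * np.cov(A, rowvar=False) +
         (n2 - 1) * np.cov(B, rowvar=False)) / (n1 + n2 - 2)
    S = np.atleast_2d(S)
    return (n1 * n2) / (n1 + n2) * float(d @ np.linalg.solve(S, d))

# Illustrative accuracy curves: 12 subjects x 100 time points x 1 measure
rng = np.random.default_rng(1)
acc_dir = rng.normal(0.90, 0.05, size=(12, 100, 1))
acc_dep = rng.normal(0.70, 0.05, size=(12, 100, 1))

# Statistic computed independently at each normalized time point
t2_curve = np.array([hotellings_t2(acc_dir[:, t], acc_dep[:, t])
                     for t in range(100)])
```

Computing the statistic at every point of the time-normalized trajectory yields a T² curve; in SPM1d, a critical threshold is then derived so that supra-threshold clusters (here, from 63% of reaching execution onwards) can be declared significant.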
In Experiment 2, a progressive increase of the recognition rate was likewise visible across
movement execution in both the direction and depth dimensions. The comparison of the average
recognition rates across movement execution shows that the prediction of static and perturbed
target goals located along direction was more accurate than the prediction of static and perturbed
target goals located along depth, consistent with the results of Experiment 1. However, no
significant differences between the decoding accuracies for direction and depth were found
(SPM1d with two-sample Hotelling's T² test, p > 0.05).
This study provides novel insights into the quantitative accuracy that can be achieved in
predicting reaching target goals located in three-dimensional space, both when the targets remain
static and when the target perturbation occurs at the onset of the reach or at later stages.