Modeling movement in real-world tasks is a fundamental scientific goal for motor control, biomechanics, and rehabilitation engineering. However, existing models rely on simplifying assumptions, such as linear and fixed-timescale mappings, that do not generalize to real-world contexts. Here, we develop a deep learning-based framework for action prediction with architecture-dependent trial embedding that outperforms traditional models across multiple contexts (walking and running, treadmill and overground, varying terrains) and input modalities (multiple body states, gaze). We find that neural network architectures with flexible input history-dependence, such as GRUs and Transformers, perform best overall. By quantifying the model's predictions relative to an autoregressive baseline, we identify context- and modality-dependent timescales: there is greater reliance on fast-timescale predictions in complex terrain, gaze predictions precede body-state predictions, and full-body state predictions precede center-of-mass-relevant predictions. This deep learning framework for action prediction provides quantifiable insights into the control of complex movements and can be extended to other actions, contexts, and populations.
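To make the described ingredients concrete, the following PyTorch sketch combines a recurrent (GRU) predictor over input history with a learned per-trial embedding. The class name, layer sizes, embedding dimension, and the concatenation of the embedding onto each timestep are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class GRUActionPredictor(nn.Module):
    """Minimal sketch: GRU over body-state/gaze inputs plus a learned
    per-trial embedding. All hyperparameters here are placeholders."""

    def __init__(self, input_dim, output_dim, num_trials,
                 hidden_dim=64, trial_embed_dim=8):
        super().__init__()
        # One learned vector per trial, concatenated onto every timestep.
        self.trial_embedding = nn.Embedding(num_trials, trial_embed_dim)
        self.gru = nn.GRU(input_dim + trial_embed_dim, hidden_dim,
                          batch_first=True)
        self.readout = nn.Linear(hidden_dim, output_dim)

    def forward(self, inputs, trial_ids):
        # inputs: (batch, time, input_dim) input-modality features
        # trial_ids: (batch,) integer trial indices
        embed = self.trial_embedding(trial_ids)                  # (batch, embed_dim)
        embed = embed.unsqueeze(1).expand(-1, inputs.size(1), -1)
        hidden, _ = self.gru(torch.cat([inputs, embed], dim=-1))
        return self.readout(hidden)                              # (batch, time, output_dim)


# Example usage on random data (shapes are placeholders).
model = GRUActionPredictor(input_dim=12, output_dim=3, num_trials=40)
x = torch.randn(8, 200, 12)          # 8 trials, 200 timesteps, 12 features
ids = torch.randint(0, 40, (8,))
pred = model(x, ids)                 # (8, 200, 3)
```

In this kind of setup, the prediction timescales discussed above could be probed by comparing such a model's accuracy against an autoregressive baseline as the input history made available to the network is varied.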