Task-trained recurrent neural networks (RNNs) are widely used in neuroscience and machine learning to model dynamical computations. To gain mechanistic insight into how neural systems solve tasks, prior work often reverse-engineers individual trained networks. However, different RNNs trained on the same task and achieving similar performance can exhibit strikingly different internal solutions, a phenomenon known as solution degeneracy. Here, we develop a unified framework to systematically quantify and control solution degeneracy across three levels: behavior, neural dynamics, and weight space. We apply this framework to 3,400 RNNs trained on four neuroscience-relevant tasks (flip-flop memory, sine wave generation, delayed discrimination, and path integration) while systematically varying task complexity, learning regime, network size, and regularization. We find that higher task complexity and stronger feature learning reduce degeneracy in neural dynamics but increase it in weight space, with mixed effects on behavior. In contrast, larger networks and structural regularization reduce degeneracy at all three levels. These findings empirically validate the Contravariance Principle and provide practical guidance for researchers seeking to tune the variability of RNN solutions, whether to uncover shared neural mechanisms or to model the individual variability observed in biological systems. Overall, this work offers a principled framework for quantifying and controlling solution degeneracy in task-trained RNNs, along with new tools for building more interpretable and biologically grounded models of neural computation.
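The abstract does not specify the dissimilarity measure used at the neural-dynamics level, so the following is only a minimal illustrative sketch of one common way such degeneracy can be quantified: comparing hidden-state trajectories of independently trained networks on the same trials with linear centered kernel alignment (CKA) and averaging pairwise dissimilarities. The function names (`linear_cka`, `dynamics_degeneracy`) and the toy data are hypothetical and not taken from the paper.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear CKA similarity between two trajectory matrices of shape
    (timesteps, units); 1.0 means identical up to linear transformation."""
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    cross = np.linalg.norm(X.T @ Y, ord="fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, ord="fro")
    norm_y = np.linalg.norm(Y.T @ Y, ord="fro")
    return cross / (norm_x * norm_y)

def dynamics_degeneracy(trajectories):
    """Mean pairwise dissimilarity (1 - CKA) across networks' hidden-state
    trajectories on identical inputs; larger values = more degenerate solutions."""
    n = len(trajectories)
    dissims = [1.0 - linear_cka(trajectories[i], trajectories[j])
               for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(dissims))

# Toy usage: 5 "networks", each producing a (200 timesteps x 64 units) trajectory.
rng = np.random.default_rng(0)
trajs = [rng.standard_normal((200, 64)) for _ in range(5)]
print(dynamics_degeneracy(trajs))
```

Analogous pairwise dissimilarities could in principle be computed over behavioral outputs and over (permutation-aligned) weight matrices to obtain the behavior- and weight-level measures the abstract refers to.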