Neural network controllers increasingly demand millions of parameters, and language model approaches push into the billions. For embedded aerospace systems with strict power and latency constraints, this scaling is prohibitive. We present Tiny Recursive Control (TRC), a neural architecture based on a counterintuitive principle: capacity can emerge from iteration depth rather than parameter count. TRC applies compact networks (approximately 1.5M parameters) repeatedly through a two-level hierarchical latent structure, refining control sequences by simulating trajectories and correcting based on tracking error. Because the same weights process every refinement step, adding iterations increases computation without increasing memory. We evaluate TRC on nonlinear control problems including oscillator stabilization and powered descent with fuel constraints. Across these domains, TRC achieves near-optimal control costs while requiring only millisecond-scale inference on GPU and under 10~MB memory, two orders of magnitude smaller than language model baselines. These results demonstrate that recursive reasoning, previously confined to discrete tasks, transfers effectively to continuous control synthesis.
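The core loop the abstract describes, applying one small weight-tied network repeatedly to refine a control sequence from simulated tracking error, can be sketched as follows. This is a minimal illustration only: the dynamics (a double integrator), the horizon, the single flat weight matrix `W` (untrained, random), and the function names `simulate`/`refine` are all assumptions for exposition, and the sketch collapses TRC's two-level latent hierarchy into a single refinement level.

```python
import numpy as np

rng = np.random.default_rng(0)
H = 10                               # control horizon (hypothetical)
W = rng.normal(0, 0.1, (2 * H, H))   # one compact weight matrix, reused at every step

def simulate(x0, u, dt=0.1):
    """Roll out toy double-integrator dynamics under control sequence u."""
    x = np.array(x0, dtype=float)
    pos = []
    for ut in u:
        x = x + dt * np.array([x[1], ut])   # [pos', vel'] = [vel, u]
        pos.append(x[0])
    return np.array(pos)

def refine(u, x0, target):
    """One weight-tied refinement: simulate, measure tracking error, correct."""
    err = target - simulate(x0, u)          # tracking error along the horizon
    feat = np.concatenate([u, err])
    return u + 0.5 * np.tanh(feat @ W)      # the SAME W every iteration

u = np.zeros(H)
target = np.ones(H)                          # drive position toward 1
for _ in range(16):                          # more iterations: more compute, same memory
    u = refine(u, x0=[0.0, 0.0], target=target)
```

Because `W` is shared across all sixteen refinement steps, memory is fixed by the network size while compute scales with the iteration count, which is the trade-off the abstract claims; with random weights the corrections are not meaningful, so this shows the loop structure, not trained behavior.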