Training giant models from scratch for each complex task is resource- and data-inefficient. To help develop models that can leverage existing systems, we propose a new challenge: learning to solve complex tasks by communicating with existing agents (or models) in natural language. We design a synthetic benchmark, CommaQA, with three complex reasoning tasks (explicit, implicit, numeric) designed to be solved by communicating with existing QA agents. For instance, one task requires using text and table QA agents to answer questions such as "Who had the longest javelin throw from the USA?". We show that black-box models struggle to learn this task from scratch (accuracy under 50\%), even with access to each agent's knowledge and gold-facts supervision. In contrast, models that learn to communicate with agents outperform black-box models, reaching scores of 100\% when given gold decomposition supervision. However, the challenge of learning to solve complex tasks by communicating with existing agents \emph{without relying on any auxiliary supervision or data} remains highly elusive. We release CommaQA, along with a compositional generalization test split, to advance research in this direction. The dataset and code are available at https://github.com/allenai/commaqa.
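The agent-communication setup described above can be sketched as follows. This is a minimal illustrative example, not the CommaQA interface: the agents, sub-questions, and facts are all invented, and a real system would learn to emit the decomposition rather than have it hard-coded.

```python
# Hypothetical sketch of solving a complex question by querying QA agents.
# All agent behavior and facts below are invented for illustration.

def table_qa_agent(question: str) -> object:
    """Stand-in for a table QA agent over (invented) athlete records."""
    facts = {
        "Which athletes are from the USA?": ["Smith", "Jones"],
        "What was Smith's javelin throw?": "82m",
        "What was Jones's javelin throw?": "85m",
    }
    return facts[question]

def answer_via_agents(question: str) -> str:
    """Decompose a complex question into natural-language sub-questions.

    A learned model would produce this decomposition itself; here it is
    hard-coded for the running example from the abstract.
    """
    athletes = table_qa_agent("Which athletes are from the USA?")
    throws = {
        name: table_qa_agent(f"What was {name}'s javelin throw?")
        for name in athletes
    }
    # Numeric reasoning step: pick the athlete with the longest throw.
    return max(throws, key=lambda name: int(throws[name].rstrip("m")))

print(answer_via_agents("Who had the longest javelin throw from the USA?"))
```

The key point the benchmark probes is the middle layer: producing the right sequence of sub-questions to the right agents, in natural language, without gold decomposition supervision.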