Reinforcement learning agents have mostly been developed and evaluated under the assumption that they operate fully autonomously, taking every action themselves. In this work, our goal is to develop algorithms that, by learning to switch control between machine and human agents, allow existing reinforcement learning agents to operate under different automation levels. To this end, we first formally define the problem of learning to switch control among agents in a team as a 2-layer Markov decision process. Then, we develop an online learning algorithm that uses upper confidence bounds on the agents' policies and the environment's transition probabilities to find a sequence of switching policies. We prove that the total regret of our algorithm with respect to the optimal switching policy is sublinear in the number of learning steps. Moreover, we show that our algorithm can be used to find multiple sequences of switching policies across several independent teams of agents operating in similar environments, where it benefits greatly from maintaining shared confidence bounds on the environments' transition probabilities. Simulation experiments on obstacle avoidance in a semi-autonomous driving scenario illustrate our theoretical findings and demonstrate that, by exploiting the specific structure of the problem, our proposed algorithm outperforms problem-agnostic algorithms.
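The switching layer described above can be illustrated with a minimal sketch. This is not the paper's algorithm: the environment, the fixed low-level policies, the switching penalty, and the per-state bandit-style lower-confidence-bound rule are all simplifying assumptions chosen to show the core idea of optimistically choosing which agent controls the system at each step.

```python
import math
import random

random.seed(0)

N_STATES = 5          # toy driving lane: higher index = closer to an obstacle
AGENTS = ("machine", "human")
SWITCH_PENALTY = 0.1  # hypothetical extra cost whenever the human is in control

def agent_acts_safely(agent):
    # Assumed low-level policies: the human avoids obstacles more reliably.
    p_safe = 0.9 if agent == "human" else 0.6
    return random.random() < p_safe

def env_step(state, safe):
    # Safe actions move away from the obstacle, risky actions move toward it.
    nxt = max(state - 1, 0) if safe else min(state + 1, N_STATES - 1)
    cost = 1.0 if nxt == N_STATES - 1 else 0.0  # crash cost at the last cell
    return nxt, cost

# Bandit-style simplification of the switching policy: per state, pick the
# controller with the lowest optimistic (lower-confidence-bound) cost estimate.
counts = {(a, s): 0 for a in AGENTS for s in range(N_STATES)}
cost_sum = {(a, s): 0.0 for a in AGENTS for s in range(N_STATES)}

def lcb_cost(agent, state, t):
    n = counts[(agent, state)]
    if n == 0:
        return -float("inf")       # force exploration of untried controllers
    mean = cost_sum[(agent, state)] / n
    if agent == "human":
        mean += SWITCH_PENALTY
    radius = math.sqrt(2.0 * math.log(max(t, 2)) / n)
    return mean - radius           # optimism in the face of uncertainty

state, total_cost = 0, 0.0
T = 2000
for t in range(1, T + 1):
    agent = min(AGENTS, key=lambda a: lcb_cost(a, state, t))
    safe = agent_acts_safely(agent)
    nxt, cost = env_step(state, safe)
    counts[(agent, state)] += 1
    cost_sum[(agent, state)] += cost
    total_cost += cost + (SWITCH_PENALTY if agent == "human" else 0.0)
    state = nxt

print(f"average cost per step: {total_cost / T:.3f}")
```

In the paper's full setting the switching layer instead plans over a 2-layer Markov decision process, with confidence bounds on both the agents' policies and the transition probabilities; the sketch collapses that to per-state immediate costs purely for brevity.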