Building agents that interact with the web would allow for significant improvements in knowledge understanding and representation learning. However, web navigation tasks are difficult for current deep reinforcement learning (RL) models due to the large discrete action space and the varying number of available actions across states. In this work, we introduce DOM-Q-NET, a novel architecture for RL-based web navigation that addresses both of these problems. It parametrizes Q functions with separate networks for different action categories: clicking a DOM element and typing a string input. Our model uses a graph neural network to represent the tree-structured HTML of a standard web page. We demonstrate the capabilities of our model on the MiniWoB environment, where we match or outperform existing work without using expert demonstrations. Furthermore, we show a 2x improvement in sample efficiency when training in the multi-task setting, allowing our model to transfer learned behaviours across tasks.
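The factorization described above can be sketched as follows. This is a hypothetical minimal illustration, not the paper's implementation: the function names, the linear scoring heads, and the 2-dimensional embeddings are all assumptions. The point it demonstrates is that scoring each DOM element with a "click" head and each candidate string with a "type" head lets the total action set grow or shrink with the page, sidestepping a fixed-size output layer.

```python
# Hypothetical sketch of a factorized Q-function over a variable-size
# action set. In DOM-Q-NET the embeddings would come from a graph neural
# network over the DOM tree; here they are plain lists of floats.

def linear(weights, x):
    """Score one embedding with a linear head, producing a scalar Q-value."""
    return sum(w * xi for w, xi in zip(weights, x))

def q_values(dom_embeddings, token_embeddings, w_click, w_type):
    """Score every currently available action.

    The number of actions follows the page: one "click" action per DOM
    element embedding, one "type" action per candidate token embedding.
    """
    clicks = [("click", i, linear(w_click, e))
              for i, e in enumerate(dom_embeddings)]
    types = [("type", j, linear(w_type, e))
             for j, e in enumerate(token_embeddings)]
    return clicks + types

def greedy_action(dom_embeddings, token_embeddings, w_click, w_type):
    """Pick the action category and index with the highest Q-value."""
    actions = q_values(dom_embeddings, token_embeddings, w_click, w_type)
    return max(actions, key=lambda a: a[2])
```

Because each head scores per-candidate embeddings rather than emitting a fixed-length vector, adding or removing DOM elements between states requires no architectural change, which is the property the abstract highlights.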