Decentralized finance (DeFi) has seen a tremendous increase in interest in recent years, spawning many types of protocols such as lending protocols and automated market makers (AMMs). These protocols are typically controlled using off-chain governance, where token holders can vote to modify various protocol parameters. Until now, however, choosing these parameters has been a manual process, typically performed by the core team behind the protocol. In this work, we model a DeFi environment and propose a semi-automatic parameter adjustment approach using deep Q-network (DQN) reinforcement learning. Our system automatically generates intuitive governance proposals to adjust these parameters, backed by data-driven justifications. Our evaluation results demonstrate that a learning-based on-chain governance procedure is more reactive, objective, and efficient than the existing manual approach.
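To make the approach concrete, below is a minimal sketch, not the paper's implementation, of a DQN-style agent that learns to raise or lower a single protocol parameter. The environment, state, reward, and all names (e.g., `ToyProtocolEnv`, the collateral-factor parameter) are invented for illustration, and standard DQN components such as the replay buffer and target network are omitted for brevity.

```python
# Illustrative sketch only: a minimal DQN-style agent that nudges a single
# hypothetical protocol parameter (here, a lending pool's collateral factor).
# The environment, state, and reward are invented and NOT the paper's model.
import random
import torch
import torch.nn as nn
import torch.optim as optim

class ToyProtocolEnv:
    """Hypothetical environment: state = (collateral_factor, utilization)."""
    def reset(self):
        self.cf = 0.5                      # collateral factor, kept in [0.1, 0.9]
        return self._obs()
    def _obs(self):
        util = random.random()             # stand-in for observed market utilization
        return torch.tensor([self.cf, util])
    def step(self, action):
        # Actions: 0 = lower, 1 = hold, 2 = raise the parameter.
        self.cf = min(0.9, max(0.1, self.cf + (action - 1) * 0.05))
        obs = self._obs()
        # Toy reward: prefer a collateral factor close to current utilization.
        reward = -abs(obs[0] - obs[1]).item()
        return obs, reward

# Small Q-network mapping the 2-dimensional state to 3 action values.
q_net = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 3))
opt = optim.Adam(q_net.parameters(), lr=1e-3)
env, gamma, eps = ToyProtocolEnv(), 0.99, 0.1

state = env.reset()
for step in range(2000):
    # Epsilon-greedy selection over the three parameter adjustments.
    if random.random() < eps:
        action = random.randrange(3)
    else:
        action = q_net(state).argmax().item()
    next_state, reward = env.step(action)
    # One-step TD target (replay buffer and target network omitted for brevity).
    with torch.no_grad():
        target = reward + gamma * q_net(next_state).max()
    loss = (q_net(state)[action] - target) ** 2
    opt.zero_grad(); loss.backward(); opt.step()
    state = next_state
```

In a governance setting, the chosen action and the Q-values that justified it could be surfaced as the data-driven rationale attached to a proposal, which is the kind of output the abstract describes.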