We argue that accountability mechanisms are needed in human-AI agent relationships to ensure alignment with user and societal interests. We propose a framework in which an AI agent's engagement is conditional on appropriate user behaviour. The framework incorporates design strategies such as distancing, disengaging, and discouraging.