Large language models (LLMs) are now accessible through browser-based interfaces to anyone with a computer and an internet connection, shifting the dynamics of participation in AI development. This article examines how interactive feedback features in ChatGPT's interface afford user participation in LLM iteration. Drawing on a survey of early ChatGPT users and applying the mechanisms and conditions framework of affordances, we analyse how these features shape user input. Our analysis indicates that these features encourage simple, frequent, and performance-focused feedback while discouraging collective input and discussion among users. Informed by the participatory design literature, we argue that such constraints, if replicated across broader user bases, risk reinforcing power imbalances between users, the public, and the companies developing LLMs. Our analysis contributes to the growing literature on participatory AI by critically examining the limitations of existing feedback processes and proposing directions for their redesign. Rather than focusing solely on aligning model outputs with specific user preferences, we advocate for creating infrastructure that supports sustained dialogue about the purpose and applications of LLMs. This approach requires attention to the ongoing work of "infrastructuring": creating and sustaining the social, technical, and institutional structures necessary to address matters of concern to stakeholders affected by LLM development and deployment.