The centralized content moderation paradigm both falls short and over-reaches: 1) it fails to account for the subjective nature of harm, and 2) it responds to content deemed harmful with blunt suppression, even when such content can be salvaged. We first investigate this through formative interviews, documenting how seemingly benign content becomes harmful due to individual life experiences. Based on these insights, we developed DIY-MOD, a browser extension that operationalizes a new paradigm: personalized content transformation. Operating on a user's own definition of harm, DIY-MOD transforms sensitive elements within content in real time instead of suppressing the content itself. The system selects the most appropriate transformation for a piece of content from a diverse palette, ranging from obfuscation to artistic stylizing, to match the user's specific needs while preserving the content's informational value. Our two-session user study demonstrates that this approach increases users' sense of agency and safety, enabling them to engage with content and communities they previously needed to avoid.
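The core idea of matching a transformation to a user's own definition of harm can be illustrated with a minimal sketch. The names below (`chooseTransformation`, the intensity scale, and the specific palette tiers) are hypothetical illustrations, not the paper's actual implementation; the sketch only assumes that each matched element carries a user-assigned severity and that stronger severities map to stronger obfuscation:

```typescript
// Hypothetical sketch: map a user-defined severity rating for a matched
// sensitive element to one transformation from a small palette.
// Severity scale (assumed): 1 = mild discomfort .. 4 = severe trigger.

type Transformation = "none" | "cartoon" | "overlay" | "blur";

function chooseTransformation(severity: number): Transformation {
  if (severity >= 4) return "blur";     // fully obscure the element
  if (severity >= 3) return "overlay";  // cover with a warning layer
  if (severity >= 2) return "cartoon";  // artistic stylizing softens it
  return "none";                        // leave content untouched
}

// A content script could then apply the chosen transformation to the
// matched DOM element, e.g. via a CSS filter for "blur".
```

The key design point is that the mapping is driven by each user's own ratings rather than a platform-wide threshold, so the same element can be blurred for one user and left intact for another.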

