Nowadays, artificial intelligence algorithms are used for targeted and personalized content distribution at a large scale as part of the intense competition for attention in the digital media environment. Unfortunately, targeted information dissemination may result in intellectual isolation and discrimination. Further, as demonstrated by recent political events in the US and EU, malicious bots and social media users can create and propagate targeted `fake news' content in different forms for political gain. From the other direction, fake news detection algorithms attempt to combat such problems by identifying misinformation and fraudulent user profiles. This paper reviews common news feed algorithms as well as methods for fake news detection, and we discuss how news feed algorithms could be misused to promote falsified content, affect news diversity, or impact credibility. We review how news feed algorithms and recommender engines can reinforce confirmation bias, isolating users within certain news sources and distorting their perception of reality. As a potential solution for increasing user awareness of how content is selected and sorted, we argue for the use of interpretable and explainable news feed algorithms. We discuss how improved user awareness and system transparency could mitigate the unwanted outcomes of echo chambers and filter bubbles in social media.