Introduction: This whitepaper is the technical reference accompanying Google Cloud's AI Explanations product. It is aimed at the model developers and data scientists responsible for designing and delivering ML models, and its goal is to help them use AI Explanations to simplify model development and to explain model behavior to key stakeholders. Product managers, business leads, and end users may also find parts of this whitepaper relevant, particularly those covering the use cases for AI Explanations and, crucially, the considerations around its proper usage and current limitations. We specifically direct these readers to the "Usage Examples" and "Attribution Limitations and Usage Considerations" sections.

Whitepaper table of contents:

  • Feature Attributions

  • Attribution Limitations and Usage Considerations

  • Explanation Model Metadata

  • Visualizations with the What-If Tool

  • Usage Examples

Reference link: https://cloud.google.com/ml-engine/docs/ai-explanations/overview
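
To make the whitepaper's subject concrete, here is a minimal Python sketch of how feature attributions are typically requested from a model deployed on AI Platform with explanations enabled. The project, model, version, and feature names are placeholder assumptions, and the exact response structure should be verified against the product documentation linked above.

```python
# Minimal sketch: requesting feature attributions from an AI Platform
# model deployed with explanations enabled. Assumes application-default
# credentials; all resource and feature names below are placeholders.
from googleapiclient import discovery

service = discovery.build("ml", "v1")
name = "projects/my-project/models/my_model/versions/v1"

body = {"instances": [{"age": 42, "income": 55000.0}]}
response = service.projects().explain(name=name, body=body).execute()

# Each explanation carries per-feature attribution scores; the exact
# nesting depends on the model's explanation metadata.
for explanation in response.get("explanations", []):
    print(explanation)
```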

Related Content

Broadly, interpretability means that when we need to understand or solve something, we can obtain enough comprehensible information to do so; in other words, it is the degree to which a person can consistently predict a model's results. Grouped by where they sit in the modeling process, interpretability methods fall into roughly three categories: methods applied before modeling, building models that are inherently interpretable, and post-hoc methods that explain a model after it has been built; a minimal example of the last category is sketched below.
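
To make the third category concrete, the sketch below shows one common post-hoc method, permutation importance: the trained model is treated as a black box, and each feature is scored by how much shuffling it degrades the model's metric. The model, data, and metric are assumed placeholders, not tied to any particular library.

```python
# Post-hoc interpretability sketch: permutation importance.
# Works with any black-box model exposing predict() and any metric
# where higher is better; all names here are illustrative.
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])  # break feature j's link to the target
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)  # larger drop => more important feature
    return importances
```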

Topic: Opportunities and Challenges in Explainable Artificial Intelligence (XAI): A Survey

Abstract: Deep neural networks are now widely used in mission-critical systems that directly affect human lives, such as healthcare, self-driving vehicles, and the military. The black-box nature of deep neural networks, however, challenges their use in these mission-critical applications, raising ethical and judicial concerns that result in a lack of trust. Explainable Artificial Intelligence (XAI) is a field of AI that promotes tools, techniques, and algorithms capable of generating high-quality, interpretable, intuitive, human-understandable explanations of AI decisions. In addition to providing a holistic view of the current XAI landscape in deep learning, the paper offers mathematical summaries of seminal work. It first proposes a taxonomy that categorizes XAI techniques by their scope of explanation, the methodology behind the algorithms, and the explanation level or usage, which helps build trustworthy, interpretable, and self-explanatory deep learning models. The paper then describes the main principles used in XAI research and presents a historical timeline of landmark XAI studies from 2007 to 2020. After explaining each algorithm and approach in detail, it evaluates the explanation maps generated by eight XAI algorithms on image data, discusses the limitations of these methods, and offers potential future directions for improving XAI evaluation.
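
One of the attribution algorithms such surveys evaluate on image data, integrated gradients, is compact enough to sketch here. The version below is a framework-agnostic approximation under stated assumptions: grad_fn returns the gradient of the model's output with respect to its input, and the baseline defaults to an all-zeros (black) image; both are placeholders.

```python
# Sketch of integrated gradients, a common image-attribution method.
# Assumes grad_fn(x) returns d(model output)/dx with the same shape as x.
import numpy as np

def integrated_gradients(x, grad_fn, baseline=None, steps=50):
    baseline = np.zeros_like(x) if baseline is None else baseline
    # Riemann-sum approximation of the path integral from baseline to x.
    total_grad = np.zeros_like(x)
    for alpha in np.linspace(0.0, 1.0, steps):
        total_grad += grad_fn(baseline + alpha * (x - baseline))
    avg_grad = total_grad / steps
    return (x - baseline) * avg_grad  # per-pixel attribution map
```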

In the last years, Artificial Intelligence (AI) has achieved a notable momentum that may deliver the best of expectations over many application sectors across the field. For this to occur, the entire community stands in front of the barrier of explainability, an inherent problem of AI techniques brought by sub-symbolism (e.g. ensembles or Deep Neural Networks) that were not present in the last hype of AI. Paradigms underlying this problem fall within the so-called eXplainable AI (XAI) field, which is acknowledged as a crucial feature for the practical deployment of AI models. This overview examines the existing literature in the field of XAI, including a prospect toward what is yet to be reached. We summarize previous efforts to define explainability in Machine Learning, establishing a novel definition that covers prior conceptual propositions with a major focus on the audience for which explainability is sought. We then propose and discuss a taxonomy of recent contributions related to the explainability of different Machine Learning models, including those aimed at Deep Learning methods, for which a second taxonomy is built. This literature analysis serves as the background for a series of challenges faced by XAI, such as the crossroads between data fusion and explainability. Our prospects lead toward the concept of Responsible Artificial Intelligence, namely, a methodology for the large-scale implementation of AI methods in real organizations with fairness, model explainability and accountability at its core. Our ultimate goal is to provide newcomers to XAI with reference material that stimulates future research advances, but also to encourage experts and professionals from other disciplines to embrace the benefits of AI in their activity sectors, without any prior bias for its lack of interpretability.

Incorporating knowledge graphs into recommender systems has attracted increasing attention in recent years. By exploring the interlinks within a knowledge graph, the connectivity between users and items can be discovered as paths, which provide rich and complementary information to user-item interactions. Such connectivity not only reveals the semantics of entities and relations, but also helps to comprehend a user's interest. However, existing efforts have not fully explored this connectivity to infer user preferences, especially in terms of modeling the sequential dependencies within, and holistic semantics of, a path. In this paper, we contribute a new model named Knowledge-aware Path Recurrent Network (KPRN) to exploit knowledge graphs for recommendation. KPRN can generate path representations by composing the semantics of both entities and relations. By leveraging the sequential dependencies within a path, we allow effective reasoning on paths to infer the underlying rationale of a user-item interaction. Furthermore, we design a new weighted pooling operation to discriminate the strengths of different paths in connecting a user with an item, endowing our model with a certain level of explainability. We conduct extensive experiments on two datasets, about movies and music, demonstrating significant improvements over the state-of-the-art solutions Collaborative Knowledge Base Embedding and Neural Factorization Machine.
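
The weighted pooling mentioned in the abstract can be illustrated with a short sketch. One standard way to realize such pooling, and the form assumed here, is a temperature-scaled log-sum-exp over per-path scores: a small temperature lets the strongest path dominate, while a large one spreads weight across paths. The scores below are illustrative.

```python
# Sketch of weighted pooling over the scores of all paths connecting
# a user with an item. Assumed form: log-sum-exp with temperature gamma.
import numpy as np

def weighted_pool(path_scores, gamma=1.0):
    s = np.asarray(path_scores, dtype=float) / gamma
    m = s.max()  # subtract the max for numerical stability
    return m + np.log(np.exp(s - m).sum())

scores = [2.1, 0.3, -1.0]                 # one score per user-item path
print(weighted_pool(scores, gamma=0.1))   # strongest path dominates
print(weighted_pool(scores, gamma=10.0))  # weaker paths contribute more
```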

Explainable Recommendation refers to personalized recommendation algorithms that address the problem of why -- they not only provide the user with the recommendations, but also make the user aware why such items are recommended by generating recommendation explanations, which help to improve the effectiveness, efficiency, persuasiveness, and user satisfaction of recommender systems. In recent years, a large number of explainable recommendation approaches -- especially model-based explainable recommendation algorithms -- have been proposed and adopted in real-world systems. In this survey, we review the work on explainable recommendation published in or before 2018. We first highlight the position of explainable recommendation in recommender system research by categorizing recommendation problems into the 5W, i.e., what, when, who, where, and why. We then conduct a comprehensive survey of explainable recommendation itself in terms of three aspects: 1) We provide a chronological research line of explanations in recommender systems, including the user study approaches in the early years, as well as the more recent model-based approaches. 2) We provide a taxonomy for explainable recommendation algorithms, including user-based, item-based, model-based, and post-model explanations. 3) We summarize the application of explainable recommendation in different recommendation tasks, including product recommendation, social recommendation, POI recommendation, etc. We devote a chapter to discussing the explanation perspectives in the broader IR and machine learning settings, as well as their relationship with explainable recommendation research. We end the survey by discussing potential future research directions to promote the explainable recommendation research area.

This paper identifies the factors that have an impact on mobile recommender systems. Recommender systems have become a technology that is widely used by various online applications in situations where there is an information overload problem. Numerous applications such as e-Commerce, video platforms and social networks provide personalized recommendations to their users, and this has improved the user experience and vendor revenues. The development of recommender systems has been focused mostly on the proposal of new algorithms that provide more accurate recommendations. However, the use of mobile devices and the rapid growth of the internet and networking infrastructure have brought the necessity of using mobile recommender systems. The links between web and mobile recommender systems are described, along with how recommendations in mobile environments can be improved. This work is focused on identifying the links between web and mobile recommender systems and on providing solid future directions that aim to lead to a more integrated mobile recommendation domain.

Images account for a significant part of user decisions in many application scenarios, such as product images in e-commerce, or user image posts in social networks. It is intuitive that user preferences on the visual patterns of an image (e.g., hue, texture, color) can be highly personalized, and this provides us with highly discriminative features for making personalized recommendations. Previous work that takes advantage of images for recommendation usually transforms the images into latent representation vectors, which are adopted by a recommendation component to assist personalized user/item profiling and recommendation. However, such vectors are hardly useful in terms of providing visual explanations to users about why a particular item is recommended, which weakens the explainability of recommendation systems. As a step towards explainable recommendation models, we propose visually explainable recommendation based on attentive neural networks to model the user attention on images, under the supervision of both implicit feedback and textual reviews. In this way, we not only provide recommendation results to the users, but also tell the users why an item is recommended, by providing intuitive visual highlights in a personalized manner. Experimental results show that our models not only improve the recommendation performance, but also provide persuasive visual explanations for the users to take the recommendations.
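
The core mechanism, attending over image regions so that the attention weights themselves serve as visual highlights, can be sketched briefly. The region features, their dimensionality, and the user embedding below are illustrative assumptions rather than the paper's actual architecture.

```python
# Sketch of region-level attention that doubles as a visual explanation:
# the attention weights over image regions indicate which parts of the
# image drove the recommendation. All shapes and values are illustrative.
import numpy as np

def softmax(z):
    z = z - z.max()  # stabilize before exponentiating
    e = np.exp(z)
    return e / e.sum()

rng = np.random.default_rng(0)
regions = rng.normal(size=(49, 64))  # e.g., a 7x7 grid of region features
user = rng.normal(size=64)           # user preference embedding

weights = softmax(regions @ user)    # one attention weight per region
item_repr = weights @ regions        # attention-weighted image summary

# The largest weights mark the regions to highlight as the explanation.
top = np.argsort(weights)[-3:][::-1]
print("highlight regions:", top, "weights:", weights[top])
```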

Related Papers
Directions for Explainable Knowledge-Enabled Systems
Shruthi Chari, Daniel M. Gruen, Oshani Seneviratne, Deborah L. McGuinness
March 17, 2020

A Survey on Knowledge Graph-Based Recommender Systems
Qingyu Guo, Fuzhen Zhuang, Chuan Qin, Hengshu Zhu, Xing Xie, Hui Xiong, Qing He
February 28, 2020

Explainable Artificial Intelligence (XAI): Concepts, Taxonomies, Opportunities and Challenges toward Responsible AI
Alejandro Barredo Arrieta, Natalia Díaz-Rodríguez, Javier Del Ser, Adrien Bennetot, Siham Tabik, Alberto Barbado, Salvador García, Sergio Gil-López, Daniel Molina, Richard Benjamins, Raja Chatila, Francisco Herrera
October 22, 2019

Summit: Scaling Deep Learning Interpretability by Visualizing Activation and Attribution Summarizations
Fred Hohman, Haekyu Park, Caleb Robinson, Duen Horng Chau
September 2, 2019

Explainable Reasoning over Knowledge Graphs for Recommendation
Xiang Wang, Dingxian Wang, Canran Xu, Xiangnan He, Yixin Cao, Tat-Seng Chua
November 12, 2018

Faithfully Explaining Rankings in a News Recommender System
Maartje ter Hoeve, Anne Schuth, Daan Odijk, Maarten de Rijke
May 14, 2018

Explainable Recommendation: A Survey and New Perspectives
Yongfeng Zhang, Xu Chen
May 13, 2018

Mobile Recommender Systems: Identifying the Major Concepts
Elias Pimenidis, Nikolaos Polatidis, Haralambos Mouratidis
May 6, 2018

Visually Explainable Recommendation
Xu Chen, Yongfeng Zhang, Hongteng Xu, Yixin Cao, Zheng Qin, Hongyuan Zha
January 31, 2018

Deep Learning based Recommender System: A Survey and New Perspectives
Shuai Zhang, Lina Yao, Aixin Sun
August 3, 2017