Department of Informatics Interactive Visual Data Analysis Group

Visual Explanation of Ranking and Recommendation

Introduction

Explainable Artificial Intelligence (XAI) is a research field whose goal is to provide explanations of black-box models. The aim is to help users (namely model developers, data journalists, and end users) understand the behavior and decision-making mechanisms of these models. To do so, current approaches train surrogate models, such as LIME, to approximate the predictions of the underlying black-box model, or compute gradient-based attributions, such as Grad-CAM. The results are then visualized, typically through bar charts that show each feature's importance to the prediction.
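As a minimal sketch of the surrogate idea, a LIME-style local approximation can be built by perturbing an instance, querying the black-box model, and fitting a weighted linear model whose coefficients become the feature importances shown in a bar chart. The scoring function, perturbation scale, and kernel width below are illustrative assumptions, not any particular library's implementation:

```python
import numpy as np

# Hypothetical black-box scorer standing in for an opaque model.
def black_box(X):
    return np.sin(X[:, 0]) + 2.0 * X[:, 1] ** 2 + 0.1 * X[:, 2]

rng = np.random.default_rng(0)
x0 = np.array([0.5, -1.0, 2.0])  # the instance whose score we want to explain

# 1. Sample perturbations in the local neighbourhood of x0.
X = x0 + rng.normal(scale=0.1, size=(500, 3))
y = black_box(X)

# 2. Weight samples by proximity to x0 (a Gaussian kernel, as in LIME).
w = np.exp(-np.sum((X - x0) ** 2, axis=1) / 0.02)

# 3. Fit a weighted linear surrogate; its coefficients approximate the
#    local importance of each feature for the black-box prediction.
A = np.hstack([X, np.ones((len(X), 1))])           # add an intercept column
coef = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
importances = coef[:3]                             # these feed the bar chart
print(importances)
```

Here the second feature dominates locally (the quadratic term has the steepest slope at x0), which is exactly the kind of signal a feature-importance bar chart would convey.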

Most current research in the XAI field has focused on one specific user group, the model developer, and on computer vision problems, for example, explaining how each layer of a convolutional neural network contributes to a classification decision.

However, little work has studied or proposed novel techniques for visually communicating these explanations, or has considered other user groups such as data analysts, data journalists, and end users. In particular, little research has focused on applying XAI techniques to ranking and recommendation problems.

Ranking and recommendation explanation

Every time we use e-commerce services such as Galaxus, Amazon, and Zalando, or streaming services such as Netflix and Spotify, we are presented with a ranking of items or some kind of recommendation “made just for you”. The users of these systems receive no explanation of how these results are computed or how well they align with their needs. As a result, users simply have to trust that the provided ranking or recommendation is customized for them and that the result is not affected by any kind of bias.

How can information visualization and human-computer interaction techniques be combined with XAI ones to increase user trust in ranking and recommendation results and create human-centered explainers?

In this multidisciplinary project, we first define user needs; we then prototype, design, develop, and qualitatively and quantitatively evaluate interactive visual web applications to enhance user trust in ranking and recommendation results via visual explanation.

Publications

[POSTER] A design space for explainable ranking and ranking models

 

Figure: radar charts of ranking explainers

Item ranking systems support users in multi-criteria decision-making tasks. Users need to trust rankings and ranking algorithms to reflect their preferences faithfully while avoiding systematic errors and biases. However, only a few approaches today help end users, model developers, and analysts explain rankings. We report on a study of explanation approaches from the perspectives of recommender systems, explainable AI, and visualization research, and propose the first cross-domain design space for explainers of item rankings. In addition, we leverage the descriptive power of the design space to characterize a) existing explainers and b) three main user groups involved in ranking explanation tasks. The generative power of the design space is a means for future designers and developers to create more target-oriented solutions in this still weakly explored space.

Full poster abstract: arXiv

Conference: EUROVIS 2022

[WORKSHOP] What should we watch tonight?

A personalised user experience on the web relies heavily on recommender systems. While state-of-the-art recommender systems achieve strong results, these systems fail to generate the right items for some users. We train two different recommendation models, EmbeddingMF and DeepFM, and attempt to unpack the reasoning behind our models’ training results, explaining to the reader the impact these have on model recommendations. We use various visualization techniques to show that, once a user deviates from the high-density area of a rating distribution, the recommender system becomes unable to generate accurate recommendations for them. This finding calls for new directions in the development of recommendation models, going beyond a “one-model-fits-all-users” paradigm. We call for increased attention to individual needs through adoption of a human-centered approach for the design of recommendation systems.
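The core idea behind an embedding-based matrix factorization recommender like EmbeddingMF can be sketched in a few lines: learn a latent vector per user and per item so that their dot product approximates the observed rating. The toy ratings, dimensions, and hyperparameters below are hypothetical and not taken from the paper:

```python
import numpy as np

# Toy explicit ratings as (user, item, rating) triples; hypothetical data.
ratings = [(0, 0, 5.0), (0, 1, 3.0), (1, 0, 4.0),
           (1, 2, 1.0), (2, 1, 2.0), (2, 2, 5.0)]
n_users, n_items, k = 3, 3, 4

rng = np.random.default_rng(42)
U = rng.normal(scale=0.1, size=(n_users, k))  # user embeddings
V = rng.normal(scale=0.1, size=(n_items, k))  # item embeddings

# SGD on the squared error; a predicted rating is the dot product
# of the user and item embeddings, with light L2 regularization.
lr, reg = 0.05, 0.01
for _ in range(300):
    for u, i, r in ratings:
        err = r - U[u] @ V[i]
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * U[u] - reg * V[i])

# Reconstruction error on the observed ratings after training.
mse = float(np.mean([(r - U[u] @ V[i]) ** 2 for u, i, r in ratings]))
print(mse)
```

Such a model fits dense, typical rating patterns well, which is consistent with the finding above: users far from the high-density area of the rating distribution contribute few gradient updates and end up poorly served.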

Interactive article:

Conference: VISxAI - VIS 2022