ML + HCI: what is being investigated at the intersection of machine learning and human-computer interaction

Many people are convinced that the field of Human-Computer Interaction (HCI) is limited to designing websites and applications, and that a specialist's main task is to satisfy users by making the like button a few pixels bigger. In this post we want to show that this is not the case at all, and to describe what is happening in HCI at its interface with research in machine learning and artificial intelligence. Perhaps this will let readers look at the area from a new perspective.



For the overview, we took ten years of proceedings of CHI, the Conference on Human Factors in Computing Systems, and, using NLP and social network analysis, looked at the topics and areas at the intersection of the two disciplines.
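
As a rough sketch of the kind of analysis we mean, topic modeling over paper abstracts can be done in a few lines of Python; the corpus file and column name below are hypothetical placeholders, not the actual data we used.

# A minimal sketch of topic extraction over conference abstracts.
# The CSV file and "abstract" column are assumed for illustration.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

papers = pd.read_csv("chi_papers_2010_2020.csv")  # hypothetical corpus
vectorizer = CountVectorizer(max_df=0.95, min_df=5, stop_words="english")
counts = vectorizer.fit_transform(papers["abstract"].fillna(""))

lda = LatentDirichletAllocation(n_components=20, random_state=0)
lda.fit(counts)

# Print the top words of each topic so the research areas can be labeled by hand.
terms = vectorizer.get_feature_names_out()
for idx, topic in enumerate(lda.components_):
    top = [terms[i] for i in topic.argsort()[-10:][::-1]]
    print(f"Topic {idx}: {', '.join(top)}")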





 



In Russia the focus is especially strong on applied UX design problems. Many of the events that helped HCI grow abroad did not happen in our country: iSchools did not appear, many specialists who worked on related aspects of engineering psychology left academia, and so on. As a result, the profession re-emerged from applied problems and applied research. One consequence of this is visible even now: the extremely low representation of Russian HCI work at the key conferences.



Outside Russia, however, HCI has developed in very different ways, focusing on a variety of topics and areas. On the master's program "Information Systems and Human-Computer Interaction" at HSE St. Petersburg we discuss, among other things, what belongs to the field of human-computer interaction, together with students, colleagues, graduates of similar programs at European universities, and the partners who help develop the program. These discussions show how heterogeneous the field is: every specialist has their own, incomplete picture of it.



From time to time we hear questions about how this area is related to machine learning and data analysis, and whether it is connected with them at all. To answer them, we turned to recent research presented at the CHI conference.



First of all, we will describe what is happening in areas such as XAI and iML (eXplainable Artificial Intelligence and Interpretable Machine Learning) from the side of interfaces and users, and how HCI studies the cognitive aspects of data scientists' work, giving examples of interesting work from recent years in each area.



XAI and iML



Machine learning methods are developing intensively and, more importantly for the area under discussion, are being actively deployed in automated decision making. Researchers are therefore increasingly asking: how do users who are not machine learning specialists interact with systems that rely on such algorithms? One of the key questions of this interaction is how to get users to trust the decisions made by the models. As a result, the topics of Interpretable Machine Learning (iML) and eXplainable Artificial Intelligence (XAI) become hotter every year.



While conferences such as NeurIPS, ICML, IJCAI, and KDD discuss the algorithms and techniques of iML and XAI, CHI focuses on several topics related to the design and user experience of such systems. At CHI 2020, for example, several sessions were devoted to the topic, including "AI/ML & seeing through the black box" and "Coping with AI: not agAIn!". But even before separate sessions appeared, there were many such papers. We have identified four directions among them.



Designing interpretation systems for applied problems



The first direction is the design of systems based on interpretability algorithms for various applied problems: medical, social, and so on. Such work appears in very different domains. For example, the CHI 2020 paper CheXplain: Enabling Physicians to Explore and Understand Data-Driven, AI-Enabled Medical Imaging Analysis describes a system that helps physicians examine and explain chest X-ray results. It offers additional textual and visual explanations, as well as images with the same and the opposite result (supporting and contradicting examples). If the system predicts that a disease is visible on the X-ray, it shows two examples. The first, supporting example is an image of the lungs of another patient with the same confirmed disease. The second, contradicting example is an image with no disease, that is, the lungs of a healthy person. The main idea is to reduce obvious errors and cut the number of external consultations in simple cases, so that a diagnosis can be made faster.





CheXpert: automated region selection + examples (unlikely vs definitely) 
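
How such supporting and contradicting examples might be retrieved can be sketched in a few lines; this is our own illustration of the idea (nearest neighbors in an embedding space), not the CheXplain implementation, and the variable names are assumed.

# Illustration only: nearest supporting / contradicting examples by embedding distance.
# `embeddings` and `labels` for the reference images are assumed to already exist.
import numpy as np

def supporting_and_contradicting(query_emb, embeddings, labels, predicted_label):
    # Returns the index of the closest image with the same label (supporting)
    # and the closest image with a different label (contradicting).
    dists = np.linalg.norm(embeddings - query_emb, axis=1)
    same = np.where(labels == predicted_label)[0]
    other = np.where(labels != predicted_label)[0]
    supporting = same[np.argmin(dists[same])]
    contradicting = other[np.argmin(dists[other])]
    return supporting, contradicting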





Developing systems for exploring machine learning models



The second direction is the development of systems that help interactively compare or combine several methods and algorithms. For example, Silva: Interactively Assessing Machine Learning Fairness Using Causality, presented at CHI 2020, is a system that builds several machine learning models on the user's data and supports their subsequent analysis. The analysis includes building a causal graph between variables and computing a number of metrics that assess not only the accuracy but also the fairness of the models (Statistical Parity Difference, Equal Opportunity Difference, Average Odds Difference, Disparate Impact, Theil Index), which helps find bias in predictions.





Silva: graph of relationships between variables + charts for comparing fairness metrics + color highlighting of influential variables in each group
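
To make the listed metrics more concrete, here is a hedged sketch of two of them, Statistical Parity Difference and Disparate Impact, computed for binary predictions and a binary protected attribute; the toy data and variable names are ours.

# Sketch: two of the fairness metrics mentioned above, for binary predictions
# and a binary protected attribute (0 = unprivileged group, 1 = privileged group).
import numpy as np

def statistical_parity_difference(y_pred, protected):
    # P(y_hat = 1 | unprivileged) - P(y_hat = 1 | privileged); 0 means parity.
    return y_pred[protected == 0].mean() - y_pred[protected == 1].mean()

def disparate_impact(y_pred, protected):
    # Ratio of positive prediction rates between groups; values near 1 suggest fairness.
    return y_pred[protected == 0].mean() / y_pred[protected == 1].mean()

y_pred = np.array([1, 0, 1, 1, 0, 1])
protected = np.array([0, 0, 1, 1, 1, 0])
print(statistical_parity_difference(y_pred, protected))
print(disparate_impact(y_pred, protected))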



General issues of model interpretability



The third direction is the discussion of approaches to the problem of model interpretability in general. Most often these are surveys, critiques of existing approaches, and open questions: for example, what exactly is meant by "interpretability". Worth noting here is the CHI 2018 survey Trends and Trajectories for Explainable, Accountable and Intelligible Systems: An HCI Research Agenda, in which the authors reviewed 289 core papers on explanations in artificial intelligence and 12,412 publications citing them. Using network analysis and topic modeling, they identified four key research areas: 1) Intelligent and Ambient (I&A) Systems, 2) Explainable AI: Fair, Accountable, and Transparent (FAT) algorithms and Interpretable Machine Learning (iML), 3) Theories of Explanations: Causality & Cognitive Psychology, 4) Interactivity and Learnability. In addition, the authors described the main research trends: interactive learning and interaction with the system.



User research 



Finally, the fourth direction is user studies of the algorithms and systems that interpret machine learning models. In other words, these are studies of whether the new systems actually become clearer and more transparent in practice, what difficulties users face when working with interpretive rather than original models, how to determine whether a system is being used as intended (or whether a new, possibly incorrect, use has been found for it), what users' needs are, and whether developers are offering them what they really need.



There are many interpretation tools and algorithms, so the question arises: how do you decide which algorithm to choose? Questioning the AI: Informing Design Practices for Explainable AI User Experiences discusses the motivations for using explanation algorithms and identifies problems that, despite the variety of methods, have not yet been sufficiently solved. The authors come to an unexpected conclusion: most existing methods are built to answer the question "why" ("why did I get this result"), while users also need an answer to the question "why not" ("why not a different one"), and sometimes to "what should I do to change the result".
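
As an illustration of a "what should I change" answer, here is a simplified sketch of the counterfactual-explanation idea (our own toy version, not the method from the paper): greedily perturb the input features until the model's prediction flips.

# Simplified sketch of a counterfactual explanation: greedily perturb one numeric
# feature at a time until the prediction flips, and report the changed input.
import numpy as np

def greedy_counterfactual(model, x, step=0.1, max_iters=100):
    # Returns a modified copy of x that the model classifies differently, or None.
    original_class = model.predict(x.reshape(1, -1))[0]
    candidate = x.astype(float).copy()
    for _ in range(max_iters):
        for i in range(len(candidate)):
            for direction in (step, -step):
                trial = candidate.copy()
                trial[i] += direction
                if model.predict(trial.reshape(1, -1))[0] != original_class:
                    return trial
        # No single-feature change worked; take a small random step and try again.
        candidate += np.random.uniform(-step, step, size=candidate.shape)
    return None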



The paper also says that users need to understand the limits of applicability of the methods and the limitations they have, and that this needs to be explicitly built into the proposed tools. The problem is shown even more clearly in the article Interpreting Interpretability: Understanding Data Scientists' Use of Interpretability Tools for Machine Learning. The authors ran a small experiment with machine learning specialists: they showed them the output of several popular tools for interpreting machine learning models and asked questions related to making decisions based on that output. It turned out that even experts trust such models too much and do not treat the results critically. Like any tool, explanatory models can be misused. When developing such tools, it is important to take this into account, drawing on the accumulated knowledge (or specialists) in human-computer interaction to account for the characteristics and needs of potential users.



Data Science, Notebooks, Visualization 



Another interesting HCI area is the analysis of the cognitive aspects of working with data. Recently, the sciences have been asking how the researcher's "degrees of freedom" (the specifics of data collection, experimental design, and the choice of analysis methods) affect research results and their reproducibility. While much of the discussion and criticism concerns psychology and the social sciences, many of the issues apply to the reliability of conclusions in data analysts' work in general, as well as to the difficulty of communicating those conclusions to the consumers of the analysis.



This HCI area therefore studies new ways to visualize uncertainty in model predictions, systems for comparing analyses carried out in different ways, and how analysts work with tools such as Jupyter notebooks.



Visualizing uncertainty



Uncertainty visualization is one of the features that distinguish scientific graphics from presentation and business visualization. For quite a long time, minimalism and focus on the main trends were considered key principles of the latter. However, this makes users overconfident in a point estimate of a quantity or forecast, which can be critical, especially when forecasts with different degrees of uncertainty have to be compared. The paper Uncertainty Displays Using Quantile Dotplots or CDFs Improve Transit Decision-Making examines how visualizing prediction uncertainty with quantile dotplots and cumulative distribution functions helps users make more rational decisions, using the problem of estimating bus arrival times from mobile app data as an example. What is especially nice is that one of the authors maintains the ggdist package for R with various options for visualizing uncertainty.





Uncertainty visualization examples (https://mjskay.github.io/ggdist/)
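
For readers who do not use R, here is a rough matplotlib sketch of the quantile-dotplot idea (our own illustration, not ggdist): the predictive distribution is summarized by 20 quantiles, so each dot stands for a 5% chance. The distribution and its parameters below are assumed.

# Rough sketch of a quantile dotplot: represent a predictive distribution
# (here, bus arrival time in minutes) with 20 quantile dots, 5% chance each.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

arrival = stats.lognorm(s=0.4, scale=12)             # assumed predictive distribution
quantiles = arrival.ppf((np.arange(20) + 0.5) / 20)  # 20 evenly spaced quantiles

# Stack dots that fall into the same bin so the plot reads like a histogram of chances.
bins = np.round(quantiles).astype(int)
fig, ax = plt.subplots()
for value in np.unique(bins):
    count = np.sum(bins == value)
    ax.scatter([value] * count, np.arange(count) + 0.5, s=80)
ax.set_xlabel("Predicted bus arrival time, minutes")
ax.set_ylabel("Each dot = 5% chance")
plt.show()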



There is also often the problem of visualizing possible alternatives, for example for sequences of user actions in web or app analytics. The paper Visualizing Uncertainty and Alternatives in Event Sequence Predictions analyzes how a graphical representation of alternatives generated by a Time-Aware Recurrent Neural Network (TRNN) helps experts make decisions and trust them.



Model Comparison



No less important than visualizing uncertainty is another aspect of analysts' work: comparing how the researcher's often hidden choice among different modeling approaches, at every stage of the analysis, can lead to different analytical results. In psychology and the social sciences, pre-registration of study designs and a clear separation of exploratory and confirmatory studies are gaining popularity. In tasks where the work is more data-driven, an alternative can be tools that let you assess the hidden risks of an analysis by comparing models. The paper Increasing the Transparency of Research Papers with Explorable Multiverse Analyses suggests using interactive visualization of several analysis approaches directly in articles. In essence, the article turns into an interactive application where the reader can see how the results and conclusions change if a different approach is applied. This seems like a useful idea for practical analytics as well.
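
A minimal sketch of the multiverse idea on synthetic data (our own illustration, with assumed analytical choices): run the same analysis over a grid of reasonable choices and look at how the estimate varies, instead of reporting a single specification.

# Sketch of a "multiverse" analysis: the same question answered under
# every combination of reasonable analytical choices.
import itertools
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"x": rng.normal(size=500)})
df["y"] = 0.3 * df["x"] + rng.normal(scale=1.0, size=500)

outlier_rules = {"none": lambda d: d,
                 "3sd": lambda d: d[np.abs(d["y"] - d["y"].mean()) < 3 * d["y"].std()]}
transforms = {"raw": lambda s: s, "rank": lambda s: s.rank()}

results = []
for (o_name, o_rule), (t_name, t_fn) in itertools.product(outlier_rules.items(),
                                                          transforms.items()):
    d = o_rule(df)
    estimate = np.corrcoef(t_fn(d["x"]), t_fn(d["y"]))[0, 1]
    results.append({"outliers": o_name, "transform": t_name, "corr": estimate})

print(pd.DataFrame(results))  # one row per "universe" of analytical choices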



Working with tools for organizing and analyzing data



The last block of work concerns how analysts work with systems like Jupyter Notebooks, which have become a popular tool for organizing data analysis. The article Exploration and Explanation in Computational Notebooks analyzes the tension between the goals of exploration and explanation in interactive documents found on GitHub; in Managing Messes in Computational Notebooks, the authors analyze how notes, code snippets, and visualizations evolve in an analyst's iterative workflow and suggest possible additions to tools to support this process. Finally, at CHI 2020, the main problems analysts face at all stages of their work, from loading data to moving a model to production, along with ideas for improving the tools, are summarized in the article What's Wrong with Computational Notebooks? Pain Points, Needs, and Design Opportunities.





Transformation of the structure of reports based on execution logs (https://microsoft.github.io/gather/)



Summarizing



To conclude the discussion of "what HCI does" and "why an HCI specialist needs to know machine learning", I would like to restate the general conclusion that follows from the motivation and results of these studies. As soon as a person appears in the system, a number of additional questions arise immediately: how to simplify interaction with the system and avoid errors, how the user changes the system, and whether actual use differs from what was planned. As a result, we need people who understand how the process of designing systems with artificial intelligence works and who know how to take the human factor into account.



We teach all of this on the master's program "Information Systems and Human-Computer Interaction". If you are interested in HCI research, drop by (the admissions campaign has just begun). Or follow our blog: we will tell you more about the projects our students have been working on this year.


