20/10/2021 – AI3SD Autumn Seminar II: Explainable AI & ML

This event was the second of the AI3SD Autumn Seminar Series, which ran from October 2021 to December 2021. The seminar was hosted online as a Zoom webinar; its theme was Explainable AI & ML, and it consisted of two talks on the subject. Below are the videos of the talks, with speaker biographies. The full playlist of this seminar can be found here.

How can Explainable AI help scientific exploration? – Professor Carlos Zednik

My research centers on the explanation of natural and artificial cognitive systems. Many of my articles specify norms and best-practice methods for cognitive psychology, neuroscience, and explainable AI. Others develop philosophical concepts and arguments with which to better understand scientific and engineering practice. I am the PI of the DFG-funded project on Generalizability and Simplicity of Mechanistic Explanations in Neuroscience. In addition to my regular research and teaching, I do consulting work on the methodological, normative, and ethical constraints on artificial intelligence, my primary expertise being transparency in machine learning. In this context I have an ongoing relationship with the research team at neurocat GmbH, and have contributed to AI standardization efforts at the German Institute for Standardization (DIN). Before arriving in Eindhoven I was based at the Philosophy-Neuroscience-Cognition program at the University of Magdeburg, and prior to that, at the Institute of Cognitive Science at the University of Osnabrück. I received my PhD from the Indiana University Cognitive Science Program, after receiving a Master’s degree in Philosophy of Mind from the University of Warwick and a Bachelor’s degree in Computer Science and Philosophy from Cornell University. You can find out more about me on Google Scholar, PhilPapers, Publons, and Twitter.

Q & A

Q1: The abstract of your talk says explainable AI tools can be used to better understand what a “big data” model is a model of. In scientific research, why wouldn’t the scientist who made the model know what it is a model of? 

Good question. In a very broad sense, of course they know. So, if you’re putting patient data into a machine learning model, then the system is going to extract something from that patient data, and you could say, well, it’s a model of that patient data. So, at some very broad level of analysis, you can say of course you already know what the model is a model of. But what you don’t know, and this is what machine learning is really good at, is that the data might contain regularities that we did not know of in advance. So, we might not have known that there is a correlation between, for example, sleep apnoea and diabetes. Maybe the correlation between those things was unknown (although I think in this particular case it was), but what the system has learned to detect, or track, is exactly the fact that sleep apnoea in patients can be used to predict adult-onset diabetes. And so in that sense our model is a model of this relationship that we did not know in advance, even for the medical practitioners. So, in a broad sense, yes, we know what the model is a model of, but in a very specific sense we do not.

Q2: Transparency in this context is also about building/reinforcing trust. Doesn’t this require that anything brought out through transparency be understandable to the user?

So again, “user” here is ambiguous. Remember there is this discussion of agents and stakeholders in the ML ecosystem. In this case, I would think users are something like decision subjects: the people who are affected by the decisions, so these might be the people who are denied credit by a bank because of the use of a certain AI system. And if they then, assuming the GDPR has those teeth (which is a matter of debate), say to the bank, “Hey, I need to know why I was denied a loan”, then the explanation that is given by the bank should of course be understandable to the user. And that’s exactly why these explanations are agent-relative, and these explanations should cite the epistemically relevant elements that are appropriate to the agent or to the stakeholder. I would argue that for an end user, I suppose a layperson, the appropriate EREs are precisely features of the environment that are sensible or meaningful or easy to interpret, such as income level or, more problematically, race or gender or home address. So, of course, depending on who is requesting the explanation, the explanation should be understandable to that person. An interesting sidebar here is that these explanations now cite features that are not actually in the black box at all, so an “explanation” of a “black box system” in this case is not opening the black box; it is citing features of the environment that are being tracked.

Q3: My question is regarding robot emotional intelligence: not whether machines have any emotions, but whether machines can be intelligent without any emotions?

First of all, I don’t really know either what intelligence is or what emotions are. I like to think in terms of systems that are able to behave in flexible ways and adapt to their environments, and at least for human beings, and presumably animals, emotions are one way of doing that. Emotions are a mechanism for adapting to certain situations: being in a certain emotional state might be a way to protect us in a vulnerable situation, or to run away, or to be angry and feel strong, and so on. So, these are a kind of emotional response for dealing with unpredictable environments, and insofar as we want to develop systems that can rival our levels of adaptivity and flexibility, we might need to implement similar mechanisms. Whether we want to call them emotions or not, I don’t know. Whether those systems will then have this feeling of emotion, this kind of conscious aspect of the emotion, I don’t know. And I’ll be straightforward, I’m the wrong person to ask there. I’m usually interested in the more measurable aspects of behaviour and cognition rather than the ones that we cannot measure.

Q4: If weak emergence of high-level features is possible, would this undermine at least some of those XAI strategies – presumably, we would no longer be identifying the way lower- and higher-level features relate? 

So, not all XAI methods aim to relate the low- and high-level features; some XAI methods just aim to characterise those high-level features. So, if we can characterise the representational structures in a system, we might not need to know how those representational structures are implemented in particular parameters or variables. Of course, that’s a tough thing to do, but that’s one way to answer the question. Another way to answer the question is to try to characterise the behaviour in a compact way. So, a method that I didn’t talk about, LIME, just tries to linearly approximate the system’s behaviour, and so we don’t need to look at the underlying structures, parameters, and variables of the network; we just need to look at the behaviour and approximate it. For certain stakeholders, in certain situations and contexts, that might be enough, but you’re right, the kind of complexity you mention of course makes the task difficult for other methods.
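
To give a concrete sense of what such a local linear approximation looks like, here is a minimal LIME-style sketch in Python. It is illustrative only: the toy data, the random-forest stand-in for the black box, the noise scale, and the proximity kernel are all assumptions, not details from the talk or the LIME library itself.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Toy data and a random forest standing in for the opaque model (assumptions).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def local_linear_explanation(x, n_samples=1000, kernel_width=1.0):
    """Fit a proximity-weighted linear surrogate around the instance x."""
    rng = np.random.default_rng()
    # Sample the neighbourhood of x with Gaussian perturbations.
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.shape[0]))
    # Ask the black box for the probability of class 1 on each perturbation.
    p = black_box.predict_proba(Z)[:, 1]
    # Weight each perturbation by its proximity to x (exponential kernel).
    w = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / kernel_width ** 2)
    surrogate = Ridge(alpha=1.0).fit(Z, p, sample_weight=w)
    return surrogate.coef_  # local feature importances

coefs = local_linear_explanation(X[0])
print(sorted(enumerate(coefs), key=lambda t: -abs(t[1]))[:3])  # top-3 local features
```

The coefficients of the weighted linear surrogate are then read as a compact description of how the black box behaves in the neighbourhood of that one instance, without inspecting its internal parameters.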

DOI Link


Explainable Machine Learning for Trustworthy AI – Dr Fosca Giannotti

Fosca Giannotti is a director of research in computer science at the Information Science and Technology Institute “A. Faedo” of the National Research Council, Pisa, Italy. Fosca Giannotti is a pioneering scientist in mobility data mining, social network analysis and privacy-preserving data mining. Fosca leads the Pisa KDD Lab – Knowledge Discovery and Data Mining Laboratory http://kdd.isti.cnr.it, a joint research initiative of the University of Pisa and ISTI-CNR, founded in 1994 as one of the earliest research labs centred on data mining. Fosca’s research focus is on social mining from big data: smart cities, human dynamics, social and economic networks, ethics and trust, and the diffusion of innovations. She has coordinated tens of European projects and industrial collaborations. Fosca is currently the coordinator of SoBigData, the European research infrastructure on Big Data Analytics and Social Mining http://www.sobigdata.eu, an ecosystem of ten cutting-edge European research centres providing an open platform for interdisciplinary data science and data-driven innovation. She is also the PI of the ERC Advanced Grant entitled XAI – Science and technology for the explanation of AI decision making, and a member of the steering board of the CINI-AIIS lab. On March 8, 2019 she was featured as one of 19 Inspiring Women in AI, Big Data, Data Science and Machine Learning by KDnuggets.com, the leading site on AI, Data Mining and Machine Learning https://www.kdnuggets.com/2019/03/women-ai-big-data-science-machine-learning.html.

Q&A

Q1: How do you contextualise explainable AI for the case of mobility data especially when it comes to transport and urban planning? How can we evaluate the trust of policy-makers towards AI and ML techniques which they use for decision making?

OK, so first, for mobility data, that’s a very interesting question. These data enable a variety of intelligent services aimed at supporting different decision makers: from the citizen who wants to know their personal best trip, to the urban planner who needs to take decisions on transportation policies. In both cases, it is possible to empower such users by returning knowledge that is often the result of a complex combination of data-driven and model-driven processes. To make such empowerment effective, explanation is a requirement, so there is a need to feed the visual analytics interfaces with explanations of the recommendations coming from deep models. A current effort in our lab is to extend our results on explainers for deep models for time series to mobility data as well: local explainers that provide explanations in the form of exemplars and counter-exemplars.
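
As a rough illustration of the exemplar/counter-exemplar idea (not the lab’s actual explainer), the sketch below takes a query instance, asks a black-box classifier how it labels it, and then returns the closest instance the black box labels the same way (an exemplar) and the closest one it labels differently (a counter-exemplar). The data and the random-forest black box are assumptions used only to make the example self-contained.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Toy stand-ins for a corpus of instances and a black-box classifier (assumptions).
X, y = make_classification(n_samples=300, n_features=10, random_state=1)
black_box = RandomForestClassifier(random_state=1).fit(X, y)
corpus, query = X[1:], X[0]

def exemplar_counter_exemplar(x, corpus):
    """Closest instance labelled like x (exemplar) and the closest one
    labelled differently (counter-exemplar), according to the black box."""
    pred = black_box.predict(x.reshape(1, -1))[0]
    labels = black_box.predict(corpus)
    dists = np.linalg.norm(corpus - x, axis=1)
    same, diff = labels == pred, labels != pred
    exemplar = corpus[same][np.argmin(dists[same])]
    counter = corpus[diff][np.argmin(dists[diff])]
    return exemplar, counter

exemplar, counter = exemplar_counter_exemplar(query, corpus)
```

For mobility or time-series data the distance would of course be a domain-appropriate one (e.g. between trajectories or sequences) rather than plain Euclidean distance on feature vectors.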

So, how can we evaluate the trust of policymakers towards the AI and machine learning techniques which they use for decision making? I think that we must change our attitude about what kind of validation and what kind of trials we must do before deploying any AI system. We must be capable of validating the AI system also with respect to the kind of decision process it supports; in particular, we also need to measure the impact that explanations are capable of achieving (doing trials with and without explanations). This requires new methodology, and it is an important line of research that needs to involve other disciplines such as psychology and sociology – there are theories there that we can try to put in place with our techniques.

Q2: If a linear model can explain a deep neural network, would it have been better and equivalent to use the linear model in place of the deep neural network?

This is a very smart question. If you have a linear model, you must stay with the linear model. I am very much convinced that if you can learn a transparent model from scratch, you should stay with that. Local explainers are a good solution when, globally, you are not capable of building a good surrogate (transparent) model. There is interesting research working towards the long-term goal of having good transparent models, possibly integrating symbolic and sub-symbolic reasoning, but it is still very far away. If the black box is very efficient because there are many features, so far it is very difficult to build a transparent model from scratch that is equivalent to the black box.
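
A minimal sketch of the check this answer implies, under assumed toy data and models: compare a transparent model trained from scratch against the black box, and measure how faithfully a global surrogate tree reproduces the black box’s predictions. The specific models, depths, and dataset are illustrative choices, not anything prescribed in the talk.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

black_box = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
transparent = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

print("black-box accuracy:  ", accuracy_score(y_te, black_box.predict(X_te)))
print("transparent accuracy:", accuracy_score(y_te, transparent.predict(X_te)))

# Global surrogate: a shallow tree trained to reproduce the black box's labels,
# with fidelity measured as agreement with the black box on held-out data.
surrogate = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, black_box.predict(X_tr))
fidelity = accuracy_score(black_box.predict(X_te), surrogate.predict(X_te))
print("surrogate fidelity to black box:", fidelity)
```

If the transparent model matches the black box’s accuracy, the speaker’s advice is to keep the transparent model; if neither it nor a global surrogate is faithful enough, local explainers become the more realistic option.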

Q3: Models like decision trees are unstable (the set of rules can change substantially upon minor changes in the inputs) and have issues working with correlated inputs (they select only one of them, ignoring the others). So one may expect that there will be many possible rule sets explaining the underlying black-box model with comparable accuracy. How do you solve these issues when using these models as surrogates to explain more complex models?

Stability, fidelity, and faithfulness are important properties for a local explainer. The way an explainer reconstructs the behaviour of the black box includes some random steps, which may cause dramatic differences in the explanations generated for multiple requests on the same or similar instances. To avoid this, the design and implementation of the explainer itself need to be very robust and stable with respect to this specific issue.
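
One way to quantify this issue, sketched below under assumed conventions (an explainer that returns per-feature weights, stability measured as Jaccard overlap of the top-k features across repeated runs), is to re-explain the same instance several times and check how much the selected features change. The function names and the choice of metric are illustrative, not a standard from the talk.

```python
import numpy as np
from itertools import combinations

def top_k_features(coefs, k=3):
    """Indices of the k features with the largest absolute weights."""
    return set(np.argsort(-np.abs(coefs))[:k])

def stability(explain_fn, x, runs=10, k=3):
    """Mean pairwise Jaccard similarity of the top-k feature sets across runs."""
    tops = [top_k_features(explain_fn(x), k) for _ in range(runs)]
    sims = [len(a & b) / len(a | b) for a, b in combinations(tops, 2)]
    return float(np.mean(sims))

# Example usage with any randomised local explainer that returns per-feature
# weights, e.g. the local_linear_explanation sketch shown earlier:
# print(stability(local_linear_explanation, X[0]))
```

A score near 1 means the explainer keeps pointing at the same features for the same instance; a low score is a sign of exactly the instability the answer warns about.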

Q4: As you have shown there are many different explanation models. How can we validate explanations from the models more rigorously and how do we know we can trust them with explaining new examples?

So, this is a very important line of research – validation. The validation part implies inventing, and I use that word deliberately, inventing new methods for validating. Of course, I don’t want to be too negative: there are methods to validate an explanation in terms of quality according to several metrics, and also in terms of its accuracy with respect to the black-box model, so in that sense the methods are quite well formulated. It is much more difficult to evaluate the quality of the overall decision.

DOI Link
