The current hype around Artificial Intelligence (AI) mostly refers to the success of machine learning and its sub-domain of deep learning. However, AI also covers other areas, such as Knowledge Representation and Reasoning or Distributed AI, i.e., areas that need to be combined to reach the level of intelligence initially envisioned in the 1950s. Explainable AI (XAI) is now seen as a core enabler for industry to apply AI in products at scale, particularly for industries operating with critical systems. This paper reviews XAI not only from a Machine Learning perspective, but also from the perspective of other AI research areas, such as AI Planning or Constraint Satisfaction and Search. We expose the XAI challenges of these AI fields, their existing approaches, their limitations, and opportunities for Knowledge Graphs and their underlying technologies.
Artificial Intelligence (AI) is a discipline aiming at building intelligent machines that mimic the “cognitive” functions humans associate with other human minds, such as “learning” and “problem solving” [67], and it addresses intelligence for systems from a large variety of facets. From Machine Learning (ML) to Knowledge Representation and Reasoning (KRR), Game Theory, Uncertainty in AI (UAI), Robotics, Multi-Agent Systems, Constraint Satisfaction and Search (CSS), Planning and Scheduling, Computer Vision and Natural Language Processing, all are foundational pillars of AI as we know it today. All these sub-fields of AI have matured, specialized, and sometimes converged, with the aim of reaching Artificial General Intelligence, i.e., the holy grail of AI.
Many research questions have cut across all sub-fields of AI, such as decidability and complexity from a theoretical perspective or scalability from a more applied dimension. However, one remains current, and is even gaining more traction than others in the new world of industrialized AI: explainability. Obtaining explainable AI systems consists in addressing the following question: “how to build intelligent systems able to expose explanations in a human-comprehensible way” for any of their decisions. We will use the well-adopted term XAI, standing for eXplainable AI, when referring to the explanation problem in AI. Answering this XAI question is far from trivial, and it has been studied for years in all subfields of AI, with no exception. The problem has been tackled under different names, concepts and definitions, with various requirements and objectives. For instance, interpretation and justification are terms coined in KRR, diagnosis in UAI, debugging in Robotics, constraint relaxation in CSS, feature importance in ML, and feature attribution for Neural Networks [73,79].
Despite a surge of innovation focusing on ML-based AI systems, the question of explainability has not been studied as deeply as in other AI subfields, such as KRR. However, answers to this question of explainability, and to questions related to responsibility, validity (e.g., robustness), privacy preservation and, more broadly, trust in AI systems (Fig. 1), will be intrinsically connected to the adoption of AI in industry at scale, particularly in industries operating with critical systems. Indeed explanation, which could be used for debugging intelligent systems or for deciding to follow a recommendation in real time, will increase acceptance and user trust.
On the combination of valid, responsible, privacy-preserving and explainable AI towards trustable AI.
Unsurprisingly, the exact same research community from which the most successful ML-based AI systems [35,74] emerged is now trying to fill the gap between black-box ML systems [46] and more white-box ML systems. Some approaches are more successful than others, but the AI community is still far from having self-explainable AI systems which automatically adapt to any (i) data, (ii) ML algorithm, (iii) model, (iv) user, (v) application or (vi) context. Even more surprisingly, only works in KRR and its Web-related subfields, i.e., the Semantic Web [11], Linked Data [12], and more recently Knowledge Graphs [13], have engaged in the endeavour of explaining the broader family of ML-based systems. However, KRR and the Semantic Web together with Knowledge Graphs, aiming at representing and reasoning over structured information [25], should be designed and armed to move XAI closer to human comprehension. In the following we will refer to as Knowledge Graphs any graph-structured knowledge bases that store factual information in the form of relationships between entities [58], e.g., YAGO [78], DBpedia [7], NELL [16], Freebase [13], and the Google Knowledge Graph [77].
This paper reviews XAI in the various fields of AI: for each field we describe its main research question, its XAI challenge, existing approaches, their limitations, and opportunities for Knowledge Graphs and their underlying technologies.
XAI challenges in major AI fields (DAI: distributed AI, UAI: uncertainty in AI, KRR: knowledge representation and reasoning, NLP: natural language processing).
Knowledge graphs for XAI methods
This section highlights the main research question in major AI fields and their associated XAI challenge (Fig. 2), together with existing approaches, their limitations and opportunities for Semantic Web and Knowledge Graph technologies. AI areas are broken down following the AAAI taxonomy for research paper submission [75]. Although such a taxonomy has some limitations, e.g., questionable boundaries and the natural intersection of AI domains, it at least benefits from a well-accepted list of AI fields, which are well represented in major generalist AI conferences such as IJCAI [50] and ECAI [42].
Machine learning (except neural networks)
∙ Research Question: ML algorithms [68] aim at elaborating a mathematical model based on sample data, known as “training data”, in order to make predictions or decisions on unseen data, known as “test data”, without being explicitly programmed to perform the task. Five main learning tasks are studied: (i) supervised learning if the data contains both input and labelled data, (ii) unsupervised learning to derive some structure in the data if labels are not exposed, (iii) semi-supervised learning if the labelled data is small compared to the unlabelled data, (iv) distant learning [34], which exploits relational data on unlabelled data from existing knowledge bases, and (v) reinforcement learning if further information could be captured through interaction with the environment.
∙ XAI Challenge: All ML tasks expose mathematical models through an appropriate, but somewhat abstract, representation of the data. XAI in ML [30] is about the explanation of (i) models, known as global explanation, and (ii) individual predictions, known as local explanation.
∙ Approaches: Some models are naturally designed to make their rationale explicit, e.g., linear regression, decision trees, generalized linear (or additive) models, and naive Bayes models. For more complex models, some of their representative elements, such as feature importance, partial dependence plots or individual conditional expectation, can be used for capturing a high-level representation of the ML model for global explanation. State-of-the-art approaches [52,64] go further by revisiting feature importance for local explanation.
∙ Limitations: Most approaches limit explanation to features involved in the data and model, or at best to examples, prototypes [44] or counterfactuals [55]. Explanation should go beyond correlation (which is what feature importance is about) and numerical similarity (which is what local explanation is about).
∙ Opportunity: Knowledge Graphs encode context, expose connections and relations, and support inference and causation natively. Existing XAI approaches in ML consider a flat representation of the data, and context is out of the loop of the explanation process. Knowledge Graphs could be used for encoding better representations of the data, structuring an ML model in a more interpretable way, or adopting semantic similarity for local explanation. For instance we could envision linking Knowledge Graph extracts to the input data of a Machine Learning task to solve some distant learning tasks [34]. In addition we could envision approaches relying on Knowledge Graphs to compact large trees in decision trees or even random forests. For instance combinations of nodes could be captured as a unique (probabilistic) concept or property in Knowledge Graphs. Machine Learning and Knowledge Graphs have great potential to be combined, and to benefit from each other’s strengths [26].
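To make the notion of feature importance concrete, the following minimal Python sketch implements permutation importance on a toy regression problem. The data, the stand-in "trained" model and all values are illustrative, not tied to any specific library or system discussed above.

```python
import random

# Toy regression data: the target depends only on the first feature;
# the second feature is pure noise (all values are illustrative).
random.seed(0)
X = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(200)]
y = [2.0 * x0 for x0, _ in X]

def model(row):
    # Stand-in for a trained model: here it recovers the true function.
    return 2.0 * row[0]

def mse(X, y):
    return sum((model(r) - t) ** 2 for r, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature, trials=10):
    """Average increase in error when one feature's column is shuffled."""
    baseline = mse(X, y)
    increase = 0.0
    for _ in range(trials):
        column = [row[feature] for row in X]
        random.shuffle(column)
        Xp = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, column)]
        increase += mse(Xp, y) - baseline
    return increase / trials

imp0 = permutation_importance(X, y, 0)
imp1 = permutation_importance(X, y, 1)
```

Shuffling the informative feature degrades the model's error, while shuffling the noise feature changes nothing; this illustrates why such scores capture correlation with the model's behaviour rather than any semantic relation.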
Artificial (deep) neural networks
∙ Research Question: Similarly to other ML approaches, Artificial Neural Networks (ANNs) aim at learning representations. Their main differentiator from other approaches is their scalability and performance with a high number of features and instances, which makes them a better fit for images and text.
∙ XAI Challenge: Both local and global explanations are a strong focus of the ANN community.
∙ Approaches: Contrary to other ML approaches, there is no straightforward way to explain ANN models or predictions. Existing techniques either encode feature importance through attribution [73,79] or attention mechanisms [62], or obtain a more interpretable approximation through surrogate models [24], such as decision trees.
On the role of knowledge graphs for explainable artificial (deep) neural networks. (What is the causal relationship between the input/output/training data?) – Extension of Fig. 8 in [83] and https://fortune.com/longform/ai-artificial-intelligence-deep-machine-learning/.
∙ Limitations: Explanations are artificially built, for instance by forcing the network to focus on some group of features or, at best, on correlations. In addition they do not represent any logic of the learning task, which makes explanation a very difficult task to achieve. The latter is due to the foundational theory of ANNs, which consists in deriving a mathematical model through local optimizations.
∙ Opportunity: Novel ANN architectures need to be designed to natively encode explanation. Some recent approaches which aim at better capturing hierarchical relationships in models [36] or causality mechanisms [10] are promising. However, they could be polished further by (i) adding logic representation layers in ANNs, such as [38], using network dissection approaches [8], and (ii) encoding the semantics of inputs, outputs and their properties, cf. Fig. 3. Knowledge Graphs could play a central role in such a new design, particularly as novel architectures should embed causation and feature reasoning. This is the case of [53], which introduced a layered graph model representation of (RDF-type) graphs in ANN architectures for reasoning purposes. The layer represents the semantics of predicates in Knowledge Graphs, and is captured as adjacency matrices. Other approaches from the neural-symbolic reasoning community [37] are worth investigating as they combine ANNs with probabilistic logic [54] or first-order fuzzy logic [27]. Knowledge graph embeddings [33,72] are also Machine Learning artifacts where explanations could be elaborated through their latent representations. Such a design could advance ANNs further by supporting integration, discovery, fragmentation, composition and even reasoning.
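As a toy illustration of the attribution techniques mentioned above, the sketch below computes gradient-times-input attributions for a tiny hand-coded two-layer network, estimating gradients by central finite differences rather than backpropagation. The network weights and inputs are made up for illustration.

```python
import math

# A tiny fixed two-layer network: 3 inputs -> 2 hidden units (tanh) -> 1 output.
# Weights are illustrative; note the third input is ignored (zero weights).
W1 = [[0.5, -0.2, 0.0], [0.1, 0.4, 0.0]]
b1 = [0.0, 0.1]
W2 = [1.0, -1.5]
b2 = 0.2

def forward(x):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x)) + b)
         for row, b in zip(W1, b1)]
    return sum(w * hi for w, hi in zip(W2, h)) + b2

def attribution(x, eps=1e-5):
    """Gradient-times-input attribution, with gradients estimated by
    central finite differences (a simple stand-in for backprop)."""
    scores = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += eps
        xm[i] -= eps
        grad = (forward(xp) - forward(xm)) / (2 * eps)
        scores.append(grad * x[i])
    return scores

scores = attribution([1.0, 2.0, 3.0])
```

The ignored third input receives a zero attribution score, which illustrates both the appeal of attribution (irrelevant inputs are filtered out) and its limitation: the scores carry no semantics beyond local sensitivity.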
Computer vision
∙ Research Question: Computer Vision relies on ANN architectures due to the nature and size of its data. Tasks range from semantic segmentation, object detection and scene reconstruction to visual question answering.
∙ XAI Challenge: The main XAI task in Computer Vision is the identification of pixels, or groups of pixels, responsible for triggering a shape detection, an uncertainty or an error. Explanation is often referred to as visual inspection due to the nature of the data processed.
∙ Approaches: Saliency maps [2] are a classic methodology in Computer Vision. They include many variants of gradient modification for capturing representative features. Network dissection [8] is another approach, segmenting ANNs to derive interpretable units and layers.
∙ Limitations: Although saliency maps expose interesting visualization artifacts, they do not capture any semantics. At best those artifacts capture a disentangled representation, which remains subject to human interpretation. Knowledge Graphs could expose the semantics of such a disentangled representation. However, integrating semantics into ANNs and into the hidden units of the feature space remains an open challenge.
∙ Opportunity: Adding semantics through context and Knowledge Graphs could help answer open questions, such as: What is a disentangled representation, and how can its factors be quantified and detected? Do interpretable hidden units reflect a special alignment of the feature space, or are interpretations a chimera? All are open questions discussed in [8], and not yet resolved. Other open questions are: What conditions in state-of-the-art training lead to representations with greater or lesser entanglement? What is the semantics of a group of hidden units in a neural network? Interesting avenues aim at combining detection with reasoning to improve, and potentially explain, semantic segmentation [3].
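The occlusion variant of the saliency idea can be sketched in a few lines: mask each "pixel" in turn and record how much a detector's score drops. Both the 4×4 image and the detector below are hypothetical toys, standing in for a real convolutional model.

```python
# Toy 4x4 "image": a bright 2x2 patch in the top-left corner (illustrative).
image = [[1.0 if r < 2 and c < 2 else 0.0 for c in range(4)] for r in range(4)]

def detector(img):
    """Hypothetical detector score: total brightness in the top-left 2x2 region."""
    return sum(img[r][c] for r in range(2) for c in range(2))

def occlusion_map(img):
    """Occlusion saliency: the score drop when each pixel is zeroed in turn."""
    base = detector(img)
    sal = [[0.0] * len(img[0]) for _ in img]
    for r in range(len(img)):
        for c in range(len(img[0])):
            saved, img[r][c] = img[r][c], 0.0   # occlude one pixel
            sal[r][c] = base - detector(img)
            img[r][c] = saved                    # restore it
    return sal

sal = occlusion_map(image)
```

The resulting map highlights exactly the bright patch, but, as noted in the limitations above, it assigns no meaning to that region; it only locates it.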
Constraint satisfaction and search
∙ Research Question: Constraint Satisfaction and Search aims at finding a solution to a set of constraints that impose conditions that the variables must satisfy. A solution is a set of values for the variables that satisfies all constraints. Constraints are defined on a finite domain.
∙ XAI Challenge: The main challenge is to identify which constraints to relax for conflict resolution. Explanations are usually a subset of variables which satisfies a set of constraints.
∙ Approaches: Constraint Satisfaction and Search problems on finite domains are typically solved using some form of search. Backtracking, constraint propagation and local search are examples of such approaches. Even though the problem is known to be NP-complete with respect to the domain size, research has identified a number of tractable sub-cases with promising approaches [40,59].
∙ Limitations: Even though optimal structures and search spaces have been largely introduced in the community, complexity remains one of the main limitations.
∙ Opportunity: It has been demonstrated that structure in the problem representation largely benefits search [49]. We could envision more knowledge-driven structures, inspired by Knowledge Graphs, which could dynamically adapt to variables, constraints and the search space. Knowledge Graphs could even drive search through semantic and logical relations among constraints, which could be modelled as entities in a graph. In such cases constraints would be augmented with distant data from Knowledge Graphs.
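Relaxation-based explanation, as described above, can be illustrated on a deliberately unsatisfiable toy CSP: three mutually adjacent regions and only two colours. The sketch below uses plain backtracking and explains the conflict by reporting which single constraints could be relaxed to restore satisfiability; the problem instance is hypothetical.

```python
# A tiny unsatisfiable CSP: three mutually adjacent regions, two colours.
variables = ["A", "B", "C"]
domain = ["red", "blue"]
constraints = [("A", "B"), ("B", "C"), ("A", "C")]  # "must differ" constraints

def consistent(assignment, constraints):
    return all(x not in assignment or y not in assignment
               or assignment[x] != assignment[y]
               for x, y in constraints)

def solve(constraints, assignment, remaining):
    """Plain backtracking search over the inequality constraints."""
    if not remaining:
        return dict(assignment)
    var, rest = remaining[0], remaining[1:]
    for value in domain:
        assignment[var] = value
        if consistent(assignment, constraints):
            result = solve(constraints, assignment, rest)
            if result is not None:
                return result
        del assignment[var]
    return None

def explain_by_relaxation(constraints):
    """If the CSP is unsatisfiable, return the constraints whose individual
    removal restores satisfiability -- a simple relaxation-style explanation."""
    if solve(constraints, {}, variables) is not None:
        return []
    return [c for c in constraints
            if solve([k for k in constraints if k != c], {}, variables) is not None]

relaxable = explain_by_relaxation(constraints)
```

Here every edge qualifies: relaxing any single inequality makes the problem solvable, which is exactly the kind of answer a constraint-relaxation explanation provides to a user.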
Game theory
∙ Research Question: Game Theory [70] is the study of mathematical models of strategic interaction between rational decision-makers. Examples of games include zero-sum games [56], in which one person’s gains result in losses for the other participants.
∙ XAI Challenge: Game Theory has been dealing with XAI from its inception, as one of its main challenges is to identify and understand the underlying mathematical model as well as its properties. Game theory is applied to a wide range of behavioural relations, and is now an umbrella term for the science of logical decision making in humans, animals and computers, in which explanation is the core question driving the modelling.
∙ Approaches: The Shapley value [69] is a solution concept in game theory which inspired recent research in Machine Learning to address the problem of explanation [52]. The Shapley value is characterized by a collection of desirable properties, and is used to capture the influence of a player in a game setting (or of a feature in a machine learning setting). Such properties characterize the explanation.
∙ Limitations: Similarly to the domain of Constraint Satisfaction and Search, complexity is a challenge for explainability in game theory. Only an approximate solution is feasible, usually identified through some randomization of coalitions over feature values.
∙ Opportunity: As recently explored, structured representations of models and their features [18] have shown better scalability, while not necessarily improving explainability. Knowledge Graphs could be considered to better structure models and organize features, thus reducing the search space and potentially improving the understanding and readability of explanations, particularly when embedded in a structured set of connected entities. Recent examples [19] have demonstrated that graph structures do reduce the complexity of search.
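The Shapley value discussed above can be computed exactly for small games by averaging marginal contributions over all player orderings. The sketch below does so for a hypothetical three-player cooperative game (the characteristic function is made up); the factorial growth in orderings is precisely the complexity limitation noted above.

```python
import math
from itertools import permutations

# A hypothetical 3-player cooperative game: a coalition has value 1
# whenever player 0 and at least one other player are present.
players = (0, 1, 2)

def v(coalition):
    s = set(coalition)
    return 1.0 if 0 in s and len(s) >= 2 else 0.0

def shapley(player):
    """Exact Shapley value: the player's marginal contribution v(S + {p}) - v(S),
    averaged over all orderings in which the grand coalition can form."""
    total = 0.0
    for order in permutations(players):
        idx = order.index(player)
        before = order[:idx]
        total += v(before + (player,)) - v(before)
    return total / math.factorial(len(players))

values = [shapley(p) for p in players]
```

Player 0 receives 2/3 and the interchangeable players 1 and 2 receive 1/6 each; the values sum to the grand-coalition value of 1 (the efficiency property), which is one of the desirable properties that make Shapley values attractive as explanations.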
Uncertainty in AI
∙ Research Question: The field of Uncertainty in AI is at the frontier of various AI fields, namely knowledge representation, learning and reasoning. Bayesian probability is one of its core foundations, and Probabilistic Graphical Models (PGMs) [47] are usually central for representing and reasoning with uncertainty as they encode probability distributions.
∙ XAI Challenge: Graphical models are often used to model multivariate data, since they can represent high-dimensional distributions compactly. Explanations focus on these compact distributions and their underlying data. Explanation is then naturally embedded through those relationships, usually through interdependencies and decomposition in the data.
∙ Approaches: Some approaches [9] formulate PGMs as weighted logical formulas [43] to tightly decouple the constraints and dependencies from the probabilistic parameters. Reasoning can then be performed on the logic representations. Other approaches analyze latent spaces and their direct connections with the underlying data [81]. The strength of existing approaches is the underlying reasoning capabilities that PGMs and other probabilistic and logic systems offer.
∙ Limitations: Even though PGMs are appropriate representations to connect interdependent data, the dependencies remain probabilistic. Therefore humans are required to remain in the loop to interpret any dependencies. Even when embedded in logical formulas, little is gained, as we remain within the framework of standard probability theory.
∙ Opportunity: Semantic representations and connections through Knowledge Graphs could be used to disambiguate latent variables and force them to represent interpretable content. This is particularly relevant as PGMs fit naturally in graph representations, and contextual information such as knowledge graphs could extend reasoning functionalities. Interesting avenues are Probabilistic Knowledge Graphs [57] or knowledge expansion over probabilistic knowledge bases [20].
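The decomposition that PGMs offer can be illustrated on a minimal two-node Bayesian network, Rain → WetGrass, where inference by enumeration yields both an answer and a traceable derivation through the factorized joint. All probabilities below are invented for illustration.

```python
# A minimal two-node Bayesian network, Rain -> WetGrass (illustrative numbers).
P_rain = {True: 0.2, False: 0.8}
P_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def joint(rain, wet):
    # Factorized joint distribution encoded by the graph structure:
    # P(Rain, WetGrass) = P(Rain) * P(WetGrass | Rain).
    return P_rain[rain] * P_wet_given_rain[rain][wet]

def posterior_rain(wet):
    """Inference by enumeration: P(Rain = true | WetGrass = wet)."""
    num = joint(True, wet)
    return num / (num + joint(False, wet))

p = posterior_rain(True)
```

The posterior P(Rain | WetGrass = true) = 0.18 / 0.34 ≈ 0.53; each factor in the computation is inspectable, but, as the limitations above stress, the resulting "explanation" is still a probabilistic dependency that a human must interpret.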
Robotics
∙ Research Question: Robotics is an interdisciplinary branch of engineering and AI science, which deals with the design, construction, operation, and use of robots, as well as computer systems for their control, sensory feedback, and information processing. The underlying technologies are used to develop machines that can replicate human actions. They usually combine and integrate many of the technologies in the AI field.
∙ XAI Challenge: XAI is required in Robotics mainly for debugging and for resolving discrepancies between a solution and an expected answer. Some of the XAI challenges are (1) the rationale of coordination in multi-robot systems and swarms, and (2) the fusion of explanations coming from many underlying AI systems, such as Planning and Scheduling, Computer Vision, or Knowledge Representation and Reasoning. These are unique challenges for robotics with many interesting opportunities, as explanation is multi-modal, could be complementary but also conflicting, is spatial and temporal, and is driven by goals but also by initial conditions.
∙ Approaches: Narration of autonomous robot experience [66], together with summarization approaches [15], has recently been introduced as a succinct way of presenting the decision process of robots. Various levels of granularity in the decision process are provided. [61] combines a robotics ontology with linguistic elements to expose the rationale of robots’ actions.
∙ Limitations: Although the latter models extract information from a large pool of data, such systems do not explain their actions or justify their decisions [71]. Explanation is usually too fine-grained to be properly integrated by humans. Seamless integration of multi-modal explanation is also not addressed in the literature.
∙ Opportunity: The level of abstraction in explanation, together with its multi-modal fusion, are clear opportunities for Knowledge Graphs. Semantics could strongly support exposing appropriate and personalized representations of explanations while fusing explanation content in a compact and comprehensible representation [60]. Knowledge Graphs have been designed to capture knowledge from heterogeneous domains, making them a great candidate to achieve explanation per se in robotics.
Distributed AI
∙ Research Question: Distributed AI is the field of AI dedicated to the development of distributed solutions for problems. It is related to Multi-Agent Systems but also to any representation, structure, system which could make AI scalable.
∙ XAI Challenge: The main XAI challenges focus on explaining and resolving agent conflicts, based on agents’ intentions and beliefs [80]. State-of-the-art approaches aim at identifying the best strategy, through explanation, to achieve a goal. More recent works focus on human comprehension of agent behaviour, its strategy, and its convergence in case of conflicting intentions and beliefs of agents [4,5].
∙ Approaches: Approaches such as [39] determine the motivation for a decision by recalling the situation in which the decision was made, and replaying the decision under variants of the original situation. In such scenarios they are able to discover what factors led to the decision, and what alternatives might have been chosen had the situation been slightly different. These approaches tend to be very close to counterfactual [14] and case-based reasoning [1].
∙ Limitations: Even though ontologies are a core representation layer for agents to communicate and negotiate, they are rarely used for explaining an agent’s behaviour, its strategy and its success. Lighter knowledge representations might be envisioned.
∙ Opportunity: The dynamics of agent interactions should be captured more formally, and embedded with broader common-sense knowledge to identify human-interpretable explanations. The formalization does not need to be complex. For instance some dedicated Knowledge Graphs could be used to contextualize the agents’ environment. Some recent works are going in this direction of formalizing agent interactions [21].
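The decision-replay idea described above can be sketched as follows: replay an agent's decision under single-factor variants of the original situation, and report only the factors whose change alone flips the outcome. The policy and the situation's factors are hypothetical.

```python
# A hypothetical agent policy: deliver a package only if the battery is
# charged and the route is clear; the weather plays no role.
def decide(situation):
    return situation["battery_ok"] and situation["route_clear"]

def influential_factors(situation):
    """Replay the decision under single-factor variants of the original
    situation (in the spirit of [39]): a factor is reported only if
    flipping it alone changes the outcome."""
    original = decide(situation)
    return [key for key in situation
            if decide({**situation, key: not situation[key]}) != original]

situation = {"battery_ok": True, "route_clear": True, "weather_sunny": True}
factors = influential_factors(situation)
```

The weather factor is ruled out as a cause, mirroring how such replays discover which factors led to the decision and which alternatives would have changed it.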
Automated planning and scheduling
∙ Research Question: Automated Planning and Scheduling [29] is a branch of Artificial Intelligence that is about the realization of strategies or action sequences, typically for execution by intelligent agents, autonomous robots and unmanned vehicles. Unlike classical control and classification problems, the solutions are complex and must be discovered and optimized in multi-dimensional space. It could be done in real-time, i.e., on-line, or at design-time, i.e., off-line. Solutions usually resort to iterative trial and error processes.
∙ XAI Challenge: The XAI challenges in AI planning [28] are as follows: explaining (i) causal relationships between actions, (ii) why some actions are chosen in particular situations, (iii) why some plans are better than others, (iv) why a plan could not be computed, and (v) why replanning might be required.
∙ Approaches: Past work on explanations primarily involved the AI system explaining the correctness of its plan and the rationale for its decision in terms of its own model [17].
∙ Limitations: Existing approaches fail to expose human-understandable explanations, as explanation is usually limited to the planner’s domain, e.g., in terms of actions and initial situations. This strongly limits comprehension to experts in the given tasks.
∙ Opportunity: Knowledge Graphs could be a way forward to better contextualize complex terms, and even to summarize complex actions in a more succinct and meaningful way.
Natural language processing
∙ Research Question: Natural Language Processing is concerned with the interactions between computers and human (natural) languages, in particular how to program computers to process and analyze large amounts of natural language data. Research questions include (visual [6], multi-turn [51]) question answering [48] and conversational agents, with broader questions related to Speech Recognition, Natural Language Understanding and Generation.
∙ XAI Challenge: Similarly to machine learning, identifying the importance of features or entities is critical, as it aims at identifying which part of the text carries the most relevant information. Other core XAI tasks include: explaining the rationale of question sequencing in a dialogue, debugging a plan-based dialogue system [45], or explaining what the utterances were intended to achieve [23].
∙ Approaches: The problem of identifying the most representative entities in a text classification task is addressed by [64], with many variants. Some works [22] extract plan-based models to understand intention and explain the rationale of the discourse.
∙ Limitations: On the one hand, ML-based approaches, which focus on important entities in the text, suffer from having statistics-based explanations only, i.e., mainly based on co-occurrence and correlation. Pioneering work [76], relying on tree-like structures in the form of dependency trees, has been a first step towards structuring text processing tasks. On the other hand, plan-based models have not been deeply explored, and many research questions related to their representation and to the rationale of question sequencing remain open.
∙ Opportunity: Semantic descriptions, exposing meaningful representations, have been demonstrated to have a positive impact on tasks such as relation extraction [32,41], event extraction [63] or text classification [82]. Similar representations, inspired by Knowledge Graphs, could provide the semantic layer missing from brute-force machine learning approaches on text aiming at exposing explanations [65]. They could also drive, or at least guide, the sequencing of questions by refining, abstracting or instantiating obscure terms in questions. Challenges and approaches from neural language models for the Semantic Web are also interesting avenues of exploration [31].
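The statistics-based flavour of entity importance criticized above can be made concrete with a leave-one-out sketch over a toy lexicon-based sentiment scorer. The lexicons and sentence are invented for illustration; a real system would use a learned classifier in place of `score`.

```python
# A toy lexicon-based sentiment scorer; lexicons are illustrative.
POSITIVE = {"great", "good", "excellent"}
NEGATIVE = {"bad", "poor", "terrible"}

def score(tokens):
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

def word_importance(tokens):
    """Leave-one-out importance: the score drop when a token is removed.
    A purely statistics-style explanation, with no semantics attached."""
    base = score(tokens)
    return {t: base - score([u for u in tokens if u != t]) for t in tokens}

tokens = "the service was great but the food was terrible".split()
importance = word_importance(tokens)
```

The scores single out "great" and "terrible" while assigning zero to everything else, which captures correlation with the output but, as argued above, none of the semantic relations a Knowledge Graph layer could supply.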
Conclusion
Despite a surge of innovation focusing on ML-based AI systems, industry is facing the dilemma of applying them in products at scale, particularly for industries operating with critical systems. Trust, and trust in AI, has been revealed as the term coining what industry needs in order to move to the next step. Trustable AI is about responsibility, validity, privacy-preserving modelling and also explainability. Explanation, which could be used for debugging intelligent systems or deciding to follow a recommendation in real time, will increase acceptance and user trust. Explanation in AI has different open questions, meanings, definitions and approaches, depending on which AI field the question touches. Although various solutions have been introduced, the question remains open in all areas of AI. We presented their challenges in more detail, some of their existing approaches, their limitations, and opportunities for Knowledge Graphs to bring explainable AI to the right level of semantics and interpretability. Indeed, significant progress in complex AI tasks such as explainable AI can only be achieved through combination with semantic layers, empowering the explanation of complex AI systems.
References
1.
A.Aamodt and E.Plaza, Case-based reasoning: Foundational issues, methodological variations, and system approaches, AI Communications 7(1) (1994), 39–59. doi:10.3233/AIC-1994-7104.
2.
J.Adebayo, J.Gilmer, M.Muelly, I.J.Goodfellow, M.Hardt and B.Kim, Sanity checks for saliency maps, in: Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, Montréal, Canada, 3–8 December 2018, 2018, pp. 9525–9536, http://papers.nips.cc/paper/8160-sanity-checks-for-saliency-maps.
3.
M.Alirezaie, M.Längkvist, M.Sioutis and A.Loutfi, Semantic referee: A neural-symbolic framework for enhancing geospatial semantic segmentation, CoRR, arXiv:1904.13196, 2019.
4.
D.Amir and O.Amir, HIGHLIGHTS: Summarizing agent behavior to people, in: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2018, Stockholm, Sweden, July 10–15, 2018, 2018, pp. 1168–1176, http://dl.acm.org/citation.cfm?id=3237869.
5.
O.Amir, F.Doshi-Velez and D.Sarne, Agent strategy summarization, in: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2018, Stockholm, Sweden, July 10–15, 2018, 2018, pp. 1203–1207, http://dl.acm.org/citation.cfm?id=3237877.
6.
S.Antol, A.Agrawal, J.Lu, M.Mitchell, D.Batra, C.L.Zitnick and D.Parikh, Vqa: Visual question answering, in: Proceedings of the IEEE International Conference on Computer Vision, 2015, pp. 2425–2433.
7.
S.Auer, C.Bizer, G.Kobilarov, J.Lehmann, R.Cyganiak and Z.G.Ives, DBpedia: A nucleus for a web of open data, in: The Semantic Web, 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11–15, 2007, 2007, pp. 722–735. doi:10.1007/978-3-540-76298-0_52.
8.
D.Bau, B.Zhou, A.Khosla, A.Oliva and A.Torralba, Network dissection: Quantifying interpretability of deep visual representations, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21–26, 2017, 2017, pp. 3319–3327. doi:10.1109/CVPR.2017.354.
9.
V.Belle, Logic meets probability: Towards explainable AI systems for uncertain worlds, in: IJCAI, 2017, pp. 5116–5120.
10.
Y.Bengio, T.Deleu, N.Rahaman, N.R.Ke, S.Lachapelle, O.Bilaniuk, A.Goyal and C.J.Pal, A meta-transfer objective for learning to disentangle causal mechanisms, CoRR, arXiv:1901.10912, 2019.
12.
C.Bizer, T.Heath and T.Berners-Lee, Linked data: The story so far, in: Semantic Services, Interoperability and Web Applications: Emerging Concepts, IGI Global, 2011, pp. 205–227. doi:10.4018/978-1-60960-593-3.ch008.
13.
K.Bollacker, C.Evans, P.Paritosh, T.Sturge and J.Taylor, Freebase: A collaboratively created graph database for structuring human knowledge, in: Proceedings of the 2008 ACM SIGMOD International Conference on Management of Data, ACM, 2008, pp. 1247–1250. doi:10.1145/1376616.1376746.
14.
L.Bottou, J.Peters, J.Quiñonero-Candela, D.X.Charles, D.M.Chickering, E.Portugaly, D.Ray, P.Simard and E.Snelson, Counterfactual reasoning and learning systems: The example of computational advertising, The Journal of Machine Learning Research 14(1) (2013), 3207–3260.
15.
D.J.Brooks, A.Shultz, M.Desai, P.Kovac and H.A.Yanco, Towards state summarization for autonomous robots, dialog with robots, in: 2010 AAAI Fall Symposium, Arlington, Virginia, USA, November 11–13, 2010, 2010, http://www.aaai.org/ocs/index.php/FSS/FSS10/paper/view/2223.
16.
A.Carlson, J.Betteridge, B.Kisiel, B.Settles, E.R.HruschkaJr. and T.M.Mitchell, Toward an architecture for never-ending language learning, in: Proceedings of the Twenty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2010, Atlanta, Georgia, USA, July 11–15, 2010, 2010, http://www.aaai.org/ocs/index.php/AAAI/AAAI10/paper/view/1879.
17.
T.Chakraborti, S.Sreedharan, Y.Zhang and S.Kambhampati, Plan explanations as model reconciliation: Moving beyond explanation as soliloquy, in: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19–25, 2017, 2017, pp. 156–163. doi:10.24963/ijcai.2017/23.
18.
J.Chen and M.I.Jordan, LS-tree: Model interpretation when the data are linguistic, CoRR, arXiv:1902.04187, 2019.
19.
J.Chen, L.Song, M.J.Wainwright and M.I.Jordan, L-Shapley and C-Shapley: Efficient model interpretation for structured data, CoRR, arXiv:1808.02610, 2018.
20.
Y.Chen and D.Z.Wang, Knowledge expansion over probabilistic knowledge bases, in: Proceedings of the 2014 ACM SIGMOD International Conference on Management of Data, ACM, 2014, pp. 649–660.
21.
P.Chocron and M.Schorlemmer, Inferring commitment semantics in multi-agent interactions, in: Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS 2018, Stockholm, Sweden, July 10–15, 2018, 2018, pp. 1150–1158, http://dl.acm.org/citation.cfm?id=3237867.
22.
P.R.Cohen, Back to the future for dialogue research: A position paper, CoRR, arXiv:1812.01144, 2018.
23.
P.R.Cohen and C.R.Perrault, Elements of a plan-based theory of speech acts, in: Communication in Multiagent Systems, Agent Communication Languages and Conversation Polocies, 2003, pp. 1–36. doi:10.1007/978-3-540-44972-0_1.
24.
M.Craven and J.W.Shavlik, Extracting tree-structured representations of trained networks, in: Advances in Neural Information Processing Systems, 1996, pp. 24–30.
25.
P.Cudre-Mauroux, Leveraging Knowledge Graphs for Big Data Integration, Semantic Web Journal11(1) (2020), 13–17.
26.
C.d’Amato, Machine learning for the semantic web: Lessons learnt and next research directions, Semantic Web Journal11(1) (2020), 195–203.
27.
I.Donadello, L.Serafini and A.S.d’Avila Garcez, Logic tensor networks for semantic image interpretation, in: Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence, IJCAI 2017, Melbourne, Australia, August 19–25, 2017, 2017, pp. 1596–1602. doi:10.24963/ijcai.2017/221.
28.
M.Fox, D.Long and D.Magazzeni, Explainable planning, CoRR, arXiv:1709.10256, 2017.
29.
M.Ghallab, D.S.Nau and P.Traverso, Automated Planning – Theory and Practice, Elsevier, 2004. ISBN 978-1-55860-856-6.
30.
R.Goebel, A.Chander, K.Holzinger, F.Lecue, Z.Akata, S.Stumpf, P.Kieseberg and A.Holzinger, Explainable AI: The new 42?, in: International Cross-Domain Conference for Machine Learning and Knowledge Extraction, Springer, 2018, pp. 295–303. doi:10.1007/978-3-319-99740-7_21.
31.
D.Gromann, Neural language models for the multilingual, transcultural, and multimodal semantic web, Semantic Web Journal11(1) (2020), 29–39.
32.
Z.GuoDong, S.Jian, Z.Jie and Z.Min, Exploring various knowledge in relation extraction, in: Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, Association for Computational Linguistics, 2005, pp. 427–434.
33.
W.L.Hamilton, P.Bajaj, M.Zitnik, D.Jurafsky and J.Leskovec, Embedding logical queries on knowledge graphs, in: Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, Montréal, Canada, 3–8 December 2018, 2018, pp. 2030–2041, http://papers.nips.cc/paper/7473-embedding-logical-queries-on-knowledge-graphs.
34.
X.Han and L.Sun, Distant supervision via prototype-based global representation learning, in: Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, California, USA, February 4–9, 2017, S.P.Singh and S.Markovitch, eds, AAAI Press, 2017, pp. 3443–3449, http://aaai.org/ocs/index.php/AAAI/AAAI17/paper/view/14472.
35.
R.High, The Era of Cognitive Systems: An Inside Look at IBM Watson and How It Works, IBM Corporation, Redbooks, 2012.
36.
G.E.Hinton, S.Sabour and N.Frosst, Matrix capsules with EM routing, in: 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30–May 3, 2018, Conference Track Proceedings, 2018, https://openreview.net/forum?id=HJWLfGWRb.
37.
P.Hitzler, F.Bianchi, M.Ebrahimi and M.K.Sarker, Neural-symbolic integration and the Semantic Web, Semantic Web Journal11(1) (2020), 3–11.
38.
A.Ignatiev, N.Narodytska and J.Marques-Silva, Abduction-based explanations for machine learning models, in: Proceedings of the Thirty-Third AAAI Conference on Artificial Intelligence, (AAAI-19), Honolulu, Hawaii, USA, 2019, 2019.
39.
W.L.Johnson, Agents that learn to explain themselves, in: AAAI, 1994, pp. 1257–1263.
40.
U.Junker, QUICKXPLAIN: Preferred explanations and relaxations for over-constrained problems, in: Proceedings of the Nineteenth National Conference on Artificial Intelligence, Sixteenth Conference on Innovative Applications of Artificial Intelligence, San Jose, California, USA, July 25–29, 2004, 2004, pp. 167–172, http://www.aaai.org/Library/AAAI/2004/aaai04-027.php.
41.
N.Kambhatla, Combining lexical, syntactic, and semantic features with maximum entropy models for extracting relations, in: Proceedings of the ACL 2004 on Interactive Poster and Demonstration Sessions, Association for Computational Linguistics, 2004, 22. doi:10.3115/1219044.1219066.
42.
G.A.Kaminka, M.Fox, P.Bouquet, E.Hüllermeier, V.Dignum, F.Dignum and F.van Harmelen (eds), ECAI 2016 – 22nd European Conference on Artificial Intelligence, 29 August–2 September 2016, The Hague, The Netherlands – Including Prestigious Applications of Artificial Intelligence (PAIS 2016), Frontiers in Artificial Intelligence and Applications, Vol. 285, IOS Press, 2016. ISBN 978-1-61499-671-2.
43.
K.Kersting and L.De Raedt, Bayesian logic programming: Theory and tool, in: Statistical Relational Learning, 2007, p. 291.
44.
B.Kim, O.Koyejo and R.Khanna, Examples are not enough, learn to criticize! Criticism for interpretability, in: Advances in Neural Information Processing Systems 29: Annual Conference on Neural Information Processing Systems 2016, Barcelona, Spain, December 5–10, 2016, 2016, pp. 2280–2288.
45.
H.Kitano and C.Van Ess-Dykema, Toward a plan-based understanding model for mixed-initiative dialogues, in: Proceedings of the 29th Annual Meeting on Association for Computational Linguistics, Association for Computational Linguistics, 1991, pp. 25–32. doi:10.3115/981344.981348.
46.
P.W.Koh and P.Liang, Understanding black-box predictions via influence functions, in: Proceedings of the 34th International Conference on Machine Learning – Volume 70, JMLR.org, 2017, pp. 1885–1894.
47.
D.Koller and N.Friedman, Probabilistic Graphical Models – Principles and Techniques, MIT Press, 2009, http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=11886. ISBN 978-0-262-01319-2.
48.
C.Kwok, O.Etzioni and D.S.Weld, Scaling question answering to the web, ACM Transactions on Information Systems (TOIS)19(3) (2001), 242–262. doi:10.1145/502115.502117.
49.
C.Labreuche and S.Fossier, Explaining multi-criteria decision aiding models with an extended Shapley value, in: Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, Stockholm, Sweden, July 13–19, 2018, J.Lang, ed., ijcai.org, 2018, pp. 331–339. ISBN 978-0-9992411-2-7. doi:10.24963/ijcai.2018/46.
50.
J.Lang (ed.), Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, IJCAI 2018, Stockholm, Sweden, July 13–19, 2018, ijcai.org, 2018, http://www.ijcai.org/proceedings/2018/. ISBN 978-0-9992411-2-7.
51.
R.Lowe, N.Pow, I.Serban and J.Pineau, The Ubuntu dialogue corpus: A large dataset for research in unstructured multi-turn dialogue systems, in: Proceedings of the SIGDIAL 2015 Conference, the 16th Annual Meeting of the Special Interest Group on Discourse and Dialogue, Prague, Czech Republic, 2–4 September 2015, 2015, pp. 285–294, http://aclweb.org/anthology/W/W15/W15-4640.pdf.
52.
S.M.Lundberg, G.G.Erion and S.Lee, Consistent individualized feature attribution for tree ensembles, CoRR, arXiv:1802.03888, 2018.
53.
B.Makni and J.Hendler, Deep learning for noise-tolerant RDFS reasoning, PhD thesis, Rensselaer Polytechnic Institute, 2018.
54.
R.Manhaeve, S.Dumancic, A.Kimmig, T.Demeester and L.D.Raedt, DeepProbLog: Neural probabilistic logic programming, in: Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, Montréal, Canada, 3–8 December 2018, 2018, pp. 3753–3763, http://papers.nips.cc/paper/7632-deepproblog-neural-probabilistic-logic-programming.
55.
B.D.Mittelstadt, C.Russell and S.Wachter, Explaining explanations in AI, in: Proceedings of the Conference on Fairness, Accountability, and Transparency, FAT* 2019, Atlanta, GA, USA, January 29–31, 2019, 2019, pp. 279–288. doi:10.1145/3287560.3287574.
56.
J.Nash, Non-cooperative games, Annals of Mathematics (1951), 286–295. doi:10.2307/1969529.
57.
M.Nickel, K.Murphy, V.Tresp and E.Gabrilovich, A review of relational machine learning for knowledge graphs, Proceedings of the IEEE104(1) (2015), 11–33. doi:10.1109/JPROC.2015.2483592.
58.
M.Nickel, K.Murphy, V.Tresp and E.Gabrilovich, A review of relational machine learning for knowledge graphs, Proceedings of the IEEE104(1) (2016), 11–33. doi:10.1109/JPROC.2015.2483592.
59.
B.O’Sullivan, A.Papadopoulos, B.Faltings and P.Pu, Representative explanations for over-constrained problems, in: Proceedings of the Twenty-Second AAAI Conference on Artificial Intelligence, Vancouver, British Columbia, Canada, July 22–26, 2007, 2007, pp. 323–328, http://www.aaai.org/Library/AAAI/2007/aaai07-050.php.
60.
S.Patki, A.F.Daniele, M.R.Walter and T.M.Howard, Inferring compact representations for efficient natural language understanding of robot instructions, CoRR, arXiv:1903.09243, 2019.
61.
M.Pomarlan, R.Porzel, J.Bateman and R.Malaka, From sensors to sense: Integrated heterogeneous ontologies for natural language generation, in: Proceedings of the Workshop on NLG for Human–Robot Interaction, Association for Computational Linguistics, Tilburg, The Netherlands, 2018, pp. 17–21, https://www.aclweb.org/anthology/W18-6904. doi:10.18653/v1/W18-6904.
62.
V.Ramanishka, A.Das, J.Zhang and K.Saenko, Top-down visual saliency guided by captions, in: 2017 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017, Honolulu, HI, USA, July 21–26, 2017, 2017, pp. 3135–3144. doi:10.1109/CVPR.2017.334.
63.
T.Rattenbury, N.Good and M.Naaman, Towards automatic extraction of event and place semantics from Flickr tags, in: Proceedings of the 30th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, ACM, 2007, pp. 103–110.
64.
M.T.Ribeiro, S.Singh and C.Guestrin, “Why should I trust you?”: Explaining the predictions of any classifier, in: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, August 13–17, 2016, 2016, pp. 1135–1144. doi:10.1145/2939672.2939778.
65.
M.T.Ribeiro, S.Singh and C.Guestrin, Anchors: High-precision model-agnostic explanations, in: Thirty-Second AAAI Conference on Artificial Intelligence, 2018.
66.
S.Rosenthal, S.P.Selvaraj and M.M.Veloso, Verbalization: Narration of autonomous robot experience, in: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI 2016, New York, NY, USA, 9–15 July, 2016, pp. 862–868, http://www.ijcai.org/Abstract/16/127.
67.
S.J.Russell and P.Norvig, Artificial Intelligence – a Modern Approach, 3rd internat. edn, Pearson Education, 2010, http://vig.pearsoned.com/store/product/1,1207,store-12521_isbn-0136042597,00.html. ISBN 978-0-13-207148-2.
68.
S.J.Russell and P.Norvig, Artificial Intelligence: A Modern Approach, Pearson Education Limited, Malaysia, 2016.
69.
L.S.Shapley, A value for n-person games, Contributions to the Theory of Games2(28) (1953), 307–317.
70.
L.S.Shapley and M.Shubik, The assignment game I: The core, International Journal of Game Theory1(1) (1971), 111–130. doi:10.1007/BF01753437.
71.
R.K.Sheh, “Why did you do that?” Explainable intelligent robots, in: The Workshops of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, California, USA, Saturday, February 4–9, 2017, 2017, http://aaai.org/ocs/index.php/WS/AAAIW17/paper/view/15162.
72.
J.Shi, H.Gao, G.Qi and Z.Zhou, Knowledge graph embedding with triple context, in: Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM 2017, Singapore, November 6–10, 2017, 2017, pp. 2299–2302. doi:10.1145/3132847.3133119.
73.
A.Shrikumar, P.Greenside and A.Kundaje, Learning important features through propagating activation differences, in: Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6–11 August 2017, 2017, pp. 3145–3153, http://proceedings.mlr.press/v70/shrikumar17a.html.
74.
D.Silver, J.Schrittwieser, K.Simonyan, I.Antonoglou, A.Huang, A.Guez, T.Hubert, L.Baker, M.Lai, A.Boltonet al., Mastering the game of go without human knowledge, Nature550(7676) (2017), 354. doi:10.1038/nature24270.
75.
S.P.Singh and S.Markovitch (eds), Proceedings of the Thirty-First AAAI Conference on Artificial Intelligence, San Francisco, California, USA, February 4–9, 2017, AAAI Press, 2017, http://www.aaai.org/Library/AAAI/aaai17contents.php.
76.
R.Socher, C.C.Lin, A.Y.Ng and C.D.Manning, Parsing natural scenes and natural language with recursive neural networks, in: Proceedings of the 28th International Conference on Machine Learning, ICML 2011, Bellevue, Washington, USA, June 28–July 2, 2011, 2011, pp. 129–136, https://icml.cc/2011/papers/125_icmlpaper.pdf.
77.
T.Steiner, R.Verborgh, R.Troncy, J.Gabarro and R.Van de Walle, Adding realtime coverage to the Google knowledge graph, in: 11th International Semantic Web Conference (ISWC 2012), Citeseer, 2012.
78.
F.M.Suchanek, G.Kasneci and G.Weikum, Yago: A core of semantic knowledge, in: Proceedings of the 16th International Conference on World Wide Web, WWW 2007, Banff, Alberta, Canada, May 8–12, 2007, 2007, pp. 697–706. doi:10.1145/1242572.1242667.
79.
M.Sundararajan, A.Taly and Q.Yan, Axiomatic attribution for deep networks, in: Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6–11 August 2017, 2017, pp. 3319–3328, http://proceedings.mlr.press/v70/sundararajan17a.html.
80.
K.P.Sycara, M.Paolucci, M.V.Velsen and J.A.Giampapa, The RETSINA MAS infrastructure, Autonomous Agents and Multi-Agent Systems7(1–2) (2003), 29–48. doi:10.1023/A:1024172719965.
81.
A.Vellido, J.D.Martín-Guerrero and P.J.G.Lisboa, Making machine learning models interpretable, in: 20th European Symposium on Artificial Neural Networks, ESANN 2012, Bruges, Belgium, April 25–27, 2012, 2012, https://www.elen.ucl.ac.be/Proceedings/esann/esannpdf/es2012-7.pdf.
82.
P.Wang and C.Domeniconi, Building semantic kernels for text classification using Wikipedia, in: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, ACM, 2008, pp. 713–721. doi:10.1145/1401890.1401976.
83.
S.Zeldam, Automated failure diagnosis in aviation maintenance using explainable artificial intelligence (XAI), Master’s thesis, University of Twente, 2018.