Abstract
There has been a renewed interest in symbolic AI in recent years. Symbolic AI is indeed one of the key enabling technologies for the development of neuro-symbolic AI systems, as it can mitigate the limited capabilities of black box deep learning models to perform reasoning and provide support for explanations. This paper discusses the different roles that explicit knowledge, in particular ontologies, can play in producing intelligible explanations in neuro-symbolic AI. We consider three main perspectives in which ontologies can contribute significantly, namely reference modelling, common-sense reasoning, and knowledge refinement and complexity management. We overview some of the existing approaches in the literature and position them according to these three perspectives. The paper concludes by discussing some open challenges related to the adoption of ontologies in explanations.
Introduction
The limited capability of deep learning systems to perform abstraction and reasoning, and to support explainability, has prompted a heated debate about the value of symbolic AI in contrast to neural computation [21,39]. Researchers have identified the need for the development of hybrid systems, that is, systems that integrate neural models with logic-based approaches, to obtain AI systems capable of transferring learning and of bridging the gap between lower-level information processing (e.g., for efficient data acquisition and pattern recognition) and higher-level abstract knowledge (e.g., for general reasoning and explainability).
Neuro-symbolic AI [21,39] seeks to offer a principled way of integrating learning and reasoning by aiming to establish correspondences between neural models and logical representations [21]. The development of hybrid systems is widely seen as one of the major challenges facing AI today [39]. Indeed, there is no consensus on how to achieve this, with proposed techniques in the literature ranging from inductive logic programming [74] and Logic Tensor Networks [4] to Markov logic networks [58] and logical neural networks [59], to name a few. What seems widely accepted is that knowledge representation—in its many incarnations—is a key asset to enact such hybrid systems.
The benefits of adopting explicit knowledge have also been recognized in Explainable AI (XAI) [1,15]. XAI research focuses on the development of methods and techniques that seek to explain how deep learning and machine learning models, which are deemed black boxes, arrive at their outputs.
Current approaches to XAI focus on the mechanistic aspects of explanations, that is, generating explanations that describe how a model produces its output, often without considering the background and needs of the explanation's recipient.
Explicit knowledge is thus of paramount importance for the development of hybrid systems and the provision of intelligible explanations. This position paper explores the role of explicit knowledge representation artifacts (i.e., symbolic structures), such as ontologies and knowledge graphs, in neuro-symbolic AI, particularly in supporting explainability and the generation of human-understandable explanations.
The rest of the paper is organized as follows. Firstly, we provide a concise introduction to ontologies and their typical conceptualisation and formalisation in computer science. Next, we present and elaborate on three perspectives that demonstrate how formal knowledge can contribute to the development of intelligible explanations. In support of these perspectives, we summarise existing works that align with each viewpoint. Throughout the paper we provide several examples that illustrate the main concepts. Finally, we outline several future challenges associated with ontologies and explanations in neuro-symbolic AI.
Ontologies and their role in explanations
Several recent surveys and position papers [1,10,15,50] emphasize the importance of designing explainability solutions that cater to diverse purposes and stakeholders, and highlight the limitations of existing approaches in supporting comprehensive human-centric Explainable AI. Current methods predominantly concentrate on specific types and formats of explanations, primarily focusing on the mechanistic aspects that explain how a model produces its output.
In addition, it is important to note that explanations rest on background knowledge, both about the decision being explained and about the intended recipient of the explanation. Explainability techniques must bridge the communication gap between the AI system and the users of the explanations, tailoring their outputs to different user groups. Consequently, if the explanations are not easily understandable, users may be compelled to seek additional knowledge to obtain reliable insights and avoid drawing false conclusions.
The use of explicit knowledge, such as ontologies, knowledge graphs, or other forms of structured formal knowledge, can potentially help to bridge these gaps [69]. There has been a renewed interest in incorporating explicit knowledge into AI in recent years. Ontologies and knowledge graphs have been successfully applied in various domains, including knowledge-aware news recommender systems [37], semantic data mining and knowledge discovery [44,60], as well as natural language understanding [66,73]. These applications highlight the value of leveraging explicit knowledge to enhance Machine Learning systems and address the challenges of explainability and user comprehension.
In the subsequent sections, we will commence by offering a concise introduction to ontologies, as they are commonly understood in AI, in particular, in areas such as knowledge representation and the Semantic Web. Subsequently, we will explore the potential role that ontologies can assume in explainability.
Ontologies
Within the realm of computer science, ontologies serve as a formal method for representing the structure of a specific domain. They capture the essential entities and relationships that arise from observing and understanding the domain, enabling its comprehensive description [27].
To illustrate the concept further, let us consider a simple conceptualisation of the domain of a university and its employees. In an ontology, the entities within this domain can be organised into concepts and relations using unary and non-unary (typically, binary) predicates. At the core of the ontology there is a hierarchy of concepts, known as a taxonomy. For instance, if our focus is on academic roles, we might have relevant concepts such as Person, Researcher, and Professor, where, for example, every Professor is also a Researcher.
From a formal representation perspective, an ontology can be defined as a collection of axioms formulated using a suitable logical language. These axioms serve to express the intended semantics of the ontology, providing a conceptual framework for understanding a specific domain. It is worth noting that axioms play a crucial role in constraining the interpretations of the language used to formalise the ontology. They define the intended models that correspond to the conceptualisation of the domain while excluding unintended interpretations. In this way, axioms help establish the boundaries and constraints of the ontology, ensuring its coherence and consistency. As an example, we can formulate simple axioms to define the properties of relations within our ontology. We can state, for instance, that the relation modelling collaboration is symmetric: if Bob collaborates with Mike, then Mike collaborates with Bob.
Description Logics (DLs) [3] are among the most well-known knowledge representation languages used to model ontologies in AI. They are of particular interest because they were created with a focus on tractable reasoning, and they provide the underpinning semantics of the W3C Web Ontology Language (OWL). Given a set of concept names, a set of role names, and some connectives that vary depending on the DL used, a DL ontology consists of two sets of axioms: the so-called TBox (terminological box) and the ABox (assertional box). In general, the TBox contains axioms describing relations between concepts, while the ABox contains axioms about individuals (instances). For example, the statement ‘every researcher is a person with a PhD title’ belongs to the TBox, while ‘Bob is a researcher’ belongs to the ABox. Figure 1 shows a DL specification that we will consider here as an ontology for our simple conceptualisation of the university domain.

An ontology excerpt for the university domain formalised in DL. ⊑ is the subsumption relation, ⊓ is conjunction, and ∃ is the existential restriction. ⊤ is the top concept in the ontology. The TBox axioms state that ‘every researcher is a person with a PhD title’, and ‘every professor is a researcher with a tenured position’. The ABox axioms state that Bob is a researcher, Mike is a professor, Bob and Mike collaborate, and Bob is supervised by Mike.
Ontologies played a crucial role as enabling technologies in the development of the Semantic Web [7]. The Semantic Web aims to annotate data on the web with semantic information, enabling computers to interpret and process data effectively. Although the Semantic Web did not fully realise all of its envisioned potential, ontologies have regained popularity in recent years, partly due to the re-emergence of knowledge graphs. Notably, the introduction of Google’s Knowledge Graph in 2012 contributed significantly to this resurgence. Knowledge graphs, powered by ontologies, have proven to be valuable tools for organising and structuring vast amounts of data, enabling efficient data retrieval, knowledge discovery, and semantic reasoning. Reasoning over ontologies and knowledge graphs can be performed by means of standard knowledge representation formalisms (e.g., RDF, RDF Schema, OWL) and query languages (e.g., SPARQL, Cypher, Gremlin), to name a few.
In recent years, a number of knowledge graphs have become available on the Web, offering valuable structured information. One prominent example is DBpedia [44], which constructs a knowledge graph by automatically extracting key-value pairs from Wikipedia infoboxes. These pairs are then mapped to the DBpedia ontology using crowdsourcing efforts. Another notable knowledge graph is ConceptNet [66], a freely available linguistic knowledge graph that integrates information from various sources such as WordNet, Wikipedia, DBpedia, and OpenCyc. These knowledge graphs provide valuable resources for semantic understanding, information retrieval, and knowledge discovery. For a more comprehensive description of knowledge graphs and related standards, we recommend referring to the works cited in [35,69].
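To make the triple-based view of knowledge graphs concrete, the sketch below represents a tiny graph as a set of (subject, predicate, object) triples and answers a SPARQL-like pattern query in plain Python. All data and the `query` helper are invented for illustration; real systems would use an RDF store and SPARQL.

```python
# A knowledge graph as a set of (subject, predicate, object) triples,
# queried with a toy pattern matcher in the spirit of SPARQL.
KG = {
    ("Bob", "type", "Researcher"),
    ("Mike", "type", "Professor"),
    ("Bob", "collaboratesWith", "Mike"),
    ("Bob", "supervisedBy", "Mike"),
}

def query(pattern):
    """Match a triple pattern; '?x'-style terms are variables.
    Returns one dict of variable bindings per matching triple."""
    results = []
    for triple in KG:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                if binding.get(term, value) != value:
                    break  # same variable bound to two different values
                binding[term] = value
            elif term != value:
                break  # constant term does not match
        else:
            results.append(binding)
    return results

# Who does Bob collaborate with?
print(query(("Bob", "collaboratesWith", "?x")))  # [{'?x': 'Mike'}]
```

The same pattern-matching idea underlies SPARQL basic graph patterns, though real query engines add joins, filters, and entailment regimes on top.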
The knowledge representation language used to formalise an ontology provides support for both standard and non-standard reasoning tasks. Standard reasoning tasks involve checking various properties, such as concept subsumption and satisfiability. Concept subsumption determines whether the description of one concept, for example, Professor, is more specific than that of another, such as Researcher, so that every instance of the former is also an instance of the latter.
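These standard reasoning tasks can be sketched over the Figure 1 ontology encoded as plain Python data. This is our own illustrative encoding, not a DL reasoner: the TBox is reduced to atomic subsumption edges and the ABox to concept assertions.

```python
# Minimal sketch of the Figure 1 ontology: TBox as subsumption edges,
# ABox as concept assertions. Encoding and names are illustrative.

# TBox: Professor ⊑ Researcher, Researcher ⊑ Person
TBOX = {"Professor": {"Researcher"}, "Researcher": {"Person"}}

# ABox: concept assertions for individuals
ABOX = {"bob": {"Researcher"}, "mike": {"Professor"}}

def superconcepts(concept):
    """All concepts subsuming `concept` (reflexive-transitive closure)."""
    seen, todo = set(), [concept]
    while todo:
        c = todo.pop()
        if c not in seen:
            seen.add(c)
            todo.extend(TBOX.get(c, ()))
    return seen

def subsumes(general, specific):
    """Concept subsumption: is every `specific` also a `general`?"""
    return general in superconcepts(specific)

def instance_of(individual, concept):
    """Instance checking: does the ABox entail `individual : concept`?"""
    return any(subsumes(concept, c) for c in ABOX.get(individual, ()))

print(subsumes("Researcher", "Professor"))  # True
print(instance_of("mike", "Person"))        # True: Professor ⊑ Researcher ⊑ Person
```

A real DL reasoner additionally handles complex concept descriptions (conjunctions, existential restrictions) and detects unsatisfiable concepts, which this toy closure does not attempt.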
Given this background, we believe that ontologies can play a crucial role in the realm of explainability. In particular, their adoption can significantly contribute to the development of explanations in neuro-symbolic AI from three perspectives: as reference models for specifying explainable systems, as enablers of common-sense reasoning, and as a basis for knowledge refinement and complexity management.
By leveraging ontologies in these ways, the development of explanations can be enhanced, ensuring a solid conceptual basis, facilitating understanding through common-sense reasoning, and enabling flexible knowledge abstraction and refinement.
In the following sections, we overview some of the existing approaches in the literature, and we position them according to these three proposed perspectives. The rationale behind the selection of these approaches lies in their connection with ontologies and their usage w.r.t. the above perspectives. This article is not intended to be a comprehensive review of the state-of-the-art of ontologies in neuro-symbolic AI (see [33] for some advances regarding ontologies and neuro-symbolic AI).
Historically, ontologies served as explicit models for the conceptual development of information systems [20]. Task ontologies were created to represent generic problem-solving methods and facilitate the reuse of task-dependent knowledge across diverse domains and applications [26]. In this context, the Unified Problem-Solving Method description Language (UPML) served as a relevant resource for representing tasks and problem-solving methods as reusable, domain-independent components [20]. Another example is the Web Service Modeling Ontology (WSMO), which focused on describing different aspects associated with Semantic Web Services [19]. These ontology resources played crucial roles in facilitating the representation and standardisation of knowledge in their respective classes of applications.
In the context of explainability, ontologies can play a crucial role as a common reference model for specifying explainable systems, i.e., as a model that addresses the area of explainable systems itself as a domain whose shared conceptualisation needs to be articulated and explicitly represented. Several studies have explored this avenue (e.g., [5,11,53,68,72]) highlighting the necessity of a shared interchange model for addressing the factors involved in explainable systems. To achieve this, they proposed taxonomies and ontologies to model key notions of the XAI domain including: explanations, users, the mapping of end-user requirements to specific explanation types, as well as to the AI capabilities of systems.
Nunes and Jannach [53] conducted a systematic literature review that examined the characteristics of explanations provided to users. Their work focused on aspects such as content, presentation, generation, and evaluation of explanations. They proposed a taxonomy that encompasses various explanation goals and different forms of knowledge that comprise the explanation components.
Arrieta et al. [5] presented a taxonomy that establishes a mapping between deep learning models and the explanations they generate. Furthermore, they identified the specific features within these models that are responsible for generating these explanations. Their research contributes to the understanding of the relationship between model characteristics and the interpretability of their outputs. Their taxonomy covers the different types of explanations produced by sub-symbolic models.
In the study by Wang et al. [72], the authors introduced a conceptual framework that elucidates how human reasoning processes can be integrated with explainable AI techniques. This framework establishes connections between different facets of explainability, such as explanation goals and types, as well as human reasoning mechanisms and AI methods. Notably, it facilitates a deeper understanding of the parallels between human reasoning and the generation of explanations by AI systems. By leveraging this conceptual framework, researchers and practitioners can gain insights into the interplay between human cognition and explainable AI.
Tiddi et al. [68] proposed an ontology design pattern specifically tailored for explanations. The authors observed that while the components of explanations may vary across different fields, there exist certain atomic components that can represent generic explanations, such as the situation to which the explanation applies, the agents involved, and the theory on which the explanation is grounded.

Explanation ontology overview with key classes separated into three overlapping attribute categories: user, interface, and system [11].
In a related study, Chari et al. [11] extended the ontology design pattern proposed by Tiddi et al. [68] to encompass explanations generated through computational processes. They developed an explanation ontology whose key classes are organised into three overlapping categories of attributes: user, interface, and system.
Overall, ontologies act as a lingua franca for representation and information exchange in explainable AI systems by providing a shared, formal representation of domain knowledge. They facilitate transparent communication, promote collaboration between different stakeholders, and enhance the interpretability and reliability of AI systems’ explanations. By using ontologies, neuro-symbolic AI systems can bridge the gap between technical AI components and human understanding, making AI more transparent, accessible and, hence, trustworthy to end-users. However, despite the existence of several proposals for a general-purpose explanation ontology, they have primarily been utilized in an academic context. Their widespread adoption hinges on their standardization and acceptance within industry.
The majority of existing approaches to XAI rely on statistical analysis of black box models [1]. While this type of analysis has demonstrated its utility in gaining some insight into the internal workings of black box models, it generally lacks support for explainability based on common-sense reasoning [15,49]. As a result, these approaches often fall short in providing explanations that closely align with human reasoning, thereby limiting their capacity to generate intelligible explanations. Conversely, there is widespread acceptance that symbolic knowledge can effectively facilitate common-sense reasoning. Therefore, it is reasonable to consider that explanation techniques can leverage ontologies to enhance model explainability and generate explanations that are more understandable to humans.
In this context, several works attempt to cross-fertilise explainability with ontologies. Seeliger et al. [64] surveyed which combinations of ontologies, knowledge graphs, and statistical models have been proposed to enhance model explainability, and which domains have been particularly prominent. The authors highlighted that quite a few approaches exist in supervised and unsupervised machine learning, whereas the integration of symbolic knowledge in reinforcement learning has been largely overlooked. Most approaches in supervised learning seek to define a mapping between network inputs or neurons and ontology concepts, which are then used in the explanations [16–18].

Decision tree extracted without (a) and with (b) a domain ontology to explain the conditions to grant or refuse a loan [16]. It can be seen that the use of an ontology leads to different features appearing in the decision nodes.
In general, these methods rely on the presence of a domain ontology, which aids in generating symbolic justifications for the outputs of neural network models. Nevertheless, the way in which the ontology is integrated may differ among various approaches. In [18], it is shown how the activations of the inner layers of a neural network w.r.t. a given sample can be aligned with domain ontology concepts. To detect this alignment, multiple mapping networks (one network for each concept) are trained. Each network takes certain layer activations as input and produces the relevance probability for the corresponding concept as output. However, since the number of inner activations is typically substantial, and not all concepts can be extracted from each layer, this alignment procedure can be inefficient. In [17], it is assumed that training data contains labels that are (manually) mapped to concepts defined in a domain ontology. This semantic link is then exploited to provide explanations as justifications of the classification obtained. In [16], a similar mapping is applied between input features and concepts in a domain ontology, where the ontology is used to guide the search for explanations. In particular, the authors proposed an algorithm that extracts decision trees as surrogate models of a black box classifier and that takes ontologies into account during tree extraction. The algorithm learns decision trees whose decision nodes are associated with more general concepts defined in an ontology (Figure 3). This has proven to enhance the human-understandability of decision trees [16].
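The core intuition of ontology-guided tree extraction can be sketched as the lifting of raw features to more general ontology concepts, so that decision nodes mention concepts users recognise. This is a loose illustration of the idea in [16], not their algorithm; the feature names and the concept hierarchy below are hypothetical.

```python
# Sketch of ontology-guided generalisation: before extracting a surrogate
# decision tree, raw input features are lifted to more general ontology
# concepts. Feature names and hierarchy are invented for illustration.

ONTOLOGY = {  # raw feature -> more general ontology concept
    "checking_balance": "FinancialStanding",
    "savings_balance": "FinancialStanding",
    "loan_duration": "LoanTerms",
    "installment_rate": "LoanTerms",
}

def lift(features):
    """Replace each raw feature with its more general named concept,
    leaving unmapped features unchanged."""
    return {ONTOLOGY.get(name, name) for name in features}

# Decision nodes of a raw surrogate tree vs. their ontology-guided lifting
raw_nodes = ["checking_balance", "savings_balance", "loan_duration"]
print(sorted(lift(raw_nodes)))  # ['FinancialStanding', 'LoanTerms']
```

In the actual approach, the generalisation is interleaved with tree learning so that split quality is evaluated on the lifted concepts; the sketch only shows the vocabulary change that makes the resulting tree more readable.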
In the context of unsupervised learning, Tiddi et al. [67] introduced a technique to explain clusters by traversing a knowledge graph in order to identify commonalities among them. The system generates potential explanations by utilising both the background knowledge and the given cluster, and it is independent of the specific clustering algorithm employed.
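The intuition can be sketched as follows, loosely following [67]: candidate explanations for a cluster are facts in the background knowledge shared by every cluster member. The knowledge graph and cluster below are invented for illustration.

```python
# Sketch of explaining a cluster via background knowledge: candidate
# explanations are (predicate, object) pairs shared by all cluster members.
# All data is illustrative.
KG = {
    "Rome":   {("capitalOf", "Italy"), ("locatedIn", "Europe")},
    "Paris":  {("capitalOf", "France"), ("locatedIn", "Europe")},
    "Berlin": {("capitalOf", "Germany"), ("locatedIn", "Europe")},
}

def explain_cluster(cluster):
    """Return (predicate, object) pairs common to all cluster members."""
    common = set.intersection(*(KG[item] for item in cluster))
    return sorted(common)

print(explain_cluster(["Rome", "Paris", "Berlin"]))
# [('locatedIn', 'Europe')]
```

Real systems traverse multi-hop paths in the graph rather than one-hop facts, and rank candidate explanations by how well they separate the cluster from the rest of the data; the sketch keeps only the shared-commonality core, independent of the clustering algorithm.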
One of the main challenges in all of these approaches lies in aligning the data used in statistical models with semantic knowledge. One possible solution is to create an ontology dedicated to each dataset and application, ensuring that the generated explanations are tailored to the specific problem. However, this approach can be prohibitive in application scenarios with stringent time and scalability constraints. An alternative in these cases is to systematically construct a suitable ontology by mapping sets of features to existing general-purpose ontologies, such as MS Concept Graph [73] or DBpedia [44]. This process can be facilitated through ontology matching techniques [24] and mapping methods [38]. It is important to note that human supervision is required, and manual fine-tuning of the mappings is necessary; unfortunately, there is no definitive blueprint to follow in this regard. The inclusion of explicit knowledge is essential for any attempt at generating human-interpretable explanations. The choice between a domain-specific ontology and the ad-hoc adaptation of a domain-independent (i.e., upper-level, foundational) one depends on the specific requirements of each application. For example, when a primary requirement is knowledge sharing and interoperability, opting for an upper-level/foundational ontology is strongly advised. Such an ontology can properly support the tasks of making explicit the domain conceptualisation at hand, safely identifying the correct relations between elements in different applications, and reusing standardised domain-independent concepts and theories.
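A first automatic pass of such feature-to-concept mapping can be sketched with simple lexical similarity, leaving ambiguous cases for human review as the text notes. The concept labels, feature names, and threshold below are hypothetical; real pipelines use dedicated ontology matching tools rather than raw string similarity.

```python
# Sketch of (semi-)automatic feature-to-concept mapping: each dataset
# feature is linked to the lexically closest ontology concept label.
# Labels, features, and the 0.5 threshold are invented for illustration.
from difflib import SequenceMatcher

CONCEPT_LABELS = ["Blood Pressure", "Cholesterol Level", "Heart Rate"]

def best_concept(feature, labels=CONCEPT_LABELS, threshold=0.5):
    """Return the most similar concept label, or None below threshold
    (signalling that a human must resolve the mapping)."""
    norm = feature.replace("_", " ").lower()
    scored = [(SequenceMatcher(None, norm, l.lower()).ratio(), l) for l in labels]
    score, label = max(scored)
    return label if score >= threshold else None

print(best_concept("blood_pressure"))  # 'Blood Pressure'
print(best_concept("zip_code"))        # None -> needs manual mapping
```

The `None` outcome is the important part: it makes explicit where the automatic alignment is unreliable and human fine-tuning is required.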
There are ontology matching approaches in the literature that establish mappings between domain ontologies with the support of foundational ontologies, as well as approaches that map domain ontologies to foundational ontologies. In general, mapping knowledge derived from statistical techniques to ontologies (as computational logical theories) allows for a form of hybrid reasoning, in which symbolic automated reasoning can complement statistical reasoning and circumvent its limitations [45]. However, in the particular case of mapping learned concepts and relations to a foundational ontology, we have the additional opportunity of grounding these knowledge elements in domain-independent axiomatized theories that describe common-sensical notions such as parthood, (existential, generic, historical) dependence, causality, temporal ordering of events, etc. [29]. This can, in turn, support more refined and transparent explanations via the expansion of the consequences of these mappings enabled by logical reasoning. For example, by mapping a learned element to the foundational notion of parthood, domain-independent axioms such as the transitivity of parthood can be used to derive additional facts that enrich an explanation.
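This consequence expansion can be sketched with one foundational axiom, the transitivity of parthood. The anatomical facts below are invented for illustration; the point is only that a domain-independent axiom lets the system derive facts that no statistical component ever produced.

```python
# Sketch of expanding the consequences of a foundational-ontology mapping:
# the common-sense axiom "parthood is transitive" derives implicit part-of
# facts usable in an explanation. Facts are illustrative.
PART_OF = {("LeftVentricle", "Heart"), ("Heart", "CirculatorySystem")}

def entailed_parts(facts):
    """Close a part-of relation under transitivity (naive fixpoint)."""
    closure = set(facts)
    changed = True
    while changed:
        changed = False
        for (a, b) in list(closure):
            for (c, d) in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

print(("LeftVentricle", "CirculatorySystem") in entailed_parts(PART_OF))  # True
```

In a full system, such axioms come from the foundational ontology itself and the expansion is performed by a logical reasoner rather than a hand-written fixpoint.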
Abstraction and refinement are mechanisms that can be leveraged to represent knowledge in a more general or more specific manner. With abstraction, one hopes to retain all that is relevant and drop the irrelevant details; with refinement, one hopes to recover more detail. These mechanisms play a central role in human reasoning. Humans use abstraction every day as a way to deal with the complexity of the world. Abstraction and refinement have many important applications, e.g., in natural language understanding, problem solving and planning, and reasoning by analogy.
Different formalisations of abstraction and refinement have been proposed in the literature. Keet [40] argued that most proposals for abstraction differ along three dimensions: the language to which it is applied, the methodology, and the semantics of what one does when abstracting. A syntactic theory of abstraction, mainly based on proof-theoretic notions, was proposed in [25], whilst a semantic theory based on viewing abstractions as model-level mappings was proposed in [52]. These solutions were mainly theoretical and were not developed for, or assessed on, their potential for implementation, reusability, and scalability.
Abstraction is tightly connected with analogical reasoning. The structure mapping theory proposed by Gentner [23] suggests that humans use analogical reasoning to map the structure of one domain onto another, highlighting the shared relational information between objects. This process involves abstraction, as it can disregard specific object details in favor of identifying higher-level relational patterns that are relevant for making inferences and solving problems.
In the context of explainability, abstraction and analogical reasoning are essential for generating intelligible explanations for users. Abstraction allows an explanatory system to simplify complex decision-making processes by identifying higher-level patterns or by composing local explanations into (more general) global explanations [65]. Analogical reasoning, on the other hand, aids in aligning these patterns with familiar analogs from known domains. Consequently, users can grasp intricate decision-making processes in a more accessible and intelligible manner.
Abstraction and analogical reasoning are closely linked to the idea that explanations are selective and can be seen as truthful approximations of reality [49]. One typical example is the scientific explanation of an atom through the analogy of a miniature solar system, where the nucleus plays the role of the sun and the electrons orbit around it like planets. Clearly, this analogy is a simplified representation of the atomic structure. In reality, atoms are much more complex, and the behavior of electrons is better described using quantum mechanics. However, by using the solar system analogy, lay users can visualise and understand the basic concept of how an atom's structure is organised, with a central nucleus and orbiting electrons. Furthermore, explanations often involve simplifications to make the information more understandable. These simplifications are necessary because a complete representation of reality can be difficult to grasp. For instance, if asked to explain a prediction, a medical diagnosis AI system can offer an abstracted explanation that emphasizes the essential factors contributing to the diagnosis, rather than overwhelming the user with an exhaustive list of all the features and their values.

Refinement and abstraction applied to obtain explanations tailored to expert and lay users. Explanations are grounded in a domain ontology modelling concept definitions and the relations between them. An explanation can be made more specific or more general by exploiting the concept relationships defined in the ontology. In the explanation for expert users, the concepts associated with specific input features are used directly, whereas in the explanation for lay users they are replaced by more general concepts from the ontology.
To exemplify this idea further, let us imagine that the medical diagnosis AI system runs a predictive model about the risk of a patient getting a heart attack, and that it needs to provide an explanation for each prediction. The recipient of the explanation can be a medical doctor, who is a domain expert, or a lay user (Figure 4). On the one hand, the doctor would like to get a detailed explanation that provides insights about what attributes are used to make a diagnosis. On the other hand, a lay user might feel more comfortable receiving a more abstract explanation that can still justify the diagnosis. Figure 4 exemplifies this idea. On the right side of the figure, two explanations are shown: one for an expert user and one for a lay user. Clearly, both of them explain the diagnosis but with a different level of detail. The explanations are grounded in a domain ontology, e.g., some features in the explanation are associated with concepts defined in the domain ontology. On the left side of the figure, an excerpt of a simple ontology modelling the heart disease domain is represented. The ontology consists of simple concepts modelling patient attributes and risk factors, together with the relations between them.
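The tailoring mechanism of Figure 4 can be sketched as a walk up the concept taxonomy: expert users see the concepts attached to the model's features, lay users see their more general ancestors. The concept hierarchy and feature names below are hypothetical.

```python
# Sketch of tailoring one explanation to different users: expert users see
# specific concepts, lay users see their ancestors in the ontology.
# The hierarchy and concept names are invented for illustration.
PARENT = {  # concept -> more general concept
    "SystolicBloodPressure": "BloodPressure",
    "LDLCholesterol": "CholesterolLevel",
    "BloodPressure": "RiskFactor",
    "CholesterolLevel": "RiskFactor",
}

def abstract(concept, levels=1):
    """Climb `levels` steps up the taxonomy; refinement is the inverse walk."""
    for _ in range(levels):
        concept = PARENT.get(concept, concept)
    return concept

expert = ["SystolicBloodPressure", "LDLCholesterol"]
lay = [abstract(c) for c in expert]
print(lay)                                       # ['BloodPressure', 'CholesterolLevel']
print([abstract(c, levels=2) for c in expert])   # ['RiskFactor', 'RiskFactor']
```

Choosing how many levels to climb per user profile is exactly the complexity management decision the text discusses: too few levels overwhelm a lay user, too many make the explanation vacuous.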
Abstraction is just one of the possible techniques to address the goals of complexity management of complex information structures [61]. Complexity management, more generally, is essential for explanation, given that to explain one must focus on the explanation goals of an explanation-seeker [62]. In other words, explanations must be selected and structured according to the goals and background of those who seek them.
Another important notion of refinement is that of ontological unpacking, i.e., the process of making explicit the ontological commitments embedded in a model.
Knowledge refinement and complexity management techniques play a pivotal role in offering explanations tailored to user requirements and backgrounds. Despite notable progress in their adoption, there remains scope for their advancement as enabling technologies for explainability in neuro-symbolic AI. For instance, ontology unpacking currently relies heavily on manual processes, necessitating collaboration with domain experts, while the development and exploration of supportive tools are ongoing endeavors. Abstraction and refinement tasks often deal with extensive search spaces, with the incorporation of preferences or heuristics still to be explored.
Apart from the challenges associated with the three perspectives discussed until now, there are several other open challenges that must be tackled to integrate ontologies as fundamental enabling technologies for explanations in neuro-symbolic AI. In the following, we outline a few of them. The interested reader will find a more comprehensive discussion of the challenges associated with explanations in [15,46].
Ontology as explanations: While ontologies may contribute to offering explanations within a domain of interest (see e.g., [16,36]), and ontology usage should result in an understanding of the domain, this does not automatically mean that the ontology itself is self-explainable and easily understandable by humans [62]. The same holds for other symbolic artefacts that are offered as explanations of numerical black boxes (e.g., knowledge graphs, decision trees). For this reason, it can be argued that these symbolic artefacts also require their own explanation. The idea of ontological unpacking is relevant here as a means to identify and make explicit the ontological commitments embedded in these artefacts.
Causal explanations: A key notion related to explanations is that of causality [54]. Although not all explanations are causal explanations, causal explanations occupy a fundamental place in the scientific explanation literature [42]. In particular, knowing what relationship there is between input and output, or among input features, can foster human-understandable explanations. However, causal explanations are largely lacking in the machine learning literature, with only a few exceptions, e.g., [12]. Ontologies can capture causal rules once the knowledge of an application domain has been modelled. On the other hand, to the best of our knowledge, only a few works have attempted to define and model causality within an ontology. In [43], the authors proposed different definitions of causality, and studied how constraints of a different nature, namely structural, causal, and circumstantial, intervene in shaping causal relations. However, as the authors claim, the approach is incomplete and further extensions are needed. [9] puts forth a formal theory of causation. In this theory, dispositions are activated by the obtaining of certain situations (roughly, individual states of affairs) and are manifested via the occurrence of events. In other words, events cause each other via a continuous mechanism of events bringing about situations, which activate dispositions, which in turn are manifested as other events, and so on.
Evaluating the human-understandability and effectiveness of explanations: Rudin et al. [63] identified the evaluation of explanations as a major challenge to be faced in the context of XAI. On the one hand, one would like to quantify to what extent users can understand and use an explanation. A few approaches have proposed quantitative metrics and protocols [13,34,71], but it is still unclear how to compare the results of different evaluations and establish a common understanding of how to evaluate explanations. There are already some promising approaches in the literature to this problem. In [51], the authors identify several conceptual properties that should be considered when assessing the quality of explanations, and they propose quantitative methods to evaluate an explanation. More recently, a survey-based methodology for guiding the human evaluation of explanations was proposed in [14].
Ontologies and large language models: The capability of large language models (LLMs) to generate multi-modal content, which can sometimes be the product of hallucinations, raises doubts about their trustworthiness. Several works attempt to connect ontologies with LLMs [8,47]. Once a link between the content generated by an LLM and an ontology is established, the latter can be used to check the consistency of the content, provide explanations for its consistency or inconsistency, and potentially offer ways to repair hallucinations.
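One simple form of such a check can be sketched as vetting LLM-generated type assertions against an ontology's disjointness axioms. The class names, hierarchy, and flagged assertion below are all invented for illustration.

```python
# Sketch of using an ontology to vet LLM-generated statements: generated
# type assertions are checked against declared disjointness axioms.
# All class names and assertions are illustrative.
SUBCLASS = {"Professor": "Person", "University": "Organisation"}
DISJOINT = {frozenset({"Person", "Organisation"})}

def ancestors(cls):
    """The class and all its superclasses."""
    out = {cls}
    while cls in SUBCLASS:
        cls = SUBCLASS[cls]
        out.add(cls)
    return out

def consistent(assertions):
    """Check that no individual is typed by two disjoint classes."""
    by_individual = {}
    for individual, cls in assertions:
        by_individual.setdefault(individual, set()).update(ancestors(cls))
    for types in by_individual.values():
        for pair in DISJOINT:
            if pair <= types:
                return False  # disjointness axiom violated
    return True

print(consistent([("Alice", "Professor")]))                       # True
print(consistent([("MIT", "University"), ("MIT", "Professor")]))  # False
```

Beyond flagging the inconsistency, the violated axiom itself can be reported as an explanation of why the generated content is suspect, which is the repair opportunity the text alludes to.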
Finally, another ontology-based approach to explanation, which is connected to value-based justification (in ethics) is the one discussed in [32]. There, explicability is related to the reconstruction of decision-making processes, which in turn are grounded on preference relations, which in turn are grounded on value-assessments. The whole approach is grounded on an ontological analysis of ethical dimensions, and, ultimately, on an ontological analysis of the notions of value, risk, autonomy and delegation.
In recent years, there has been a resurgence of interest in symbolic AI. Symbolic AI stands out as a pivotal enabling technology for neuro-symbolic AI systems: it can address the constraints inherent in black box deep learning models by facilitating reasoning capabilities and explanatory support.
In this paper, we discussed the role of ontologies and knowledge in explanations for neuro-symbolic AI from three perspectives: reference modelling, common-sense reasoning, and knowledge refinement and complexity management.
The role played by ontologies within these perspectives can be summarized as follows. Firstly, ontologies provide formal, consensual reference models for designing explainable systems and generating human-understandable explanations. Ontologies provide a common lingua franca for defining explanations, promoting interoperability and the reusability of explanations across various domains. Secondly, ontologies enable the creation of explanations with linked semantics. This can, in turn, support more refined and transparent explanations via knowledge expansion enabled by logical reasoning. Thus, integrating ontologies with current explanation techniques allows for a form of hybrid reasoning, enhancing the human-understandability of explanations. Finally, ontologies offer the ability to abstract and refine knowledge, which serves as a foundation for human reasoning. Knowledge refinement and complexity management are essential to craft personalised explanations that are human-centric and tailored to different user profiles.
Given the above, ontologies can play a crucial role for explanations in neuro-symbolic AI. Nevertheless, a number of challenges still need to be addressed, namely the integration of foundational and domain ontologies in current explainability approaches, the adoption of complexity management techniques to ensure that ontologies used as explanations are manageable and comprehensible for users, the modelling and evaluation of causality, and the evaluation of the human-understandability of explanations. Last but not least, an important challenge is establishing the relationship between ontologies and large language models, and exploring how this connection can be used to explain and possibly correct LLM hallucinations.
