In this paper, we propose an argumentation formalism that allows for both deductive and abductive argumentation, where ‘deduction’ is used as an umbrella term for both defeasible and strict ‘forward’ inference. Our formalism is based on an extended version of our previously proposed information graph (IG) formalism, which provides a precise account of the interplay between deductive and abductive inference and causal and evidential information. In the current version, we consider additional types of information such as abstractions which allow domain experts to be more expressive in stating their knowledge, where we identify and impose constraints on the types of inferences that may be performed with the different types of information. A new notion of attack is defined that captures a crucial aspect of abductive reasoning, namely that of competition between abductively inferred alternative explanations. Our argumentation formalism generates an abstract argumentation framework and thus allows arguments to be formally evaluated. We prove that instantiations of our argumentation formalism satisfy key rationality postulates.
In the legal and forensic domains, reasoning about evidence plays a central role in the rational process of proof [2,4]. To aid in this process, various graph-based tools exist that allow domain experts to make sense of a mass of evidence in a case, such as mind maps [23,35], argument diagrams [6,23] and Wigmore charts [40]. Because of their informal nature, these tools typically do not directly allow for formal evaluation using AI techniques such as computational argumentation [13]. Hence, we wish to formalise and disambiguate analyses performed using such tools in a manner that (1) allows for formal evaluation and that (2) adheres to principles from the literature on reasoning about evidence [2,4,17,25], while (3) allowing inference to be performed and visualised in a manner that is closely related to the way inference is performed and visualised by domain experts using such tools.
As we described in previous work [39], principles from the literature on reasoning about evidence state that inference is often performed using domain-specific generalisations [2,4,6], also called defaults [25,32], which capture knowledge about the world in conditional form. A distinction can be made between causal generalisations (e.g. ‘fire typically causes smoke’) and evidential generalisations (e.g. ‘smoke is evidence for fire’) [4,25]. In the current paper, we also consider generalisations that are neither causal nor evidential; examples are abstractions [4,12] and mere statistical correlations. Inference can be performed in a deductive or forward fashion, where from a generalisation (e.g. ‘fire typically causes smoke’) and its antecedent (fire), the consequent (smoke) is strictly or defeasibly inferred, and in an abductive [12,17] or backward fashion, where from a causal generalisation or an abstraction and by affirming the consequent (smoke), the antecedent (fire) is defeasibly inferred. Note that the term ‘deduction’ is not consistently used in the literature, as it can either mean strict inference, in which the consequent universally holds given the antecedents (e.g. [20]) or defeasible inference, in which the consequent tentatively holds given the antecedents (e.g. [33]). To cover both meanings, in this paper ‘deduction’ is used as an umbrella term for both defeasible ‘forward’ inference and strict ‘forward’ inference.
Pearl [25, p. 264] argued that people generally consider it difficult to express knowledge using only causal generalisations, and in an empirical study, van den Braak and colleagues [36] found that while there are situations in which subjects consistently choose either causal or evidential modelling techniques, there are also many examples in which people use both types of generalisations in their reasoning. For instance, subjects often considered testimonies to be evidential, whereas a motive for committing an act was considered a cause for committing that act. This discussion illustrates that in formal accounts of reasoning about evidence, it is important to allow for causal and evidential generalisations [4]. Moreover, in this paper we show that it is important to also allow for abstractions and other types of generalisations, as these allow domain experts to be more expressive in stating their knowledge. The need for including these types of generalisations will become apparent from the examples we consider and the conceptual analysis of reasoning about evidence we provide.
When performing analyses using aforementioned tools such as mind maps, domain experts naturally mix the different types of generalisations and perform both deductive and abductive inferences, where the used generalisations and the inference type (deduction, abduction) are typically left implicit. Hence, in previous work [39] we set out to formalise analyses performed using these tools by providing a precise account of the interplay between the different types of inferences and generalisations and the constraints on performing inference we need to impose in terms of the information graph (IG) formalism. In this paper, we propose an extension of the IG-formalism, where in addition to causal and evidential generalisations we now also allow for abstractions and introduce a category of generalisations termed ‘other’, consisting of generalisations that are neither causal, nor evidential, nor abstractions, such as the aforementioned mere correlations, thereby increasing the expressivity of the IG-formalism. We particularly focus on identifying conditions under which performing inference with abstractions can lead to undesirable results. Specifically, care should be taken that no version of an event at a lower level of abstraction is inferred if an alternative version of this event at a lower level of abstraction was already previously inferred. Hence, we extend the constraints imposed by Pearl’s C–E system [25], which say that, in performing inference, care should be taken that no cause for an effect is inferred in case an alternative cause for this effect was already previously inferred. Moreover, in this paper we also consider exceptional circumstances under which the constraints of Pearl’s C–E system should not be imposed, namely in case enabling conditions [11] are provided under which a generalisation may be used in performing inference.
Based on these constraints and our conceptual analysis of reasoning about evidence, we define how deductive and abductive inference may be performed with IGs. Most existing formalisms that allow both inference types with causal and evidential information, abstractions, and other types of information are logic-based (e.g. [4,5,12,20]); instead, we opt for a graph-based formalism to remain closely related to the way analyses are visualised using aforementioned graph-based tools.
The information specified in an IG serves as a source of information that can be used to facilitate the construction of AI systems for which formal semantics are defined. In earlier work [39], we investigated the application of our IG-formalism in facilitating the construction of Bayesian networks (BNs) [16], graphical models of joint probability distributions. In this paper, we instead focus on argumentation, where we propose an argumentation formalism based on IGs that allows for both deductive and abductive argumentation [38]. Previous work on abduction includes work on formal logical models of abductive reasoning (e.g. [12,17]) and the work of Kakas and colleagues on abductive logic programming [19]. However, to the best of our knowledge, our proposed formalism is one of the first formalisms that models combined abductive and deductive reasoning in a formalism for structured argumentation. The closest to the current paper is Bex’s integrated theory of causal and evidential arguments [5], which is based on the framework of [20]. In Bex’s integrated theory, the roles of generalisation and inference are not separated; instead, causal and evidential inferences are defined and arguments are constructed by forward chaining such inferences. In contrast to [5], we put special emphasis on the constraints that need to be imposed on the types of inferences that may be performed with the different types of generalisations, where we formally prove that arguments based on IGs indeed adhere to the identified constraints. Finally, compared to the framework of [20], which only allows for deductive reasoning, we allow for both deductive and abductive reasoning and introduce a new type of conflict, namely conflict between competing alternative explanations [17], which is currently not accounted for in that framework. The relation of our work to existing formalisms is further discussed in Section 6.
Our approach generates an abstract argumentation framework as in Dung [13], that is, a set of arguments with a binary attack relation, which thus allows arguments to be formally evaluated according to Dung’s argumentation semantics. Besides allowing for rebuttal and undercutting attack, which are among the types of attacks that are typically distinguished in structured argumentation [20,27], we also define the notion of alternative attack among arguments based on IGs, a concept based on the notion of competing alternative explanations that is inspired by [3,5]. Alternative attack captures a crucial aspect of abductive reasoning, namely that of conflict between abductively inferred conclusions [17].
Our argumentation formalism extends a preliminary version proposed in [38] that was based on a more restricted version of our IG-formalism [39] in which only causal and evidential generalisations without enablers were considered. Moreover, in comparison to our earlier work [38] we now also prove that key rationality postulates [9] are satisfied by instantiations of our formalism, which implies that anomalous results as identified by [9] are avoided.
To summarise the main contributions of this paper, we propose an argumentation formalism that allows for both deductive and abductive argumentation, the latter of which has received relatively little attention in argumentation. Our argumentation formalism is based on an extended version of our IG-formalism, where in addition to causal and evidential generalisations we now also allow for abstractions and other types of generalisations, as well as generalisations that include enabling conditions, where constraints are imposed on the types of inferences that may be performed with these new types of generalisations. A new notion of attack is defined, namely alternative attack. Our approach allows arguments to be evaluated using Dung’s semantics. We formally prove that instantiations of our argumentation formalism satisfy key rationality postulates [9].
The paper is structured as follows. In Section 2 we provide a conceptual analysis of reasoning about evidence. In Section 3 we present examples of analyses performed using informal reasoning tools typically used by domain experts, namely Wigmore charts and mind maps, which illustrate that domain experts perform both deductive and abductive inference using causal and evidential generalisations, abstractions, and other types of generalisations. Based on these examples, in Section 4 we motivate and define our IG-formalism. In Section 5 we then define our argumentation formalism based on our IG-formalism and prove formal properties of our approach. In Section 6 we discuss related work. In Section 7 we summarise our findings and conclude.
Reasoning about evidence
In this section, we provide a conceptual analysis of reasoning about evidence, where we review the terminology used to describe it and introduce assumptions that demarcate the scope of the work presented in this paper. This analysis extends the analysis provided in our previous work [39] in which only causal and evidential generalisations without enablers were considered. More specifically, we now also consider abstractions and other types of generalisations, as well as generalisations that include enabling conditions. The concepts and assumptions introduced in this section are formalised in Sections 4 and 5.
Inference is the process of drawing conclusions from premises starting from the evidence, where evidence is that which has been established with certainty in the context under consideration. For instance, in the context of a legal trial, the evidence consists of that which is actually observed by a judge or jury, such as documents (e.g. police and autopsy reports) and other tangible evidence, as well as testimonial evidence [2]. Inference is often performed using domain-specific generalisations [2,4,6], also called defaults [25,32], which capture knowledge about the world in conditional form. Generalisations can either be strict or defeasible, where defeasible generalisations are of the form ‘If a1, …, an, then usually/normally/typically b’ and strict generalisations are of the form ‘If a1, …, an, then always b’. Here, claims a1, …, an are called the antecedents of the generalisation and b its consequent, where we assume that claims are literal propositions and that generalisations have one or more antecedents and exactly one consequent. In case a generalisation has multiple antecedents, it expresses that only the antecedents together allow us to infer the consequent. We semi-formally denote generalisations as a1, …, an → b, among others to ease the description of examples in this section and in Section 3. For defeasible generalisations, exceptional circumstances can be provided under which the generalisation may not hold, whereas strict generalisations hold without exception. An example of a (defeasible) generalisation is ‘If fire, then typically smoke’, where ‘fire’ is its antecedent and ‘smoke’ its consequent. An example of an exception to this generalisation is that sufficient oxygen is present for complete combustion to occur.
A distinction can be made between causal and evidential generalisations [4,25], where instead of writing these generalisations in the form ‘If … , then …’, causal generalisations are written as ‘c1, …, cn usually/normally/typically cause e’ (e.g. ‘fire typically causes smoke’) and evidential generalisations are written as ‘e1, …, en are evidence for c’ (e.g. ‘smoke is evidence for fire’). For a causal generalisation, its antecedents express causes for the consequent, and for an evidential generalisation, its consequent expresses the usual cause for its antecedents. In the context of commonsense reasoning about evidence, causal and evidential generalisations are often assumed to be defeasible (see e.g. [4,18]); in this paper, this assumption is also made. The examples considered throughout this paper illustrate that causal and evidential generalisations are typically not strict.¹
¹ Note that strict generalisations such as strict rules from classical logic and definitions can be expressed using strict generalisations of type ‘other’ and strict abstractions.
In this paper, we also consider generalisations that are neither causal nor evidential. For instance, abstractions [4,12] allow for reasoning at different levels of abstraction. More precisely, abstractions are of the form ‘p1, …, pn can usually/normally/typically/always be considered a specialisation of q’ (e.g. guns can usually be considered deadly weapons), where the antecedents p1, …, pn are considered to be more specific than the more abstract consequent q. As noted by Console and Dupré [12], abstractions are syntactically the same as causal generalisations but they are semantically different in that the antecedents of abstractions do not express causes for the consequent or vice versa. Abstractions may be defeasible (cf. [4]) but may also be strict (cf. [12]); an example of a strict abstraction is generalisation lung_cancer → cancer, which states that lung cancer is a type of cancer. An example of a defeasible abstraction is gun → deadly_weapon, where an example of an exception to this generalisation is that the gun is a non-functional replica, or a water gun.
Table 1. For each generalisation type, whether generalisations may be defeasible or strict.

                Causal   Evidential   Abstractions   Other
Defeasible        ✓          ✓             ✓           ✓
Strict            ✗          ✗             ✓           ✓
Another example of a different type of generalisation is a generalisation representing a mere statistical correlation, such as a correlation between homelessness and criminality. While there may be one or more confounding factors that cause both homelessness and criminality (e.g. unemployment), a domain expert may be unaware of these factors or may wish to refrain from expressing them explicitly. In this paper, we distinguish between generalisations that are causal, evidential, abstractions, or of another type, where generalisations of type ‘other’ may be defeasible or strict. Specifically, as this category contains all possible types of generalisations other than causal, evidential and abstraction, we allow for the option to distinguish between strict and defeasible generalisations among these generalisations. Table 1 provides an overview of the different generalisation types, where for each type it is indicated whether generalisations may be defeasible or strict. The notation →c, →e, →a and →o is used for causal, evidential, abstraction and ‘other’ generalisations, respectively; where the type of a generalisation is clear from context, we simply write →.
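As an illustration of the taxonomy above, the four generalisation types and the defeasible/strict distinction of Table 1 could be encoded as follows. This is a minimal Python sketch for illustration only; it is not part of the IG-formalism, and all names are ours:

```python
from dataclasses import dataclass
from typing import Tuple

# The four generalisation types distinguished in this section.
CAUSAL, EVIDENTIAL, ABSTRACTION, OTHER = "causal", "evidential", "abstraction", "other"

@dataclass(frozen=True)
class Generalisation:
    antecedents: Tuple[str, ...]  # one or more antecedents
    consequent: str               # exactly one consequent
    gtype: str                    # one of the four types above
    strict: bool = False          # defeasible unless stated otherwise

    def __post_init__(self):
        # Table 1: causal and evidential generalisations are assumed to be
        # defeasible; only abstractions and 'other' may be strict.
        if self.strict and self.gtype in (CAUSAL, EVIDENTIAL):
            raise ValueError(f"{self.gtype} generalisations are assumed to be defeasible")

# 'Fire typically causes smoke' (defeasible, causal).
g1 = Generalisation(("fire",), "smoke", CAUSAL)
# 'Lung cancer is a type of cancer' (strict abstraction).
g2 = Generalisation(("lung_cancer",), "cancer", ABSTRACTION, strict=True)
```

The validation in `__post_init__` simply enforces the defeasibility column of Table 1 at construction time.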
Different types of inferences can be performed with generalisations depending on whether their antecedents or consequent are affirmed in that they are either observed or inferred; here, a claim is inferred iff it is either deductively or abductively inferred, where in deductive inference the consequent is inferred from the antecedents and in abductive inference the antecedents are inferred from the consequent. These two inference types are now considered in more detail.
Deductive inference
Inference can be performed in a deductive fashion, where from a generalisation and by affirming the antecedents, the consequent is inferred by modus ponens on the generalisation. As noted in the introduction, the term ‘deduction’ is used for both defeasible and strict ‘forward’ inference; hence, deduction is not necessarily a stronger or more reliable form of inference than abduction, which is a type of defeasible inference. Defeasible deduction can only be performed using defeasible generalisations (of any type) and not using strict generalisations (see Table 2). Strict deductive inference can only be performed using strict abstractions and strict generalisations of type ‘other’. For a given instance of deductive inference, it will be explicitly specified whether it concerns strict or defeasible deductive inference.
Example 1. Consider causal generalisation g: fire → smoke. By affirming g’s antecedent fire, its consequent smoke is defeasibly deductively inferred.
The following example illustrates strict deductive inference.
Example 2. Consider strict abstraction g: lung_cancer → cancer. Upon observing that a person has lung cancer, we can strictly deductively infer that the person has cancer using g.
Table 2. For defeasible and strict generalisations of every type, which types of inference may be performed.

                       Causal   Evidential   Defeasible     Strict        Defeasible   Strict
                                             abstractions   abstractions  ‘other’      ‘other’
Defeasible deduction     ✓          ✓             ✓             ✗             ✓           ✗
Strict deduction         ✗          ✗             ✗             ✓             ✗           ✓
Abduction                ✓          ✗             ✓             ✓             ✗           ✗
Prediction [33] is a specific type of deductive inference in which the consequent of a causal generalisation is deductively inferred by affirming its antecedents. Specifically, as the antecedents of a causal generalisation express causes for the consequent, the consequent is said to be predicted from the antecedents in this case. Example 1 provides an example of prediction.
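The permissions for forward inference recorded in Table 2 can be sketched as a simple guard on modus ponens. The following Python fragment is illustrative only; the tuple representation of a generalisation and the function name are our own:

```python
# Hypothetical representation of a generalisation:
# (antecedents, consequent, gtype, strict).
def deduce(g, affirmed):
    """Return the consequent if deduction with g is permitted given the
    set of affirmed (observed or previously inferred) claims."""
    antecedents, consequent, gtype, strict = g
    if not set(antecedents) <= affirmed:
        return None  # modus ponens requires all antecedents to be affirmed
    if strict and gtype not in ("abstraction", "other"):
        return None  # strict deduction: only strict abstractions / 'other'
    return consequent

# Example 1 (prediction): causal generalisation fire -> smoke.
g = (("fire",), "smoke", "causal", False)
print(deduce(g, {"fire"}))  # smoke
# Example 2 (strict deduction): strict abstraction lung_cancer -> cancer.
g2 = (("lung_cancer",), "cancer", "abstraction", True)
print(deduce(g2, {"lung_cancer"}))  # cancer
```

Note that a generalisation with multiple antecedents only fires once all of its antecedents are affirmed, matching the assumption stated in Section 2.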
Abductive inference
Abduction [12,17], a type of defeasible inference, can be performed using causal generalisations and abstractions: from a causal generalisation or an abstraction and by affirming the consequent, the antecedents are inferred, since if the antecedents were true this would allow us to deductively infer the consequent modus-ponens-style. Following [17], in case causes c1, …, cn and c′1, …, c′m are abductively inferred from common effect e using causal generalisations c1, …, cn → e and c′1, …, c′m → e, then ci and c′j for i ∈ {1, …, n}, j ∈ {1, …, m} are considered to be competing alternative explanations for e. We assume that causes c1, …, cn (and c′1, …, c′m) are not in competition among themselves.
Example 3. Consider the following causal generalisations:
g1: fire → smoke;
g2: smoke_machine → smoke.
By affirming the common consequent of g1 and g2 (smoke), fire and smoke_machine are abductively inferred, which are then competing alternative explanations of smoke.
Abduction can also be performed using abstractions [4,12], where the used abstraction can either be defeasible (cf. [4]) or strict (cf. [12]). An example of a model including strict abstractions is that of Console and Dupré [12], in which both explanatory axioms (comparable to causal generalisations) and abstraction axioms are used to explain observations. Multiple explanations that are inferred using abstraction axioms can then be considered competing alternative explanations. Note that an abductive inference step with a strict abstraction is still defeasible, as it concerns an inference step from the more abstract consequent to a more specific antecedent. Following Console and Dupré [12] and Bex [4], we allow for abduction using both strict and defeasible abstractions, where in performing abduction with abstractions p1, …, pn → q and p′1, …, p′m → q, the antecedents pi and p′j for i ∈ {1, …, n}, j ∈ {1, …, m} are considered to be competing alternative explanations of the common consequent q. We assume that antecedents p1, …, pn (and p′1, …, p′m) are not in competition among themselves.
Example 4. Consider the following defeasible abstractions:
g1: gun → deadly_weapon;
g2: knife → deadly_weapon.
By affirming the common consequent (deadly_weapon), gun and knife are abductively inferred using generalisations g1 and g2, which are then competing alternative explanations of deadly_weapon.
The following example illustrates abductive inference with strict abstractions.
Example 5. Consider the following strict abstractions:
g1: lung_cancer → cancer;
g2: colon_cancer → cancer.
Upon observing that a person has cancer, lung_cancer and colon_cancer are abductively inferred, which are then competing alternative explanations of cancer.
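The competition between abductively inferred explanations seen in Examples 3–5 can be sketched as follows. Again, this Python fragment is illustrative only and not part of the formalism; we represent a generalisation by our own tuple (antecedents, consequent, type):

```python
# Hypothetical representation of a generalisation: (antecedents, consequent, gtype).
def abduce(generalisations, claim):
    """Collect the candidate explanations of an affirmed claim.
    Per Table 2, abduction is only permitted with causal generalisations
    and with (defeasible or strict) abstractions."""
    explanations = []
    for antecedents, consequent, gtype in generalisations:
        if consequent == claim and gtype in ("causal", "abstraction"):
            explanations.append(antecedents)
    return explanations  # distinct entries are competing alternatives

# Example 3: two causal generalisations with common consequent smoke.
gens = [
    (("fire",), "smoke", "causal"),
    (("smoke_machine",), "smoke", "causal"),
    (("smoke",), "fire", "evidential"),  # evidential: abduction not permitted
]
print(abduce(gens, "smoke"))  # [('fire',), ('smoke_machine',)]
```

Each returned tuple of antecedents is one explanation; antecedents within a single tuple are not in competition among themselves, matching the assumption above.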
Representing causal knowledge
Abductive inference with causal generalisations and deductive inference with evidential generalisations are related: in some cases, we will accept not only causal generalisation ‘c usually/normally/typically causes e’ but also evidential generalisation ‘e is evidence for c’ [5,25], which we will call the evidential counterpart of the causal generalisation. However, it can be argued that we only accept the evidential counterpart of a causal generalisation if c is the usual cause of e, where we assume that only one cause can be the usual cause of e.
Example 6. Fire can be considered the usual cause of smoke, so we will accept both causal generalisation g: fire → smoke and its evidential counterpart g′: smoke → fire. In this case, abduction with generalisation g can be encoded as deduction with generalisation g′. Because a smoke machine cannot be considered the usual cause of smoke, we will accept causal generalisation smoke_machine → smoke but we will not accept evidential generalisation smoke → smoke_machine.
Note that a causal generalisation g can only have an evidential counterpart in case g has a single antecedent, as we assume generalisations have a single consequent but possibly multiple antecedents. Furthermore, as we assume that only one cause can be the usual cause of e, only one of the causal generalisations c → e or c′ → e can be replaced by an evidential generalisation. Hence, we do not consider c and c′ to be competing alternative explanations of e in case deductive inference is performed using evidential generalisations e → c and e → c′.
Mixed inference and inference constraints
Deductive and abductive inference can be iteratively performed, where mixed abductive-deductive inference is also possible.
Example 7. Suppose that from the causal generalisation g1: fire → smoke and by affirming its consequent (smoke), its antecedent (fire) is inferred. Now, if the additional causal generalisation g2: fire → heat is provided, then its consequent (heat) can be deductively inferred (or predicted), as its antecedent (fire) has been previously abductively inferred.
Constraints on performing inference with causal and evidential generalisations
Mixed deductive inference, using both causal and evidential generalisations, can also be performed [5], but as noted by Pearl [25] care should be taken in performing mixed inference that no cause for an effect is inferred in case an alternative cause for this effect was already previously inferred.
Example 8. Consider the example in which causal generalisation smoke_machine → smoke and evidential generalisation smoke → fire are provided. Deductively chaining these generalisations would make us infer that there is a fire when seeing a smoke machine, which is clearly undesirable.
Similarly, in performing mixed deductive-abductive inference, care should be taken that no cause for an effect is inferred in case an alternative cause for this effect was already previously inferred.
Example 9. Consider Example 8, where instead of evidential generalisation smoke → fire a causal generalisation fire → smoke is provided. Upon seeing a smoke machine, this would make us infer that there is a fire in case deductive inference and abductive inference are performed in sequence, which is again undesirable.
Accordingly, we wish to prohibit these types of inference patterns, and refer to the constraint that no cause for an effect should be inferred in case an alternative cause for this effect was already previously inferred as Pearl’s constraint [25].
The above discussion can be extended to generalisations with multiple antecedents.
Example 10. Suppose that the following generalisations are provided:
g1: high_body_temperature → fever;
g2: smoke → coughing;
g3: fever, coughing → pneumonia.
Upon observing that a person has a high body temperature and that there is smoke, this would make us infer that the person has a fever and is coughing using generalisations g1 and g2, respectively. In turn, this would make us infer that the person has pneumonia using generalisation g3, which is undesirable: as a cause for coughing was already previously inferred (smoke), we should not be able to infer a different cause for coughing (pneumonia). Specifically, fever is in itself not a sufficient condition for inferring pneumonia: coughing is also necessary. Only in case a separate evidential generalisation fever → pneumonia is provided should we be able to infer pneumonia.
Similar problems arise in performing inference using causal generalisations with multiple antecedents. Accordingly, we wish to extend Pearl’s constraint to generalisations with multiple antecedents. However, there are exceptions under which we do not wish to prohibit the aforementioned types of inference patterns, namely in case additional circumstances, also called enabling conditions [11], or enablers, are provided under which a causal or evidential generalisation may be used in performing inference. Generalisations that include enablers are of the general form a1, …, an, e1, …, em → b, where e1, …, em are its enablers and a1, …, an its actual antecedents. For a causal generalisation, only its actual antecedents and not its enablers express causes for the consequent. Similarly, for an evidential generalisation its consequent only expresses the usual cause for its actual antecedents and not for its enablers. Causality is a contentious topic, and it is easy to disagree about whether an event is an actual cause or an enabler. Cheng and Novick [11] note that an event is typically viewed as an actual cause if it describes a situation that deviates from ‘normal’ circumstances. For instance, lighting a match is considered a cause of fire, but the presence of oxygen is typically not considered a cause of fire as it is normal that oxygen is present. This is, however, also context-dependent, and oxygen can be considered a cause of fire in situations where oxygen is typically not present (e.g. in space). We note that generalisations capture knowledge about the world as perceived by the person stating the knowledge, and that the distinction between enablers and actual causes allows domain experts to be more expressive in stating their knowledge.
The following example illustrates that deductively chaining a causal and an evidential generalisation does not lead to undesirable results for evidential generalisations that include enablers.
Example 11. Consider the example in which the following evidential generalisation is provided:
g: fire, dry_wood → lightning_strike.
Generalisation g states that from the presence of dry_wood and fire we can conclude that there may have been a lightning strike. In this case, dry_wood is an enabler of the generalisation, and lightning_strike cannot be considered a cause for antecedent dry_wood. Only in case fire was previously deductively inferred using a causal generalisation (e.g. torch → fire) should the application of evidential generalisation g be blocked. However, in case dry_wood was previously inferred using a causal generalisation (e.g. warm_summer → dry_wood) and fire is not inferred using a causal generalisation, then we should be able to infer lightning_strike using generalisation g.
Similarly, inference may be performed using causal generalisations that include enablers, but Pearl’s constraint does not need to be reconsidered in this case as illustrated by the following example.
Example 12. Consider the example in which the following causal generalisations are provided:
g1: torch → fire;
g2: match, oxygen → fire.
In this case, the presence of oxygen is an enabler of generalisation g2, as it cannot be considered an actual cause of fire. Upon striking a match in the presence of oxygen, we can deductively infer that there is a fire using generalisation g2. Similar to Example 9, we should now not be able to abductively infer torch using generalisation g1. Similarly, performing deduction and abduction in sequence using generalisations g1 and g2 is undesirable.
To summarise this section, we wish to prohibit (1) subsequent deductive inference using a causal and an evidential generalisation in case the consequent of the causal generalisation is an actual antecedent of the evidential generalisation and not an enabler, and (2) subsequent deductive and abductive inference using two causal generalisations with the same consequent. Note that, while these constraints deviate from Pearl’s original constraints [25] as enabling conditions are now also taken into account, we will refer to these constraints as Pearl’s constraint throughout this paper.
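The bookkeeping behind the two prohibited patterns can be sketched as a single check. The following Python fragment is illustrative only; the constraint itself is formalised later in the paper, and the function and its arguments are our own:

```python
def blocked_by_pearls_constraint(effect, candidate_cause, inferred_causes,
                                 effect_is_enabler=False):
    """Block inferring candidate_cause for effect if an alternative cause
    for effect was already previously inferred via a causal generalisation.
    inferred_causes: causes previously inferred for effect.
    effect_is_enabler: True if effect occurs only as an enabler of the
    generalisation being applied, in which case the constraint is lifted."""
    if effect_is_enabler:
        return False  # enablers are exempt (cf. Example 11)
    return bool(inferred_causes - {candidate_cause})

# Examples 8/9: smoke was causally derived from smoke_machine, so the
# alternative cause fire may not be inferred for smoke.
print(blocked_by_pearls_constraint("smoke", "fire", {"smoke_machine"}))  # True
# Example 11: dry_wood was causally derived (from warm_summer), but it
# occurs only as an enabler, so inferring lightning_strike is not blocked.
print(blocked_by_pearls_constraint("dry_wood", "lightning_strike",
                                   {"warm_summer"}, effect_is_enabler=True))  # False
```

The check applies uniformly to both prohibited patterns, since in either case a cause is about to be inferred for an effect whose alternative cause was already derived.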
Constraints on performing inference with abstractions
When performing inference with abstractions, care should be taken that no version of an event at a lower level of abstraction is abductively inferred if an alternative version of this event at a lower level of abstraction was already previously inferred. In particular, performing deduction and abduction in that order with two abstractions with the same consequent leads to undesirable results.
Example 13. Consider generalisations g1: gun → deadly_weapon and g2: knife → deadly_weapon from Example 4. Upon observing that a provided object is a gun, this would make us deductively infer that this object is a deadly_weapon using generalisation g1. Upon performing abduction with g2, this would make us infer that the provided object is a knife, which is clearly undesirable.
Performing abduction and deduction in that order with two abstractions with the same consequent does not lead to undesirable results.
Example 14. Consider abstractions g1: knife → deadly_weapon and g2: knife → metal_object. Upon observing metal_object, we can abductively infer knife using generalisation g2. In turn, claim deadly_weapon is deductively inferred using generalisation g1.
The following example illustrates that mixed inference, using either a causal generalisation and an abstraction or an evidential generalisation and an abstraction, does not lead to undesirable results. Hence, no additional inference constraints need to be imposed.
Consider Example 5. Assume that in addition to strict abstractions lung_cancer → cancer and colon_cancer → cancer, causal generalisation smoking → cancer is provided. Upon observing that a person smokes, we deductively infer that the person has cancer using this causal generalisation. Using the two abstractions, we can then in turn abductively infer that the person has either lung cancer or colon cancer, which are then competing alternative explanations of cancer (see Example 5). Note that it is not undesirable to infer lung_cancer or colon_cancer from cancer in this case, as smoking and lung_cancer (colon_cancer) are not alternative explanations of cancer; instead, smoking is a cause of cancer, while lung_cancer (colon_cancer) is not a cause of cancer but instead describes claim cancer at a lower level of abstraction. Similar observations can be made by replacing generalisation smoking → cancer with generalisation cancer → smoking.
To summarise this section, we only wish to prohibit subsequent deduction and abduction using two abstractions with the same consequent and not other inference patterns involving abstractions. Finally, note that for generalisations of type ‘other’ no additional inference constraints are imposed.
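As a compact restatement under an illustrative encoding of our own (the names below are not the paper's notation), the only prohibited pattern involving abstractions is deduction followed by abduction through two abstractions sharing a consequent:

```python
def prohibited_abstraction_chain(step1, step2):
    """Each step is a triple (inference, generalisation_kind, consequent),
    e.g. ('deduce', 'abstraction', 'deadly_weapon'); names are illustrative.
    Only deduction followed by abduction via two abstractions with the same
    consequent is prohibited; the reverse order is allowed, as is mixed
    inference involving causal or evidential generalisations."""
    (inf1, kind1, head1), (inf2, kind2, head2) = step1, step2
    return (inf1, inf2) == ('deduce', 'abduce') \
        and kind1 == kind2 == 'abstraction' and head1 == head2
```

Under this sketch, the gun/knife pattern of Example 4 is flagged, while the metal_object/deadly_weapon pattern and the smoking/cancer pattern are not.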
Ambiguous inference
Situations may arise in practice in which both deduction and abduction can be performed with the same causal generalisation or abstraction; the inference type is, therefore, ambiguous.
Consider generalisation fire → smoke. Suppose fire and smoke are not observed but have been previously inferred, for instance via deduction using generalisations see_fire → fire and see_smoke → smoke, where see_fire and see_smoke are provided as evidence. Then both deduction and abduction can be performed with fire → smoke to infer smoke from fire and fire from smoke.
Generally, we do not wish to prohibit this type of ambiguous inference pattern, as we do not consider it to be undesirable.
Examples of analyses performed using informal reasoning tools
In this section, we present examples of analyses performed using two tools that are typically used by domain experts, namely Wigmore charts [40] and mind maps [23,35]. Based on these examples, we motivate and illustrate the design choices for our IG-formalism in Section 4.
Example of an analysis performed in a Wigmore chart
First, Wigmore charts are considered, which are diagrams familiar to legal experts in which symbols indicating hypotheses and claims are joined by lines representing relations between these hypotheses and claims. Wigmore charts were introduced by John Henry Wigmore [40] and were applied and further developed by Anderson, Schum, Twining and others (e.g. [2,18]), who provided a modernised, more user-friendly version of Wigmore’s charting method. In this section, we consider these modern versions of Wigmore charts, specifically the version adopted by Kadane and Schum [18]. In their charts, each symbol represents a unique claim. As noted by Kadane and Schum [18, p. 88], vertical arcs between nodes in the chart indicate inferences between corresponding claims, where the generalisations used in performing these inferences are not explicitly recorded in the chart. To be able to interpret whether inferences are deductive or abductive, and hence what the antecedents and consequents are of generalisations used in performing the inferences, the evidence in the chart also needs to be considered.
Wigmore chart concerning Sacco’s consciousness of guilt, along with the corresponding key list, adapted from Kadane and Schum [18, pp. 330–331].
An example of a modern Wigmore chart, adapted from Kadane and Schum [18, pp. 330–331], is depicted in Fig. 1, which also serves as our running example. The Wigmore chart concerns parts of an actual legal case, namely the well-known Sacco and Vanzetti case. The case concerns Sacco and Vanzetti, who were convicted for shooting and killing payroll guard Berardelli during a robbery. In this example, we only consider the part of the case concerning Sacco’s consciousness of guilt. During their arrest, Sacco and Vanzetti were armed. According to the two arresting officers, Connolly and Spear, Sacco and Vanzetti made suspicious hand movements, from which the prosecution concluded that they intended to draw their concealed weapons in order to escape their arrest. This suggests that they were conscious of having committed a criminal act.
On the right-hand side of Fig. 1 the corresponding key list is depicted, which indicates for every number in the chart to which claim it corresponds. Claims provided by the defence and prosecution are represented as diamonds and circles in the chart, respectively, where nodes corresponding to the evidence are shaded. Finally, horizontal lines in the Wigmore chart indicate that information needs to be combined to draw a conclusion.
As noted earlier, the generalisations used in performing the indicated inferences are left implicit in the chart. Instead, in their analysis of the case some of the used generalisations are indicated in the text (see e.g. [18, pp. 97–98]). For instance, generalisations used in the inferences from the testimonies are of the general form ‘If a person testifying under oath tells us that event E occurred, then this event (probably, usually, often, etc.) did occur.’ [18, p. 88]. As noted by Kadane and Schum [18, pp. 74–76], in constructing their charts abduction is in some instances performed to generate interim hypotheses between the evidence and the ultimate claim. However, Kadane and Schum do not explicitly indicate which inferences in their charts are abductive and which are deductive.
Lastly, it is important to note that the manner in which claims and links conflict is not precisely specified in Kadane and Schum’s Wigmore charts, as also observed by Bex and colleagues [6] in formalising such Wigmore charts as Pollock-style arguments [27]. For instance, multiple interpretations of the conflicts between the defence’s claims 462 and 465 and the prosecution’s claims 152 and 153 are possible. One possible interpretation is that 462 and 465 indicate support for the negation of claim 153: as Sacco carried his weapon for an innocent reason (either 462 or 465), he intended to surrender his weapon and, therefore, did not intend to use it. Alternatively, 462 and 465 can be considered competing alternative explanations of 152, and hence be interpreted as exceptions to the performed inference step from 152 to 153. Specifically, as Sacco carried his weapon for an innocent reason (462 or 465), this caused him to draw his weapon (152) with the intention of surrendering it.
Example of an analysis performed using a mind mapping tool
Next, we present an example of an analysis performed using a mind mapping tool [23], which is an example of a tool typically used by domain experts, for instance in crime analysis [35]. A mind map usually takes the shape of a diagram in which hypotheses and claims are represented by boxes and underlined text, and undirected edges symbolise relations between these hypotheses and claims. An example is depicted in Fig. 2, which is based on a standard template used by the Dutch police for criminal cases involving the suspicious death of a person. The mind map represents various scenario-elements and the crime analyst uses evidence to support or oppose these elements, indicated in the mind map by plus and minus symbols, respectively. Compared to Wigmore charts, which offer a wide range of symbols and arcs to allow users to be expressive and precise in modelling legal reasoning, mind maps are less precise and are used to obtain an overview of different possible alternative scenarios. In the following example, only supporting evidence is considered, which allows us to focus on the manner in which competing alternative explanations are captured in mind maps.
Example of a partially filled in mind map.
An example of a partially filled in mind map is depicted in Fig. 2. In this example case, a body was found; we are interested in the cause of death of this person. First, high-level hypothesis ‘Murder’ is examined. According to witness testimony (Testimony 1), the person was hit with a hammer (Hammer); however, according to another testimony (Testimony 2), the person was hit with a stone (Stone). By means of plus symbols and undirected edges connecting the evidence to these claims, it is indicated that claims Testimony 1 and Testimony 2 support claims Hammer and Stone, respectively. Hammer and Stone are connected via undirected edges to Hit angular, which indicates that hammers and stones can generally be considered to be angular. In turn, claim Hit angular is connected to the ‘With’ question to indicate that it provides an answer to this question. As an answer to the ‘In which way’ question, it is indicated that the person died because of a head wound (Head wound), which is again supported by the claim that the person was hit with an angular object (Hit angular). An autopsy report (Autopsy) further supports claim Head wound.
Next, high-level hypothesis ‘Accident’ is examined, which provides a competing alternative explanation of Head wound. As an answer to the ‘In which way’ question, it is again indicated that the person died because of a head wound and that this claim is supported by Autopsy; however, in contrast to the answer to this question for high-level hypothesis ‘Murder’, it is indicated that the head wound was caused as the person fell on a table by accident (Fell on table), a claim supported by Testimony 3.
As the edges in a mind map are undirected, it is unclear from the graphical representation alone which types of generalisations and inferences were used in constructing this map. Establishing this with certainty would require directly consulting the domain experts involved in constructing the chart. We note, however, that the reasoning performed in constructing this mind map can be interpreted in at least two possible ways. One interpretation is that the domain expert first (preliminarily) inferred that the person died because of a head wound from the autopsy report via deduction using the evidential generalisation Autopsy → Head wound, and then abductively inferred Hit angular using the causal generalisation Hit angular → Head wound. In turn, Hammer and Stone are abductively inferred from Hit angular using the abstractions Hammer → Hit angular and Stone → Hit angular. These two claims are then competing alternative explanations of Hit angular and are subsequently grounded in evidence, namely via deduction from the testimonies using evidential generalisations Testimony 1 → Hammer and Testimony 2 → Stone. An alternative interpretation is that the mind map was constructed iteratively from the evidence, where from the testimonies the claims Hammer and Stone are inferred via deduction using these evidential generalisations. Claim Hit angular is then inferred modus-ponens style: from the two abstractions and the previously inferred antecedents, the consequent is deductively inferred. In this way, Hammer and Stone are not in competition for Hit angular.
This example illustrates that the types of generalisations and inferences involved in the analysis of a case using a mind mapping tool are typically left implicit. Similarly, the manner in which claims and links conflict is not precisely specified in mind maps: in particular, conflicts between competing alternative explanations are not explicitly indicated in the graph.
The information graph formalism
The examples from Section 3 make it plausible that both deductive and abductive inference are performed by domain experts when carrying out analyses using reasoning tools they are familiar with. In performing such analyses, the generalisations used, as well as the inference types (deduction, abduction), are left implicit. Furthermore, the assumptions of domain experts underlying their analyses are typically not explicitly stated, making these analyses difficult to interpret unambiguously. For current purposes, we wish to provide a precise account of the interplay between the different types of inferences and generalisations that formalises and disambiguates these analyses in a manner that makes the used generalisations explicit. Information graphs (IGs), which we define in Section 4.1, are knowledge representations that explicitly describe generalisations in the graph. In constructing an IG from an analysis performed using a tool, an interpretation step may be required; we provide examples of this interpretation step by discussing possible formalisations of the Wigmore chart of Section 3.1 and the mind map of Section 3.2. In Section 4.2, we define how deductive and abductive inferences can be read from IGs given the evidence, based on our conceptual analysis of reasoning about evidence (Section 2). Compared to our previously proposed IG-formalism [39], in which only causal and evidential generalisations were considered, abstractions and other types of generalisations are now also considered, as well as generalisations that include enabling conditions, where constraints are imposed on the types of inferences that may be performed with these new types of generalisations.
Information graphs
First, the syntax of IGs is defined. Throughout this paper, boldface is used to indicate sets used in the IG-formalism.
(Information graph).
An information graph (IG) is a directed graph (P, A), where P is a set of nodes representing propositions from a propositional language consisting only of literals that is closed under classical negation, where the negation symbol is denoted by ¬. A is a set of (hyper)arcs that divides into three pairwise disjoint subsets G, N and X of generalisation arcs, negation arcs and exception arcs, defined in Definitions 2, 4, and 5, respectively.
For IGs, there is a one-to-one correspondence between nodes and propositions, generalisation arcs and generalisations, exception arcs and exceptions, and negation arcs and negations. Throughout this paper, in the context of IGs, the terms ‘node’ and ‘proposition’, ‘generalisation arc’ and ‘generalisation’, ‘exception arc’ and ‘exception’, and ‘negation arc’ and ‘negation’ are therefore used interchangeably. We write p = ¬q in case p is the negation of q or q is the negation of p. Finally, note that while we currently only consider classical negation, our IG-formalism may be extended in future work to allow for more general notions of conflict such as contrariness (cf. [20]).
(Generalisation arc).
Let (P, A) be an IG. A generalisation arc is a directed (hyper)arc g from a non-empty set of antecedent propositions in P to a consequent p ∈ P, indicating a generalisation. Here, the antecedent propositions are called the tails of g, and p is called the head of g. G divides into four pairwise disjoint subsets of causal generalisation arcs, evidential generalisation arcs, abstraction arcs, and all other types of generalisation arcs, respectively. Causal and evidential generalisation arcs are defeasible; the set of abstraction arcs divides into disjoint subsets of strict and defeasible abstraction arcs, and the set of other types of generalisation arcs divides into disjoint subsets of strict and defeasible arcs. For every generalisation arc g, its set of tails divides into disjoint subsets of propositions representing enabling conditions and actual antecedents of the generalisation, respectively, where for causal and evidential generalisation arcs the set of actual antecedents is non-empty and the set of enablers is possibly non-empty, and for abstraction arcs and other types of generalisation arcs all tails are actual antecedents (i.e. the set of enablers is empty).
Curly brackets are omitted in case a generalisation has a single antecedent. In figures in this paper, generalisation arcs are denoted by solid (hyper)arcs, which are labelled ‘c’ for causal generalisation arcs, ‘e’ for evidential generalisation arcs, and ‘a’ for abstraction arcs, where ‘o’ labels for other types of generalisation arcs are omitted.
In accordance with our assumptions stated in Section 2, causal and evidential generalisations are defeasible and can include enablers. Abstractions and other types of generalisations can either be strict or defeasible. A causal generalisation g: c → e may have an evidential counterpart g′: e → c (see Section 2.3), but only if c is the usual cause of e. Definition 2 does not prohibit the coexistence of a causal generalisation and its evidential counterpart in an IG, and inferences can be read from IGs including both generalisations without yielding anomalous results; hence, both generalisations may be included if considered desirable. However, it should be noted that g and g′ represent the same knowledge, and that care should be taken in for instance modelling exceptions to generalisations (see Definition 5), as an exception to g can also be considered an exception to g′. Ultimately, it is the responsibility of the knowledge engineer in consultation with the domain expert to decide which knowledge to include in the IG and to ensure this knowledge is correctly and consistently represented.
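The syntax of Definitions 1 and 2 can be sketched as simple data structures; the class and helper names below are our own illustration, and '~p' is merely an illustrative convention for the classical negation of p, not the paper's notation:

```python
from dataclasses import dataclass

def neg(p: str) -> str:
    """Classical negation under the illustrative '~' convention."""
    return p[1:] if p.startswith('~') else '~' + p

@dataclass(frozen=True)
class GenArc:
    kind: str                 # 'causal' | 'evidential' | 'abstraction' | 'other'
    actual: frozenset         # actual antecedents (non-empty)
    enablers: frozenset       # enabling conditions (empty unless causal/evidential)
    head: str                 # consequent
    strict: bool = False      # only abstractions and 'other' arcs may be strict

    def tails(self) -> frozenset:
        return self.actual | self.enablers

def negation_arcs(props):
    """A bidirectional negation arc exists between every pair {p, ~p} in P."""
    return {frozenset({p, neg(p)}) for p in props if neg(p) in props}
```

For instance, a causal arc with an enabler, GenArc('causal', frozenset({'fell_on_table'}), frozenset({'no_helmet'}), 'head_wound'), has tails {fell_on_table, no_helmet}, of which only fell_on_table is an actual antecedent.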
In the following example, the Wigmore chart of Section 3.1 is modelled as an IG.
In Fig. 3, an IG is depicted for a possible interpretation of the Wigmore chart of Fig. 1. This interpretation is based on a previous interpretation of this Wigmore chart as a preliminary version of an IG in which only causal and evidential information is considered and the roles of generalisation and inference are not separated [37]. For every claim p in the Wigmore chart, a proposition node p is included in P. As noted by Kadane and Schum [18, p. 88], the generalisations used in the inferences from the testimonies are evidential. As propositions 150, 151, 463, 464, 466 and 470 denote testimonies, the IG includes evidential generalisation arcs {150, 151} → 149; {463, 464} → 462; 466 → 465 and 470 → 469. Here, testimonies 150, 151 and 463, 464 are combined in the antecedents of the first two of these generalisations, respectively, as these sets of propositions concern testimonies to the same claim. As 461 concerns Sacco’s testimony denying 149, proposition ¬149 is included in P and evidential generalisation arc 461 → ¬149 is included as well.
Kadane and Schum do not indicate which (types of) generalisations were used in performing the inferences between propositions 149 and 155. We note that the inferences between 149 and 155 fit a so-called episode scheme for intentional actions [4, p. 64], a story scheme in which someone’s psychological state causes them to form certain goals, which in turn lead to actions that have consequences. In this case, Sacco intended to escape from his arrest (154; goal) as he was conscious of having committed a criminal act (155; psychological state); therefore, we consider 155 a cause of 154. Sacco’s intention to use his weapon (153) can then be considered a sub-goal of 154 and his intention to draw his concealed weapon (152) a further sub-goal of 153. Sacco’s intention to draw his weapon (152) caused Sacco to attempt to put his hand under his overcoat (149; action); therefore, we consider 152 a cause of 149. The IG therefore includes corresponding evidential generalisation arcs 149 → 152; 152 → 153; 153 → 154 and 154 → 155 to denote this knowledge.
Proposition 155 can be considered an abstraction of the claim that Sacco was conscious of having been involved in a robbery and shooting: being involved in a robbery and shooting can generally be considered committing a criminal act. The involved generalisation is defeasible: involvement in a robbery and shooting does not imply that this involvement is of a criminal nature, as it may also imply that the person under consideration is the victim. This claim can in turn be considered a strict abstraction of 156, as at a higher level of abstraction being conscious of having been involved in the specific robbery and shooting that took place in South Braintree can be considered being conscious of having been involved in a robbery and shooting. Committing the specific robbery and shooting can be considered a cause of 156: committing a specific robbery and shooting typically causes a person (in this case Sacco) to be conscious of having been involved in this act. The corresponding generalisation arcs are therefore included among the defeasible abstraction arcs, the strict abstraction arcs and the causal generalisation arcs, respectively. Finally, from 469 (Sacco believed he was being arrested because of his political beliefs), we can conclude that Sacco was not conscious of having been involved in a robbery and shooting. We consider this relation to be defeasible and neither causal nor evidential nor an abstraction, and therefore include the corresponding generalisation arc among the other types of generalisation arcs.
In the following example, the mind map of Section 3.2 is modelled as an IG.
IG corresponding to an interpretation of the Wigmore chart of Fig. 1, where ‘e’ labels denote evidential generalisations, ‘c’ labels denote causal generalisations and ‘a’ labels denote abstractions; negation arcs and exception arcs are also depicted.
IG corresponding to a possible interpretation of the mind map of Fig. 2.
Consider Fig. 4, which depicts an IG for a possible interpretation of the mind map of Fig. 2. The generalisations used in the inferences from the testimonies, as well as from autopsy, are considered to be evidential; therefore, evidential generalisation arcs testimony_1 → hammer, testimony_2 → stone, testimony_3 → fell_on_table and autopsy → head_wound are included. The relation between hammer (stone) and hit_angular is neither causal nor evidential; instead, abstraction arcs hammer → hit_angular and stone → hit_angular are included to express that, at a higher level of abstraction, both hammers and stones can generally be considered angular objects. These generalisations are defeasible as not all hammers and stones are angular. Finally, hit_angular and fell_on_table can both be considered causes of head_wound; therefore, causal generalisation arcs hit_angular → head_wound and fell_on_table → head_wound are included.
The following example illustrates generalisation arcs that include enabling conditions.
Consider generalisation {fell_on_table, no_helmet} → head_wound, which is an adjustment to generalisation fell_on_table → head_wound of Example 20 and states that falling on a table causes a head wound in case you are not wearing a helmet. As in Example 20, proposition fell_on_table expresses a cause for head_wound and hence, fell_on_table is included in the set of actual antecedents of this generalisation. Proposition no_helmet does not express a cause for head_wound and can thus be considered an enabler; therefore, no_helmet is included in the set of enabling conditions. It should be noted that, while no_helmet does not express a cause for the consequent, it still is a necessary condition of the generalisation.
Specific configurations of generalisations express that two propositions are alternative explanations of a common proposition, as captured by Definition 3. The terminology used is illustrated in Fig. 5.
Illustration of the terminology used in Definition 3.
(Alternative explanations).
Let (P, A) be an IG. Then p₁, p₂ ∈ P are alternative explanations of q ∈ P, as indicated by generalisations g and g′ in G, iff one of the following holds:
1. g is an evidential generalisation with consequent p₁ of which q is an actual antecedent and not an enabler, and either:
(a) g′ is an evidential generalisation with consequent p₂ of which q is an actual antecedent and not an enabler, where g′ ≠ g, or;
(b) g′ is a causal generalisation with consequent q of which p₂ is an actual antecedent and not an enabler.
2. g is a causal generalisation with consequent q of which p₁ is an actual antecedent and not an enabler, and either:
(a) g′ is a causal generalisation with consequent q of which p₂ is an actual antecedent and not an enabler, where g′ ≠ g, or;
(b) g′ is an evidential generalisation with consequent p₂ of which q is an actual antecedent and not an enabler.
3. g and g′ are distinct abstractions with the same consequent q, of which p₁ and p₂ are antecedents, respectively, where p₁ ≠ p₂.
Note that cases 1(b) and 2(b) are symmetrical in terms of p₁ and p₂ and the used generalisations; we opt to keep the distinction between these two cases as they simplify the proof of Proposition 1. In case 1(a), q is an actual antecedent and not an enabler of both g and g′; hence, both p₁ and p₂ are actual causes of q. Assuming that g and g′ both have multiple actual antecedents in case 1(a), then p₁ and p₂ are alternative explanations of every proposition that is an actual antecedent of both g and g′. Hence, it is meaningful to define alternative explanations in the context in which generalisations have non-singleton sets of actual antecedents; this similarly holds for the other cases of Definition 3. In case 2(b), p₁ is an actual antecedent and not an enabler of g and thus a cause of q, and q is an actual antecedent and not an enabler of g′ and thus p₂ is a cause of q. In case 2(a), p₁ and p₂ are actual antecedents of g and g′, respectively; hence, both p₁ and p₂ are actual causes of q. Finally, in case 3, p₁ and p₂ are antecedents of g and g′ with the same consequent q, and hence are alternative explanations of q.
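A simplified check for the shared-consequent cases of Definition 3 (two distinct causal generalisations, or two distinct abstractions, with the same consequent) might look as follows; the encoding is our own illustration and the remaining cases of the definition are omitted:

```python
def alternative_explanations(g1, g2):
    """Return pairs of alternative explanations of the shared consequent of
    g1 and g2, covering only the cases in which both generalisations have
    the same consequent and the same kind (a sketch; generalisations are
    (kind, actual_antecedents, head) triples with illustrative names)."""
    (k1, a1, h1), (k2, a2, h2) = g1, g2
    if (k1, a1, h1) != (k2, a2, h2) and h1 == h2 and k1 == k2 \
            and k1 in ('causal', 'abstraction'):
        return {(p1, p2) for p1 in a1 for p2 in a2 if p1 != p2}
    return set()
```

Applied to the mind map interpretation, the two causal generalisations with consequent head_wound yield the pair (hit_angular, fell_on_table), and the two abstractions with consequent hit_angular yield (hammer, stone).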
Consider the IG of Fig. 4. According to condition 2 of Definition 3, hit_angular and fell_on_table are alternative explanations of head_wound as indicated by causal generalisations hit_angular → head_wound and fell_on_table → head_wound. Similarly, according to condition 3 of Definition 3, hammer and stone are alternative explanations of hit_angular as indicated by abstractions hammer → hit_angular and stone → hit_angular.
A negation arc captures a conflict between a proposition and its negation expressed in an IG.
(Negation arc).
Let (P, A) be an IG. A negation arc is a bidirectional arc in N that exists between a pair of propositions p and ¬p iff both p ∈ P and ¬p ∈ P.
Consider the running example. As both 149 and ¬149 are included in the IG of Fig. 3, the negation arc between them is also included in the graph. Similarly, the IG of Fig. 3 includes a negation arc between the claim that Sacco was conscious of having been involved in a robbery and shooting and its negation. As noted in Section 3.1, one possible interpretation of the conflicts between propositions 462, 465 and 153 is that 462 and 465 indicate support for ¬153. Accordingly, generalisations 462 → ¬153 and 465 → ¬153 can be included, as depicted in the adjusted IG of Fig. 6. As these generalisations are defeasible and neither causal nor evidential nor an abstraction, they are included among the other types of generalisation arcs. The negation arc between 153 and ¬153 is then included in N. An alternative interpretation of these conflicts is provided in Example 24.
As defeasible generalisations do not hold universally, exceptional circumstances can be provided under which such a generalisation may not hold; hence, we allow exceptions to defeasible generalisations to be specified in IGs by means of exception arcs.
Adjustment to part of the IG of Fig. 3, where 462 and 465 indicate support for ¬153.
(Exception arc).
Let (P, A) be an IG. An exception arc is a hyperarc in X directed from a proposition p ∈ P to a defeasible generalisation g ∈ G, where p is called an exception to the defeasible generalisation g.
An exception arc directed from p to g indicates that p provides exceptional circumstances under which g may not hold.
Consider the running example. Instead of interpreting the conflicts between propositions 462, 465 and 153 as negations (see Example 23), an alternative interpretation is that 462 and 465 indicate exceptions to the generalisation used in the inference step from 152 to 153. Specifically, 462 and 465 can be considered competing alternative explanations of 152: as Sacco carried his weapon for an innocent reason (462 or 465), this caused him to draw his weapon (152) with the intention of surrendering it. In Fig. 3, these exceptions are indicated by curved hyperarcs from 462 and 465 to this generalisation in X.
Reading inferences from information graphs
We now define how deductive and abductive inferences can be performed with constructed IGs. By itself, a generalisation arc only expresses that the tails together allow us to infer the head in case this generalisation is used in deductive inference, or that the tails together can be inferred from the head in case of abductive inference. Only when considering the available evidence can directionality of inference actually be read from the graph.
(Evidence set).
Let (P, A) be an IG. An evidence set is a subset E ⊆ P for which it holds that for every p ∈ E, ¬p ∉ E.
The restriction that for every p ∈ E it holds that ¬p ∉ E ensures that not both a proposition and its negation are observed.
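This consistency requirement amounts to a simple membership check; a minimal sketch, again using '~' as an illustrative negation marker:

```python
def is_evidence_set(E, P):
    """True iff E is an evidence set for an IG with propositions P:
    E must be a subset of P in which no proposition is observed
    together with its negation (a sketch of Definition 6)."""
    neg = lambda p: p[1:] if p.startswith('~') else '~' + p
    return set(E) <= set(P) and all(neg(p) not in E for p in E)
```

For example, observing both a testimony and its denial (149 and ¬149) would not constitute a valid evidence set.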
In figures in this paper, nodes in P corresponding to elements of E are shaded and all shaded nodes correspond to elements of E. We emphasise that various evidence sets E can be used to establish (different) inferences from the same IG.
In Fig. 1, the evidence consists of the testimonies. In Figs 7 and 8, the IGs of Figs 3 and 6 are again depicted with the nodes corresponding to elements of E shaded.
The IG of Fig. 3, where evidence set E (shaded) and the resulting inference steps are also indicated.
The IG of Fig. 6, where evidence set E (shaded) and the resulting inference steps are also indicated.
We now define when we consider configurations of generalisation arcs and evidence to express deductive and abductive inference.
Deductive inference
First, we specify under which conditions we consider a configuration of generalisation arcs and evidence to express deductive inference, where strict and defeasible deduction are distinguished.
(Deductive inference).
Let (P, A) be an IG, and let E ⊆ P be an evidence set. Let g ∈ G with consequent q ∉ E and tails φ₁, …, φₙ. Then given E, q is deductively inferred from propositions φ₁, …, φₙ using a generalisation g in G iff, for every tail φᵢ of g:
1. φᵢ ∈ E, or;
2. φᵢ is deductively inferred from propositions using a generalisation g′ ∈ G, where if g is evidential and φᵢ is an actual antecedent of g, g′ is not a causal generalisation with consequent φᵢ, or;
3. φᵢ is abductively inferred from a proposition using a causal generalisation or abstraction g′ (see Definition 8).
Here, proposition q is defeasibly deductively inferred from φ₁, …, φₙ iff g is defeasible, and proposition q is strictly deductively inferred from φ₁, …, φₙ iff g is strict.
For ease of reference, the symbols for defeasible and strict inference are annotated with the name of the generalisation used in performing the inference. In accordance with our assumptions stated in Section 2.1, deduction can be performed using all types of generalisations in G, where strict deduction can only be performed using strict abstractions and strict other types of generalisations. The condition q ∉ E ensures that deduction cannot be performed with a generalisation to infer its consequent in case its consequent is already observed. Deduction can only be performed using a generalisation to infer its consequent from its antecedents in case every antecedent φᵢ has been affirmed in that either φᵢ is observed (i.e. φᵢ ∈ E), φᵢ itself is deductively inferred, or φᵢ is abductively inferred. In correspondence with Pearl’s constraint (see Section 2.4.1), we assume in condition 2 that a proposition q cannot be deductively inferred using an evidential generalisation g if at least one of its actual antecedents φᵢ is deductively inferred using a causal generalisation g′ with consequent φᵢ. In this case, q and the actual antecedents of g′ are considered alternative explanations of φᵢ as indicated by g and g′ (Definition 3). Condition 3 of Definition 7 is explained in Section 4.2.3, after abductive inference is defined.
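Setting aside abductive support (condition 3) and the condition-2 restriction, the recursive structure of Definition 7 amounts to forward chaining from the evidence; the following simplified sketch (our own names) illustrates this:

```python
def deduce(gens, E):
    """Naive forward chaining over generalisation arcs, a simplification of
    Definition 7: abductive support (condition 3) and Pearl's constraint in
    condition 2 are omitted. gens: iterable of (tails, head) pairs."""
    derived = set(E)
    changed = True
    while changed:
        changed = False
        for tails, head in gens:
            # the consequent may not already be observed (q not in E), and
            # every tail must be observed or previously inferred
            if head not in E and head not in derived and set(tails) <= derived:
                derived.add(head)
                changed = True
    return derived - set(E)   # the deductively inferred propositions
```

For instance, from the testimonies 150 and 151 the chain 149, 152, 153 of the running example is obtained by iterated application of conditions 1 and 2.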
In the IG of Fig. 7, given E propositions 149, ¬149, 462, 465 and 469 are defeasibly deductively inferred from 150 and 151, 461, 463 and 464, 466, and 470 using the corresponding evidential generalisations, respectively, as these testimonies are included in E (Definition 7, condition 1). Proposition 152 is then defeasibly deductively inferred from 149 using generalisation 149 → 152, as 149 is deductively inferred (Definition 7, condition 2). Propositions 153, 154 and 155 are then iteratively defeasibly deductively inferred using generalisations 152 → 153, 153 → 154 and 154 → 155, respectively. Finally, from 469, the claim that Sacco was not conscious of having been involved in a robbery and shooting is defeasibly deductively inferred using the corresponding generalisation, as 469 is deductively inferred.
The following example illustrates strict deductive inference.
Consider Example 2 from Section 2.1. In this example, strict abstraction arc lung_cancer → cancer is included in the IG. As lung_cancer ∈ E, cancer is strictly deductively inferred from lung_cancer (Definition 7, condition 1).
The next example illustrates the restrictions put on performing deduction in our IG-formalism.
Figure 9a depicts an example of an IG in which q cannot be deductively inferred from p using the depicted generalisation, as q ∈ E. In Fig. 9b, q cannot be deductively inferred using the depicted generalisation, as one of its antecedents is not observed and is neither deductively nor abductively inferred.
In Fig. 9c, Example 8 illustrating Pearl’s constraint is modelled. As smoke_machine ∈ E, smoke is deductively inferred from smoke_machine using causal generalisation smoke_machine → smoke by condition 1 of Definition 7. fire cannot in turn be inferred from smoke using evidential generalisation smoke → fire by condition 2 of Definition 7, as smoke is an actual antecedent of this evidential generalisation and smoke is deductively inferred using the causal generalisation.
Examples of IGs illustrating the restrictions put on performing deduction within our IG-formalism (a–c).
Abductive inference
Next, we specify under which conditions we consider a configuration of generalisation arcs and evidence to express abductive inference.
(Abductive inference).
Let (P, A) be an IG, and let E ⊆ P be an evidence set. Let g be a causal generalisation or abstraction in G with consequent q and tails φ₁, …, φₙ, where no φᵢ ∈ E. Then given E, propositions φ₁, …, φₙ are abductively inferred from q using g iff:
1. q ∈ E, or;
2. q is deductively inferred from propositions using a generalisation g′ in G (see Definition 7), where g′ is not a causal generalisation different from g if g is causal, and g′ is not an abstraction different from g if g is an abstraction, or;
3. q is abductively inferred from a proposition using a causal generalisation or abstraction g′ in G.
In accordance with our assumptions stated in Section 2.2, abduction is defeasible and is modelled using only causal generalisations and abstractions. Following Console and Dupré [12] and Bex [4], we assume that abductive inference can be performed with both strict and defeasible abstractions, where such an inference is always defeasible as it concerns an inference from the more abstract consequent to a more specific antecedent (see Section 2.2). The condition that no φᵢ ∈ E ensures that abduction cannot be performed with a generalisation to infer its antecedents in case at least one of its antecedents is already observed. Furthermore, abductive inference can only be performed using a generalisation to infer its antecedents from its consequent in case the consequent q has been affirmed in that q is either observed (i.e. q ∈ E), deductively inferred, or itself abductively inferred.
In correspondence with Pearl’s constraint (see Section 2.4.1), we assume in condition 2 that propositions cannot be abductively inferred from a proposition q using a causal generalisation g if q is deductively inferred using a different causal generalisation g′. In enforcing this constraint, we do not need to consider whether or not the antecedents of g or g′ include enablers, as illustrated in Example 12 from Section 2.4.1. More specifically, in Definition 2 it is assumed that the sets of actual antecedents of causal generalisations are non-empty; therefore, at least one proposition is an actual antecedent of g and at least one proposition is an actual antecedent of g′, and these propositions are then alternative explanations of q according to Definition 3, which may not be inferred from each other by inferring q as an intermediary step. Similarly, we assume in condition 2 that g′ is not a different abstraction in case g is an abstraction, to account for our constraints on performing deduction and abduction in that order with two abstractions (see Section 2.4.2). In this case, the antecedents of g and g′ are alternative explanations of q as indicated by g and g′ according to case 3 of Definition 3.
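The restrictions in condition 2 can be sketched as a single-step admissibility check, an illustration under our own simplified encoding rather than the full mutual recursion of Definitions 7 and 8:

```python
def can_abduce(g, head_support, E):
    """One abductive step with generalisation g = (kind, antecedents, head).
    head_support records how the consequent was affirmed: 'observed',
    'abduced', or ('deduced', g_used). A simplified sketch of Definition 8;
    all names are illustrative."""
    kind, ants, head = g
    if kind not in ('causal', 'abstraction'):
        return False              # abduction only via causal gens and abstractions
    if set(ants) & set(E):
        return False              # an antecedent is already observed
    if head_support in ('observed', 'abduced'):
        return True               # conditions 1 and 3 (simplified)
    if isinstance(head_support, tuple) and head_support[0] == 'deduced':
        g_used = head_support[1]  # generalisation used to deduce the head
        k2, _, h2 = g_used
        # condition 2: a *different* causal generalisation (Pearl's
        # constraint) or a *different* abstraction with the same consequent
        # blocks the abductive step; mixed kinds remain allowed
        return not (h2 == head and k2 == kind and g_used != g)
    return False
```

Under this sketch, abducing fire from smoke is blocked once smoke was deduced from smoke_machine, and abducing knife from deadly_weapon is blocked once deadly_weapon was deduced from gun, while the mixed smoking/lung_cancer pattern remains allowed.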
In the IG of Fig. 7, given E, proposition is abductively inferred from 155 using , as 155 is deductively inferred (Definition 8, condition 2). In turn, propositions 156 and are iteratively abductively inferred using generalisations and , respectively. Note that although is a strict abstraction, the abductive inference from to 156 is defeasible and not strict; specifically, that Sacco was conscious of having been involved in a robbery and shooting does not allow us to strictly infer that he was conscious of having been involved in the specific robbery and shooting that took place in South Braintree.
Example of an IG illustrating abductive inference with causal generalisations (a); example of an IG illustrating abductive inference with abstractions (b).
In the IG of Fig. 10a, q and are abductively inferred from r using generalisation in by condition 1 of Definition 8, as . Then by condition 3 of Definition 8, and are abductively inferred from q using and , respectively.
The following example further illustrates abductive inference with abstractions.
In Fig. 10b, Example 15 from Section 2.4.2 is modelled as an IG. As smoking, cancer is deductively inferred from smoking using . Propositions lung_cancer and colon_cancer are then abductively inferred from cancer using strict abstractions and , respectively (Definition 8, condition 2). Hence, in this example, a cause (smoking) for an event (cancer) is known, after which this event is inferred and is in turn further specified at a lower level of abstraction (lung_cancer or colon_cancer). As noted in Section 2.4.2, this type of mixed inference using a causal generalisation and abstractions does not lead to undesirable results.
The following examples illustrate that Pearl’s constraint for mixed deductive-abductive inference (see Section 2.4.1), as well as our proposed constraints on performing inference with abstractions (see Section 2.4.2), are adhered to.
An IG illustrating Pearl’s constraint for mixed deductive-abductive inference (a); an IG illustrating our inference constraints for abstractions (b); an IG illustrating mixed abductive-deductive inference (c).
The IG of Fig. 4, where evidence set E (shaded) and resulting inference steps () are also indicated.
In Fig. 11a, Example 9 is modelled as an IG. As smoke_machine, smoke is deductively inferred from smoke_machine using . fire cannot be inferred from smoke, as and smoke is deductively inferred using (Definition 8, condition 2).
In Fig. 11b, Example 13 is modelled as an IG. As gun, deadly_weapon is deductively inferred from gun using . knife cannot in turn be inferred from deadly_weapon, as and deadly_weapon is deductively inferred using (Definition 8, condition 2).
The following example describes the inferences that can be made based on the IG of Fig. 4 corresponding to the mind map example of Section 3.2.
Consider the IG of Fig. 12. Given , head_wound is deductively inferred from autopsy using . Then, hit_angular and fell_on_table are abductively inferred from head_wound using and , respectively (Definition 8, condition 2). head_wound is also deductively inferred from fell_on_table using , as fell_on_table is deductively inferred from using ; the inference type of is, therefore, ambiguous (see Section 2.5). hammer and stone are abductively inferred from hit_angular using and , respectively (Definition 8, condition 3). hit_angular is also deductively inferred from hammer and stone using and , respectively, as hammer is deductively inferred from using and stone is deductively inferred from using . Then, head_wound is deductively inferred from hit_angular using .
Mixed abductive-deductive inference
As is apparent from Definitions 7 and 8, mixed abductive-deductive inference can be performed within our IG-formalism.
In Fig. 11c, Example 7 from Section 2.4 is modelled as an IG. From smoke, fire is abductively inferred using , as smoke. Then heat is deductively inferred (or predicted) from fire using (Definition 7, condition 3).
An argumentation formalism based on information graphs
Based on our IG-formalism from Section 4, we now define an argumentation formalism that allows for both deductive and abductive argumentation. Note that the IG-formalism is not an argumentation formalism, and that no semantics for IGs were defined in Section 4. Instead, we defined how inference can be performed with IGs and we defined different notions of conflict. In the current section, we define an argumentation formalism based on IGs which allows us to assign a semantics to argumentation frameworks constructed on the basis of IGs. More specifically, our approach generates an abstract argumentation framework as in Dung [13], that is, a set of arguments with a binary attack relation, which thus allows arguments based on IGs to be formally evaluated according to Dung’s semantics. We can then study properties of generated AFs; in particular, we prove that Caminada and Amgoud’s [9] postulates are satisfied by instantiations of our formalism, which warrants that instantiations of our argumentation system are soundly defined and implies that anomalous results, such as the issues regarding inconsistency and non-closure identified in [9], are avoided. Our argumentation formalism extends a preliminary version proposed in [38] that was based on a more restricted version of our IG-formalism [39] in which only causal and evidential generalisations without enablers were considered. Moreover, satisfaction of rationality postulates was not proven in that paper.
In Section 5.1, we define arguments on the basis of a provided IG and an evidence set E, which capture sequences of deductive and abductive inference applications as defined in Definitions 7 and 8 starting with elements from E. We then formally prove that arguments constructed on the basis of IGs conform to our inference constraints (Section 2.4). In Section 5.2, we define several types of attacks between arguments based on IGs, which are based on the different types of conflicts defined for our IG-formalism. In Section 5.3 we instantiate Dung’s abstract approach with arguments and attacks based on IGs and provide the definitions of Dung’s argumentation semantics. In Section 5.4, we then prove that rationality postulates [9] are satisfied by instantiations of our formalism.
Arguments
In this section, we define how arguments on the basis of an IG and an evidence set E are constructed. Here, we take inspiration from the definition of an argument as given for the ASPIC+ framework [20]. By remaining close to that framework, we can straightforwardly show that rationality postulates are satisfied for our argumentation formalism based on IGs (see Section 5.4). In what follows, for a given argument, the operator returns all propositions in E used to construct the argument, returns its conclusion, returns all its sub-arguments (including itself), returns its immediate sub-arguments, returns all the generalisations used in constructing the argument, returns the last generalisation used in constructing the argument, and return all the defeasible and strict inferences used in constructing the argument, respectively, and returns the last inference used in constructing the argument. Definition 9 is explained and illustrated in Examples 34 and 35.
(Argument).
Let be an IG, and let be an evidence set. An argument A on the basis of and E is any structure obtainable by applying one or more of the following steps finitely many times, where steps 2 (i.e. step or ) and 3 or vice versa are not subsequently applied using the same generalisation arc :
p if , where: ; ; ; ; ; undefined; ; ; undefined.
if are arguments such that p is defeasibly deductively inferred from using a generalisation according to Definition 7, where it holds that and if g is of the form in and its evidential counterpart is included in , then . For A, it holds that:
; ;
; ;
; ;
;
;
.
if are arguments such that p is strictly deductively inferred from using a generalisation , according to Definition 7, where , , , , and are defined as in step , and where:
;
;
.
if is an argument such that p is abductively inferred from using a generalisation , for some propositions according to Definition 8, where:
; ; ; ; ; ; ; ; .
Note that we overload symbols and to denote arguments while they also denote a defeasible or strict inference. The set of all arguments on the basis of and E is denoted by .
An argument is called strict if ; otherwise, A is called defeasible. An argument is called a premise argument if only step 1 of Definition 9 is applied, deductive if only steps 1, and are applied, abductive if only steps 1 and 3 are applied, and mixed otherwise. The restriction that steps 2 (i.e. step or ) and 3 or vice versa are not subsequently applied using the same generalisation arc ensures that cycles in which two propositions are iteratively deductively and abductively inferred from each other using the same g are avoided in argument construction. Similarly, in case a causal generalisation has an evidential counterpart (see Sections 2.3 and 4.1), the restriction in step ensures that cycles in which c and e are iteratively deductively inferred from each other using and g are avoided. Note that cycles in which c and e are iteratively deductively inferred from each other using g and in that order are already avoided due to the enforcement of Pearl’s constraint in condition 2 of Definition 7.
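The recursive bookkeeping behind Definition 9 and its operators can be sketched as follows. The class below is a toy rendering under our own naming (prem, sub, def_gens, is_strict); it covers only the structural operators, not the applicability conditions of Definitions 7 and 8:

```python
class Argument:
    """Toy rendering of Definition 9's bookkeeping (names are ours); the
    inference conditions of Definitions 7 and 8 are deliberately omitted."""

    def __init__(self, conclusion, gen=None, defeasible=False, subs=()):
        self.conclusion = conclusion    # the argument's conclusion
        self.gen = gen                  # last generalisation used; None for premises
        self.defeasible = defeasible    # whether the last inference is defeasible
        self.subs = list(subs)          # immediate sub-arguments

    def prem(self):
        """Propositions from the evidence set E used to construct the argument."""
        if not self.subs:
            return {self.conclusion}
        return set().union(*(s.prem() for s in self.subs))

    def sub(self):
        """All sub-arguments, including the argument itself."""
        out = [self]
        for s in self.subs:
            out.extend(s.sub())
        return out

    def def_gens(self):
        """Generalisations used in defeasible inferences within the argument."""
        out = {self.gen} if self.defeasible and self.gen else set()
        for s in self.subs:
            out |= s.def_gens()
        return out

    def is_strict(self):
        """An argument is strict iff it uses no defeasible inference."""
        return not self.def_gens()


p = Argument("smoke")                    # premise argument (step 1)
a = Argument("fire", "g1", True, [p])    # defeasible abductive step (step 3)
```

A premise argument is always strict, while any argument using a defeasible inference anywhere in its construction is defeasible.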
The IG of Fig. 7, where arguments and direct attacks (⇢) on the basis of the IG and E are also indicated.
Consider Fig. 13, in which arguments constructed on the basis of the IG of Fig. 7 are indicated. According to step 1 of Definition 9, and are premise arguments. Based on and , defeasible deductive argument is constructed by step of Definition 9, as 149 is defeasibly deductively inferred from 150 and 151 using . Arguments ; ; and similarly are defeasible deductive arguments. Argument is a defeasible mixed argument by step 3 of Definition 9, as is abductively inferred from 155 using . Similarly, arguments and are defeasible mixed arguments. To illustrate the operators used in Definition 9, for , we have that ; ; ; ; ; ; ; ; .
Step 3 of Definition 9 is now illustrated in more detail.
On the basis of the IG of Fig. 10a and , is a premise argument. From , arguments and are constructed by step 3 of Definition 9, as q and are abductively inferred from using causal generalisation . Then again by step 3, and are constructed using and , respectively.
Properties of arguments based on IGs
We now prove a number of formal properties of arguments based on IGs. Lemma 1 states that the conclusions of deductive, abductive, and mixed arguments constructed in our argumentation formalism based on IGs are not observed.
Let be a set of arguments on the basis of IG and evidence set E. Let be a deductive, abductive, or mixed argument. Then .
As A is not a premise argument, step , step or step 3 of Definition 9 is applied last in constructing A. In case step or of Definition 9 is applied last, then such that is deductively inferred using according to Definition 7. Hence, per the restrictions of Definition 7, . In case step 3 of Definition 9 is applied last, then such that is abductively inferred using according to Definition 8. Hence, per the restriction of Definition 8 that . □
In performing inference, care should be taken that no cause for an effect is inferred in case an alternative cause for this effect was already previously inferred (see Section 2.4.1). Similarly, care should be taken that no version of an event at a lower level of abstraction is inferred if an alternative version of this event at a lower level of abstraction was already previously inferred (see Section 2.4.2). In the context of IGs, for , propositions in express causes for the common effect expressed by , for , expresses the usual cause for propositions in , and for , propositions in are at a lower level of abstraction than . Hence, in defining how inferences can be read from IGs, restrictions are put in Definitions 7 and 8 such that our inference constraints (see Section 2.4) are adhered to. We now formally prove that these inference constraints are never violated in constructing sequences of arguments on the basis of IGs.
First, we formally define the inference constraints of Section 2.4 in the context of arguments constructed on the basis of IGs.
(Inference constraint).
Let be a set of arguments on the basis of IG and evidence set E. Let be alternative explanations of as indicated by generalisations (Definition 3). If arguments A and B exist in with , , and , then there does not exist an argument with , .
We now formally prove that this inference constraint is indeed adhered to.
(Adherence to inference constraint).
Let be a set of arguments on the basis of IG and evidence set E. Then adheres to the inference constraint of Definition 10.
Assume that are alternative explanations of as indicated by generalisations and in G, and assume that arguments exist with , , . Then we need to prove that no argument C exists in with and . In constructing argument B, either step , step or step 3 of Definition 9 is applied last, where generalisation is used to infer . Here, cannot be of the form , , (Definition 3, case 1) as in this case antecedent q of is inferred from consequent of , which would be an instance of abductive inference while per the restrictions of Definition 8 abductive inference can only be performed using generalisations in . More specifically, argument B cannot be constructed by applying step , and 3 of Definition 9 last if is of that form. Thus, we only need to consider cases 2 and 3 of Definition 3, where a generalisation , , respectively a generalisation , , is used to construct B, namely by applying step or of Definition 9 last to deductively infer . We now show that for the given options for , no argument C with , can be constructed using .
First, consider case of Definition 3 in which , , , . Then no argument C with , can be constructed using , as in this case abduction would be performed with to infer from q while per the restrictions in condition 2 of Definition 8 abduction cannot be performed with as was previously deductively inferred using . In particular, step 3 of Definition 9 cannot be applied in constructing C using . Furthermore, neither step nor step of Definition 9 can be applied in constructing C using , as these steps specify deductive and not abductive inferences.
Next, consider case of Definition 3 in which , , . Then no argument C with , can be constructed using , as in this case deductive inference would be performed with to infer while per the restrictions in condition 2 of Definition 7 deductive inference cannot be performed with as was previously deductively inferred using . In particular, step of Definition 9 cannot be applied in constructing C using . Furthermore, step of Definition 9 cannot be applied in constructing C using , as this step can only be applied using strict generalisations and , and step 3 cannot be applied in constructing C using , as this step specifies an abductive and not a deductive inference.
Finally, consider case 3 of Definition 3 in which , , , . Then no argument C with , can be constructed using , as in this case abduction would be performed with to infer from q while per the restrictions in condition 2 of Definition 8 abduction cannot be performed with as was previously deductively inferred using . In particular, step 3 of Definition 9 cannot be applied in constructing C using . Furthermore, neither step nor step of Definition 9 can be applied in constructing C using , as these steps specify deductive and not abductive inferences. □
Attack
In this section, several types of attacks between arguments on the basis of IGs are defined. Among the types of attacks that are typically distinguished in structured argumentation (for instance in [20]) are rebuttal, undermining, and undercutting attack. Of these types of attacks, we only consider rebuttal and undercutting attack and not undermining attack (i.e. attack on an argument’s premises [20]), as in IGs we assume that all premises are certain and cannot be attacked (cf. ASPIC+’s axiom premises). We also distinguish a fourth type of attack, namely alternative attack, a concept based on the notion of competing alternative explanations (see Section 2.2) that is inspired by [3,5]. In our argumentation formalism, attacks directly follow from the constructed arguments and the specified exception arcs in an IG. Hence, attacks between arguments do not need to be separately specified by the user.
First, we define the general notion of attack, after which the different types of attacks are defined.
(Attack).
Let be a set of arguments on the basis of IG and evidence set E. Let . Then A attacks B iff A rebuts B, A undercuts B, or A alternative attacks B, as defined in Definitions 12, 13 and 14, respectively.
Rebuttal attack
First, rebuttal attack is defined. Informally, a rebuttal is an attack on the conclusion of an argument for which it holds that the last inference used in constructing the argument is defeasible.
(Rebuttal attack).
Let be a set of arguments on the basis of IG and evidence set E. Let A, B and be arguments in with . Then A rebuts B (on ) iff there exists a negation arc in N and is of the form for some , .
Note that, as it is assumed that is of the form (i.e. is defeasible), it holds that is a deductive, abductive, or mixed argument; hence, by Lemma 1, . Furthermore, while a negation arc expresses a symmetric conflict, our definition of rebuttal attack allows for both symmetric and asymmetric rebuttal, as illustrated by the following example.
Consider the IG of Fig. 13. Let , , be the arguments introduced in Example 34. Let and let . Then rebuts (on ) and rebuts (on ), as , (and hence in N), where and are defeasible inferences. This symmetric rebuttal is indicated in Fig. 13 by means of a bidirectional dashed arc between these propositions. Similarly, let be as introduced in Example 34, and let ; ; . Then rebuts (on ) and rebuts (on ).
Consider again Example 33, in which heat is predicted from fire. Assume that contrary to this prediction we observe that there is no heat (¬heat). Let smoke; fire; heat; heat. Then rebuts (on ), but does not rebut as is not of the form for some , (i.e. is a premise argument).
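The asymmetric rebuttal in this example can be replayed with a standalone sketch of the rebuttal check. The toy encoding below (arguments as dicts with a conclusion, a flag for whether the last inference is defeasible, and immediate sub-arguments; negation arcs as unordered pairs; the '-' prefix for negation) is our own, not the paper's notation:

```python
def subarguments(arg):
    """All sub-arguments of an argument, including the argument itself."""
    yield arg
    for s in arg["subs"]:
        yield from subarguments(s)

def rebuts(a, b, negations):
    """a rebuts b iff a's conclusion negates the conclusion of some
    sub-argument of b whose last inference is defeasible."""
    return any(
        s["defeasible"] and frozenset((a["conc"], s["conc"])) in negations
        for s in subarguments(b)
    )

# Heat is defeasibly predicted from an abduced fire; -heat is observed.
smoke   = {"conc": "smoke", "defeasible": False, "subs": []}
fire    = {"conc": "fire",  "defeasible": True,  "subs": [smoke]}
heat    = {"conc": "heat",  "defeasible": True,  "subs": [fire]}
no_heat = {"conc": "-heat", "defeasible": False, "subs": []}
N = {frozenset(("heat", "-heat"))}
```

Here the attack is asymmetric: the observation argument for -heat rebuts the prediction of heat, but being a premise argument it cannot itself be rebutted.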
Undercutting attack
Next, undercutting attack is considered. Informally, an undercutter attacks a defeasible inference by providing exceptional circumstances under which the inference may not be applicable. In our argumentation formalism based on IGs, undercutting attacks between arguments follow from the specified exception arcs in . Specifically, as an exception arc directed from to specifies an exception to defeasible generalisation g, an argument with undercuts an argument with .
(Undercutting attack).
Let be a set of arguments on the basis of IG and evidence set E. Let with . Then A undercuts B (on ) iff there exists an exception arc such that and .
Undercutting attack is illustrated by the following example.
Consider the IG of Fig. 13. Let , , , , be the arguments introduced in Example 34. Let ; . Then undercuts (on ), as in X and . This direct attack is indicated in Fig. 13 by means of a dashed arc directed from 465 to defeasible inference . As undercutting attack is defined on subarguments, also attacks for . Similarly, let ; ; . Then undercuts (on ), as in X and . Argument then also attacks for .
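Using a similarly minimal encoding, the undercutting check only requires each argument to record the generalisations used in it, together with the exception arcs of the IG. All names below are illustrative assumptions, not taken from the paper:

```python
def undercuts(a, b, exceptions):
    """a undercuts b iff a's conclusion is specified as an exception to some
    defeasible generalisation used in constructing b.  `exceptions` maps an
    exception proposition to the generalisations it attacks."""
    return any(g in b["gens"] for g in exceptions.get(a["conc"], ()))

# Toy IG fragment: defeasible generalisation g1 ("the witness says the event
# occurred, so it occurred"), with witness_lied as an exception to g1.
b = {"conc": "event", "gens": {"g1"}}
a = {"conc": "witness_lied", "gens": set()}
exceptions = {"witness_lied": {"g1"}}
```

Because the check ranges over all generalisations used in b, an undercutter also attacks every argument that has b as a sub-argument, mirroring the propagation in the example above.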
Alternative attack
Lastly, alternative attack is defined. Arguments are involved in alternative attack iff their abductively inferred conclusions are competing alternative explanations (see Section 2.2).
(Alternative attack).
Let be a set of arguments on the basis of IG and evidence set E. Let be alternative explanations of as indicated by generalisations g and in G, where either (Definition 3, case ) or (Definition 3, case 3). Let with . Then A alternative attacks B (on ) iff there exists an argument such that and are abductively inferred from using generalisations g and , respectively.
Note that A alternative attacks B on only if is an abductive inference, and hence only if the last inference used in constructing is defeasible. Furthermore, unlike direct rebuttal attack, which can be either symmetric or asymmetric, direct alternative attack is always symmetric in that A alternative attacks B on B iff B alternative attacks A on A.
Under the conditions set out in Definition 14, arguments for constructed from C via abductive inference using g are involved in alternative attack with for constructed from C via abductive inference using . We do not consider arguments for to be in competition with arguments for , as enablers of causal generalisations do not express alternative causes for the consequent. Arguments (as well as ) are not involved in alternative attack among themselves, in accordance with our assumption that the antecedents of a causal generalisation or abstraction are not in competition. Finally, in case and , then arguments are not involved in alternative attack with , as the actual antecedents of g express causes for the effect expressed by the consequent but the tails of are not alternative explanations of the consequent; instead, propositions in are at a lower level of abstraction than .
Consider the IG of Fig. 12. Given E, arguments autopsy; head_wound; hit_angular; fell_on_table; hammer; and stone are constructed. Here, hit_angular and fell_on_table are abductively inferred from head_wound using and , respectively, and hammer and stone are abductively inferred from hit_angular using and , respectively. Then alternative attacks (on ) and alternative attacks (on ), as hit_angular and fell_on_table are alternative explanations of head_wound as indicated by and in (Definition 3, case ). As and , also alternative attacks and (on ). Finally, alternative attacks (on ) and alternative attacks (on ), as hammer and stone are alternative explanations of hit_angular as indicated by and in (Definition 3, case 3).
Consider Example 12 from Section 2.4.1. Assume that in addition to generalisations and , evidential generalisation see_fire → fire is provided. Given , arguments see_fire; fire; torch; match; and oxygen are constructed. Then and are involved in alternative attack, as torch and match are alternative explanations of fire as indicated by and in (Definition 3, case ), where torch and match are abductively inferred from fire using and , respectively. is not involved in alternative attack with , as oxygen.
Consider Fig. 10a. Let , , be as defined in Example 35. Then and are not involved in alternative attack, as and are abductively inferred from using the same generalisation ; specifically, in case of Definition 3 it is assumed that , and hence and q are not alternative explanations of r by that definition.
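The core of these examples can be replayed with a simplified check for alternative attack. The sketch below records, for each abductive argument, the proposition it explains and the generalisation used, and treats any two conclusions abduced from the same proposition via different generalisations as competing; it therefore omits the enabler and abstraction subtleties discussed above, and its encoding and names are our own:

```python
def alternative_attacks(a, b):
    """Symmetric attack between arguments whose conclusions are competing
    abductive explanations of the same proposition (simplified Definition 14:
    enablers and the causal/abstraction distinction are not modelled)."""
    return (
        a["abduced_from"] is not None
        and a["abduced_from"] == b["abduced_from"]   # same explained proposition
        and a["gen"] != b["gen"]                     # via different generalisations
    )

# Mind-map example: two abduced explanations of the head wound, plus a
# deductive argument that is not in competition with either.
hit  = {"conc": "hit_angular",   "abduced_from": "head_wound", "gen": "g2"}
fell = {"conc": "fell_on_table", "abduced_from": "head_wound", "gen": "g3"}
ded  = {"conc": "head_wound",    "abduced_from": None,         "gen": "g1"}
```

As expected from Definition 14, the attack is symmetric between the two abduced alternatives and absent for the deductively obtained argument.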
Finally, note that a causal generalisation may be replaced by an evidential generalisation if is the usual cause of e, in which case abductive inference with can be encoded as deductive inference with (see Section 2.3). Considering the case in which only and not is included in IG and additional causal generalisation is provided, then arguments , , are constructed upon observing e, where and are involved in alternative attack according to Definition 14. However, in case only and are included in and not , then arguments , , are constructed, where and are not involved in alternative attack as . Hence, if the knowledge engineer considers and to be competing alternative explanations of e, then the involved generalisations should be modelled as causal generalisations in order to achieve alternative attack among constructed arguments. Alternatively, can be interpreted as an undercutter of as it provides an exception to the performed inference (see also [5, p. 15]). We reiterate that it is the responsibility of the knowledge engineer in consultation with the domain expert to decide which knowledge (including conflicts) to represent in an IG and to ensure this knowledge is modelled correctly (see also Section 4.1).
Argument evaluation
In this section, we provide Dung’s definitions for argumentation semantics [13] and illustrate these definitions for our running example.
First, we instantiate Dung’s abstract approach with arguments and attacks based on IGs.
(Argumentation framework).
Let be an IG, and let be an evidence set. An argumentation framework (AF) defined by and E is a pair , where is the set of all arguments on the basis of and E as defined by Definition 9, and where iff and A attacks B (see Definition 11).
An AF can be represented as a directed graph in which arguments are represented by circles and attacks are indicated by solid arcs (→); an example of an AF is depicted in Fig. 14.
Given an AF, we can use any semantics for AFs as defined in [13] for determining the dialectical status of arguments (cf. [20]). The theory of AFs is built around the notion of an extension, which is a set of arguments that is internally coherent and defends itself against attack.
(Dung extensions).
Let be an AF defined by IG and evidence set E.
A set of arguments is conflict-free if there do not exist such that .
An argument is acceptable with respect to some set of arguments iff for all arguments B such that there exists an argument such that .
A conflict-free set of arguments is an admissible extension iff every argument is acceptable with respect to .
An admissible extension is a complete extension iff whenever A is acceptable with respect to ; is the grounded extension iff is the set inclusion minimal complete extension; is a preferred extension iff is a set inclusion maximal complete extension; and is a stable extension iff it is preferred and such that .
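The grounded extension can be computed by iterating the characteristic function F(S) = {A | A is acceptable with respect to S} from the empty set up to its least fixed point. The sketch below works on a bare AF given as a set of arguments and a set of (attacker, target) pairs; it is an illustration of Dung's construction, not code from the paper:

```python
def acceptable(a, S, attacks):
    """a is acceptable w.r.t. S iff every attacker of a is attacked by
    some member of S."""
    attackers = {x for (x, y) in attacks if y == a}
    return all(any((d, x) in attacks for d in S) for x in attackers)

def grounded(args, attacks):
    """Least fixed point of the characteristic function F."""
    S = set()
    while True:
        nxt = {a for a in args if acceptable(a, S, attacks)}
        if nxt == S:
            return S
        S = nxt
```

On the AF with attacks a → b and b → c, the grounded extension is {a, c}: a is unattacked and reinstates c; on a two-cycle a ⇄ b with no other arguments, it is empty.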
The acceptability of arguments in abstract argumentation frameworks can then be evaluated by establishing whether a given argument is a member of the various extensions. Arguments are then assigned a dialectical status that can either be ‘justified’, ‘overruled’, or ‘defensible’, where informally an argument is justified if it survived the competition, overruled if it did not survive the competition, and defensible if it is involved in a tie.
(Justified, overruled and defensible arguments, adapted from [31]).
Let be an argumentation framework.
An argument is (i) justified under grounded semantics iff it is a member of the grounded extension, (ii) overruled under grounded semantics iff it is not justified under grounded semantics and it is attacked by an argument that is justified under grounded semantics, or (iii) defensible under grounded semantics iff it is neither justified nor overruled under grounded semantics.
Let . An argument is (i) justified under T semantics iff it is a member of all T extensions, (ii) overruled under T semantics iff it is not a member of any T extension, or (iii) defensible under T semantics iff it is a member of some but not all T extensions.
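For complete, preferred, and stable semantics, the status assignment above can be sketched directly over the set of T extensions; the grounded-semantics variant, which additionally inspects attacks by justified arguments, is not covered by this sketch, and the argument names below are hypothetical:

```python
def status(a, extensions):
    """Justified: member of all extensions; overruled: member of none;
    defensible: member of some but not all.  (Covers T semantics only,
    not the grounded-semantics variant.)"""
    member = [a in e for e in extensions]
    if all(member):
        return "justified"
    if not any(member):
        return "overruled"
    return "defensible"

# Two preferred extensions of a hypothetical AF.
exts = [{"A1", "A3"}, {"A2", "A3"}]
```

Here A3 survives in every extension and is justified, A1 and A2 are involved in a tie and are defensible, and any argument in no extension is overruled.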
We now illustrate the evaluation of arguments based on IGs through our running example.
Consider the IG of Fig. 13. To prevent this example from becoming too involved, we consider the following subset of arguments and binary attack relation over (see Examples 34, 36 and 37). The AF is visualised in Fig. 14. The complete extensions of are:
Under complete semantics, , , , , , , , , are justified as they are members of all complete extensions, is overruled as it is attacked by a justified argument, and , and are defensible. For the other semantics, the same statuses are assigned; for grounded semantics, this is the case as is the set inclusion minimal complete extension. Furthermore, note that and are set inclusion maximal complete extensions for which it holds that such that for ; hence, and are preferred and stable extensions.
Dung’s abstract argumentation approach has been extended with new elements, for instance by adding support relations to abstract argumentation frameworks (e.g. [10]) or by adding preference relations (e.g. so-called preference-based argumentation frameworks, or PAFs [1]), probabilities (see e.g. [15] for an overview), or weights [14] to AFs; a more complete overview is provided in [30]. We opt for the approach introduced by Dung for the evaluation of arguments as it is a well-studied and widely accepted approach in the field of computational argumentation. Moreover, the relations between Dung’s fully abstract approach and formalisms for structured argumentation that are at an intermediate level of abstraction between concrete instantiating logics and Dung’s approach, such as [20] and assumption-based argumentation (ABA) [7], have been previously investigated. In our IG-formalism, we have currently opted not to account for preferences, as these are typically not indicated in tools domain experts use. As the components of our argumentation formalism based on IGs are directly defined based on the elements that are accounted for in our IG-formalism, preferences are currently not accounted for in our argumentation formalism. As shown in work on structured argumentation with preferences (e.g. [20]), the structure of arguments is crucial in determining how preferences must be applied to attacks and one should be cautious in extending AFs with additional elements without taking the structure of arguments into account. There is some work on the relations between support relations in abstract argumentation frameworks and those at the inference level [29]. Relations between our proposed argumentation formalism and extended AFs such as [10] may be investigated in future research.
Satisfying rationality postulates
Caminada and Amgoud [9] studied rule-based argumentation systems and identified conditions under which unintuitive and undesirable results are obtained upon performing inference. They then defined principles, called rationality postulates, that can be used to judge the quality of a given rule-based argumentation system. More specifically, so-called consistency and closure postulates were formulated for systems allowing for strict and defeasible inferences. Since these postulates are widely accepted as important desiderata for structured argumentation formalisms, we prove in this section that these postulates are satisfied by instantiations of our argumentation formalism based on IGs.
Comparison of our argumentation formalism based on IGs to the ASPIC+ framework
In proving satisfaction of the rationality postulates of [9], we follow Modgil and Prakken [20], who proved satisfaction of these postulates for the ASPIC+ framework. As noted earlier, in defining our argumentation formalism based on IGs we were inspired by the definitions of argument and attack as given in [20]. In Definition 9, we defined how arguments on the basis of an IG and an evidence set E are constructed. In step of Definition 9, it is specified that an argument A with can be constructed from arguments if p is defeasibly deductively inferred from according to Definition 7 using a generalisation in . Hence, in terms of the terminology used in the ASPIC+ framework, generalisations in can be interpreted as domain-specific defeasible inference rules2
For details on using ASPIC+ to model domain-specific defeasible and strict inference rules, the reader is referred to [21].
in ASPIC+ that are applied when constructing arguments. Similarly, in step of Definition 9 it is specified that an argument A with can be constructed from if p is strictly deductively inferred from according to Definition 7 using a generalisation in . Hence, generalisations in can be interpreted as domain-specific strict inference rules in ASPIC+. Finally, in step 3 it is specified that an argument A with can be constructed from an argument if p is abductively inferred from according to Definition 8 using a , for some propositions . Therefore, besides specifying the aforementioned domain-specific defeasible and strict deduction rules, generalisations in also specify domain-specific abduction rules in ASPIC+, namely for every , a rule can be specified that states that can be defeasibly inferred from q.
Considering the different types of attacks that are defined in Section 5.2, rebuttal as defined in Section 5.2.1 is identical to rebuttal as defined for a special case of ASPIC+, namely one in which conflict is based on the standard classical notion of negation. Undercutting as defined in Section 5.2.2 is a special case of undercutting as defined for ASPIC+, as we only consider undercutters of inferences in case an exception is provided to a defeasible generalisation used in an inference step. Thus, of the types of attacks that are considered in our argumentation formalism, only alternative attack is not accounted for in ASPIC+. Furthermore, in comparison to our argumentation formalism, Modgil and Prakken do not impose any additional restrictions on argument construction. Hence, to prove that instantiations of our argumentation formalism based on IGs satisfy rationality postulates, in Section 5.4.3 we focus on showing how alternative attack and the additional restrictions that are imposed on argument construction in our argumentation formalism can be taken into account in the results and proofs provided in [20].
Additional definitions and assumptions
Following Modgil and Prakken [20], we introduce the following definitions. First, we define what it means for a set of propositions to be closed under strict generalisations.
(Closure under strict generalisations).
Let an IG with generalisations G be given and let S be a set of propositions. Then the closure of S under strict generalisations is the smallest set containing S and the consequent of any strict generalisation in G whose antecedents are all in this closure.
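The closure just defined can be computed as a fixpoint. The following is a minimal sketch, assuming a strict generalisation is represented as a pair of an antecedent set and a consequent; the function name is illustrative.

```python
def strict_closure(S, strict_gens):
    """Fixpoint computation of the closure of S under strict generalisations:
    repeatedly add the consequent of any strict generalisation whose
    antecedents are all already in the set, until nothing changes."""
    closed = set(S)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in strict_gens:
            if set(antecedents) <= closed and consequent not in closed:
                closed.add(consequent)
                changed = True
    return closed
```

Since each iteration only adds propositions and the set of consequents is finite, the loop terminates with the smallest closed superset of S.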
Next, the terms directly consistent and indirectly consistent set are defined.
(Directly consistent set).
Let an IG with negation arcs N be given and let S be a set of propositions. Then S is directly consistent iff there are no p, q ∈ S such that a negation arc between p and q is in N.
A set is indirectly consistent if its closure under strict generalisations is directly consistent.
(Indirectly consistent set).
Let an IG be given and let S be a set of propositions. Then S is indirectly consistent iff the closure of S under strict generalisations is directly consistent.
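Direct and indirect consistency can be checked along these lines. This is a sketch only: the syntactic `~`-prefix negation stands in for the IG’s negation arcs, and the closure computation is inlined to keep the fragment self-contained.

```python
def neg(p):
    # Illustrative syntactic negation; the IG encodes negation via
    # negation arcs, which this sketch approximates with a '~' prefix.
    return p[1:] if p.startswith("~") else "~" + p

def directly_consistent(S):
    """No proposition in S occurs together with its negation."""
    return not any(neg(p) in S for p in S)

def indirectly_consistent(S, strict_gens):
    """S is indirectly consistent iff its closure under strict
    generalisations is directly consistent."""
    closed = set(S)
    changed = True
    while changed:
        changed = False
        for antecedents, consequent in strict_gens:
            if set(antecedents) <= closed and consequent not in closed:
                closed.add(consequent)
                changed = True
    return directly_consistent(closed)
```

A set may thus be directly consistent yet indirectly inconsistent: for example, {a, b} with the strict generalisation a → ~b.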
As noted by Caminada and Amgoud [9], one should search for ways to alter or constrain one’s argumentation formalism in such a way that rationality postulates are satisfied. Accordingly, following Modgil and Prakken [20] we assume that IGs and evidence sets satisfy a number of properties. Similar to ASPIC+, we leave the user free to make choices as to the strict and defeasible generalisations to include in G and the observations to include in E; however, some care needs to be taken in making these choices to ensure that the result of argumentation is guaranteed to be well-behaved. Specifically, to ensure rationality postulates are satisfied, we assume that evidence sets E are indirectly consistent (referred to as the axiom consistency assumption), and we assume that G is closed under transposition. Note that by definition every evidence set is a directly consistent set, as it is assumed in Definition 6 that no two observations in E negate each other. Furthermore, all examples of IGs provided in this paper are axiom consistent, as they do not include strict generalisations with which directly inconsistent conclusions can be derived from E. Closure under transposition is one of the solutions proposed by Caminada and Amgoud to ‘repair’ an argumentation system to ensure rationality postulates are satisfied [9, p. 16], as it can help generate rules needed to obtain an intuitive outcome.
(Closure under transposition).
Let an IG with generalisations G be given. A strict generalisation g′ is a transposition of a strict generalisation g: p1, …, pn → q in G iff g′ is of the form p1, …, pi−1, ¬q, pi+1, …, pn → ¬pi for some 1 ≤ i ≤ n. We say that G is closed under transposition iff for all strict generalisations g in G, the transpositions of g are also in G.
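Generating the transpositions of a strict generalisation, and checking closure of a rule set, can be sketched as follows. The `~`-prefix negation and the tuple encoding of generalisations are illustrative assumptions, not the paper’s notation.

```python
def neg(p):
    # Illustrative syntactic negation standing in for the IG's negation arcs.
    return p[1:] if p.startswith("~") else "~" + p

def transpositions(antecedents, consequent):
    """All transpositions of a strict generalisation p1,...,pn -> q:
    for each pi, replace pi in the antecedents by the negation of q
    and conclude the negation of pi."""
    result = set()
    for p in antecedents:
        new_ants = frozenset({neg(consequent)} | (set(antecedents) - {p}))
        result.add((new_ants, neg(p)))
    return result

def closed_under_transposition(strict_gens):
    """Check that every transposition of every strict generalisation
    is itself among the strict generalisations."""
    gens = {(frozenset(a), c) for a, c in strict_gens}
    return all(t in gens for a, c in gens for t in transpositions(a, c))
```

On the bachelor/married example discussed below, the two-rule set of Fig. 15a fails this check, while the four-rule set of Fig. 15b passes it.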
An AF defined by an IG that is axiom consistent and whose set of generalisations G is closed under transposition is said to be well defined. In the remainder of this section, we assume that any given AF is well defined. Note that most examples of IGs provided in this paper only include defeasible generalisations and not strict generalisations, and thus the AFs defined by these IGs are well defined. The following example, adapted from Caminada and Amgoud [9], illustrates closure under transposition and how ensuring it can help repair an argumentation system.
Fig. 15. Example of an IG for which G is not closed under transposition (a); adjustment to this IG, in which additional generalisations are included such that G is closed under transposition (b).
In the IG depicted in Fig. 15a, strict abstractions bachelor → ¬has_wife and married → has_wife are included. G is not closed under transposition, as the generalisations has_wife → ¬bachelor and ¬has_wife → ¬married are not included. The arguments for ¬has_wife and for has_wife constructed on the basis of this IG have strict top inferences, as only the strict deduction step of Definition 9 can be applied in constructing them from the arguments for bachelor and for married using bachelor → ¬has_wife and married → has_wife, respectively. Note that, as these top inferences are strict, the two arguments are not involved in rebuttal. In fact, for the AF corresponding to this IG no attacks exist at all, and hence under any semantics both arguments are justified. Thus, the contradictory propositions has_wife and ¬has_wife are both justified at the same time, which is clearly undesirable and among other things violates the direct consistency postulate (see Theorem 1). In the IG depicted in Fig. 15b, G is closed under transposition as the additional generalisations has_wife → ¬bachelor and ¬has_wife → ¬married are now included. In the corresponding AF, the argument applying has_wife → ¬bachelor directly rebuts the argument for bachelor, and the argument applying ¬has_wife → ¬married directly rebuts the argument for married, as the inferences concluding bachelor and married are defeasible. These arguments then also indirectly rebut the arguments for ¬has_wife and for has_wife (on their sub-arguments for bachelor and for married). Therefore, for this AF the more intuitive outcome is obtained that the arguments for has_wife and ¬has_wife cannot both be in the same extension at the same time.
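The evaluation step in this example can be made concrete with a small grounded-semantics computation over an abstract AF. This is a sketch of standard Dung machinery (the least fixpoint of the characteristic function); the argument names in the usage below are illustrative, not the paper’s.

```python
def grounded_extension(arguments, attacks):
    """Least fixpoint of Dung's characteristic function: starting from the
    empty set, repeatedly add every argument all of whose attackers are
    attacked by the current set."""
    attacks = set(attacks)

    def acceptable(a, E):
        attacked_by_E = {y for (x, y) in attacks if x in E}
        return all(b in attacked_by_E for (b, target) in attacks if target == a)

    E = set()
    while True:
        new = {a for a in arguments if acceptable(a, E)}
        if new == E:
            return E
        E = new
```

With mutual rebuttal, as between the arguments for has_wife and ¬has_wife in the repaired IG of Fig. 15b, neither argument ends up in the grounded extension, matching the intuitive outcome described above.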
Lastly, the following definitions introduce some terminology used in the results below. Following Modgil and Prakken [22], we define strict continuations in a slightly different way than in [20]; as noted in [22], this does not affect the proofs stated in [20].
(Strict continuations).
Let an AF defined by an IG and evidence set E be given. The set of strict continuations of a set of arguments is the smallest set satisfying the following conditions:
Any argument A is a strict continuation of {A}.
If B1, …, Bn are arguments and X1, …, Xn are sets of arguments such that for every 1 ≤ i ≤ n, Bi is a strict continuation of Xi, T is a (possibly empty) set of strict arguments, and g is a strict generalisation in G, then the argument constructed from B1, …, Bn and the arguments in T using g by applying the strict deduction step of Definition 9 is a strict continuation of X1 ∪ … ∪ Xn.
The maximal fallible sub-arguments of an argument B are those with the ‘last’ defeasible inferences in B. That is, they are the maximal sub-arguments of B on which B can be attacked.
(Maximal fallible sub-arguments).
Let an AF defined by an IG and evidence set E be given and let B be an argument. The set of maximal fallible sub-arguments of B is defined such that for any sub-argument B′ of B, B′ is a maximal fallible sub-argument of B iff:
1. The top inference of B′ is defeasible, and;
2. There is no sub-argument B″ of B such that B″ ≠ B, B′ is a sub-argument of B″, and B″ satisfies condition 1.
Proofs
We prove satisfaction of Caminada and Amgoud’s consistency and closure postulates for complete semantics, which implies satisfaction of these postulates for grounded, preferred, and stable semantics. Caminada and Amgoud [9] also propose postulates for the intersection of extensions and their conclusion sets, but since their satisfaction directly follows from satisfaction of the postulates for individual extensions, these postulates are not considered separately.
First, a number of intermediate properties are proven. The intermediate result stated in Lemma 2 is identical to Lemma 37 of Modgil and Prakken [20], namely that any strict continuation B of a set of arguments {A1, …, An} is acceptable with respect to a set of arguments S if all Ai are acceptable with respect to S. The proof is similar to that of Lemma 37 of [20], except that alternative attack is now also considered.
Let an AF defined by an IG and evidence set E be given. Let B be a strict continuation of {A1, …, An}, and for 1 ≤ i ≤ n, let Ai be acceptable with respect to a set of arguments S. Then B is acceptable with respect to S.
Let A be any argument such that A attacks B. By Definition 11, A attacks B iff A rebuts B (on B′), A undercuts B (on B′), or A alternative attacks B (on B′) for some sub-argument B′ of B (see Definitions 12, 13, and 14). Here, it holds that the top inference of B′ is defeasible; more specifically:
By Definition 12, A rebuts B (on B′) only if B′ has a defeasible top inference, and hence the top inference of B′ is defeasible, and;
By Definition 13, A undercuts B (on B′) iff there exists an exception arc providing an exception to the generalisation used in the top inference of B′. Hence, in constructing B′ the strict deduction step cannot have been applied last, as this step can only be applied with strict generalisations, to which no exceptions are provided. Therefore, the defeasible deduction step or step 3 of Definition 9 is applied last in constructing B′. Thus, the last inference used in constructing B′ is a defeasible deductive inference using a defeasible generalisation (the defeasible deduction step of Definition 9) or an abductive inference (step 3 of Definition 9), and hence the top inference of B′ is defeasible, and;
By Definition 14, A alternative attacks B (on B′) iff the top inference of B′ is an abductive inference, and hence the top inference of B′ is defeasible.
Hence, by the definition of strict continuations (Definition 22), it must be that A attacks B iff A attacks Ai for some (possibly more than one) Ai ∈ {A1, …, An}. Specifically, if A does not undercut, rebut or alternative attack some Ai, then this contradicts that A attacks B. Thus, we have shown that if A attacks B, then A attacks Ai for some i. By assumption, Ai is acceptable with respect to S, thus there exists a C ∈ S such that C attacks A. Thus, B is acceptable with respect to S. □
The intermediate result stated in Lemma 3 is similar to Proposition 8 of Modgil and Prakken [20]. Compared to Proposition 8 of [20], in which no assumptions are made regarding A, we now assume that A is defeasible with a strict top inference or that A is strict, as these are the only cases needed in our proof of Theorem 1. As Modgil and Prakken do not impose any restrictions on argument construction in their formalism, a result proven by Caminada and Amgoud [9] (i.e. Lemma 6 of [9]) can be directly used to complete their proof. Below, we show that the restrictions that are imposed on argument construction in our argumentation formalism based on IGs do not restrict the construction of strict continuations, and hence that the proof can similarly be completed.
Let an AF defined by an IG and evidence set E be given. Let A and B be arguments such that B is defeasible and the conclusion of A negates the conclusion of B. Let A be strict, or let A be defeasible with a strict top inference. Then for every maximal fallible sub-argument B′ of B, there exists a strict continuation A′ of {A} together with the maximal fallible sub-arguments of B other than B′, such that A′ rebuts B on B′.
Let A be strict or let A be defeasible with a strict top inference, and let B be defeasible with maximal fallible sub-argument B′. First, note that according to Definition 22 any strict continuation of a given set of arguments either (1) is A itself if the set of arguments under consideration is {A} (Definition 22, condition 1), or (2) is constructed by applying the strict deduction step of Definition 9 one or more (but finitely many) times (Definition 22, condition 2). As restrictions are imposed on argument construction in our argumentation formalism based on IGs, we first show that in constructing any strict continuation of {A} and the maximal fallible sub-arguments of B other than B′, the strict deduction step of Definition 9 can be applied without restrictions.
Generally, in applying the strict deduction step of Definition 9 an argument C with conclusion p is constructed from arguments C1, …, Cn by strictly deductively inferring p from their conclusions according to Definition 7 using a strict generalisation in G. In Definition 7 no constraints are imposed on performing deduction with strict generalisations; in particular, the only constraint that is imposed is in condition 2 of this definition, where constraints are imposed on performing deduction with defeasible generalisations (i.e. Pearl’s constraint). The only other case in which the strict deduction step of Definition 9 cannot be applied in constructing an argument C using a strict generalisation g is in case the same g was already used in the previous construction step to construct an argument Ci, namely by applying step 3 of Definition 9. Now again consider argument A. By assumption, A is strict or the top inference of A is strict, and therefore step 3 of Definition 9, which specifies a defeasible inference, could not have been applied last in constructing A; therefore, no restrictions are imposed on constructing strict continuations of {A} and the maximal fallible sub-arguments of B other than B′ in our argumentation formalism. By assumption, the AF is well defined and G is therefore closed under transposition; hence, by straightforward generalisation of Lemma 6 in [9] one can construct a strict continuation A′ that continues with strict inferences and that concludes the negation of the conclusion of B′. Since B′ is a maximal fallible sub-argument of B, B′ has a defeasible top inference and therefore A′ rebuts B′. But then A′ also rebuts B. □
The intermediate result stated in Lemma 4 is identical to Lemma 38 of [20].
Let an AF defined by an IG and evidence set E be given. Let argument A be acceptable w.r.t. admissible extension S, and let B ∈ S ∪ {A}. Then neither A attacks B nor B attacks A.
Suppose for contradiction that: (1) A attacks some B ∈ S ∪ {A}. As B ∈ S ∪ {A}, it follows that B is acceptable w.r.t. S, as either B = A, which is acceptable w.r.t. S by assumption, or B is an element of admissible extension S. Hence there exists a C ∈ S such that C attacks A. Then, as A is acceptable w.r.t. S, there exists a D ∈ S such that D attacks C, contradicting that S is conflict-free; (2) some B ∈ S ∪ {A} attacks A, where by case (1) we may assume B ∈ S. As A is acceptable w.r.t. S, there exists a C ∈ S such that C attacks B, contradicting that S is conflict-free. □
The result stated in Lemma 5 is identical to Lemma 35-2 of Modgil and Prakken [20], namely that an argument A attacks an argument B iff A attacks some sub-argument of B. Compared to Lemma 35-2 of [20], alternative attack is now also considered in the proof.
Let an AF defined by an IG and evidence set E be given and let A and B be arguments. Then A attacks B iff A attacks B′ for some sub-argument B′ of B.
By Definition 11, A attacks B iff A rebuts B (on B′), A undercuts B (on B′), or A alternative attacks B (on B′) for some sub-argument B′ of B (see Definitions 12, 13, and 14); hence, A attacks B iff A attacks some sub-argument B′ of B. □
The intermediate result stated in Lemma 6 is identical to Proposition 10 of [20].
Let an AF defined by an IG and evidence set E be given. Let argument A be acceptable with respect to admissible extension S. Then S ∪ {A} is conflict-free.
We need to show that there do not exist arguments C, D ∈ S ∪ {A} such that C attacks D. As S is an admissible extension, S is conflict-free: hence, there do not exist C, D ∈ S such that C attacks D. Thus, we need to show that A does not attack A, and that neither A attacks B nor B attacks A for all B ∈ S. As by assumption A is acceptable with respect to S, this follows directly from Lemma 4. □
Theorem 1, corresponding to the direct consistency postulate, states that the conclusions of arguments in an admissible extension (and so by implication in a complete extension) are directly consistent. The conclusions of arguments in an extension should not be contradictory, as this leads to what Caminada and Amgoud call ‘absurdities’ [9, p. 15] in that two contradictory statements can then be justified at the same time.
(Direct consistency).
Let an AF defined by an IG and evidence set E be given. Then for all admissible extensions S of the AF it holds that the set of conclusions of arguments in S is directly consistent.
Let S be an admissible extension of the AF and let A and B be arguments in S. We show that if the conclusion of A negates the conclusion of B (i.e. the set of conclusions of arguments in S is not directly consistent), then this leads to a contradiction:
1. If A is a strict argument, and:
1.1. if B is also strict, then this contradicts our axiom consistency assumption on evidence sets E;
1.2. if B is a defeasible argument, and:
1.2.1. if B has a defeasible top inference, then A rebuts B (on B) by Definition 12, as a negation arc between the conclusions of A and B exists in N. Hence, this contradicts that S is conflict-free.
1.2.2. if B has a strict top inference, then by Lemma 3 there exists, for every maximal fallible sub-argument B′ of B, a strict continuation A′ of {A} and the maximal fallible sub-arguments of B other than B′ such that A′ rebuts B on B′; hence, A′ attacks B. By our Lemma 2, A′ is acceptable with respect to S, and by Lemma 6, S ∪ {A′} is conflict-free, contradicting that A′ attacks B ∈ S.
2. If A is a defeasible argument and B is a strict argument, then the result follows similarly to case 1.2 with the roles of arguments A and B reversed.
3. If A and B are defeasible arguments, and:
3.1. if the top inference of A or the top inference of B is defeasible, then the result follows similarly to case 1.2.1 (either with the roles of arguments A and B as they currently are or with their roles reversed).
3.2. if the top inferences of A and B are both strict, then the result follows similarly to case 1.2.2. □
The result stated in Lemma 7 is identical to Lemma 35-3 of [20].
Let an AF defined by an IG and evidence set E be given. Let A be an argument and let A′ be a sub-argument of A. Then A′ is acceptable with respect to a set of arguments S if A is acceptable with respect to S.
Assume that A is acceptable with respect to S. We need to prove that for every argument B such that B attacks A′, there exists a C ∈ S such that C attacks B. Let B be an argument and assume that B attacks A′. By Lemma 5, B attacks A. Then, as A is acceptable with respect to S, there exists a C ∈ S such that C attacks B. Hence, A′ is acceptable with respect to S. □
Below, Caminada and Amgoud’s [9] closure and indirect consistency postulates are stated. Informally, the closure postulates state that the conclusions returned by an argumentation system should be ‘complete’ [9, p. 16]. The sub-argument closure postulate states that for any argument A in a complete extension , all sub-arguments of A are also in .
(Sub-argument closure).
Let an AF defined by an IG and evidence set E be given. Then for all complete extensions S of the AF it holds that if an argument A is in S, then all sub-arguments A′ of A are in S.
Let S be a complete extension of the AF, let A ∈ S and let A′ be a sub-argument of A. Then A′ is acceptable with respect to S by Lemma 7, and S ∪ {A′} is conflict-free by Lemma 6. Hence, since S is complete, it holds that A′ ∈ S. □
Theorem 3, corresponding to the strict closure postulate, states that the conclusions of arguments in a complete extension are closed under strict inference.
(Closure under strict inferences).
Let an AF defined by an IG and evidence set E be given and let S be a complete extension of the AF. Then the set of conclusions of arguments in S is closed under strict generalisations.
It suffices to show that any strict continuation X of a set of arguments in S is in S. By Lemma 2, any such X is acceptable with respect to S. By Lemma 6, S ∪ {X} is conflict-free. Hence, since S is complete, it follows that X ∈ S. □
Finally, Theorem 4, corresponding to the indirect consistency postulate, states the mutual consistency of the strict closure of conclusions of arguments in a complete extension.
(Indirect consistency).
Let an AF defined by an IG and evidence set E be given and let S be a complete extension of the AF. Then the set of conclusions of arguments in S is indirectly consistent.
To conclude this section, we have shown that instantiations of our argumentation formalism based on IGs satisfy Caminada and Amgoud’s [9] consistency and closure postulates. Satisfaction of these postulates warrants the sound definition of instantiations of our argumentation system and implies that anomalous results as identified by [9] are avoided.
Related work
In this paper, we have proposed an argumentation formalism based on IGs that allows for both deductive and abductive argumentation and which instantiates Dung’s [13] abstract approach. Earlier work by Bex [4,5] is related, although only his integrated theory [5] is purely argumentation-based; the relation to [5] was discussed in the introduction. The hybrid theory proposed by Bex [4] is a formal account of reasoning about evidence in which deduction and abduction are used in constructing evidential arguments and causal stories, which are completely separate entities with their own definitions related to conflict and evaluation. In comparison, our argumentation formalism based on IGs allows for the construction of both deductive and abductive arguments. Moreover, Bex’s hybrid theory does not allow for most types of mixed inference with causal and evidential generalisations and abstractions, and largely avoids the problems associated with mixed inference as identified by Pearl [25] and as identified in the current paper. Bench-Capon and Prakken [3] offer a formalisation of Aristotle’s practical syllogism within a logic for defeasible argumentation that is essentially a preliminary version of [20]. This approach allows for reasoning about alternative goals and values to justify actions, which is akin to performing abductive inference. In formalising this syllogism, Bench-Capon and Prakken only consider the abductive nature of reasoning about desires on the basis of beliefs and goals, whereas we offer a general account of abductive (and deductive) argumentation. Booth and colleagues [8] propose a top-down approach by developing a model of abduction in abstract argumentation [13] and instantiating their approach with abductive logic programs [19]. In comparison to our bottom-up approach, their approach does not allow for mixed abductive-deductive inference with different types of information.
The argumentation formalism presented in this paper is based on a version of the graph-based IG-formalism that considers causal, evidential, abstraction, and other types of generalisations, as well as generalisations that include enabling conditions. Most related formalisms for inference with these types of information are logic-based [4,5,12,17,24,28,33,34] and do not consider the constraints on performing inference that need to be imposed. Poole’s Theorist framework [28] and Shanahan’s approach [33] only allow for causal defaults; complications with reasoning using both causal and evidential defaults as identified by Pearl [25] are thus avoided. The approaches of Ortiz Jr. [24] and Shoham [34] similarly only allow for inference with causal rules, but in contrast to [28,33] also include enabling conditions. The formal logical model of abductive reasoning proposed by Josephson and Josephson [17] allows for explaining observations using causal rules. The approach by Console and Dupré [12] is similar in nature to [17] but also allows for abduction using abstractions, as discussed in Section 2.2.
Graph-based formalisms for reasoning with causality information have also been proposed, notably Pearl’s causal diagrams [26]. Pearl provides a framework for causal inference in which diagrams are queried to determine if the assumptions available are sufficient for identifying causal effects. Compared to our IG-formalism and our argumentation formalism based on IGs, this framework does not allow for capturing asymmetric conflicts such as exceptions in the graph. Moreover, causal diagrams require probabilistic quantification to be queried, while IGs are qualitative.
Conclusion
In this paper, we have proposed an argumentation formalism that allows for both deductive and abductive argumentation, the latter of which has received relatively little attention in the argumentation literature. Our argumentation formalism is based on an extended version of our previously proposed IG-formalism [39], where in addition to causal and evidential generalisations we now also allow for abstractions and other types of generalisations, thereby increasing the expressivity of our IG-formalism. We have identified conditions under which performing inference with abstractions can lead to undesirable results, thereby extending the set of inference constraints imposed by Pearl’s C–E system for reasoning with causal and evidential information [25]. Moreover, we have identified exceptional circumstances under which the constraints of Pearl’s C–E system should not be imposed, namely in case enabling conditions are provided under which a generalisation may be used in performing inference. Based on these constraints and our conceptual analysis of reasoning about evidence, we have defined how deduction and abduction may be performed with IGs. We have then formally proven that arguments constructed in our argumentation formalism based on IGs indeed adhere to these constraints. In the paper, we have focused on the constraints that need to be imposed on performing inference with pairs of generalisations, which cover Pearl’s original constraints and local constraints on performing inference with abstractions. In future work, additional inference constraints may be imposed for longer chains of inferences involving more specific combinations of generalisations, provided that the total set of constraints is consistent. Furthermore, as causality is a contentious topic, our argumentation formalism may be extended in future work by allowing for meta-argumentation about labels of generalisations, as well as other elements of IGs.
Besides allowing for rebuttal attack and undercutting attack, which are among the types of attacks that are typically distinguished in structured argumentation [20,27], we have also defined the notion of alternative attack among arguments based on IGs, a concept based on the notion of competing alternative explanations that is inspired by [3,5]. Alternative attack captures a crucial aspect of abductive reasoning, namely that of conflict between abductively inferred conclusions [17]. We have contributed to the literature on computational argumentation by allowing for the formal evaluation of arguments involved in this type of conflict. Moreover, we have shown that instantiations of our argumentation formalism satisfy key rationality postulates [9], which warrants the sound definition of instantiations of our argumentation system and implies that anomalous results such as issues regarding inconsistency and non-closure as identified by Caminada and Amgoud [9] are avoided.
Our argumentation formalism generates an abstract AF as in Dung [13] and thus allows arguments to be formally evaluated according to Dung’s argumentation semantics. By formalising, as an intermediary step, the analyses that domain experts perform using the informal reasoning tools they are familiar with (e.g. mind maps) as IGs, our formalism allows IGs to be evaluated using computational argumentation, as well as using other formal systems such as BNs [39].
References
1.
L. Amgoud and C. Cayrol, A model of reasoning based on the production of acceptable arguments, Annals of Mathematics and Artificial Intelligence 34 (2002), 197–215. doi:10.1023/A:1014490210693.
2.
T. J. Anderson, D. A. Schum and W. L. Twining, Analysis of Evidence, 2nd edn, Cambridge University Press, 2005.
3.
T. J. M. Bench-Capon and H. Prakken, Justifying actions by accruing arguments, in: Computational Models of Argument: Proceedings of COMMA 2006, P. E. Dunne and T. J. M. Bench-Capon, eds, Vol. 144, IOS Press, 2006, pp. 247–258.
4.
F. Bex, Arguments, Stories and Criminal Evidence: A Formal Hybrid Theory, Springer, 2011.
5.
F. Bex, An integrated theory of causal stories and evidential arguments, in: Proceedings of the Fifteenth International Conference on Artificial Intelligence and Law, ACM Press, 2015, pp. 13–22. doi:10.1145/2746090.2746094.
6.
F. Bex, H. Prakken, C. A. Reed and D. Walton, Towards a formal account of reasoning about evidence: Argumentation schemes and generalisations, Artificial Intelligence and Law 11(2–3) (2003), 125–165. doi:10.1023/B:ARTI.0000046007.11806.9a.
7.
A. Bondarenko, P. Dung, R. Kowalski and F. Toni, An abstract, argumentation-theoretic approach to default reasoning, Artificial Intelligence 93 (1997), 63–101. doi:10.1016/S0004-3702(97)00015-5.
8.
R. Booth, D. Gabbay, S. Kaci, T. Rienstra and L. van der Torre, Abduction and dialogical proof in argumentation and logic programming, in: Proceedings of the Twenty-First European Conference on Artificial Intelligence, T. Schaub, G. Friedrich and B. O’Sullivan, eds, Vol. 263, IOS Press, 2014, pp. 117–122.
9.
M. Caminada and L. Amgoud, On the evaluation of argumentation formalisms, Artificial Intelligence 171(5–6) (2007), 286–310. doi:10.1016/j.artint.2007.02.003.
10.
C. Cayrol and M.-C. Lagasquie-Schiex, On the acceptability of arguments in bipolar argumentation, in: Proceedings of the Eighth European Conference on Symbolic and Quantitative Approaches to Reasoning with Uncertainty, L. Godo, ed., Vol. 3571, Springer, 2005, pp. 378–389. doi:10.1007/11518655_33.
11.
P. W. Cheng and L. R. Novick, Causes versus enabling conditions, Cognition 40 (1990), 83–120. doi:10.1016/0010-0277(91)90047-8.
12.
L. Console and D. T. Dupré, Abductive reasoning with abstraction axioms, in: Foundations of Knowledge Representation and Reasoning, G. Lakemeyer and B. Nebel, eds, Vol. 810, Springer, 1994, pp. 98–112. doi:10.1007/3-540-58107-3_6.
13.
P. M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games, Artificial Intelligence 77(2) (1995), 321–357.
14.
P. E. Dunne, A. Hunter, P. McBurney, S. Parsons and M. Wooldridge, Weighted argument systems: Basic definitions, algorithms, and complexity results, Artificial Intelligence 175 (2011), 457–486. doi:10.1016/j.artint.2010.09.005.
15.
A. Hunter and M. Thimm, Probabilistic reasoning with abstract argumentation frameworks, Journal of Artificial Intelligence Research 59 (2017), 565–611. doi:10.1613/jair.5393.
16.
F. V. Jensen and T. D. Nielsen, Bayesian Networks and Decision Graphs, 2nd edn, Springer, 2007.
17.
J. R. Josephson and S. G. Josephson, Abductive Inference: Computation, Philosophy, Technology, Cambridge University Press, 1994.
18.
J. B. Kadane and D. A. Schum, A Probabilistic Analysis of the Sacco and Vanzetti Evidence, John Wiley & Sons Inc., 1996.
19.
A. C. Kakas, R. Kowalski and F. Toni, Abductive logic programming, Journal of Logic and Computation 2(6) (1993), 719–770. doi:10.1093/logcom/2.6.719.
20.
S. Modgil and H. Prakken, A general account of argumentation with preferences, Artificial Intelligence 195 (2013), 361–397. doi:10.1016/j.artint.2012.10.008.
21.
S. Modgil and H. Prakken, The ASPIC+ framework for structured argumentation: A tutorial, Argument and Computation 5(1) (2014), 31–62. doi:10.1080/19462166.2013.869766.
22.
S. Modgil and H. Prakken, Abstract rule-based argumentation, in: Handbook of Formal Argumentation, P. Baroni, D. Gabbay, M. Giacomin and L. van der Torre, eds, College Publications, 2018, pp. 286–361.
23.
A. Okada, S. J. Buckingham Shum and T. Sherborne (eds), Knowledge Cartography: Software Tools and Mapping Techniques, 2nd edn, Springer, 2014.
24.
C. L. Ortiz Jr., A commonsense language for reasoning about causation and rational action, Artificial Intelligence 111(1–2) (1999), 73–130. doi:10.1016/S0004-3702(99)00041-7.
25.
J. Pearl, Embracing causality in default reasoning, Artificial Intelligence 35(2) (1988), 259–271.
26.
J. Pearl, Causality: Models, Reasoning, and Inference, 2nd edn, Cambridge University Press, 2009.
27.
J. Pollock, Cognitive Carpentry: A Blueprint for How to Build a Person, MIT Press, 1995.
28.
D. Poole, Representing diagnosis knowledge, Annals of Mathematics and Artificial Intelligence 11(1–4) (1994), 33–50. doi:10.1007/BF01530736.
29.
H. Prakken, On support relations in abstract argumentation as abstractions of inferential relations, in: Proceedings of the Twenty-First European Conference on Artificial Intelligence, T. Schaub, G. Friedrich and B. O’Sullivan, eds, Vol. 263, IOS Press, 2014, pp. 735–740.
30.
H. Prakken, Historical overview of formal argumentation, in: Handbook of Formal Argumentation, P. Baroni, D. Gabbay, M. Giacomin and L. van der Torre, eds, College Publications, 2018, pp. 73–141.
31.
H. Prakken and G. Vreeswijk, Logics for defeasible argumentation, in: Handbook of Philosophical Logic, R. Goebel and F. Guenthner, eds, Vol. 4, Springer, 2002, pp. 219–318.
32.
R. Reiter, A logic for default reasoning, Artificial Intelligence 13(1–2) (1980), 81–132. doi:10.1016/0004-3702(80)90014-4.
33.
M. Shanahan, Prediction is deduction but explanation is abduction, in: Proceedings of the International Joint Conference on Artificial Intelligence 89, N. S. Sridharan, ed., Morgan Kaufmann, 1989, pp. 1055–1060.
34.
Y. Shoham, Reasoning About Change: Time and Causation from the Standpoint of Artificial Intelligence, MIT Press, 1988.
35.
N. Timmers, The hybrid theory in practice: A case study at the Dutch police force, Master’s thesis, Utrecht University, The Netherlands, 2017.
36.
S. W. van den Braak, H. van Oostendorp, H. Prakken and G. A. W. Vreeswijk, Representing narrative and testimonial knowledge in sense-making software for crime analysis, in: Legal Knowledge and Information Systems. JURIX 2008: The Twenty-First Annual Conference, E. Francesconi, G. Sartor and D. Tiscornia, eds, Vol. 189, IOS Press, 2008, pp. 160–169.
37.
R. Wieten, F. Bex, H. Prakken and S. Renooij, Exploiting causality in constructing Bayesian networks from legal arguments, in: Legal Knowledge and Information Systems. JURIX 2018: The Thirty-First Annual Conference, M. Palmirani, ed., Vol. 313, IOS Press, 2018, pp. 151–160.
38.
R. Wieten, F. Bex, H. Prakken and S. Renooij, Deductive and abductive reasoning with causal and evidential information, in: Computational Models of Argument: Proceedings of COMMA 2020, H. Prakken, S. Bistarelli, F. Santini and C. Taticchi, eds, Vol. 326, IOS Press, 2020, pp. 383–394.
39.
R. Wieten, F. Bex, H. Prakken and S. Renooij, Information graphs and their use for Bayesian network construction, International Journal of Approximate Reasoning (2020). Manuscript submitted.
40.
J. H. Wigmore, The Principles of Judicial Proof, Little, Brown and Company, 1913.