Information-seeking interactions arise in multi-agent systems when one agent is an expert with vast knowledge about some topic and other agents (questioners or clients) lack and need information about that topic. In this work, we propose a strategy for automatic knowledge acquisition in an information-seeking setting in which agents use a structured argumentation formalism for knowledge representation and reasoning. In our approach, the client regards the other agent as an expert in a particular domain and is committed to believing the expert’s qualified opinion about a given query. The client’s goal is to ask questions and acquire knowledge until it can conclude the same as the expert about the initial query. The expert’s goal, in turn, is to provide just the information the client needs to understand its opinion. Since the client could have previous knowledge in conflict with the information acquired from the expert agent, and given that its goal is to accept the expert’s position, the client may need to adapt its previous knowledge. The operational semantics for the client-expert interaction will be defined in terms of a transition system. This semantics will be used to formally prove that, once the client-expert interaction finishes, the client will have the same assessment as the expert about the performed query.
In multi-agent systems, agents can have different aims and goals, and it is normal to assume that there is no central control over their behaviour. One of the advantages of these systems is that the information is decentralised. Hence, the agents have to interact in order to obtain the information they need, or to share part of their knowledge.
In this work, we propose a strategy for automatic knowledge acquisition which involves two different kinds of agents: one agent that has expertise in a particular domain or field of knowledge and a client agent that lacks that quality. In our approach, the client agent will initially make a query to the expert agent in order to acquire some knowledge about a topic it does not know about or only partially knows about. Since the client conceives the other agent as an expert, it will be committed to believing the answer to its query. Unlike other approaches in the literature, we consider that the client may have previous strict knowledge that contradicts what the expert knows about the consulted topic. Hence, the client may need to ask further questions and adapt its previous knowledge in order to be aligned with what the expert says.
A naive solution to the proposed problem would be for the expert to send its whole knowledge base to the client. However, this is neither a sensible nor a feasible solution, for several reasons. First, depending on the application domain, the expert could have private information that is sensitive and should not be shared. Second, its knowledge base could be very extensive, and merging it with the client’s could be computationally impracticable in a real-time, dynamic environment. Finally, the merged knowledge bases would probably contain many contradictions that fall outside the domain of the query. Ignoring these inconsistencies would lead to undesired results and conclusions, while solving them would be time-consuming and irrelevant to the query. Another solution would be for the client to revise its initial knowledge base to believe in the expert’s opinion in a single step. However, as will be shown in the following sections, this may imply the unnecessary removal of pieces of information that, from the expert’s perspective, are valid.
In [47], the concept of information-seeking dialogues was introduced, in which one participant is an expert in a particular domain or field of knowledge and the other is not. By asking questions, the non-expert participant elicits the expert’s opinion (advice) on a matter in which the questioner itself lacks direct knowledge. The questioner’s goal is to accept the expert’s opinion, while the expert’s goal is to provide just the necessary information to help the questioner understand its opinion about the consulted topic. In this particular type of dialogue, the questioner can arrive at a presumptive conclusion which gives a plausible expert-based answer to its question. Information-seeking has already been addressed in the literature when defining dialogue frameworks for agents. However, some of these approaches do not consider that the questioner may have previous strict knowledge in contradiction with the expert’s [31], while others, which consider such a possibility, simply disregard conflicting interactions [17–19].
Unlike existing approaches, our proposal not only considers that agents may have previous strict knowledge in contradiction, but also focuses on a strategy which guarantees that the information-seeking goals are always achieved. That is, once a client-expert interaction finishes, the client agent will believe the same as the expert agent about the initial query. Since the client conceives the other agent as an expert, whenever a contradiction between their knowledge arises the client will always prefer the expert’s opinion. However, in order to avoid the unnecessary removal of pieces of information that – from the expert’s perspective – are valid, the client will keep asking questions to the expert until the goal is achieved.
In order to provide a dialogue protocol specification that satisfies the aforementioned goals, one of the main contributions of our proposal is a definition of the operational semantics in terms of a transition system. Although we will formalise a two-agent interaction, this strategy can be applied in a multi-agent environment in which an expert agent could have several simultaneous interactions with different clients – each one in a separate session.
The research on the use of argumentation to model agent interactions is a very active field, including argumentation-based negotiation approaches [3,26,27,36], persuasion [6,12,32,33], general dialogue formalisations [17], and strategic argumentation [12,43,44], among others. In our proposal, agents will be equipped with the structured argumentation reasoning mechanism of DeLP (Defeasible Logic Programming) [22]. DeLP allows the involved agents to represent tentative or weak information in a declarative manner. Such information is used to build arguments, which are sets of coherent information supporting claims. The acceptance of a claim will depend on an exhaustive dialectical analysis (formalised through a proof procedure) of the arguments in favour of and against it. This procedure provides agents with an inference mechanism for warranting their entailed conclusions. We will use the structured argumentation formalism DeLP for knowledge representation and reasoning since the purpose of this paper is to show how to solve the problems associated with the agents’ argument structures in an information-seeking setting. In particular, DeLP has been used to successfully implement inquiries [1,42,45], another type of dialogue defined by [47]. In contrast to other approaches that use argumentation as a tool for deciding among dialogue moves, similarly to [4,8] we use structured argumentation just as the underlying representation formalism for the involved agents.
There is plenty of work on revision of argumentation frameworks (AFs) [11,13,14,30,38] which, regardless of their individual goals, end up adding or removing arguments or attacks and returning a new AF or set of AFs as output. Our proposal differs from all those approaches in that the client agent will not just revise its “initial framework” in order to warrant the expert’s opinion. Instead, the client will keep asking questions and acquiring knowledge from the expert agent (that is relevant to the initial query) in order to avoid removing from its knowledge base pieces of information that, from the expert’s perspective, are valid. As will be explained in detail in the following sections, in order to be able to believe in the expert’s qualified opinion, the client will only revise its previous knowledge if it is in contradiction with the expert’s. In other words, our proposal differs from other approaches in that unnecessary modifications are avoided by maintaining the communication with the expert, with the additional benefit of acquiring more relevant knowledge and making informed changes considering a qualified opinion.
It has been recognised in the literature [22,37] that the argumentation mechanism provides a natural way of reasoning with conflicting information while retaining much of the process a human being would apply in such situations. Thus, defeasible argumentation provides an attractive paradigm for conceptualising common-sense reasoning, and its importance has been shown in different areas of Artificial Intelligence such as multi-agent systems [10], recommender systems [5], decision support systems [23], legal systems [34], agent internal reasoning [2], multi-agent argumentation [42,45], agent dialogues [7,8], among others (see [37]). In particular, DeLP has been used to equip agents with a qualitative reasoning mechanism to infer recommendations in expert and recommender systems [9,24,41]. Below, we introduce an example to motivate the main ideas of our proposal.
(Motivating example).
Consider an agent that is an expert in the stock market domain, and a client agent that consults it for advice. Suppose that the client asks the expert whether to buy stocks from the company Acme. The expert is in favour of buying Acme’s stocks and answers with the following argument: “Acme has announced a new product, so there are reasons to believe that Acme’s stocks will rise; based on that, there are reasons to buy Acme’s stocks”. The client has to integrate the argument shared by the expert into its own knowledge in order to be able to infer the same conclusion drawn by the expert. In the particular case that the client has no information at all about the topic – or at least no information in conflict with the expert’s argument – it will simply add all the provided information to its own knowledge. However, it could occur that the client has previous knowledge about the query’s topic. Consider that the client can build the following argument: “Acme is in fusion with the company Steel, and generally being in fusion makes a company risky; usually, I would not buy stocks from a risky company.” Clearly, this argument built by the client is in conflict with the conclusion of the one received from the expert. In order to solve the conflict and believe in the expert’s opinion, a naive solution for the client would be to delete from its knowledge all the conflicting pieces of information without further analysis. Nevertheless, following that solution, valuable information could be unnecessarily lost. Instead, the client can follow a different approach: continue with the interaction and send the conflicting argument to the expert, giving the expert the opportunity to return its opinion. Now consider that the expert already knows the client’s argument but has information that defeats it.
Then, the expert sends a new argument that defeats the client’s: “Steel is a strong company, and being in fusion with a strong company gives reasons to believe that Acme is not risky.” Finally, the client can adopt both arguments sent by the expert and then, after the interaction and without losing information, infer exactly what the expert has advised.
It is important to note that, in the example above, the expert could have much more knowledge (related or not to the topic in question) that will not be sent to the client. As will be explained in more detail below, in our proposal the expert will just send the necessary information that the client needs to infer the answer to its query. Examples of different situations that can arise during the client-expert interaction will be introduced throughout the paper. The contributions of this paper are:
A strategy for information-seeking in an argumentative setting – defined in terms of a transition system – in which agents use DeLP for knowledge representation and reasoning.
Results that formally prove that the agents always achieve the information-seeking goals.
Two different approaches which the expert can take to minimise – under some assumption – or reduce – using the client’s previous knowledge – the information exchange.
An extension to the operational semantics, which allows the client to reject the expert’s qualified opinion, hence relaxing the assumption that the client is committed to believe in the expert.
The rest of this work is organised as follows. In Section 2 we introduce the background related to the agents’ knowledge representation and reasoning formalism. Then, in Section 3, we explain the client-expert interaction in detail, and we define the operational semantics of the proposed strategy using transition rules. Next, in Section 4, we define some operators that the expert can use to minimise or reduce the information exchange with the client. Section 5 follows with an extension to the operational semantics that allows the client to reject the expert’s opinion, relaxing the assumption of commitment. Then, in Section 6 we discuss some design choices of our formalism. Next, in Section 7, related work is included. Finally, in Section 8, we present conclusions and comment on future lines of work. At the end of the paper we include an Appendix with the proofs for the formal results of our approach.
Knowledge representation and reasoning
In this section, the background related to the agents’ knowledge representation and reasoning is included. In our approach, both the expert and the client represent their knowledge using DeLP, a language that combines results from Logic Programming and Defeasible Argumentation [21]. As in Logic Programming, DeLP allows information to be represented declaratively using facts and rules. A DeLP program consists of a finite set of facts and defeasible rules. Facts are used for the representation of irrefutable evidence, and are denoted with ground literals: that is, either atomic information, or the negation of atomic information using the symbol “∼” of strong negation. In turn, defeasible rules are used for the representation of tentative information, and are denoted Head –≺ Body, where Head (the head of the rule) is a ground literal and Body (the body) is a set of ground literals. A defeasible rule “Head –≺ Body” establishes a weak connection between “Body” and “Head” and can be read as “reasons to believe in the antecedent Body give reasons to believe in the consequent Head”. When required, a DeLP program will be noted as (Π, Δ), distinguishing the subset Π of facts and the subset Δ of defeasible rules. Although defeasible rules are ground, following the usual convention proposed in [28] we will use “schematic rules” with variables denoted with an initial upper-case letter. The other language elements (literals and constants) will be denoted with an initial lower-case letter. Given a literal q, the complement of q with respect to “∼” will be denoted q̄; that is, if q is an atom then q̄ = ∼q, and if q = ∼a for an atom a then q̄ = a. An example of a DeLP program follows:
Consider the motivating example introduced above. The following DeLP program represents the knowledge of the expert agent.
The expert’s set of facts contains eight facts that represent evidence about the stock market domain (for instance: “Acme and Steel are in fusion”, “Magma is not in fusion”, “Steel is a strong company”, “Starter is a new company”, “Acme has announced a new product”). The set of defeasible rules contains nine (schematic) rules that the expert can use to infer tentative conclusions. The first five rules allow the expert to build the conclusions exchanged in the motivating example, whereas the last four defeasible rules represent knowledge that the expert has but did not send to the client in the motivating example because it was not relevant to the queries the client made.
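The shape of such a program can be sketched with a tiny Python encoding. The representation below – literals as strings, defeasible rules as `(head, body)` pairs, a leading `~` for strong negation – is our own illustrative choice, not DeLP’s concrete syntax, and the predicate names merely paraphrase the motivating example:

```python
def complement(lit: str) -> str:
    """Complement of a literal with respect to strong negation "~"."""
    return lit[1:] if lit.startswith("~") else "~" + lit

# A program is a pair (facts, defeasible_rules). Facts are ground literals;
# each defeasible rule is read "body gives (weak) reasons for head".
facts = {"in_fusion(acme, steel)", "strong(steel)", "announced_product(acme)"}
rules = {
    ("stocks_rise(acme)", ("announced_product(acme)",)),  # new product -> rise
    ("buy(acme)", ("stocks_rise(acme)",)),                # rise -> buy
    ("risky(acme)", ("in_fusion(acme, steel)",)),         # fusion -> risky
    ("~risky(acme)", ("in_fusion(acme, steel)", "strong(steel)")),
}
```

Note that the last two rules have complementary heads: as explained next, it is the dialectical process, not the representation, that decides which conclusion prevails.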
We will briefly include below some concepts related to the argumentation inference mechanism of DeLP; we refer to [21] for the details. In a valid DeLP program the set Π must be non-contradictory. Since in this proposal Π is a set of facts, this means that Π cannot contain a pair of contradictory literals, and this should be considered every time the client adopts new knowledge from the expert. In DeLP, a ground literal L has a defeasible derivation from a program (Π, Δ) if there exists a finite sequence of ground literals ending in L, where each literal in the sequence is either a fact in Π, or the head of a rule whose body literals all appear earlier in the sequence. Since strong negation can appear in the head of defeasible rules, complementary literals can be defeasibly derived. For the treatment of contradictory knowledge, DeLP uses an argumentative inference mechanism which identifies the pieces of knowledge that are in contradiction, and then uses a dialectical process for deciding which conclusions prevail as warranted. This process involves the construction and evaluation of arguments that are either for or against the query under analysis.
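The derivation condition is essentially a forward-chaining fixpoint. A minimal sketch, using the same illustrative encoding of rules as `(head, body)` pairs (names ours):

```python
def derives(facts, rules, goal):
    """True iff `goal` has a defeasible derivation: it is a fact, or it is
    the head of some rule all of whose body literals are derivable first."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in derived and all(b in derived for b in body):
                derived.add(head)
                changed = True
    return goal in derived

# "Acme announced a product" defeasibly derives "buy Acme's stocks"
facts = {"announced_product(acme)"}
rules = {("stocks_rise(acme)", ("announced_product(acme)",)),
         ("buy(acme)", ("stocks_rise(acme)",))}
```

The fixpoint formulation is equivalent to the sequence-based definition: any literal added by the loop can be placed in a sequence after the literals it depends on.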
(Argument).
Let h be a literal and (Π, Δ) a DeLP program. We say that ⟨A, h⟩ is an argument for h if:
(1) A is a set of defeasible rules of Δ (A ⊆ Δ), and
(2) there exists a defeasible derivation for h from Π ∪ A, and
(3) Π ∪ A is non-contradictory (i.e., no pair of contradictory literals can be defeasibly derived from Π ∪ A), and
(4) A is minimal: there is no proper subset A′ of A such that A′ satisfies conditions (2) and (3).
Given an argument ⟨A, h⟩, we say that h is the conclusion of A. For instance, from the program of Example 2, arguments can be built for the literals discussed in the motivating example, with rules that are instances of the program’s schematic rules. The set of literals used to build an argument is called its evidence and is defined as follows:
(Evidence).
Let ⟨A, h⟩ be an argument. The evidence used to build ⟨A, h⟩ is the set of literals that appear in the body of some rule in A but do not appear in the head of any rule in A.
The evidence of ⟨A, h⟩ is the set just defined when A is not empty, and is {h} otherwise. Given two arguments ⟨A′, h′⟩ and ⟨A, h⟩, ⟨A′, h′⟩ is a sub-argument of ⟨A, h⟩ when its rules are contained in those of ⟨A, h⟩; in particular, every argument is a sub-argument of itself. If an argument has more than one rule, then the head of each rule represents an intermediate reasoning step and corresponds to the conclusion of a sub-argument. Four further arguments can also be built from the expert’s program.
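Under the same toy encoding of an argument as a set of `(head, body)` rules, the evidence computation and a simplified containment-based sub-argument check can be sketched as follows (a sketch under our assumptions, not DeLP’s exact definitions):

```python
def evidence(arg_rules, conclusion):
    """Evidence of an argument: body literals that are not heads of any
    rule in the argument; for the empty argument, the conclusion itself."""
    if not arg_rules:
        return {conclusion}
    heads = {h for h, _ in arg_rules}
    bodies = {lit for _, body in arg_rules for lit in body}
    return bodies - heads

def is_subargument(sub_rules, arg_rules):
    """Sub-argument relation (sketch): the sub-argument's rules are
    contained in the argument's, so every argument is a sub-argument
    of itself."""
    return set(sub_rules) <= set(arg_rules)
```

For the two-rule argument “announced product, so stocks rise, so buy”, the evidence is the single fact about the announced product; the intermediate head “stocks rise” is the conclusion of a sub-argument.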
Several of these arguments are in conflict, since they have contradictory conclusions. In DeLP, in order to solve conflicts, preferences among arguments can be provided in a modular way; hence, the most appropriate criterion for the application domain can be used. Since the argument comparison criterion is not the focus of our proposal, in the examples below we will assume that it is given as a relation denoted “>” over the set of all the arguments that can be obtained from a DeLP program. We refer the interested reader to [20,21,41], where different argument preference criteria are described.
In this paper, agents will be characterised by three elements: (Π, Δ, >). The first element (Π) is a set of facts which represents the evidence that the agent has, whereas the second element (Δ) is a set of defeasible rules. Thus, from (Π, Δ) the agent will be able to build a set of arguments. The third element (>) is an argument comparison criterion, i.e., a preference relation over that set of arguments. For instance, in Example 2 we assume that the expert has preferences between the conflicting arguments introduced above. Preferences allow deciding which argument prevails between two conflicting arguments:
(Proper Defeater, Blocking Defeater).
Let ⟨A1, h1⟩ and ⟨A2, h2⟩ be two arguments. ⟨A1, h1⟩ is a proper defeater for ⟨A2, h2⟩ at literal h if and only if there exists a sub-argument ⟨A, h⟩ of ⟨A2, h2⟩ such that ⟨A1, h1⟩ is in conflict with ⟨A, h⟩ at h, and ⟨A1, h1⟩ > ⟨A, h⟩ (i.e., ⟨A1, h1⟩ is preferred to ⟨A, h⟩). If neither argument is preferred to the other (i.e., they are unrelated by the preference order), then ⟨A1, h1⟩ is a blocking defeater for ⟨A2, h2⟩.
Continuing with Example 2, one of the expert’s arguments is a proper defeater for another because the former is preferred to the latter; another pair of conflicting arguments, between which there is no preference, are blocking defeaters for each other; and, on the contrary, an argument that is strictly less preferred than the argument it conflicts with is not a defeater for it.
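The case analysis behind proper and blocking defeat can be sketched directly, with the comparison criterion supplied as a set of `(better, worse)` pairs (an assumption of this sketch):

```python
def defeat_status(attacker, target, prefer):
    """Classify an attack between two conflicting arguments: "proper" if
    the attacker is strictly preferred to the attacked (sub-)argument,
    None if the attacked one is preferred (no defeat), and "blocking" if
    the two are unrelated by the preference order."""
    if (attacker, target) in prefer:
        return "proper"
    if (target, attacker) in prefer:
        return None
    return "blocking"
```

The three return values mirror the three situations just described for Example 2.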
In DeLP, in order to determine whether a query h can be accepted as warranted from a program, it is first necessary to find out whether at least one argument for h can be constructed from the program. If such an argument exists, then all of its defeaters that can be constructed from the program are considered as potential reasons against h. If there is one argument for h and it has no defeaters, then the argument and its conclusion h are warranted. However, since defeaters are arguments themselves, if one or more defeaters exist, then there may be defeaters for them, defeaters for those defeaters, and so on. Thus, for each defeater, all of its own defeaters that can be constructed from the program must be considered. This leads to the generation of a tree structure called a dialectical tree, rooted in the initial argument, in which every node – except for the root – is a defeater for its parent. Figure 1 further below depicts different examples of dialectical trees. We will distinguish proper and blocking defeaters with different types of arrows: a unidirectional arrow from one argument to another denotes that the former is a proper defeater for the latter, whereas a bidirectional arrow between two arguments denotes a blocking defeater.
Given a dialectical tree, every path from the root to a leaf is called an argumentation line, and each argumentation line represents a different sequence that alternates arguments in favour of h (called pro arguments) and arguments against h (called con arguments): the pro arguments are those in odd positions (starting from the root) and the con arguments are those in even positions. A dialectical tree is considered acceptable when all its argumentation lines are acceptable (see Definition 4). Informally, that occurs when the argumentation lines satisfy certain constraints imposed on them to avoid infinite or circular argumentation and other undesirable situations. For instance, an argument cannot appear twice in the same argumentation line, and if an argument is defeated by a blocking defeater, that blocking defeater can only be defeated by a proper defeater. In addition, both the set of pro arguments and the set of con arguments must be concordant, that is, no contradiction must be derivable when joining Π with all the defeasible rules from the pro arguments, or when joining Π with all the defeasible rules from the con arguments. We refer the interested reader to [21] and [22] for a complete explanation of those constraints.
(Acceptable Argumentation Line).
Let Λ = [⟨A1, h1⟩, …, ⟨An, hn⟩] be an argumentation line. Λ is an acceptable argumentation line iff:
Λ is a finite sequence, and
no argument in Λ is a sub-argument of an argument appearing earlier in Λ, and
for every i such that the argument ⟨Ai, hi⟩ is a blocking defeater for ⟨Ai−1, hi−1⟩, if ⟨Ai+1, hi+1⟩ exists then ⟨Ai+1, hi+1⟩ is a proper defeater for ⟨Ai, hi⟩, and
both the set of pro arguments and the set of con arguments are concordant.
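Leaving finiteness to the data structure and omitting the concordance check, the remaining two constraints amount to a simple scan over the line. A sketch, with the sub-argument, blocking and proper relations supplied as predicates (an assumption of this sketch):

```python
def acceptable_prefix(line, is_subargument, is_blocking, is_proper):
    """Check two acceptability constraints on an argumentation line
    `line` (a list of argument identifiers): no argument may be a
    sub-argument of an earlier one, and a blocking defeater may only be
    followed by a proper defeater. Concordance is not checked here."""
    for i, arg in enumerate(line):
        # no repetition / sub-argument of an earlier argument
        if any(is_subargument(arg, earlier) for earlier in line[:i]):
            return False
        # after a blocking defeat, only a proper defeater may follow
        if i >= 2 and is_blocking(line[i - 1], line[i - 2]) and \
                not is_proper(arg, line[i - 1]):
            return False
    return True
```

For instance, a line that repeats an argument fails the first check, since every argument is a sub-argument of itself.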
Note that, given an acceptable dialectical tree, all the leaves of the tree are undefeated arguments. Therefore, to determine whether the root is defeated or undefeated, the computation process of DeLP performs a marking of every node of its dialectical tree as follows: first, the leaves of the tree are marked as ⓤ (undefeated); then the inner nodes – including the root – are marked as Ⓓ (defeated) if they have at least one child marked as ⓤ, or are marked as ⓤ if all their children are marked as Ⓓ.
This may resemble the argumentation game of Dung’s [15] grounded semantics. However, they differ since DeLP does not allow arguments to be repeated within an argumentation line, whereas the grounded semantics only prevents pro arguments from being repeated. In this sense, the grounded semantics are more skeptical than DeLP’s dialectical process.
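The marking procedure is a straightforward bottom-up recursion. A sketch over trees represented as `(name, children)` pairs (representation ours):

```python
def mark(node):
    """Mark a dialectical tree bottom-up: leaves are "U" (undefeated);
    an inner node is "D" (defeated) iff at least one child is marked "U",
    and "U" iff all of its children are marked "D".
    Returns (name, mark, marked_children)."""
    name, children = node
    marked = [mark(c) for c in children]
    m = "D" if any(c[1] == "U" for c in marked) else "U"
    return (name, m, marked)
```

In a chain A ← B ← C (C defeats B, B defeats A), the leaf C is undefeated, so B is defeated and the root A ends up undefeated, i.e., reinstated by its defender.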
Figure 1 (left) shows the marked dialectical tree that the expert can build for the query of the motivating example. Observe that its root is marked as ⓤ. Given a query h, there can be more than one argument for h, and for each argument a different marked dialectical tree can be constructed. The next example shows how the expert agent proceeds in a scenario in which more than one dialectical tree for the same query can be constructed.
Consider the following DeLP program. Observe that from it three different arguments for the conclusion “a” can be constructed, as well as several further arguments, and consider that preferences among these arguments are given.
Marked dialectical trees from Example 2 and Example 3.
In Fig. 1 we show four marked dialectical trees that can be constructed from the program of Example 3. In figures, nodes of the dialectical trees will be depicted as triangles with the argument’s name inside, the argument’s conclusion at the top vertex, and the mark (ⓤ or Ⓓ) at the rightmost vertex. Given a marked dialectical tree and a node in it, we will speak of the argument at the root of the tree, of the mark assigned to the root, and of the mark assigned to any node in the tree. Given an argument marked as Ⓓ in a dialectical tree, a denier for it is a defeater for it that is marked as ⓤ in that tree; that is, a denier precludes the argument from being marked as ⓤ.
We say that a literal q (respectively, an argument for q) is warranted by an agent if there exists at least one dialectical tree for q constructed from the agent’s program whose root is marked as ⓤ (i.e., the root argument results undefeated). For instance, considering Example 3, the agent warrants the literal a and two of its arguments for a, but does not warrant the third; considering Example 2, the expert warrants the literal of the initial query and its justifying argument.
Given a DeLP program and a literal q, we will consider the set of all dialectical trees that can be constructed from the program whose root’s claim is the literal q, and, within it, the subset of trees whose root is marked as ⓤ. The latter set contains all the dialectical trees from the program that provide a warrant for q, and it will be empty if no warrant for q exists. Analogously, a corresponding set includes all the dialectical trees from the program that provide a warrant for the complement of q.
Client-expert interaction
The whole interaction between both agents is called a session, which starts with a query and finishes when the client believes in the expert’s position. We assume that the expert’s knowledge does not change during a session and that the client does not simultaneously maintain more than one session. If the client has specific personal information that represents a context for its query and would affect the expert’s answer, then this contextual information is sent before the session starts.
Once a new session has started, its current state will be determined by a session state. Intuitively, a session state is a structure that keeps track of all the elements that both agents have already communicated, together with the elements that the client still needs to ask about and that remain pending. As will be explained in detail below, those pending elements will allow the client to acquire the knowledge it needs to understand the expert’s position while avoiding unnecessary loss of information. As stated above, the expert should know in advance all the relevant specific personal information from the client in order to give a proper answer. Since our approach focuses on the changes made to the client’s knowledge base, we assume that this specific contextual information is already considered in the expert’s knowledge base. Session states will be used to formalise the elements that actually change during the client-expert interaction; thus, the client agent will be part of the 5-tuple of the session states whereas the expert agent will not.
(Session State).
Given a client agent and a query “q”, a session state for “q” is a 5-tuple composed of the client agent, a sequence of interaction elements, a set of arguments, a set of preference questions, and a set of denier questions.
When there is no ambiguity, session states will be simply referred to as “states”, and the query subscript q will be omitted. The first component in a state’s 5-tuple is the client agent that made the query q, characterised by its knowledge and preferences among arguments (Π, Δ, >), which can change from state to state during the client-expert interaction. Although the other four components will be formalised and explained in detail further below, their intuitive meanings are introduced next. The set of arguments contains arguments received by the client which are all in favour of the claim of the justification sent by the expert; the goal of the client is to leave all the arguments in this set marked as ⓤ. The set of pending preference questions contains pairs of arguments representing the question “What do you think about the preference between these two arguments?”. The set of pending denier questions contains arguments for which the client needs defeaters from the expert. Finally, the sequence of interaction elements gathers the preference questions and denier questions previously asked by the client, together with the defeaters and preferences previously sent by the expert.
Note that once a query q is posed to an expert agent, one of three alternatives can occur: (1) the expert believes in q (i.e., q is warranted by the expert), (2) the expert believes in the opposite (i.e., the complement of q is warranted by the expert), or (3) the expert believes in neither (i.e., neither q nor its complement is warranted). In our approach, a session will only exist if one of the first two alternatives holds, that is, if the expert has enough knowledge to warrant either q or its complement. When this occurs, the expert’s answer to the client’s query will be either an argument for q or an argument for its complement. This argument is called the justification. As will be proved further below, when the session started by the query q finishes, the justification’s conclusion will be warranted by the client agent, and hence the client will believe in it.
Session evolution outline.
Figure 2 shows a directed graph that outlines how session states can evolve during a client-expert interaction. In that graph, nodes – identified with Greek letters – represent different session states of the interaction, while the directed arcs represent transitions among session states. All these transitions will be formally specified with transition rules during the rest of this section. For instance, one transition rule will specify how the session evolves from the initial state (α) into a new state (β) in which the client has adopted information sent by the expert, whereas another will specify the necessary conditions for the session to reach the final state. We will refer to Fig. 2 throughout the rest of this section in order to explain all the details of our approach.
First, two special session states will be distinguished: the initial state and the final state. Observe that a session starts (see α in Fig. 2) when a client agent makes a query q to the expert agent – and, if necessary, sends its specific contextual information beforehand (see Section 6). Hence, an initial state contains the elements that characterise the client before receiving anything from the expert – its knowledge and preferences among arguments – while the other four components are empty.
A final session state (see ω in Fig. 2) will have at least one interaction element in the tuple’s second component, and the last three components will be empty (i.e., all the pending issues will have been solved). The final state’s first component will characterise the client agent with all the knowledge that it will have adopted from the expert by the end of the session.
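Assuming a direct encoding of the 5-tuple (the field names below are ours, introduced only for illustration), the two distinguished states can be characterised as:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class SessionState:
    """A session state: the client agent plus the four interaction
    components (field names are hypothetical)."""
    client: str                                # stands in for (facts, rules, preferences)
    interaction: tuple = ()                    # elements already communicated
    pending_args: frozenset = frozenset()      # arguments still to be left undefeated
    pref_questions: frozenset = frozenset()    # pending preference questions
    denier_questions: frozenset = frozenset()  # pending denier questions


def is_initial(s: SessionState) -> bool:
    """Initial state: every component except the client is empty."""
    return not (s.interaction or s.pending_args
                or s.pref_questions or s.denier_questions)


def is_final(s: SessionState) -> bool:
    """Final state: something was communicated and nothing is pending."""
    return bool(s.interaction) and not (s.pending_args
                                        or s.pref_questions
                                        or s.denier_questions)
```

The transition rules of the rest of the section then move a session from a state satisfying `is_initial` to one satisfying `is_final`.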
Once a session has started, the expert has to consider the initial query q made by the client and send one argument which justifies its answer. As was shown in Section 2, the expert may have more than one dialectical tree whose root is suitable to be sent to the client as the justification for the submitted query: formally, the set of dialectical trees warranting q (or those warranting its complement) can have more than one element. For instance, consider the example depicted in Fig. 1 and suppose that the client has sent the query a. Then the expert has two dialectical trees whose roots are possible candidates to be the justification. Since there can exist different strategies for selecting one element, our approach is defined in a modular way and we assume the existence of an operator that selects one dialectical tree from a set of dialectical trees. Therefore, the most appropriate implementation for the dialectical tree selection can be used depending on the represented application domain. For instance, the expert could select – among the roots of the dialectical trees – the most relevant argument in that domain, or the strongest argument with respect to the used comparison criterion. The analysis and comparison of different implementations for this operator is outside the scope of this paper and is left as future work (see Section 8). Hence, the justification sent by the expert agent for a query q is determined by the operator introduced next:
(Expert Justification).
Let be the expert agent, and “q” be a query. ’s justification for q is defined as:
Consider the expert agent introduced above in Example 3. Let us suppose that receives the query “a”. The expert warrants the literal “a” since there exist two dialectical trees whose root arguments claim “a” and are marked as ⓤ: , and both and are suitable as justification. In this case, we will assume that the selected argument is , and hence, is sent to the client agent. Note that if the expert had received the query “”, the justification would also have been or because “” would not have been warranted by the expert.
As was explained above, the client’s goal is to accept the expert’s opinion, and hence, when the interaction is finished the client should believe the same as the expert about the submitted query. Therefore, when the client receives the expert’s justification , its goal is to be able to warrant from its knowledge base . In order to achieve this, the client will first adopt the received argument . Adopting an argument will consist in making the minimum necessary changes to in order to be able to construct . The client will then add rules and facts to its knowledge base and, in some cases, it will have to withdraw from the elements which are inaccurate from the expert’s point of view. The adoption of an argument is determined by the operator introduced next:
(Argument Adoption).
Given a client agent and an argument . The adoption of by is defined as , where:
, where
,
and
.
.
Consider a client agent that adopts an argument . On the one hand, all the defeasible rules of will be added to and the evidence of will be added to . This added knowledge will allow the client to infer h. On the other hand, all the facts from that contradict the evidence from or contradict the head of a rule in will be erased from . Those erased elements would prevent the argument from being built from the client’s knowledge base (see Section 2). Also, due to the minimality constraint that an argument should satisfy, all the facts from that are a head of a rule in are also removed. The proof of Proposition 1 included in the appendix shows in detail why the literals in the sets X, Y, Z have to be erased upon an argument adoption. The following two examples show how the adoption operator behaves in two different scenarios. In the first one, the client () has nothing to erase. In the second one, the client () has to erase elements in order to adopt the received argument. Note that, given an argument that can be constructed by both the client and the expert, for convenience the same argument name will be given to both.
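As an illustration, the adoption operator can be sketched as follows. This is a simplified reading, not the paper’s formal definition: an argument is represented as a set of defeasible rules (head, body) plus a set of evidence facts, literals are strings, and the complement of a literal a is written ~a; all names are illustrative.

```python
def complement(lit):
    """Return the complementary literal: a <-> ~a."""
    return lit[1:] if lit.startswith("~") else "~" + lit

def adopt_argument(facts, rules, argument):
    """Adopt an argument (its defeasible rules plus its evidence).

    Adds the argument's rules and evidence to the client's base, and
    erases the strict facts that would block the argument:
      X: facts contradicting the argument's evidence,
      Y: facts contradicting the head of some rule of the argument,
      Z: facts coinciding with a rule head (minimality)."""
    arg_rules = set(argument["rules"])
    heads = {head for head, _ in arg_rules}
    X = {f for f in facts if complement(f) in argument["evidence"]}
    Y = {f for f in facts if complement(f) in heads}
    Z = {f for f in facts if f in heads}
    new_facts = (set(facts) - X - Y - Z) | set(argument["evidence"])
    return new_facts, set(rules) | arg_rules

# Roughly the second scenario: the client holds {"a", "~c"} and adopts
# an argument for "~a" built from (~a -< b), (b -< c) with evidence {c}.
arg = {"rules": {("~a", "b"), ("b", "c")}, "evidence": {"c"}}
new_facts, new_rules = adopt_argument({"a", "~c"}, set(), arg)
# "a" (X) and "~c" (Y) are erased; "c" and both rules are added.
```

In this run the sets X and Y are non-empty while Z is empty, mirroring the second example below.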
Consider an agent Kyle, characterised by where (see Fig. 3), who makes the query “a” to the expert agent from Example 4. In response to this query, sends the argument as justification. In order to adopt , Kyle has to add the fact “c” (the evidence of ) to and the defeasible rules “” and “” to without the need to delete anything. Figure 3 shows the result of the argument adoption: . In this case, the sets X, Y and Z from Definition 7 are all empty.
Consider an agent Randy, characterised by where (see Fig. 4), who makes the query “” to the expert agent from Example 4. Randy receives in response and, in order to adopt , adds both the fact “c” to and the defeasible rules “” and “” to . However, Randy also needs to erase two facts that disallow to be a valid argument: “”, given that it is the complement of the head of the rule “”, and “’, given that it is the complement of the evidence “c”. Figure 4 shows the result of the argument adoption: . In this case, the sets X and Y from Definition 7 have the following elements: and .
The following proposition shows that, whenever the client agent adopts an argument from the expert agent, it is guaranteed that the client will be able to construct that argument. The proofs of all the formal results are in the Appendix section at the end of the paper.
Client agent Kyle before and after adopting .
Client agent Randy before and after adopting .
Let be the expert agent, and an argument constructed by , if then can be constructed by .
As it will be explained below, the justification sent by the expert may not be the only argument that the client will need to adopt from the expert in order to achieve its goal. Thus, the operator will be used whenever the client receives an argument from the expert. Another important property that holds in our approach is that, whenever the client adopts a new argument, it is still able to construct all the arguments that it adopted previously from the expert:
Let be an argument constructed by the expert agent, and an argument constructed by both a client agent and , if then can be constructed by .
In order to formally define our strategy and then prove its properties, the operational semantics of the interaction will be defined in terms of a transition system. A transition system is a set of transition rules for deriving transitions. In addition, a transition is a transformation of one session state into another. A transition rule has the form and can be read as “if the session is in the state and condition holds, then the session evolves into the state ”. Next, the transition rule is introduced, which describes how the session evolves from the initial state (see α in Fig. 2) into a new state in which the client has adopted the expert’s justification (see β in Fig. 2).
When the transition rule is executed, the justification is added to the current session state’s second component and, therefore, the first interaction element in the sequence will always be that argument. is also added to the current state’s third component (the set of received arguments in favour of the claim of ) because it will need to be analysed by the client. For instance, in Example 5, the transition rule makes the session between and the expert evolve from the initial state into the state . Consider also Example 6: since the expert sends and the same justification, then ’s session evolves from the state into the state . Nevertheless, as we will show below, given that both agents have different previous knowledge their sessions will evolve differently.
Recall that, in our approach, the client conceives the other agent as an expert and its goal is to believe in the expert’s qualified opinion. Since the client may have previous knowledge that can be in conflict with the information acquired from the expert agent (e.g. it could build a defeater for ), and its goal is to adopt all the expert knowledge, then the client may need to adapt its previous knowledge losing as little information as possible in order to be aligned with the expert’s position. As we will show next, the involved agents will exchange arguments until the client has a warrant for from its own knowledge. From the client’s point of view, will be the root of a dialectical tree constructed from and, in order to warrant , the result of has to be ⓤ. Hence, whenever this mark is Ⓓ, the client should ask the expert for more information. This situation is captured by a session state with (see β in Fig. 2) in which the client agent will check what it thinks about the previously adopted arguments in in order to determine the course of the session. That could be: either it can warrant and then the session finishes (see transition to ω), or it still does not have a warrant for and then it is necessary to ask subsequent questions to the expert agent (see transition to γ). In order to determine this, the client will introspect into its own knowledge to check the mark (ⓤ or Ⓓ) of the previously adopted arguments.
(Introspection).
Given a client agent , and two arguments and , let be the dialectical tree constructed from such that . ’s introspection for is defined as , if is in . Otherwise, the introspection returns Ⓓ.
Note that introspections check the mark of an argument in a particular dialectical tree. As we mentioned above, the client is only interested in the mark of that argument in the dialectical tree whose root is the justification sent by the expert. When no confusion arises, when we talk about the mark of an argument, we implicitly refer to the mark of that argument in the dialectical tree (of the client or the expert, accordingly) whose root is .
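The introspection of Definition 8 rests on the standard marking of a dialectical tree: a node is marked ⓤ (undefeated) if and only if every one of its children is marked Ⓓ (defeated); leaves are therefore ⓤ. A minimal sketch follows, with trees represented as nested dictionaries and the marks written "U" and "D" (names are illustrative):

```python
def mark(node):
    """'U' iff every child denier is marked 'D'; a leaf is thus 'U'."""
    return "U" if all(mark(c) == "D" for c in node["children"]) else "D"

def find(node, target):
    """Locate the node holding argument `target`, or None if absent."""
    if node["arg"] == target:
        return node
    for child in node["children"]:
        hit = find(child, target)
        if hit is not None:
            return hit
    return None

def introspect(root, target):
    """Definition 8 sketch: the mark of `target` in the tree whose
    root is the justification; 'D' when the argument is not in it."""
    node = find(root, target)
    return mark(node) if node is not None else "D"
```

For instance, a justification with a single undefeated denier is marked "D", matching the scenario in which the session cannot end immediately.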
If the client makes an introspection to check the mark of the justification (see β in Fig. 2) and results in ⓤ, this implies that it has a warrant for the claim of and, therefore, the session can end. This behaviour is reflected in the following transition rule:
Recall that the first element in a session state’s second component is always the expert’s justification. If the transition rule is executed the current state’s third component becomes empty, making the session evolve into the final state (see ω in Fig. 2). Example 7 illustrates a scenario in which the client does not have knowledge opposing the expert’s justification. That argument is adopted and no other changes are needed in the client’s knowledge in order to warrant it. In consequence, the session finishes.
Consider again the client agent Randy from Example 6. After adopting the expert’s justification the current session state is . Then, Randy makes an introspection to check the mark of the justification. The only argument constructed by Randy is , and thus the only dialectical tree for “a” is the one depicted in Fig. 5 (left). The result of is ⓤ because does not have any deniers. In consequence, all the arguments in the session state’s third component are removed, and the session finishes after reaching the final state .
Dialectical trees from Example 7 (left) and Example 8 (right).
In contrast to Example 7, the following example shows a scenario in which the client has a denier for the received justification. Hence, the result of the introspection would be Ⓓ and the session would not be able to end immediately. The client-expert interaction will continue with the client’s goal of having a warrant for the received justification.
Consider again the client agent Kyle from Example 5. After adopting the expert’s justification the current session state is . Then, Kyle makes an introspection to check the mark of the justification. The arguments constructed by agent Kyle are and . Now, let us suppose that it has no preference between them. In this case, Kyle’s dialectical trees for “a” are the ones depicted in Fig. 5 (right). The result of is Ⓓ because has a denier (). Therefore, unlike Randy in Example 7, Kyle’s session does not reach the final state immediately after adopting the justification , because the transition rule is not applicable.
A denier (as in Example 8) is an argument defeating another argument which, from the expert’s position, is undefeated (as in Example 8). Clearly, the expert will not have deniers for the arguments that it sends to the client. However, since the client could have different knowledge, the client may construct a defeater that the expert cannot. In addition, the comparison criteria of both agents could differ. Finally, the expert could have a defeater for the denier that the client does not. Hence, in order to deal with its deniers, the client will first ask preference questions to the expert in order to adjust its preferences among its arguments. In DeLP the argument comparison criterion is modular and the most appropriate for the application domain being modelled can be used. Hence, we opted to formalise the preference between arguments as the relations and in order to abstract away from the details of the agents’ comparison criteria. A discussion on how a particular comparison criterion can be applied to our approach can be found in Section 6. The operator is defined as follows:
(Preference Questions).
Given a client agent , and two arguments and , let be the dialectical tree constructed from such that . ’s set of preference questions for is defined as = { is a defeater for , and is the sub-argument of which is in conflict with, and }.
For instance, since in Example 8 prevents from being marked as ⓤ, the client will obtain a set of preference questions and then will ask the expert about its preference between both arguments. Similarly to introspections, the set of preference questions for a given argument is obtained from the dialectical tree whose root is the expert’s justification. The following proposition states that, if an argument is marked as Ⓓ, the set of preference questions that can be generated for that argument is not empty.
Let be a client agent, an argument in a dialectical tree, and , if then .
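A rough sketch of the question-generation step follows, under the simplifying assumption that each tree node records, besides its argument, the sub-argument its defeater attacks (field names are illustrative, and marks are written "U"/"D"):

```python
def mark(node):
    """Standard marking: 'U' iff every child is marked 'D'."""
    return "U" if all(mark(c) == "D" for c in node["children"]) else "D"

def preference_questions(node):
    """One question per active (undefeated) denier of the argument at
    `node`, paired with the attacked sub-argument. If the node is
    marked 'D', at least one such denier exists (cf. Proposition 2)."""
    return {(c["arg"], c["attacks"]) for c in node["children"]
            if mark(c) == "U"}
```

A denier that is itself defeated further down the tree generates no question; only the deniers currently marked "U" do.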
We mentioned above that the justification for the initial query may not be the only argument that the client will need to adopt during the session. As we will explain in detail below, other arguments – also in favour of the claim of the justification (called “pro arguments”) – can be received. Then, if the client makes an introspection to check the mark of the justification (see β in Fig. 2) and does not result in ⓤ, it means that some of these pro arguments are marked as Ⓓ. In this case, the client agent will proceed to ask preference questions for one of the furthest deniers from the root (see γ in Fig. 2). When this pro argument becomes marked as ⓤ later in the session, the pro argument marked as Ⓓ above in the same argumentation line may also become marked as ⓤ and so on, like a cascade effect. This behaviour is reflected in the following transition rule:
The transition rule specifies how the set of preference questions is generated in order to deal with those deniers. The rightmost (last) argument marked as Ⓓ in is selected, and the corresponding set of preference questions is added to the session state’s fourth component. For instance, consider again Example 8 in which the current session state is after Kyle adopted the expert’s justification . It was shown that and, since , the transition rule can be executed, making the session evolve into . That is, the new session state’s fourth component contains a preference question that will be sent to the expert. In the following example the client has more than one denier to deal with.
Consider a new scenario in which a client agent is in the state after adopting the justification . Let us assume that can also construct the arguments , , , , , , where the first five arguments are defeaters for , and is a defeater for . In addition, let us assume that the client’s preferences are , , and . Hence, when the client makes an introspection to check the mark of , it constructs the dialectical tree depicted in Fig. 6 in which there are four deniers: , , (proper defeaters), and (blocking defeater). Given that is marked Ⓓ and is the last argument sent by the expert, the client will generate the set of preference questions for . The result of is the set and, by the transition rule , the session reaches the state .
Whenever a set of preference questions is generated, the expert will answer them one by one by sending to the client its own preference between the corresponding pair of arguments. Then, the client will adopt such preference in order to deal with the deniers. Given a pair of arguments, the expert can either prefer the first over the second, prefer the second over the first, or believe that they are unrelated by the preference order, in which cases the expert will answer grt (greater), les (less), or unr (unrelated), respectively. Given a set of preference questions the expert agent always knows and is able to construct the second element of each pair because it corresponds to an argument that the client has previously received from the expert. However, the first argument in a preference question may not be considered as valid by the expert, either because it contains defeasible rules that are not in , uses evidence that is not in , or simply because it cannot be constructed (it is non-minimal or uses evidence that is contradictory to ). In this case, considering the assumptions we have made for the expert, it will always prefer an argument that it can construct over an argument that it does not consider as valid; and the answer will be les (less). Then, the preference sent by the expert for a pair of arguments is determined by the operator :
(Expert Preference).
Let be the expert agent. ’s preference between two arguments and is defined as:
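Operationally, the expert’s answer can be sketched as follows. This is a simplified reading of the operator: the predicate `valid` stands in for the expert’s check that it considers the first argument as valid, and `prefers` for its strict comparison criterion (both names are illustrative):

```python
def expert_preference(valid, prefers, A, B):
    """Answer the preference question (A, B): 'les' whenever A is not
    a valid argument for the expert; otherwise 'grt', 'les' or 'unr'
    according to the expert's own comparison criterion."""
    if not valid(A):
        return "les"      # B (which the expert sent) is preferred
    if prefers(A, B):
        return "grt"
    if prefers(B, A):
        return "les"
    return "unr"
```

Note that the second argument of the pair is always constructible by the expert, so only the first needs the validity check.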
When the client receives from the expert a preference between two arguments, it will adjust its own preferences accordingly. Similarly to an argument adoption, in a preference adoption the client respects the expert’s opinion. The adoption of a preference is determined by the operator :
(Preference Adoption).
Given a client agent , a preference question , and a preference answer . The preference adoption of P for by is defined as , where:
Note that the adoption of new preferences may require to withdraw from a preference between a pair of arguments which, from the expert’s position, is inaccurate.
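The adoption step can be sketched as follows, representing the client’s preference relation as a set of ordered pairs (an illustrative encoding, not the paper’s formal one): any previous ordering of the questioned pair is withdrawn, and the expert’s answer is installed in its place.

```python
def adopt_preference(prefs, question, answer):
    """Definition sketch: replace the client's preference between the
    questioned pair of arguments by the expert's answer ('grt', 'les'
    or 'unr'); 'unr' leaves the pair unrelated."""
    A, B = question
    prefs = {p for p in prefs if p not in {(A, B), (B, A)}}
    if answer == "grt":
        prefs |= {(A, B)}
    elif answer == "les":
        prefs |= {(B, A)}
    return prefs
```

For instance, answering unr removes an existing preference, which is what turns a proper defeater into a blocking one in the examples below.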
The transition rules and will specify how the session evolves once a received preference is adopted. On the one hand, is applicable when the adopted preference directly solves the problem with the denier. In this case, the corresponding preference question is removed from . On the other hand, is applicable when the received preference is not enough to solve the problem with the denier. In this case, as with the pair is removed from , but in addition is added to the set of denier questions. Every denier question is an argument for which the client needs an undefeated argument from the expert such that defeats .
Recall that, given a preference question asked by the client, is always undefeated from the expert’s point of view and, consequently, either is marked as Ⓓ or is an invalid argument that is not part of the dialectical tree. After adopting the corresponding preference for from the expert, the client will make an introspection to check the mark of . If is not marked as ⓤ then is no longer a denier (transition rule ). On the contrary, if is marked as ⓤ (i.e., is still a denier for ) the client will need from the expert an undefeated defeater for (transition rule ). must exist because otherwise – from the expert’s point of view – would not be an undefeated argument in favour of the justification. Note that the transition rules and will be executed until is empty (see γ in Fig. 2).
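The choice between these two transition rules can be sketched as one step of the session loop (illustrative names; `mark_of_denier` stands for the result of the client’s introspection for the denier after the expert’s preference has been adopted):

```python
def step_after_preference(pref_questions, denier_questions,
                          question, mark_of_denier):
    """Drop the answered preference question; if the denier is still
    undefeated ('U'), queue a denier question for it (second rule),
    otherwise nothing further is needed (first rule)."""
    denier, _ = question
    pref_questions = pref_questions - {question}
    if mark_of_denier == "U":
        denier_questions = denier_questions | {denier}
    return pref_questions, denier_questions
```

Iterating this step until the set of preference questions is empty mirrors the loop at γ in Fig. 2.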
Consider again Example 8, in which the transition rule was executed and the session reached the state . Here, although is not empty and , the transition rule cannot be executed because . Nevertheless, is applicable and the session evolves into .

Let us suppose that the client agent from Example 9 is in the session state with the expert agent after generating the preference questions for . Recall that the client’s preferences are , , and , and also let us assume that the expert’s preferences are , , , , and . Both agents’ (relevant) dialectical trees during this part of the session are depicted in Fig. 7. First, the client asks the preference question and the expert answers unr. Then, the client removes (see in Fig. 7) so is now a blocking defeater instead of a proper defeater. Since the introspection for results in ⓤ, the transition rule is executed causing to be added to the set of denier questions because the client (now ) needs a defeater for from the expert. Then, the session reaches the state . Second, the client asks the preference question and the expert answers . The client removes (see ), adds , and makes an introspection for which does not result in ⓤ. Then, is executed and the session evolves into . Third, the client asks the preference question and the expert answers les. The client proceeds to remove (see ), adds , and makes an introspection for which does not result in ⓤ. Then, by the session evolves into . Finally, the client asks the last preference question and the expert answers grt. The client proceeds to add (see ) so is now a proper defeater instead of a blocking defeater. Since the introspection for results in ⓤ, the transition rule is executed causing to be added to the set of denier questions because the client (now ) needs a defeater for from the expert. Then, the session reaches the state . Note that due to the changes in the client’s preferences ( and ), the arguments and are no longer defeaters for , and thus they are not part of the client’s dialectical tree for any more. Those arguments are greyed out in Fig. 7.
Whenever a set of denier questions is generated, the expert will answer them one by one by sending to the client an undefeated defeater for the corresponding denier. Similarly to the selection of the dialectical tree for the justification, the expert may have multiple undefeated defeaters for a denier and there are different strategies to select one of them. We will follow again the same modular approach as we have explained for the operator used in Definition 6: we assume the existence of an operator that the expert uses for selecting a suitable defeater. Therefore, the most appropriate implementation for could be used depending on the represented application domain. For instance, the expert could select – among the undefeated defeaters for the denier – the most relevant argument in that domain, or the strongest argument with respect to the used comparison criterion, etc. As stated above for the operator , the analysis and comparison of different implementations for is left as future work.
Apart from the defeater, the expert will also send the type of the defeater (proper or blocking) so that the client can adjust the preference between that argument and the corresponding denier in advance. The defeater sent by the expert agent for a denier is determined by the operator :
(Expert Defeater).
Given two arguments and , let be the expert agent, and be the dialectical tree constructed by such that . ’s defeater for is defined as , where:
Whenever the client receives from the expert a defeater for a denier, it will adopt the argument using the operator . In addition, it will adjust its preferences accordingly to capture the same type of defeat relationship. The adoption of a defeater is determined by the operator introduced next:
(Defeater Adoption).
Given a client agent , two arguments and such that is a defeater for , and a type of defeater “Type”. Let be the sub-argument of which is in conflict with. The defeater adoption of as defeater for by is defined as , where:
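The preference-adjustment part of a defeater adoption can be sketched as follows (the argument itself is adopted as for an ordinary argument adoption; the preference relation is encoded as a set of ordered pairs and all names are illustrative):

```python
def adopt_defeater_preferences(prefs, defeater, attacked_sub, dtype):
    """Adjust the client's preferences upon adopting a defeater: a
    'proper' defeater is recorded as preferred to the sub-argument it
    attacks, while a 'blocking' defeater leaves the pair unrelated."""
    prefs = {p for p in prefs
             if p not in {(defeater, attacked_sub),
                          (attacked_sub, defeater)}}
    if dtype == "proper":
        prefs |= {(defeater, attacked_sub)}
    return prefs
```

This way the defeat relationship in the client’s tree matches the type (proper or blocking) announced by the expert.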
For each denier question , the corresponding defeater adopted by the client will become a new pro argument since it is in favour of the expert’s justification. This behaviour is captured by the last transition rule:
The transition rule is executed whenever has elements, until it is empty (see ϵ in Fig. 2) and the session reaches the state (back to β in Fig. 2). For each denier question , the corresponding received defeater is added to the session state’s third component. This means that the next time the session evolves into the state γ (see Fig. 2), if is marked as Ⓓ, the client will proceed to ask the preference questions for (the last received argument).
Consider the last session state shown in Example 10 in which there are two denier questions for . Recall the expert’s dialectical tree depicted in Fig. 7 (left). The answer for the denier question from the expert will be , being the only defeater for marked as ⓤ. Then, the transition rule is executed: the client adopts the defeater , adds to its preferences, and the session evolves into . Note that is added to the set of arguments in favour of the claim of the justification. Next, the client asks the last denier question and the expert answers , being the only defeater for marked as ⓤ. The transition rule is executed again: the client adopts , adds to its preferences, is added to the set of arguments in favour of the claim of the justification, and then the session reaches the state (see β in Fig. 2). The client’s current dialectical tree for (the received justification for its original query) is depicted in Fig. 8. Since there are no pending preference or denier questions, the client makes an introspection to check the mark of the justification, which results in ⓤ. Therefore the transition rule is executed and the session evolves into the final state . Observe that the second component has the trace of the whole interaction.
The previous example illustrates a scenario in which the justification is marked as ⓤ after the client adopted the necessary defeaters (new pro arguments) for two deniers. However, as will be shown in Example 12, the client may have undefeated counter-arguments against any of the recently adopted defeaters. These counter-arguments are new deniers, which means that there is still at least one argument marked as Ⓓ in (the set of arguments in favour of the claim of the expert’s justification). Note that if the client has a denier for an argument received from the expert, that denier is also precluding the justification from being marked as ⓤ. In this case, the client will proceed to ask the corresponding preference and denier questions for the last received argument in marked as Ⓓ. When this argument becomes marked as ⓤ later in the session, the above pro argument marked as Ⓓ in the same argumentation line will also become marked as ⓤ (unless it has another denier) and so on, like a cascade effect.
Consider again the session between the client Kyle introduced in Example 5 and the expert agent from Example 4. The session starts with the initial state , by evolves into , by evolves into , by evolves into , and by evolves into . At this point, the client has adopted two arguments: and the defeater , so its knowledge has changed to where , , and . Although Kyle has received from the argument that defeats the denier , Kyle has the argument that is a blocking defeater for . The argument is also a denier since it is marked as ⓤ and is causing and (pro arguments) to be marked as Ⓓ. Therefore, the transition rule is executed again generating a new set of preference questions , and the session reaches the state . Then, the transition rule is executed and, since the expert believes that , the client adopts such preference and the session evolves into with , , and . Now, since and are no longer deniers, results in ⓤ and is applicable, leading to the final state in which the literal a is warranted by the agent .
Recall from the transition rule that the arguments in are only removed – all together – when the justification becomes marked as ⓤ. The reason is that arguments marked as ⓤ may become marked as Ⓓ again. The client constantly acquires new defeasible rules and facts during the session, which could allow it to construct a new argument (denier) which defeats a pro argument previously marked as ⓤ.
Before concluding this section, a detailed example will be included to show the complete interaction between the client and the stock market expert which were introduced in Example 1 to motivate our proposal. In this example we will explain step by step how ’s knowledge changes during the client-expert interaction with regarding the query about whether to buy stocks from the company Acme.
Consider the initial knowledge of the client agent where and
Consider again the expert agent introduced in Example 2, which can build the arguments , , , , and that were shown in Section 2. In addition, consider that the expert’s preferences are and . The session for the query “” will start from the initial state .
Figure 9 (left) depicts , the only dialectical tree that the expert can build for the query “”. Since , “” is warranted by the expert, , and . Consequently, the expert sends as justification for the query, and the client proceeds to adopt the argument . The agent becomes where
The client () is currently able to construct the arguments and . Therefore, by the transition rule , the session evolves into the state . Next, the client makes an introspection to check the mark of the justification. The client’s resulting dialectical tree for is , depicted in Fig. 9 (centre). Given that is a denier for , . Afterwards, the client selects from the session state’s third component (the last received argument marked as Ⓓ) and generates the corresponding preference questions for that argument: . Consequently, by the transition rule , the session evolves into the state: .
Then, the client asks the preference question , and the expert proceeds to answer with . Note that since the client’s set of preferences does not change. Afterwards, the client makes an introspection to check the mark of which results in . Accordingly, is added to the set of denier questions since the client now needs from the expert a defeater for . Therefore, by , the session evolves into the state .
Next, the client asks the denier question , and the expert answers with , the only defeater for marked as ⓤ. The client proceeds to adopt and becomes where and . Figure 9 (right) depicts , the client’s resulting dialectical tree for . Note that the client (now ) added the preference , being the sub-argument of which is in conflict with. Then, is a new argument in favour of the claim of the justification, and it is added to the session state’s third component. Thus, by the transition rule , the session evolves into the state .
Finally, given that there are no pending preference or denier questions, the client makes an introspection to check the mark of the justification, which results in since has no deniers. In consequence, by the transition rule , the session evolves into the final state . The client is now able to warrant the literal “”.
We will end this section by showing some results that prove our strategy behaves as expected. The following lemma shows that from any reachable session state (except a final one) there is always a single applicable transition rule. Recall that all the proofs are included in the Appendix at the end of the paper.
Let be a reachable state, there exists one and only one applicable transition rule from s.
Lemma 1 is important because it is used to prove that, given an initial state in which a client agent has made a query to an expert, there always exists a sequence of transitions that leads to the final state. That is, every session will terminate in a finite amount of time and, when that occurs, the client will have no open issues about the received knowledge. We will also show that, when a session arrives at a final state, the client has a warrant for the claim of the expert’s justification and, therefore, it will have adopted the expert’s position about that query.
Let be an initial state, there exists a sequence of transitions that leads from to .
The following corollary is a direct consequence of Theorem 1 and states that the sequence of transitions that leads to the final state is unique. Hence, a session will never loop infinitely.
Let be an initial state, the sequence of transitions that leads from to the final session state is unique.
Finally, the following result shows that in any final session state the client warrants the claim of the justification sent by the expert.
Let be a final state, the client agent warrants .
From the results given above, we can conclude that, for any session, the goal of our proposal will always be achieved: the expert provides just the necessary information to help the client understand its opinion about the query, and the client agent acquires relevant information regarding the topic of the query to be able to warrant the claim of the received justification.
Implementing the expert’s selection operators
In this section, we will propose two alternatives for implementing the operators and . Recall that is used by the expert to select a dialectical tree from a set of dialectical trees whose roots are all suitable as the justification for the client. Once is selected and the justification is sent, the client starts asking preference questions and denier questions until it is able to mark as ⓤ. Whenever a denier question is asked, is used by the expert to select an undefeated defeater from a set of defeaters for in .
We will define and with two different focuses. First, assuming a worst-case scenario regarding the client’s knowledge, we will define the selection operators to minimise the size of the client’s resulting dialectical tree, that is, the dialectical tree that the client will have to construct until it manages to believe in the claim of the justification. Intuitively, reducing the size of that tree also reduces the session’s length in terms of the number of preference questions and denier questions that need to be asked until the session finishes. Then, assuming a more realistic scenario, we will define the selection operators to help the expert reduce the size of the client’s resulting dialectical tree by considering the query’s context and the previous knowledge the client exposed during the session.
The size of the client’s resulting dialectical tree is bounded by which con arguments from (i.e., arguments against the justification) the client can construct. This follows from the fact that a denier which – from the expert’s perspective – is not a con argument will be removed from the client’s dialectical tree immediately after the expert sends les (recall Definition 10) in response to the corresponding preference question. In contrast, a denier that actually is a con argument will remain as such – in the best-case scenario – until the corresponding denier question is asked and the expert sends a defeater . Then, if the client can construct a defeater for , not only will the denier remain as such, but the defeater will also become a new denier that needs to be dealt with.
In the worst-case scenario, the client will have enough knowledge to be able to construct every con argument in before the session starts. In other words, the client will know every con argument that will be used in each of the dialectical trees that the expert will generate for the query. Nevertheless, even if that occurs, the client will not ask a preference question and a denier question for every con argument. Actually, the client will only do so for those con arguments that defeat the justification sent by the expert, and for the pro arguments (i.e., arguments in favour of the justification) posed by the expert to defeat the client’s deniers. Hence, since the expert has to send only one defeater for each denier question from the client, insightfully selecting them is the key to minimising the size of the client’s resulting dialectical tree.
Intuitively, if the expert wants to minimise the size of the client’s resulting dialectical tree, has to always select the defeater (pro argument) that will minimise the number of con arguments – known by the expert – that the client will have to deal with while “exploring” the argumentation lines. Analogously, has to select the dialectical tree that – considering is optimal – will minimise the number of con arguments that the client will have to deal with. Assuming the worst-case scenario, optimising the selections is feasible since the expert “knows” all the deniers that the client can pose.
Definition 14 details how to assign a worst-case scenario selection value (WCSSV) to an argument in a dialectical tree. In the case of a pro argument , this value represents how many con arguments the client will have to deal with – assuming the worst-case scenario – if the expert selects and, from then on, always selects the defeaters with the lowest WCSSV.
(Worst-Case Scenario Selection Value).
Given a dialectical tree , the worst-case scenario selection value of an argument in is defined as:
The operator’s first and second cases assign the corresponding WCSSV to all the dialectical tree’s leaves. The operator’s third case represents the fact that, whenever the client asks a denier question, the expert can choose the defeater that will cause the fewest con arguments to become new deniers for the client. The operator’s fourth case represents the fact that – in the worst-case scenario – the client will be able to construct every con argument below that pro argument. Figure 10 depicts two dialectical trees and their corresponding WCSSVs.
Two dialectical trees whose arguments contain their WCSSV. Undefeated arguments are coloured in white while defeated arguments are coloured in black.
Next, we define the operators and that minimise the number of con arguments that the client will have to deal with, assuming the worst-case scenario. Consequently, using these operators implies that the size of the client’s resulting dialectical tree is also minimised.
(Dialectical Tree Selection using Worst-Case Scenario Criterion).
Given a non-empty set of dialectical trees , the selected dialectical tree from is defined as where and there does not exist such that .
Consider an expert agent that, given a query, constructs the dialectical trees depicted in Fig. 10. In this case, the operator selects the one on the right since its root has the lowest WCSSV.
(Defeater Selection using Worst-Case Scenario Criterion).
Given a dialectical tree , and a non-empty set of defeaters in , the selected defeater from is defined as where and there does not exist such that .
Consider the selected dialectical tree from Fig. 10. The expert will initially send to the client the argument coloured in white with a WCSSV of 3 (i.e., [white; 3]) as the justification. Then, if the client can construct the argument [black; 3] and eventually asks the corresponding denier question, the expert has two defeaters to choose from: [white; 5] and [white; 2]. In this case, the operator selects [white; 2] – the one with the lowest WCSSV – which is sent to the client. Following the same criterion, if the client can construct any of the arguments [black; 1] and eventually asks the corresponding denier question, the expert will select and send the corresponding defeater [white; 0] below. In the worst-case scenario, the client’s resulting dialectical tree will be the one inside the dashed box.
Assuming the client has enough knowledge to construct every con argument from the expert may not be realistic depending on the application domain. However, it is clear that unless the client sends in advance all its knowledge, it is impossible for the expert to predict which con arguments the client will be able to build. Hence, given that minimising the size of the client’s resulting dialectical tree without making assumptions is not feasible, we will take another approach and define the selection operators to help the expert reduce it by considering the query’s context and the previous knowledge that the client exposed during the session.
As we mentioned in Section 3, before the session starts the client sends to the expert any specific personal information that represents a context for its query and would affect the expert’s answer. As proposed in [22], such contextual information can be temporarily considered by the expert to generate the dialectical trees for the client’s query without changing its own knowledge. Once the session has finished, the received context will disappear from the expert’s knowledge base and will not be used for answering queries from other clients. We refer the interested reader to [22] for the details of how different operators for temporarily integrating knowledge can be defined.
Following the aforementioned approach, the query’s context will be represented by a DeLP program . After the dialectical trees for the query are generated, this program will be referred to as the client’s previous knowledge and will be reused to store all the facts and defeasible rules the expert knows the client knows. Whenever the expert processes a preference question () where and , it will add to , and to . In addition, whenever the expert sends a defeater in response to a denier question, it will add to , and to . Note that receiving a context, using it to generate the dialectical trees, and keeping the client’s previous knowledge updated can be formally introduced into the operational semantics by slightly modifying the transition rule and Definitions 6, 10 and 12.
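The bookkeeping just described can be sketched as follows. The class and method names are hypothetical and, since the exact sets being added are given symbolically in the text, the sketch simply records the facts and defeasible rules of every argument the client exposes in a preference question and of every defeater the expert sends:

```python
# Hypothetical sketch of the expert's record of the client's previous
# knowledge, initialised with the query's context and grown during a session.
class PreviousKnowledge:
    def __init__(self, context_facts, context_rules):
        # the query's context: the DeLP program the client sends up front
        self.facts = set(context_facts)
        self.rules = set(context_rules)

    def on_preference_question(self, argument):
        # the client exposed this argument, so the expert now knows the
        # client knows its facts and defeasible rules
        self.facts |= argument["facts"]
        self.rules |= argument["rules"]

    def on_defeater_sent(self, defeater):
        # the expert sent this defeater, so the client will certainly know it
        self.facts |= defeater["facts"]
        self.rules |= defeater["rules"]
```

The same record is what the constructibility-based operators later in this section consult when estimating which con arguments the client is likely to build.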
Even though it is impossible to predict which con arguments the client will be able to build, the expert can use the client’s previous knowledge to select the dialectical tree and the defeaters based on the con arguments that the client is most likely to be able to build. In particular, a constructibility ratio can be calculated for each con argument, corresponding to the number of elements from that argument (facts and defeasible rules) that are present in the client’s previous knowledge over the total number of elements. However, when calculating a constructibility ratio, the expert should not only consider the information the client knows at the moment, but also the information the client will certainly know if it interacts with that particular con argument later in the session. This information consists of all the evidence and defeasible rules of the arguments that are above the con argument under consideration in the corresponding argumentation line, as defined next:
(Ancestral Elements).
Given a dialectical tree and an argument in , let () be an argumentation line in . The ancestral elements of are defined as .
Then, an argument’s constructibility ratio can be calculated by considering both the client’s previous knowledge and the argument’s ancestral elements, as defined next:
(Constructibility Ratio).
Let be the client’s previous knowledge. Given a dialectical tree and an argument in , the constructibility ratio of is defined as:
Following a similar strategy to the WCSSV, the expert can use the constructibility ratio to assign a constructibility selection value (CSV) to the arguments in a dialectical tree. In the case of a pro argument , this value represents the sum of the constructibility ratios of the con arguments the client may have to deal with if the expert selects and, from then on, always selects the defeaters with the lowest CSV.
(Constructibility Selection Value).
Given a dialectical tree , the constructibility selection value of an argument in is defined as:
Con arguments increase the total CSV by the value corresponding to their constructibility ratio, instead of increasing it by 1 as in the operator . This section concludes with another approach to defining the operators and in order to help the expert reduce the number of con arguments that the client will have to deal with by using just the client’s previous knowledge. Consequently, using these operators implies that the number of preference questions and denier questions that need to be asked until the session finishes is also reduced.
(Dialectical Tree Selection using Constructibility Criterion).
Given a non-empty set of dialectical trees , the selected dialectical tree from is defined as where and there does not exist such that .
(Defeater Selection using Constructibility Criterion).
Given a dialectical tree , and a non-empty set of defeaters in , the selected defeater from is defined as where and there does not exist such that .
Since Definitions 20 and 21 use the operator , which relies on the knowledge the expert knows the client knows, there is no guarantee of an effective reduction in the size of the client’s resulting dialectical tree. Nevertheless, they clearly provide an advantage over a method based on random selection.
Rejecting the expert’s opinion
As we have stated in Section 1, we focus on a proposal in which the client’s goal is to ask questions to an expert in order to acquire knowledge until it believes in the expert’s qualified opinion. As explained, we assume that the client conceives the other agent as an expert on the matter and it will be committed to believe in the answer for its query. Therefore, the client will adopt all the information received from the expert and, in case of a contradiction with its previous knowledge, it will prefer the expert’s opinion.
Although our approach was developed for application domains in which committing to the expert’s opinion is the best alternative, there can be situations in which it is better not to do so. Clearly, the client is the one that has the responsibility of deciding whether to accept the expert’s opinion. For an excellent analysis of this matter see [46], where critical questions are proposed to guide such a decision. If the assumption of commitment that we have adopted is relaxed, then the client could argue (with itself or with other agents) about whether to accept the expert’s opinion.
Next, we will introduce three additional transition rules which can be included in the operational semantics in order to relax that assumption. With these transition rules, the client can opt to reject an argument or a preference sent by the expert agent, causing the session to end immediately.
With we denote that the client agent rejects the argument . This may occur, for example, if the client believes in a fact α that is in contradiction with ’s evidence, and refuses to withdraw α from its knowledge base in order to adopt . This transition rule uses the same current state as the transition rule , allowing the client to not adopt the justification sent when the session begins.
With we denote that the client rejects the preference of over . This may occur, for instance, if the client refuses to update its preferences according to what the expert believes. This transition rule uses the same current state as the transition rules and , allowing the client to not adopt the preference sent by the expert.
With we denote that the client rejects the argument as a defeater of for . Similar to , this may occur – for instance – if the client believes in a fact α that is in contradiction with ’s evidence, and refuses to withdraw α from its knowledge base in order to adopt . This transition rule uses the same current state as the transition rules , allowing the client to not adopt the argument as a defeater for .
Note that after any of these transition rules is executed the session evolves into a final state. The reason is that, if the client rejects an expert’s argument or preference, it cannot be guaranteed that the client will be able to believe in the claim of the justification any more.
The reader should note that, if any of these transition rules is added to the system, Theorem 2 will no longer hold. In addition, , , and must be modified as follows:
must be added to ’s condition.
must be added to ’s and ’s condition.
must be added to ’s condition.
In Section 7 we will discuss [46] and a possible approach to tackle the implementation of the operator .
Discussion
Throughout this paper, different design choices were made to tackle the issues that our approach addresses. In this section we will discuss some of those decisions and, for some of them, comment on possible alternatives.
As we explained in the previous sections, the goal of the client agent after adopting the justification sent by the expert agent is to believe in such argument, that is, to mark it as ⓤ. The reader may think that if, instead of doing that, the client aims to find any undefeated argument that claims the same as , the session with the expert would be shorter or the changes in the client’s knowledge base would be fewer. Although we could find an example in which that actually happens, this is not necessarily always the case because neither agent is aware of all the knowledge the other agent has. For instance, consider the following scenario. The expert sends a justification to the client which, after the adoption, is marked as Ⓓ. By asking just one preference question and one denier question could mark as ⓤ, but cannot predict this in advance. Instead, since has some rules and facts that could potentially construct a new argument which claims the same as , decides to pose to these pieces of information in order to find out ’s opinion and to receive other pieces of information that could help itself build . However, after a few interactions, tells that one of the facts required to build is invalid, thus making an invalid argument and causing the previous interactions to be, in some sense, wasted.
In addition, consider the expert’s justification . If after adopting the client uses only a proper subset of in order to warrant j instead of warranting , an undesirable result can arise as shown in the following example:
Consider the expert ’s and client ’s knowledge bases depicted in Fig. 11, and that for the argument is preferred to . If ’s query is j, then ’s justification for j is . In our approach, in order to warrant , would have to deal with the denier . Instead of that, if only adds to its knowledge base the fact b that is part of ’s evidence, then would build the argument which, from its point of view, warrants j since has no deniers. However, note that if this alternative were followed, the interaction would end with an undesirable result. Although ’s justification contained a sub-argument for a, will end up ignoring it and believing in . Hence, by proceeding this way, the interaction would finish, but the argument that would have for j is not accurate from the expert’s point of view. Instead, using our proposal, would have asked about the preference between the sub-argument and , and then it would have adopted both this preference and .
Client’s and expert’s knowledge bases from Example 14.
Recall the operator introduced in Definition 6. The reader may think that if the expert sends to the client all the justifications it has (instead of selecting one) then the interaction could be shorter. Nevertheless, as shown in the example below, with this alternative both the interaction and the changes to the client’s knowledge could increase: either the client has to explore the associated dialectical tree for each justification, or it has to make all the necessary changes to accept them all.
Consider a client agent that, instead of just one, receives two justifications and from the expert agent . Suppose that has five deniers for (see Fig. 12(a)) while it has only one denier for (see Fig. 12(b)), and thus starting the session with seems to be the option that would imply the shortest interaction with the expert. Then, asks the preference question and replies with grt since it prefers over . Next, since the denier is still undefeated, asks for a defeater for and receives . After adopting , realises that it has thirty deniers for it (see Fig. 12(c)) and the interaction with the expert will have at least thirty additional preference questions. Note that, if had started the session with and the expert had replied with les to the five preference questions , then the interaction would have been shorter.
When Definition 9 was introduced, we mentioned that we opted to use preference questions in order to abstract away from the details of how the client’s criterion would be modified. Although we could have formalised our approach with a particular family of preference criteria, similarly to [7,8], we aimed for a high-level formalisation that allows to be instantiated with any argument comparison criterion from any family. As we will show next, any comparison criterion can be applied to our approach by making some changes in the operators and . For instance, consider the argument comparison criterion referred to as rule’s priorities [21,22]. In this criterion, an argument is preferred to another argument if there exists at least one rule in and one rule in such that and there is no in and in such that , where “>” is a partial order among defeasible rules that is explicitly provided with the program. The intuitive meaning of is that the rule is preferred over in the application domain being considered. Hence, since the argument comparison criterion is based on the preferences among defeasible rules, both the expert agent and the client agent would have the sets and implemented as a partial order among their defeasible rules ( and , respectively). Given a preference question , in addition to the corresponding preference (grt, les or unr), the operator would send two sets: an add-set and a delete-set. The add-set would contain all the pairs of rules such that is in and is in (or vice versa) and is in .
Furthermore, if considers that is not a valid argument because of any or some of the following reasons, these sets would contain additional elements: if uses a defeasible rule that is not in , the add-set would also contain all the pairs for every rule in ; if is non-minimal, the add-set would also contain all the facts from that cause to be non-minimal; if uses evidence that is not in , the delete-set would contain all that evidence; finally, if uses evidence that is contradictory to , the delete-set would also contain all that contradictory evidence. On the other hand, the operator would add to and all the elements in the add-set and would delete from and all the elements in the delete-set. For instance, if asks the preference question , will answer with grt, and , and then will add (, ) to .
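The rule-priority comparison criterion described above can be sketched directly. Here an argument is represented simply as the set of its defeasible rules, and the explicit partial order “>” as a set of (stronger, weaker) pairs; both representations are illustrative assumptions, not the encoding used in [21,22].

```python
def preferred(a1, a2, prefer):
    """True iff a1 is preferred to a2 under the rule-priority criterion:
    some rule of a1 beats some rule of a2, and no rule of a2 beats any
    rule of a1. `prefer` holds the (stronger, weaker) pairs of ">"."""
    some_stronger = any((r1, r2) in prefer for r1 in a1 for r2 in a2)
    none_weaker = not any((r2, r1) in prefer for r1 in a1 for r2 in a2)
    return some_stronger and none_weaker
```

Note that the second condition makes the criterion cautious: having one stronger rule is not enough if the opposing argument also contains a rule that dominates one of the argument’s own rules.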
Related work
In [47], the authors define dialogue as a normative framework comprising the exchange of arguments between two participants reasoning together to achieve a collective goal. During a dialogue, the participants take turns to make “moves” and have individual goals and strategies in making such moves. A move is a sequence of locutions provided by a participant at a particular point in the dialogue sequence. In addition, the authors define four kinds of rules which characterise different types of dialogue: locution rules that define the permissible locutions – like statements, questions, inferences, and so on; structural rules that define the order in which moves can be made by each participant; commitment rules that define the insertion and deletion of propositions from a participant’s commitment store – as a consequence of its moves; and win-loss rules that determine the conditions under which a player wins or loses the game.
Regarding the different types of dialogues, the authors particularly define the information-seeking dialogue, in which one participant has some knowledge and the other party lacks and needs that information. The goal of this type of dialogue is to share that knowledge. Expert consultation is a subtype of the information-seeking dialogue in which one participant is an expert in a particular domain or field of knowledge, and the other is not. By asking questions, the non-expert participant (in [47], the layman) tries to elicit the expert’s opinion (advice) on a matter of which the questioner itself lacks direct knowledge. In this kind of dialogue, the questioner can arrive at a presumptive conclusion which gives a plausible expert-based answer to its question. Our proposal clearly fits the information-seeking and expert consultation concepts. Furthermore, we could instantiate every element in the definition of dialogue with elements of our strategy to make a match. For instance, there are strong similarities between the “locution rules” and the operators used by the agents (e.g. and ), between the “structural rules” and the preconditions of the transition rules, between the “commitment rules” and the configurations of the current and resulting states of the transition rules, and between the “win-loss rules” and the final session state . However, instead of defining a dialogue framework in which some rules and strategies are left to be specified by the agent designer, in this work we focus on defining a specific strategy that guarantees that both agents’ goals are always achieved. That is, when the session finishes, the client agent will believe in the claim of the expert’s justification.
In [31], the authors present a framework for argumentation-based dialogues between agents. They define a set of locutions by which agents can trade arguments, a set of agent attitudes which relate what arguments an agent can build and what locutions it can make, and a set of protocols by which dialogues can be carried out. Regarding the assertion attitudes, an agent may be either confident, if it can assert any proposition p for which it can construct an argument ; or thoughtful, if it can assert any proposition p for which it can construct an acceptable argument . Regarding the acceptance attitudes, an agent may be either credulous, if it can accept any proposition p that is backed by an argument; cautious, if it can accept any proposition p if it is unable to construct a stronger argument for ; or skeptical, if it can accept any proposition p if there is an acceptable argument for p. Although in our approach the agents deal with arguments instead of propositions, we can informally categorise them in their proposed terminology. The expert agent acts with a thoughtful attitude because it only answers a query if it can construct an undefeated argument for either p or . We consider that an expert with a confident attitude – that is able to send arguments without considering their marks (ⓤ or Ⓓ) – would not behave as an expert according to [47]. On the other hand, the client agent follows a skeptical attitude because it only accepts a conclusion p when it has an accepted (undefeated) argument for p.
Based on the typology of [47], the authors of [31] also define information-seeking dialogues in which one agent A seeks the answer to some question from another agent B, which is believed by the first to know the answer. In the protocol which they provide for an information-seeking dialogue about a proposition p, first A asks a and B replies with either , , or , depending upon the contents of its knowledge base and its assertion attitude. indicates that, for whatever reason, B cannot give an answer and – as in our approach – the dialogue terminates without the question being resolved. Excluding this case, A either accepts B’s response if A’s acceptance attitude allows it, or challenges it to make B explicitly state the argument supporting that proposition. Then, B replies to that challenge with an , where S is the support of an argument for the challenged proposition, and this process is repeated for every proposition in S. In contrast, in our approach it is not optional to receive an argument backing a proposition. Given that our agents’ inference mechanism is argumentative, the client agent will require a justification. Finally, unlike our proposal, the authors state that if the agent which starts the information-seeking dialogue by making a question is either skeptical or cautious, it may never accept the proposition whatever the other agent says.
In [17–19], the authors model information-seeking dialogues using Assumption-Based Argumentation (ABA) [16]. In that proposal, a questioner agent α proposes a topic χ and an answerer agent β utters information of relevance to χ. They assume that the questioner contributes no information apart from initiating the dialogue and that the answerer is interested in conveying information for χ, but not against it. Strategy-move functions are used to help the agents identify suitable utterances that advance the information-seeking dialogue towards its goal while fulfilling the participants’ aims. The authors also define two subtypes of dialogues: IS-Type I, in which the answerer agent conveys all arguments for χ, and IS-Type II, in which the answerer agent conveys only one argument for χ. Unlike that approach, in our work we consider that the client agent may have previous knowledge in conflict with the information acquired from the expert agent. Given that the client agent is committed to believe in the justification for the query sent by the expert, the client will need to adapt its previous knowledge, losing as little information as possible. This may imply either the removal of facts contradicting unchallengeable information acquired from the expert, the addition of new knowledge that allows the construction of arguments provided by the expert, or even further interactions with the expert.
A framework for representing inquiry dialogues that uses Defeasible Logic Programming as the underlying representation formalism is presented in [7,8], and its implementation is reported in [35]. Their approach, unlike ours, does not deal with information-seeking dialogues or expert consultations. Instead, their focus is on inquiry dialogues [47], in which the initial situation, the main goal, the participants’ aims, and the side benefits are different from the ones in information-seeking. Inquiry dialogues are defined as arising from an initial situation of “general ignorance” and as having the main goal to achieve the “growth of knowledge and agreement”, while each individual participant aims to “find a ‘proof’ or destroy one”. As explained in [47], in inquiry dialogues there is a common problem to be solved between the participants, while in information-seeking dialogues there is not. In the latter, the knowledge is already there, and the problem is to communicate it from one party to the other.
The authors define and give the necessary details to generate two subtypes of inquiry dialogues. Argument inquiry dialogues allow two agents to share beliefs to jointly construct arguments for a specific claim that neither of them may construct from their own personal beliefs alone. For instance, an agent that wants to construct an argument for ϕ can open an argument inquiry dialogue with the defeasible rule as its topic. If the two participants manage to provide arguments for each of the elements (), then it would be possible for an argument for ϕ to be constructed. In our approach, after constantly acquiring new arguments during the session, the questioner agent may also be able to build new arguments that neither agent could construct from its own personal beliefs alone. However, asking for sub-arguments for the antecedents of a defeasible rule is unnecessary: the questioner knows that if it needs to find a defeater for a certain argument then the expert can certainly provide a complete argument that serves such purpose. On the other hand, warrant inquiry dialogues allow two agents that are interested in determining the acceptability of a particular argument to share arguments in order to jointly construct a dialectical tree that neither of them may construct from their own personal beliefs alone. In our proposal, the agents do not jointly build a dialectical tree per se; rather, the expert provides just enough arguments to help the questioner mark the justification argument as undefeated in its corresponding dialectical tree. This implies that the two agents’ dialectical trees for the justification argument could differ by the time the session finishes. Argument inquiry dialogues are often embedded within warrant inquiry dialogues. Without embedded argument inquiry dialogues, the arguments that can be exchanged within a warrant inquiry dialogue potentially miss out on useful arguments that involve unexpressed beliefs of the other agent.
The main contribution of [7,8] is a protocol and strategy sufficient to generate sound and complete warrant inquiry dialogues. To prove this, the authors compare the outcome of their dialogues with the outcome that would be arrived at by a single agent whose beliefs are the union of the beliefs of the two agents participating in the dialogue. This is, in a sense, the “ideal” situation in which there are clearly no constraints on the sharing of beliefs. However, this union of both agents’ beliefs is only possible because it is assumed that the agents use defeasible facts instead of strict facts. Therefore, in contrast with our approach, no contradictions can arise from both agents’ joint beliefs. In addition, the authors assume a global preference ordering across all knowledge, from which a global preference ordering across arguments is derived. Hence, unlike in our proposal, an exchange of opinion on the preferences among the participants’ arguments is unnecessary.
The author of [46] explains appeal to expert opinion, a form of argument based on the assumption that the source is alleged to be in a position to know about a subject because he or she has expert knowledge of that subject. In addition, the author notes a natural tendency to respect experts and treat them as infallible, which is a dangerous attitude since experts can be wrong. For this reason, an informal argumentation scheme for deciding whether to accept an expert’s opinion is introduced. The author argues that it is vital to see appeal to expert opinion as defeasible, i.e., as open to the following critical questions: (1) How credible is E as an expert source? (2) Is E an expert in the field that A (the proposition) is in? (3) What did E assert that implies A? (4) Is E personally reliable as a source? (5) Is A consistent with what other experts assert? (6) Is E’s assertion based on evidence? Regarding question 5, the consistency question, the author mentions that one can compare A with other known evidence, particularly with what other experts in the field say.
The consistency question from [46], together with the proposal of [25,39,40], could be used in our formalism to provide an implementation for the operator defined in Section 5. The authors consider that agents can obtain information from multiple informants, and that the trust attributed to a particular informant can be higher than the trust attributed to others. Each agent has its own partial order among the different informants, representing the credibility it assigns to them. Then, when information obtained from different informants is in conflict, trust is used in the decision process leading to a prevailing conclusion. In our proposal, whenever the expert sends the client an argument containing evidence in contradiction with what the client believes, the operator could compare the trust assigned to the different informants to decide which fact prevails. Then, if any of the facts in the expert’s argument’s evidence does not prevail, the operator returns true, causing the argument to be rejected. The same comparison can be made whenever the client needs to modify its preferences between arguments.
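A minimal sketch of such a trust-based decision, in the spirit of [25,39,40], follows. The class, the pair-based encoding of the credibility order, and the `(literal, informant)` fact representation are our own assumptions, not the exact machinery of those works.

```python
# Illustrative sketch: resolving conflicting facts by informant credibility.
# The client keeps a strict partial order over informants; a fact coming
# from a more credible informant prevails over its complement.

class CredibilityOrder:
    def __init__(self, pairs):
        # pairs = {(a, b)} meaning "a is strictly more credible than b"
        self.more_credible = set(pairs)

    def prevails(self, informant, over):
        return (informant, over) in self.more_credible

def resolve_conflict(order, fact_a, fact_b):
    """Each fact is a (literal, informant) pair. Return the prevailing
    fact, or None when the order does not rank the two informants."""
    if order.prevails(fact_a[1], fact_b[1]):
        return fact_a
    if order.prevails(fact_b[1], fact_a[1]):
        return fact_b
    return None  # incomparable informants: the conflict stays unresolved

order = CredibilityOrder({("expert", "web_forum")})
winner = resolve_conflict(order, ("low_fuel", "expert"),
                          ("~low_fuel", "web_forum"))
```

Since the order is partial, incomparable informants leave the conflict unresolved, which is where a concrete implementation of the operator would have to make a policy choice.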
We will end this section with a comment on the differences between our proposal and both belief revision and the revision of argumentation frameworks (AFs). Recall that, whenever the client agent receives an argument from the expert agent (either the justification or a defeater for a con argument), the operator (see Definition 7) removes from the client’s knowledge base any facts that are in contradiction with the argument’s evidence, in a prioritized belief revision fashion. In addition, differently from traditional belief revision, the operator adds all the argument’s defeasible rules to the client’s knowledge base, since DeLP allows the defeasible derivation of contradictory literals. This allows the client not only to construct the received argument, but also to combine its own defeasible rules with the newly acquired knowledge to construct new arguments. Given that the client may have deniers for the justification and its goal is to mark the justification as ⓤ, belief revision techniques are not enough: after adopting the argument, the client may need to change its preferences between arguments to be aligned with the expert’s opinion on the matter.
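The adoption step just described can be sketched as follows. This is our illustration of the behaviour attributed to the operator of Definition 7, using hypothetical names and a string-based literal encoding (with `~` for the complement), not the paper’s exact definition.

```python
# Hedged sketch of argument adoption: facts contradicting the incoming
# argument's evidence are withdrawn (prioritized revision), while all of
# the argument's defeasible rules are added, since DeLP tolerates the
# defeasible derivation of contradictory literals.

def complement(literal):
    return literal[1:] if literal.startswith("~") else "~" + literal

def adopt_argument(facts, defeasible_rules, arg_evidence, arg_rules):
    # Withdraw any client fact whose complement appears in the evidence.
    kept = {f for f in facts if complement(f) not in arg_evidence}
    # The expert's evidence and defeasible rules are incorporated as-is.
    return kept | set(arg_evidence), defeasible_rules | set(arg_rules)

# Rules are encoded as (head, (body...)) pairs for illustration only.
facts, rules = adopt_argument(
    facts={"a", "~b"},
    defeasible_rules={("c", ("a",))},
    arg_evidence={"b"},                # contradicts the client's ~b
    arg_rules={("d", ("b",))})
```

Note that the client’s own rule survives alongside the acquired one, which is what lets it combine old and new defeasible knowledge into new arguments.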
Since changing preferences alters how arguments defeat each other, our proposal may resemble existing work on the revision of argumentation frameworks [11,13,14,30,38]. These approaches, regardless of their individual goals, end up adding or removing arguments or attacks and returning a new AF or set of AFs as output. In particular, [11,13] revise AFs by modifying the sets of extensions and then modifying the attack graph according to the newly obtained extensions, while [14] focuses on updating the extensions. However, our proposal differs both conceptually and methodologically: we do not want the client agent to mark the expert’s justification as ⓤ in a single step by changing its preferences or previous knowledge without further information. Instead, we want the client to maintain the communication with the expert, asking questions to keep acquiring relevant knowledge (arguments and preferences) and making informed changes based on a qualified opinion.
Conclusions and future work
In this work, we have presented a strategy that involves two agents: an expert agent that has expertise in a particular domain or field of knowledge and a client agent that lacks that quality. The inexpert client or questioner initially makes a query to the expert in order to acquire knowledge about a topic in which it lacks expertise. The client agent is committed to believe in the answer to the query and, unlike other approaches, we consider that the client may have previous knowledge in conflict with the information acquired from the expert agent. Thus, the client may need to adapt its previous knowledge, without making unnecessary changes, in order to accept what the expert says.
A naive solution to the proposed problem would be for the expert to send all of its knowledge to the client, but this is not feasible because, depending on the domain, the expert may have private information or its knowledge could be very extensive. Furthermore, the joint knowledge base of both agents would probably contain contradictions that are completely unnecessary to resolve because they fall outside the domain or field of knowledge of the query. For instance, consider a simple scenario in which the expert believes in , and the client believes in and asks the query . If we join both knowledge bases, this contradiction must be treated, but unnecessarily removing facts from the client’s knowledge base that are irrelevant with respect to the client’s query, just to solve this conflict, would not be appropriate. Another naive solution would be for the client to imitate the expert’s knowledge about the query by simply adding an unchallengeable fact to its knowledge base, but then the client would blindly believe in the answer that was sent. In contrast, with the strategy we propose in this work, the client can arrive at a presumptive conclusion which gives a plausible, expert-based answer to its initial query. From a high-level point of view, the client agent learns to think about the topic in question like the expert agent does.
All interactions between both agents occur during a session which will start only if the expert has a warrant for the literal of the query or its complement, i.e., it must be certain about the topic of the client agent’s query. In this case, the expert will select one of its undefeated arguments to send to the client as justification. Whenever the client acquires new knowledge from the expert, it may have to withdraw some of its previous contradictory knowledge (which is inaccurate from the expert’s position).
The goal of this strategy is for the client to be able to mark the justification as ⓤ (that is, to come to believe in the claim of the justification) without making unnecessary changes to its previous knowledge. However, after adopting the justification, the client may have denier arguments for it, some of which the expert may or may not acknowledge. In this case, the client will have to ask the expert preference and denier questions, from which it will acquire new preferences and arguments. The session will continue until the goal is finally achieved. We proved that every session eventually finishes and that, when this happens, the client agent will believe in the claim of the expert’s justification. This means that the goal of this strategy is always achieved. Another conclusion we can draw is that the fewer pieces of information (facts and rules) the client previously has about the topic of the query, the shorter the session will be. This property holds because the client will have fewer denier arguments against the justification, and it will be easier to mark it as ⓤ. Ultimately, it could occur that the client knows absolutely nothing, in which case the goal of the session would simply be achieved by adopting the justification from the expert.
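The client’s side of a session can be illustrated with the following toy simulation. The classes and method names are hypothetical stand-ins for the operational semantics, reduced to the essential loop: adopt the justification, then keep asking about deniers until none remains undefeated.

```python
# Toy, self-contained sketch of a session (not the paper's transition
# system): the client adopts the expert's justification and then keeps
# asking about its remaining deniers until the justification is marked U.

class Expert:
    def __init__(self, justification, answers):
        self.justification = justification   # claim the expert warrants
        self.answers = answers               # denier -> defeater argument

    def answer(self, denier):
        return self.answers[denier]

class Client:
    def __init__(self, deniers):
        self.deniers = set(deniers)          # deniers not yet defeated
        self.acquired = []

    def adopt(self, argument):
        self.acquired.append(argument)
        self.deniers.discard(argument.get("defeats"))

    def marks_undefeated(self):
        return not self.deniers

def run_session(client, expert):
    client.adopt({"claim": expert.justification})
    while not client.marks_undefeated():
        denier = next(iter(client.deniers))
        client.adopt(expert.answer(denier))  # acquire a defeater for it
    return expert.justification

expert = Expert("flies", {"penguin": {"claim": "flies", "defeats": "penguin"}})
client = Client({"penguin"})
result = run_session(client, expert)
```

The sketch also reflects the observation above: a client constructed with fewer deniers goes through fewer loop iterations, and a client with no deniers at all finishes after adopting the justification.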
DeLP was used as the underlying knowledge representation and reasoning mechanism in order to show how to solve the problems associated with the agents’ argument structures in an information-seeking setting. Conceptually, our proposal can be divided into three levels: first, a level corresponding to the outline illustrated in Fig. 2, which abstracts away from the argumentative reasoning mechanism used by the agents; second, a level corresponding to the actual transition rules, whose conditions are defined by different operators; and third, a level corresponding to the implementation of those operators. In particular, the transition rules and the operators were defined considering the particular characteristics of DeLP. Even though the separation of these three conceptual levels provides some modularity to our approach, the second level would need some modifications in order to adapt the formalism to another structured argumentation formalism (e.g., ASPIC+ [29]). The transition rules should be defined more abstractly so as to allow different operators – regardless of how they are implemented – as long as they satisfy certain conditions. The modularization of the transition system is left as future work.
Future work has multiple directions. Although DeLP allows the use of strict rules (besides defeasible rules), we have not considered them in our proposal. We plan to adapt the operational semantics of our formalism to allow agents to have strict rules in their knowledge bases. Hence, in order to guarantee that the client will always be able to believe in the expert’s justification, different belief revision operators will be necessary to insightfully adapt the client’s knowledge during the session. In addition, we are interested in adapting our strategy to the didactic dialogues proposed in [47], in which the purpose of the expert agent is not just to satisfy the doubts of the client agent, but also to turn it into an expert itself. To provide a strategy for didactic dialogues, our expert agent would need to deliberately decide how to share all its knowledge about its field of expertise with the client. This could also imply an effort from the client to show the expert how deep its knowledge regarding the topic at hand is. Finally, we would like to further analyze the selection operators defined in Section 4 in order to formally define minimal information exchange in the context of a session between a client and an expert.
Acknowledgements
This work has been partially supported by PGI-UNS (grants 24/ZN32, 24/N046).
References
1.
R.A. Agis, S. Gottifredi and A.J. García, An approach for distributed discussion and collaborative knowledge sharing: Theoretical and empirical analysis, Expert Systems with Applications 116 (2019), 377–395. doi:10.1016/j.eswa.2018.09.016.
2.
L. Amgoud, C. Devred and M. Lagasquie-Schiex, Generating possible intentions with constrained argumentation systems, Int. J. Approx. Reasoning 52(9) (2011), 1363–1391. doi:10.1016/j.ijar.2011.07.005.
3.
L. Amgoud, Y. Dimopoulos and P. Moraitis, A general framework for argumentation-based negotiation, in: Argumentation in Multi-Agent Systems, 4th International Workshop, ArgMAS 2007, Honolulu, HI, USA, May 15, 2007, Revised Selected and Invited Papers, 2007, pp. 1–17.
4.
K. Atkinson, T.J.M. Bench-Capon and P. McBurney, A dialogue game protocol for multi-agent argument over proposals for action, Autonomous Agents and Multi-Agent Systems 11(2) (2005), 153–171. doi:10.1007/s10458-005-1166-x.
5.
P. Bedi and P.B. Vashisth, Empowering recommender systems using trust and argumentation, Inf. Sci. 279 (2014), 569–586. doi:10.1016/j.ins.2014.04.012.
6.
T.J.M. Bench-Capon, Persuasion in practical argument using value-based argumentation frameworks, J. Log. Comput. 13(3) (2003), 429–448. doi:10.1093/logcom/13.3.429.
7.
E. Black and A. Hunter, A generative inquiry dialogue system, in: 6th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2007), Honolulu, Hawaii, USA, May 14–18, 2007, p. 241.
8.
E. Black and A. Hunter, An inquiry dialogue system, Autonomous Agents and Multi-Agent Systems 19(2) (2009), 173–209. doi:10.1007/s10458-008-9074-5.
9.
C.E. Briguez, M.C. Budán, C.A.D. Deagustini, A.G. Maguitman, M. Capobianco and G.R. Simari, Argument-based mixed recommenders and their application to movie suggestion, Expert Syst. Appl. 41(14) (2014), 6467–6482. doi:10.1016/j.eswa.2014.03.046.
10.
Á. Carrera and C.A. Iglesias, A systematic review of argumentation techniques for multi-agent systems research, Artif. Intell. Rev. 44(4) (2015), 509–535. doi:10.1007/s10462-015-9435-9.
11.
S. Coste-Marquis, S. Konieczny, J.-G. Mailly and P. Marquis, On the revision of argumentation systems: Minimal change of arguments statuses, in: Fourteenth International Conference on the Principles of Knowledge Representation and Reasoning, 2014.
12.
J. Devereux and C. Reed, Strategic argumentation in rigorous persuasion dialogue, in: Argumentation in Multi-Agent Systems, 6th International Workshop, ArgMAS 2009, Budapest, Hungary, May 12, 2009, Revised Selected and Invited Papers, 2009, pp. 94–113.
13.
M. Diller, A. Haret, T. Linsbichler, S. Rümmele and S. Woltran, An extension-based approach to belief revision in abstract argumentation, International Journal of Approximate Reasoning 93 (2018), 395–423. doi:10.1016/j.ijar.2017.11.013.
14.
S. Doutre, A. Herzig and L. Perrussel, A dynamic logic framework for abstract argumentation, in: Fourteenth International Conference on the Principles of Knowledge Representation and Reasoning, 2014.
15.
P.M. Dung, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games, Artificial Intelligence 77(2) (1995), 321–357. doi:10.1016/0004-3702(94)00041-X.
16.
P.M. Dung, R.A. Kowalski and F. Toni, Assumption-based argumentation, in: Argumentation in Artificial Intelligence, Springer, 2009, pp. 199–218. doi:10.1007/978-0-387-98197-0_10.
17.
X. Fan and F. Toni, Assumption-based argumentation dialogues, in: IJCAI 2011, Proceedings of the 22nd International Joint Conference on Artificial Intelligence, Barcelona, Catalonia, Spain, July 16–22, 2011, pp. 198–203.
18.
X. Fan and F. Toni, Agent strategies for ABA-based information-seeking and inquiry dialogues, in: ECAI 2012 – 20th European Conference on Artificial Intelligence. Including Prestigious Applications of Artificial Intelligence (PAIS-2012) System Demonstrations Track, Montpellier, France, August 27–31, 2012, pp. 324–329.
19.
X. Fan and F. Toni, Mechanism design for argumentation-based information-seeking and inquiry, in: International Conference on Principles and Practice of Multi-Agent Systems, Springer, 2015, pp. 519–527.
20.
E. Ferretti, M. Errecalde, A.J. García and G.R. Simari, Decision rules and arguments in defeasible decision making, in: Computational Models of Argument: Proceedings of COMMA 2008, Toulouse, France, May 28–30, 2008, pp. 171–182.
21.
A.J. García and G.R. Simari, Defeasible logic programming: An argumentative approach, TPLP 4(1–2) (2004), 95–138.
22.
A.J. García and G.R. Simari, Defeasible logic programming: DeLP-servers, contextual queries, and explanations for answers, Argument & Computation 5(1) (2014), 63–88.
23.
S.A. Gómez, C.I. Chesñevar and G.R. Simari, ONTOarg: A decision support framework for ontology integration based on argumentation, Expert Syst. Appl. 40(5) (2013), 1858–1870. doi:10.1016/j.eswa.2012.10.025.
24.
S.A. Gómez, A. Goron, A. Groza and I.A. Letia, Assuring safety in air traffic control systems with argumentation and model checking, Expert Syst. Appl. 44 (2016), 367–385. doi:10.1016/j.eswa.2015.09.027.
25.
S. Gottifredi, L.H. Tamargo, A.J. García and G.R. Simari, Arguing about informant credibility in open multi-agent systems, Artif. Intell. 259 (2018), 91–109. doi:10.1016/j.artint.2018.03.001.
26.
N.C. Karunatillake, N.R. Jennings, I. Rahwan and T.J. Norman, Argument-based negotiation in a social context, in: 4th International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2005), Utrecht, The Netherlands, July 25–29, 2005, pp. 1331–1332.
27.
S. Kraus, Negotiation and cooperation in multi-agent environments, Artif. Intell. 94(1–2) (1997), 79–97. doi:10.1016/S0004-3702(97)00025-8.
28.
V. Lifschitz, Foundations of logic programs, in: Principles of Knowledge Representation, G. Brewka, ed., CSLI Pub., 1996, pp. 69–128.
29.
S. Modgil and H. Prakken, The ASPIC+ framework for structured argumentation: A tutorial, Argument & Computation 5(1) (2014), 31–62. doi:10.1080/19462166.2013.869766.
30.
M.O. Moguillansky, N.D. Rotstein, M.A. Falappa, A.J. García and G.R. Simari, Argument theory change through defeater activation, in: COMMA, 2010, pp. 359–366.
31.
S. Parsons, M. Wooldridge and L. Amgoud, An analysis of formal inter-agent dialogues, in: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems: Part 1, ACM, 2002, pp. 394–401. doi:10.1145/544741.544835.
32.
L. Perrussel, S. Doutre, J. Thévenin and P. McBurney, A persuasion dialog for gaining access to information, in: Argumentation in Multi-Agent Systems, 4th International Workshop, ArgMAS 2007, Honolulu, HI, USA, May 15, 2007, Revised Selected and Invited Papers, 2007, pp. 63–79.
33.
H. Prakken, Formal systems for persuasion dialogue, Knowledge Eng. Review 21(2) (2006), 163–188. doi:10.1017/S0269888906000865.
34.
H. Prakken, A.Z. Wyner, T.J.M. Bench-Capon and K. Atkinson, A formalization of argumentation schemes for legal case-based reasoning in ASPIC+, J. Log. Comput. 25(5) (2015), 1141–1166. doi:10.1093/logcom/ext010.
35.
L. Riley, K. Atkinson, T.R. Payne and E. Black, An implemented dialogue system for inquiry and persuasion, in: Theory and Applications of Formal Argumentation – First International Workshop, TAFA 2011, Barcelona, Spain, July 16–17, 2011, Revised Selected Papers, 2011, pp. 67–84.
36.
S.V. Rueda, A.J. García and G.R. Simari, Argument-based negotiation among BDI agents, Journal of Computer Science & Technology 2 (2002).
37.
G.R. Simari and I. Rahwan (eds), Argumentation in Artificial Intelligence, Springer, 2009.
38.
M. Snaith and C. Reed, Argument revision, Journal of Logic and Computation 27(7) (2016), 2089–2134.
39.
L.H. Tamargo, A.J. García, M.A. Falappa and G.R. Simari, On the revision of informant credibility orders, Artif. Intell. 212 (2014), 36–58. doi:10.1016/j.artint.2014.03.006.
40.
L.H. Tamargo, S. Gottifredi, A.J. García and G.R. Simari, Sharing beliefs among agents with different degrees of credibility, Knowl. Inf. Syst. 50(3) (2017), 999–1031. doi:10.1007/s10115-016-0964-6.
41.
J.C. Teze, S. Gottifredi, A.J. García and G.R. Simari, Improving argumentation-based recommender systems through context-adaptable selection criteria, Expert Syst. Appl. 42(21) (2015), 8243–8258. doi:10.1016/j.eswa.2015.06.048.
42.
M. Thimm, Realizing argumentation in multi-agent systems using defeasible logic programming, in: Argumentation in Multi-Agent Systems, 6th International Workshop, ArgMAS 2009, Budapest, Hungary, May 12, 2009, Revised Selected and Invited Papers, 2009, pp. 175–194.
43.
M. Thimm, Strategic argumentation in multi-agent systems, KI 28(3) (2014), 159–168.
44.
M. Thimm and A.J. García, On strategic argument selection in structured argumentation systems, in: International Workshop on Argumentation in Multi-Agent Systems, 2010, pp. 286–305.
45.
M. Thimm, A.J. García, G. Kern-Isberner and G.R. Simari, Using collaborations for distributed argumentation with defeasible logic programming, in: Proceedings of the 12th International Workshop on Non-Monotonic Reasoning (NMR’08), 2008, pp. 179–188.
46.
D. Walton, Appeal to Expert Opinion: Arguments from Authority, Penn State Press, 2010.
47.
D. Walton and E.C. Krabbe, Commitment in Dialogue: Basic Concepts of Interpersonal Reasoning, SUNY Press, 1995.