Abstract
The paper describes a computational model that we have implemented in an experimental dialogue system (DS). Communication in a natural language between two participants A and B is considered, where A has a communicative goal that his/her partner B will make a decision to perform an action D. A argues the usefulness, pleasantness, etc. of D (including its consequences), in order to guide B's reasoning in a desirable direction. A computational model of argumentation is developed, which includes reasoning. Our model is based on studies of the common-sense conception of how the human mind works in such situations. Theoretical considerations are followed by an analysis of Estonian spoken human–human dialogues. First, calls of clients to travel agencies are studied, where a travel agent could use various arguments in order to persuade a client to book a trip. The analysis demonstrates that clients are primarily looking for information; argumentation occurs in a small number of dialogues. Second, calls of sales clerks of an educational company to different organisations are analysed, where training courses are offered. The sales clerks, unlike the travel agents, try to persuade clients, stressing the usefulness of a course. Finally, face-to-face conversations are studied where one participant is arguing for an action by the partner. The corpus analysis shows that our model covers simple situations occurring in actual dialogues. Therefore, we have chosen a limited application of the model in a DS – a communication trainer. The computer can optionally perform A's or B's role, and the interaction with the user in Estonian follows norms and rules of human–human communication.
Keywords
Introduction
There are many dialogue systems (DSs) that interact with a user in a natural language and help him/her to solve practical problems, e.g. to book flights, to receive information about bus or train timetables, to detect computer faults, etc. (Jokinen & McTear, 2009). Usually, such tasks do not include argumentation. Rather, practical dialogue (Allen, Ferguson, & Stent, 2001) is implemented in these systems. On the other hand, there are tasks and situations where not only information retrieval but also argumentation is required. Argumentation in the general sense is a large and complicated research field of human interaction with its specific units, structures and rules, the concrete inventory of which depends on the interaction domain, the participants and their goals.
This paper describes a computational model of agreement negotiation processes, which involves natural reasoning. More detailed treatment of the theoretical framework of our approach will be given in the next section, but a preliminary delimitation of the senses in which we use the concepts ‘agreement negotiation’ and ‘natural reasoning’ would already be appropriate here.
First, the general type of interaction we are dealing with represents a kind of directive interaction where the goal of one participant, A, is to get another participant, B, to carry out a certain action D. In this interaction situation, the concepts of agreement and negotiation have quite specific contents, as compared with the generally accepted ones (e.g. Rahwan et al., 2004). We want to stress two points. (1) ‘Agreement’ means here primarily ‘B agrees/does not agree to do D’, that is, B’s agreement concerns the wish of A that B would do D, not some general standpoint of A concerning the situation or problem under discussion and (2) the interaction (dialogue) between A and B can be considered as negotiation because we are studying such interactions (dialogues) only where B’s decision to do or not D is preceded by a discussion where different aspects of ‘doing D’ (D as an acceptable goal, different kinds of resources, sub-actions, etc. needed to carry out D) are considered by the participants, that is, it is preceded by a process of argumentation in the sense of natural human interaction.
Second, when dealing with argumentation, we will concentrate on the process of reasoning as its central part. Argumentation is used by communication participants to direct the reasoning of the partner. And here again, the specific context determines the choice of the focus of our interests. We want to model a ‘naïve’ theory of reasoning, a ‘theory’ that people themselves use when they are interacting with other people and trying to predict and influence their decisions. It is naïve in the sense that it is not scientific, that is, not constructed according to any system of principles accepted in scientific discourse, but it is nevertheless based on certain principles which, taken together, form a conceptual whole that can be characterised as a ‘theory’. In psychology, the corresponding approach is known as Theory Theory, which says that people use a ‘naïve’ theory of psychology to infer the mental states of other people and understand or predict their future behaviour (e.g. Carruthers & Smith, 1996). In this sense, the conception of Theory Theory constitutes a subtopic of the Theory of Mind, a general research area of scientific psychology, which deals with human mental states and processes, including those involved in communication. We use the term ‘natural reasoning’ in the context of this general approach.
Departing from these theoretical considerations, one of our aims has been to investigate actual dialogues, not artificially constructed ones. Analysis of human–human dialogues can provide information about their structure and linguistic features for developing DSs, which interact with a user in a natural language. For this paper, three sub-corpora of the Estonian Dialogue Corpus (EDiC) were analysed. EDiC mainly includes information-seeking dialogues, which is why it was hard to find argumentation for and against doing an action. However, as a result of the corpus analysis, typical sequences of dialogue acts (DAs) were found in human–human spoken dialogues that form agreement negotiations and reflect the reasoning of the participant who has to make a decision about an action.
We have implemented the model in an experimental DS. We have planned one of the practical applications of the DS as a participant in communication training sessions – communication trainer. Here the DS can establish certain restrictions on argument types, on the order in the use of arguments and counter-arguments, etc. But let us stress that our interest lies in the development of such DSs as are able to interact with human users by following not only the formal rules of exchanging information but also the ‘rules’ (regularities) of human reasoning, specifically in the process of argumentation. The training system described represents just one of the implementations by using which the adequacy of our theoretical approach can be checked.
The main goal of the paper is to introduce our formal model of argumentation, which is based on a naive theory of reasoning, to justify the model on actual human–human argumentation dialogues and implement a simple DS, which includes the model of reasoning and makes it possible to demonstrate how the model will work.
The paper has the following structure. Section 2 describes the theoretical background of the study and gives an overview of the related research. Section 3 introduces the model of a conversational agent. Partner-influencing methods as well as ways of reasoning are included in the model. We will make links between the reasoning and partner influencing. In Section 4, three sub-corpora of EDiC are analysed, in order to justify the model. Section 5 discusses the implementation of the model in a simple DS – a communication trainer. Section 6 gives a summary and plans for further work.
Theoretical background and related work
Communication is the only means that agents can use to influence the actions of other agents and thereby to increase their knowledge about their communication partners (Wells & Reed, 2006). There are certain methods that people use in the construction of communication. These methods ensure the purposeful, result-producing and coherent nature of communication. Reasoning-influencing strategies are one such method. They exemplify ways of influencing and directing the reasoning and decision-making processes of partners in communication – such as asking, luring, persuading, insisting, demanding, threatening, etc. Since the use of reasoning-influencing strategies critically depends on the selective manipulation of the partner's wishes, emotions, assessments, beliefs, reasoning patterns, etc., the application of such a strategy necessarily presupposes that its author is able to operate with a model of the partner as a functional whole: the ability to predict ‘what leads to what’ in the partner's inner functioning, which serves as the basis for explicating the ‘model of the human mind’ in general.
This explication should represent the intuitive, naive model of functioning of the human mind. It should reflect knowledge which human beings have and use about their partners in everyday communication. At the same time, this model should be more than a simple collection of beliefs. It should form a basis for drawing inferences from concrete data in concrete situations, for making predictions concerning the possible outcomes of the acts of influence exercised on the partner. In this sense, it may be called a ‘theory’ humans use in their everyday life, a ‘naïve’ theory. In theoretical psychology dealing with human thinking, there is a subfield which aims at explicating the nature of this ‘naïve theory’. This is the Theory Theory approach we referred to in the Introduction. In the context of this paper, it is reasonable to differentiate two lines of research which, although ultimately aiming at the same goal, are doing it on different conceptual and formal levels and use different data and methods.
The first approach may be called cognitivistic because its origins lie in the interdisciplinary research field called cognitive science which, among other disciplines, originally included artificial intelligence and, specifically, the explication of human-language-understanding ability. At present, one of the hottest topics in this specific area is the role of reasoning in human social interaction, and the role of argumentation in this context in particular. For instance, why and how do humans reason at all before making decisions in social interaction?
For a good overview, we refer to the paper of Mercier and Sperber (2011). According to the format of such papers in the Behavioral and Brain Sciences journal, almost half of the paper consists of comments on the authors’ text made by other researchers, and thus one can get a relatively adequate picture of the state of the art in the field. One of the main conclusions important for the topic of this paper can be summarised as follows: ‘reasoning in social interaction necessarily involves, for a participating agent, reasoning about the predictable reasoning processes of other participating agents in the context of the topic under consideration’. In other words, a naive theory of reasoning.
In the broader context, it should be added that the discussion about such naive theories of reasoning and argumentation has transcended the borders of psychology and become a natural part of neighbouring disciplines that deal with human communication in the frames of the ‘cognitive paradigm’. First of all, it involves (linguistic) semantics and pragmatics (e.g. Jackendoff, 2002; Ortony, 2003; Paradis, Hudson, & Magnusson, 2013).
But even more important is that other areas related to human social interaction have also been and are involved in the discussion, starting with those works of cultural anthropology from which the concept of ‘folk theory’ originated (D'Andrade, 1987) and ending with contemporary studies on the ‘collective wisdom’ (Landemore & Elster, 2012).
The second approach we have in view is the one directly related to creating DSs which interact with human users but, at the same time, being computer systems, have in their focus the regularities of human reasoning. That is, if the dialogue at hand does not constitute a simple question-and-answer exchange, the computer must understand what kind of reasoning processes prompted the partner's additional questions or investigations, and be able to give adequate answers or explanations if needed. The main point is how this should be realised in a computer program.
A good overview of the context in which we place ourselves can be found in the paper (Rahwan et al., 2004), as pointed out in the Introduction. In particular, we consider critically important the basic list of requirements for an argumentation-based negotiation (ABN) agent noted in the paper (p. 358): an ABN agent needs to be able to (1) evaluate incoming arguments and update its mental state accordingly; (2) generate candidate outgoing arguments and (3) select an argument from a set of available arguments. We hope that the description of our model demonstrates at least an attempt to satisfy these requirements.
Below we give a short overview of some more concrete views concerning the modelling of human–computer interaction; their description should make clearer the conceptual borderlines of the area into which we place our own model, described in the following sections.
Researchers of communication are studying formal dialogue games – interactions between agents, where each agent ‘moves’ by making utterances, according to a given set of rules. Dialogue games have been used to study human–computer interaction to explain sequences of human utterances and model complex human reasoning (Levin & Moore, 1978; McBurney & Parsons, 2009; Yuan, Moore, Reed, Ravenscroft & Maudet, 2011). Dialogue games are useful because they provide stereotypic patterns of dialogue. Loui (1998) characterises negotiation as an argumentative game. Hadjinikolis, Modgil, Black, McBurney, and Luck (2012) provide an argumentation-based framework for persuasion dialogues, based on a logical conception of arguments, in which an agent strategises over the choice of moves to make in a dialogue game on the basis of its model of its opponents.
Understanding and simulating the behaviour of human agents, as presented in text, is an important problem to be solved in decision-making and decision support tasks (Morge & Mancarella, 2012). One approach is to learn argument structures from previous experience with these agents; another is based on the assessment of the quality and consistency of the agents' argumentation (Chesñevar, Maguitman, & Loui, 2000).
Principles and criteria that characterise patterns of inference have to be formalised in order to create a logical model of common-sense reasoning. Dȩbowska, Łoziński, and Reed (2009) investigate the links between, on the one hand, the philosophical and linguistic study of human reasoning and argumentation expressed in language and, on the other, the formal, logical accounts of argument structures. Typically, argumentation takes place on the basis of incomplete information. Classical logic is inadequate since it behaves monotonically; argumentation is a non-monotonic logic of reasoning with incomplete and uncertain knowledge (Bench-Capon & Dunne, 2007). Loui (1998) investigates the appropriateness of formal dialectics as a basis for non-monotonic and defeasible reasoning.
For the current state of argumentation theory, see, for example, Bench-Capon and Dunne (2007), Besnard and Hunter (2008) and Chesñevar et al. (2000).
Argumentation theorists Douglas Walton and Erik Krabbe propose a taxonomy of six types of dialogue: (1) persuasion, (2) negotiation, (3) inquiry, (4) deliberation, (5) information-seeking dialogue and (6) eristic dialogue. These types are characterised by their main goal, initial situation and participants’ aims. Each type of dialogue has its own distinctive rules and goals, its permitted types of move, and its conventions for managing the commitments incurred by the participants as a result of the moves they make. Each type of dialogue, they add, exhibits a normative model, an enveloping structure that can aid us in evaluating the argumentative and other moves contained in it (Walton & Krabbe, 1995, pp. 8–9). Boella, Hulstijn, and van der Torre (2004) consider persuasion dialogues – dialogues in which one agent is trying to influence the behaviour of another agent. There are three ways of influencing behaviour, each corresponding to the manipulation of one of the mental attitudes: by issuing a command, by convincing and by suggestion. These strategies have different requirements regarding the social setting of the dialogue. For example, commands require an authority relationship between the agents. One agent models the other agent and tries to predict its response, given the other agent's model of itself (Boella et al., 2004).
Negotiation is a process where each party tries to gain an advantage for itself by the end of the process. Negotiation is intended to achieve compromise. Rahwan and Larson (2011) explore the relationships between mechanism design and formal logic, particularly in the design of logical inference procedures when knowledge is shared among multiple participants. Tohmé (2002) analyses the dynamics of beliefs and decision, in order to determine conditions on the agents that allow them to reach agreements. When one person initiates communication with another, s/he mainly proceeds from the fact that the partner is a human being who feels, reasons and has wishes and plans, on the one hand, like every human being and, on the other, as this individual person. In order to be able to foresee what processes will be triggered in the partner after a DA (move), the agent must know the inner workings of the partner's psychological mechanisms. When aiming at a certain goal in communication, the agent must know how to direct the functioning of these mechanisms in order to bring about the intended result in the partner. Because of this, we have modelled the reasoning processes that people supposedly go through when working out a decision whether to perform an action or not. In a model of a conversational agent, it is necessary to represent cognitive states as well as processes. One of the most well-known models of this type is the belief-desire-intention (BDI) model of human practical reasoning, developed as a way of explaining future-directed intention (Allen, 1995; Boella & van der Torre, 2003; Bratman, 1987; Grosz & Sidner, 1986). This model is based on the Theory Theory.
Modelling the communication process
Let us consider conversation between two agents – A and B – in a natural language. In the goal base of one participant (let it be A), a certain goal G_A related to B’s activities gets activated and triggers in A a reasoning process. In constructing his/her first turn, A must plan the DAs and determine their verbal form as a turn tr1. This turn triggers a reasoning process in B where two procedures are distinguished: the interpretation of A’s turn and the generation of his/her response tr2. B’s response triggers in A the same kind of reasoning cycle in the course of which s/he has to evaluate how the realisation of his/her goal G_A has proceeded, and depending on this s/he may activate a new sub-goal of the initial goal G_A, and the cycle is repeated: A builds the next turn tr3, etc. The dialogue comes to an end when A has reached his/her goal or abandoned it, or the participants have agreed to continue the interaction in the future.
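The turn-taking cycle described above can be sketched as a simple loop. All names here (Agent, interpret, generate, the tuple representation of turns) are illustrative stand-ins of our own, not the system's actual interface; B is simulated as refusing a fixed number of times before agreeing.

```python
class Agent:
    """Stand-in participant: refuses a fixed number of times, then agrees."""
    def __init__(self, name, refusals=0):
        self.name = name
        self.refusals = refusals

    def interpret(self, turn):
        pass  # a real agent would update its partner model here

    def generate(self, goal):
        if self.refusals > 0:
            self.refusals -= 1
            return (self.name, "refuse", goal)
        return (self.name, "agree", goal)


def dialogue(a, b, goal, max_turns=10):
    """A opens with the goal; the cycle runs until B agrees or turns run out."""
    transcript = [(a.name, "propose", goal)]        # turn tr1
    for _ in range(max_turns):
        b.interpret(transcript[-1])                 # B interprets A's turn
        reply = b.generate(goal)                    # and generates tr2, tr4, ...
        transcript.append(reply)
        a.interpret(reply)                          # A evaluates progress on G_A
        if reply[1] == "agree":                     # goal G_A reached: stop
            break
        transcript.append((a.name, "argue", goal))  # A builds tr3, tr5, ...
    return transcript
```

With a B that refuses twice, the sketch produces the six-turn pattern propose, refuse, argue, refuse, argue, agree.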
Model of conversational agent
In our model, a conversational agent is a program that consists of six (interacting) modules (cf. Koit & Õim, 2000, 2004; for its further elaborations, see e.g. Koit, 2011):

CA = (PL, PS, DM, INT, GEN, LP),

where PL is the planner, PS the problem solver, DM the dialogue manager, INT the interpreter, GEN the generator and LP the linguistic processor. PL directs the work of both DM and PS: DM controls the communication process and PS solves domain-related tasks. The task of INT is to carry out semantic analysis of the partner's utterances; that of GEN is to generate semantic representations of the agent's own contributions. LP carries out linguistic analysis (starting from speech recognition) and generation (ending with speech synthesis). The conversational agent uses a goal base GB and a knowledge base KB in its work. The knowledge base consists of four components
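The module architecture can be sketched as a plain data structure. The class and field names mirror the abbreviations in the text, but the use of callables for modules is our own illustration; the paper does not prescribe an implementation.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ConversationalAgent:
    """Six interacting modules plus goal base GB and knowledge base KB."""
    PL: Callable   # planner: directs the work of DM and PS
    PS: Callable   # problem solver: solves domain-related tasks
    DM: Callable   # dialogue manager: controls the communication process
    INT: Callable  # interpreter: semantic analysis of partner's utterances
    GEN: Callable  # generator: semantic representations of own contributions
    LP: Callable   # linguistic processor: speech recognition through synthesis
    GB: list = field(default_factory=list)   # goal base
    KB: dict = field(default_factory=dict)   # knowledge base
```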
A necessary precondition of interaction is the existence of common knowledge of the agents:
Argumentation that involves reasoning
After A has expressed his/her goal that the partner B would decide to do D, B can respond with agreement or rejection, depending on the result of his/her reasoning. Rejection can be (but need not be) supported with an argument. Such arguments provide information about the reasoning process that brought B to the given decision.
The general principles of our reasoning model are analogous to the BDI model but it has some specific traits that we consider important.
First – especially in the case of B’s reasoning – along with desires we also consider other kinds of motivational inputs for creating the intention of an action in an actor (e.g. whether the actor considers the action useful to him/her, or s/he is forced to do it independent of his/her immediate wish, for instance by some obligation or by threats of the partner). In other words, we treat the concept of ‘desire’ used in the BDI model as a cover concept for motivation to do something and analyse it into more detailed kinds, just as far as we are able to differentiate (and model formally) the effects that these factors will have on the subsequent reasoning process of the agent.
Second, starting from this general idea, we have worked out a model of reasoning which leads to the emergence of the intention (goal) in an actor to do the action in question, or refuse to do it. Our reasoning model is based on the studies in the common-sense conception of how the human mind works in such situations (cf. D'Andrade, 1987). We suppose that in natural (everyday or even institutional) communication people start, as a rule, from this conception, not from any consciously chosen scientific one. We want to model a ‘naïve’ theory of reasoning, a ‘theory’ that people themselves use when they are interacting with other people and trying to predict and influence their decisions.
In our implementation, the reasoning model consists of two functionally linked parts: (1) a model of human motivational sphere and (2) reasoning algorithms.
Model of reasoning subject
In the motivational sphere, three basic factors that regulate the reasoning of a subject concerning D can be differentiated. First, the subject may wish to do D, if the pleasant aspects of D for him/her outweigh the unpleasant ones; second, the subject may find it reasonable to do D, if D is needed to reach some higher goal, and the useful aspects of D outweigh the harmful ones; and third, the subject can be in a situation where s/he must (is obliged to) do D – where not doing D will lead to some kind of punishment. We call these factors wish-, needed- and must-factors, respectively. These factors should be taken as an approximation of the real constituents of the reasoning process. As we can see in the analysis of real-life dialogues (see Section 4), it is not always easy to differentiate between them on the level of concrete DAs (thus, for instance, needed as well as must can be said to include, or rely on, the wish-factor in some way, not to speak of the exact connections between these factors and emotions, such as joy or fear). Nevertheless, we think that it is important to make these kinds of differentiation; quite another question is how deep we will be able to go in using them when modelling human reasoning, e.g. in argumentation dialogue.
Thus, according to our present model, the motivational sphere of a subject can be represented by the following vector of ‘weights’:

w_D = (w(resources), w(pleasant), w(unpleasant), w(useful), w(harmful), w(obligatory), w(prohibited), w(punishment-for-doing), w(punishment-for-not-doing)).

In the description, w(pleasant), w(unpleasant), etc. denote the weights that the subject assigns to the corresponding aspects of D.
Let us briefly discuss the categories that will be used in the formulation of the reasoning algorithm:
pleasant/unpleasant – these categories represent primary subjective-emotional evaluations of D, which are anchored in the sensual system of the subject

useful/harmful – these are prototypical rational evaluations, i.e. they are based on certain beliefs or certain knowledge of the subject, and there are certain criteria for making the corresponding judgements. These criteria are connected, first of all, with the goals of the subject: an aspect of D is useful if it helps the subject to achieve some goal and, correspondingly, an aspect of D is harmful if it prevents the subject from achieving some goal

obligatory/prohibited – these are also rational evaluations, but they are based either on the knowledge of certain social norms or on some directive communicative act of a person who is in the position (has the power) to exercise his/her will upon the reasoning subject. Obligations and prohibitions are related to the concept of punishment, which is an action taken by some other subject because the reasoning subject has not followed the corresponding obligations or prohibitions

resources – the resources of the subject concerning D constitute any kind of internal (psychical and/or physical) and external circumstances that create the possibility to perform D.
However, we are aware that these motivational factors are not independent of each other. Thus, a useful outcome of an action is in some sense also pleasant for the subject, punishment is unpleasant (and can be harmful) for the punished person, etc., but we will not go into these details in our present model.
The values of the dimension obligatory/prohibited are in a sense absolute: something is obligatory or not, prohibited or not. On the other hand, the dimensions pleasant/unpleasant, useful/harmful have a scalar character: something is pleasant or useful, unpleasant or harmful to a certain degree.
For simplicity's sake, it is supposed here that these aspects have numerical values (weights) and that in the process of reasoning (when weighing the pro- and counter-arguments) these values can be compared and summed up in a certain way. Here
If there are many actions
Knowledge base about interacting subjects, KBS, contains the vectors
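The weighing of positive against negative aspects described above might be sketched numerically as follows. The weight names follow the aspects listed in the text, but the concrete numbers and the particular summing scheme are our own assumptions for illustration, not the paper's specification.

```python
weights = {
    "resources": 1,                     # 1: resources for D exist, 0: they do not
    "pleasant": 5, "unpleasant": 3,     # scalar subjective-emotional evaluations
    "useful": 4, "harmful": 2,          # scalar rational evaluations
    "obligatory": 0, "prohibited": 0,   # absolute dimensions: 1 or 0
    "punishment_for_not_doing": 0,      # punishment if an obligatory D is not done
    "punishment_for_doing": 0,          # punishment if a prohibited D is done
}

def decide(w):
    """Do D iff resources exist and positive aspects outweigh negative ones."""
    plus = w["pleasant"] + w["useful"] + w["obligatory"] * w["punishment_for_not_doing"]
    minus = w["unpleasant"] + w["harmful"] + w["prohibited"] * w["punishment_for_doing"]
    return bool(w["resources"]) and plus > minus
```

With the sample weights the positive aspects (5 + 4) outweigh the negative ones (3 + 2), so the sketch yields a positive decision; raising unpleasantness reverses it.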
Reasoning algorithms
The second part of the reasoning model consists of reasoning algorithms that regulate (as we suppose) human action-oriented reasoning. A reasoning algorithm represents steps that the agent goes through in his/her reasoning process; these consist in computing and comparing the weights of different aspects of D; and the result is the decision to do D or not.
According to our model, the reasoning process directed at an action D of a subject can be triggered by three main types of factors: wish-, needed- or must-factors. The reasoning process consists of a sequence of steps where such aspects participate as resources of the reasoning subject for doing D, positive aspects of D or its consequences (pleasantness, usefulness, also punishment for not doing D if it is obligatory) and negative aspects (unpleasantness, harmfulness, punishment for doing D if it is prohibited).
How does the reasoning itself proceed? It depends on the factor which triggers it. In addition, a reasoning model, as a naive theory of mind, includes some principles which represent the interactions between factors and the causal connection between factors and the decision taken. For instance, the principles fix such concrete preferences as:
people want pleasant states and do not want unpleasant ones
people prefer more pleasant states to less pleasant ones
the more pleasant the imagined future state, the more intensively a person strives for it.
In addition, there are also more concrete preference rules, e.g.:
if D has been found pleasant (and the subject wishes to do it), then the subject checks the needed- and must-factors first from the point of view of their possible negative values (‘what harmful consequences would D have?’)
if the sum of the values of the inner (wish- and needed-) factors and the value of the external (must-) factor appear equal in a situation (i.e. there arises a conflict), then the decision suggested by the inner factors is preferred.
We do not go into details concerning these principles here; instead, we refer to Davies and Stone (1995).
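The second preference rule above (inner factors win a tie with the external must-factor) can be encoded directly. The function interface is invented for the example; in the model the 'decisions' would come from the weighing of aspects.

```python
def resolve(inner_decision, inner_strength, outer_decision, outer_strength):
    """Return the preferred decision; ties go to the inner (wish/needed) factors."""
    if inner_strength >= outer_strength:   # equality is the conflict case
        return inner_decision              # inner factors are preferred
    return outer_decision                  # the must-factor prevails only if stronger
```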
There are three reasoning procedures in our model (wish, needed, must), which depend on the factor that triggers the reasoning. Each procedure represents steps that a subject goes through in the reasoning process (computing and comparing weights of different aspects of D), and the result is the decision to do D or not. As an example, consider the reasoning procedure wish, which is triggered by the wish-factor, that is, the subject believes that it would be pleasant to do D; its precondition is w(pleasant) > w(unpleasant).
In the case of other input factors (needed, must), the general structure of the algorithm is analogous, but there are differences in concrete steps.
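The paper gives the concrete steps of the wish procedure in a figure that is not reproduced here; the sketch below is a plausible reading of a wish-triggered procedure, not a transcription of it: check resources, then weigh the pleasant and useful aspects against the unpleasant and harmful ones, adding the punishment risk if D is prohibited.

```python
def wish(w):
    """Wish-triggered reasoning; precondition: w['pleasant'] > w['unpleasant']."""
    assert w["pleasant"] > w["unpleasant"], "the wish-factor must trigger the procedure"
    if not w["resources"]:
        return "do_not_do"                      # no resources to carry out D
    plus = w["pleasant"] + w["useful"]          # positive aspects of D
    minus = w["unpleasant"] + w["harmful"]      # negative aspects of D
    if w["prohibited"]:
        minus += w["punishment_for_doing"]      # doing a prohibited D risks punishment
    return "do" if plus > minus else "do_not_do"
```

The needed- and must-triggered procedures would share this structure but start from, and foreground, different weights.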
The reasoning model is connected with the general model of conversational agent in the following way. First, the planner PL makes use of reasoning schemes and second, the KBS contains the vector
When comparing our model with the BDI model, beliefs are represented by knowledge of the conversational agent with reliability less than 1; desires are generated by the vector of weights
Some wishes or needs can be stronger than others: if
A communicative strategy is an algorithm, which is used by a communication participant to achieve his/her communicative goal. About the general concept of the communicative strategy, see, for example, Heritage (1991).
One relevant aspect of human–human communication which is relatively well studied in the pragmatics of human communication and which we have included in our model is the concept of communicative space. For a general description of the approach, see, for example, Brown and Levinson (1987). Communicative space is defined by a number of coordinates that characterise the relationships of participants in a communicative encounter. Communication can be collaborative or confrontational (represented by one axis of the communicative space), personal or impersonal (another axis); it can be characterised by the social distance between participants (a third axis). Further axes represent the modality of communication (friendly, ironic, hostile, etc.), the intensity (peaceful, vehement, etc.), and so on. Together, these dimensions bring the social aspect of communication into the model (cf. Boella et al., 2004). Just as in the case of motivations of human behaviour, people have an intuitive, ‘naïve theory’ of these coordinates, too. The values of the coordinates can be expressed by specific words (such as friendly, ironic, and so on), as in the case of pleasant, useful, etc. aspects of an action (Section 2.2.1). In our model, however, we use numerical values as approximations.
A communicative strategy can be implemented in various ways, e.g. the participant A can use different arguments, in order to influence his/her partner B to do D: stress pleasantness of D (i.e. entice B), stress its usefulness (persuade B, i.e. suggest B to do D), or stress punishment for not doing D if it is obligatory (threaten B). We call these concrete ways of realisation of a communicative strategy communicative tactics. In our model, the choice of communicative tactics depends on the ‘point’ in the communicative space where the participants believe to find themselves.
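One way to link tactics to the ‘point’ in communicative space might look as follows. The coordinate names and thresholds here are invented for illustration; the paper does not fix concrete coordinates or values for this choice.

```python
def choose_tactic(point):
    """point maps coordinate names to values in roughly [-10, 10];
    negative 'cooperation' means a confrontational interaction."""
    if point["cooperation"] < 0:
        return "threaten"    # confrontational interaction: stress the must-factor
    if point["personal"] > 0:
        return "entice"      # personal, friendly setting: stress pleasantness
    return "persuade"        # neutral collaborative setting: stress usefulness
```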
A communicative strategy for A can be presented as the following algorithm (Figure 2).
Communicative strategy.
The general strategy used by A may be called a reasoning-influencing strategy since it is based on the model of reasoning described in the previous section. The aim of the strategy is to influence the reasoning process of the partner in such a way that its output matches the goal of A.
The strategy may be realised in different forms depending on the steps in the reasoning process of the partner through which A is trying to influence this process. The dialogue proceeds in the following way.
A informs the partner B of the communicative goal, formulating it, e.g. as a request or proposal. B goes through the reasoning process – whether to do D or not – and makes a decision, of which s/he informs A.
At this point there appears a need to model B’s reasoning process and incorporate this model into the dialogue model. When the reasoning process has started, the subject considers various positive and negative aspects of D. If the positive aspects weigh more, the subject will make the decision to do D, otherwise s/he will make the decision not to do D.
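The weighing step described above can be sketched as a toy decision function: the positive and negative aspects of D are summed and compared. The particular aspect names and the plain summation are our illustrative assumptions, not the paper's exact reasoning procedure.

```python
# A minimal sketch of B's decision step: the positive and negative aspects of
# the action D are weighed against each other. Aspect names and the simple
# sum-comparison are illustrative assumptions.

POSITIVE = ("pleasant", "useful")      # aspects speaking for doing D
NEGATIVE = ("unpleasant", "harmful")   # aspects speaking against doing D

def decide(model: dict) -> bool:
    """Return True (decision: do D) if the positive aspects outweigh the negative."""
    pros = sum(model.get(a, 0) for a in POSITIVE)
    cons = sum(model.get(a, 0) for a in NEGATIVE)
    return pros > cons

# Usefulness 9 plus pleasantness 1 outweigh unpleasantness 5 plus harmfulness 4
print(decide({"pleasant": 1, "useful": 9, "unpleasant": 5, "harmful": 4}))  # True
```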
If B’s decision is negative, A will try, during the following process of interaction, to influence B’s reasoning in such a way that ultimately s/he would make a positive decision.
Three communicative tactics exist for A in our model, which are related to the three reasoning procedures (wish, needed, must). The reasoning procedure wish may be triggered in the partner by the tactics of enticing, the procedure needed by the tactics of persuading (suggesting), and the procedure must by the tactics of threatening. These tactics are similar to the persuasion strategies considered in Boella et al. (2004): convincing, suggestion and command, respectively.
Participant A, when implementing a communicative strategy, uses a partner model – a vector of weights of the aspects of D.
If A uses a statement
Every tactic has its own 'title aspect' – the most important aspect of the action D for that tactic. The title aspects are: the pleasantness of D for enticing, its usefulness for persuading and the punishment for not doing an obligatory action D for threatening. While enticing, A first of all tries to increase the pleasantness of D for the partner B; while persuading, A tries to increase the usefulness of D; and while threatening, A stresses the punishment for not doing D. Using the notion of the title aspect, all the tactics can be represented by the following algorithm (Figure 3).
Communicative tactics.
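The idea behind this algorithm can be sketched as follows: each tactic's title aspect sits at the bottom of a stack of aspects under consideration, and A's next argument is drawn from the utterances attached to the aspect currently on top. The data layout and the simple pop-on-exhaustion behaviour are our assumptions; the tactic-to-aspect mapping follows the text.

```python
# Sketch of the tactics algorithm (Figure 3): choose the next argument from
# the utterances attached to the aspect on top of the stack; pop exhausted
# aspects; give up (return None) when the stack is empty.

TITLE_ASPECT = {
    "enticing": "pleasant",       # stress the pleasantness of D
    "persuading": "useful",       # stress the usefulness of D
    "threatening": "punishment",  # stress the punishment for not doing D
}

def next_argument(stack, utterances):
    """Return A's next utterance, or None if no arguments remain."""
    while stack:
        aspect = stack[-1]
        if utterances.get(aspect):
            return utterances[aspect].pop(0)
        stack.pop()               # no arguments left for this aspect
    return None

stack = [TITLE_ASPECT["persuading"]]
utts = {"useful": ["Doctors argue for vegetarian food."], "pleasant": []}
print(next_argument(stack, utts))  # Doctors argue for vegetarian food.
```

When B's counter-argument attacks some other aspect, that aspect would be pushed onto the stack and targeted first.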
Therefore, we consider every tactic used by a participant as an algorithm for choosing (generating) his/her next utterance (argument) as a reaction to the partner's refusal to perform the action. A participant can follow the same tactics consistently as long as arguments remain (see Figure 2).
A detailed description of the tactic of persuasion can be found in (Koit, Roosmaa, & Õim, 2009). The tactics of enticing and threatening can be represented in an analogous way.
We will go back to some details of the model in Section 4.
How do people argue for or against an action? When aiming at a certain goal in communication, the agent must know how to direct the functioning of the partner's psychological mechanisms in one direction or another in order to bring about the intended result in the partner. This knowledge forms a necessary part of human communicative competence. A model of a human being as a partner in communication should explicate, in particular, people's intuitive knowledge about the inner mental structures of human beings – a naive, or folk, theory. For this reason, the aim of the corpus analysis is to validate our model against Estonian spoken human–human dialogues taken from the dialogue corpus EDiC (Hennoste et al., 2008). This is not verification in the mathematical sense. However, arguments and counter-arguments expressed in a natural language appear in actual negotiation dialogues and demonstrate, even if indirectly, how the participants reason before making a decision. DAs are annotated in our corpus using a special annotation typology (Hennoste et al., 2008) which is based on conversation analysis (Hutchby & Wooffitt, 1998). Still, the major part of the typology coincides with well-known typologies (e.g. DAMSL). Three sub-corpora were chosen for analysis, two of them consisting of institutional and one of everyday dialogues: (1) calls of clients to travel agencies, (2) calls of sales clerks of an educational company offering training courses to other companies and (3) face-to-face conversations between acquaintances. It was hard to find suitable empirical material for our study because the corpus contains mostly information-seeking dialogues; we did not record, transcribe or annotate new dialogues especially for the current analysis but used the existing corpus.
Travel agency
First, 36 dialogues were taken from the EDiC where clients call travel agencies, supposedly with the goal of travelling somewhere. A travel agent is interested in the client booking a trip; therefore, we may expect the agents to try to influence clients in such a way that they decide to book the trip immediately, during the current conversation. Among the communicative tactics considered (Section 2.3), threatening can be excluded because a travel agent, as an official person, is obliged to communicate cooperatively, impersonally, in a friendly and peaceful manner (i.e. to stay at a neutral, fixed point of the communicative space). An agent can only persuade or entice a client.
Argumentation has only been found in four dialogues. It turned out that the clients mainly want to get information, not to book a trip. Moreover, no persuasion by a travel agent finishes with the client agreeing to book a trip.
In Dialogue 1 (in the Appendix), participant A (a travel agent) presents several arguments (giving several pieces of information) trying to persuade the client B to take the trip but the client does not make a decision. In Dialogue 2 (in the Appendix), A is similarly proposing various arguments for the usefulness of booking the trip.
In the analysed travel dialogues, a typical sequence of DAs that form argumentation is as follows (the subsequence marked by → can be repeated):
A: Proposal
→ A: Giving Information
→ B: Continuer
B: Deferral
It should be mentioned that the DA typology used does not include a special act Argument. In the travel dialogues, the arguments of travel agents are represented by giving information (the DA Giving Information). A client does not propose any counter-arguments but allows the travel agent to present all his arguments, only signalling that she is waiting for further information (the DA Continuer, e.g. 'I see', 'uhuh', etc.). After that, she informs the agent of her decision (typically a deferral, like utterance 16 in Dialogue 1 in the Appendix).
The subsequence marked by → can be considered as an information-sharing sub-dialogue (Chu-Carroll & Carberry, 1998), which is initiated by A to give further information to B so that B, after reasoning, could make an informed decision about the proposal.
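A DA sequence like the one above can be treated as a pattern over act labels. As a small sketch, we encode each act as a single letter and use a regular expression to check whether an annotated dialogue matches the travel-agency pattern (Proposal, then one or more Giving Information / Continuer pairs, then Deferral); the encoding is our own illustrative device, not part of the annotation typology.

```python
# Checking an annotated dialogue against the travel-dialogue DA pattern.
import re

CODES = {"Proposal": "P", "Giving Information": "G", "Continuer": "C", "Deferral": "D"}
PATTERN = re.compile(r"P(GC)+D")   # (GC)+ is the repeatable → subsequence

def matches(acts):
    """True if the sequence of DA labels fits the travel-agency pattern."""
    return bool(PATTERN.fullmatch("".join(CODES[a] for a in acts)))

dialogue = ["Proposal", "Giving Information", "Continuer",
            "Giving Information", "Continuer", "Deferral"]
print(matches(dialogue))  # True
```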
Educational company
Second, a closed part of the EDiC has been analysed, consisting of 44 calls where sales clerks of an educational company offer different courses of their company (language, secretary training, bookkeeping, etc.) to managers of other companies. According to an agreement with the educational company, the dialogues are kept confidential (only fragments of dialogues may be published). Fourteen dialogues out of the 44 have been excluded from this study (the needed person is not present, the number the clerk is calling is wrong, or the recording breaks off). Like a travel agent, a sales clerk can only persuade or entice a client to take a course; threatening is excluded. Various arguments are used by the clerks to indicate the usefulness or pleasantness of the courses. However, the decisions of clients are postponed in most cases. This is not surprising because the recordings belong to the initial period of the negotiations.
The analysed 30 dialogues can be divided into two groups: 1) A and B are communicating for the first time (six dialogues) or 2) they have been in contact previously (24 dialogues).
First call
A typical dialogue starts with A's introduction and a question of whether B knows the educational company. Then a short overview of the company is given (e.g. we are an international company, we deal with sales and marketing). All these statements can also be considered as arguments for taking a training course. Then A makes a proposal to B to take some courses. A points to the activities of B's organisation, which demonstrates that he has previous knowledge about her institution (e.g. your firm deals with retail and wholesale, therefore you could be interested in our courses). If B does not make a decision, then A asks B to tell more about her institution in order to get more information (arguments) for the necessity of the courses for B and offers them again.1
In the examples, the transcription system of conversation analysis is used (Hutchby & Wooffitt, 1998). Utterances are numbered.
All the dialogues end with an agreement to keep in touch (A promises to send information materials to B, to call B later, etc.), and B does not accept or reject taking a course but postpones her decision. That can be considered as a good result for A – his arguments have been reasonable. B needs more time before making the final decision.
Repeated call
B agrees to take a course only in one conversation, she agrees with reservations in two dialogues and does not agree in one dialogue. In the remaining dialogues, A and B come to the agreement to keep in touch, as in the case of the first communication, i.e. B postpones the decision.
A always starts the conversation by referring to a previous communication (we communicated in November, I sent catalogues to you):
It is noteworthy that the introductory part of these dialogues is quite long. A behaves very politely, in a friendly and sometimes familiar manner (this holds especially for male clerks):
In this way, A prepares a suitable background for his proposal and herewith makes a refusal more difficult for B.
In the main part of a dialogue, A presents various arguments for the usefulness of the courses for B's institution and meanwhile collects new information about B by asking questions, in order to learn more and to get new arguments for doing D.
A typical sequence of DAs that form argumentation includes more DAs as compared with travel dialogues (DAs in brackets ‘[‘…‘]’ can be missed; subsequences marked with → can be repeated):
A: Giving Information
A: Proposal
→ A: Giving Information
→ B: Continuer
[→ A: Question]
[→ B: Giving Information]
B: Accept/Reject/Deferral
A makes the proposal after the introductory part (giving information). In the main part, A's arguments are presented by giving information, and B responds using a continuer, as in the travel dialogues (uhuh, I see, etc.). In addition, A can ask questions in the main part of a dialogue.
We can conclude that in institutional dialogues, A (an official) presents his arguments by giving information, but the partner B (a customer) usually does not give counter-arguments which would explicitly show how her reasoning process proceeds. The outcome of A's influencing process (B's decision) becomes clear only at the end of the interaction.
Face-to-face conversations
Finally, nine face-to-face dialogues were analysed where the participants know each other well. There are parts in the conversations where performing a certain action is under consideration and arguments for and against are presented by the participants. Here we can hope to find all the communicative tactics considered in our model. Actually, enticing and persuading were used but not threatening.
In Dialogue 3 (in the Appendix), A tries to achieve B’s agreement to publish his personal data at the website of an institute but B does not want to publicise his relations with the institute.
The utterances of the participants form adjacency pairs of opinions/statements and agreements/disagreements, as known in conversation analysis (Hutchby & Wooffitt, 1998), which can be interpreted as arguments and counter-arguments. A implements the tactics of persuasion, stressing the usefulness of the action (publishing B's data). B, trying to postpone the decision, in most cases points out the insufficient usefulness or the harmfulness of the action. B does not make a decision during the conversation; therefore, A does not achieve his communicative goal.
The general structure of the analysed face-to-face dialogues can be represented by the following DAs (a subsequence marked with → where the exchange of arguments/counter-arguments takes place can be repeated several times):
A: Proposal/Request
→ B: Assertion (counter-argument)
→ A: Assertion (argument)
B: Accept/Reject/Deferral
We can summarise that in the analysed everyday dialogues, partner influencing is different as compared with institutional dialogues – here B’s reasoning process is explicated by giving counter-arguments (assertions). In such a case, A has more information for choosing a new argument (new assertion) on the next step of negotiation in order to direct B’s reasoning to a (desirable) positive decision.
The corpus analysis demonstrates that actual human–human spoken conversations can be more complicated than the dialogues which the conversational agent modelled in Section 2 can carry out. For that reason, we have chosen a limited task for the implementation of the model in an experimental DS – 'a communication trainer', which can train the user to influence his/her partner (the computer) consistently using certain communicative tactics. The DS can establish certain restrictions on argument types, on the order of the usage of arguments and counter-arguments, etc. (cf. Bringsjord et al., 2008; Yuan, Moore & Grierson, 2008). The DS can alternately play two roles: (1) that of participant A, who is influencing the reasoning of user B in order to achieve B's decision to do an action D, or (2) that of participant B, who is rejecting the arguments for doing the action D proposed by user A. In the first case, the DS does not deviate from the selected tactics but follows them in a systematic way. In the second case, the DS does not deviate from the selected reasoning procedure.
Early attempts to produce a general purpose automated reasoning system were widely seen as a dead end. Special purpose reasoners for smaller, tractable problems, such as theorem proving, have superseded them (Morris, Tarassenko, & Kenward, 2005). Scheuer, Loll, Pinkwart, and McLaren (2009) review the way in which argumentation has been supported and taught to students using computer-based systems. Studies have indicated that argumentation systems can be beneficial for students, and there are research results that can guide system designers and teachers as they implement and use argumentation systems. They conclude that both on the technology and educational psychology sides a number of research challenges remain to be addressed in order to make real progress in understanding how to design, implement and use educational argumentation software.
Implementation: a conversational agent
Four kinds of dialogue management architectures are most common (Ginzburg & Fernandez, 2010). The earliest and also one of the most sophisticated models of conversational agent behaviour is based on the use of planning techniques (Allen, 1995). The two simplest and most commercially developed architectures are finite-state and frame-based (Jurafsky & Martin, 2008). The most powerful are information-state dialogue managers (Traum & Larsson, 2003). The information state may include aspects of dialogue state and also beliefs, desires, intentions, etc. of dialogue participants.
In our application, we have chosen the information-state approach. An experimental DS has been implemented (in the programming language Java), which in (text-based) interaction with a user in Estonian can optionally play the role of A or B (Koit, 2012). Our aim is, first of all, to demonstrate how reasoning can be influenced and modelled in dialogue; therefore, not all the modules listed in Section 3.1 are implemented in our DS (for example, the DS uses only a predefined set of sentences and does not perform language generation).
In the first version of the DS, both the participants only operate with semantic representations of linguistic input/output, the surface linguistic part of interaction is provided in the form of a list of ready-made sentences in Estonian, which are used both by the computer and the user. The user can choose his/her sentences from a menu. The sentences are only classified semantically according to their possible functions and contributions in a dialogue (e.g. the sentences used by A to increase the usefulness of the action, the sentences used by B to indicate harmfulness of the action, etc.). Still, the files of Estonian sentences can easily be substituted with their translations and interaction can take place in another language. The weight of each sentence (argument) is equal to 1.
In the second version, there are ready-made sentences only for the computer. The user can put in free text which will be interpreted by the computer using keywords or phrases in order to classify the user texts semantically. The sentences chosen by the computer as well as texts put in by the user can have different weights. Speech recognition and speech synthesis are not included.
Representation of information states
Let us consider the situation where the computer (conversational agent) plays A’s role. The most important component of an information state of the conversational agent is the partner model, which is changing during the interaction.
There are two parts of an information state of a conversational agent – private (information accessible only for the agent) and shared (accessible for both participants). The private part consists of the following components (Koit, 2011):
- the current partner model (vector wAB of weights – A's picture of B); at the beginning of the interaction, A can create the vector randomly or take into account pre-knowledge about B, if it is available
- the chosen tactic
- the reasoning procedure rj which A is trying to trigger in B and bring to a positive decision (it is determined by the chosen tactics, e.g. when enticing, A triggers the reasoning procedure WISH in B)
- the stack of aspects of D under consideration; at the beginning, A puts the 'title' aspect of the chosen tactics into the stack (e.g. pleasantness when A is enticing B)
- the set of DAs
- the (finite) set of utterances for increasing or decreasing the weights ('arguments for/against')
The shared part of an information state contains
- the set of reasoning models
- the set of tactics
- the dialogue history – the utterances together with the participants' signs and DAs
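The private/shared split described above can be sketched as a pair of record types. The field names follow the text; the concrete Python types and the example values are our assumptions.

```python
# A sketch of the information state of the conversational agent, split into a
# private part (accessible only to A) and a shared part (accessible to both).
from dataclasses import dataclass, field

@dataclass
class PrivateState:
    partner_model: dict        # vector w_AB of weights – A's picture of B
    tactic: str                # e.g. "persuading"
    reasoning_procedure: str   # procedure A tries to trigger, e.g. "needed"
    aspect_stack: list         # title aspect of the tactic is pushed first
    dialogue_acts: set         # DAs at A's disposal
    utterances: dict           # aspect -> utterances changing its weight

@dataclass
class SharedState:
    reasoning_models: set
    tactics: set
    history: list = field(default_factory=list)  # (participant, DA, utterance)

@dataclass
class InformationState:
    private: PrivateState
    shared: SharedState

state = InformationState(
    PrivateState({"useful": 10}, "persuading", "needed", ["useful"],
                 {"proposal"}, {"useful": ["Doctors argue for vegetarian food."]}),
    SharedState({"wish", "needed", "must"},
                {"enticing", "persuading", "threatening"}),
)
print(state.private.aspect_stack[0])  # useful
```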
Update rules
There are different categories of update rules which will be used for moving from the current information state into the next one:
I. rules used by A in order to interpret B's turns, and
II. rules used by A in order to generate its own turns:
1. for the case where the title aspect of the used tactics is located on top of the goal stack (e.g. if the tactic is enticement, then the title aspect is pleasantness)
2. for the case where another aspect is located over the title aspect of the used tactics (e.g. if A is trying to increase the usefulness of D for B but B argues for unpleasantness, then unpleasantness lies over usefulness)
3. for the case where there are no more utterances for continuing the current tactics (and new tactics should be chosen, if possible)
4. for the case where A has to abandon its goal
5. for the case where B has made a positive decision, and therefore A has reached the goal
Let us consider two examples of update rules. The first one (1) belongs to the category I and the second one (2) to the II-2.
There are special rules for updating the initial information state.
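In the information-state style, an update rule is an applicability condition plus an effect on the state. The concrete rule below (interpreting B's refusal by pushing the attacked aspect onto the goal stack, category I) is our illustrative reconstruction, not the paper's exact rule (1).

```python
# Sketch of a category-I update rule: A interprets B's refusal that points at
# some aspect of D by pushing that aspect onto the stack of aspects under
# consideration, so that A's next argument targets it first.

def interpret_refusal(state, attacked_aspect):
    """Apply the rule only if the aspect is not already on top of the stack."""
    if state["stack"] and state["stack"][-1] != attacked_aspect:
        state["stack"].append(attacked_aspect)  # now lies over the title aspect
    return state

state = {"stack": ["useful"], "partner_model": {"unpleasant": 5}}
interpret_refusal(state, "unpleasant")
print(state["stack"])  # ['useful', 'unpleasant']
```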
Examples of interactions with the computer
When playing A’s role, the computer chooses tactics (of enticing, persuading or threatening) and generates (randomly) a model of the partner, according to which the corresponding reasoning procedure (wish, needed or must) yields a positive decision, i.e. the computer ‘optimistically’ presupposes that the user can be influenced in this way. A dialogue begins with an expression of the communicative goal (this is the first utterance u1 of the computer). If the user refuses (after going through the normal human reasoning process, which we are trying to model here), the computer determines (on the basis of the user's utterance u2) the aspect of D whose weight does not match reality and changes this weight in the user model so that the new model will still give a negative result, but as an extreme case: if we increased this weight by one unit (in the case of positive aspects of D) or decreased it by one unit (in the case of negative ones), we should get a positive decision. (For simplicity, we suppose here that each argument changes the corresponding weight in the user model by exactly one unit.) On the basis of the valid reasoning procedure (and tactics), the computer chooses a (counter-)argument u3 from the set of sentences for increasing/decreasing this weight in the partner model. A reasoning procedure based on the new model will yield a positive decision. Therefore, the computer supposes that its next utterance will bring the user to an agreement to do the action. Now the user must enter (or choose from a menu) his/her next utterance (e.g. a new argument against the action), and the process continues in a similar way (see Example 1). Each argument or counter-argument can be chosen only once. A similar approach was used in Loui's Skeletal Model (Loui, 1998).
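The correction step above can be sketched as follows: after B's refusal pointing at one aspect, A moves that weight, one unit at a time, to the extreme value at which the decision is still negative but one more one-unit argument would flip it to positive. Here `decide()` is a stand-in for the valid reasoning procedure, and the aspect names are illustrative assumptions.

```python
# Sketch of the weight-correction step: find the extreme model that is still
# negative, so that the computer's next argument (one unit) flips the decision.

POSITIVE = {"pleasant", "useful"}

def decide(model):
    """Stand-in for the valid reasoning procedure (positive aspects vs negative)."""
    pros = model.get("pleasant", 0) + model.get("useful", 0)
    cons = model.get("unpleasant", 0) + model.get("harmful", 0)
    return pros > cons

def correct(model, aspect):
    """Move `aspect` one unit at a time until one more unit would flip the decision."""
    step = 1 if aspect in POSITIVE else -1
    while not decide(model):
        trial = dict(model)
        trial[aspect] += step
        if decide(trial):
            return model          # extreme case: next argument flips the decision
        model = trial
    return model

m = correct({"pleasant": 1, "useful": 3, "unpleasant": 5, "harmful": 4}, "useful")
print(m["useful"])  # 8 – still negative, but one more unit gives a positive decision
```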
Let us suppose that the computer has chosen the tactic of enticing and has generated the following user model (cf. Section 3.2.1):
The reasoning procedure wish yields a positive decision on this model because
/---/
Example 3 demonstrates in more detail how the partner model is used in an interaction (cf. Koit, 2012). The communicative goal of the conversational agent is to reach the partner's decision to do an action D, which is ‘to become a vegetarian’ (A – computer, B – user, ready-made sentences are used by the computer, the user puts in free text).
A will implement the tactic of persuasion and generates a partner model:
The reasoning procedure needed yields a positive decision in this model (to do D).
The initial information state of A is as follows.
Private part
- the initial partner model
- the tactic chosen by A – persuasion
- the reasoning procedure which A tries to trigger in B – needed (its pre-conditions are fulfilled)
- the stack, containing usefulness – the title aspect of the persuasion tactic
- the set of DAs at A's disposal: {proposal, statements for increasing or decreasing the weights of different aspects of D for B, etc.}
- the set of utterances for expressing the DAs at A's disposal, together with their values: {Doctors argue for vegetarian food – value 1, Meat contains much cholesterol – value 1, etc.}
The shared part of the initial information state contains
- the reasoning procedures wish, needed and must
- the communicative tactics of enticement, persuasion and threatening
- the dialogue history – an empty set
The dialogue (originally in Estonian) is generated jointly by the computer (A) and the user (B). A's first DA is a proposal:
A: Would you like to become a vegetarian?
B: I like animal food. //Refusal: the user pointed out the unpleasantness of the action; therefore the computer must increase the weight of unpleasantness in the user model in such a way that the reasoning procedure needed will give a negative decision as before, but the computer's next argument will increase the weight of usefulness by one unit and the reasoning procedure needed will give a positive decision again. The new model will be (1, 5,
A: Doctors argue for vegetarian food. //The user model will be (1, 5, 9,
B: Meat contains many useful ingredients. //Refusal: the usefulness is lower than expected by the computer; the new partner model will be (1, 5, 9,
A: Vegetarian food contains fewer harmful trans fats. //A new argument for increasing the usefulness changes the model again: (1, 5, 9,
etc.
Six volunteer users have carried out a number of experiments with the DS. Let us consider here the case when the computer plays A's role. When starting a dialogue, the computer randomly generates a user model. At the beginning, we set only one restriction: we required that the initial model satisfy the presumption(s) underlying the corresponding reasoning procedure. Thus, for enticing
The situation improved considerably when we added another restriction to the initial model: we required that the chosen reasoning procedure should aim at getting a positive decision in this model. In real life, this restriction is also meaningful: while making a proposal or request we suppose that our partner will agree and only when counter-arguments are put forward shall we try to refute them.
In dialogues, two kinds of ‘special cases’ occurred:
- ‘a dead point’: a situation when, after exchanging two pairs of turns (i.e. after utterances …,
- ‘an endless dialogue’: a situation where, after two pairs of turns, the partner model remains constant but a valid reasoning procedure gives a positive decision in this model. In this case, the dialogue will continue until one of the participants runs out of utterances. (We suppose that each utterance can be used only once.)
The process of correcting the user model works effectively and yields new models only if the user points out different aspects of D in his/her consecutive utterances. A model will remain constant if the user repeats one aspect over and over again; for example, if the user time and again indicates the unpleasantness of the action, then a new value for w(unpleasant) will not differ from the previous one. The computer handles the special cases as follows:
- if it reaches ‘a dead point’ in two consecutive computations, the computer gives up and terminates the dialogue
- in ‘an endless dialogue’, the computer continues until it runs out of utterances
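Detecting the two special cases reduces to comparing the partner model across consecutive correction rounds, as sketched below; `decide()` again stands in for the valid reasoning procedure, and the dictionary representation is our assumption.

```python
# Sketch: an unchanged partner model with a negative decision is 'a dead point'
# (the computer gives up); an unchanged model with a positive decision is
# 'an endless dialogue' (the computer continues until it runs out of utterances).

def special_case(prev_model, new_model, decide):
    if prev_model != new_model:
        return None               # the model changed – no special case
    return "endless dialogue" if decide(new_model) else "dead point"

decide = lambda m: m["useful"] > m["harmful"]   # stand-in reasoning procedure
print(special_case({"useful": 3, "harmful": 5},
                   {"useful": 3, "harmful": 5}, decide))  # dead point
```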
Often a decision can be negative if a user points out insufficient pleasantness of D in an enticing dialogue or insufficient usefulness of D in a persuading dialogue. The reason is that after computing a new value for w(pleasant) or w(useful), this value can be less than
On these grounds, the following tactics could be recommended to a (malicious) user who wants to make dialogues fail:
- in dialogues of enticing and persuading, point out the insufficient pleasantness and/or usefulness of D;
- choose an aspect of D and consistently use utterances for this aspect – if the computer has fewer utterances of this class in its file than the user, the computer will give up; and
- if a user model already exists in which the reasoning procedure gives a negative decision, point out the unpleasantness and/or harmfulness of D.
In reality, a user does not know whether his/her model in the computer has changed or not. This gives the computer an additional chance to continue a dialogue.
The first two tactics can rather simply be eliminated by the programmer:
- for all classes of user utterances (if s/he can only use the predetermined sentences from a menu), make the corresponding computer classes of utterances at least as powerful; for example, if the user has 10 utterances for pointing out the harmfulness of D, then the computer should have at least 10 utterances for decreasing the harmfulness
- make the computer's class of utterances for increasing the pleasantness (or usefulness) at least as large as the user's classes of utterances for pointing out the insufficiency of pleasantness (usefulness) and for asking for additional information about pleasantness (usefulness)
- make the computer's class of utterances for increasing the resources at least as large as the user's classes for pointing out the insufficiency of resources and for requesting additional information about resources
On the other hand, a user can support a dialogue with quite simple means. S/he could indicate the insufficiency of resources (independent of computer tactics), insufficiency of usefulness (if the computer is enticing or threatening) and insufficiency of pleasantness (if the computer is persuading or threatening).
The generated dialogues are not quite coherent because the computer uses predefined sentences (e.g. User: ‘I don't have proper dresses’. Computer: ‘You won't be alone – there are three other people with you’). This makes the development of a linguistic processor important for real applications.
In the second version of the software tool, a database is used for identifying different key words and phrases in the user input (the input is checked against regular expressions). The database also includes an index of answer files and links to suitable answers as well as files corresponding to different communicative tactics containing various arguments to present to the user.
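The input interpretation of the second version can be sketched as a list of regular expressions, each relating the free user text to an aspect of D. The patterns and aspect names below are illustrative assumptions, not the actual database contents.

```python
# Sketch of keyword/phrase matching: the free user input is checked against
# regular expressions in order to classify it by the aspect of D it points at.
import re

PATTERNS = [
    (re.compile(r"\b(like|tasty|enjoy)\b", re.I), "pleasant"),
    (re.compile(r"\b(useful|healthy|ingredients)\b", re.I), "useful"),
    (re.compile(r"\b(expensive|no time|money)\b", re.I), "resources"),
]

def classify(text):
    """Return the aspect of D the user's utterance points at, or None."""
    for pattern, aspect in PATTERNS:
        if pattern.search(text):
            return aspect
    return None

print(classify("Meat contains many useful ingredients."))  # useful
```

A real database would also map the matched pattern to an answer file containing suitable counter-arguments for the current tactic.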
The use of unrestricted natural language text as input is both an advantage and a disadvantage for the application: it helps in creating more natural dialogues, but at the same time, if the database is compiled poorly, the conversation can become unnatural within a few moves.
In general, the experiments give us reason to believe that such software could be beneficial for training argumentation skills in a certain domain. Still, we would have to implement natural language processing, i.e. language analysis and generation, before doing real training with users in order to justify this assumption. So far, we have used two different scenarios. In the first scenario, the goal of A is to convince the partner B to make a decision about a business trip (see Example 2). In the second scenario, a decision about vegetarianism is to be made (see Example 3). To include new scenarios, new sets of sentences (arguments) should be compiled and their weights assigned. In the case of free user input, key words and phrases should be determined that help the computer relate the user's arguments to the different aspects of the action under consideration.
Summary, conclusions and future work
Our main aim is to model argumentation in agreement negotiation processes. We investigate conversations where the goal of one partner, A, is to get the other partner, B, to carry out a certain action D. Because of this, we consider the reasoning processes that people supposedly go through when working out a decision on whether to perform an action or not. We state that people construct folk theories, or naive theories, for the important fields of their experience and that they rely on these theories when acting inside these domains. These theories include knowledge, beliefs and images concerning the corresponding domains, but also certain principles and concrete rules that form the basis of operating with the mental structures involved.
The general principles of our reasoning model are analogous to the BDI model but it has some specific traits, which we consider important. First, along with desires we also consider other kinds of motivational inputs for creating the intention of an action in an actor. Secondly, we have worked out a model of reasoning which leads to the emergence of the intention (goal) in an actor to do the action in question or to refuse to do it. Our reasoning model consists of a model of human motivational sphere and of reasoning algorithms.
The interaction proceeds in the following way. (1) A informs partner B of the communicative goal (to do D). (2) B goes through the reasoning process and makes a decision, of which s/he informs A. In the reasoning process, the subject considers various positive and negative aspects of D. If the positive aspects weigh more, the subject will make the decision to do D, otherwise s/he will make the opposite decision – not to do D. (3) If B’s decision is negative, A will try, during the following process of interaction, to influence B’s reasoning in such a way that ultimately B would make a positive decision. There exist three reasoning procedures for B in our model (initiated by B’s wish, need or obligation to do D, respectively) and three communicative tactics, i.e. ways of achieving the communicative goal for A, related to the reasoning procedures (respectively, enticement, persuasion and threatening).
We justify our argumentation model in two ways, the first being the analysis of real human–human dialogues. Arguments and counter-arguments used by humans in negotiation dialogues demonstrate, even if indirectly, how the participants reason before making a decision. Three types of dialogues taken from the EDiC were analysed. First, calls to travel agencies were studied in order to find out which communicative tactics travel agents use to persuade clients to book a trip. Second, we analysed calls of sales clerks of an educational company to other institutions where various training courses were offered to clients. Third, we studied face-to-face conversations between acquaintances. We were looking for sequences of DAs (moves) used by the participants to express their arguments and counter-arguments for/against doing an action. We found persuasion and enticement in the dialogues. We can claim that, in its main lines, the process runs in accordance with the procedures that represent the tactics of persuasion and enticement in our model.
The second way to justify our model is its implementation in an experimental DS – the communication trainer. When interacting, the computer uses ready-made Estonian sentences, while the user can either choose his/her utterances from a menu (in the first version of the software) or enter free text in Estonian (in the second version). The computer can play either of two roles: (1) participant A, who is influencing the reasoning of user B in order to achieve B’s decision to do an action D, or (2) participant B, who is rejecting the arguments for doing action D proposed by user A. In the first case, the DS does not deviate from the fixed communicative tactic but follows it consistently; in the second case, it does not deviate from the selected reasoning procedure.
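The fixed-tactic behaviour in role (1) can be illustrated as a simple loop: the system keeps presenting arguments drawn from a single tactic until the user agrees or the arguments are exhausted. The tactic names come from the model above; the argument strings, the dictionary `TACTICS` and the function `run_role_A` are hypothetical placeholders of our own, not the actual sentences or interfaces of the implemented trainer.

```python
# Hypothetical argument stock for each communicative tactic of the model.
TACTICS = {
    "enticement":  ["doing D is pleasant", "the consequences of D are pleasant"],
    "persuasion":  ["doing D is useful", "the consequences of D are useful"],
    "threatening": ["not doing D will be punished"],
}

def run_role_A(tactic, user_accepts):
    """Play role A with one fixed tactic: present its arguments in order,
    never switching tactics, until the user (modelled by the predicate
    user_accepts) agrees or the arguments run out."""
    for argument in TACTICS[tactic]:
        if user_accepts(argument):
            return "agreement"
    return "failure"

# A user who is convinced only by usefulness yields to persuasion
# but resists enticement.
print(run_role_A("persuasion", lambda a: "useful" in a))   # agreement
print(run_role_A("enticement", lambda a: "useful" in a))   # failure
```

The point of the sketch is the design choice stated in the text: the system follows the chosen tactic consistently rather than switching tactics mid-dialogue.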
The goal of the paper is to introduce our agreement negotiation model, check its validity on our existing dialogue corpus and implement a simple DS whose central module is the reasoning model, which is based on a naive theory of reasoning. Developing the other modules of the DS to the same degree has not been our current aim.
The novelty of our approach, as we see it, lies in the starting point of our reasoning model: it is based on studies of the common-sense conception of how the human mind works in such situations. We suppose that in everyday, and even in institutional, communication people start, as a rule, from this conception, not from any consciously chosen scientific one. The reasoning model, as a naive theory of mind, includes principles that represent the relations between different aspects of the action under consideration and the decision taken.
We are continuing our work in the following directions: (1) refining the reasoning model by means of incorporating the ideas presented in the literature on argumentation and improving the formalism, (2) extending the EDiC by new agreement negotiation dialogues and their analysis in order to justify and develop the model, (3) developing a linguistic processor in order to make it possible for the computer to react adequately to the user input and generate semantically coherent dialogues in Estonian, and (4) evaluating the strategies in the implemented DS using the metrics proposed, for example, in Danieli and Gerbino (1995).
Footnotes
Acknowledgement
We thank our students for implementing and testing the model.
Funding
This work is supported by the European Regional Development Fund through the Estonian Centre of Excellence in Computer Science (EXCS) and the Estonian Research Council (projects 8558 and 9124).
Appendix. Examples of human–human dialogues from the EDiC
In the examples, the transcription conventions of conversation analysis are used (Hennoste et al., 2008). Utterances are numbered. Slashes // mark comments.
Dialogue 1 A travel agent (A) calls back to a client (B) and offers a trip to a water park.
Dialogue 2 A travel agent A proposes various arguments for the usefulness of the client B booking the trip.
Dialogue 3 A tries to obtain B’s agreement to publish his personal data on the website of an institute, but B does not want to publicise his relations with the institute.
