Abstract
Persuasion is an activity that involves one party trying to induce another party to believe something or to do something. It is an important and multifaceted human facility. Obviously, sales and marketing is heavily dependent on persuasion. But many other activities involve persuasion such as a doctor persuading a patient to drink less alcohol, a road safety expert persuading drivers to not text while driving, or an online safety expert persuading users of social media sites to not reveal too much personal information online. As computing becomes involved in every sphere of life, so too is persuasion a target for applying computer-based solutions. An automated persuasion system (APS) is a system that can engage in a dialogue with a user (the persuadee) in order to persuade the persuadee to do (or not do) some action or to believe (or not believe) something. To do this, an APS aims to use convincing arguments in order to persuade the persuadee. Computational persuasion is the study of formal models of dialogues involving arguments and counterarguments, of user models, and strategies, for APSs. A promising application area for computational persuasion is in behaviour change. Within healthcare organizations, government agencies, and non-governmental agencies, there is much interest in changing behaviour of particular groups of people away from actions that are harmful to themselves and/or to others around them.
Introduction
Persuasion is an activity that involves one party trying to get another party to do (or not do) some action or to believe (or not believe) something. It is an important and multifaceted human facility. Obviously it is very important in commercial activities, but it is also important in professional life, and indeed, in everyday life. In whatever we do, we frequently find ourselves trying to persuade other people with regard to something that is important to us, and/or them.
In this paper, I discuss some aspects of the notion of persuasion, and explain how this leads to the idea of computational persuasion. Computational models of argument are central to the development of computational persuasion. I briefly review some key aspects of computational models of argument, and highlight some topics that need further development. I then briefly cover behaviour change as a field in which we can apply methods from computational persuasion, and evaluate the performance in the field.
What is persuasion?
The aim of persuasion is for the persuader to change the mind of the persuadee. By persuasion, the persuader directs the persuadee to believe (or disbelieve) something that the persuader would like the persuadee to believe (or disbelieve), and this in turn may result in them doing (or not doing) something that the persuader would like to be brought about.
Some kinds of interaction surrounding persuasion include:
- the persuader collecting information, preferences, etc. from the persuadee;
- the persuader providing information, offers, etc. to the persuadee;
- the persuader winning favour (e.g. by flattering the persuadee, by making small talk, by being humorous).
But importantly, arguments are the essential structures for presenting the claims (and counterclaims) in persuasion.
An argument-centric focus on persuasion leads to a number of inter-related aspects (see Section 3) that need to be taken into account, any of which can be important in bringing about successful persuasion. These dimensions can significantly affect the success of argumentation, and therefore go some way to delineating what constitutes a good approach to persuasion. But, before considering these dimensions, I would like to make the following claim.
A corollary of the above claim is that how convincing an argument is does not equal how correct it is. For example, arguments like "homeopathy focuses on processes of health and illness rather than states, and therefore it is better than regular medicine" and "the sheer weight of anecdotal evidence gives rise to the common-sense notion that there must be some basis for homeopathic therapies by virtue of the fact that they have lasted this long" can be convincing for some audiences.
Levers for persuasion
We now consider some key dimensions that can affect the success of a persuader in persuading a persuadee. We focus on the dimensions that the persuader may have some control over such as rationality of argumentation, persuasion techniques, argumentation style, framing of the arguments, and emotions invoked by arguments. We consider each of them in the following subsections.
Rationality of argumentation
The study of argumentation has largely focused on what constitute good arguments in a normative sense. This involves identifying the features of good arguments that would be appropriate for honest rational agents to present or accept. For an excellent review, see [146]. The careful presentation of premises, and the use of logical reasoning, is much espoused for good quality argumentation. In order to make a case in professional life (such as politics, academia, business, journalism, etc), it is seen as an essential ability. This has led to techniques for constructing and deconstructing good arguments (see for example [48,66]).
In parallel with elucidating what constitutes a good argument, the study of argumentation has also identified types of poor or inappropriate argumentation such as argumentation fallacies (as well as [146], see reviews in [58,145]). Some fallacies are described as formal fallacies since they violate rules of logic (e.g. claiming the antecedent of an implication is true because its consequent is true), or violate rules of probability theory (e.g. the gambler’s fallacy), but many fallacies are informal since they constitute what could be described as unacceptable arguments, such as begging the question (which is providing a version of the claim as a premise, and using that to derive the claim), argumentum ad hominem (arguing against the arguer rather than their arguments), and appeal to authority (arguing that an argument is true because of the position of the arguer).
Given that the study of argumentation has developed such a comprehensive understanding of how to differentiate good from bad argumentation, if we are dealing with rational agents when undertaking persuasion, it appears important to use good arguments and argumentation, and avoid poor arguments and argumentation.
Furthermore, the overall quality of the argumentation is important from a psychological point of view (see for example [64,67]). If a persuader wants to convince the persuadee of an argument (a persuasion argument), then the determinants of success include: acceptability of the persuasion argument (against counterarguments); belief in the premises of the persuasion argument; fit of the persuasion argument with the persuadee's agenda, goals, preferences, etc.; and the quality of the constellation of arguments considered (balance, depth, breadth, understandability, etc.). Comprehensibility of persuasive arguments has also been shown to be an important determinant of successful persuasion [45].
Despite the importance of being rational in argumentation, there is a tendency in the computational models of argument community to overemphasize the need to be rational. As we will cover later, the role of emotional arguments has been found in psychological studies to be important in persuasion. Furthermore, even for rational argumentation, some widely accepted principles, such as not deploying ad hominem arguments, can be over-restrictive when, for example, the speaker has a poor reputation or is unqualified in the topic that they are speaking on, though the use of argument schemes with critical questions goes some way to redressing the balance here (for a review of argument schemes, see [148]).
Persuasion techniques
Psychological studies have identified persuasion techniques that are seen in human interactions in general. For instance, Cialdini has identified the following six principles of influence [34]: reciprocation, commitment, consensus, liking, authority, and scarcity.
The above principles are supported by empirical studies in psychology. Other studies suggest further principles that appear to be useful in various domains such as law (e.g. [52]), healthcare (e.g. [133]), and ecommerce (e.g. [105]).
Argumentation style
Argumentation style concerns who is presenting the arguments, the language used in those arguments, and the way the dialogue is structured. The issues raised touch on broader aspects of the personality of the persuadee, and the context of the persuasion (see for example [132] for further discussion of these issues).
Determining the precise parameters of argumentation style raises some complex issues for effective persuasion in a specific application. Experiential knowledge and psychological studies offer some general guidance but in practice the specific parameters need to be determined for a specific application.
Framing of arguments
The framing of an argument (i.e. the way its content is worded and presented) can have a significant effect on how the argument is received, even when the underlying information is unchanged.
Consider for example experiments by Tversky and Kahneman where participants are asked to imagine preparations for the outbreak of an unusual disease that is expected to kill 600 people, and to consider two alternative programmes to combat the disease [141]. In the first framing of the study (below), the majority of participants prefer Programme A.
If Programme A is adopted, 200 people will be saved.
If Programme B is adopted, there is a 1/3 probability that 600 people will be saved, and a 2/3 probability that no people will be saved.
In contrast, in the second framing of the programmes (below), the majority of participants prefer Programme D. So even though the versions above and below provide the same information, participants can change their preferences based on the framing. The explanation is that in the context of gain (above) people tend to be averse to risk, whereas in the context of loss (below) they try to minimize the loss.
If Programme C is adopted 400 people will die.
If Programme D is adopted, there is a 1/3 probability that nobody will die, and a 2/3 probability that 600 people will die.
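A quick calculation confirms that the two framings are equivalent in expectation (using the standard numbers from the Tversky and Kahneman scenario: a 1/3 chance of saving all 600 under Programme B, and a 2/3 chance of 600 deaths under Programme D):

```python
# Expected lives saved (gain framing) and expected deaths (loss framing).
programme_A_saved = 200
programme_B_saved = (1 / 3) * 600 + (2 / 3) * 0
programme_C_deaths = 400
programme_D_deaths = (1 / 3) * 0 + (2 / 3) * 600

print(programme_A_saved == programme_B_saved)          # True
print(600 - programme_C_deaths == programme_A_saved)   # True: C restates A
print(programme_C_deaths == programme_D_deaths)        # True
```

All four programmes describe the same expected outcome; only the description as lives saved versus lives lost changes, yet the preference reversal persists.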
In a user study on the persuasiveness of healthy eating messages [138], positively framed messages (e.g. Most people believe that eating a healthy breakfast contributes to a longer lifespan) were shown to be more efficacious than negatively framed messages (e.g. Most people believe that eating an unhealthy breakfast contributes to a shorter lifespan). Furthermore, Cialdini’s principles of persuasion [34] were considered (i.e. reciprocation, commitment, consensus, liking, authority, and scarcity), and it was found that arguments that appeal to authority (e.g. Studies conducted by health experts have shown that eating a healthy breakfast keeps you energized) were the most persuasive.
Personality of persuadee
Another key dimension that can affect the success of persuasion is the personality of the persuadee. Obviously, this is not in the control of the persuader, but if the persuader knows about the personality of the persuadee, the persuader can make a better choice of strategy to use with the persuadee. Consider for example persuading someone to vote in the national election: if the person "follows the crowd", then telling them that the majority of the population voted in the last election is more likely to get them out to vote, whereas if the person "follows rules rigorously", then telling them that it is their duty to vote is more likely to get them out to vote. Mistaking the personality trait can have a negative effect on the chances of successful persuasion. Psychology has developed numerous methods for characterizing personality. An important example is the model based on the OCEAN personality traits, which are Openness to experience, Conscientiousness, Extroversion, Agreeableness, and Neuroticism [53].
There are many other ways that knowing more about the persuadee can be used in determining an appropriate strategy for persuasion. Attitudes are important determiners in persuasion, and the psychology of attitudes may provide important insights in a persuasion strategy (for a review see [97]). Various other aspects of psychology are routinely harnessed in commerce, particularly ecommerce, such as the psychology of colour, language, graphics, pricing, and negotiation (for a review see [105]). Cultural information about a persuadee, such as nationality, can give important information about the kinds of interaction that a persuadee might respond positively to (see for example [65]). Other kinds of traits, such as political, ethical, or religious, may be important determiners in some contexts for persuasion. Furthermore, detailed information about multi-dimensional private traits can be predicted from digital records of the user (e.g. from Facebook likes) [88].
Emotion invoked by arguments
Presenting emotional arguments can be important. Emotional arguments are predominantly deployed by the persuader in order to influence the persuadee via emotional devices. For example,
You have a good income, and so you should feel guilty if you do not donate money to this emergency appeal by Médecins Sans Frontières.
Your parents will be proud of you if you complete your thesis and get your PhD.
Note that emotional arguments contrast with evidential/logical arguments, such as the following.
You will have a much higher chance of getting a highly paid job if you complete your thesis and get your PhD award.
In certain situations, emotional arguments can be powerful arguments in persuasion. For instance, Lukin et al [95] have shown that with some audiences, emotional arguments are more effective in persuasion than factual arguments. For this, they categorized audiences according to the OCEAN personality traits (i.e. openness to experience, extroversion, agreeableness, conscientiousness, and neuroticism), and showed that conscientious, open, and agreeable people are more convinced by emotional arguments.
What is computational persuasion?
As developments in artificial intelligence attempt to capture more aspects of human cognition, it is natural to consider how persuasion can be captured as a software process. In the following, I define the notion of an automated persuasion system, and use this to define computational persuasion.
An automated persuasion system (APS) is a system that can engage in a dialogue with a user (the persuadee) in order to persuade the persuadee to do (or not do) some action or to believe (or not believe) something. To do this, an APS aims to use convincing arguments in order to persuade the persuadee. Computational persuasion is then the study of formal models of dialogues involving arguments and counterarguments, of user models, and of strategies, for APSs.
Clearly, persuasion is a complex and fascinating phenomenon. Furthermore, it is very important for humans to be able to persuade, and to be persuaded. However, this does not mean that it would be best for both parties (i.e. persuader and persuadee) if the persuader is always successful in persuasion. Sometimes it will be good for the persuadee but not always, and so the persuadee needs to judge whether or not to agree. This raises some interesting challenges for developing computational persuasion. In the rest of this review, I am only able to touch on some of the issues. I will proceed by considering computational models of argument (i.e. computational argumentation), discussing features that will be useful for computational persuasion, and highlighting some shortcomings.
What do computational models of argument offer?
Computational persuasion is based on computational models of argument. These models are being developed to reflect aspects of how humans use conflicting information by constructing and analyzing arguments. A number of models have been developed, and some basic principles established. We can group much of this work into four levels as follows (giving only examples of relevant citations).
Dialectical level
Dialectics is concerned with determining which arguments “win” in some sense. In abstract argumentation, originally proposed in the seminal work by Dung [42], arguments and counterarguments can be represented by a graph. Each node denotes an argument, and each arc denotes one argument attacking another argument. Dung defined some principled ways to identify extensions of an argument graph. Each extension is a subset of arguments that together act as a coalition against attacks by other arguments. An argument in an extension is, in a sense, acceptable. For a review of Dung’s approach and alternatives, see [8]. Labels can also be assigned to arguments, and Caminada and Gabbay [27] provided a formalization based on three labels (in, out and undecided) that they show is equivalent to Dung’s formalisation.
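The three-label view can be sketched in a few lines of code. The following is a minimal illustration of computing the grounded labelling via its standard iterative characterization; the argument names are hypothetical:

```python
def grounded_labelling(arguments, attacks):
    """Compute the grounded labelling of an abstract argument graph using
    the three labels of Caminada and Gabbay: "in", "out", and "undecided".
    arguments: iterable of argument names; attacks: set of (attacker, target).
    """
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}
    label = {a: "undecided" for a in arguments}
    changed = True
    while changed:
        changed = False
        for a in arguments:
            if label[a] != "undecided":
                continue
            if all(label[b] == "out" for b in attackers[a]):
                # Every attacker is defeated, so the argument is acceptable.
                label[a] = "in"
                changed = True
            elif any(label[b] == "in" for b in attackers[a]):
                # An accepted argument attacks it, so it is defeated.
                label[a] = "out"
                changed = True
    return label

# A attacks B, and B attacks C: A is unattacked, so it is "in", which
# defeats B ("out"), which in turn reinstates C ("in").
print(grounded_labelling(["A", "B", "C"], {("A", "B"), ("B", "C")}))
# {'A': 'in', 'B': 'out', 'C': 'in'}
```

Note that a self-attacking argument remains "undecided" under this labelling, reflecting the cautious character of the grounded semantics.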
There have been numerous developments of abstract argumentation that include alternative definitions for extensions such as ranking-based semantics [1,24,120], and introduction of attacks on attacks [7,103], preferences [2], weights on attacks [44], support relations (see for example [30,31,109] and see Fig. 1 for an example of an argument graph with supporting and attacking arguments), and probabilities [43,72,74,82,92,135]. Furthermore, there has been the development of software solvers for determining extensions (see for example [33,137]), and the application of natural language processing techniques for constructing argument graphs from free text (see for example [93]). In addition, there are methods for argument dynamics to ensure that specific arguments hold in the extensions of the argument graph such as epistemic enforcement in abstract argumentation [9,10,38], revision of argument graphs [36,37], and belief revision in argumentation (e.g. [18,29,41,50]).

A simple example of a bipolar argument graph for an application for persuading a user to undertake more exercise. A dotted arc from a node A to a node B denotes that argument A supports argument B. A solid arc from a node A to a node B denotes that argument A attacks argument B (i.e. A is a counterargument to B).
At the dialectical level, arguments are atomic. They are assumed to exist, but there is no mechanism for constructing them. Furthermore, they cannot be divided or combined.
Logical level
To address the atomicity of arguments, the logical level provides a way to construct arguments from knowledge. At the logical level, an argument is normally defined as a pair where the first item (the support) is a minimal consistent set of formulae that entails the second item (the claim).
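As a toy illustration of the logical level, the following sketch enumerates arguments as (support, claim) pairs over a small hypothetical knowledge base, with entailment restricted to forward chaining over definite rules (the propositions and rules are invented for illustration):

```python
from itertools import combinations

def closure(kb):
    """Forward chaining over a knowledge base of (premises, conclusion) pairs;
    an entry with empty premises is a fact."""
    facts = {c for (p, c) in kb if not p}
    rules = [(p, c) for (p, c) in kb if p]
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(x in facts for x in premises):
                facts.add(conclusion)
                changed = True
    return facts

def arguments_for(kb, claim):
    """Enumerate arguments for a claim: pairs (support, claim) where the
    support is a subset-minimal part of the knowledge base entailing the claim."""
    found = []
    for k in range(1, len(kb) + 1):
        for support in combinations(kb, k):
            if claim in closure(support):
                # Keep only supports for which no smaller support was found.
                if not any(set(s) <= set(support) for (s, _) in found):
                    found.append((support, claim))
    return found

kb = [((), "regular_exercise_reduces_stress"),
      ((), "you_are_stressed"),
      (("you_are_stressed", "regular_exercise_reduces_stress"),
       "you_should_exercise")]
args = arguments_for(kb, "you_should_exercise")
print(len(args))   # 1: the only minimal support uses all three entries
```

Enumerating supports smallest-first makes the minimality check cheap; a full logical treatment would also need a consistency check on the support, omitted here for brevity.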
Dialogical level
Dialogical argumentation involves agents exchanging arguments in activities such as discussion, debate, persuasion, and negotiation. Starting with [63,96], dialogue games are now a common approach to characterizing argumentation-based agent dialogues (e.g. [3,22,25,39,40,46,87,100,101,114,117,147]). Dialogue games are normally made up of a set of communicative acts called moves, and a protocol specifying which moves can be made at each step of the dialogue. Dialogical argumentation can be viewed as incorporating logic-based argumentation, but in addition, dialogical argumentation involves representing and managing the locutions exchanged between the agents involved in the argumentation. The emphasis of the dialogical view is on the interactions between the agents, and on the process of building up, and analyzing, the set of arguments until the agents reach a conclusion. See [118] for a review of formal models of persuasion dialogues and [23,136] for reviews and analyses of strategies in dialogical argumentation.
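A dialogue game in this sense can be sketched as a set of moves plus a protocol function. The toy moves and rules below are invented for illustration and do not correspond to any specific protocol from the literature:

```python
def legal_moves(history):
    """Toy protocol over the moves claim, why, argue, concede, and retract:
    the dialogue opens with a claim; a claim or argue move may be challenged
    (why) or accepted (concede); a why move must be answered with argue or
    withdrawn with retract; concede and retract close the dialogue."""
    if not history:
        return {"claim"}
    last = history[-1]
    if last in {"claim", "argue"}:
        return {"why", "concede"}
    if last == "why":
        return {"argue", "retract"}
    return set()  # concede or retract ends the dialogue

# Replay a well-formed dialogue, checking each move against the protocol.
history = []
for move in ["claim", "why", "argue", "concede"]:
    assert move in legal_moves(history)
    history.append(move)
print("closed:", legal_moves(history) == set())   # closed: True
```

Real dialogue games condition legality on the content of moves (which argument attacks which), not just the move type, but the structure of moves constrained by a protocol is the same.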
By modelling the persuadee, it is possible to update the model during the dialogue based on the persuadee responses. This can then be used to determine whether there is any chance of the dialogue leading to success, and if not, giving up and unsuccessfully terminating the dialogue [23]. Probabilistic models of the opponent have been used in some strategies allowing the selection of moves for an agent based on what it believes the other agent believes [73], selection of moves based on what it believes the other agent is aware of [126], and based on the history of previous dialogues to predict the arguments that an opponent might put forward [59]. In [20], a planning system is used by the persuader to optimize choice of arguments based on belief in premises, and in [21], an automated planning approach is used for persuasion that accounts for the uncertainty of the proponent’s model of the opponent by finding strategies that have a certain probability of guaranteed success no matter which arguments the opponent chooses to assert.
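To make the idea of a probabilistic persuadee model concrete, the following sketch performs a Bayesian update of the system's belief that the persuadee accepts an argument, given the persuadee's response; the response-likelihood values are hypothetical and not taken from any of the cited works:

```python
def update_belief(prior, agreed, p_agree_if_believes=0.8, p_agree_if_not=0.3):
    """Bayesian update of P(persuadee believes the argument) after observing
    whether the persuadee agreed with it. The two likelihoods are illustrative
    assumptions about how reliably agreement reflects belief."""
    if agreed:
        numer = p_agree_if_believes * prior
        denom = numer + p_agree_if_not * (1 - prior)
    else:
        numer = (1 - p_agree_if_believes) * prior
        denom = numer + (1 - p_agree_if_not) * (1 - prior)
    return numer / denom

belief = 0.5                                  # uninformed prior
belief = update_belief(belief, agreed=True)   # persuadee agrees
print(round(belief, 3))                       # 0.727: belief has risen
```

Repeating such updates over the course of a dialogue gives the system a progressively sharper model of the persuadee, which can then inform move selection or early termination.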
Utility theory has also been considered in argumentation (for example [99,113,122,128]) though none of these represents the uncertainty of moves made by each agent in argumentation. Probability theory and utility theory (using decision theory) has been used in [81] to identify outcomes with maximum expected utility where outcomes are specified as particular arguments being included or excluded from extensions. Strategies in argumentation have also been analyzed using game theory [47,121,123], though these are more concerned with issues of manipulation, rather than persuasion.
Rhetorical level
Normally argumentation is undertaken in some wider context of goals for the agents involved, and hence individual arguments are presented so as to contribute to the wider aim. For instance, if an agent is trying to persuade another agent to do something, then it is likely that some rhetorical device is harnessed and this will affect the nature of the arguments used (e.g. a politician may refer to investing in the future of the nation’s children as a way of persuading colleagues to vote for an increase in taxation).
Aspects of the rhetorical level include believability of arguments from the perspective of the audience [69], impact of arguments from the perspective of the audience [70], use of threats and rewards [4], appropriateness of advocates [71], and values of the audience [11,12,14,112]. The latter has led to the notion of value-based argumentation, and this has been developed for various applications including group persuasion [112], for persuasion concerning plans [102], for analyzing legal reasoning [13], and for analyzing political argument [6].
The use of emotional arguments could be regarded as a rhetorical device. However, the modelling of emotional aspects of argument has received little attention in the computational argumentation literature. There is a proposal for rules for specifying scenarios where empathy is given or received in negotiation [98], and there is a proposal for specifying argument schemas (rules that specify general patterns of reasoning) for capturing aspects of emotional argument [94]. In contrast, it is interesting to note that affective computing has put emotion at the centre of the relationship between users and computing systems [26].
Shortcomings in the state of the art
So computational models of argument offer a range of formal systems for generating and comparing arguments, and for undertaking this in a dialogue. However, there are shortcomings in the state of the art of computational models of argument for application in persuasion. The current literature does not adequately offer the following, and hence there are some exciting research challenges to be addressed if we are to deliver computational persuasion.
A formalization of domain knowledge appropriate for constructing arguments concerning persuasion in applications such as behaviour change (e.g. a formalism for representing persuadee goals, persuadee preferences, system persuasion goals, and system knowledge concerning actions that can address persuadee goals), though the multiagent systems community offers proposals that might be adapted for our needs.
Since we are not attempting to support free text input from the persuadee, we require protocols that take account of the user’s views without recourse to natural language processing. For this, in Section 6, I will discuss how asymmetric dialogues can be used.
Persuadee models that allow the persuasion system to construct a model of the persuadee’s beliefs and preferences, to qualify the probabilistic uncertainty of that model, and to update that model and the associated uncertainty as the dialogue progresses. There are some promising proposals that could contribute to a solution (e.g. [20,21,59,73,75,127]), and I will discuss the progress we have made on this in Section 7.4. However, if we are to harness some of the other levers of persuasion that I discussed in Section 3, then we will need to broaden the modelling to incorporate aspects of personality and bias.
Strategies for persuasion that harness the persuadee model to find optimal moves to make at each stage (trading the increase in probability of successfully persuading the persuadee against the raised risk that the persuadee disengages from the dialogue as it progresses). The strategies may involve the uncertainty in the user's beliefs and awareness of arguments, and may also include an assessment of the user's personality and/or biases. With this kind of information in the user model, we may be able to harness some of the levers of persuasion discussed in Section 3, such as persuasion techniques, framing of arguments, and argumentation style. I will discuss the progress we have made on this in Section 7.4.
In order to focus research on addressing these shortcomings, we can consider how computational persuasion can be developed and evaluated in the context of behaviour change applications.
Studies with participants
In order to have well-understood computational models of argument that correspond to human behaviour, there is a need to ground these models with studies with participants. The studies undertaken so far validate some aspects of these models, but also indicate some shortcomings in being able to model human behaviour.
Studies performed by Rahwan et al [124] and Cerutti et al [32] investigated various forms of reinstatement in argumentation. The users were presented with several argument graphs and were asked to indicate how acceptable a given argument was in their opinion. The results show that in some cases, implicit knowledge about the domain can substantially affect the given acceptability levels. However, more importantly, the experiments show that an attacked argument's acceptability is lowered, but does not fall to 0, which is what would be predicted by the usual dialectical semantics for abstract argumentation. Additionally, introducing a defence for this argument raises its acceptability, though typically it does not reach the value of 1, which is the level the usual dialectical semantics would predict.
In a study of argumentation dialogues, Rosenfeld and Kraus [129] undertook an experiment in order to develop a machine learning-based approach to predicting the next move a participant would make in a dialogue. This work was further extended in [130,131]. The machine learning models were trained on data that incorporated the sequences of arguments in a dialogue that the participants accept. Once trained, the models were able to predict acceptance for unseen cases.
In another machine learning-based approach, Huang and Lin [68] developed a software agent for participating in dialogues with potential customers with the aim of persuading them to offer a higher price for goods. Dialogues were constructed from an argument graph, and training was done on simulated scenarios. In testing with users, the agent was able to persuade the participants to increase the mean price offer.
There are also studies with participants by Masthoff and co-workers that investigate the efficacy of using arguments as a way of persuading people when compared with other counselling methods. These indicate that argumentation may have disadvantages if used inappropriately [107], and that rather than a confrontational approach, argumentation based on appeal to friends, appeal to group, or appeal to fun may be more efficacious [142,143].
Emotion in argumentation has also been the subject of a study with participants in a debate where the emotional state was estimated from EEG data and automated facial expression analysis. In this study, Benlamine et al [15] showed, for instance, that the number and strength of arguments, attacks, and supports exchanged by a participant could be correlated with particular emotions of that participant.
What is behaviour change?
There is a wide variety of problems that are dangerous or unhealthy or unhelpful for an individual, or for those around him/her, and that are expensive to government and/or to society (see Table 1 for examples). For each type of problem, we can conceivably tackle a small proportion of cases with substantial benefit to individuals, government and society using techniques for behaviour change.
Some examples where people could change their behaviour and for which there would be a substantial quantifiable benefit to themselves, and/or to society
Many organizations are involved in behaviour change, and many approaches are used to persuade people to change their behaviour including counselling, information resources, and advertising. Many diverse factors can influence how such approaches can be used effectively in practice such as the following.
As computing becomes involved in every sphere of life, so too is persuasion a target for applying computer-based solutions.
Over the past 10 years, a wide variety of systems have been developed to help users to control body weight [91], to reduce fizzy drink consumption [89], to increase physical exercise [149], and to decrease stress-related illness [85]. Many of these persuasion technologies for behaviour change are based on some combination of questionnaires for finding out information from users, provision of information for directing the users to better behaviour, computer games to enable users to explore different scenarios concerning their behaviour, provision of diaries for getting users to record ongoing behaviour, and messages to remind the persuadee to continue with the better behaviour.
Interestingly, argumentation is not central to the current manifestations of persuasion technologies. The arguments for good behaviour seem either to be assumed before the persuadee accesses the persuasion technology (e.g. when using diaries, or receiving email reminders), or arguments are provided implicitly in the persuasion technology (e.g. through provision of information, or through game playing). So explicit consideration of arguments and counterarguments are not supported with existing persuasion technologies. This creates interesting opportunities for computational persuasion to develop APSs for behaviour change where arguments are central.
Argument-based persuasion technology could complement other technologies by helping users when they contemplate change. This fits the technology into the Stages of Change model [119] which comprises the following phases that someone might go through (examples taken from [110]).
By acting at the contemplation stage, the user might be prepared to enter into a dialogue with an APS. The role of the APS would then be to provide context-specific (personalized) information to the user through arguments, and to handle the doubts and issues that the user might have in the form of counterarguments. In the next section, we consider the potential of this approach in more detail.
A strategy for an APS needs to find the best choice of move at each stage where best is determined in terms of some combination of the need to increase the likelihood that the persuadee is persuaded by the goal of the persuasion, and the need to decrease the likelihood that the persuadee disengages from the dialogue. For instance, at a certain point in the dialogue, the APS might have a choice of two arguments A and B to present. Suppose A involves further moves to be made (e.g. supporting arguments) whereas B is a single posit. So choosing A requires a longer dialogue (and higher probability of disengagement) than B. Also suppose that if the persuadee engages to the end of each dialogue, then it is more likely that the persuadee believes A than B. So if the APS is to make the best choice of move, it needs to consider both the risk and potential benefit from each of them.
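The choice between A and B in this example can be framed as a small expected-value calculation; all of the probabilities below are hypothetical:

```python
def expected_success(p_convinced_if_completed, moves, p_stay_per_move):
    """Probability that the persuadee stays for every move of the dialogue
    and is then convinced by the argument."""
    return (p_stay_per_move ** moves) * p_convinced_if_completed

# Argument A needs 3 moves (supporting arguments) but convinces with
# probability 0.9 if the persuadee stays; argument B is a single posit
# that convinces with probability 0.6. Each move risks disengagement.
a = expected_success(0.9, moves=3, p_stay_per_move=0.8)
b = expected_success(0.6, moves=1, p_stay_per_move=0.8)
print(round(a, 3), round(b, 3))   # 0.461 0.48: the shorter argument wins here
```

With a lower per-move disengagement risk (e.g. a stay probability of 0.95), the same calculation favours the longer argument A, which is exactly the risk/benefit trade-off described above.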
An APS should present arguments and counterarguments that are informative, relevant, and believable, to the persuadee. If the APS presents uninformative, irrelevant, or unbelievable arguments (from the perspective of the persuadee), the probability of successful persuasion is reduced, and it may alienate the persuadee. A choice of strategy depends on the protocol, and on the kind of dynamic persuadee model. Various parameters can be considered in the strategy such as the preferences of the persuadee, the agenda of the persuadee, etc.
So argument-based persuasion for behaviour change offers a challenging and worthwhile field for developing and evaluating computational persuasion. As indicated by the review of computational models of argument in Section 4.1, there are some promising developments that could form the basis of APSs for behaviour change, as I discuss in the next section. Furthermore, there have already been some promising studies using dialogue games for health promotion [28,54–56], embodied conversational agents for encouraging exercise [108], dialogue management for persuasion [5], and tailored assistive living systems for encouraging exercise [57], that indicate the potential for APSs.
Computational models of argument drawing on ideas of abstract argumentation, logical argumentation, dialogical argumentation, together with techniques for argument dynamics and for rhetoric, offer an excellent starting point for developing computational persuasion for applications in behaviour change.
I assume that an APS for behaviour change is a software application running on a website or mobile device. Some difficult challenges in automating persuasion via an app are the following.
Simple example of an asymmetric dialogue between a user and an APS. As no natural language processing is assumed, the arguments posted by the user are actually selected by the user from a menu provided by the APS

Interface for an asymmetric dialogue move for asking the user’s belief in an argument. (a) The top argument is by the APS, and the second argument is a counterargument presented by the APS. The user uses the menu to give his/her belief in the counterargument. (b) A query is asked that may be used in a user model and a menu of answers is provided. (c) A query is asked by the system to determine the goals of the user. Here the user may select any number of the items on the list.
The dialogue may involve steps where the system finds out more about the persuadee’s beliefs, intentions and desires, and where the system offers arguments with the aim of changing the persuadee’s beliefs, intentions and desires. The system also needs to handle objections or doubts (represented by counterarguments) with the aim of providing a dialectically winning position. To illustrate how a dialogue can lead to the presentation of an appropriate context-sensitive argument consider the example in Table 2. In this, only the APS presents arguments, and when it is the user’s turn s/he can only answer questions (e.g. yes/no questions) or select arguments from a menu. In Fig. 2, a dialogue step is illustrated where a user can state the degree of agreement or disagreement in an argument.
Arguments can be automatically generated from a knowledgebase. For this, we can build a knowledgebase for each domain, though there are many commonalities in the knowledge required for each behaviour change application.
To represent and reason with the domain knowledge, we could harness a form of Belief-Desire-Intention (BDI) calculus in predicate logic for relating beliefs, behavioural goals, and behavioural states, to possible actions. We could then use the calculus with logical argumentation to generate arguments for persuasion. A small example of an argument graph that we might want to generate by this process is given in Fig. 3 including the persuasion goal giving up smoking will be good for your health.
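A minimal forward-chaining sketch of this idea follows. The facts, rules, and predicate names are invented for illustration and are not taken from any particular BDI calculus; each rule application is recorded as an argument (premises, claim):

```python
# Illustrative knowledgebase: rules relate beliefs about the user's
# behavioural state to the persuasion goal. All names are hypothetical.
facts = {"you_smoke", "smoking_causes_disease"}
rules = [
    ({"you_smoke", "smoking_causes_disease"}, "quitting_improves_health"),
    ({"quitting_improves_health"}, "giving_up_smoking_good_for_health"),
]

def derive(facts, rules):
    """Forward-chain to a fixpoint, recording each derivation as an argument."""
    arguments = []          # list of (premises, claim) pairs
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, claim in rules:
            if premises <= known and claim not in known:
                known.add(claim)
                arguments.append((premises, claim))
                changed = True
    return arguments

generated = derive(facts, rules)
```

In a full system the claims would be structured formulae and the arguments would be assembled into a graph with attack relations, but the chaining step is essentially the one shown.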
To support the selection of arguments, we require persuadee models. For this, we can establish the probabilistic uncertainty associated with the APS model of the persuadee’s beliefs, behavioural state, behavioural goals, preferences, tendencies, etc., by asking the persuadee appropriate questions, by considering previous usage of the APS by the persuadee, and by the type of the persuadee (i.e. by assignment to a built-in model learned from a class of similar users).

Example of an argument graph for persuasion.
In this section, I outline a framework for computational persuasion that is being developed in an ongoing project (for more information, see the project website).
In our framework, we focus on the uncertainty surrounding the user’s awareness of arguments, and the user’s belief in the arguments s/he is aware of. For this, we have harnessed probabilistic argumentation. Two main approaches to probabilistic argumentation are the constellations and the epistemic approaches [74].
In the constellations approach, the uncertainty is in the topology of the graph (see for example [43,72,92]). As an example, this approach is useful when one agent is not sure what arguments and attacks another agent is aware of, and so this can be captured by a probability distribution over the space of possible argument graphs. The usual definition for extensions (grounded, preferred, stable, etc) can be applied to each subgraph, and then for each subset of arguments X, the probability that X is an extension for the grounded (respectively preferred, stable, etc) extension is the sum of the probability of each subgraph that has X as a grounded (respectively preferred, stable, etc) extension.
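This computation can be sketched directly. In the example below, one graph has arguments A and B with B attacking A, and the system is unsure whether the user is aware of B; the subgraph distribution is illustrative:

```python
from collections import defaultdict

def grounded(args, attacks):
    """Grounded extension via iteration of the characteristic function."""
    def defended(a, S):
        attackers = {b for (b, t) in attacks if t == a}
        return all(any((c, b) in attacks for c in S) for b in attackers)
    S = set()
    while True:
        T = {a for a in args if defended(a, S)}
        if T == S:
            return frozenset(S)
        S = T

base_attacks = {("B", "A")}
# Constellation: the user is aware of B with probability 0.3 (illustrative).
subgraphs = [(frozenset({"A"}), 0.7), (frozenset({"A", "B"}), 0.3)]

ext_prob = defaultdict(float)
for sub_args, p in subgraphs:
    sub_attacks = {(x, y) for (x, y) in base_attacks
                   if x in sub_args and y in sub_args}
    ext_prob[grounded(sub_args, sub_attacks)] += p
```

So the probability that {A} is the grounded extension is 0.7, and that {B} is the grounded extension is 0.3, exactly the sum-over-subgraphs definition given above.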
In the epistemic approach, the topology of the argument graph is fixed, but there is uncertainty about whether an argument is believed [74,82,84,135]. This is formalized by a probability distribution over the subsets of the set of arguments in the graph. In addition, postulates have been proposed to capture intuitive constraints such as the rational postulate which states that if an attacker has a probability greater than 0.5 (i.e. it is believed), then any attackee has a belief less than or equal to 0.5 (i.e. it is not believed). The epistemic approach can give a finer-grained version of Dung’s approach, and it can be used to give a valuable alternative to Dung’s approach. For example, for a graph containing arguments A and B where B attacks A, it might be the case that a user believes A and not B, and if so the epistemic extension (the set of believed arguments) would be {A}.
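Both the rational postulate and the epistemic extension can be checked mechanically. In this sketch the graph has B attacking A, and the belief values are illustrative:

```python
# Fixed graph: B attacks A. Belief values are illustrative: the user
# believes A (0.8 > 0.5) and disbelieves B (0.2 <= 0.5).
attacks = {("B", "A")}
belief = {"A": 0.8, "B": 0.2}

def is_rational(belief, attacks):
    """No believed attacker (> 0.5) may have a believed attackee (> 0.5)."""
    return all(not (belief[b] > 0.5 and belief[a] > 0.5) for (b, a) in attacks)

def epistemic_extension(belief):
    """The set of believed arguments."""
    return {a for a, p in belief.items() if p > 0.5}

rational = is_rational(belief, attacks)
ext = epistemic_extension(belief)
```

Since B is not believed, the assignment satisfies the postulate even though it disagrees with the grounded extension of the graph, which is the sense in which the epistemic approach offers an alternative to Dung's semantics.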
The epistemic approach has been extended with a probability distribution over subsets of the set of attacks [116]. This can be used to represent an agent’s belief in each attack. This is potentially useful for handling enthymemes. Since most arguments are presented in natural language, different agents may interpret them differently, and hence some agents may believe that an attack holds between a pair of arguments whereas other agents may not.
Studies with participants
We have undertaken studies with participants to evaluate how they deal with arguments arising in a dialogue [115]. We asked each participant for their belief in the arguments at each stage of the dialogue, and whether they saw a negative (i.e. attacking) or positive (i.e. supporting) relationship between the latest argument added in the dialogue and the previous arguments. For this study, we were able to make a number of observations including the following.
We have also developed methods for acquiring crowd-sourced opinions on arguments, and shown how they can be used for predicting opinions on arguments [79]. We evaluated our approach by crowd-sourcing opinions from 50 participants about 30 arguments. This work shows how it is viable to acquire data from a number of contributors to construct classifiers, and that these classifiers can then be deployed to substantially decrease the number of questions that need to be asked of any particular user in a persuasion dialogue.
Strategic argumentation
Given the potential for probabilistic argumentation to capture key aspects of uncertainty in the user model, I indicate below how strategic argumentation can be developed to harness the user model.
Key possible dimensions for modelling uncertainty are summarized in Table 3. These developments offer a framework with a well-understood theoretical methodology, and implementations that are computationally viable for strategic argumentation.
Possible dimensions of uncertainty in models of persuadee
The key research issues for our project are summarized in Fig. 4, and described below. The aim is to address these issues in order to provide an integrated theory for applications in behaviour change.

Key aspects of our framework for computational persuasion. A solid arrow indicates a necessary flow of information whereas a dotted arrow indicates an optional flow of information.
We have put uncertainty in arguments, in particular in belief in arguments, at the core of our framework for computational persuasion, as we believe this is a minimum necessary for persuasive behaviour. However, we believe that other dimensions are highly desirable for a more comprehensive framework for computational persuasion. In particular, some of the dimensions considered in Section 3 are potentially valuable including rationality of arguments (in particular quality of arguments and of argumentation), persuasion techniques, argumentation style, framing of arguments, and emotion of arguments.
Computational persuasion, being based on computational models of argument, is a promising approach to technology for behaviour change applications. Advantages of dialogical persuasion over unidirectional persuasion for behaviour change include:
Developing an automated persuasion system (APS) involves research challenges including: undertaking the dialogue without using natural language processing; having an appropriate model of the domain in order to identify arguments; having an appropriate dynamic model of the persuadee; and having a strategy that increases the probability of persuading the persuadee. Furthermore, with even a modest set of arguments, the set of possible dialogues can be enormous, and so the protocols, persuadee models, and strategies need to be computationally viable.
Persuasion can be described as a process for overcoming barriers to behaviour change. There are many kinds of barriers. A simple dichotomy is that of informational barriers and psychological barriers.
In the short-term, we may envisage that the dialogues between an APS and a user involve limited kinds of interaction. For example, the APS manages the dialogue by asking queries of the persuadee, where the allowed answers are given by a menu or are of restricted types (e.g. age), and by positing arguments, and the persuadee may present arguments that are selected from a menu presented by the APS. Obviously richer natural language interaction would be desirable, but it is not feasible in the short-term. Even with such restricted asymmetric dialogues, effective persuasion may be possible, and we need to investigate this conjecture empirically with participants. In the longer-term, there are likely to be exciting opportunities for combining computational models of argument with computational linguistics for much more involved and convincing dialogical argumentation for persuasion.
Acknowledgements
I am grateful to the anonymous reviewers for helpful feedback for improving the paper. This research is part-funded by EPSRC grant EP/N008294/1 Framework for Computational Persuasion.
