Abstract
In this paper, we provide an overview of the benchmarks that have been recently employed in Abstract Argumentation. We first describe the benchmark suite from previous editions of the International Competition on Computational Models of Argumentation (ICCMA), and then briefly describe the benchmarks for non-Dung frameworks. This article is a contribution to the new Argument & Computation Community Resources (ACCR) corner.
Introduction
Computational argumentation has been a topic of interest for decades, especially since Dung’s seminal paper on Abstract Argumentation Frameworks (AFs) [18]. It has applications in various domains such as decision making [4], automated negotiation [17], reasoning with inconsistent knowledge [8], legal reasoning [7], and multi-agent systems [26]. However, the main efforts towards the development of efficient computational approaches are quite recent. These efforts have been channeled by the organization of the first International Competition on Computational Models of Argumentation (ICCMA).
The regular organization of a solver competition is important for theoretical and practical reasons, as proved in a number of neighboring fields, such as SAT [5], ASP [11,22], and SMT.
In this paper, we focus our attention on benchmarks, presenting an overview of the resources that have been recently employed in Abstract Argumentation. In particular, we first describe the benchmark suite from previous editions of ICCMA, followed by a presentation of the benchmarks that have been proposed for variants and extensions of the original AFs proposed by Dung.
ICCMA benchmarks
In this section, we outline the benchmark domains employed in the first two ICCMA competitions.
ICCMA’15 benchmarks
ICCMA’15 introduced three new AF generators:
- The first generator has been developed with the goal of producing AFs with large grounded extensions. It takes as input the number of arguments n and a probability parameter.
- The second generator has been developed with the goal of producing AFs whose graph features many Strongly Connected Components. It first partitions the arguments (the number of which is given by parameter n) into a given number of strongly connected components.
- The third generator has been developed with the goal of producing AFs with a large number of stable extensions. It first identifies a set of arguments to form an acyclic subgraph of the AF and, consequently, to contain the grounded extension. Among the remaining arguments, subsets are iteratively singled out to form stable extensions by attacking all other arguments. Besides the parameter n for the number of arguments, the tool takes a number of parameters which determine minimum and maximum values for the number and size of stable extensions and for the size of the grounded extension.
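The SCC-oriented idea above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the actual ICCMA’15 generator: the round-robin partitioning scheme, the parameter names (k, p_inner, p_forward), and the probabilities are our own assumptions.

```python
import random

def scc_flavoured_af(n, k, p_inner=0.3, p_forward=0.1, seed=0):
    """Hypothetical sketch of an SCC-oriented AF generator.

    Partitions n arguments into k groups; attacks inside a group are
    drawn with probability p_inner (plus a cycle so each group is
    strongly connected), while attacks between groups only go from
    earlier to later groups, so the groups remain separate SCCs.
    """
    rng = random.Random(seed)
    args = list(range(n))
    groups = [args[i::k] for i in range(k)]  # round-robin partition
    attacks = set()
    for g in groups:
        # a cycle guarantees the group forms one strongly connected component
        for a, b in zip(g, g[1:] + g[:1]):
            if a != b:
                attacks.add((a, b))
        for a in g:
            for b in g:
                if a != b and rng.random() < p_inner:
                    attacks.add((a, b))
    # forward-only attacks between groups keep them in distinct SCCs
    for i, g in enumerate(groups):
        for h in groups[i + 1:]:
            for a in g:
                for b in h:
                    if rng.random() < p_forward:
                        attacks.add((a, b))
    return set(args), attacks

arguments, attacks = scc_flavoured_af(n=12, k=3)
```

Since inter-group attacks never point backwards, the generated AF has at least k strongly connected components, mirroring the intent of the competition generator.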
ICCMA’17 benchmarks
We briefly describe here the domains that were evaluated at ICCMA’17, on top of those of ICCMA’15:
- ABA benchmarks translated to AFs. Assumption-based argumentation (ABA) is one of the prevalent forms of structured argumentation in which, differently from AFs, the internal structure of arguments is made explicit through derivations from a more basic structure [23,29].
- Crafted benchmarks for (strong) admissibility, featuring a fixed structure composed of 4 sets of arguments and predetermined sets of attacks. The “starting” and “terminal” sets each contain a single argument, one having only outgoing attacks and the other only incoming attacks. Given a parameter n of the generator, the two “intermediate” sets have cardinality n, and their attack relations are constructed so as to admit only one complete labelling [12].
- A set of random AF generators for three different graph classes, with a configurable number of arguments [14]. The three classes correspond to: (i) Erdős–Rényi [19], which selects attacks randomly; (ii) Watts–Strogatz [30], which aims at a small-world network topology that is neither completely random nor regular; and (iii) Barabási–Albert [6] for large networks.
- Planning2AF: AFs obtained by translating the well-known Blocksworld and Ferry planning domains. Each planning instance is first encoded as a propositional formula, using the method in [27]; then each clause is transformed into a material implication; finally, the transformation in [31] is applied to each material implication.
- A crafted benchmark for semi-stable semantics, with a fixed structure composed of 3 sets of arguments of equal cardinality and predetermined sets of attacks, defined in a way that fixes the number of extensions of each instance.
- Traffic: graphs obtained from real-world traffic network data available online.
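Among the random graph classes, the Erdős–Rényi construction is the simplest: each ordered pair of distinct arguments becomes an attack with a fixed probability. The following is a minimal sketch of that idea; the actual ICCMA’17 generator exposes different parameters and options.

```python
import random

def erdos_renyi_af(n, p, seed=None):
    """Sketch of an Erdős–Rényi-style AF generator: each ordered pair
    of distinct arguments (a, b) becomes an attack (a, b) with
    probability p, independently of all other pairs."""
    rng = random.Random(seed)
    args = list(range(n))
    attacks = {(a, b) for a in args for b in args
               if a != b and rng.random() < p}
    return set(args), attacks

# with p = 0.2 roughly a fifth of the n*(n-1) possible attacks appear
arguments, attacks = erdos_renyi_af(n=10, p=0.2, seed=42)
```

The expected number of attacks is p · n · (n − 1), so the density parameter p directly controls how hard the resulting instances tend to be for extension enumeration.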
Other benchmarks
In this section, we briefly describe other argumentation benchmarks that have been proposed, and for some of them we point out some directions that could lead to interesting new benchmarks.
ADF. Abstract Dialectical Frameworks (ADFs) are one of the main extensions of AFs, where every argument is equipped with an acceptance condition. The benchmarks employed in some of the recent papers on ADFs (e.g., [10,25]) are generated by starting from some of the most “practical” benchmarks of ICCMA’17, e.g., Traffic and Planning2AF, and randomly generating acceptance conditions. Some of these benchmarks can be found at the YADF system page.
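The "random acceptance conditions" step can be illustrated as follows. This is our own toy sketch, not the generator used in [10,25]: the choice of formula shape (a random conjunction or disjunction of possibly negated parents) and the 0.5 negation probability are assumptions for illustration.

```python
import random

def random_acceptance_conditions(args, attacks, seed=0):
    """Toy sketch of lifting an AF to an ADF-like structure: each
    argument's parents are its attackers in the AF, and its acceptance
    condition is a random propositional formula over those parents
    (rendered here as a string). Parentless arguments get a constant."""
    rng = random.Random(seed)
    # parents of a = arguments attacking a in the underlying AF
    parents = {a: sorted(b for (b, c) in attacks if c == a) for a in args}
    conditions = {}
    for a in args:
        ps = parents[a]
        if not ps:
            conditions[a] = rng.choice(["true", "false"])
            continue
        # each parent appears positively or negated with probability 0.5
        lits = [str(p) if rng.random() < 0.5 else f"not {p}" for p in ps]
        op = rng.choice([" and ", " or "])
        conditions[a] = op.join(f"({l})" for l in lits)
    return conditions

conds = random_acceptance_conditions({1, 2, 3}, {(1, 2), (2, 3), (3, 1)})
```

Replacing each argument's implicit "attacked by nothing accepted" condition with an arbitrary formula over its parents is exactly what makes ADFs strictly more expressive than AFs.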
Dynamic argumentation. Argumentation is highly dynamic by nature. Indeed, debates are situations where agents add new arguments and attacks at each step of the process. However, computational approaches specifically designed for such dynamic situations have only been proposed recently. For different semantics and reasoning problems, the approach to generating benchmarks is the same: AFs from ICCMA’15 or ICCMA’17 are used, and updates are randomly generated (see, e.g., [1]). A similar approach is used to generate dynamic bipolar AFs [2] and dynamic extended AFs [3]. This generation approach makes it easy to benefit from the existing ICCMA benchmarks. However, like any randomly generated benchmarks, they lack a connection with real-world instances. A possible direction for obtaining more meaningful dynamic AFs is the study of real-life debates (e.g., debates coming from a democratic process, from negotiation situations, …).
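The update-generation scheme described above can be sketched as follows. This is an assumed, minimal reading of "randomly generated updates" (each update adds a missing attack or removes an existing one, with equal probability); the generators used in [1–3] may draw updates differently and may also add or remove arguments.

```python
import random

def random_updates(args, attacks, k, seed=0):
    """Sketch of dynamic-AF benchmark generation: starting from an
    existing AF (e.g. an ICCMA instance), draw k random updates, each
    either removing an existing attack ('-') or adding a missing
    attack ('+'). Assumes the AF is not a complete graph."""
    rng = random.Random(seed)
    arglist = sorted(args)
    current = set(attacks)
    updates = []
    for _ in range(k):
        if current and rng.random() < 0.5:
            att = rng.choice(sorted(current))
            current.remove(att)
            updates.append(("-", att))
        else:
            a, b = rng.choice(arglist), rng.choice(arglist)
            while (a, b) in current:  # resample until the attack is new
                a, b = rng.choice(arglist), rng.choice(arglist)
            current.add((a, b))
            updates.append(("+", (a, b)))
    return updates

updates = random_updates({0, 1, 2}, {(0, 1)}, k=4)
```

A solver for dynamic AFs is then benchmarked by replaying the update sequence and re-answering the reasoning task after each step, which is where incremental approaches gain over recomputation from scratch.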
Structured argumentation. Obtaining an argumentation framework from a knowledge base is not a trivial problem. Indeed, even for relatively small knowledge bases, the number of arguments and attacks can be exponential in the size of the knowledge base. We are aware of some efforts to efficiently generate argumentation frameworks from a logical knowledge base. For instance, [34] provides a method for generating AFs from existential rules knowledge bases; a data set and a generator are available online. A similar contribution for AFs based on Datalog± has been proposed in [32], while [33] provides such a generator for argumentation hypergraphs (i.e., with collective attacks) based on Datalog±.
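The exponential blow-up is easy to see even in a drastically simplified setting. The toy construction below (entirely ours, far simpler than the existential-rules setting of [34]) takes a knowledge base of plain literals, turns every consistent subset paired with one of its literals into a (support, claim) argument, and adds a rebuttal attack between arguments with contradictory claims.

```python
from itertools import combinations

def arguments_from_kb(facts):
    """Toy illustration of argument generation from a knowledge base:
    facts are literal strings, with '-' as negation prefix. Every
    consistent subset contributes one argument per literal it contains
    (its claim); arguments with contradictory claims attack each other
    (rebuttal). Argument count grows exponentially with the KB size."""
    def consistent(s):
        return not any("-" + l in s for l in s)
    args = []
    for r in range(1, len(facts) + 1):
        for sub in combinations(sorted(facts), r):
            if consistent(set(sub)):
                for claim in sub:
                    args.append((frozenset(sub), claim))
    attacks = {(i, j)
               for i, a in enumerate(args) for j, b in enumerate(args)
               if a[1] == "-" + b[1] or b[1] == "-" + a[1]}
    return args, attacks

args, attacks = arguments_from_kb({"p", "q", "r"})
```

With just three mutually consistent facts this already yields twelve arguments; each additional fact roughly doubles the number of consistent subsets, which is why dedicated generators such as those of [32–34] matter.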
Assumption-based argumentation. Assumption-based Argumentation (ABA) [16] is a well-known approach to structured argumentation, in which arguments are represented compactly in terms of graph-based derivations, from a given rule-based deductive system over sentences, starting from assumptions; acceptability queries are then checked against these derivations. Randomly generated ABA frameworks and queries used in a number of papers (e.g., [16,24]) can be found online.
Conclusions
In this article we have presented a first contribution to the newly established Argument & Computation Community Resources (ACCR) corner, in the form of an assessment of benchmarks for Abstract Argumentation. Subsequent contributions to the corner are expected to focus on one particular resource (benchmark, software tool, etc.), cf. the editorial introducing the corner in this issue, possibly among those we have overviewed here.
Acknowledgements
We would like to thank the Argument & Computation Editors-in-Chief, Pietro Baroni and Bart Verheij, for giving us the possibility to write this opening contribution to the new ACCR corner. We also would like to thank Martin Caminada for useful comments on a draft of this contribution.
