Abstract
The aim of this paper is to describe the basic AI techniques in “linguistic knowledge processing”, a field that attempts to get machines to understand natural languages. In particular, we will focus on how computing techniques can model the communication process. We will therefore be interested in what the various levels of linguistic knowledge (apart from “phonetic” competence, which we choose to ignore) contribute to the understanding process - and how, in turn, these types of knowledge (syntactic, semantic, and pragmatic) can be represented in formal computer applications that model human understanding.
After some preliminary remarks about the theoretical and practical importance of this field, the paper first introduces a sample of the theories used to represent linguistic knowledge (transformational, case, systemic, and unification grammars). This is followed by a presentation of semantic representations (various logics and semantic networks). A section on the pragmatic aspects of communication (often called “discourse analysis”) completes the theoretical presentation.
The second part of the paper - “doing it on the computer” - begins with parsing systems, from morphological analysis via transition networks and lexicon driven analysers, to deterministic parsers.
Each system, theory, or model has its own limitations. This poses great problems when it must be integrated with the other modules of the understanding process into a unified procedural whole.
Therefore, the final part of the paper addresses architectural issues. In particular, we show why we think that Distributed Artificial Intelligence and reflective systems offer the best framework for handling these problems. Examples taken from our own system (CARAMEL - acronym for “Compréhension Automatique de Récits, Apprentissage et Modélisation des Échanges Langagiers”, i.e. “Automatic Understanding of Narratives, Learning, and Modelling of Language Exchanges”) will illustrate this last point.
