Abstract
Most present-day information retrieval systems use the presence or absence of keywords to determine whether a document is relevant to a user's query. Although some systems perform sophisticated statistical weighting and word-stem extraction, or exploit a hierarchical controlled vocabulary, all suffer from the same basic limitation: they cannot represent relational information among primitive concepts. Research in artificial intelligence and natural-language processing has produced richer representations of texts, along with techniques for reasoning about those representations. At the heart of these developments is the use of frames to provide a relational semantic representation of documents and user queries. This paper describes these frame-based knowledge representation methods as they apply to information retrieval, including research in user interfaces and automatic document classification, most notably the FERRET project at CMU, which classifies texts using a text-skimming parser.
