Abstract
Although artificial intelligence (AI) methods are widely expected to bring radical improvements in the way that information is provided and used, there have been few experimental studies with realistic test collections to determine the applicability of these and other advanced information storage and retrieval techniques. Furthermore, most ‘intelligent’ systems that have been built cannot easily be adapted to support controlled experimentation involving document analysis, representation, and retrieval for heterogeneous groups of users.
Two complementary efforts at Virginia Tech have been undertaken so that advanced retrieval methods can be carefully evaluated. The CODER (COmposite Document Expert/extended/effective Retrieval) system has been under development since 1985 to serve as a testbed for the use of AI techniques in information retrieval. Since few rules are known regarding how different users and user needs can best be served by such an advanced system, an extended version of the SMART system was adapted to carry out the REVTOLC (Retrieval Experiment—Virginia Tech OnLine Catalog) study. For this pilot investigation, fifty-three participants were divided into four groups, each using one of four approaches: Boolean, extended Boolean, vector, or vector with probabilistic feedback. A preliminary analysis of their experiences in searching a collection of over 300,000 entries from the Virginia Tech online catalog is partially complete, and will influence the user interface and user modeling portions of CODER. Further development and experimental studies with REVTOLC and CODER are planned.
