Abstract
Human game players rely heavily on the experience gained by playing over the games of masters. A player may recall a previous game either to obtain the best move (if the identical position has been seen before) or to suggest a good move (if the position is similar to others seen). Game-playing programs, however, operate in isolation, relying on a combination of search and programmed knowledge to discover the best move, even in positions well known to humans. At best, programs have only a limited amount of information about previous games. This paper discusses enhancing a chess-playing program to discover and extract implicit knowledge from previously played grandmaster games, and to use that knowledge to improve the program’s performance. During a game, a database of positions is queried for positions identical or similar to the one on the board. Similarity is measured by chunking the position and using the resulting patterns as indices into the database. Relevant information is then passed back to the chess program and used in its decision-making process. As the number of games in the database grows, the “experience” available to the program improves the likelihood that relevant, useful information can be found for a given position.
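The lookup scheme the abstract describes can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the chunking function, the database structure, and the vote-counting similarity measure are all assumptions introduced here for clarity. A real chunker would use chess knowledge (pawn chains, king-safety structures, and so on) rather than the trivial pairing shown.

```python
from collections import defaultdict

def chunk(position):
    """Split a position (here, a set of (piece, square) pairs) into
    small pattern keys. Trivial illustrative chunking: each pair of
    adjacent pieces in sorted order forms one pattern."""
    pieces = sorted(position)
    return [tuple(pieces[i:i + 2]) for i in range(len(pieces) - 1)]

class GameDatabase:
    """Hypothetical database of positions from previous games,
    indexed by the patterns (chunks) each position contains."""

    def __init__(self):
        self.index = defaultdict(list)  # pattern -> [(position, move)]

    def add(self, position, best_move):
        for pattern in chunk(position):
            self.index[pattern].append((frozenset(position), best_move))

    def query(self, position):
        """Return stored (position, move) entries ranked by how many
        chunks they share with the query -- a crude similarity measure."""
        votes = defaultdict(int)
        for pattern in chunk(position):
            for stored, move in self.index[pattern]:
                votes[(stored, move)] += 1
        return sorted(votes.items(), key=lambda kv: -kv[1])

db = GameDatabase()
opening = {("K", "e1"), ("P", "e4"), ("N", "f3")}
db.add(opening, "Bc4")

# A similar (not identical) position still matches via shared chunks.
similar = {("K", "e1"), ("P", "e4"), ("N", "f3"), ("P", "d3")}
matches = db.query(similar)
```

An identical position would match on every chunk and rank highest; merely similar positions match on a subset of chunks, which is how the database can return useful information even when the exact position has never been seen before.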
