Abstract
This article addresses the problem of determining the best of a number of proposed selective-search heuristics in computer chess. The area is shown to be beset by problems, chief among them the vast amount of computation required to bring standard deviations down to a reasonable level. We consider this level to have been achieved when, say, an estimate of playing strength in USCF rating points is, with 95% confidence, within 20 rating points of the true value. A total of 15 techniques have been investigated, leading to the selection of a preferred technique on account of its accessory advantages. The results suggest that what human Masters and Grandmasters do is not, or is no longer, a good yardstick for the evaluation of a computer’s move.
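The scale of the computation can be illustrated with a back-of-the-envelope estimate (ours, not the article's): under the standard logistic Elo model, and assuming two evenly matched programs, the number of games needed before the 95% confidence interval on the rating difference shrinks to ±20 points runs into the thousands. The sketch below makes that calculation explicit; the function name, the draw-rate parameter, and the evenly-matched assumption are ours.

```python
import math

def games_for_ci(half_width=20.0, z=1.96, draw_rate=0.0):
    """Estimate the games between two evenly matched programs needed so
    that a confidence interval on the rating difference has the given
    half-width (in rating points).

    Assumptions (not from the article): the logistic Elo model,
    evenly matched opponents, and independent games with the stated
    draw rate.
    """
    # Per-game score variance: 0.25 with no draws; draws shrink it,
    # since P(win) = P(loss) = (1 - d)/2 gives Var = 0.25 * (1 - d).
    score_var = 0.25 * (1.0 - draw_rate)
    # Sensitivity of expected score to rating difference D at D = 0:
    # E(D) = 1 / (1 + 10**(-D/400)), so dE/dD = ln(10)/400 * E(1-E)
    # = ln(10)/1600 at E = 0.5.
    slope = math.log(10) / 1600.0
    # Required standard error on the rating estimate, then on the
    # mean score; SE(mean score) = sqrt(score_var / n).
    se_rating = half_width / z
    se_score = se_rating * slope
    return math.ceil(score_var / se_score ** 2)
```

With no draws, `games_for_ci()` comes to roughly 1,160 games per heuristic; a 50% draw rate halves the score variance and hence the game count. Multiplied over 15 candidate techniques, this illustrates why the standard deviations are so expensive to drive down.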
