Chess, once famously referred to as the Drosophila of artificial
intelligence (AI) research, has long served as a significant domain for developing
AI agents capable of achieving superhuman performance in tasks
previously dominated by humans. However, the emphasis on ever-increasing
playing strength has come at the cost of neglecting other fundamental aspects of
intelligent agents, such as the ability to explain the rationale behind
their decisions in human-understandable terms. The need for such capabilities
may be more pressing now than ever, partly because such agents may be
capable of learning novel concepts of interest to humans, for example, as
recently demonstrated in the game of chess. In this paper, we survey the state
of explainable AI in chess-playing agents, arguing that chess may indeed hold
promise as a suitable domain for explainable AI.