Abstract
Goal Reasoning (GR) agents operating in partially observable environments must hypothesize about hidden features of the current state in order to select an appropriate goal and create a plan to achieve it. The Online Iterative Explanation (OIE) problem is a variant of explanatory diagnosis tailored to the needs of these agents: it requires maintaining a complete, plausible hypothetical execution history that is consistent with all previous observations and that is updated iteratively with each new agent observation. Previous work has proposed and demonstrated a variety of OIE approaches for goal reasoning agents. Our contribution in this work is instead a formal investigation of the OIE solution space, the set of all consistent explanations at a given point during execution. This space spans a range of uncertainty, both consequential and inconsequential, about the system's ground-truth execution history and state. We propose formal tools for exploring this space, recognizing its features, and understanding its dynamics over the course of execution. We approach this complex problem through two formalisms: first a rigorous formalization in the situation calculus, followed by an application of a more intuitive state-set framework. This analysis will inform efforts to improve efficiency and reduce risk in future OIE algorithms.