Abstract
Difficulties in developing, maintaining, and validating knowledge bases have motivated increasing interest in adaptive systems. However, the symbolic paradigm has proven resistant to learning in environments that are dynamic or noisy. This report documents a simulation-based research program that demonstrates the utility of a modified connectionist memory unit as the basis for a robust reinforcement model that learns in such environments. This learning model generates effective rule-based behavior from a qualitative, goal-based architecture that further supports the development of secondary goals, or mental models, derived from experiences in these complex domains.
