Abstract
The modeling and simulation of human performance forces the analyst to confront a range of well-known but difficult challenges. One challenge the analyst does not seem to face is a shortage of human performance modeling tools. Yet despite the proliferation of quantitative modeling tools, there is no uniform framework for expressing the content and structure of a human performance model, making it difficult to understand what is at stake in the implementation of a given model and all but impossible to compare and contrast different models. The inability to communicate model structure and content is not just a practical shortcoming: it is a major impediment to assessing the validity, plausibility, and extensibility of human performance models. The latter aspect is particularly important, as it prevents the incremental construction of large human performance models following standard software engineering practices. The goal of this panel discussion is to review past and ongoing efforts to develop general languages that specify cognitive models at a functional level of description. We do not expect a standard to emerge from this discussion; rather, we hope to canvass both the theoretical and practical issues that confront any attempt to develop a uniform language for describing different modeling frameworks.