Abstract
The advent of Distributed Interactive Simulation (DIS) brings with it new issues and resulting challenges for evaluating the training experience from the user's perspective, particularly during formative development. At least some of the traditional principles and rules that have held for individual and collective training may not apply. This paper explores some of the conditions that pose special challenges to the evaluation of DIS training. Underlying these conditions is the mushrooming of complexity as training moves from single-site exercises to multi-site, multi-Service DIS. One issue is discrepant perceptions of training value across sites and Services. Another is the need to tailor measurement instruments to accommodate site-specific characteristics. The paper proposes some ways to address these issues, based on the results of an assessment of user reactions to the Multi-Service Distributed Training Testbed (MDT2) program of research.