Abstract
This panel focuses on key issues that confront investigators who use driving simulators and must (a) design experiments, (b) interpret data, and (c) write reports in human factors and medical research-related applications of these simulators. Presentations from an experienced panel of researchers from the U.S. and Canada aim to raise awareness among the audience (and panel members) and to discuss solutions to a number of thorny research issues confronting driving simulator users. This effort draws upon but is distinct from previous work, which has customarily emphasized engineering concerns facing simulator developers.
The panel began with a “Collaboratory” at University of Iowa in March 2001. Discussions continued at the Driving Assessment conferences in Snowmass (2001) and in Park City (2003), the latter under the aegis of the Simulation Users Group (SUG). The SUG met again at the Transportation Research Board (TRB) in Washington, D.C. in January (2004) to address an array of topics including physical fidelity, simulation adaptation syndrome, standards for reporting methods, and variable selection. The session room was filled to capacity and the topics were deeper than could be addressed in the available time. Clearly, the discussions needed to continue beyond the TRB.
The current HFES panel session addresses a limited set of topics in depth and allows the audience to explore the issues at length during the open discussion period. The abstracts of the panelists address topics that have received little attention in the literature. These topics include: drop-out rates related to simulator discomfort, participant characteristics, measurement precision and missing data, and the need for simulation standards, e.g., for reporting methodology and as a prelude to clinical trials that test the efficacy of "treatments" such as in-vehicle driver alerting devices for at-risk drivers.
