Abstract
Analog research of human or combined human-robotic missions is an established tool to explore the workflows, instruments, risks, and challenges of future planetary surface missions in a representative terrestrial environment. Analog missions that emulate selected aspects of such expeditions have risen in number, expanded their range of disciplines, and significantly increased their operational and programmatic impact on mission planning. We propose a method to compare analog missions across agencies, disciplines, and levels of complexity and fidelity in order to improve scientific output and mission safety and to maximize effectiveness and efficiency. This algorithm measures mission performance, provides a tool for objective postmission evaluation, and catalyzes programmatic progress. It does not evaluate individual sites or instruments but operates at the mission level. By applying the algorithm to several missions, we compare their performance for benchmarking purposes. Methodologically, a combination of objective data sets and questionnaires is used to evaluate three areas: two sections of closed, quantitative questions and a third section dedicated to the level of representativeness of the test site. Using a weighted metric, the complexity and fidelity of a mission are compared with reference missions, which reveals strengths and weaknesses in mission planning.
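The weighted-metric comparison described above can be sketched as follows. This is a minimal illustration, not the authors' published algorithm: the section names, weights, and scores below are hypothetical assumptions chosen only to show how per-section scores could be combined and benchmarked against a reference mission.

```python
# Hypothetical sketch of a weighted-metric mission evaluation.
# Section names, weights, and scores are illustrative assumptions,
# not values from the article.

def mission_score(section_scores, weights):
    """Combine per-section scores (each in 0..1) into one weighted score."""
    assert set(section_scores) == set(weights), "sections must match weights"
    total_weight = sum(weights.values())
    return sum(section_scores[s] * weights[s] for s in weights) / total_weight

# Three evaluation areas: two closed/quantitative question sections
# plus the representativeness of the test site.
weights = {"operations": 0.4, "science": 0.4, "site_fidelity": 0.2}

candidate = {"operations": 0.8, "science": 0.6, "site_fidelity": 0.9}
reference = {"operations": 0.7, "science": 0.7, "site_fidelity": 0.7}

# A positive delta indicates the candidate mission outperforms the
# reference mission under this particular weighting.
delta = mission_score(candidate, weights) - mission_score(reference, weights)
```

Varying the weights (e.g., emphasizing site fidelity over operations) would let evaluators probe how sensitive a mission's benchmark ranking is to the chosen priorities.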
