Abstract
We present lessons learned from leveraging existing simulations to conduct human-subjects cybersecurity experiments and to develop training for cybersecurity professionals. First, we provide criteria for evaluating and categorizing existing simulation tools into four categories: competitions, testbeds, tabletop exercises, and simulations used in published research. We then offer eight criteria for evaluating a simulation's suitability for use in experiments, and we apply these criteria to one representative product in each category. This paper serves as a resource for practitioners who use simulation as a method of training or evaluation, and as a starting point for researchers seeking to efficiently find and leverage simulations for cybersecurity research.