Abstract
Recently, cyber reasoning systems demonstrated near-human performance when they autonomously identified, proved, and mitigated vulnerabilities in software during a competitive event. New research seeks to augment human vulnerability research teams with cyber reasoning system teammates in collaborative work environments. However, the literature lacks a concrete understanding of vulnerability research workflows and practices, limiting designers’, engineers’, and researchers’ ability to successfully integrate these artificially intelligent entities into teams. This paper contributes a general workflow model of the vulnerability research process and identifies specific collaboration challenges and opportunities anchored in this model. These contributions were derived from a qualitative field study of the work habits, behaviors, and practices of human vulnerability research teams. They will inform future work in the vulnerability research domain by establishing an empirically driven workflow model that can be adapted to the specific organizational and functional constraints placed on individuals and teams.