Abstract
Many cybersecurity algorithms assume adversaries make perfectly rational decisions. However, human decisions are only boundedly rational and, according to Instance-Based Learning Theory, are based on the similarity of present contextual features to past experiences. More must be understood about which available features are represented in the decision and how outcomes are evaluated. To these ends, we examined human behavior in a cybersecurity game designed to simulate an insider attack scenario. In a human-subjects experiment, we manipulated the information made available to participants (decision probabilities concealed or revealed) and the framing of outcomes: participants who received an endowment experienced negative outcomes as losses, whereas participants without an endowment did not. The results reveal differences in behavior when some information is concealed, but the framing of outcomes affects behavior only when all information is available. A cognitive model was developed to help understand the cognitive representation of these features and the implications of the behavioral results.