Abstract
This volume highlights the central role of the human factor in cybercrime and the need to develop a more interdisciplinary research agenda to understand better the constant evolution of online harms and craft more effective responses. The term “human factor” is understood very broadly and encompasses individual, institutional, and societal dimensions. It covers individual human behaviors and the social structures that enable collective action by groups and communities of various sizes, as well as the different types of institutional assemblages that shape societal responses. This volume is organized around three general themes whose complementary perspectives allow us to map the complex interplay between offenders, machines, and victims, moving beyond static typologies to offer a more dynamic analysis of the cybercrime ecology and its underlying behaviors. The contributions use quantitative and qualitative methodologies and bring together researchers from the United States, the United Kingdom, the Netherlands, Denmark, Australia, and Canada.
As technology access and use evolve across demographic groups and regions of the world, the threat posed by its misuse will continue to grow. Research examining the phenomenon of cybercrime, or the misuse of technology to offend, is vital to increasing our understanding of the nature of offender behavior, correlates of victimization, and the utility of policies to deter crime. The study and prevention of cybercrime cannot, however, be the sole preserve of computer scientists. Our understanding of the complexity of cybercrime and its negative impact on users, businesses, and governments makes it clear that online harms cannot be seen primarily as a technical problem fixable by “silver bullet” technologies such as encryption, biometrics, or artificial intelligence.
This monodisciplinary stance has failed and needs to be replaced by an approach that acknowledges the importance of the human factor inherent in all aspects of technology use. The pervasiveness and persistence of cyber-risks, as well as their unpredictability, are, for example, attributable to their “manufactured” nature (Giddens, 1999). They are initiated and executed by offenders who continuously innovate in their exploitation of the technical and behavioral vulnerabilities common to the digital ecosystem (Holt & Lampke, 2010; Jordan & Taylor, 1998; Taylor, 1999). In the highly competitive business environment that characterizes the online economy, a focus on innovation and growth leads companies to rush products and services to market, security being merely an afterthought that can be fixed later (Castells, 1996; Kranenbarg et al., 2018).
From an end-user perspective, a growing body of knowledge has identified psychological susceptibility traits and mismatches between human processes and expectations set by technology that explain why online scams and phishing emails still claim so many victims (Alsharnouby et al., 2015; Fischer et al., 2013; Norris et al., 2019; Whitty, 2019), despite a proliferation of warnings and awareness campaigns.
At the policy level, governments limit the effect of their interventions by continuing to use industrial-era policy and regulatory tools to address digital-era risks and lag in exploring more promising polycentric governance options (Broll, 2016; Dupont, 2017; Shackelford, 2013). Emerging technologies such as quantum computing, 5G networks, cryptocurrencies, the Internet of Things, and artificial intelligence will only compound the challenges highlighted above and provide new criminal opportunities (Holt & Bossler, 2015; Kennedy et al., 2019).
To better understand this constant evolution of cybercrime and to craft more effective responses, interdisciplinary approaches are needed at the intersection of scientific fields such as criminology, computer science, psychology, political science, economics, and law, among others. We believe such a transdisciplinary effort should focus on the human actors involved as technology users and creators, and their interaction with devices. These interactions serve as a source of vulnerability for a diverse range of online harms (Leukfeldt & Holt, 2020) and as a formidable and underused asset that can prevent and mitigate their negative impact (Coventry et al., 2014) to increase cyber-resilience (Dupont, 2019).
The term “human factor” is understood very broadly and encompasses individual, institutional, and societal dimensions (e.g., Leukfeldt & Holt, 2020). It covers not only individual human behaviors but also the social structures that enable collective action by groups and communities of various sizes, as well as the different types of institutional assemblages that shape societal responses. To advance this research agenda, the Human Factor in Cybercrime Conference (https://www.hfc-conference.com/) was launched in 2018. It aims to bring together researchers from a broad range of disciplines to share their theoretical and methodological insights and develop innovative perspectives that could lead to more effective interventions.
This special issue originates from the Leiden conference, organized in 2019 by Vrije Universiteit Amsterdam and the Hague University of Applied Sciences, at which five of the seven articles included in this volume were first presented and discussed (Banerjee et al., in press; Burruss et al., in press; Cross & Layt, in press; Dupont & Lusthaus, in press; van der Bruggen & Blokland, in press). Two more articles were submitted independently but reflect the same concern for a deeper understanding of the role the human factor plays in cybercrime, with a particular focus on victims’ experiences and needs (Borwell et al., in press; Fissel et al., in press).
Content of the Special Issue
The special issue is organized around three general themes whose complementary perspectives allow us to map the complex interplay between offenders, machines, and victims, moving beyond static typologies to offer a more dynamic analysis of the cybercrime ecology and its underlying behaviors.
The first two contributions examine how cybercriminal communities operate and what strategies are used to sustain their activities over time in the face of a hostile environment. Van der Bruggen and Blokland (in press) use longitudinal data to profile the developmental trajectories of the members of a large child sexual exploitation material (CSEM) forum that was accessible on the dark web between 2010 and 2014. Applying a group-based trajectory modeling methodology to 420,000 posts made by more than 14,800 forum members, they identify six different groups: lurkers, browsers, CSEM interested, escalators, vested members, and managers. These trajectories reflect very different posting and communication patterns that could be used to implement more targeted interventions to prevent online offending from spiraling out of control and offer offenders the most suitable treatment.
Dupont and Lusthaus (in press) examine the dispute resolution system used by the members of Darkode, one of the world’s most exclusive cybercrime forums until its takedown in 2015. Despite the vetting process used to screen for the most reliable and capable members, the overall level of complaints remained high. They mostly involved the lower-ranked members of this criminal marketplace, explaining in part why the monetary harm suffered by complainants seems surprisingly small (median loss of US$300). What is even more surprising is that a clear outcome was reached in only 23.1% of disputes, suggesting cooperation remained very challenging for this group and raising more questions for the future study of cybercriminal governance.
The following two articles focus on a particular group of malicious hackers who specialize in website defacement, a form of cyberattack where humans target machines in a uniquely hybrid criminological configuration. Banerjee et al. (in press) use machine learning techniques to process large amounts of web defacement data (40,000 incidents) and identify various motivations and attack patterns. This methodology enables them to shed new light on the non-economic motives behind most of these attacks, a significant knowledge gap in the cybercrime literature. By applying a computer science toolset to a social science problem, they broaden our interdisciplinary horizon and identify several methodological challenges that stem from such approaches. Their findings could help build more resilient systems by matching security measures with the underlying motivations and technical capacity of attackers.
Burruss et al. (in press) adopt a finite mixture modeling approach to understand what differentiates prolific hackers from more occasional online offenders. They expand our range of malicious hacker classification tools by examining how attack frequency, social media presence, and attack content (such as political statements, music, pictures, and animations) can help predict the intensity of future attack patterns. Their findings indicate that at least half of the hackers in the sample (48%) are active on one of the leading social media platforms, suggesting open-source monitoring strategies could yield significant insights. Politically motivated attackers also appear more prolific.
The third group of articles addresses the experiences and needs of victims, providing us with a more detailed understanding of how cybercrime impacts their lives and what types of interventions would benefit them most. Borwell et al. (in press) explore the psychological and financial impact of various types of cybercrime, using a representative sample of more than 33,000 Dutch citizens. Reviewing the psychological and financial harms caused by cybercrimes aimed at devices (hacking), money (online fraud), and the person of the victim (threats and stalking), they find that person-centered incidents, incidents where the offender was an acquaintance, and incidents where financial losses were not compensated generate a higher negative impact on victims’ emotional well-being. By contrast, victims with a higher income showed a lower negative emotional impact. These findings indicate that victims’ support and resources should be tailored to the types of cybercrime experienced. Loss compensation programs by government agencies could also be considered.
Cross and Layt (in press) explore the interactions between romance fraudsters and their victims, focusing on how the latter investigate doubts about partners’ identities. Using complaints to an Australian fraud reporting portal, they find that nearly half of the victims ran internet searches, relying primarily on reverse image queries, to verify or refute their suspicions. Even if not all these searches were successful, they remain the most effective way to substantiate suspicions and should be emphasized in cybercrime prevention guidance. Fraud and scam websites effectively disseminate information about fake profiles but can cause unwanted harm to the victims of identity crime, whose pictures are stolen by scammers. The potential of artificial intelligence to generate unique images that can be used to lure victims is also identified as a looming trend.
Fissel et al. (in press) focus on intimate partner cyber abuse (IPCA). They develop and validate a new instrument that measures IPCA across 33 behaviors organized in five dimensions: cyber financial control, cyber sexual coercion, cyber control, cyber monitoring, and cyber direct aggression. A sample of 1,500 American adults currently engaged in an intimate relationship completed the questionnaire, revealing that 28% of them experienced at least one form of IPCA victimization one or more times within the prior 6 months. These findings expand existing research by including older adults and distinguishing behavior that occurs with and without the partner’s permission. They also suggest that the victims of IPCA frequently experience offline intimate partner violence.
The eclectic insights generated by these seven contributions and the diversity of their theoretical and methodological approaches signal that we are still at the very early stages of our journey to refine our understanding of how the human factor shapes cybercrime. Many unknowns remain and an abundance of data, both qualitative and quantitative, must be analyzed. What is clear though is that interdisciplinary innovation will be required to apply this knowledge in ways that enable the scalable prevention and mitigation of online harms. Only then will we be able to truly impact the problem of cybercrime in a holistic fashion.
Footnotes
Declaration of Conflicting Interests
The authors declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The authors disclosed receipt of the following financial support for the research, authorship, and/or publication of this article: This work was supported by Social Sciences and Humanities Research Council of Canada (950-231178).
