Abstract
Criminometrics treats crime as an econometric object. This paper argues that such an ambition cannot scale under current institutions. Crime categories are socially and legally contingent, and crime data are administrative artifacts shaped by reporting, recording, and enforcement. The result is unstable measurement and reflexive feedback that entangle predictive accuracy and fairness with policing practices themselves. Drawing on examples of hotspot prediction, local forecasting, and risk scoring, and reviewing statistical and causal responses to bias and endogeneity, the paper shows that accuracy gains are setting-specific and often optimize administration more than understanding. It concludes by urging methodological pluralism and ethical scrutiny.
Keywords
Introduction
Over the past century, econometrics has become a vital instrument in the social sciences, providing a statistical framework that allows economists to measure relationships, build models, and predict future outcomes. Based on probability theory and supported by structured datasets, econometrics has helped transform economics into a policy-focused discipline with strong predictive and explanatory abilities. From macroeconomic planning to financial regulation, it offers decision-makers models that, although imperfect, rely on relatively stable indicators such as gross domestic product, inflation, and unemployment rates. The strength of econometrics lies not just in its technical complexity but in its capacity to turn social and economic processes into measurable, and thus controllable, phenomena (Wooldridge, 2010).
To be clear, economic indicators are not “natural facts”: they are also institutionally produced, contested, and revised (e.g., definitional choices about unemployment, rebasing GDP, or reweighting price baskets) (ILO et al., 2020; United Nations et al., 2009). Economics is likewise reflexive, in the sense that forecasts can shape policy choices, and policy choices can in turn reshape the behavior being modelled (Lucas, 1976). The present argument is, therefore, not that economics is free from construction or feedback, but that its key variables are generally stabilized through long-running statistical infrastructures and harmonization practices to a degree that crime categories and crime recording systems are not.
The question arises whether criminology might follow a similar path: could there be a “criminometrics,” a subfield focused on predicting crime trends, identifying individuals at risk, and guiding policies through probabilistic models? At first glance, the appeal is clear. Governments and law enforcement agencies are drawn to the promise of predictive analytics, whether through hotspot policing (Sherman and Weisburd, 1995), actuarial risk assessments (Andrews and Bonta, 2010), or algorithmic crime mapping (Perry et al., 2013). The term criminometrics has two main origins. In the economics-of-crime tradition, it refers to econometric modeling of crime and enforcement (e.g., Aasness et al., 1994; Eide, 1994; Entorf and Spengler, 2002), and related work often uses the adjective “criminometric.” More recently, some scholars have repurposed criminometrics to refer to a psychometric measurement framework in criminology (e.g., Graham et al., 2025). This paper uses criminometrics in the former, econometric sense and argues that—even on its own terms—the project is conceptually inconsistent and morally problematic for reasons discussed below.
This paper argues that although criminometrics is theoretically imaginable, its strongest ambitions—especially high-stakes individual prediction and broad, jurisdiction-spanning deployment—are untenable in practice and difficult to justify in principle, given current data environments and institutional conditions. The analogy with econometrics collapses once the unstable ontology of crime, the limitations of criminological data, and the array of causal variables are brought into view. Unlike many economic indicators, which are institutionally standardized and harmonized enough to support routine comparisons despite ongoing contestation (United Nations et al., 2009), crime is a socially constructed category that varies across jurisdictions and historical contexts. The data on which criminometrics would depend are fragmentary and often distorted by underreporting (e.g., corporate and business-related crime), policing practices, court convictions, and political priorities (Maguire, 2012). Moreover, embedding crime prediction within statistical models risks reproducing systemic biases and obscuring the normative dimensions of criminal justice.
For clarity, this paper uses the term “criminometrics” to denote a programmatic goal rather than “quantitative criminology” (Desrosières, 1998; Espeland and Stevens, 1998). Criminometrics is the attempt to build econometrics-like models of crime that meet (at least) the following five conditions: (1) A stabilized target variable (“crime”) that can be measured consistently enough to support comparison across time and place; (2) A cumulative data infrastructure (standardized recording practices, stable categories, and comparable administrative series) that permits model correction and updating without shifting the meaning of the outcome; (3) Generalizable inference, i.e., parameter estimates or predictive relationships that travel beyond a single locality, unit, or short time horizon; (4) Action-guiding deployment, where outputs are used to allocate policing or penal resources (places, persons, or priorities) in ways that purport to be justified by the model; and (5) A defensible counterfactual—some credible basis for distinguishing “predicted risk” from the effects of enforcement, surveillance, and recording practices that generate the very data being modelled.
Nevertheless, the argument here would be weakened if criminometric systems could (a) demonstrate stable performance under distribution shift (across jurisdictions/time periods and under changing enforcement), (b) retain predictive validity when evaluated against outcomes less tightly coupled to police detection (e.g., victimization or independent harm proxies), and (c) show that deployment does not materially alter the data-generating process in ways that inflate apparent accuracy (Ensign et al., 2018; Quiñonero-Candela et al., 2009). These criteria specify what criminometrics would need to achieve to qualify as an econometrics-like predictive paradigm.
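To make criterion (a) concrete, the following sketch trains a simple classifier on synthetic records from one hypothetical jurisdiction and scores it on a second jurisdiction whose data-generating process has shifted. Every feature, coefficient, and jurisdiction name is an assumption invented for illustration; the point is only to show what a test of stability under distribution shift would look like.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Illustrative test of criterion (a): train on "jurisdiction A", evaluate on
# "jurisdiction B", whose feature-outcome relationship has shifted (synthetic data).
rng = np.random.default_rng(7)

def simulate(n, weight):
    X = rng.normal(size=(n, 3))
    logits = weight * X[:, 0] + 0.5 * X[:, 1]          # the first coefficient differs by regime
    y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)
    return X, y

X_a, y_a = simulate(2000, weight=2.0)    # training records from jurisdiction A
X_b, y_b = simulate(2000, weight=-1.0)   # jurisdiction B: the same feature cuts the other way

model = LogisticRegression().fit(X_a, y_a)
print("AUC within A :", round(roc_auc_score(y_a, model.predict_proba(X_a)[:, 1]), 2))
print("AUC shifted B:", round(roc_auc_score(y_b, model.predict_proba(X_b)[:, 1]), 2))
# Discrimination that holds only in-sample does not satisfy criterion (a).
```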
Econometrics as an epistemic ideal
Econometrics holds a unique position in the social sciences as both a methodological framework and an epistemological ideal. It aims to make social and economic processes understandable through mathematics, turning complex human behavior into measurable relationships between variables. At its core, econometrics depends on probability theory, regression analysis, and statistical inference to explain and forecast patterns in economic data. With enough data and a careful methodology, the underlying structures of economic activity can be identified, modelled, and—at least partly—anticipated (Greene, 2018).
The strength of econometrics comes not only from its techniques but also from the nature of the phenomena it analyzes. Economic indicators such as inflation, gross domestic product, and unemployment, while often debated and periodically revised, are typically grounded in relatively stable definitions and measurement protocols that enable cumulative comparisons (United Nations et al., 2009). The collection of economic data is organized through institutional systems—such as national accounts, central banks, and international reporting agencies—that ensure consistency and comparability over time and across regions (ILO et al., 2020). This consistency allows econometric models to operate within a framework of measurable uncertainty. Errors, residuals, and confidence intervals are expected and considered, but they exist within a largely cumulative data environment that can be corrected over time.
Econometrics has also gained authority through its usefulness. Its models enable forecasts that inform policy and deliver measurable results—such as growth rates, inflation targets, and investment estimates. Governments and financial institutions have incorporated econometric analysis into their decision-making processes, making it a key technology of governance. The epistemic status of econometrics is strengthened by its institutional integration: its results are actionable, its outputs impactful, and its limitations are regarded as technical rather than conceptual (Hendry and Nielsen, 2007). For example, central banks and treasuries routinely operationalize econometric forecasting in inflation targeting, macroeconomic surveillance, and fiscal planning, treating forecast errors as a managed feature of governance rather than a reason to abandon the enterprise (ILO et al., 2020).
This status makes econometrics an attractive model for other social sciences because it embodies the dream of prediction and control—transforming uncertainty into probabilistic order. However, criminometrics, seeking to imitate econometrics, confuses a contingent legal category with a stable, measurable variable.
The hypothetical promise of criminometrics
In general, a criminometric system would measure the factors that lead to criminal behavior, identify individuals or groups at high risk, and predict offending across different times and locations. It would incorporate social, economic, environmental, and psychological factors into predictive models to guide policing tactics and public policy. Essentially, criminometrics offers a data-driven approach to preventing harm, using resources effectively, and improving the rationality of criminal justice decisions.
The outlines of such a project are clear in current practices. Predictive policing uses statistical algorithms to identify potential crime hotspots, relying on past data to forecast where crimes are most likely to occur (Perry et al., 2013). Actuarial risk assessment methods estimate the probability of reoffending based on demographic and behavioral factors (Desmarais et al., 2016). Bayesian models have been used to predict specific crimes, such as burglary, in limited contexts (Mohler et al., 2011). Although often presented as discrete innovations, these technologies point toward a larger ambition: turning criminology into a predictive science that generates actionable insights in real time. In practice, however, “actionability” often means administrative optimization (where to patrol, whom to flag, how to triage caseloads) rather than durable explanatory knowledge (Brayne, 2017; Perry et al., 2013), and these actions can themselves reshape what is recorded as crime (Lum and Isaac, 2016).
Recent work in The Police Journal advances a police-centric framework—RUDI (Rationale, Development and Implementation)—for developing algorithmic models (Sayer et al., 2025). While RUDI aims to mitigate risks through structured rationale, development, and implementation, the present paper argues that such process safeguards cannot overcome the more profound ontological instability of “crime” and the reflexivity of crime data on which these models depend.
The appeal lies not only in practicality but also in symbolic promise. Criminometrics offers policymakers and practitioners an illusion of control over one of the most volatile and politically charged aspects of social life. In a climate of fiscal restraint and public demand for security, a probabilistic approach to crime management seems to reconcile the moral uncertainties of punishment with the quantitative rationality of governance. Statistical prediction appears as objective, value-neutral, and efficient—qualities that lend legitimacy to decisions about policing, sentencing, and resource allocation. In this way, criminometrics functions, like econometrics, as a technology of state rationality: a way of making social disorder measurable and thus manageable.
Nevertheless, this vision depends on several assumptions: that crime can be measured consistently across contexts; that data on offending and victimization are reliable and representative; and that the causal structure of crime is sufficiently stable to permit generalizable inference. These assumptions, while convenient, are conceptually fragile.
Methodological and empirical failures
The central difficulty confronting any project in criminometrics lies in the methodological instability of its foundations. Unlike econometrics, which relies on standardized indicators and systematic data collection, criminology operates in a field marked by conceptual ambiguity, inconsistent measurement, and contextual variability. The aspiration to model crime probabilistically assumes a coherence that does not exist. When examined closely, the more expansive versions of criminometrics—those that claim stable, generalizable prediction across settings—strain under the weight of their own assumptions, even if narrower, context-bound techniques can sometimes yield limited operational gains (Lum and Isaac, 2016; Mohler et al., 2015).
The multiplicity and indeterminacy of variables
The first obstacle is the multiplicity of causal factors that shape criminal behavior. Criminological theory distinguishes between proximal variables—immediate situational and motivational factors such as opportunity structures, peer influence, or substance use—and distal variables, including socio-economic inequality, neighborhood disorganization, familial background, and cultural norms. These factors operate across different levels of analysis—individual, situational, and structural—and interact in complex, often non-linear ways. Their salience shifts across time and place: what predicts youth violence in one social context may have little explanatory power in another (Weatherburn, 2001).
This multiplicity makes it difficult to construct stable regression models. In econometrics, independent variables are often treated as discrete, measurable inputs into a system governed by identifiable rules. In criminology, by contrast, causal relationships are contingent, recursive, and context dependent. Attempts to quantify them often produce models either so simplified as to be irrelevant or so complex as to be analytically brittle. The dream of a generalizable, predictive model of crime is therefore undermined at its methodological core.
The fragility and bias of crime data
Even if causal complexity could be tamed, criminometrics faces data unreliability. Crime statistics are not transparent reflections of social reality; they are institutional artifacts produced through the selective processes of reporting, recording, and classification (Maguire, 2012; author withheld, 2017). A significant portion of criminal activity—the dark figure of crime—never appears in official statistics, whether because of victims’ reluctance to report, the invisibility of certain offenses, law-enforcement discretion, or attrition between recorded offenses and court convictions. What is counted as crime is shaped as much by administrative priorities as by behavior (Maguire, 2012).
Policing practices introduce systematic distortions. Patterns of patrol, surveillance, and enforcement determine where and how crime is “found.” By way of example, over-policing of marginalized communities generates data that exaggerates their apparent criminality, while under-policing of affluent areas produces the opposite effect. Predictive algorithms that rely on such data reproduce these distortions, creating self-reinforcing feedback loops: the more the police focus on an area, the more crime they “find,” thereby justifying continued surveillance (Lum and Isaac, 2016). Far from neutral, criminometric models risk embedding historical and structural biases within ostensibly objective statistical systems.
The problem of construct validity
A further difficulty concerns the status of crime as a measurable object. Unlike inflation, interest rates, or employment levels—phenomena that, however contested, refer to quantifiable social processes—crime is not a natural category. It is a normative and legal construct, contingent upon social, political, and historical conditions. Acts defined as criminal in one jurisdiction may be lawful in another; behaviors criminalized in one era may later be decriminalized or reinterpreted. This definitional fluidity undermines any claim that crime data measure a stable underlying reality (Hulsman, 1986).
The implication is that criminometrics rests on an unstable ontology: it attempts to model not a consistent empirical phenomenon but a shifting terrain of legal and moral categorization. The variability of what counts as crime makes meaningful longitudinal or cross-cultural comparison difficult. It could be argued that the more one attempts to standardize crime data to achieve analytic coherence, the further one moves from the lived realities and moral complexities of offending and victimization.
Feedback loops and reflexivity
Criminometrics confronts a problem of reflexivity—its models do not merely describe crime; they also shape it. Predictive policing and actuarial risk assessment already demonstrate how algorithmic systems generate self-confirming cycles (Ensign et al., 2018). Data derived from past enforcement decisions are used to forecast future crime, directing police resources toward the same locations and populations that produced the initial data. This recursive logic amplifies existing inequalities and consolidates patterns of social control. What appears to be a prediction is, in fact, the reification of historical bias under the guise of statistical objectivity (Ensign et al., 2018; Prunckun, 2019).
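A minimal simulation can make this dynamic concrete. The sketch below is loosely inspired by the allocation dynamics analysed by Ensign et al. (2018): two districts have identical underlying incident rates, patrols are directed to the district with the larger historical record, and only incidents that coincide with a patrol are ever recorded. Every name and parameter is an illustrative assumption rather than a description of any deployed system.

```python
import random

# Two districts with the SAME underlying incident rate; only the records differ.
TRUE_RATE = 0.3                                   # probability a patrol records an incident
recorded = {"district_a": 11, "district_b": 9}    # small initial asymmetry in the records
DAYS, PATROLS_PER_DAY = 365, 10

random.seed(1)
for _ in range(DAYS):
    # "Predictive" allocation: all patrols go to the district with the larger record.
    target = max(recorded, key=recorded.get)
    # Only incidents that coincide with a patrol enter the records at all.
    recorded[target] += sum(random.random() < TRUE_RATE for _ in range(PATROLS_PER_DAY))

print(recorded)
# district_a's record keeps growing (about three recorded incidents per day in
# expectation) while district_b's never does, although the districts behave identically.
```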
Methodological “fixes” and their limits in crime prediction
Many of the technical problems raised by criminometrics—selection effects, omitted variables, feedback, and measurement error—have analogues in econometrics and are routinely discussed in the languages of endogeneity, dynamic causality, and violations of identifying assumptions. In principle, there are methodological responses: experimental and quasi-experimental designs to estimate intervention effects; instrumental variables or difference-in-differences strategies to reduce confounding; and, in machine-learning settings, procedures for calibration, reweighting, or fairness-aware adjustment. These methods matter and should not be ignored.
Recent work in machine learning and algorithmic fairness has sharpened these issues. Work on distribution shift highlights that models can fail when the environment changes (including changes induced by policy and enforcement), complicating claims of portability (Quiñonero-Candela et al., 2009). Fairness research also shows that competing fairness metrics can be mutually incompatible, so “fair” performance depends on normative choices rather than purely technical tuning (Chouldechova, 2017; Kleinberg et al., 2017), as the worked example below illustrates. Finally, simulation studies of feedback-loop dynamics demonstrate that predictive deployment can amplify measured risk through iterative allocation, even when models are statistically well specified (Ensign et al., 2018).
However, their relevance to criminometrics is limited by the distinctive structure of crime data and deployment. First, many “outcomes” available for modeling are not independent measurements of offending but administrative products of surveillance and recording; when the measurement process is itself part of the intervention, technical correction cannot fully separate “risk” from “visibility.” Second, even when causal inference is possible, it typically yields local, context-specific estimates (what worked there, then, under a particular enforcement regime), rather than portable predictive laws. Third, policing interventions generate interference and equilibrium effects: reallocating attention changes behavior, reporting, and detection in ways that violate the stable conditions required for generalizable prediction.
The upshot is not that quantitative methods are useless, but that the strongest criminometric ambition—an econometrics-like predictive science of crime that generalizes across contexts and guides routine, high-stakes decision-making—faces constraints that are not merely technical. Methodological fixes can sometimes improve narrow evaluations of specific interventions, yet they do not resolve the deeper problem that the object being modeled (“crime”) is simultaneously normative, institutionally produced, and altered by the very practices that prediction is meant to guide.
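The incompatibility result cited above admits a short arithmetic illustration. Using the identity derived by Chouldechova (2017), the fragment below holds positive predictive value and true positive rate fixed across two hypothetical groups and shows that, when base rates differ, false positive rates cannot also be equal. All figures are invented for illustration.

```python
# Chouldechova's (2017) identity:  FPR = p / (1 - p) * (1 - PPV) / PPV * TPR,
# where p is the group's base rate. Equal PPV and TPR across groups with different
# base rates therefore force unequal false positive rates.

def false_positive_rate(base_rate, ppv, tpr):
    return base_rate / (1 - base_rate) * (1 - ppv) / ppv * tpr

PPV, TPR = 0.7, 0.6                              # identical "accuracy" in both groups (assumed)
for group, base_rate in [("group_1", 0.4), ("group_2", 0.1)]:
    print(group, round(false_positive_rate(base_rate, PPV, TPR), 3))
# group_1 0.171
# group_2 0.029  -> equal predictive values, unequal error burdens
```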
Comparison with econometrics
While economic data are imperfect, they are embedded in standardized measurement systems that stabilize meaning across time and place. Criminology lacks that stability: its core variable—crime—is a shifting legal and moral designation, and its data are reflexive products of enforcement and court processes. It follows that treating crime as an analogue of output or inflation is a category error that statistical refinement alone cannot resolve, because the core difficulties are conceptual and institutional (definition, recording, discretion, and reflexive feedback) as well as technical.
Ethical and epistemological constraints
Even if the methodological problems were solved, criminometrics would still confront ethical and epistemological limits. Predictive governance risks displacing questions of responsibility and justice with a technocratic logic of risk.
Ethical risks and the politics of prediction
Predictive systems built on biased or incomplete data amplify existing inequalities by encoding them into algorithmic form. Actuarial approaches for risk assessment often reproduce racial and socio-economic disparities under the veneer of objectivity (Angwin et al., 2016). When such models guide decisions about bail, parole, or sentencing, they transform structural disadvantage into individualized risk. Those already subject to heightened surveillance are deemed most likely to reoffend, perpetuating cycles of control and marginalization (Quinney, 1970).
This dynamic reflects the technocratic fallacy: the belief that moral and political problems yield to quantitative precision. Translating social behavior into statistical probabilities obscures the normative questions that underlie criminal justice—questions of responsibility, fairness, and the purpose of punishment. By foregrounding predictive accuracy, criminometrics legitimizes punitive practices as rational policy choices rather than contestable exercises of power. It reframes justice as risk management, turning citizens into data points and ethical judgment into algorithmic calculation.
The illusion of objectivity
Statistical models seem neutral because their numbers feel detached from ideology. However, data are never truly neutral. Choices about what to count, how to code, and how to model criminal events are influenced by institutional priorities and cultural assumptions (Bowker and Star, 1999). In crime analysis, these choices are deeply connected to the politics of criminalization and law enforcement. The idea that computational sophistication can eliminate bias is false. Algorithms do not remove subjectivity; they automate it.
This illusion functions ideologically. By converting social phenomena into numerical data, criminometrics obscures the normative choices embedded in criminal justice policy. It presents social control as a matter of technical optimization rather than political debate. The result is not only epistemic distortion but also moral displacement: responsibility for inequality and harm is shifted from social structures to statistically “risky” individuals.
Epistemological incompatibility
Criminology is not a unified science. It encompasses positivist, interpretive, and critical traditions, each grounded in different conceptions of causality and explanation (Newburn, 2017). While some strands employ quantitative methods, others emphasize meaning, agency, and social context. The field’s strength lies in its pluralism—its recognition that crime is simultaneously a behavioral, social, and moral phenomenon.
Imposing a criminometric paradigm would privilege one epistemology—the probabilistic and predictive—over others. It would reorient criminology toward a deterministic model of human behavior, treating action as the outcome of measurable risk factors rather than as the product of moral choice, social context, or structural constraints. Such a shift would narrow the field’s moral and intellectual horizons and undermine its capacity for critical reflection on the very categories—crime, deviance, justice—through which societies define themselves.
The ethical value of uncertainty
Paradoxically, what criminometrics seeks to eliminate—uncertainty—may be criminology’s most ethically valuable feature. Acknowledging the indeterminacy of human behavior preserves space for moral agency, accountability, and reform. Predictive certainty risks foreclosing that space. When individuals are treated as statistical risks rather than moral subjects, the possibility of rehabilitation diminishes (Hannah-Moffat, 2013), and justice becomes administrative rather than deliberative. On this view, a criminology that embraces uncertainty, complexity, and contingency remains truer to its humanistic purpose than one that aspires to probabilistic control.
Misinterpretations and limited use cases
Commonly cited “successes” in predictive policing—hotspots, actuarial risk scores, and localized forecasting models—tend to be bounded operational interventions rather than steps toward an econometrics-like science of crime prediction. In most deployments, model performance is validated against police-generated outcomes (recorded incidents, detections, arrests) and is therefore inseparable from enforcement patterns and recording practices.
Empirical illustrations
Calls for criminometrics often point to “successful” predictive deployments, but the empirical record is more mixed, and the meaning of “success” is frequently operational rather than explanatory (Lum and Isaac, 2016; Perry et al., 2013). Place-based forecasting has produced some measurable gains under constrained conditions. For example, a randomized field trial in Los Angeles reported modest reductions in recorded crime when patrol time was guided by short-horizon algorithmic forecasts compared with analyst-led hotspot practices (Mohler et al., 2015). Yet even favorable results of this kind are typically local and intervention-dependent: they show that targeted patrol can suppress recorded incidents in selected micro-places over short windows, not that crime becomes a stable, portable object of prediction across jurisdictions, offense types, or changing enforcement regimes.
Person-based prediction has proven even more fragile. Chicago’s Strategic Subject List (“heat list”) is a well-known attempt to identify individuals at elevated risk of gun violence; a quasi-experimental evaluation found limited or no clear violence-reduction effects attributable to the intervention as implemented, alongside substantial uncertainty about mechanisms and use (Saunders et al., 2016). Where such systems do show apparent “predictive” power, that power often derives from the same enforcement and detection processes that shape criminal-justice records, raising the question of whether the model is forecasting future harm or reproducing the administrative visibility of already-surveilled populations.
These experiences illustrate a recurring pattern: predictive programs are easiest to validate when the outcome is closely coupled to police activity (detections, arrests, recorded incidents) and when deployment itself helps generate the measured “success.” In that sense, many criminometric achievements are better interpreted as administrative optimization—a way of organizing patrol, attention, and case prioritization—than as cumulative advances toward a general science of crime prediction (Ensign et al., 2018; Lum and Isaac, 2016; Richardson et al., 2019).
Predictive policing and hotspot analysis
Hotspot analysis is often heralded as proof that crime follows predictable patterns (Braga et al., 2019). Using historical data, these models identify micro-locations with elevated probabilities of future offenses, guiding the allocation of police resources. Such methods can enhance operational efficiency, but within a constrained frame. They predict not the causes of crime, but the recurrence of recorded incidents, which reflect the geography of policing as much as the geography of offending.
Consequently, predictive policing methods tend to confirm existing enforcement patterns. They are self-referential and reliant on the data they help generate. Their apparent successes—modest reductions in recorded incidents within targeted areas—are local, transient, and tied to law enforcement’s operational practices. Presented as prototypes of criminometrics, such instruments are better understood as tactical optimization than as theoretical progress.
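The mechanics can be sketched briefly. Retrospective hotspot maps are commonly built by smoothing past incident locations, for instance with kernel density estimation, and flagging the highest-density cells for patrol; the fragment below does exactly this on synthetic coordinates. The data, the smoothing defaults, and the one-per-cent hotspot threshold are assumptions for illustration, not a description of any particular product.

```python
import numpy as np
from scipy.stats import gaussian_kde

# Hypothetical recorded-incident coordinates (x, y) within one precinct, in km.
rng = np.random.default_rng(0)
incidents = np.vstack([
    rng.normal(loc=(2.0, 3.0), scale=0.3, size=(80, 2)),   # a heavily *recorded* cluster
    rng.uniform(low=0.0, high=5.0, size=(40, 2)),          # diffuse background records
]).T                                                        # shape (2, n), as gaussian_kde expects

density = gaussian_kde(incidents)

# Score a 50 x 50 grid and flag the top 1% of cells as "hotspots" for patrol.
xs, ys = np.meshgrid(np.linspace(0, 5, 50), np.linspace(0, 5, 50))
grid = np.vstack([xs.ravel(), ys.ravel()])
scores = density(grid).reshape(xs.shape)
hotspots = scores >= np.quantile(scores, 0.99)

print(int(hotspots.sum()), "grid cells flagged for patrol")
# The map estimates where incidents were *recorded*, not where offending occurs.
```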
Actuarial risk assessment
Actuarial instruments claim to predict individual likelihoods of reoffending (Monahan and Skeem, 2016). They aggregate demographic, behavioral, and socio-economic indicators into risk scores. These scores may correlate with recidivism rates at the population level, but their predictive validity for individuals is weak and ethically contentious. More importantly, they rest on a circular premise: they predict “reoffending” based on data conditioned by enforcement and detection patterns.
They achieve classification rather than explanation—sorting individuals according to historically derived probabilities that reflect systemic inequities. Their utility lies not in understanding why people offend, but in rationalizing the distribution of penal resources. Algorithmic sophistication should not obscure that they do not—and cannot—model crime as a complex social phenomenon.
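A stylized sketch makes the classification logic explicit. The fragment below fits a logistic regression to a handful of invented case records and converts predicted probabilities into risk tiers; the features, labels, and cut-offs are hypothetical, and the outcome label (re-arrest) is itself a product of detection, which is precisely the circularity noted above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical records: [age_at_first_contact, prior_recorded_arrests, employed]
X = np.array([
    [16, 4, 0], [22, 1, 1], [19, 3, 0], [35, 0, 1],
    [17, 5, 0], [28, 2, 1], [40, 0, 1], [21, 2, 0],
])
# The "outcome" is re-arrest within two years: a product of detection, not of offending itself.
y = np.array([1, 0, 1, 0, 1, 0, 0, 1])

model = LogisticRegression().fit(X, y)

new_cases = np.array([[18, 3, 0], [33, 0, 1]])
for case, p in zip(new_cases, model.predict_proba(new_cases)[:, 1]):
    tier = "high" if p >= 0.7 else "moderate" if p >= 0.4 else "low"
    print(case, f"score = {p:.2f}", f"tier = {tier}")
# The model sorts cases by historically derived probabilities; it explains nothing about
# why anyone offends, and it inherits whatever biases shaped the underlying arrest records.
```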
Localized forecasting and the limits of scale
Spatial-temporal and Bayesian models sometimes forecast specific crimes, such as burglary or car theft, within bounded contexts (Short et al., 2009). Under controlled conditions, limited forms of probabilistic forecasting are possible. Yet these models are context-dependent, temporally unstable, and methodologically constrained. Their effectiveness declines when extended beyond the environments or timeframes in which they were calibrated.
In other words, they do not scale. They cannot be generalized across jurisdictions, crime types, or social contexts without losing accuracy. Their limited success highlights the intractable problems of criminometrics.
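For illustration, the sketch below writes out a simplified, time-only version of the self-exciting (“near-repeat”) intensity that underlies models of the kind reported by Mohler et al. (2011): each recorded event temporarily raises the expected rate of further recorded events before the effect decays. The parameters and event times are invented; operational models are spatio-temporal and estimated from recorded incidents, which is exactly what ties them to local enforcement and recording conditions.

```python
import numpy as np

MU = 0.2      # background rate of recorded events per day (assumed)
THETA = 0.5   # expected number of follow-on events triggered per event (assumed)
OMEGA = 0.3   # decay rate of the triggering effect, per day (assumed)

def intensity(t, past_events):
    """Conditional intensity lambda(t) given previously recorded event times."""
    excitation = sum(
        THETA * OMEGA * np.exp(-OMEGA * (t - s)) for s in past_events if s < t
    )
    return MU + excitation

recorded_burglaries = [1.0, 2.5, 3.0, 3.2]        # days on which events were recorded
for t in (3.5, 7.0, 30.0):
    print(f"day {t:>5}: expected rate = {intensity(t, recorded_burglaries):.3f}")
# The forecast spikes just after a cluster of recorded events and decays back toward the
# background rate: useful for short-horizon patrol allocation, but only within the time
# window and place where the model was calibrated.
```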
Illustration of the rule, not the exception
As a whole, these applications illustrate the rule that criminometrics cannot be realized as a coherent discipline under current institutional conditions, particularly where models are expected to generalize across settings and guide routine high-stakes decisions. We conclude that predictive policing, risk assessment, and localized forecasting do not constitute a generalizable science; they are pragmatic methods at the intersection of statistics, administration, and policy. Their outputs are contingent, their assumptions fragile, and their ethical implications profound. Rather than foundations for criminometrics, they exemplify its contradictions: the conflation of data with reality, prediction with governance, and probability with truth.
Conclusion
Could criminology develop a methodological counterpart to econometrics—a criminometrics capable of predicting crime with probabilistic accuracy and guiding policy through quantitative foresight? The analysis suggests that a general, scalable criminometrics that claims to deliver stable predictions across contexts and guide policy in a routine, high-stakes way is not feasible under current conditions. This judgment might need revising if models could demonstrate robust performance under distribution shift, retain validity against outcomes not tightly coupled to police detection, and avoid feedback-driven inflation of apparent accuracy. Strong claims of equivalence between econometrics and criminometrics often rest on misunderstandings of what crime is, how it is measured, and how it functions as a social phenomenon.
Econometrics often operates with more stable categories and more standardized measurement infrastructures than those available for most crime indicators. Its core variables—production, consumption, inflation, employment—are defined through institutional consensus and embedded in data systems that ensure comparability and continuity. Econometric models, although imperfect, can draw on data coherent enough to support meaningful inference and prediction. Criminology lacks these prerequisites. The phenomena it studies are volatile, relational, and normatively charged. Crime is not an observable natural category but a legal and moral label that varies across cultures, jurisdictions, and periods. Its data are incomplete, biased, and reflexive, reflecting institutional practices as much as behavior.
Attempts to develop criminometric models as a general, scalable, high-stakes predictive paradigm are unlikely to succeed under current conditions, given persistent empirical fragility and deeper epistemological tensions. The multiplicity of causal pathways makes stable measurement difficult; unreliable, selectively recorded data weaken inference; and the social construction of crime makes broad predictive generalization hard to defend without strong boundary conditions and independent validation. Even when quantitative methods appear successful, their achievements are limited, operational, and context-dependent. They may improve decision-making within institutional frameworks, but they rarely attain—and cannot reliably sustain—the kind of systemic, portable predictive power often associated with econometric policy uses.
Beyond these methodological limits lie ethical and epistemological concerns. The pursuit of criminometrics risks turning criminology from a discipline focused on justice, meaning, and context into one obsessed with control and prediction. When misused, statistical modeling naturalizes inequality by turning historical biases into supposedly objective knowledge. The authority of numbers obscures the moral and political choices embedded in criminal justice practices. In trying to imitate the predictive logic of econometrics, criminology could abandon its critical and normative roles—the very features that make it a field concerned not only with causation but also with consequences.
If criminometrics is a myth, it is a revealing one: a technocratic desire to make uncertainty manageable and social disorder predictable. Criminology’s strength lies in its resistance to such reductionism. Crime cannot be forecast as if it were a stable economic indicator because it is not an economic variable but a social relationship shaped by power, norms, and meaning. The best way to respond to this complexity is not to oversimplify it but to engage with its intricacies.
Criminology should embrace methodological pluralism rather than pursue the illusory certainty of probabilistic prediction. Quantitative analysis has its place, but it must be tempered by qualitative understanding, theoretical reflexivity, and ethical awareness. A mature criminology recognizes that the value of its insights lies less in predictive power than in illuminating the social, cultural, and moral dimensions of crime and justice. In this light, criminometrics is not the future of criminology but a cautionary mirror—reminding us that the quest for control, when mistaken for knowledge, leads not to understanding but to distortion.
Funding
The author received no financial support for the research, authorship, and/or publication of this article.
Declaration of conflicting interests
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
