Abstract

This response offers a critical examination of the recent article published in Police Quarterly, which evaluates Force Science’s published research and its use in courtroom testimony. While the article presents itself as a scientific critique, it includes errors that warrant correction.
As a former reviewer for Police Quarterly, Dr. Lewinski, together with Force Science, appreciates Dr. Worrall's invitation to offer this response and to engage constructively with the authors' concerns. We are willing to assume good faith on the part of the article's authors and do not question their commitment to improving law enforcement outcomes. That said, we recognize that the errors presented in their article may be the product of bias and conflicts of interest that we will disclose before addressing the substance of their critiques.
Preliminary Observations: Undisclosed Conflicts of Interest
Readers familiar with Stoughton et al. will note that three of these authors frequently testify against law enforcement officers and opposite Force Science experts in high-profile litigation. Although not a listed author, Lisa Fournier, who is repeatedly cited in support of the authors' positions, also testifies against officers and opposite Force Science-trained experts. For readers unfamiliar with these relationships, we begin by disclosing those material financial conflicts, along with the professional conflicts described below.
Well known for their textbook, Evaluating Police Uses of Force, and for other publications advocating police and legal reform, these authors approach policing issues from academic perspectives. As activists, they unapologetically, and necessarily, advocate for changes to current law and police practices. However, their advocacy and their opinions on police practices, tactics, and the use of force are frequently challenged and rejected, in and out of court, by police professionals and use-of-force experts. These authors often fail to consider or apply human performance factors in use-of-force analysis. In their expert capacity, they have a reputation for imagining, and testifying to, a level of certainty and predictability in force encounters that officers typically do not experience.
By neglecting to incorporate human performance considerations in their evaluations, they consistently set themselves apart from Force Science and industry professionals who advocate for the integration of human performance factors in training, tactics, and force evaluations. These conflicts of interest appear to motivate their targeted critique and mischaracterization of Force Science.
While not a named author, Lisa Fournier is prominently cited throughout the article as an authoritative voice. Dr. Fournier serves as a highly paid expert witness, testifying against officers and challenging the opinions of police-practice and use-of-force professionals. As the authors noted, she has published The Daubert Guidelines, in which she advocates that “… peer reviewers should be selected based on their expert knowledge of the construct (e.g., specific cognitive process) under investigation, the method or technique employed, and the measures and analyses used to draw specific conclusions about the construct under investigation.” Dr. Fournier admits that she is not an expert in police practices or use-of-force decision-making. She has not conducted research on perception, cognition, or decision-making outside of a lab, nor on the influence of physically or emotionally driven arousal states.
Although she attempts to limit her advocacy to the appropriateness of specific scientific research, her opinions ultimately frame the context in which police decisions are reviewed in court and bleed into a review of the reasonableness of police tactics. By her own guidelines (and industry standards), she is not qualified to review Force Science research or police practices and tactics involving areas outside of her expertise. In this context, her portrayal as a neutral scientific source reinforces concerns about the article’s lack of transparency regarding the contributors’ adversarial positioning.
Finally, it is worth noting that these authors have not criticized the research findings and application of implicit bias training, de-escalation tactics, crisis intervention models, or body-worn camera policies—despite the limited methodological rigor and questionable empirical support in these highly influential areas. This uneven application of academic scrutiny suggests that critiques are being driven less by scientific concern than by ideological and professional motivations.
Addressing the Core Concerns
To begin, we acknowledge the authors’ core concerns: (1) that, although they are unaware of any research that contradicts or discredits Force Science findings, their view of its methodology should render that research unsuitable for shaping policy or informing legal decisions; and (2) that using Force Science research in training, litigation, or courtrooms risks perpetuating harmful practices.
The authors do not explain or provide examples of how Force Science research, which is credited with saving countless lives, could perpetuate harmful practices. Nevertheless, we will engage with the substance of their critiques to address errors or mischaracterizations.
Scholarly Credentials of Dr. Bill Lewinski
Dr. Bill Lewinski, founder and Chief Researcher at Force Science, is a behavioral scientist with over 28 years of academic service at Minnesota State University, Mankato. Dr. Lewinski served as a tenured full professor, Director of the Law Enforcement Program, and Chair of the Department of Government. He continues his association with the university as Professor Emeritus of Law Enforcement.
In addition to his academic leadership, Dr. Lewinski has served as a peer reviewer for the American Psychological Association’s journal Psychology, Public Policy, and Law—a role reserved for experts entrusted to evaluate the scientific rigor and integrity of submitted research. He was selected for this responsibility based on his deep understanding of the scientific method, supported by formal education in experimental psychology, graduate-level coursework in research design, and a doctoral-level course in statistics.
Dr. Lewinski has also been trusted to review for respected peer-reviewed journals, including:
• Policing: A Journal of Policy and Practice
• Cognitive Processing – International Quarterly of Cognitive Science
• Cognition, Technology and Work
• Applied Ergonomics
• Police Quarterly
• Law Enforcement Executive Forum
• Policing: An International Journal of Police Strategies and Management
Beyond journal reviews, Dr. Lewinski has evaluated research proposals for the Social Sciences and Humanities Research Council of Canada and has served on tenure and dissertation committees for universities in both the U.S. and Canada.
Taken together, these roles reflect his well-established reputation within the scientific and academic communities. He is trusted not only to publish research but also to help shape the standards by which research is judged.
Clarifying the Nature of Force Science Research
Force Science research has always been positioned as descriptive and formative human performance science. Its central contributions involve documenting observable patterns of human performance under real-world constraints—such as reaction time during threat encounters, attention allocation, and perceptual limitations under stress.
The studies in question have never been offered to establish universal application or causality, but rather to describe possible (and even probable) patterns of performance in time-compressed, high-stress conditions. Opposing experts have argued that human performance considerations should be excluded because the trier of fact will, for example, be unable to discriminate between inattentional blindness and lying. It remains our opinion that where a person is accused of lying, the trier of fact deserves to know there are well-established scientific principles that may also explain the omission or discrepancy.
The human performance concepts recognized and applied by Force Science are well-established across various disciplines, including aviation, medicine, psychology, sports science, and military performance science. Concepts like sensory gating, attentional narrowing, heuristic decision-making, action-reaction timing, and memory limitations under stress are not unique to Force Science. Force Science’s contribution is the recognition and application of these long-established phenomena to human behavior involved in police decision-making. To that end, these insights continue to inform law enforcement (and civilian) training, policy, and post-event evaluations.
Misapplication of Evaluation Tools
The evaluation criteria applied in the article—specifically MMAT, NOQAS, and MSMS—are designed for experimental and clinical trial research, not for descriptive or naturalistic performance studies. The resulting critique penalizes Force Science research not for invalid design, but for lacking characteristics that are actually inappropriate or unethical in applied police contexts (e.g., randomization, control groups, blinding). The evaluation tools should match the purpose, constraints, and application of the research.
Descriptive research plays a vital role in identifying patterns, establishing baselines, and informing future hypothesis-driven studies. It is widely accepted in other performance-oriented fields. Consider, for example, the NFL Combine. For decades, data from the 40-yard dash and its 10- and 20-yard splits have been used to evaluate explosiveness and speed—not based on randomized trials, but through simple, repeated measurement. This type of real-world benchmarking is essential in high-performance professions. Much of Force Science’s research serves a similar purpose—translating observed realities into measurable, instructive data that can inform training, policy, and reasonable expectations of human performance.
In this case, the authors apply evaluative frameworks better suited to randomized controlled trials to descriptive, applied behavioral research. They admit these tools are not appropriate, yet they use them anyway, producing a skewed judgment. This methodological misapplication is certain to confuse the reader, distort the value of Force Science’s work, and leave the authors exposed to allegations of hiding behind the veneer of complex evaluation models.
Additionally, the methodology used to critique Force Science—bibliometric analysis—is presented without clear adherence to recognized protocols. While such tools may have value in examining large and mature research fields, it is unusual to see them applied to a modestly sized body of work from a single organization. Bibliometric frameworks are typically reserved for retrospectives spanning hundreds, even thousands, of publications—not for targeted scrutiny of a few dozen applied studies. The introduction of this method, absent grounding in prior applications of this kind, raises questions about its appropriateness and reliability in this context. To their credit, the authors acknowledged in their limitations that their review was based on a non-random, incomplete sampling of Force Science work and that the assessment tools used are biased toward experimental methods and may oversimplify behavioral research.
Even so, when unfamiliar or novel evaluative tools are applied to scientific literature, it is customary to ensure that reviewers possess the relevant expertise. Without such expertise, important distinctions between descriptive, formative, and experimental research may be overlooked. In this case, the apparent absence of precedent for bibliometric analysis in similar contexts raises reasonable questions about whether the tool was applied with the appropriate depth and discipline. This matters—not only for the credibility of the critique—but for the broader trust in how scientific evaluation is conducted and communicated.
Scientific Utility and Admissibility
The critics’ article, and the responses to it, raise what some may call epistemological issues: questions about how we know what we know and what constitutes valid knowledge. In simpler terms, the debate asks whether knowledge derived from observing real-world patterns is as legitimate as knowledge produced in a laboratory under controlled conditions. In law enforcement, where decisions are made in real time, both kinds of knowledge matter—but they serve different purposes.
Force Science acknowledges that descriptive science has limits, and it is incumbent on both researchers and courtroom experts to disclose those limits. This principle is widely accepted in fields like aviation, emergency medicine, and sports psychology—where data gathered through observation and performance studies guide training and policy without claiming courtroom-level certainty.
Descriptive studies, when presented transparently, are not only useful but often critical in understanding human behavior in emergent situations. These studies have never claimed to offer definitive evidence in any given case, but rather to inform plausible ranges of human response. Force Science experts are quick to point out that there is not, for example, a universal response time, sprint time, or assault duration. Nothing in Force Science training or expert testimony presumes the ability to attribute a specific human performance phenomenon to an individual, or to rule one out.
The authors seem to conflate anecdotes of courtroom overreach with fundamental problems in the research itself. This is an important distinction. As in any field, misuse by an expert is a competence or ethics issue that should be addressed through proper training, cross-examination, and judicial oversight—not by preemptively attempting to discredit an individual researcher and an entire body of scientific inquiry.
Lack of Contradictory Research
Despite their criticism, the authors concede they are unaware of any peer-reviewed study that contradicts Force Science’s key findings, which involve concepts such as reaction times, attentional focus, and memory under stress. Force Science does not claim to have invented these ideas but rather contextualizes them for law enforcement. However they may testify in a particular case, the authors acknowledge the potential application of human performance research in their own textbook.
Mischaracterization of “Force Science” Experts
The article mislabels professionals who reference Force Science research or attend its courses as “affiliates,” falsely implying formal organizational endorsement. Force Science does not control how graduates apply course material. This mischaracterization invites anecdotes outside the influence or control of Force Science and unfairly broadens the critique.
The authors expressly avoid criticizing Force Science experts admitted as non-scientific experts. However, their focused critique still presents a high likelihood that readers will conflate Force Science’s training and consulting programs with the work of its research division. The consulting division relies on established frameworks for the use of force, legal standards, and police practices. If a Force Science consultant or expert witness relies on Force Science studies, they do so understanding that these studies were peer-reviewed, published in sources relied upon by experts in the industry, and routinely supported by independent research. Again, the authors (and Lisa Fournier) have publicly and under oath admitted they are unaware of any research that contradicts Force Science’s key findings. Still, should the consulting division become aware of any contradictory research, its members remain ethically obligated to disclose those findings to students, clients, and courts.
The effort to undermine confidence in Dr. Lewinski’s research will not diminish Force Science’s influence on policy, training, and litigation. Force Science consultants and trainers possess decades of professional experience and training that extend well beyond Dr. Lewinski’s work. Still, Force Science experts confidently rely on Dr. Lewinski’s findings, as they are supported by a broader body of scientific literature. Attempting to discredit Force Science studies does not negate the established consensus on human performance limitations and capabilities in force encounters.
The article suggests that Force Science’s curriculum relies solely on its own studies. This portrayal of Force Science as a self-referential system is inaccurate. In reality, the Force Science curriculum incorporates decades of research from experts across multiple disciplines and was developed in consultation with faculty who are among the leading professionals in their respective industries.
In fairness, distinctions are made between peer-reviewed “scientific” journals and peer-reviewed professional journals. The limits of that distinction are highlighted in Dr. Fournier’s Daubert article, which calls into question the peer-review process itself, not the validity of Dr. Lewinski’s research.
Circumventing Judicial Gatekeeping
The article attempts to delegitimize Force Science by implying its research is inadmissible in court. Force Science testimony and research have been evaluated and accepted after adversarial processes. Experts affiliated with Force Science frequently cite a broad body of research in support of Force Science findings.
The admissibility, relevance, and application of scientific research in legal proceedings are not determined by extrajudicial academic debate, but by courts acting in their role as gatekeepers. Under Daubert, Frye, and their state-specific variants, judges decide whether scientific evidence or expert opinion meets the legal thresholds for reliability and relevance in the specific context of a case. It is within this framework that the purpose of evidence—whether to suggest possibilities, probabilities, or causality—is assessed. The authors of “Forced Science” know this. Yet instead of directing their efforts to courtroom challenges or peer rebuttals within the scientific process, they use a professional, non-scientific journal to bypass the courtroom and leave readers with the misleading impression that Force Science’s research has already been discredited or judicially discarded. This is both disingenuous and harmful.
In reality, Force Science research and human performance considerations remain foundational tools for well-trained force investigators, law enforcement evaluators, attorneys, and judges. Excluding these concepts from use-of-force investigations and legal proceedings is to withhold evidence that helps courts and communities understand how officers perceive, decide, and act in real time. To tell courts and communities, “We have no evidence that they are wrong, but we don’t trust that they are right,” is to abandon objectivity in favor of an agenda.
Misrepresentation of LEEF as Non-Peer-Reviewed
The authors argued that Force Science published materials outside of peer-reviewed venues, specifically criticizing the Law Enforcement Executive Forum (LEEF). However, it is important to note that LEEF was a peer-reviewed publication affiliated with Western Illinois University and supported by a respected editorial board. Articles submitted to LEEF underwent a blind peer review process and adhered to APA standards. The decision to publish in LEEF was made strategically to engage relevant law enforcement audiences, and it was the type of publication reasonably relied upon by experts in the industry.
Critics of Force Science studies published in the Law Enforcement Executive Forum often dismiss the legitimacy of the journal’s peer review by asserting that the reviewers “must not have been qualified.” These assertions are not based on any evidence about the reviewers (whose identities are, by design, anonymous) but simply on the critics’ disagreement with the methodology. These criticisms have ignored statistical data that was available to the reviewers, criticized methodologies that the critics themselves have used in research, and mischaracterized human performance in order to retroactively discredit the peer review process. In depositions, critics have admitted that they have not conducted research in the subject areas being critiqued—such as police performance under stress, reaction time, or attention in tactical scenarios—an admission that, by their own standards, disqualifies them from reviewing the studies.
Reliance on the Debunked New York Times Opinion Piece
In 2015, the New York Times published an opinion article that misrepresented Force Science’s mission and unfairly characterized the work and motivations of its founder, Dr. Bill Lewinski. Although Dr. Lewinski offered sworn affidavits and additional context to correct factual errors, the author declined, explaining that as an opinion writer, he was only obligated to quote sources accurately—not to ensure factual accuracy overall.
The article has been widely criticized for its emotionally charged, hyperbolic tone and misleading assertions. It repeated the inflammatory phrase, “shoot first and he will answer questions later”—a line coined by Johnnie Cochran in civil litigation. The piece appeared to be written to influence ongoing high-profile litigation and to sow distrust in Force Science’s work.
The authors of the “Forced Science” critique, despite their academic credentials, continue to cite this discredited opinion piece as if it were authoritative. They are fully aware that it misrepresents both the application of human performance research and Force Science’s relationship with federal agencies and the American Psychological Association (APA), for whose journal Dr. Lewinski has served as a reviewer. The article falsely suggested that Dr. Lisa Fournier spoke on behalf of the APA; in reality, she served as a paid expert for opposing counsel. Similarly, arguments made by an Assistant U.S. Attorney during litigation were wrongly portrayed as reflective of official DOJ policy. What was not reported was that the same attorney later stated there was no personal dispute and expressed interest in working with Force Science after the litigation.
These distortions have long been debunked, yet they continue to be recycled in an effort to create the false impression that Force Science is broadly rejected by the Department of Justice and law enforcement. In fact, Force Science maintains active, long-standing relationships with federal law enforcement agencies, including the DOJ, which relies on its research and training to enhance professional standards and practice.
Commitment to Scientific Integrity
Force Science does not claim ownership or discovery of the human performance concepts it teaches and applies. Its value lies in recognizing, organizing, and applying established human performance research to policing. The training curriculum and consulting insights are drawn from highly respected researchers and institutions. Where Force Science has identified gaps in scientific inquiry, it has conducted and published more than 30 peer-reviewed studies. Its faculty includes renowned experts such as Dr. Marc Green, Dr. Gary Klein, Dr. Joan Vickers, and others, whose work is supported not by “Grey Literature,” as the critics imply, but collectively by well over 1,000 peer-reviewed publications. The relevance and reliability of Force Science’s work are further supported by a broader body of scientific consensus. Force Science research is not an outlier, but part of a larger, well-recognized scientific body.
The “Forced Science” article is not a neutral academic critique. At best, it is an effort at post-publication peer review by authors who have misapplied evaluative models. Unfortunately, it may also be perceived as a targeted effort by financially and ideologically conflicted individuals to undermine a competing perspective. Without disclosing their conflicts, they have misapplied evaluation tools, ignored supporting science, and misrepresented Force Science affiliations.
That said, the scientific process envisions that past research and findings be scrutinized and tested as new information and technology become available. We welcome honest and evidence-based scrutiny. If research emerges that challenges our findings, we invite continued engagement and discussion. Until then, readers can be confident that Force Science remains committed to constant and never-ending improvement and will continue to pursue research with transparency, accountability, and the highest standards of scientific integrity.
