The term “generally accepted police practices” is often cited in legal proceedings to evaluate law enforcement actions, particularly in use of force incidents. However, experts Von Kliem, Jamie Borden, and Daniel King recently highlighted its frequent misuse, which can unfairly penalize officers and distort justice. This article, based on their in-depth discussion, explores the complexities of this issue, the pitfalls of vague standards, and practical strategies for officers, investigators, and agency leaders to navigate this challenging landscape. Designed for law enforcement professionals, legal stakeholders, and policymakers, it offers actionable steps to ensure fair, evidence-based evaluations.
The Problem with “Generally Accepted Police Practices” – No National Standard Exists
Von Kliem, Chief Consulting and Communications Officer at Force Science, emphasized that no single authority defines “generally accepted police practices.” Organizations like the International Association of Chiefs of Police (IACP), Lexipol, or the Department of Justice (DOJ) provide model policies, but these are not universally adopted. With over 18,000 law enforcement agencies in the U.S., policies vary based on local resources, laws, and community needs, making a national standard impractical.
Kliem noted that many guidelines come from small committees or activist groups, such as the Police Executive Research Forum (PERF), which may push agendas not grounded in operational reality. For example, DOJ consent decrees have forced agencies to adopt policies under legal pressure, not because they reflect widespread practice. This creates a disconnect between policy ideals and the street-level decisions officers make.
Misuse in Legal Proceedings
Jamie Borden, a use of force investigator, trainer, court-certified expert in the field, and best-selling author of “Anatomy of A Critical Incident – Navigating Controversy,” explained that academic experts often cite these vague standards or recommendations to criticize officers in legal proceedings. Lacking operational experience, these experts evaluate actions against self-defined “national standards” that don’t exist, ignoring the officer’s training, agency policies, and the reality the officer faced in the moment. In one case, an expert suggested an officer should have de-escalated (without defining what that de-escalation tactic should have been) while a suspect with a knife wrestled with another officer, a recommendation that was tactically unrealistic for multiple identifiable reasons. In reviewing and analyzing these events, de-escalation must be treated as a “goal,” not a “tactic.”
Daniel King, a retired officer, use of force expert, and published author, highlighted how prosecutors exploit this ambiguity. By charging officers with crimes like assault or manslaughter, they bypass constitutional standards like Graham v. Connor (1989), which judges use of force based on an officer’s reasonable perception of the situation. Polished academic witnesses sway jurors unfamiliar with policing, presenting speculative alternatives that sound logical but are operationally flawed at their core and unrealistic in practice.
Hindsight Bias and Counterfactual Reasoning: The Fog of Critical Incidents
Kliem described how officers make decisions in “System 1” thinking—fast, intuitive, and based on limited information—under the “fog of war” in high-stress situations. Post-incident reviews, however, use “System 2” thinking—slow and outcome-informed—leading to hindsight bias. Reviewers assume events were more predictable than they were, criticizing officers for not choosing idealized alternatives.
Borden, referencing human factors experts like Sidney Dekker, stressed the need to stay “in the pipe,” evaluating decisions based on what the officer knew at the time. For example, an officer facing a suspect with a hatchet may not know if others are nearby, but a review might fault them for not negotiating from cover, ignoring the immediate threat to the community. This falls squarely within the risk-versus-threat assessment that officers are constantly making.
Counterfactual Critiques
Borden called these critiques “counterfactual reasoning,” where experts speculate about what “should have” happened. In a case King cited, an expert suggested an officer should have slapped a suspect instead of using lethal force—an absurd idea given the context of the incident. Such arguments assume better outcomes without evidence, misleading jurors who lack policing experience.
Departmental Policies: The True Standard and Focus on Agency Directives
Kliem urged officers to prioritize their agency’s policies and training, which vary widely across jurisdictions. These directives, not vague national standards, are what officers are held accountable to. Borden warned that signing off on policies (e.g., via PowerDMS) doesn’t ensure understanding. Officers must actively study their use of force guidelines so they can explain their actions effectively and educate reviewers.
Prosecutorial Tactics
King shared cases where prosecutors disregarded agency policies, relying on academic experts to argue violations of “generally accepted practices.” In several cases, references to Graham v. Connor and state self-defense statutes were excluded from trial, leaving jurors to judge the officer against undefined standards. This tactic undermines due process, as officers lack notice of the expectations they’re held to.
Ethical and Legal Concerns: Prosecutorial Misconduct
As a former prosecutor, Kliem was troubled by prosecutors deviating from established standards for convenience. In one case, a prosecutor rejected their own prior Graham-based standards to secure a conviction, a move Kliem called ethically questionable. Kliem noted that changing jury instructions mid-trial to favor the prosecution further erodes fairness, prioritizing wins over justice.
Juror Confusion
Jurors, often unfamiliar with policing, are swayed by academic experts’ credentials and technical jargon. Kliem recounted a juror asking whether to follow local policy or “generally accepted practices,” highlighting how these terms confuse fact-finders. King emphasized that experts’ polished delivery masks their lack of operational insight, making their arguments persuasive to laypeople.
The Misnomer of “Best Practices” – Defining “Best Practices”
The term “best practices” is often misused in force reviews, treated as a catch-all for speculative critiques. Kliem and Borden argued that true best practices must be clearly defined, evidence-based, and formally adopted by the agency. Without this, the term invites arbitrary judgments, like an academic expert’s suggestion in one case to maintain 50 yards of distance from a bat-wielding suspect—impractical, disconnected from reality, and without any anchor for the recommendation.
Analysis vs. Critique
Borden distinguished between a critique (speculating on alternatives) and an analysis (reconstructing the incident’s context). Force reviews should prioritize analysis, using data like body camera footage, officer statements, and policies to understand what happened. Critiques belong in training, not legal or disciplinary decisions, as they assume predictable outcomes that don’t exist in dynamic encounters.
Human Performance Under Stress: The Science of Decision-Making
Kliem highlighted the importance of human performance science, a focus of Force Science from a scientific purview and of CIR from an investigative-principles perspective. Officers under stress experience physiological effects—tunnel vision, auditory exclusion, time distortion—that shape their decisions. Academic experts rarely account for these, evaluating actions as if officers had unlimited time and clarity. Borden and King argued that these phenomena occur in critical incidents; the science doesn’t excuse them, it explains them. These principles are applied in training and investigations as a commonplace component of understanding the officer’s behavior. Investigators and all stakeholders should be informed by the existing science so they can identify where psychologists, biomechanical engineers, or human factors researchers need to be involved.
Borden referenced “satisficing,” where officers choose the first viable solution under pressure, not the optimal one. This aligns with research by Dr. Gary Klein on naturalistic decision-making. Reviews that ignore these realities unfairly penalize officers for human limitations.
Video Evidence Limitations
Borden, King, and Kliem addressed the misuse of video evidence, which is often seen as definitive but flawed by technical issues like variable frame rates or distorted playback. In one case, a surveillance video’s unreliability was overlooked, leading to false conclusions about an officer’s actions. Investigators must understand these limitations to ensure fair evaluations. Video analysis, review, and examination have become a focus of CIR’s training; video is everywhere.
Implications for Policing: Officer Morale and Risk-Aversion
The misuse of vague standards increases legal and emotional risks for officers, potentially leading to risk-averse policing. Kliem noted that some experts advocate a “safest form of policing,” avoiding confrontation, but this neglects officers’ duty to protect communities. Such trends can erode morale and public safety.
Agency Challenges
Leaders must balance policy development with operational needs, resisting external standards that don’t fit local contexts. Borden warned that vague policy language or language in a report, like “best practices,” can be exploited in court, undermining defensibility.
Public Perception
Reliance on academic experts fuels narratives that policing is inherently flawed, even when actions are lawful. King argued that this oversimplifies use of force, ignoring the legal authority granted to officers to address threats like armed suspects in neighborhoods.
Actionable Strategies for Stakeholders
To counter the misuse of “generally accepted police practices” and promote fair evaluations, stakeholders can adopt these strategies:
For Officers
- Know Your Policies
Study your agency’s use of force policies thoroughly. Attend training and clarify ambiguities to ensure you can articulate their application in reports or testimony.
- Document Context Clearly
In reports, detail what you saw, heard, and believed, using simple terms to align with Graham v. Connor. Avoid vague descriptions that invite misinterpretation.
- Prepare for Testimony
Practice explaining your decisions in simple terms, anticipating questions about why alternatives were impractical. Mock cross-examinations can build confidence.
- Invest in Training
Attend courses from Critical Incident Review (criticalincidentreview.com) and Force Science (forcescience.com) to understand human performance and use of force principles. Read widely for deeper insight; CIR provides a lengthy reading list that gives officers and investigators reliable sources of good information.
For Investigators
- Prioritize Analysis
Reconstruct incidents using officer statements, video, and policies, focusing on what the officer knew at the time. Use frameworks like Graham v. Connor’s Calculus for Objective Reasonableness. CIR has developed several analytical tools to be used in the investigative and review process. (Enhanced Force Investigations Course)
- Assess Video Critically
Recognize video limitations (e.g., frame rate issues) and consult experts to ensure reliability. Educate reviewers about these constraints to prevent overreliance.
- Incorporate Human Factors
Apply principles of stress, perception, and cognition to investigations. Reference Force Science or CIR resources to ground findings in science.
- Use Precise Language
Avoid terms like “best practices” unless tied to adopted standards. Cite specific sources (e.g., agency training manuals) to maintain clarity.
For Agency Leaders
- Craft Clear Policies
Develop specific, operationally feasible policies aligned with Graham v. Connor. Avoid vague references to “national standards” that invite misinterpretation.
- Promote Learning
Use tactical after-action reviews to improve performance, not punish. Encourage officers to attend external training and share knowledge.
- Evaluate External Standards
Critically assess guidelines from DOJ, PERF, or others before adoption. Document decisions to strengthen legal defensibility.
- Support Officers Legally
Provide access to qualified experts and counsel familiar with policing. Prepare officers for prosecution tactics, like bypassing Graham v. Connor.
Conclusion
The misuse of “generally accepted police practices” undermines fair evaluations of use of force, penalizing officers for adhering to their training and legal authority. By focusing on agency policies, grounding reviews in human performance science, and challenging vague standards, stakeholders can ensure justice reflects the realities of policing. Officers, investigators, leaders, and legal professionals must work together to educate, not excuse, fostering a system that balances accountability with operational truth. For further resources, contact Danny or Jamie at CriticalIncidentReview.com, or contact Von Kliem at von.kliem@forcescience.com or visit forcescience.com.
To equip law enforcement professionals and investigators with the tools to apply the Common Thread theory effectively, I invite you to visit criticalincidentreview.com and explore our Enhanced Force Investigations Course. This training program is designed to deepen your understanding of context and suspect behavior in use-of-force cases, providing practical strategies to conduct objective, evidence-based investigations. Join us to learn how to navigate the complexities of critical incidents with clarity and confidence, ensuring your reviews align with the latest scientific insights and established and documented best practices.
Deconstructing "Generally Accepted Police Practices" in Use of Force: Briefing Doc
- The provided sources, “Standards_Best_Practices.pdf” and “jamie-von-daniel.txt,” offer a critical review of the pervasive and often misleading use of “generally accepted police practices” and “best practices” in evaluating use-of-force incidents. The authors, Von Kliem, Jamie Borden, and Daniel King, all experts in use-of-force investigations and training, argue that these terms are frequently misused in legal proceedings, leading to unfair judgments against officers and a distortion of justice.
- Briefing Document: Misuse of “Generally Accepted Police Practices” in Use of Force Cases
I. The Fundamental Problem: Lack of a National Standard
- The core issue highlighted across both sources is the absence of a single, universally defined “generally accepted police practices” standard in the United States.
- No Central Authority: Von Kliem emphasizes that “no single authority defines ‘generally accepted police practices.'” Organizations like the IACP, Lexipol, or the DOJ offer model policies, but these are not universally adopted due to variations in “local resources, laws, and community needs,” making a national standard “impractical.”
- Agendas over Operational Reality: Kliem points out that many guidelines originate from “small committees or activist groups, such as the Police Executive Research Forum (PERF), which may push agendas not grounded in operational reality.” He cites DOJ consent decrees as an example where agencies adopt policies “under legal pressure, not because they reflect widespread practice.”
- Coercion, Not Consensus: The podcast further elaborates on how DOJ consent decrees, presented as reflecting “generally accepted police practices,” were in fact “coerced” standards, forcing agencies to adopt them in lieu of a civil rights case against them.
II. Misuse in Legal Proceedings: Undermining Justice
- The vagueness and lack of a definitive standard for “generally accepted police practices” are systematically exploited in legal proceedings, particularly in criminal and civil cases against officers.
- Academic Experts as Critics: Jamie Borden explains that “academic experts often cite these vague standards or recommendations to criticize officers in legal proceedings.” These experts, frequently “lacking operational experience,” evaluate actions against “self-defined ‘national standards’ that don’t exist,” often ignoring the officer’s actual training, agency policies, and “the reality that the officer is facing in the moment.”
- Bypassing Constitutional Standards: Daniel King highlights how “prosecutors exploit this ambiguity.” By charging officers with crimes like assault or manslaughter, they “bypass constitutional standards like Graham v. Connor (1989),” which judges use to evaluate force based on an officer’s reasonable perception. King notes that “polished academic witnesses sway jurors unfamiliar with policing, presenting speculative alternatives that sound logical but are operationally flawed at their core and unrealistic in the way things actually happen.”
- Disregarding Agency Policies: Both sources detail instances where prosecutors and opposing experts “disregarded agency policies,” relying on undefined “generally accepted practices.” King notes that references to Graham v. Connor and state self-defense statutes were often “excluded from trial,” leaving jurors to judge officers against arbitrary standards, thereby “undermin[ing] due process.”
- Ethical Concerns in Prosecution: Kliem, a former prosecutor, expresses strong concern over prosecutors “deviating from established standards for convenience,” even rejecting their own prior Graham-based standards “to secure a conviction.” He calls this “ethically questionable” and points to changes in jury instructions “mid-trial to favor the prosecution” as eroding fairness.
III. Hindsight Bias and Counterfactual Reasoning
- A significant factor contributing to unfair evaluations is the application of hindsight bias and counterfactual reasoning in critical incident reviews.
- “Fog of Critical Incidents”: Kliem describes officers making decisions under the “fog of war” using “System 1” (fast, intuitive) thinking. Post-incident reviews, however, often use “System 2” (slow, outcome-informed) thinking, leading to hindsight bias where “reviewers assume events were more predictable than they were, criticizing officers for not choosing idealized alternatives.”
- “Staying in the Pipe”: Borden, referencing human factors experts like Sidney Dekker, stresses the need to evaluate decisions “based on what the officer knew at the time.” He provides the example of an officer facing a hatchet-wielding suspect, where a review might “fault them for not negotiating from cover, ignoring the immediate threat to the community.”
- “Counterfactual Critiques”: Borden labels these as “counterfactual reasoning,” where experts “speculate about what ‘should have’ happened.” King cites an absurd example where an expert suggested an officer “should have slapped a suspect instead of using lethal force.” These arguments “assume better outcomes without evidence, misleading jurors who lack policing experience.”
- The Problem of “Critique” vs. “Analysis”: Borden differentiates between a “critique” (speculating on alternatives) and an “analysis” (reconstructing the incident’s context). He argues that force reviews should prioritize “analysis, using data like body camera footage, officer statements, and policies to understand what happened,” while “critiques belong in training, not legal or disciplinary decisions.”
IV. The Misnomer of “Best Practices”
- Similar to “generally accepted police practices,” the term “best practices” is often misused and applied without proper definition or evidence.
- Undefined and Arbitrary: Kliem and Borden argue that “true best practices must be clearly defined, evidence-based, and formally adopted by the agency.” Without this, the term “invites arbitrary judgments,” such as an expert’s suggestion to maintain “50 yards of distance from a bat-wielding suspect—impractical and disconnected from reality.”
- Weaponizing Officer Safety: Daniel King and Von Kliem discuss the “safest form of policing” advocated by some experts, which prioritizes officer safety above all else, often at the expense of community safety. Kliem states that these experts have “weaponized officer safety,” identifying “everything we do tactically for officer safety and now say if you don’t do those things, you’re creating the exigency and it’s against generally accepted police principles.”
V. The Importance of Departmental Policies and Human Performance Science
- The sources advocate for a shift in focus from vague external standards to an officer’s specific agency policies and an understanding of human performance under stress.
- Agency Policies as the True Standard: Kliem urges officers to “prioritize their agency’s policies and training,” as “these directives, not vague national standards, are what officers are held accountable to.” Borden warns that merely signing off on policies “doesn’t ensure understanding,” stressing that “officers must actively study their use of force guidelines.”
- Human Performance Under Stress: Kliem highlights the “importance of human performance science.” Officers under stress experience “physiological effects—tunnel vision, auditory exclusion, time distortion—that shape their decisions.” He notes that “academic experts rarely account for these, evaluating actions as if officers had unlimited time and clarity.” Borden references “satisficing,” where officers “choose the first viable solution under pressure, not the optimal one.”
- Video Evidence Limitations: The authors caution against the “misuse of video evidence,” which is “often seen as definitive but flawed by technical issues like variable frame rates or distorted playback.” Investigators “must understand these limitations to ensure fair evaluations.”
VI. Implications and Actionable Strategies
- The misuse of these vague standards has negative implications for officer morale, agency challenges, and public perception. The authors provide actionable strategies for various stakeholders to counter these issues.
- Impacts:
- Officer Morale: The increased legal and emotional risks lead to “risk-averse policing,” where officers may avoid confrontation, neglecting their duty to protect communities.
- Agency Challenges: Leaders must balance policy development with operational needs, resisting “external standards that don’t fit local contexts.” Vague policy language can be “exploited in court.”
- Public Perception: Reliance on academic experts “fuels narratives that policing is inherently flawed,” oversimplifying use of force and ignoring officers’ legal authority.
- Strategies for Officers:
- Know Your Policies: “Study your agency’s use of force policies thoroughly. Attend training and clarify ambiguities.”
- Document Context Clearly: “In reports, detail what you saw, heard, and believed, using simple terms to align with Graham v. Connor.”
- Prepare for Testimony: “Practice explaining your decisions in simple terms, anticipating questions about why alternatives were impractical.”
- Invest in Training: “Attend courses from Critical Incident Review (criticalincidentreview.com) & Force Science (forcescience.com) to understand human performance and use of force principles.”
- Strategies for Investigators:
- Prioritize Analysis: “Reconstruct incidents using officer statements, video, and policies, focusing on what the officer knew at the time. Use frameworks like Graham v. Connor’s Calculus for Objective Reasonableness.”
- Assess Video Critically: “Recognize video limitations… and consult experts to ensure reliability.”
- Incorporate Human Factors: “Apply principles of stress, perception, and cognition to investigations. Reference Force Science or CIR resources to ground findings in science.”
- Use Precise Language: “Avoid terms like ‘best practices’ unless tied to adopted standards. Cite specific sources.”
- Strategies for Agency Leaders:
- Craft Clear Policies: “Develop specific, operationally feasible policies aligned with Graham v. Connor. Avoid vague references to ‘national standards’.”
- Promote Learning: “Use tactical after-action reviews to improve performance, not punish.”
- Evaluate External Standards: “Critically assess guidelines from DOJ, PERF, or others before adoption. Document decisions.”
- Support Officers Legally: “Provide access to qualified experts and counsel familiar with policing. Prepare officers for prosecution tactics, like bypassing Graham v. Connor.”
Conclusion
- The experts collectively assert that the misuse of “generally accepted police practices” and “best practices” unjustly penalizes officers and distorts the legal process. They advocate for a system that prioritizes agency-specific policies, evidence-based analysis, an understanding of human performance under stress, and clear, defined standards to ensure fair and accurate evaluations of use-of-force incidents. The overarching call is for “stakeholders… to educate, not excuse, fostering a system that balances accountability with operational truth.”
FAQ
- 1. What is the core problem with the term “generally accepted police practices” in use-of-force cases?
- The core problem is that “generally accepted police practices” is a vague and undefined term, lacking a single national standard or authoritative source. Organizations like the IACP, Lexipol, or the Department of Justice offer model policies, but these are not universally adopted across the over 18,000 law enforcement agencies in the U.S. Policies vary significantly based on local resources, laws, and community needs, making a unified national standard impractical.
- Experts like Von Kliem, Jamie Borden, and Daniel King highlight that this ambiguity is often exploited in legal proceedings. Academic experts, often lacking operational experience, cite these vague “national standards” to criticize officers’ actions, even when those actions align with their specific agency’s training and policies. Prosecutors can then leverage this ambiguity to charge officers, sometimes bypassing established constitutional standards like Graham v. Connor (1989), which evaluates force based on an officer’s reasonable perception at the time of the incident. This misuse can lead to unfair penalties for officers and a distortion of justice.
- 2. How do “hindsight bias” and “counterfactual reasoning” undermine fair evaluations of police use of force?
- “Hindsight bias” and “counterfactual reasoning” significantly distort fair evaluations by judging high-stress, split-second decisions with the benefit of perfect information and a calm, deliberative mindset. Officers in critical incidents make decisions using “System 1” thinking—fast, intuitive, and based on limited, dynamic information—often described as operating in the “fog of war.” However, post-incident reviews typically employ “System 2” thinking—slow, analytical, and outcome-informed—leading reviewers to assume events were more predictable than they actually were.
- Jamie Borden refers to this as “counterfactual reasoning,” where experts speculate on what “should have” happened, imagining alternative actions that might have led to a better outcome, without considering the real-time constraints, immediate threats, and physiological effects (like tunnel vision or time distortion) the officer experienced. These critiques ignore human performance under stress and the concept of “satisficing,” where officers choose the first viable solution under pressure rather than an optimal one. This creates an unrealistic standard, as better outcomes are assumed without evidence, misleading jurors who lack understanding of operational realities.
- 3. Why are departmental policies and training the true standard officers should be held accountable to?
- Departmental policies and training represent the true standard officers should be held accountable to because these are the specific directives and guidelines they are trained under and expected to follow within their jurisdiction. Unlike vague “national standards” or “best practices” that may be proposed by academic groups or activist organizations with specific agendas, agency policies are tailored to local contexts, resources, and legal frameworks.
- Von Kliem emphasizes that officers prioritize their agency’s directives. Jamie Borden warns that simply acknowledging policies (e.g., by signing off digitally) doesn’t guarantee understanding; officers must actively study their use-of-force guidelines to effectively articulate their actions if questioned. Daniel King highlights that prosecutors sometimes disregard these specific agency policies, relying instead on academic experts to argue violations of ill-defined “generally accepted practices,” thereby undermining due process and holding officers to expectations they were never given notice of. Adherence to these clearly defined, operationally feasible agency policies, developed in alignment with Graham v. Connor, provides a legally defensible and practical framework for evaluating officer conduct.
- 4. How do academic experts and prosecutors exploit the ambiguity of “generally accepted police practices” in court?
- Academic experts, often with limited or no operational police experience, present themselves as authorities on “generally accepted police practices” and offer critiques that are divorced from the realities of policing. They frequently assess officers’ actions against self-defined “national standards” that do not exist, ignoring the officer’s actual training and agency policies. These experts, typically polished and persuasive, present speculative alternatives that sound logical to lay jurors but are tactically flawed or unrealistic in practice.
- Prosecutors exploit this ambiguity by using these academic experts to bypass constitutional standards like Graham v. Connor, which requires judging force from the perspective of a reasonable officer on the scene, without the benefit of 20/20 hindsight. Instead, prosecutors may charge officers with crimes like assault or manslaughter, shifting the legal focus away from Graham‘s objective reasonableness standard. They might even try to exclude references to Graham or state self-defense statutes from trials. This tactic confuses jurors, who are swayed by impressive credentials and technical jargon, leading them to judge officers against undefined and often ethically questionable standards, prioritizing convictions over fair justice.
- 5. What are the limitations of video evidence in use-of-force reviews, and why is critical assessment crucial?
- Video evidence, while seemingly definitive, has significant limitations that often go unacknowledged in use-of-force reviews. Technical issues such as variable frame rates or distorted playback can lead to misinterpretations of critical incident details. For example, surveillance video with varying frame rates (e.g., 12 to 24 frames per second) played back at a standard rate (e.g., 30 frames per second) can make movements appear faster than they actually occurred, rendering it unreliable for assessing precise time, distance, speed, and motion.
- Borden, King, and Kliem stress that investigators must understand these technical limitations. Over-reliance on video without critical assessment or consultation with video analysis experts can lead to false conclusions about an officer’s actions. While video can provide a general gist of an event and relative positioning, it is often unfit for drawing precise conclusions about specific actions or speeds. Educating reviewers about these constraints is crucial to prevent unjust evaluations and ensure that video evidence is used appropriately within the broader context of an incident.
- 6. What strategies can officers, investigators, and agency leaders implement to counter the misuse of vague standards?
- For Officers:
- Know Your Policies: Thoroughly study agency use-of-force policies and training to articulate their application clearly in reports or testimony.
- Document Context Clearly: Detail perceptions, beliefs, and actions in reports using simple language aligned with Graham v. Connor, avoiding vague descriptions.
- Prepare for Testimony: Practice explaining decisions simply and anticipate questions about why alternatives were impractical.
- Invest in Training: Attend courses from organizations like Critical Incident Review (CIR) and Force Science to understand human performance and use-of-force principles.
- For Investigators:
- Prioritize Analysis: Reconstruct incidents focusing on what the officer knew at the time, using frameworks like Graham v. Connor and CIR’s analytical tools.
- Assess Video Critically: Recognize video limitations (e.g., frame rate issues) and consult experts for reliability, educating reviewers on these constraints.
- Incorporate Human Factors: Apply principles of stress, perception, and cognition, referencing Force Science or CIR resources to ground findings in science.
- Use Precise Language: Avoid vague terms like “best practices” unless tied to adopted, evidence-based standards; cite specific agency documentation.
- For Agency Leaders:
- Craft Clear Policies: Develop specific, operationally feasible policies aligned with Graham v. Connor, avoiding vague references to “national standards.”
- Promote Learning: Use tactical after-action reviews for performance improvement, not punishment; encourage external training and knowledge sharing.
- Evaluate External Standards: Critically assess guidelines from external bodies like DOJ or PERF before adoption, documenting decisions for legal defensibility.
- Support Officers Legally: Provide access to qualified experts and counsel familiar with policing, preparing officers for prosecutorial tactics that bypass established legal standards.
- 7. What is the distinction between “analysis” and “critique” in use-of-force reviews, and why is it important?
- The distinction between “analysis” and “critique” is crucial for conducting fair and accurate use-of-force reviews. Jamie Borden defines analysis as reconstructing an incident by focusing on the context and information available to the officer at the time of the event. This involves using data such as body camera footage, officer statements, and agency policies to understand what happened and why the officer made the decisions they did within the dynamic and stressful environment. Analysis is about understanding the officer’s real-time perception and decision-making process.
- Conversely, a critique involves speculating about what should have, could have, or would have happened. It often employs counterfactual reasoning, imagining alternative actions that might have led to a different, often assumed better, outcome. Critiques frequently overlook the unpredictable nature of critical incidents and the human factors (like stress and limited information) influencing an officer’s actions. The experts argue that while critiques have a place in training to foster learning and improve future performance, they are inappropriate for legal or disciplinary decisions, as they impose an unrealistic standard of perfect predictability and often serve to assign blame based on hindsight bias.
- 8. How does ignoring human performance under stress contribute to unfair evaluations of police conduct?
- Ignoring human performance under stress leads to unfair evaluations by imposing an unrealistic expectation that officers should act with the clarity and deliberation of someone observing from a calm, safe environment. Von Kliem highlights that officers under stress experience significant physiological and cognitive effects, such as “tunnel vision” (loss of peripheral awareness), “auditory exclusion” (difficulty hearing), and “time distortion” (events seeming to slow down or speed up). These phenomena profoundly shape an officer’s perception and decision-making.
- Academic experts, often without understanding these realities, evaluate actions as if officers had unlimited time, perfect information, and a clear mind. Jamie Borden and Daniel King stress that while the science of human performance under stress doesn’t excuse actions, it explains them. Concepts like “satisficing”—where officers choose the first viable solution rather than the optimal one due to time pressure and cognitive load—are well-documented in research on naturalistic decision-making. Reviews that fail to account for these inherent human limitations unfairly penalize officers for responses that are natural consequences of operating in high-stress, high-stakes environments, where survival instincts and training take precedence over idealized, textbook responses.

