Not all AI is high-risk—but not knowing the difference is.
As AI tools expand into law enforcement workflows, agencies need to stop treating every new system like generic software. Some tools simply help. Others automate decisions that affect people, public safety, or civil rights. That’s where the risk lives—and that’s where policy and oversight have to start.
Use Risk Tiers, Not Guesswork
ClearCouncil recommends using a Low / Medium / High tiering system to evaluate AI systems before deployment. The goal isn’t complexity—it’s clarity.
Low Risk
- Does not impact public decisions, criminal cases, or deployment of resources
- Examples: Transcription tools, scheduling software, auto-tagging systems
- Minimal oversight required (but still worth documenting)
Medium Risk
- Supports decision-making with human review
- Examples: Dispatch triage, call prioritization, alert generation, initial crime tip sorting
- Requires human-in-the-loop (HITL) review, usage logs, and policy-level approval
High Risk
- Influences or automates decisions with operational impact or potential legal exposure
- Examples: Investigative lead scoring, public alerts, automated traffic citations, real-time field alerts
- Requires policy enforcement, public disclosure, audit protocols, and legal review
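For agencies that track their AI inventory in software, the rubric above can be encoded directly so intake answers map to a tier and its required controls. Below is a minimal sketch in Python: the tier names and control lists mirror the rubric, but the AISystem fields and the classify logic are illustrative assumptions, not a ClearCouncil standard. Adapt the questions to your own intake form.

```python
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    LOW = "low"        # no impact on public decisions, cases, or resources
    MEDIUM = "medium"  # supports decisions, with human review of each output
    HIGH = "high"      # influences/automates decisions with operational or legal impact


# Controls required at each tier, taken from the rubric above.
REQUIRED_CONTROLS = {
    RiskTier.LOW: ["basic documentation"],
    RiskTier.MEDIUM: ["HITL review", "usage logs", "policy-level approval"],
    RiskTier.HIGH: ["policy enforcement", "public disclosure",
                    "audit protocols", "legal review"],
}


@dataclass
class AISystem:
    name: str
    # Hypothetical intake-questionnaire flags; rename to match your review form.
    affects_decisions: bool   # touches public decisions, criminal cases, or resources
    automated_action: bool    # can act without a human approving each output
    legal_exposure: bool      # output could appear in court or create liability


def classify(system: AISystem) -> RiskTier:
    """Map intake answers to a tier. Check the high-risk condition first."""
    if system.affects_decisions and (system.automated_action or system.legal_exposure):
        return RiskTier.HIGH
    if system.affects_decisions:
        return RiskTier.MEDIUM
    return RiskTier.LOW


# Example: a transcription tool lands in LOW; automated citations land in HIGH.
transcriber = AISystem("interview transcription", False, False, False)
citations = AISystem("automated traffic citations", True, True, True)
for s in (transcriber, citations):
    tier = classify(s)
    print(f"{s.name}: {tier.value} -> requires {REQUIRED_CONTROLS[tier]}")
```

Checking the high-risk condition before the others ensures a system never lands in a lower tier than its riskiest attribute warrants.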
Why Tiering Matters
- You can't build one policy that works for everything.
Trying to apply the same rules to a tip form and a real-time flagging system will fail both use cases.
- Oversight boards and media will want to see your reasoning.
If you can't explain why one system was treated differently from another, your credibility suffers.
- Procurement, legal, and command staff need to be aligned.
Tiering creates shared language and expectations before problems start.