How to Talk About AI Use With Your Community

Public trust in law enforcement technology starts with transparency. Here’s how to communicate AI use before it becomes a problem.

Date: 04/10/2025
Writer: CLEAR Council Policy Team

If you don’t explain it, someone else will—and they won’t assume the best.

AI tools in law enforcement are already controversial. Even the most basic systems—like transcription tools or alert prioritization—can be misunderstood as surveillance or bias engines. If your agency doesn’t have a plan to explain AI use proactively, you’re inviting mistrust.

What the Public Wants to Know

1. What does the tool actually do?

Explain it the way you would to a councilmember or a high school student. No jargon. What task does it assist with? What decision does it support?

2. Is the tool making decisions, or just helping staff?

Make the human-in-the-loop role clear: “This system helps sort tips. A trained officer still reviews each one and decides how to respond.” That distinction matters.

3. How do you prevent misuse or error?

Let people know there’s a process for reviewing outputs, correcting mistakes, and suspending the system if needed.

4. Who oversees it?

Give them confidence that the system isn’t running unchecked. Mention legal review, policy approval, oversight boards, or internal audits.

5. Is this public information?

If your answer is “sort of,” that’s not good enough. You need a public-facing statement that explains the system’s purpose and safeguards and names a point of contact.
