Can AI Tools Pass a Public Records Request?

If your AI system can’t survive legal scrutiny, it shouldn’t be live. Here’s how to get ahead of it.


Date: 03/28/2025
Writer: CLEAR Council Policy Team

You may not call it “AI” in your contracts. But the press, public, and lawyers will.

If your agency is using software that makes decisions, flags risk, or generates outputs based on data—it’s AI by modern standards. And it’s subject to disclosure.

We’ve seen it happen: a journalist files a FOIA request for “any automated system used to flag individuals for additional review.” The agency is caught off guard. No logs. No SOPs. No idea who approved the system. Now it’s not just a records issue—it’s a trust issue.

What You’ll Be Asked to Disclose

1. The system’s function and decision logic

Can you describe what the system does, what data it uses, and how it produces recommendations or decisions? (A sample system record follows this list.)

2. The policy or SOP that governs it

Is there a written policy that explains when and how the system is used, and who is responsible for oversight?

3. Audit logs and human-in-the-loop (HITL) actions

Can you show who reviewed the outputs, when, and what action (if any) was taken? If not, your agency may be held responsible for outcomes it cannot document. (A minimal logging sketch follows this list.)

4. Vendor agreements and data rights

Can you show what was promised, what’s being stored, and who owns the model outputs? If you don’t control the data, you don’t control the risk.
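
What a disclosable system description (item 1) looks like varies by agency, but keeping one machine-readable record per tool makes the answer routine instead of a scramble. Below is a minimal sketch in Python; the SystemRecord class, its field names, and every value shown are illustrative assumptions, not a records standard or any real vendor's terms.

```python
import json
from dataclasses import dataclass, asdict

# Illustrative sketch: the class, field names, and values below are
# hypothetical examples, not a records standard or real vendor terms.
@dataclass
class SystemRecord:
    name: str                # system name as it appears in your contracts
    vendor: str              # who supplies and maintains it
    function: str            # plain-language description of what it does
    data_sources: list[str]  # what data feeds it
    decision_logic: str      # how it produces recommendations or decisions
    approved_by: str         # who authorized deployment
    approval_date: str       # when deployment was authorized
    governing_policy: str    # the SOP that covers its use (item 2)

record = SystemRecord(
    name="Risk Flagging Tool",
    vendor="Example Vendor, Inc.",
    function="Flags individuals for additional review",
    data_sources=["incident reports", "case management exports"],
    decision_logic="Vendor scoring model; review thresholds set by the agency",
    approved_by="Deputy Chief, Operations",
    approval_date="2024-09-01",
    governing_policy="SOP 7.4: Automated Review Systems",
)

# The same record that governs the system internally can be produced
# verbatim when a public records request arrives.
print(json.dumps(asdict(record), indent=2))
```

One record like this covers items 1 and 2 in a single document, which is exactly the shape a records officer needs when a request lands.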
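
For item 3, the simplest defensible pattern is an append-only log where every human review of a system output becomes one timestamped line. The sketch below assumes a JSON Lines file; the file name, function name, and field names are illustrative, not a standard.

```python
import json
from datetime import datetime, timezone
from pathlib import Path

# Hypothetical log location and field names, for illustration only.
LOG_PATH = Path("hitl_audit.jsonl")

def log_review(system: str, output_id: str, reviewer: str,
               action: str, rationale: str) -> None:
    """Append one human-in-the-loop review event as a single JSON line."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system": system,        # which tool produced the output
        "output_id": output_id,  # identifier for the flagged output
        "reviewer": reviewer,    # who looked at it
        "action": action,        # e.g. "accepted", "overridden", "escalated"
        "rationale": rationale,  # the reviewer's stated reason
    }
    with LOG_PATH.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example: an analyst overrides a flag rather than acting on it.
log_review(
    system="Risk Flagging Tool",
    output_id="flag-2025-0314",
    reviewer="Analyst J. Smith",
    action="overridden",
    rationale="Flag rested on stale address data; no further review needed.",
)
```

One event per line keeps each entry independently readable, so pulling every record that mentions a given system or reviewer is a single pass over the file rather than a forensic project.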

What Happens If You Can’t Produce It?

  • You lose credibility with oversight boards, media, and elected officials.
  • Your legal team loses leverage in defending decisions.
  • Your community loses confidence in how tech is used in policing.