The First 5 Steps to Safe AI Procurement

You don’t need to be a technologist to buy AI tools—but you do need to ask the right questions before you commit.

Date: 04/14/2024
Writer: CLEAR Council Policy Team

You don’t need to understand machine learning. You need to understand risk.

Buying AI tools for your agency isn’t just about features. It’s about accountability. When something goes wrong, whether that’s bias, a false alert, or misuse, the headlines won’t mention the vendor. They’ll mention your agency.

Here’s how to prevent that before the ink dries.

Step 1: Define AI in the RFP

Don’t let the vendor decide what counts as AI. Use a formal definition that matches federal or state guidance. If the system uses data to make or support decisions—it’s in scope.
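
If you want a litmus test, that definition collapses to two questions. Here is a minimal Python sketch, with hypothetical names used purely for illustration, that applies the rule that a system using data to make or support decisions is in scope:

```python
# Illustrative scope screen for Step 1. Both parameter names are hypothetical;
# substitute the formal AI definition your RFP actually adopts.

def is_in_scope(uses_data: bool, makes_or_supports_decisions: bool) -> bool:
    """In scope if the system uses data to make or support decisions."""
    return uses_data and makes_or_supports_decisions

# Example: a tool that ranks incoming tips for follow-up is in scope.
print(is_in_scope(uses_data=True, makes_or_supports_decisions=True))   # True
# Example: a plain records search with no scoring or ranking is not.
print(is_in_scope(uses_data=True, makes_or_supports_decisions=False))  # False
```

If both answers are yes, the system belongs under the RFP’s AI definition and every requirement that follows.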

Step 2: Require Risk Disclosure

Have vendors explain the system’s function, data inputs, decision-making logic, and where human review is required. If they can’t—or won’t—explain how the system works, don’t proceed.
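
To make “complete disclosure” concrete, the sketch below structures the four items as a record and flags anything left blank. The class and field names are hypothetical, not drawn from any procurement standard:

```python
from dataclasses import dataclass, fields
from typing import Optional

# Hypothetical record mirroring the four disclosure items in Step 2.
@dataclass
class VendorDisclosure:
    system_function: Optional[str] = None      # what the system does
    data_inputs: Optional[str] = None          # what data it consumes
    decision_logic: Optional[str] = None       # how it reaches its outputs
    human_review_points: Optional[str] = None  # where a person must sign off

def missing_items(d: VendorDisclosure) -> list[str]:
    """Return the disclosure items the vendor left blank."""
    return [f.name for f in fields(d) if not getattr(d, f.name)]

disclosure = VendorDisclosure(system_function="Ranks incoming tips by urgency")
gaps = missing_items(disclosure)
if gaps:
    print("Do not proceed. Vendor has not explained:", ", ".join(gaps))
```

If the vendor cannot fill in every field in plain language, that is your answer.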

Step 3: Include Suspension and Audit Clauses

Your contract should give the agency the right to pause use of the system during an investigation, to conduct audits, and to require vendor support in the event of harm or failure.
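
Contract language is counsel’s job, but a plain checklist keeps these rights from slipping through review. The clause names below are made up for illustration:

```python
# Hypothetical checklist of contract terms for Step 3.
REQUIRED_CLAUSES = {
    "suspension_rights": "Agency may pause the system during an investigation",
    "audit_rights": "Agency may audit the system and its outputs",
    "incident_support": "Vendor must assist in the event of harm or failure",
}

def missing_clauses(draft_contains: set[str]) -> list[str]:
    """List required clauses absent from a draft contract."""
    return [name for name in REQUIRED_CLAUSES if name not in draft_contains]

print(missing_clauses({"audit_rights"}))
# ['suspension_rights', 'incident_support']
```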

Step 4: Clarify Data Ownership and Usage

The agency should retain ownership of all data and outputs, control how the vendor may use agency data, and be able to disclose the system’s purpose and safeguards to the public. No AI system should be a black box.

Step 5: Require Public-Facing Summary Materials

Make public transparency part of the contract. Require vendors to support the creation of a basic use disclosure—what the tool does, how it’s reviewed, and what guardrails are in place.
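
The disclosure itself can be a fill-in-the-blanks template. This sketch assumes hypothetical field names and example text; the structure simply tracks the three items above:

```python
# Hypothetical template for the public use disclosure in Step 5.
def public_use_summary(tool: str, what_it_does: str,
                       review: str, guardrails: str) -> str:
    """Assemble a plain-language disclosure from the three required items."""
    return (
        f"{tool}\n"
        f"What it does: {what_it_does}\n"
        f"How it's reviewed: {review}\n"
        f"Guardrails: {guardrails}\n"
    )

print(public_use_summary(
    tool="Automated Tip Triage (example)",
    what_it_does="Ranks incoming tips by urgency; it does not open or close cases.",
    review="A supervisor reviews every ranking before any action is taken.",
    guardrails="Quarterly audits, suspension rights, no vendor reuse of agency data.",
))
```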
