
How Law Enforcement Actually Uses Face-Search (It's Not What You Think)

The public picture is a camera scanning a crowd. The reality is a detective at a desk, a still image, and a vendor portal that returns names.

Most coverage of police facial recognition shows the same image: a CCTV feed full of pedestrians with green boxes drawn around each face, names hovering above. Real-time scanning. Live identification.

That image is mostly wrong. Live scanning does happen: UK police ran live facial recognition on 1.4M people in 2024. But the vast majority of US law enforcement face-search is much more mundane, and much more consequential.

It's a detective at a desk. A still image lifted from a security camera or a phone. A vendor portal. A ranked list of candidate names. And, increasingly often, an arrest based on the top result without the corroboration the vendor's own terms-of-service required.

The Myth vs. The Workflow

The public mental model of police facial recognition has three components:

  1. Cameras everywhere
  2. Live face-matching against a watchlist
  3. Officers responding to alerts in real time

That model exists in some places. China runs it at scale. The Met Police in London runs limited deployments of it. Madison Square Garden runs a private version of it. But it's not how most US law enforcement uses face-search.

The actual workflow is different in three ways:

  1. The image is post-incident, not live. Pulled from a security camera after the crime, a phone someone shared, a doorbell cam, a social-media post.
  2. The search is desk-based, not field-based. An officer uploads the image to a vendor portal and reviews results on a computer.
  3. The result is a ranked candidate list, not an identification. Vendors return 5–20 possible matches with similarity scores. The officer picks one.

That last step — the officer picking one — is where most of the documented wrongful arrests happen.

What Actually Happens (Step by Step)

Here is the typical workflow for a face-search-driven investigation in the US, based on documents released through FOIA, civil suits, and disclosed police procedures:

Step 1: A still image gets captured

A robbery happens. The store's security camera records grainy footage. A detective is assigned. They isolate a single frame showing the suspect's face — sometimes from the front, often from a 3/4 angle, frequently low resolution.

Step 2: The image goes into a vendor portal

The detective logs into Clearview AI, DataWorks Plus, NEC, Cognitec, Idemia, or whatever face-recognition product their department licenses. They upload the still and click search.

Step 3: The vendor returns candidate matches

The system returns a ranked list — typically 5 to 20 candidate matches, each with a similarity score and a source URL. Clearview's matches come from web-scraped public photos (social media, news, blogs). State driver's-license databases return DMV photos. Mugshot databases return prior bookings.
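To make "a ranked list, not an identification" concrete, here is a minimal sketch of what such a response looks like and why the top score alone identifies nobody. Everything in it is invented for illustration: the names, the scores, and the 0.05 near-tie threshold are assumptions, not any vendor's actual API or policy.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    similarity: float  # vendor-specific score, not a probability
    source_url: str

# Hypothetical portal response (all values invented).
results = [
    Candidate("candidate_a", 0.91, "https://example.com/photo1"),
    Candidate("candidate_b", 0.89, "https://example.com/photo2"),
    Candidate("candidate_c", 0.73, "https://example.com/photo3"),
]

# The top score identifies nothing by itself: 0.91 vs. 0.89 is noise,
# not proof. A sane workflow flags near-ties rather than auto-picking.
top, runner_up = sorted(results, key=lambda c: c.similarity, reverse=True)[:2]
needs_corroboration = (top.similarity - runner_up.similarity) < 0.05
print(needs_corroboration)  # True: the gap is too small to mean anything
```

The point of the sketch is the last two lines: when the first- and second-ranked candidates score within noise of each other, the ranking carries no identifying information, which is exactly the situation the officer faces in the next step.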

Step 4: The detective picks one

This is the human-in-the-loop step that vendors point to as their accuracy safeguard. In practice, detectives often pick the top result without further investigation. Audits of Detroit, NYPD, and Florida departments have all found cases where the top-ranked candidate became the suspect of record without independent corroboration.

Step 5: An arrest follows

Best practice — written into most departments' formal policies — requires the detective to corroborate the face-recognition lead with other evidence before making an arrest. In documented wrongful-arrest cases, that corroboration didn't happen. The face match alone was treated as probable cause.

The Vendors Police Use

Most US law enforcement face-search runs on a small handful of products:

Clearview AI

The dominant vendor for general-purpose investigative use. Indexes 100B+ scraped public photos. ICE has used Clearview 100,000+ times since 2020 (FOIA-confirmed). 3,100+ US police agencies have at least trial access. Settled a BIPA class action in 2022 that prohibits selling face-print data to most US private entities, but US law enforcement use continues largely unimpeded.

DataWorks Plus

The vendor behind several documented wrongful arrests in Detroit. Their FACE Plus product matches against state DMV databases and integrated mugshot collections. Used heavily by the Michigan State Police and Detroit PD.

NEC

Japanese vendor with deep US federal contracts. Powers CBP's biometric exit and airport face-matching programs, face-recognition screening for the Department of State, and some major-city PDs. Generally tests better than Clearview on NIST benchmarks but is used for narrower purposes (mostly identity verification rather than open-web matching).

Idemia

French biometrics company; the FBI's Next Generation Identification face-recognition system runs on Idemia infrastructure. Also licenses to many state-level criminal-justice systems.

Cognitec

German vendor focused on access-control and forensic-investigation products. Less common in the US than Clearview but present in several state criminal-investigation units.

State and regional fusion-center products

Many states run their own face-recognition systems on top of vendor APIs. Florida's FACES (run by Pinellas County and shared across agencies) processed 13M+ queries in 2024. Maryland has MCPP. Most of these route searches against state DMV photos plus locally scraped sources.

The "Investigative Lead Only" Myth

Almost every face-recognition vendor includes the same disclaimer in their terms of service: face-recognition results are an investigative lead only and are not intended to be used as probable cause for arrest.

Most departments that buy these products write the same language into their internal policies.

And then in practice: officers arrest people based on the match.

This is well-documented. The Detroit cases (Robert Williams, Michael Oliver, Porcha Woodruff, and others) all involved officers treating a face-recognition match as the basis for probable cause without independent corroboration. Each subsequent investigation found the formal policy required corroboration; each found the officers hadn't done it; each found the prosecutor's office hadn't asked whether they had.

The "investigative lead only" framing is a legal-liability shield for the vendor and the department. It is not a description of how the technology gets used.

The Disclosure Problem

When a defendant is arrested based on a face-recognition match, two things routinely happen that shouldn't:

  1. The use of face-recognition is not disclosed to the defense. In multiple confirmed cases, defendants and their attorneys learned that face-recognition was used in their investigation only after media reporting or post-conviction FOIA requests. Brady v. Maryland arguably requires disclosure of investigative methods that affect identification, but courts have not consistently treated face-recognition this way.
  2. The candidate-match list is not preserved. Most departments do not retain the ranked list of candidates the system returned, only the one the officer picked. That means a defendant cannot examine whether their face was in fact the highest-scoring match, or the third, or barely above threshold.
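Fixing the preservation problem is mechanically trivial. The sketch below shows the kind of audit record that would let a defendant later check where their face actually ranked; the field names and structure are invented for illustration, not any department's or vendor's actual record system.

```python
import datetime
import json

def log_search(query_image_hash, candidates, picked_index):
    """Hypothetical audit record: keep the entire ranked list the
    vendor returned, not just the candidate the officer picked."""
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "query_image": query_image_hash,
        "candidates": candidates,            # every match, with its score
        "picked": candidates[picked_index],
        "picked_rank": picked_index + 1,     # rank 1? Or rank 7 of 20?
    })

# Invented example: the officer picked the *second*-ranked candidate.
record = json.loads(log_search(
    "sha256:<hash-of-probe-image>",
    [{"name": "candidate_a", "score": 0.91},
     {"name": "candidate_b", "score": 0.89}],
    picked_index=1,
))
print(record["picked_rank"])  # 2
```

Preserving the full list and the pick's rank is exactly the kind of record a disclosure requirement would cover. Nothing about it is technically hard; it just isn't done.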

Several state legislatures have introduced bills requiring disclosure of face-recognition use in any investigation that leads to charges. Most have not passed. The federal landscape is silent.

The Probable Cause Gap

Courts have not fully decided whether a face-recognition match, on its own, constitutes probable cause for an arrest.

The vendor's own disclaimer says it doesn't. The department's own policy usually says it doesn't. But trial-court rulings have largely declined to suppress arrests made on FR matches alone, treating the match as probative even if not dispositive. Appellate review has been minimal because most face-recognition-driven cases settle, get charges dropped, or never produce an opinion that addresses the question directly.

Until that changes, departments will continue to use face-recognition matches as the basis for arrests. The doctrine has not caught up to the technology.

Why Face Removal Matters Here

The vendors police rely on search datasets built from public photos. Clearview's index is scraped from social media, news, blogs, and other open-web sources. State DMV searches use government photos you don't control. Mugshot databases use booking photos. Of those three sources, only the first is one you can meaningfully control, and that's where face-removal services operate.

Filing removal requests across face-search engines doesn't pull your DMV photo from a state database. It doesn't expunge a mugshot. It does reduce the number of public-web photos that engines like Clearview return when an officer uploads a still of someone who isn't you but happens to look like you.

That's the actual mechanism. When fewer public photos of you are indexed, fewer wrong-person matches happen. The 14 documented wrongful-arrest cases all began with a candidate-match list that included someone who wasn't the suspect because their public-web face presence put them in the search index. Reducing that presence reduces the chance of being on someone else's candidate list.

It is not a guarantee. Nothing in this category is a guarantee. It's a defensive layer most people don't know exists and don't apply.

The defensive layer most people don't know exists.

FacePrivacy files removal requests with PimEyes, Precheck.ai, FaceCheck.ID, Lenso.ai, Clearview AI, and other face-search engines on your behalf. Monthly cadence. Honest about what's possible. $9.99/mo.

Start your removals →

Use code LAYER at checkout for 15% off your first month.