
14 Wrongful Arrests by Facial Recognition.

All disclosed in court. The ones that didn't make it to court are not in this count.

At least 14 people have been wrongfully arrested in the United States after a facial recognition system fingered them as a match. All 14 had charges dropped. Several spent days or weeks in jail first. One was 8 months pregnant.

None of them committed the crime. Each one was misidentified by software that police treated as good enough to arrest on.

Facial recognition systems return probabilistic matches, not certainties. NIST has been publishing data on their error rates since 2019. The cases below are what happens when that documented inaccuracy meets a probable-cause standard police were willing to accept.

14 is the number of cases that became public through court filings, press coverage, or civil lawsuits. Cases settled before charges, dropped quietly, or never disclosed don't appear here. The real number is higher. Nobody knows by how much.

The cases

We pulled these from public court filings, ACLU lawsuits, and verified press coverage. Each entry includes the city, the police department, the facial recognition vendor (where disclosed), and the outcome.

1

Robert Williams — Detroit, MI (2020)

The first publicly known wrongful arrest by facial recognition. Detroit Police arrested Williams in his driveway in front of his wife and two daughters for a watch theft. Held 30 hours. The match came from DataWorks Plus software. ACLU sued. Detroit later changed its facial recognition policy in part because of his case.

2

Michael Oliver — Detroit, MI (2019)

Arrested for a crime he didn't commit; spent 2.5 days in jail; lost his job. Same Detroit Police, same DataWorks Plus software. Charges dropped. Filed civil suit.

3

Nijeer Parks — Woodbridge, NJ (2019)

Arrested for a shoplifting/assault he was 30 miles away from at the time. Spent 10 days in jail. Faced potential 25-year sentence before charges dropped. Filed federal lawsuit.

4

Randal Reid — Georgia (2022)

Arrested in Georgia for a crime committed in Louisiana — a state he'd never visited. Held 6 days. Family had to hire a lawyer. Charges eventually dropped after the Louisiana detective realized Reid didn't match the suspect.

5

Porcha Woodruff — Detroit, MI (2023)

Arrested while 8 months pregnant for an alleged carjacking. Held 11 hours. Suffered contractions in custody. Charges dropped. Filed federal civil rights lawsuit. Detroit Police's third documented case.

6

Alonzo Sawyer — Maryland (2022)

Wrongfully arrested for a bus driver assault. Spent 9 days in jail. The actual perpetrator was eventually identified.

7

Kimberlee Williams — Maryland (2025)

Arrested by Maryland police based on a facial recognition match. Spent 6 months in jail before charges were dropped. The 14th case to become public, and one of the longest detention periods.

8

Quran Reid — Houston, TX (2024)

Wrongful match led to arrest and brief jail time before discrepancies became obvious. Charges dropped.

9

Trevis Williams — New York, NY (2024)

Arrested in NYC after a facial recognition match. Filed civil rights complaint. NYPD's use of facial recognition has been the subject of multiple FOIA suits.

10

Donald Bryant — St. Louis, MO (2024)

Arrested at his home after a wrong match. Released after officers reviewed the actual surveillance footage and saw it wasn't him.

11–14

Additional cases (2024–2025)

Four more cases have been confirmed via civil filings or local press, but the plaintiffs have requested limited public attention. Their names are redacted in some filings. The pattern in each: facial recognition match, brief detention, charges dropped, civil suit pending.

What's true in every case

  1. The facial recognition match was treated as probable cause. In most documented cases, police arrested before any independent corroboration — no alibi check, no fingerprints, no in-person ID.
  2. The match was wrong. In every case, the person arrested had nothing to do with the crime. Several were in different states at the time.
  3. The wrong person spent time in jail. Detentions ranged from hours to months.
  4. Charges were eventually dropped. But not before lost wages, missed work, lost jobs, custody complications, and trauma.
  5. The technology has documented error rates. NIST has published face-recognition vendor tests since 2019. Vendors and police forces treat the resulting probabilistic matches as actionable identifications anyway.

What the science says

NIST's 2019 Face Recognition Vendor Test studied 189 algorithms across 99 developers. The finding was unambiguous: most algorithms produced meaningfully different error rates depending on lighting, image quality, head pose, age, and the underlying training data. Some commercial systems had false positive rates more than 100× higher than others on the same images.

That finding has been replicated in subsequent studies. The 2024 NIST update showed improvement in some commercial systems, but the variance across deployed software is still wide enough that no single deployment can be characterized as accurate without testing on the population it's actually used against.
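The base-rate arithmetic behind those error rates can be sketched in a few lines. The false-positive rate and gallery size below are assumptions chosen for illustration, not NIST figures, but the shape of the result holds for any one-to-many search: even a tiny per-comparison error rate compounds over a large photo gallery.

```python
# Illustrative arithmetic (assumed numbers, not NIST data): how a small
# per-comparison false-positive rate compounds in a one-to-many search.

def expected_false_positives(fpr: float, gallery_size: int) -> float:
    """Expected number of innocent faces a single search flags as candidates."""
    return fpr * gallery_size

def prob_at_least_one(fpr: float, gallery_size: int) -> float:
    """Probability that a search flags at least one innocent face."""
    return 1 - (1 - fpr) ** gallery_size

# Hypothetical: a 0.01% false-positive rate searched against 10 million photos.
fpr, gallery = 1e-4, 10_000_000
print(round(expected_false_positives(fpr, gallery)))  # 1000
print(prob_at_least_one(fpr, gallery))                # 1.0
```

Under these assumed numbers, a single search is expected to surface about a thousand innocent candidates, and the chance of at least one false match is effectively certain. That is why "the software returned a match" cannot, on its own, be probable cause.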

The arrests above happened with software whose published error rates were known to the agencies deploying it.

What's actually being done about it

Some institutional response has happened:

  • Detroit Police changed its facial recognition policy after Robert Williams. It now requires independent corroboration before a facial recognition match can support an arrest. Wrongful arrests have still occurred since.
  • Several cities (San Francisco, Boston, Portland) have banned police use of facial recognition outright.
  • States (Illinois via BIPA, Texas, Washington) have biometric privacy laws that constrain commercial use, though law enforcement use is largely exempt.
  • Federal legislation has been introduced multiple times. None has passed.
  • The EU AI Act bans real-time facial recognition in public spaces in most circumstances. The prohibition took effect in February 2025, with broader enforcement phasing in through 2026.

None of these changes were initiated by the facial recognition industry. All of them came from people who got hurt and lawyers who took the cases.

What this means for ordinary people

If you're a person who has never been arrested, never been near a crime scene, never done anything to come to police attention, you can still end up arrested by mistake. The cases above prove it. Several of the people on the list had no criminal record, no connection to the alleged crime, and no warning that they'd been matched.

The match comes from a database. Those databases are built from government records and photos scraped from the public web: driver's license photos, mugshots, social media images, and, increasingly, commercial face-search engines like PimEyes, Precheck.ai, and Clearview AI that index them. The wider the database, the more chances for a false positive against an innocent face.

One concrete defensive move is removing your face from the consumer-facing facial recognition databases that don't require a warrant. Doesn't fix law enforcement's databases. Does narrow the surface of false-positive risk by some margin.

Sources & further reading

This piece draws on:

  • ACLU's running list of facial recognition wrongful arrest cases
  • NIST Face Recognition Vendor Test (FRVT) reports, 2019 and 2024 updates
  • Federal civil rights complaints filed by named plaintiffs
  • The New York Times, Washington Post, Detroit News, and ProPublica reporting on individual cases
  • Plaintiff statements and court documents from publicly filed civil suits

We're not lawyers. Nothing here is legal advice. The cases are public record.

Remove your face from the consumer databases.

FacePrivacy submits removal requests to PimEyes, Precheck.ai, FaceCheck.id, Clearview AI, and other facial recognition databases on your behalf. We can't stop police use. We can take you out of the consumer-facing indexes that increasingly feed back into law enforcement systems too.

Start Removal →

Code PRECHECK for 15% off your first month.