Filed 12:00 p.m. EDT, 06.07.2025
Artificial intelligence is changing how police investigate crimes — and monitor citizens — as regulators struggle to keep pace.
A video surveillance camera is mounted to the side of a building in San Francisco, California, in 2019.
This is The Marshall Project’s Closing Argument newsletter, a weekly deep dive into a key criminal justice issue.
If you’re a regular reader of this newsletter, you know that change in the criminal justice system is rarely linear. It comes in fits and starts, slowed by bureaucracy, politics, and just plain inertia. Reforms routinely get passed, then rolled back, watered down, or tied up in court.
However, there is one corner of the system where change is occurring rapidly and almost entirely in one direction: the adoption of artificial intelligence. From facial recognition to predictive analytics to the rise of increasingly convincing deepfakes and other synthetic video, new technologies are emerging faster than agencies, lawmakers, or watchdog groups can keep up with them.
Take New Orleans, where, for the past two years, police officers have quietly received real-time alerts from a private network of AI-equipped cameras, flagging the whereabouts of people on wanted lists, according to recent reporting by The Washington Post. Since 2023, the technology has been used in dozens of arrests, and it was deployed in two high-profile incidents this year that thrust the city into the national spotlight: the New Year’s Eve terror attack that killed 14 people and injured nearly 60, and the escape of 10 people from the city jail last month.
In 2022, City Council members attempted to put guardrails on the use of facial recognition, passing an ordinance that limited police use of the technology to specific violent crimes and mandated oversight by trained examiners at a state facility.
But those guidelines assume it’s the police doing the searching. New Orleans police have hundreds of cameras, but the alerts in question came from a separate system: a network of 200 cameras equipped with facial recognition and installed by residents and businesses on private property, feeding video to a nonprofit called Project NOLA. Police officers who downloaded the group’s app then received notifications when someone on a wanted list was detected on the camera network, along with a location.
That has civil liberties groups and defense attorneys in Louisiana frustrated. “When you make this a private entity, all those guardrails that are supposed to be in place for law enforcement and prosecution are no longer there, and we don’t have the tools to do what we do, which is hold people accountable,” Danny Engelberg, New Orleans’ chief public defender, told the Post. Supporters of the effort, meanwhile, say it has contributed to a pronounced drop in crime in the city.
The police department said it would suspend the use of the technology shortly before the Post’s investigation was published.
New Orleans isn’t the only place where law enforcement has found a way around city-imposed limits for facial recognition. Police in San Francisco and Austin, Texas, have both circumvented restrictions by asking nearby or partnering law enforcement agencies to run facial recognition searches on their behalf, according to reporting by the Post last year.
Meanwhile, at least one city is considering a new way to gain access to facial recognition technology: sharing millions of jail booking photos with private software companies in exchange for free use of their tools. Last week, the Milwaukee Journal Sentinel reported that the Milwaukee Police Department was considering such a swap, offering 2.5 million photos in return for $24,000 worth of search licenses. City officials say they would use the technology only in ongoing investigations, not to establish probable cause.
Another way departments can skirt facial recognition rules is to use AI analysis that doesn’t technically rely on faces. Last month, MIT Technology Review noted the rise of a tool called “Track,” offered by the company Veritone, that can identify people using “body size, gender, hair color and style, clothing, and accessories.” Notably, the algorithm can’t be used to track people by skin color. Because the system is not based on biometric data, it evades most laws intended to restrain police use of identifying technology. It would also allow law enforcement to track people whose faces are obscured by a mask or a bad camera angle.
In New York City, police are also exploring ways to use AI to identify people not just by face or appearance, but by behavior, too. “If someone is acting out, irrational… it could potentially trigger an alert that would trigger a response from either security and/or the police department,” the Metropolitan Transportation Authority’s Chief Security Officer Michael Kemper said in April, according to The Verge.
Beyond tracking people’s physical locations and movements, police are also using AI to change how they engage with suspects. In April, Wired and 404 Media reported on a new AI platform called Massive Blue, which police are using to engage with suspects on social media and in chat apps. Applications of the technology include gathering intelligence from protesters and activists, and running undercover operations intended to ensnare people seeking sex with minors.
Like most things AI is being employed to do, this kind of operation is not novel. Years ago, I covered efforts by the Memphis Police Department to connect with local activists via a department-run Facebook account for a fictional protester named “Bob Smith.” But as with many facets of emerging AI, what’s new isn’t the intent; it’s that the digital tools for these kinds of efforts are more convincing, cheaper, and easier to scale.
But that sword cuts both ways. Police, and the legal system more broadly, are also contending with increasingly sophisticated AI-generated material in investigations and as evidence at trial. Lawyers are growing worried about AI-generated deepfake videos, which could be used to create fake alibis or falsely incriminate people. The same technology also makes possible a “deepfake defense” that introduces doubt into even the clearest video evidence. Those concerns became even more urgent with the release of Google Gemini’s hyper-realistic video engine last month.
There are also questions about less duplicitous uses of AI in the courts. Last month, an Arizona court watched a victim impact statement delivered by an AI-generated likeness of a murder victim, created by his family. The defense attorney for the man convicted in the case has filed an appeal, according to local news reports, questioning whether the emotional weight of the synthetic video influenced the judge’s sentencing decision.