A friend once got pulled aside at a mall.
Not for anything he did.
A camera system thought his face looked like someone on a shoplifting watch list.
Twenty minutes, a manager, and an apology later, he walked out, shaken, not arrested.
The software was “confident.” It was also wrong.
Here’s the thing about AI in criminal justice: it can help, and it can hurt.
Where it helps
→ Predictive hotspot maps to place patrols where crime tends to spike
→ Real-time tools like gunshot detection to get responders out faster
→ Court support: transcribing body-cam footage, searching documents, triaging case backlogs
Where it can go wrong
→ Feedback loops: more patrols in one area mean more recorded incidents, which “prove” the model right (a quick sketch of this loop follows the list)
→ Face recognition mismatches, with error rates measurably higher for women and people with darker skin
→ Risk scores that influence bail or sentencing without clear explanations
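To make the feedback loop concrete, here’s a minimal, hypothetical sketch in Python. The numbers (incident counts, patrol totals, detection probability) are invented for illustration, and both areas generate exactly the same amount of crime; the only asymmetry is in what gets recorded.

```python
import random

random.seed(42)

# Hypothetical toy model: two areas with the SAME true crime rate.
# Recorded incidents depend on how many patrols are there to observe them,
# so the area that starts with more records keeps "earning" more patrols.

TRUE_INCIDENTS = 100        # actual incidents per period, identical in both areas
TOTAL_PATROLS = 10          # patrols allocated each period
DETECT_PER_PATROL = 0.08    # chance one patrol records a given incident

recorded = {"Area A": 15, "Area B": 10}  # slightly uneven starting history

for period in range(1, 6):
    total = sum(recorded.values())
    # "Data-driven" allocation: send patrols where the records are.
    patrols = {a: round(TOTAL_PATROLS * recorded[a] / total) for a in recorded}

    for area, n in patrols.items():
        p_detect = 1 - (1 - DETECT_PER_PATROL) ** n   # more patrols, more records
        recorded[area] += sum(random.random() < p_detect for _ in range(TRUE_INCIDENTS))

    print(f"Period {period}: patrols={patrols}, cumulative records={recorded}")

# Both areas produce identical crime, yet allocation drifts toward whichever
# area happened to have more records: the model's predictions look "confirmed".
```

Swap in any real predictor and the shape of the problem stays the same: the training data measures where we looked, not where crime actually happened.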
Whether this becomes smarter policing or surveillance overreach isn’t about the tool.
It’s about how we use it, who checks it, and what happens when it’s wrong.
Would you feel safer with these systems in your city if you could see the error rates and audit reports?


