AI Could Send You to Jail: How Algorithms Are Changing Justice

Forensic AI is shrouded in the trade secrets of the companies that make it. Some want to change that.
Eric James Beyer

The United States does not have an enviable reputation when it comes to criminal justice. Despite incarceration rates falling slightly in recent years, the country still leads the world in the per capita number of people it puts behind bars. According to some estimates, around 639 out of every 100,000 people are in some kind of jail or prison.

The COVID-19 pandemic certainly hasn’t helped matters either, with case backlogs weighing down an already bloated system to the point where prosecutors in Chicago are preparing to drop thousands of low-level cases that can’t be brought to trial in time. 

It’s therefore unsurprising that those working in the field have begun turning to artificial intelligence in an effort not only to streamline judicial processes but also to reduce the practical and logistical burdens the system carries.

Forensic algorithms have formed a large part of this effort in recent years, and they are increasingly being applied to nearly every aspect of the justice system in the US.

Fingerprint-matching software aims to correctly identify suspects with staggering speed and precision, facial recognition helps law enforcement agencies track people down, and probabilistic genotyping can assist investigators in determining whether a genetic sample from a crime scene is linked to a person of interest.
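To make the statistics behind that last technique concrete, here is a minimal, purely illustrative sketch of the likelihood-ratio arithmetic that underpins forensic DNA evidence. Every allele frequency below is invented, and real probabilistic genotyping tools are far more complex, modeling mixed samples, peak heights, and allele dropout; this covers only the textbook single-source case.

```python
# A toy sketch of forensic DNA likelihood ratios (single-source case only).
# All allele frequencies are invented illustrative values, not real data.

def genotype_frequency(p: float, q: float) -> float:
    """Expected population frequency of a genotype under Hardy-Weinberg
    equilibrium: p^2 for a homozygote (pass p == q), 2pq for a heterozygote."""
    return p * p if p == q else 2 * p * q

def likelihood_ratio(loci: list[tuple[float, float]]) -> float:
    """Compare Hp ('the suspect is the source': probability 1 for a matching
    genotype) against Hd ('an unrelated person is the source': probability
    equal to the genotype's population frequency), multiplying across
    independent loci (the 'product rule')."""
    lr = 1.0
    for p, q in loci:
        lr *= 1.0 / genotype_frequency(p, q)
    return lr

# Hypothetical three-locus profile: two heterozygous loci, one homozygous.
profile = [(0.10, 0.20), (0.05, 0.15), (0.30, 0.30)]
print(f"Likelihood ratio: {likelihood_ratio(profile):,.0f}")
# Reads as: the evidence is this many times more probable if the suspect,
# rather than an unrelated individual, is the source of the sample.
```

Even in this simplified form, the output depends entirely on the population frequencies fed into it, which is one reason critics want the data and assumptions inside commercial tools open to scrutiny.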

When used thoughtfully, these algorithms have the potential to reinforce and even improve the proper application of justice in the courts. Just as some argue that using AI in weapons technology could reduce human error in life-or-death situations, proponents of forensic algorithms say they could lead to more objective assessments of crime-scene data, lower incarceration rates, and eliminate wrongful sentencing.

But while AI is often hailed as a technology that can address many of the world’s problems and lead humanity into a better future, it’s not without its imperfections.

Any technology is vulnerable to the same flaws present in the humans who design it, and those flaws scale up or down according to its capabilities. This makes AI particularly worrisome. With increasing frequency, we’re using it to do big, complex jobs across just about every industry and discipline there is. Getting things wrong with the technology has the potential to be the equivalent of misjudging your footing while ascending a cliff face — a small, human error that leads to grave and irreversible consequences. 

[Image: A closeup of a wooden gavel with a golden metal band across its head, resting on a platform. Source: Bill Oxford/Unsplash]

In no field is this more true than in criminal justice, and here, too, we’re a long way off from the idealized future AI often seems to promise. As it stands, we’re still very much working out the bugs in forensic algorithms.


A 2017 District of Columbia court case is illustrative of this fact. In that case, an anonymous defendant who was being represented by Public Defender Service attorney Rachel Cicurel nearly experienced the fallout from faulty programming that was presented as evidence in court.   

The prosecutor in that case had originally agreed to probation as a fair sentence for Cicurel’s client. It wasn’t until a forensic report, based on an algorithm’s predictive analysis, determined that the defendant was too high a criminal risk that the prosecutor changed their mind and asked the judge to place the defendant in juvenile detention.

Cicurel demanded that her team be shown the mechanisms underlying the report and found that not only had the technology not been reviewed by any independent judicial or scientific organization, but the results also appeared to be at least partly based on racially biased input values.

Thanks to Cicurel’s diligence, the judge threw the report out. But the flawed software might have just as easily gone unnoticed or unchallenged, as is often the case in criminal trials across the US. 

The worry that emerges borders on the dystopian — that flawed or biased systems are being dressed up in the unassailable robes of mathematics, machine-learning capabilities, and data and wrongfully taking away people’s freedoms.

Clearing the fog of forensic AI

It’s just this facet of human behavior, the fact that the products of human minds are bound to the same subjective biases as their designers, that has people like US House Representative Mark Takano (D-Calif.) worried.

To help address this and related concerns, Rep. Takano introduced the Justice in Forensic Algorithms Act in 2019, a bill aimed at ensuring the protection of civil rights for defendants in criminal cases and at establishing best practices for the use of forensic AI software. Takano reintroduced the bill earlier this year with co-sponsor Dwight Evans (D-Pa.) and believes that greater transparency in the technology will protect the integrity of people’s civil rights at trial.

“We simply don’t allow the argument by software companies that their proprietary software interests or trade secrets are more sacrosanct than the due process rights of the defendants,” Takano said in an interview with Interesting Engineering. 

[Image: The American flag set against a dark blue background. Source: Clay Banks/Unsplash]

The software companies producing these algorithms often claim that their methodologies, source code, and testing processes must remain confidential and protected by intellectual property law, lest their trade secrets be stolen or otherwise compromised.

"We need some way in which to provide some national guidance to the justice system."

But critics say that when these algorithms are shielded from cross-examination in the courtroom, defendants are forced to accept the validity and reliability of the programs used to provide evidence against them. People like Rep. Takano argue that these software vendors conflate proper criminal defense practices with the specter of malicious corporate sabotage, putting financial gain ahead of an individual’s rights.

Currently, the technology is being used in cases across the US, and whether or not evidence from these algorithms is deemed admissible or worthy of cross-examination varies wildly, creating an uneven legal landscape. 

In a somewhat rare example of pushback to the secrecy surrounding such algorithms, the State Appeals Court of New Jersey recently ordered the forensic software company Cybergenetics to allow a defendant’s legal team access to the source code of the DNA analysis program that linked their client to a 2017 shooting. 

But decisions like these cannot be said to represent a general trend in the US. It’s just such irregularity that Takano sees as problematic. 

“You see courts all over the map with inconsistent conclusions,” he says. “We need some way in which to provide some national guidance to the justice system about what standards these programs need to meet. This is something that defendants and prosecutors can’t do on their own, and the software companies can’t do on their own. This is a perfect place for the role of government, of a federal agency such as NIST [the National Institute of Standards and Technology], to be able to set guidance to the courts.” 

Another tricky issue Takano’s bill attempts to remedy is the lack of independent review these companies face when evaluating the legitimacy of their software. At the Representative’s request, the US Government Accountability Office (GAO) released a report in early July that assesses the effectiveness and risks associated with using algorithms in forensic science. 

One thing the agency discovered was that, in the case of probabilistic genotyping, for example, the majority of studies evaluating DNA-matching software are carried out by the very companies or law enforcement agencies that developed the technology.

The GAO report also notes that there is a trend among these companies to claim in court that their software is peer-reviewed and therefore should pass as admissible evidence, while simultaneously denying research licenses to the independent scientists who actually try to analyze it. 

[Image: A small bronze statue of Lady Justice in a law firm. Source: Tingey Injury Law Firm/Unsplash]

“It’s one thing to have peer reviewers that you yourself have hired,” Takano explains. “The independence is tainted in some way. The fact that you’re not willing to go through critical review for someone who stands to lose their freedom […] that falls short of the claim of objectivity.” 

New technology requires new awareness 

In explaining its decision to allow Cybergenetics’ source code to be assessed by the defense, the New Jersey appeals court highlighted another important facet of the issue, noting that such transparency would allow the judge and other participants in the trial to become more familiar with these programs.

"This is something that defendants and prosecutors can’t do on their own."

One key aspect of Takano’s bill is to help members of the criminal justice system develop just such an awareness of the capabilities of forensic technologies and the roles they can and cannot play within it. 

“This is what we do in my bill: we charge NIST with, not deciding whether the software works or not, but establishing standards, establishing the guidance to courts and prosecutors and defenders on what the software needs to be able to do in order to be valid and reliable.”

Machine learning to be human

Machine-learning algorithms are excellent at finding patterns in data. Give an algorithm enough statistics on crime and it will find interesting constellations in the dataset. But, as the MIT Technology Review rightfully points out, human interpretation of this information can often “turn correlative insights into causal scoring mechanisms,” potentially misrepresenting reality. That is a dangerous pitfall.
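To see the trap in miniature, consider this hedged, fully synthetic sketch (every number below is invented) of how a score built from arrest records can mistake historical policing patterns for differences in criminality.

```python
# Synthetic illustration of a correlative 'risk score' inheriting bias.
# Two neighborhoods have the SAME underlying offense rate...
offense_rate = 0.05  # true rate of offending, identical everywhere

# ...but historically unequal police presence meant offenses in
# neighborhood B were twice as likely to end in a recorded arrest.
detection_rate = {"A": 0.20, "B": 0.40}

population = 10_000
for hood in ("A", "B"):
    arrests = population * offense_rate * detection_rate[hood]
    naive_score = arrests / population  # 'risk' as observed arrest frequency
    print(f"Neighborhood {hood}: naive risk score = {naive_score:.3f}")

# Output: B scores twice as 'risky' as A purely because of how it was
# policed. A model trained on these arrest counts would rate B's residents
# higher-risk despite identical offending, and the extra scrutiny its scores
# invite can generate still more arrests: a self-reinforcing feedback loop.
```

The arithmetic is trivial, but the failure mode is the one critics worry about at scale: the data encode where police looked, not only what people did.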

If forensic algorithms used to help courtrooms determine the guilt or innocence of a person are not properly assessed, the factors that led to certain groups historically being marginalized in the US justice system could manifest again, this time in law-enforcement tools more powerful than any that have ever existed. 

History offers a helpful perspective here, one that positions Representative Takano as an apt figure to help take up the cause of the defense of individual rights in this arena. 

The fact that roughly 120,000 people of Japanese descent living in the United States during World War II, the majority of them American citizens, were sent to internment camps with no possibility of legal redress should remain an evergreen lesson in how easily a society’s commitment to core democratic values can be forgotten, overlooked, or outright discarded.

“It’s part of my own family’s history,” Takano says. “My parents and grandparents were all in internment camps, having their due process rights completely ignored. Civil rights and civil liberties have always been a core interest of mine.”

It’s an absolute necessity that the tools we create for a better and more judicious future, especially tools as revolutionary and powerful as these, ensure those rights don’t fall by the wayside for anyone.
