Can AI Be More Efficient Than People in the Judicial System?
AI is set to replace many human jobs in the future, but should the jobs of lawyers and judges be among them? Here we explore where AI is already being used in judicial systems around the world, and discuss whether it should play a broader role.
In particular, could, or should, AI ever be developed that could pass judgment on a living, breathing human being?
How is AI currently being used in judicial systems?
Believe it or not, AI and other forms of advanced algorithms are already widely used in many judicial systems around the world. In a number of states within the United States, for example, predictive algorithms are currently being used to help reduce the load on the judicial system.
"Under immense pressure to reduce prison numbers without risking a rise in crime, courtrooms across the U.S. have turned to automated tools in attempts to shuffle defendants through the legal system as efficiently and safely as possible." - Technology Review.
To this end, U.S. police departments are using predictive algorithms to develop strategies for deploying their forces where they are needed most. By analyzing historical crime statistics and using technology such as facial recognition, it is hoped this level of automation will help improve the effectiveness of their human resources.
The U.S. judicial system also uses another form of algorithm, known as a risk assessment algorithm, to help handle post-arrest cases.
"Risk assessment tools are designed to do one thing: take in the details of a defendant’s profile and spit out a recidivism score—a single number estimating the likelihood that he or she will re-offend.
A judge then factors that score into a myriad of decisions that can determine what type of rehabilitation services particular defendants should receive, whether they should be held in jail before trial, and how severe their sentences should be. A low score paves the way for a kinder fate. A high score does precisely the opposite." - Technology Review.
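To make this concrete, here is a minimal sketch of what such a scorer might look like under the hood. Everything in it is invented for illustration: the feature names, the weights, and the intercept. Real risk assessment tools are typically proprietary and draw on far more inputs.

```python
import math

# A purely illustrative "risk assessment" scorer. The features and
# coefficients below are invented for this sketch; real tools are
# proprietary and use many more inputs.
def recidivism_score(age: int, prior_convictions: int, employed: bool) -> float:
    """Return a hypothetical probability of re-offending, between 0 and 1."""
    z = (
        -0.04 * age                      # invented weight: risk falls with age
        + 0.45 * prior_convictions       # invented weight: risk rises with priors
        - 0.80 * (1 if employed else 0)  # invented weight: employment lowers risk
        + 0.50                           # invented intercept
    )
    return 1 / (1 + math.exp(-z))        # logistic function squashes z into (0, 1)

score = recidivism_score(age=27, prior_convictions=3, employed=False)
print(f"Recidivism score: {score:.2f}")  # a judge would then weigh this number
```

A single number like this can look authoritative, but everything about it, from which features are included to how they are weighted, reflects choices made long before the defendant enters the courtroom.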
In China, AI-powered judges are also becoming a reality. Beijing has introduced an internet-based litigation service center, proclaimed as the "first of its kind in the world," which features an AI judge for certain types of casework.

The judge, called Xinhua, is an artificial woman whose body, facial expressions, voice, and actions are modeled on a real, living and breathing female judge working in the Beijing Judicial Service.
This virtual judge is primarily being used for basic repetitive casework, the Beijing Internet Court said in a statement. 'She' mostly deals with litigation reception and online guidance rather than final judgment.
The hope is that the AI-powered judge and the online court will make the judicial process more efficient and more accessible for Beijing's citizens.
"According to court president Zhang Wen, integrating AI and cloud computing with the litigation service system will allow the public to better reap the benefits of technological innovation in China." - Radii China.
AI is also being used in China to sift through social media messages, comments, and other online activities to help build evidence against potential defendants. Traffic police in China are also beginning to use facial recognition technology to identify and convict offenders.
Other police forces around the world are also using similar tech.
Could Artificial Intelligence ever make good decisions?
The answer to this question is not a simple one. While AI can make some types of legal decisions, this doesn't mean it is necessarily a good idea.
Many AI systems and predictive algorithms that use machine learning tend to be trained on existing data sets or historical information.
While this sounds like a relatively logical approach, it relies heavily on the type and quality of the data supplied.
"Junk in, junk out." as the saying goes.
One major use of machine learning and big data is to identify correlations, or apparent correlations, within data sets. In the case of crime data, this can produce false positives and does little to identify the underlying causes of crime.
As another famous adage warns, "correlation is not causation."
Humans are often just as guilty of this logical fallacy as their artificial counterparts could be. One famous example is the correlation between low income and a person's proclivity towards crime.
Poverty is not necessarily a direct cause of criminal behavior, but it can be an indirect cause, creating conditions that make crime more likely.
If similar errors of correlation are not handled correctly, an AI-driven law enforcement decision or judgment could quickly degenerate into a vicious cycle of imposing penalties that are too severe or too lenient.
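A tiny simulation illustrates how such a vicious cycle can start. In this sketch, two hypothetical districts have the same true offense rate, but one is patrolled twice as heavily, so more offenses are recorded there, and a naive model trained on those records rates it as "riskier." All of the numbers are made up.

```python
import random

random.seed(42)  # reproducible illustration

TRUE_OFFENSE_RATE = 0.10             # identical in both districts
PATROL_RATE = {"A": 0.8, "B": 0.4}   # district A is patrolled twice as heavily

def recorded_offense(district: str) -> bool:
    """An offense only enters the historical record if police were present."""
    offends = random.random() < TRUE_OFFENSE_RATE
    patrolled = random.random() < PATROL_RATE[district]
    return offends and patrolled

# Build a "historical crime dataset" of 10,000 observations per district.
records = {d: [recorded_offense(d) for _ in range(10_000)] for d in ("A", "B")}

# A naive predictive model: estimated risk = recorded offense rate.
for district, hits in records.items():
    print(f"District {district}: recorded rate = {sum(hits) / len(hits):.3f}")

# District A appears roughly twice as "risky" purely because of patrol
# bias, so a model trained on this data would send even more patrols
# there, recording still more offenses and reinforcing the cycle.
```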
As with everything in life, the situation is actually more nuanced than it appears. Humans are not perfect decision-making machines either.
If studies from 2018 are correct, it seems that AI can be faster and more accurate than human beings at spotting potential legal issues. This supports the argument that AI should be used in legal support roles, or at least to review legal precedent.
Could AI be used to replace human judges?
As we have already seen, AI and advanced algorithms are already in use around the world for certain clerical and data gathering tasks. They are, in effect, doing some of the "legwork" for human judges and lawyers.
But could they ever be used to completely replace humans in a judicial system? What exactly would be the advantages and disadvantages of doing so?

Many would claim that an AI should be able to remove any bias in the final judgment-making process. Their final decisions should, in theory, be based purely on the facts at hand and existing legal precedent.
This, of course, is supposed to already be the case with human judges. But any human is susceptible to incomplete knowledge, prejudice, and unconscious bias, despite the best of intentions.
But, probably more significantly, just because something is law doesn't necessarily mean it's just. "Good" and "bad" behavior is not black and white; it is a highly nuanced and completely human construction.
Such questions remain within the realm of philosophy, not computer science. Others would likely disagree, and that disagreement might itself be seen as a "good" thing.
Judges also have the role of making decisions on the offender's punishment post-conviction. These can range from the minor (small fines) to the life-changing, such as imposing long-term imprisonment, or even the death penalty in areas where it is used.
Such decisions are generally based on a set of sentencing guidelines that take into account factors such as the severity of the crime, its effect on the victims, previous convictions, and the convict's likelihood of re-offending. As we have seen, this is one area where AI and predictive algorithms are already being used to help with the decision-making process.
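As a toy illustration of how guideline-style factors might be combined into a recommendation, consider the sketch below. The severity bands, uplifts, and risk threshold are all invented; real sentencing guidelines are far more detailed and vary by jurisdiction.

```python
# Invented severity bands, in months of imprisonment.
SEVERITY_MONTHS = {"minor": (0, 6), "moderate": (6, 36), "serious": (36, 120)}

def recommended_range(severity: str, prior_convictions: int,
                      risk_score: float) -> tuple[int, int]:
    """Return a hypothetical (low, high) sentencing range in months."""
    low, high = SEVERITY_MONTHS[severity]
    uplift = prior_convictions * 3  # invented: +3 months per prior conviction
    if risk_score > 0.7:            # invented threshold for "high risk"
        uplift += 6
    return low + uplift, high + uplift

# The algorithm suggests a range; a human judge would adjust or reject it.
print(recommended_range("moderate", prior_convictions=2, risk_score=0.75))
# -> (18, 48)
```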
Judges can, of course, completely ignore the recommendation from the AI. But this might not be possible if humans were completely removed from the process.
Perhaps a case could be made here for panels of AI judges made up of a generative adversarial network (GAN).
But that is beyond the scope of this article.
Would AI judges be unbiased?
One apparent benefit of using AI to make decisions is that algorithms cannot, in themselves, hold a bias. This should make AI almost perfect for legal decisions, as the process would be evidence-based rather than subjective, as it can be with human judges.
Sounds perfect, doesn't it? But "the grass isn't always greener on the other side."
Algorithms and AI are not perfect in and of themselves in this regard, primarily because any algorithm or AI must first be coded by a human. This can introduce unintended bias from the outset.
AIs may even learn and mimic biases from their human counterparts and from the specific data they have been trained on. Could this ever be mitigated?
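One partial mitigation is routine auditing: comparing a model's error rates across demographic groups before and after deployment. The sketch below computes a false positive rate per group on a tiny, hypothetical audit set; the group names and numbers are invented.

```python
def false_positive_rate(predictions: list[int], outcomes: list[int]) -> float:
    """Among people flagged 'high risk' (1), the share who did NOT re-offend."""
    flagged = [o for p, o in zip(predictions, outcomes) if p == 1]
    return sum(1 for o in flagged if o == 0) / len(flagged)

# Hypothetical audit data: (model predictions, actual outcomes) per group.
audit = {
    "group_1": ([1, 1, 0, 1, 0, 1], [0, 1, 0, 0, 1, 1]),
    "group_2": ([1, 0, 0, 1, 0, 0], [1, 0, 0, 1, 0, 1]),
}

for group, (preds, actual) in audit.items():
    print(f"{group}: FPR = {false_positive_rate(preds, actual):.2f}")
# group_1: FPR = 0.50, group_2: FPR = 0.00. The model wrongly flags
# members of group_1 far more often, a classic symptom of biased data.
```

An audit like this cannot prove a model is fair, but it can catch the most glaring inherited biases before they reach a courtroom.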
Another issue is who would oversee AI judges? Could their decisions be challenged at a later date? Would human judges take precedence over a decision by an AI, or vice versa?
The World Government Summit, held in 2018, reached an interesting and poignant conclusion on this subject that bears repeating verbatim:
"It is as yet uncertain which of these technologies may become widespread and how different governments and judiciaries will choose to monitor their use.
The day when technology will become the judge of good and bad human behavior and assign appropriate punishments still lies some way in the future.
However, legal systems often provide ideal examples of services that could be improved, while trials are likely to benefit from better data analysis. The law often requires a trial to set a precedent – so watch out for the test case of AI as a judge."
In conclusion, could AI ever replace human legal professionals or be more efficient at legal decision-making? The answer, it seems, is both yes and no.
Yes, with regards to performing support or advisory roles such as gathering evidence or estimating the likelihood of re-offending. No, with regards to making final judgments and sentencing decisions.
It is probably prudent to give human beings, rather than code, the last word when it comes to sentencing. The law and legal systems can, after all, be legitimately labeled as a human construction.
Existing legal systems are both beautifully jury-rigged and maddeningly illogical at times, and they have been adapted and upgraded as senses and sensibilities evolved. Most legal systems are not set in stone for all time; they evolve as society does, and that suits human beings just fine.
It is not likely that a machine could ever be trained to understand, empathize, or pass judgment "in the spirit of the law."
Perhaps humans, with all our imperfections and logical inconsistencies, are the only possible arbiters of justice over one another. For this reason, it could be argued that "justice" should never be delegated to machines, as their "cold logic" could be seen as being at odds with the "human condition."
But we'll let you make up your own mind.