Artificial Intelligence: A New Trojan Horse for Undue Influence on Judiciaries?

By Francesco Contini

Francesco Contini is currently a senior researcher at the Research Institute on Judicial Systems of the National Research Council in Italy (IRSiG-CNR). He has worked closely with several international organizations and has contributed to the Council of Europe's recent Ethical Charter on the Use of Artificial Intelligence in Judicial Systems, as well as UNODC's Resource Guide on Strengthening Judicial Integrity and Capacity. Mr. Contini recently shared his views on incorporating artificial intelligence into judicial processes with UNODC, as part of the Organization's ongoing work to promote judicial integrity. All opinions expressed in this piece are solely those of the author as an external expert and do not necessarily reflect the official position of UNODC.

______________________________

For more than three decades, advances in information and communications technology (ICT) have swept into the operations of courts and prosecutors' offices, promising transparency, efficiency and radical changes to working practices, such as paperless courts. Even if such promises have yet to be fulfilled in most jurisdictions, software programmes and algorithms already execute growing portions of judicial procedures. The impact of these technologies on the functioning of justice systems, and on the values endorsed by the Bangalore Principles of Judicial Conduct, has been mostly positive.

The most recent wave of technology is based on artificial intelligence (AI) and promises to change the way judicial decisions are taken. This goal is mostly pursued through a specific technique called 'machine learning', which makes predictions by evaluating case files: the procedural documents and the judicial decisions associated with them. This data set, known as 'training data', is analyzed to build statistical correlations between cases and their outcomes. The more data the algorithm processes, the more accurate it becomes at predicting decisions in new cases. In this sense, such systems 'learn' (even if only in terms of improved statistical accuracy) to replicate the outcomes that judges reached in similar cases. Unlike the technologies already in place, which digitize the exchange of data and documents, this 'predictive justice' technology (as it is commonly mislabelled) aims to affect judicial decision-making. [1] It is not clear whether this trend leads to better decisions or undermines the proper functioning of the system.
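To make the mechanism concrete, the sketch below (in Python, with invented case features and data, and assuming the widely used scikit-learn library) shows what 'learning' amounts to in practice: fitting statistical correlations between past cases and their recorded outcomes, and then returning a probability for a new case rather than a reasoned decision.

```python
# A minimal, hypothetical sketch of how a 'predictive justice' model is trained:
# past case files are reduced to numeric features, paired with recorded outcomes,
# and a statistical model fits correlations between the two.
# The feature names and data below are invented for illustration only.
from sklearn.linear_model import LogisticRegression

# Each row is a (highly simplified) past case:
# [claim_amount, prior_offences, days_in_proceedings]
past_cases = [
    [10_000, 0, 120],
    [250_000, 2, 400],
    [5_000, 1, 90],
    [80_000, 3, 310],
]
# 1 = claim upheld / conviction, 0 = dismissed / acquittal, as recorded in the training data
past_outcomes = [0, 1, 0, 1]

model = LogisticRegression()
model.fit(past_cases, past_outcomes)   # 'learning' = fitting statistical correlations

new_case = [[60_000, 2, 200]]
print(model.predict_proba(new_case))   # a probability, not a reasoned judicial decision
```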

The potential impact of such technology on the administration of justice can be explored by considering the challenges posed by the information technology already in place, such as case management and e-filing. In England and Wales, a simple calculation error embedded in the official form used in divorce cases led to miscalculated alimony in 3,600 cases over a period of 19 months. The problem is not the error per se, but the reasons why the Ministry of Justice and the form's users failed to detect it for so long: users tend to focus on the interfaces and tools that make technological systems usable, not on their internal functioning.

Court technology makes masses of case-related data available, which increases transparency; yet the way systems analyze that data internally is hard to access and difficult to hold accountable. A general question, then, is what possibilities exist for deploying effective controls over the inner workings of ICT and the algorithms that process the data. A further question is how to guarantee proper oversight and accountability over the functioning of the technology, and whether AI (and more precisely machine learning) is a special case in this accountability exercise.

Several jurisdictions, particularly in the United States, have adopted technology that recommends how to make pre-trial detention decisions. Such applications use algorithms that calculate recidivism risks and 'score' the defendant based on the probability that they will commit a crime if released.
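As a purely hypothetical illustration (the thresholds, labels and function below are invented, not those of any deployed tool), such an application essentially maps a predicted probability of re-offending onto a coarse score that is presented to the judge:

```python
# A hypothetical illustration of how a risk-assessment tool might turn a predicted
# probability of re-offending into a 'score' shown to the judge.
# The thresholds and labels are invented; real tools use proprietary inputs and
# scales that are generally not public.
def risk_score(probability_of_reoffending: float) -> str:
    """Map a model's probability output onto a coarse risk band."""
    if probability_of_reoffending < 0.3:
        return "low risk"
    elif probability_of_reoffending < 0.7:
        return "medium risk"
    return "high risk"

print(risk_score(0.82))  # -> 'high risk', presented to the judge as a recommendation
```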

This kind of scoring places the judge in an awkward position. Suppose the judge is inclined to release the defendant pending trial, but the score indicates a high risk of recidivism. Should the judge be ready to go against the risk assessment calculated by the machine? And what if the defendant is released and then commits a crime? The standard rebuttal to this argument emphasizes that the systems are simply taking advantage of the data available; the scientific methods employed, it is argued, calculate recidivism risks more powerfully and reliably than individual judges can. The argument is relevant, but how can we ensure that the data is free of bias? Guaranteeing accountability here is much more complicated than with simpler technology.

ProPublica, an American non-profit organization that conducts investigations in the public interest, compared actual recidivism with predicted recidivism. The analysis of 10,000 criminal defendants' cases found that "black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk." This instance shows that accountability is hard to achieve and that such systems can inject biases into judicial processes.
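The kind of check ProPublica ran can be sketched in simplified form as a comparison of false positive rates across groups: among defendants who did not in fact re-offend, how often was each group nevertheless flagged as high risk? The records and group labels below are invented for illustration and are not ProPublica's data.

```python
# A simplified sketch of a disparity check: for each group, compute how often
# defendants who did NOT re-offend were nonetheless flagged as high risk
# (the false positive rate). All records below are invented.
from collections import defaultdict

records = [
    # (group, flagged_high_risk, actually_reoffended)
    ("A", True, False), ("A", True, False), ("A", False, False), ("A", True, True),
    ("B", False, False), ("B", True, False), ("B", False, False), ("B", True, True),
]

flagged = defaultdict(int)
negatives = defaultdict(int)
for group, high_risk, reoffended in records:
    if not reoffended:          # only defendants who did not re-offend
        negatives[group] += 1
        if high_risk:           # ...but were still labelled high risk
            flagged[group] += 1

for group in sorted(negatives):
    rate = flagged[group] / negatives[group]
    print(f"group {group}: false positive rate = {rate:.0%}")
```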

Technology - whether it is used for case management, simple web forms, or more complicated AI-based tasks - should be introduced into the judicial process if - and only if - proper accountability mechanisms are in place.

The problem of accountability is even more severe with AI systems based on machine learning. Here the prediction rests on algorithms that change over time: with machine learning, algorithms 'learn' (that is, change) on the basis of their own experience. As the algorithms change, we no longer know how they work or why they do things in certain ways. If we cannot adopt effective control mechanisms, how can we guarantee proper accountability? The debate is ongoing, and the precautionary principle should be adopted until such questions have been resolved from both a technical and an institutional perspective.
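A toy example (again with invented data, and assuming scikit-learn's incrementally trainable SGDClassifier) illustrates the auditing problem: once a system keeps 'learning' from new data, the same case may receive a different prediction before and after an update, even though nothing visible has changed for the user.

```python
# A toy illustration of why a continuously 'learning' system is hard to audit:
# the same new case can receive a different prediction after the model has been
# updated with more data. All data here is invented.
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(loss="log_loss", random_state=0)
model.partial_fit([[1, 0], [3, 2]], [0, 1], classes=[0, 1])   # initial training
case = [[2, 1]]
print("before update:", model.predict(case))

model.partial_fit([[2, 1], [2, 2]], [1, 1])                   # system keeps 'learning'
print("after update: ", model.predict(case))                  # the answer may change
```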

The cautions and the precautionary principle discussed in this article point in the same direction as the Council of Europe's Ethical Charter on the Use of Artificial Intelligence in Judicial Systems, particularly its principles of respect for fundamental rights and of keeping the user in control. However, how to implement such guidelines is not yet clear. Lawyers, case parties and individual judges cannot be charged with such a task on their own. It is a challenge that should be faced by pooling multifaceted competences, monitoring the functioning of the systems and benchmarking AI against the core values of the Bangalore Principles of Judicial Conduct. It is a challenge that the participants of the Global Judicial Integrity Network are well-positioned to face.



[1] The label 'predictive justice' is dangerously misleading, as such systems make predictions, not judicial decisions. Judicial decisions require, as a minimum standard, justifications based on an assessment of the relevant facts and the applicable rules. AI systems compute statistical correlations, and their forecasts are simply the result of those correlations. Hence, it would only be proper to speak of actual predictive justice if the systems provided justifications in terms of facts and law.