Artificial Intelligence and Gender in the Judiciaries

Interview with Roberta Solis and Melissa Evans

(c) UNODC

Vienna, February 2021 - The Global Judicial Integrity Network, as part of the Doha Declaration Global Programme, assists judiciaries across the globe in strengthening judicial integrity and preventing corruption in the justice sector. In 2020, the Global Judicial Integrity Network won an award for its project focusing on gender bias and discrimination in artificial intelligence (AI) systems used in judiciaries. The project is a continuation of the Network's previous work on gender-related issues in the judiciary, as well as its work on the ethical use of AI.

The Gender Team sat down with Roberta Solis, the Judicial Integrity Team Leader, and Melissa Evans, a consultant in the team, to discuss their project and the role that gender plays in AI in judiciary processes.

Gender Bulletin: What was the “a-ha” moment when you realised the importance of promoting gender equality in your area of work?

Roberta Solis: As a lawyer, I have always known that female representation in the judiciary and in the legal profession has been and continues to be an issue. This is just the tip of the iceberg; it points to much deeper issues with grave consequences for, for example, access to justice, the right to a fair trial, the right to be treated with equality and impartiality and, in some cases, even the right to basic civility in the treatment of the parties. So, it was very clear from day one of planning the Global Judicial Integrity Network that gender issues had to be at the forefront of our work. Once we also started to work on the use of AI in the global judiciaries and other emerging issues, it became clear that the two issues should be connected and not looked at separately.

Melissa Evans: I think for me, I realised the importance at a UNHCR meeting on AI and data governance. The discussion there of the human rights implications of AI systems drove home how directly these systems can affect human rights. AI programme development has to be pursued cautiously, as unknown aspects may have very serious implications that, for instance, can lead to people being deprived of liberty.

Gender Bulletin: When talking about Artificial Intelligence, one would not directly assume that gender-based discrimination plays a role. Why should gender be recognized in the use of AI? And more specifically, what role does it play in the use of AI in judiciary processes?

Roberta Solis: AI applications are built on data sets, in which they find patterns in order to provide analysis and solutions. As a result, any AI application is only as good as the data it is built on. Gender-based discrimination, whether it is conscious or not, is a reality in every judiciary. The datasets that judiciaries use to develop the applications also reflect these historical biases. So, unless judiciaries are aware of and recognize these biases, any application they develop would simply perpetuate or even deepen discrimination. This is true not just for gender but also for other forms of discrimination, such as those based on race or poverty. If this is not addressed, there could be grave consequences of using AI applications in the judiciary and any other sector. This is recognized in discussions worldwide on the ethical use of AI across industries and sectors.

It is crucial to address this in the framework of justice systems because the freedom of persons, their equal access to justice and their right to a fair trial could be deeply affected.
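Roberta's point that "any AI application is only as good as the data it is built on" can be illustrated in a minimal way. The sketch below (all records and group labels are hypothetical, invented purely for illustration) reduces "learning from data" to its simplest form, predicting the most frequent historical outcome per group, and shows that a model trained on biased decisions simply turns the bias into a rule:

```python
# Minimal sketch with hypothetical data: a naive model trained on biased
# historical decisions reproduces the bias it was trained on.
from collections import defaultdict

# Hypothetical historical case records: (claimant_gender, outcome_granted)
history = [
    ("male", True), ("male", True), ("male", True), ("male", False),
    ("female", True), ("female", False), ("female", False), ("female", False),
]

def train_majority_model(records):
    """'Learn' the most frequent outcome per group - the pattern-finding
    step of a statistical model, reduced to its simplest possible form."""
    counts = defaultdict(lambda: [0, 0])  # group -> [granted, denied]
    for group, granted in records:
        counts[group][0 if granted else 1] += 1
    return {group: c[0] > c[1] for group, c in counts.items()}

model = train_majority_model(history)
print(model)  # {'male': True, 'female': False} - the historical skew, now a rule
```

Real judicial AI systems are of course far more complex, but the mechanism is the same: whatever regularities exist in the historical data, including discriminatory ones, become the model's behaviour.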

Gender Bulletin: The framework that you are developing aims to ensure that AI systems do not replicate or deepen gender-related issues in judiciary processes. Could you give an example of the practical implications of the framework?

Roberta Solis: We are seeing a shift to AI being used as a means of allocating cases to judges and to assigning priorities to cases when they reach the judiciary. When we consider cases of gender-based violence, which are naturally urgent cases, AI systems must be taught to correctly recognise and prioritise these cases in order to prevent any further physical harm to those involved. This could not only have a direct impact on the well-being of the women involved in the cases but also on the overall access to justice. If the cases are not treated in the right way, people may be discouraged from seeking judiciary protections. I think this is one of the strongest examples of how AI should be correctly developed to address the specific needs of the case.

Another interesting example is that AI is being used to calculate the recidivism risk of prisoners to determine their release. Again, biased case data can come into play here. We also have to question whether issues predominantly faced by women, for example women being single mothers, are being factored into the AI recidivism calculation. This is affected by how the data is collected and stored. If, for example, a judiciary develops applications that perform these calculations based only on data related to the male prisoner population, there will be no specific considerations of the needs of female prisoners.
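The recidivism example can be sketched in a few lines. The features, weights and attribute names below (including `sole_caregiver`) are hypothetical, but they show the gap described: a model fitted only on male prisoner records has never seen caregiver-related attributes, so at scoring time it silently drops them:

```python
# Hypothetical sketch: a risk model trained only on male prisoner records
# ignores attributes it never saw in training - without any warning.

TRAINING_FEATURES = {"age", "prior_offences", "employment"}  # male-only dataset

def risk_score(prisoner: dict) -> float:
    """Toy linear score over the trained features only; anything else
    supplied (e.g. 'sole_caregiver') contributes nothing to the score."""
    weights = {"age": -0.02, "prior_offences": 0.3, "employment": -0.4}
    return sum(weights[f] * prisoner.get(f, 0) for f in TRAINING_FEATURES)

def unused_features(prisoner: dict) -> set:
    """Audit helper: which supplied attributes does the model ignore?"""
    return set(prisoner) - TRAINING_FEATURES

p = {"age": 34, "prior_offences": 1, "employment": 1, "sole_caregiver": 1}
print(risk_score(p))       # caregiver status has no effect on the result
print(unused_features(p))  # {'sole_caregiver'}
```

A check like `unused_features` is the kind of simple question an assessment could ask of a deployed system: which attributes of the real prisoner population does the model actually take into account?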

The framework that we are developing will map out all the potential instances in which AI could be used by judiciaries and then try to identify the potential risks and considerations related to gender. This will have real practical implications for how AI is used by judiciaries.

Melissa Evans: Another example worth mentioning relates to the use of predictive AI, where the AI system gives the judge a suggested outcome for the case based upon historical case data. As Roberta mentioned, if the case data is biased then the outcome is also going to be biased. For instance, in cases of violence against women, if women in similar cases aren't seen as credible witnesses for biased reasons, then that bias would carry over into the outcome of the AI system. I think even in these kinds of predictive justice applications the gender bias could be perpetuated.

Gender Bulletin: What wider impact do you think this framework is going to have on the ethical use of AI in judiciary processes?

Roberta Solis: The first step is to raise awareness and to make judiciaries understand the importance of addressing these issues: creating an understanding of the potential risks and of the need to include gender considerations.

We are developing a framework with international guidance for the development of applications, a repository of good practices and a self-assessment guide for judiciaries already implementing and developing the applications. We can't tell them to stop the process until they have the full framework to guide them, but they will be able to later audit the applications to check whether or not they are perpetuating gender biases. This way, we are working with two stages of the process: supporting those that are developing new applications and those who have already implemented their AI systems, so they can correct what they have developed, if needed.
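The kind of after-the-fact audit described here can be as simple as comparing a deployed system's favourable-outcome rates across groups. The sketch below uses hypothetical predictions and a basic "demographic parity" style gap; the sample data, group labels and any threshold for concern are illustrative assumptions, not part of the actual framework:

```python
# Hypothetical sketch of a self-assessment style check: compare a deployed
# model's favourable-outcome rate across genders and report the gap.

def favourable_rate(predictions, group_labels, group):
    """Share of favourable outcomes (1) among cases in the given group."""
    outcomes = [p for p, g in zip(predictions, group_labels) if g == group]
    return sum(outcomes) / len(outcomes)

def parity_gap(predictions, group_labels, a="male", b="female"):
    """Absolute difference in favourable-outcome rates between two groups."""
    return abs(favourable_rate(predictions, group_labels, a)
               - favourable_rate(predictions, group_labels, b))

preds  = [1, 1, 1, 0, 1, 0, 0, 0]        # 1 = favourable outcome (hypothetical)
groups = ["male"] * 4 + ["female"] * 4   # hypothetical audit sample

print(f"parity gap: {parity_gap(preds, groups):.2f}")  # 0.50
```

A large gap on its own does not prove discrimination, but in an audit it flags exactly the kind of pattern that would warrant the closer review the self-assessment guide is meant to prompt.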

However, not all judiciaries develop these applications in-house, and they do not always have the necessary expertise internally. A lot of judiciaries are outsourcing applications, so we hope this will be a helpful tool for any developer interested in building applications for judicial institutions. As a result, when they work with the judiciaries, they will know what parameters to follow when developing applications. Therefore, we also see a potential spillover effect to other industries.

In addition, the framework will contribute to the wider understanding of gender issues in the judiciary. We have mentioned how caseloads and data are biased, so, through the work of the Network in developing this framework, judiciaries may also better understand how case data reflect biased and discriminatory decisions. This can have an impact on wider considerations about ethics in the judiciaries and how these can be addressed through, for example, training on gender issues. This could help with improving the data itself. Overall, we hope that the biggest impact is on gender equality and making the judiciary more gender-sensitive and inclusive.

The Global Judicial Integrity Network is currently welcoming inputs from colleagues who have an interest and expertise in this subject matter to further strengthen the framework. Please contact Roberta Solis at Roberta.Solis@UN.org for more information.