AI in the Family Justice System: opportunities and risks

Could AI help the family justice system in England and Wales? Aliya Saied-Tessier looks at the issues.

Artificial Intelligence (AI) is widely discussed, including in areas related to public sector services. However, the term AI is frequently misused, and claims about what AI technologies are capable of can be exaggerated. This article sets out some key definitions and explores potential uses of AI in the family justice system. It also outlines some challenges and risks, which vary depending on the specific AI application, and briefly discusses governance.

The aim of this article, and of the briefing paper on which it is based, is to prompt discussion about whether, and which, applications of AI would be helpful in the family justice system in England and Wales, and to consider what guidance or regulation is needed. The hope is to encourage professionals to think about how AI technologies can be used safely and fairly to help the families that go through family proceedings.

Definitions

There is no single accepted definition of ‘AI’ – but there is a consensus that the core concept refers to machines simulating human intelligence. Human intelligence processes are extensive and include the ability to acquire and process information, learn from experience, reason and make decisions. AI tools are therefore not homogenous – uses include facial recognition technologies, virtual assistants (e.g. chatbots on websites), and tools that assess eligibility and risk, such as the Home Office visa-screening tool used from 2015 to 2020.

AI usually refers to either generative AI or machine learning tools. Generative AI refers to AI that is used to create new text, video, audio or code,[1] and includes large language models (LLMs), which produce text output (Cabinet Office/CDDO 2024). Machine learning is a subset of the tools currently being described and marketed as AI. While analysts have long built models to identify patterns and/or predict events, the aim of machine learning is to enable computers to learn on their own, without being explicitly programmed by a human (Maini 2017).

How could AI be used in the family justice system?

We have categorised possible uses of AI in the family justice system into applications that can i) improve families’ experiences of the family justice system, ii) process administrative tasks leading to efficiencies or iii) support decision-making. Some examples are already being used while others are theoretical.

Improving family experiences

  1. Virtual assistants: AI-powered chatbots can help users find information, complete forms, and navigate the legal process. For example, New Jersey state courts have implemented a chatbot named JIA, which assists in handling public inquiries.
  2. Language ‘translation’: Large language models (LLMs) can simplify legal jargon into plain English or other languages, making court documents more accessible. Private companies like Legalese Decoder already offer such services in the US, though no research was found about the accuracy of these services.
  3. Legal advice: AI could theoretically support litigants by drafting submissions or citing relevant cases based on extensive case law databases. However, the risk of inaccuracies and "hallucinations" in AI-generated information must be carefully managed.

Enhancing administrative efficiencies

AI technologies could support with administrative tasks, leading to efficiencies for professionals:

  1. Document review: AI tools can rapidly scan and classify large volumes of documents for relevance, reducing the manual labour involved in document review.
  2. Case management: AI-driven case management systems can classify and route cases to appropriate teams. For instance, AI-powered software in Florida courts classifies and dockets e-filed documents, speeding up case processing.
  3. Drafting: AI can assist in drafting various documents, from child-friendly judgments to presentations and summaries. The UK's judiciary has noted the potential for AI to draft summaries of large texts and write presentations, emphasising the need for accuracy checks.

Supporting decision making

AI could theoretically play a role in supporting decision-making processes within the family justice system:

  1. Online dispute resolution: AI tools can facilitate mediation and settlement negotiations on online dispute resolution platforms. For example, the Modria platform supports divorce, landlord-tenant, and employment disputes, resolving claims without human intervention, though we found no independent research on the effectiveness of the tool.
  2. Predictive analysis and risk assessment: AI can predict outcomes based on historical data to support professionals in making decisions. This type of analysis is well established when done without AI tools, for example through statistical analysis. More recently, machine learning approaches have been adopted to analyse historical case data to predict various outcomes and assess risks, such as in cases involving domestic violence or child abuse. While some machine learning models have outperformed traditional methods, concerns about bias, accuracy and transparency persist.

The examples above indicate that the potential uses of AI in the family justice system are broad, but they are accompanied by a number of challenges, such as the need for rigorous testing and ongoing monitoring to ensure tools remain accurate and fair.

Challenges and risks

There are challenges and risks to using AI in any field, for example around fairness, accountability, safety and transparency. However, due to the nature of decisions made in family courts, tolerance of errors may be lower than in other areas in which AI is being used.

Fairness

There is a lot of discussion about AI technologies being biased. Bias in using AI can come from two sources – the data and how the AI tools are used. AI systems can inherit biases present in training data, potentially leading to discriminatory outcomes. Machine learning algorithms use data generated by humans, which can cause them to reproduce or exacerbate existing biases such as racial bias (e.g. the COMPAS system predicted higher risk of reoffending for black defendants) or gender bias (e.g. Google Ads showing fewer adverts for high-paying jobs to women) (Ntoutsi et al. 2020). Addressing and mitigating algorithmic bias in AI systems is a complex challenge that requires ongoing monitoring and adjustment. There are also issues with bias around how algorithms are used. Organisations using machine learning algorithms are often not transparent about their usage and adopt them without consulting marginalised communities (Okidegbe 2022).

In addition, people with insufficient means to fund quality human alternatives may turn to lower-quality and/or unregulated software tools in their place. One example is people using unregulated AI tools that predict their likely financial settlements following family breakdown as an affordable alternative to professional legal advice. Another example is potentially unreliable AI translation in place of a human interpreter. This tilts the playing field against those with fewer financial resources and/or less access to technology.

Accountability

The Alan Turing Institute describes a challenge of accountability when using AI technologies: the complexity of AI design means it is not straightforward to establish responsibility among the large number of people involved, including technical experts, data teams, policy experts and users.

Privacy and compliance

Handling confidential personal data in family law cases raises concerns about data privacy and security.[2] Protecting the privacy of individuals and families while using AI for case management and risk assessment is crucial. AI is within the scope of existing data protection laws (GDPR and the Data Protection Act 2018) if it involves personal data in training, testing or using models (Information Commissioner's Office). Guidance for judges and barristers emphasises caution in thinking about what is being fed into generative AI models. Is further clarity needed in legal standards and regulations concerning the use of AI within the family justice system, especially as new uses of the technologies emerge?

Transparency

Being transparent about when AI has been used is likely to be critical across a number of domains to secure public trust and ensure effective regulation. Monitoring AI use by both professionals and the public is required to understand the impact of AI on the family justice system. The Ada Lovelace Institute recommends transparency labelling for AI-generated content.

Ensuring that the public and legal professionals have trust in AI systems used in the family justice system is paramount. Negative experiences or perceptions of AI may erode this trust. The Ada Lovelace Institute has researched public attitudes to AI in Britain and found that support for AI depends on how it is being used; for the majority of AI uses in its survey, people were broadly positive. For example, using AI to detect cancer was seen as beneficial by nine out of ten people, but there were concerns about some uses such as driverless cars and robotic weapons (ALI 2023). The research highlighted that the public believe AI systems should be transparent.

AI models used in risk assessment and decision support can be highly complex, making it difficult for both professionals and the public to understand how decisions are reached, which can be a barrier to effective use. Professionals responsible for decisions based on AI output will need to understand how results are reached in order to evidence their decisions, which can be difficult with machine learning models.

Research by a firm called Vectara found that the main LLMs hallucinate, or invent information, between 3% and 30% of the time (NY Times Nov 2023). Even when giving correct information, LLMs may provide advice that is not relevant; for example, the legal information used to train LLMs often has a US focus due to the nature of the training data (Courts and Tribunals Judiciary 2024). People seeking legal support or advice from LLMs may not be aware of these limits on the accuracy and relevance of information from generative AI.

Governance and regulation

Effective governance and regulation are paramount for the responsible use of AI in the family justice system. The Government published A pro-innovation approach to AI regulation in March 2023, which outlined five principles: i) safety, security and robustness, ii) appropriate transparency and explainability, iii) fairness, iv) accountability and governance and v) contestability and redress. In contrast to the UK’s principles-based approach, the EU has proposed a rules-based approach to AI regulation. The Artificial Intelligence Act proposes that AI uses are categorised according to the level of risk they pose to users: i) unacceptable risk, which is banned, such as social scoring based on a person’s socio-economic status; ii) high risk, which covers uses in sensitive sectors such as law enforcement or healthcare; iii) generative AI such as ChatGPT, which must meet transparency requirements such as disclosing when content has been generated by AI; and iv) limited risk.

The family justice system will need its own guidelines and ethical principles for governing AI applications. Earlier this year, the Bar Council published considerations for barristers using ChatGPT and generative AI, which state that barristers who use LLMs should do so “responsibly” and weigh up the potential risks and challenges given their professional responsibilities (Bar Council Jan 2024). The guidance advised barristers using LLMs to check and verify the models’ outputs, respect legal privilege and confidential information, and comply with data protection requirements. The considerations end with a reminder to barristers to keep up to date with the Civil Procedure Rules, which they note may be updated to require parties to disclose where they have used generative AI in the preparation of materials.

Conclusion

There are numerous potential benefits of using various AI tools to support families and professionals in the family justice system; however, these do not come without risks. Effective governance is required to ensure that the challenges are successfully addressed, allowing any gains to be realised in a way that does not undermine public trust, the accountability of professionals or access to justice. Before any regulation is agreed, and given the potential challenges around public perceptions of AI, it will be important to engage members of the public and collaborate with other parts of the justice system and the wider public sector, as well as academia and international partners.

Aliya Saied-Tessier is a researcher at the Nuffield Family Justice Observatory.

[1] See pages 27-28 of the Centre for Teaching and Learning at the University of Oxford’s report Beyond ChatGPT: State of AI in Academic Practice for a brief description of a range of generative AI tools.

[2] Samsung has restricted use of ChatGPT after an engineer input confidential code into the model (The Economist 27/11/23).