One Small Step for Courts: Action Committee Guidelines a Welcome Improvement
Written by Daniel J Escott
Research Fellow – Artificial Intelligence Risk and Regulation Lab
danielescott@uvic.ca
Introduction
The Canadian discourse on AI continues to expand in fascinating ways. Recently, the Action Committee on Modernizing Court Operations published three documents: Guidelines on the use of AI by courts, Guidelines on the use of AI by litigants in court proceedings, and a standalone document endeavouring to “demystify” AI. This new set of Guidelines is a welcome improvement over the Canadian Judicial Council’s (CJC) Guidelines, which address only the use of AI by courts and which I thoroughly assessed and critiqued last month. The Action Committee’s Guidelines offer substantial improvements insofar as they seem to incorporate basic change management principles, and they highlight applications that would be considered lower risk in court settings. However, by artificially limiting their scope to the question of “how should courts use AI”, these Guidelines also suffer from many of the same deficiencies as the CJC Guidelines, including a lack of reflection on the state of institutional readiness for conversations about AI and no effort to grapple with the looming question of judicial independence.
As with my assessment of the CJC Guidelines, I have analyzed the Action Committee Guidelines with a view to promoting best practices, identifying areas for improvement, and helping to ensure safe and responsible AI adoption in high-risk institutions. First, I will provide a brief overview of the three documents published by the Action Committee. Then I will identify their strengths, including where they supplement or improve upon the CJC Guidelines. I will then critique the weak points of the Action Committee Guidelines and conclude with a series of recommendations for future improvements.
Overviews of the New Guidelines
Use of AI by Courts to Enhance Court Operations
This set of guidelines outlines the potential benefits of AI use by courts, including increased efficiency, accuracy, targeted resource allocation, and improved access to justice, alongside challenges such as barriers to access, inaccuracy, bias and discrimination, data management, transparency, privacy and cybersecurity, and the potential loss of personal connection. The document provides orienting principles for the responsible use of AI, emphasizing human oversight, communication, data privacy, and continuous learning. It also outlines key stages for the rollout of AI tools: needs assessment, planning, project management, data handling, design, deployment, and decommissioning.
Use of AI by Court Users to Help Them Participate in Court Proceedings
This set of guidelines addresses the use of AI by court users, including litigants, counsel, and others engaging directly with the courts. It outlines the relevant benefits and challenges, emphasizing the importance of understanding how court users employ AI and the potential impact on the administration of justice. The document also provides operational considerations for developing guidance for court users, including avoiding technical jargon, defining the scope of application, distinguishing between types of AI tools, and framing guidance around preventing the introduction of inaccurate information.
Demystifying AI in Court Processes
This document aims to promote a common understanding of key terms and basic concepts surrounding AI. It describes various AI applications, including legal research, document review, legal analysis, translation, transcription, substantive assistance to court staff, case flow management, predictive analytics, legal triaging and assistance, and online dispute resolution. It also discusses specific AI tools that court personnel may already have encountered, such as ChatGPT, Google Gemini, and Microsoft Copilot.
Strengths & Improvements over CJC Guidelines
First, I must applaud the Action Committee for producing three separate documents with clearly different audiences. This allowed them to focus each document tightly enough to provide relevant information and a level of nuance that more general approaches, like the CJC Guidelines, lack. Other notable strengths include:
Comprehensive Overview: The Action Committee guidelines provide a more comprehensive overview of the key issues surrounding AI in courts than the CJC Guidelines, which I criticized for being too general and lacking specificity. This gives courts a solid foundation for making informed decisions about adoption and implementation.
Focus on Access to Justice: Both the Action Committee and CJC Guidelines emphasize the importance of using AI to enhance access to justice, ensuring marginalized or vulnerable groups are not negatively impacted. This helps to ensure that AI is used to promote fairness and equity in the justice system, making courts more accessible to everyone.
Plain Language and Accessibility: The Action Committee guidelines are written in plain language, making them more accessible to a wider audience compared to the CJC Guidelines, which I criticized for their lack of clarity. This ensures that the guidelines can be used by a wide range of stakeholders, including judges, court staff, lawyers, and the public, promoting transparency and understanding.
Practical Guidance: The Action Committee guidelines offer more practical guidance on the implementation and rollout of AI tools in courts than the CJC Guidelines, which I criticized for lacking concrete guidance on the “implementation” or “deployment” stage of a court’s AI journey (more on this later). This provides courts with a roadmap for effectively integrating AI into their operations, minimizing disruption and maximizing the potential benefits.
Emphasis on Human Oversight: The Action Committee guidelines place a stronger emphasis on human oversight in AI use compared to the CJC Guidelines, which I criticized for lacking a robust “human in the loop” principle. This helps to mitigate potential risks and biases associated with AI, ensuring that AI tools are used in a way that is consistent with the values and principles of the Canadian justice system.
Proactive Approach: Both the Action Committee and CJC Guidelines take a proactive approach to addressing the challenges and opportunities of AI in courts. This encourages courts to be mindful of the potential impacts of AI and to take steps to ensure that AI is used responsibly and ethically.
Focus on Canadian Context: Both the Action Committee and CJC Guidelines are specifically tailored to the Canadian legal landscape. This ensures that the guidelines are relevant and applicable to Canadian courts, promoting the responsible adoption of AI within the Canadian legal framework.
Regular Updates: Both the Action Committee and CJC Guidelines are intended to be regularly updated to reflect the evolving nature of AI. This keeps the guidance current and practical in a rapidly changing technological landscape.
Areas for Improvement
While the Action Committee guidelines offer a valuable foundation for understanding and addressing AI in Canadian courts, they exhibit certain deficiencies and areas that require improvement:
Lack of Specificity and Practical Guidance
The Action Committee guidelines, while well-intentioned, often fall short of providing truly actionable advice for Canadian courts. They tend to offer high-level recommendations without the specificity or practical guidance needed for the crucial steps that precede “implementation” or “deployment” in a court’s AI journey.
Vague and General Recommendations: The guidelines often provide general advice without concrete examples or specific recommendations for addressing the challenges and opportunities of AI in diverse court settings. The Action Committee takes a top-down, “one-size-fits-all” approach to technology in courts, leaving several concerning operational gaps that will remain barriers to adherence.
Limited Guidance on Implementation: While the guidelines outline the stages of AI rollout, they lack detailed instructions on operationalizing the recommendations, particularly for conducting algorithmic impact assessments and implementing training and capacity-building measures (a hypothetical sketch of what such an assessment could look like follows this list). They likewise lack concrete guidance on how to responsibly explore AI use cases, safely evaluate them, and deploy them. In this sense the Guidelines feel aspirational, leaving much to be desired in bridging the gap between where Canadian courts are and where they would need to be before they could begin operationalizing the guidelines.
Insufficient Attention to Change Management: While some basic principles of change management have snuck their way into the guidelines, their inclusion is only surface-level. The guidelines lack a comprehensive approach to change management beginning at the “we know nothing about AI and we have no staff or infrastructure to deploy it” stage of a court’s AI journey, which is where Canadian courts currently are. This sort of guidance is crucial for the successful adoption and implementation of AI tools in courts.
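To illustrate the kind of concreteness I have in mind: neither document shows what an algorithmic impact assessment might actually look like in practice. As a purely hypothetical sketch (the questions, weights, and risk tiers below are my own assumptions, not criteria drawn from the Action Committee Guidelines or any existing directive), even a simple weighted questionnaire that maps answers to a risk tier would give courts somewhere to start:

```python
# Hypothetical sketch: a minimal algorithmic impact assessment (AIA) scoring aid.
# The questions, weights, and risk tiers are illustrative assumptions only.

# Each entry is (question, weight added when the answer is "yes").
AIA_QUESTIONS = [
    ("Does the tool process personal or sealed information?", 3),
    ("Could the tool's output influence a judicial decision?", 5),
    ("Is the tool a third-party service hosted outside the court?", 2),
    ("Does the tool generate text shown to court users?", 2),
    ("Is there a documented human review step before output is used?", -3),
]

def assess(answers: dict[str, bool]) -> tuple[int, str]:
    """Sum the weights of 'yes' answers and map the total to a risk tier."""
    score = sum(weight for question, weight in AIA_QUESTIONS
                if answers.get(question, False))
    if score >= 7:
        tier = "high risk: formal review required before any pilot"
    elif score >= 3:
        tier = "medium risk: pilot only with monitoring and human oversight"
    else:
        tier = "low risk: suitable for supervised experimentation"
    return score, tier

if __name__ == "__main__":
    # Worst case: every risk factor applies, but a human review step exists.
    answers = {question: True for question, _ in AIA_QUESTIONS}
    print(assess(answers))  # (9, 'high risk: ...')
```

Even a toy like this forces a court to name its risk factors explicitly, which is precisely the step the guidelines currently leave undefined.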
Oversimplified AI Governance and Judicial Independence
The guidelines also demonstrate an oversimplified approach to AI governance and judicial independence. They do not adequately address the complexities of governance in the context of AI adoption in courts, and they leave the question of AI’s impact on judicial independence largely unaddressed.
As seen with the CJC Guidelines, judicial independence is one of the key concerns (if not the key concern) of the Canadian judiciary regarding the deployment of AI in courts. Guidelines on the responsible use of AI in courts are unlikely to achieve meaningful impact for stakeholders until these concerns are adequately addressed: courts will (rightfully) refrain from operationalizing any set of guidelines on how to use AI until they are comfortable doing so.
Underdeveloped AI Governance Framework: The guidelines do not adequately address the complex governance issues raised by AI, including accountability, oversight, and the development of ethical frameworks. While the inclusion of human oversight is a welcome addition, the guidelines still lack a robust “human-in-the-loop” principle to guide AI use and prevent over-reliance on AI systems (a minimal sketch of what such a principle could look like in practice follows this list). A strong AI governance framework is essential for ensuring that AI is used responsibly and ethically in courts: clear lines of accountability for AI-assisted decisions, robust oversight mechanisms to monitor AI performance, and ethical guidelines informing the design, development, and deployment of AI systems.
Limited Consideration of Judicial Independence: While the guidelines mention judicial independence, they do not adequately address how AI may affect this fundamental principle; they lack a thorough analysis of how specific AI applications can affect judicial autonomy and decision-making. Judicial independence is a cornerstone of the Canadian justice system, and the guidelines must analyze in detail how AI may affect it, including both the potential benefits and the risks, considering how specific AI applications can support judicial autonomy and decision-making as well as how AI might undermine them.
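To make the “human-in-the-loop” point concrete: at minimum, the principle implies that no AI output reaches a court record or a decision-maker without an explicit, attributable human approval step. The sketch below is my own minimal illustration of such a gate, not a design endorsed by either set of guidelines; generate_draft() is a hypothetical stand-in for any AI drafting tool:

```python
# Hypothetical sketch: a minimal human-in-the-loop gate for AI-assisted drafting.
# The structure is the point: AI output is quarantined until a named human
# reviews and signs off. Function and field names are illustrative assumptions.

from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Draft:
    text: str
    reviewed_by: str | None = None   # no reviewer means the draft cannot be used
    reviewed_at: datetime | None = None

def generate_draft(prompt: str) -> Draft:
    """Hypothetical stand-in for any AI drafting tool."""
    return Draft(text=f"[AI-generated draft for: {prompt}]")

def approve(draft: Draft, reviewer: str) -> Draft:
    """Record an attributable human sign-off."""
    draft.reviewed_by = reviewer
    draft.reviewed_at = datetime.now(timezone.utc)
    return draft

def file_draft(draft: Draft) -> None:
    """Refuse to file anything a human has not approved."""
    if draft.reviewed_by is None:
        raise PermissionError("Unreviewed AI output cannot enter the court record.")
    print(f"Filed draft approved by {draft.reviewed_by} at {draft.reviewed_at}.")

if __name__ == "__main__":
    draft = generate_draft("summary of procedural history")
    file_draft(approve(draft, reviewer="registry officer (example)"))
```

The value of stating the principle this concretely is that it becomes testable: a court can verify that no pathway exists from AI output to the record that bypasses the approval step.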
Additional Areas for Improvement
Explainability and Transparency: The guidelines should provide more explicit guidance on addressing the “black box” problem of AI. The opacity of how some AI systems produce their outputs raises concerns about fairness and accountability, and the guidelines should explain how to ensure the explainability of AI systems used in courts so that humans can understand how AI-assisted decisions are made (a sketch of one such measure, decision logging, follows this list).
Data Governance and Privacy: AI systems often rely on large datasets, raising the risk of data breaches and privacy violations. The guidelines need more detailed guidance on the responsible use of data in AI systems, including data security, data quality, and data privacy.
Bias Mitigation: While the guidelines mention bias, they lack specific strategies for identifying and mitigating biases in AI systems, particularly those that may perpetuate discrimination against marginalized groups. AI systems can perpetuate and amplify existing biases in their data, leading to discriminatory outcomes, so the guidelines should set out concrete strategies for detecting and mitigating bias to ensure that AI promotes fairness and equity in the justice system.
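Returning to explainability: one concrete measure the guidelines could name is a decision record, in which every AI-assisted output is logged with its inputs, the tool and version used, and the stated rationale, so the basis of a recommendation can be reconstructed after the fact. The sketch below is a hypothetical illustration of that idea; the field names are my own assumptions, not a format the guidelines prescribe:

```python
# Hypothetical sketch: an append-only audit record for AI-assisted outputs,
# so that how a recommendation was produced can be reconstructed and reviewed.
# Field names are illustrative assumptions, not drawn from the guidelines.

import json
from datetime import datetime, timezone

def log_ai_decision(tool: str, version: str, inputs: str, output: str,
                    rationale: str, path: str = "ai_audit.log") -> None:
    """Append one JSON line per AI-assisted output to an audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "tool": tool,            # which system produced the output
        "version": version,      # exact version, so behaviour can be reproduced
        "inputs": inputs,        # what the tool was given
        "output": output,        # what the tool produced
        "rationale": rationale,  # the stated basis for relying on the output
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

if __name__ == "__main__":
    log_ai_decision(
        tool="translation-assistant", version="2025.1",
        inputs="French source paragraph", output="English translation",
        rationale="machine translation reviewed by a certified translator",
    )
```

A record like this does not open the black box, but it gives reviewers, auditors, and appellate courts a trail to follow, which is the minimum transparency the guidelines should demand.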
Recommendations
To address the deficiencies and areas needing improvement in the Action Committee guidelines, and drawing upon the best practices outlined in my Report on Exploring AI at High-Risk Legal Institutions, the following recommendations are provided:
Recommendation 1: Enhance Specificity and Practical Guidance
The Action Committee should revise its guidelines to provide more concrete examples and specific recommendations for addressing the challenges and opportunities of AI in diverse court settings, while also considering the unique circumstances and needs of each court.
Provide detailed guidance on implementation: Offer step-by-step instructions on operationalizing recommendations, including conducting needs assessments, evaluating AI tools, implementing training programs, and establishing capacity-building measures.
Tailor guidance to the specific needs of each court: Recognize that Canadian courts vary significantly in their size, resources, and technical expertise. Provide guidance that allows each court to mould the broad recommendations to their distinct circumstances and needs.
Reflect an understanding of the current capacities of Canadian courts: Acknowledge the current realities of Canadian courts, including their policy landscape, infrastructure limitations, staffing constraints, resource availability, and information technology capacities. Offer guidance that is practical and achievable within these constraints.
Recommendation 2: Strengthen AI Governance and Address Judicial Independence
The guidelines should be revised to provide a more robust AI governance framework and a more in-depth analysis of how AI may impact judicial independence.
Develop a comprehensive AI governance framework: Address accountability, oversight, and ethical frameworks, incorporating a robust “human-in-the-loop” principle to guide AI use and prevent over-reliance on AI systems.
Provide a detailed analysis of the impact on judicial independence: Examine how specific AI applications may impact judicial autonomy and decision-making, including potential benefits and risks.
Recommendation 3: Address Explainability, Data Governance, and Bias Mitigation
The guidelines should offer more explicit guidance on addressing the “black box” problem, ensuring data governance and privacy, and mitigating biases in AI systems.
Promote explainability and transparency: Provide guidance on ensuring the transparency and explainability of AI systems used in courts, allowing humans to understand how AI-driven decisions are made.
Strengthen data governance and privacy: Offer detailed guidance on data governance and privacy, addressing data security, data quality, and data privacy.
Develop specific strategies for bias mitigation: Provide clear guidance on identifying and mitigating biases in AI systems, ensuring that AI is used to promote fairness and equity in the justice system.
Recommendation 4: Emphasize Change Management, User-Centric Design, and Accessibility
The guidelines should place stronger emphasis on change management and user-centric design, and should ensure that AI tools are accessible to all users, regardless of their technical skills or disabilities.
Incorporate best practices from other high-risk fields: Draw upon best practices in change management from other high-risk areas, such as healthcare, to inform the design and implementation of AI tools in courts. This includes understanding the importance of clear communication, comprehensive training, and addressing potential resistance to change.
Align with best practices from high-risk IT sectors: Incorporate best practices from high-risk information technology sectors, such as finance and national security, to ensure the security and privacy of legal data. This includes implementing robust cybersecurity measures, data encryption protocols, and strict access controls.
Promote user-centric design principles: Encourage the development of AI tools that are intuitive, easy to use, and address the needs of all users. This includes incorporating user feedback throughout the design and development process and ensuring that AI tools are designed to support and enhance human workflows.
Ensure accessibility for all users: Provide guidance on designing and implementing AI tools that are accessible to individuals with disabilities, ensuring that AI does not create new barriers to justice. This includes adhering to accessibility standards and guidelines, providing alternative formats for content, and ensuring that AI tools are compatible with assistive technologies.
Recommendation 5: Foster a Culture of Innovation and Collaboration
The guidelines should encourage courts to foster a culture of innovation and collaboration, promoting ongoing learning and knowledge sharing.
Encourage experimentation and knowledge sharing: Support courts in experimenting with new AI tools and technologies, and promote the sharing of best practices and lessons learned across the justice system.
Invest in training and capacity building: Provide ongoing training and support to court staff, judges, and other stakeholders to ensure they have the knowledge and skills to use AI tools effectively.
Conclusion
The Action Committee’s Guidelines represent a significant advancement over the CJC Guidelines, demonstrating a deeper understanding of the complexities associated with AI adoption in Canadian courts. They offer a more comprehensive overview of the key issues, a stronger emphasis on human oversight, and more practical guidance on implementation.
However, the Action Committee’s Guidelines still need improvement. They lack specificity and actionable advice, particularly in areas like change management and implementation. Additionally, they do not adequately address the complexities of AI governance and the potential impact on judicial independence.
Two critical questions remain unanswered: how to define “legal tech change management” and how AI will affect judicial independence. These areas require further exploration to ensure the responsible and effective adoption of AI in Canadian courts. Future publications will delve deeper into these questions, providing much-needed guidance on navigating these uncharted territories.
If you wish to discuss my analysis of the Action Committee’s Guidelines or my future work on legal tech change management and AI’s impact on judicial independence, please do not hesitate to contact me directly at danielescott@uvic.ca. Let’s work together to ensure a future where AI is used responsibly and ethically in Canadian courts, enhancing access to justice for all.