Towards Responsible AI Integration: Evaluating the CJC Guidelines for Canadian Courts
Written by Daniel J Escott
Research Fellow – Artificial Intelligence Risk and Regulation Lab
danielescott@uvic.ca
Introduction
The rapid advancement of artificial intelligence (AI) presents both opportunities and challenges for the Canadian justice system. While AI offers the potential to enhance efficiency, reduce workloads, and improve access to justice, it also raises concerns about fairness, bias, transparency, and the preservation of judicial independence. In response to these developments, the Canadian Judicial Council (CJC) has released its "Guidelines for the Use of Artificial Intelligence in Canadian Courts" (CJC Guidelines) to provide guidance for judges and court administrators navigating this new landscape. Given my experience drafting similar guidelines, I have conducted a comprehensive evaluation of the CJC Guidelines, analyzing their merits, deficiencies, and potential implications, while offering recommendations for improvement.
Practicality and Usefulness of the CJC Guidelines
The CJC Guidelines are intended as a resource for beginning the discussion of how Canadian courts can use AI. They represent a commendable step towards establishing a framework for the responsible use of AI in Canadian courts, articulating several key principles, including the importance of judicial independence, accountability, transparency, and ethical considerations in AI adoption. However, for guidance at this stage of AI development in the Canadian justice system to be truly effective, it must be both comprehensive and practical. The current iteration of the CJC Guidelines falls short of this goal.
The Guidelines are undoubtedly useful in laying a conceptual groundwork for AI adoption in courts. They highlight crucial considerations that need to be addressed for the ethical and responsible use of AI. For instance, their emphasis on judicial independence, accountability, and transparency provides a valuable framework for approaching AI implementation. Similarly, their guidance on ethical and legal considerations helps ensure that AI is used in a way that aligns with the core values of the Canadian justice system. It is also commendable that the CJC embarked on developing policies for AI, given the complexity and nascent nature of this rapidly developing field. Developing AI policy is challenging; however, it is vital that we approach it with rigour to ensure the long-term health and integrity of the Canadian justice system.
Notwithstanding these commendable efforts, the practicality of the Guidelines remains open to question. While they are useful in highlighting key principles, I contend that they lack concrete guidance on how to implement these principles in practice. This lack of direction could lead to confusion, inconsistency, and unintended risks in how courts adopt and use AI.
For example, the Guidelines recommend that courts "develop a program of education and provide user support" and "regularly track the impact of AI deployments," but do not provide any guidance on what these programs should entail, how they should be implemented, what metrics should be tracked, or what precisely is being tracked. This lack of specificity makes it difficult or impossible for courts to operationalize the CJC Guidelines. Courts across Canada vary significantly in their size, resources, and technical expertise. Without more specific guidance, it is possible that courts will adopt and use AI in a fragmented and inconsistent manner.
The disconnect between the Guidelines’ approach and the realities of Canadian courts is further highlighted by the fact that many courts currently lack the capacity to collect, analyze, and track data on the systems they already have. Canadian courts are chronically underfunded and understaffed. Court staff are already overburdened with their current workload and lack the capacity to take on new, complex tasks related to AI implementation and management. Furthermore, most courts lack the technical expertise to explore and implement AI systems. As such, Canadian courts will have little or no use for guidance on how to use these systems until they develop the infrastructure, resources, and expertise to explore and implement them. Absent this capacity, operationalizing the Guidelines would be an unrealistic expectation, highlighting the need for more practical and tailored guidance. This disconnect represents a significant barrier to the adoption of technology, and AI specifically, in Canadian courts. Without a fundamental shift in priorities and funding, the unfortunate reality is that most courts cannot explore and implement AI responsibly in the short term.
Furthermore, the Guidelines' limited insights on the initial steps of exploring and implementing AI curtail their comprehensiveness. While they aim to provide a consistent approach to AI utilization and highlight the opportunities and risks, this focus is largely on the use of AI, neglecting meaningful elaboration on the crucial stages of exploration and implementation. This narrow focus may inadvertently give courts the impression that they have a license to use AI without properly considering safe and responsible exploration and implementation.
Merits of the CJC Guidelines
As a valuable foray into the depths of AI, the CJC Guidelines have several merits that demonstrate the CJC and the authors are alive to emerging concerns and to how AI is changing the nature of justice:
Emphasis on Judicial Independence: The Guidelines underscore the importance of preserving judicial independence in the context of AI adoption. This is essential for maintaining public trust in the judiciary and ensuring that AI is not used to undermine judges' autonomy.
Recognition of Potential Benefits: The Guidelines acknowledge AI's potential benefits in improving the efficiency and accuracy of judicial decision-making. This encourages courts to explore the responsible use of AI to enhance the administration of justice.
Guidance on Ethical and Legal Considerations: The Guidelines provide guidance on how to adopt and use AI consistently with ethical principles and legal obligations. This helps ensure that AI is used in a fair, unbiased, and transparent manner.
Deficiencies of the CJC Guidelines
While the CJC Guidelines are a useful introduction for courts on this crucial topic, several key points remain either vague and ambiguous or absent altogether. I have identified the following deficiencies:
Lack of Use-Case Specificity and Differentiation Between Different Types of AI: The Guidelines are very general and lack specific guidance on numerous issues, particularly concerning the nuances of different AI systems and how they are used. They do not meaningfully distinguish between different types of AI, such as generative AI, automated decision-making systems, and non-generative AI. I understand that the choice to cover both generative and automated decision-making systems was intentional. However, the lack of differentiation is problematic: the Guidelines purport to apply equally to generative and automated decision-making systems, yet several of their recommendations cannot apply to many use cases or implementations of these systems, even though they are the “types” being governed. Each type of AI system has unique characteristics that require specific considerations for its ethical and responsible use in court settings.
The Guidelines also express concern over “[n]umerous generative AI applications proposed for courts, including case management systems and alternative dispute resolution…”. This statement either conflates generative AI with AI generally or is an inaccurately broad characterization of what generative AI systems are. Without further elaboration, it is difficult to be certain, but at first glance, there are countless non-generative AI systems that would be useful in the contexts of case management or alternative dispute resolution. This illustrates one of the difficulties of drafting AI regulations “by committee”: many members of such committees have only recently been introduced to the notion of “artificial intelligence”, and it is common to confuse “generative AI” with AI generally, due in no small part to the recent emergence of generative AI as a focal point for public discussion.
The approach of regulating AI systems by “type” rather than by use-case is not unusual; several court and regulatory policies make efforts to regulate generative systems specifically. However, this approach leads to the precise issue found in these Guidelines: attempting to regulate all AI systems relevant to courts as though they were generative, when I would suggest that the most valuable uses of AI for courts flow from non-generative systems. This appears to be the root of why many of the Guidelines’ recommendations, such as those concerning explainability or tracking, are either impractical or impossible to implement with many AI systems. In contrast, regulating by use-case allows for more precise and tailored guidance, recognizing that the same type of AI can be deployed in various ways with vastly different implications for fairness, transparency, and accountability. For example, guidelines for using a generative large language model to summarize factual information would differ significantly from guidelines for using a generative evidence analytics tool to analyze evidence and predict outcomes, even if the same "type" of AI underlies both.
Limited Discussion of Legal Rights: The Guidelines lack a detailed discussion of human rights and how AI can impact human rights, such as the rights to a fair trial and privacy. AI could potentially be used to directly or inadvertently violate these rights. For example, AI could predict the likelihood of recidivism (see, for example, this discussion by Molly Callahan at Boston University), which could then be used to deny bail or impose harsher sentences, raising concerns about due process and discrimination. In future iterations, the Guidelines should be expanded to include a more thorough discussion of the potential impact of AI on human rights, drawing on sources like the UNESCO Draft Guidelines for the Use of AI Systems in Courts and Tribunals, which emphasize the need to protect principles such as fairness, non-discrimination, procedural fairness, and personal data protection.
Insufficient Guidance on Algorithmic Impact Assessments: The Guidelines do not provide meaningful guidance on conducting algorithmic impact assessments, which are essential for ensuring that AI systems are used responsibly and ethically. Algorithmic impact assessments can help identify and mitigate the potential risks of using AI. For example, an algorithmic impact assessment could help identify whether an AI system is biased against certain groups of people, ensuring fairness and equity in its application. As mentioned, most Canadian courts lack the technical expertise to explore and implement AI systems. It would, therefore, be reasonable to assume that most Canadian courts would also lack the technical expertise to conduct these assessments, which are a crucial component of the exploration phase, without clear and specific guidance. The Guidelines could be improved by providing more detailed guidance on conducting algorithmic impact assessments, such as recommending the use of UNESCO's Ethical Impact Assessment or the Government of Canada’s Algorithmic Impact Assessment.
Lack of Detailed Guidance on Training and Capacity-Building: The Guidelines lack detailed guidance on training and capacity building, which are essential for ensuring that court staff can use AI systems effectively and responsibly. This could lead to errors and inefficiencies in the use of AI. For example, court staff may need to be trained in using AI-powered legal research tools or identifying and mitigating potential biases in AI systems. The Guidelines could be improved by providing more detailed guidance on implementing training and capacity-building measures, such as developing tailored training programs for different court personnel, providing ongoing education and support, and reprioritizing court staffing to ensure they have the internal capacity to responsibly and safely manage risks associated with AI.
Absence of a Robust "Human in the Loop" Principle: While the CJC Guidelines touch upon the importance of human oversight in AI use, they lack a robust and explicit "human in the loop" principle like the one articulated by the Federal Court (see, for example, the Government of Canada’s Directive on Automated Decision-Making; see also the Alberta Courts’ Notice to the Profession & Public – Ensuring the Integrity of Court Submissions When Using Large Language Models; see also Federal Court’s Interim Principles and Guidelines on the Court’s Use of AI). This principle emphasizes that AI should not replace human judgment in judicial decision-making; instead, it should support and augment human decision-making, with humans retaining ultimate responsibility for decisions. The CJC Guidelines, while explicitly prohibiting the use of AI for core judicial decision-making or other use cases that risk jeopardizing the role of the judiciary, do not provide clear instructions on how human users of AI systems serve as the “check and balance” of these systems’ outputs. This omission raises concerns about the potential for over-reliance on AI and the erosion of judicial autonomy. The Guidelines' current approach may inadvertently create loopholes for the inappropriate application of AI in court administration and decision-making processes. A strong "human in the loop" principle is crucial to ensure that AI is used responsibly and ethically in Canadian courts. It safeguards against the potential risks of bias, errors, and the erosion of judicial independence. Future versions of the CJC Guidelines should be revised to incorporate a clear and comprehensive "human in the loop" principle, emphasizing the importance of human oversight and control in all aspects of AI use in judicial processes.
Lack of Guidance on Exploring and Implementing AI: The Guidelines primarily focus on the responsible use of AI. However, this focus is impractical unless courts are already equipped to explore and implement AI systems, which most are not. It would be difficult to operationalize the Guidelines without further guidance on the crucial steps of exploring and implementing AI. This omission leaves courts with little direction on navigating the complexities of selecting appropriate AI tools, assessing their suitability for their specific needs and context, and integrating them into existing workflows. This renders the Guidelines impractical for courts at the beginning stages of their AI journey.
Limited Consideration of Judicial Independence: While the Guidelines are motivated by a broad concern for judicial independence, they lack a comprehensive analysis or illustration of how different AI applications can impact this crucial principle. As highlighted above, the “type” of AI system being used matters significantly less than how the system is being used. Considering the potential for AI to both enhance and undermine judicial autonomy, this principle needs to be explored comprehensively. A more thorough examination of this issue is needed to ensure the responsible and ethical implementation and use of AI in Canadian courts.
Potential Consequences of the CJC Guidelines’ Deficiencies
While the CJC Guidelines provide a valuable framework for AI adoption in Canadian courts, following them precisely and exclusively could lead to some unintended consequences:
Lack of Specificity: The CJC Guidelines are very general, potentially leading to confusion and inconsistency in how courts adopt and use AI. Further, the Guidelines do not meaningfully distinguish between different AI systems and their uses. Generative AI, automated decision-making systems, and non-generative AI each have unique characteristics that require specific considerations.
Court Staff’s Unfamiliarity with AI: The CJC Guidelines provide useful guidance for judges and court staff on using generative AI for various tasks, but many courts may lack the necessary expertise to implement and use AI systems effectively, potentially leading to errors and inefficiencies.
Cybersecurity Breaches: The CJC Guidelines emphasize cybersecurity but do not provide specific guidance on implementing cybersecurity measures, potentially leaving courts vulnerable to breaches, especially with the added complexity of AI.
Data Governance and Privacy Concerns: The CJC Guidelines emphasize the importance of data governance and privacy but do not provide specific guidance on governing data or protecting privacy when using AI. This could potentially lead to direct or inadvertent violations of data privacy laws, particularly given the large datasets often involved in AI.
Ineffective Change Management Approaches: Effectively implementing AI systems requires a well-structured change management approach, which the CJC Guidelines do not provide. This could potentially lead to irresponsible errors or resistance from court staff and other stakeholders during the implementation of AI systems. Ineffective change management could also lead to a failure to realize the potential benefits of AI.
Stalled or Failed AI Adoption: The lack of guidance on exploring and implementing AI could lead to courts struggling to adopt AI effectively. Courts might choose unsuitable AI tools, face internal resistance due to inadequate change management, or fail to secure necessary resources, ultimately hindering the successful integration of AI and potentially delaying the realization of its benefits.
Change Management and AI Implementation
As highlighted earlier, the CJC Guidelines focus primarily on the responsible use of AI but offer little guidance on the crucial preceding steps of exploring and implementing it. This omission leaves courts with little direction on navigating the complexities of selecting appropriate AI tools, assessing their suitability, and integrating them into existing workflows.
My “Exploring AI at High-Risk Legal Institutions” report emphasizes the importance of a strategic, well-structured change management approach to integrating AI tools in high-risk legal institutions like courts and tribunals. I highlight the need for stakeholder engagement, user-centric design, risk management, holistic planning, and innovative governance. These aspects are crucial for successful AI adoption, ensuring that the technology is implemented to align with the needs and values of legal institutions, minimizing disruption while maximizing benefits.
Courts will face significant challenges in adopting AI without clear guidance on change management. The CJC Guidelines could be significantly enhanced by walking courts through best practices on change management with a focus on implementing new technologies. This would provide courts with practical advice to navigate the complexities and risks of AI implementation, ensuring a smoother transition, minimizing disruption, and maximizing the chances of successful AI adoption.
Judicial Independence
The CJC Guidelines state that their motivation is a broad concern that the use of AI could impact judicial independence. However, they do not meaningfully explore how AI can impact this fundamental principle. AI has the potential to both support and undermine judicial independence, but this interplay has not been explored or illustrated in a meaningful way.
The reality is that the constitutional principle of judicial independence was never adapted for the 21st century. Judicial independence has three characteristics: security of tenure, financial security, and administrative independence (Ref re Remuneration of Judges of the Prov. Court of P.E.I.; Ref re Independence and Impartiality of Judges of the Prov. Court of P.E.I., 1997 CanLII 317 (SCC) at para 115). The use of AI in courts would not jeopardize judges’ security of tenure or financial security, so it could only ever threaten administrative independence. While this principle remains of central importance to our constitutional democracy, the administrative independence of the judiciary is a concept elucidated in the days when “Zoom” was a sound made by fast cars and 16-bit computers were considered the “next generation” of high-end home computers. In Valente v The Queen, 1985 CanLII 25 (SCC) [Valente], the Supreme Court of Canada defined what is protected by administrative independence in narrow terms:
Assignment of judges;
Sittings of the court;
Court lists; and,
The related matters of allocation of courtrooms and direction of the administrative staff engaged in carrying out these functions. (Valente at 709)
In 1985, these tasks were performed by humans using paper. When courts began adopting technology, little thought was given to the idea that the technology itself could jeopardize judicial independence. What bears significant study and exploration is how the use of AI by judges or court staff could impact administrative independence, and whether it falls within the broader logic and reasoning of Valente’s terms or whether those terms need to be substantively revisited by the Supreme Court of Canada. This is not easily answered, but it is my next research focus with the Artificial Intelligence Risk and Regulation Lab. By addressing these questions, the CJC Guidelines can help ensure that AI is used to strengthen, rather than undermine, judicial independence in Canada.
Recommendations for Improving Guidelines
The CJC Guidelines or future guidelines on the use of AI by courts and tribunals would benefit from the following recommendations:
Provide Specific Guidance on Different Types and Use Cases of AI: Policies on the use of AI by courts and tribunals should move beyond general recommendations and delve into the specifics of different AI types and their potential applications within the court system. This nuanced approach is crucial because different AI systems have unique characteristics, capabilities, and limitations that necessitate tailored guidance for responsible and ethical implementation. For instance, the Guidelines should offer specific guidance on using automated decision-making systems (see, for example, the Government of Canada’s Directive on Automated Decision-Making). These systems raise concerns about transparency, accountability, and potential bias in their algorithms. Clear guidelines are needed on how any given AI system can be used; for example, guidance on using a system to assist with risk assessment would differ widely from guidance on using a system to assess case prioritization. Practical guidance should address how each type of AI can be used for specific tasks while emphasizing the importance of human oversight, verification, and critical evaluation to ensure fairness, accuracy, and transparency.
Expand the discussion of legal rights: Institutions can provide a more detailed discussion of how AI can impact specific rights and principles, such as the right to a fair trial or the principle of procedural fairness. Policies on the use of AI in these high-risk institutions must be informed by their interactions with human rights (see, for example, the Law Commission of Ontario’s Submission on Bill 194 - Strengthening Cyber Security and Building Trust in the Public Sector Act, 2024).
Provide a walkthrough of conducting algorithmic impact assessments: Courts would benefit from more detailed guidance on how to conduct algorithmic impact assessments, especially if the expectation is that court technology staff would conduct these assessments without sufficient expertise in AI. The Guidelines would also benefit from a pan-Canadian recommendation for the scope and format of these assessments, such as recommending the use of UNESCO's Ethical Impact Assessment or the Government of Canada’s Algorithmic Impact Assessment. Considering the pan-Canadian interaction between courts and the Constitution, assessing how their use of AI impacts stakeholders and court users should have a unified approach.
Provide more detailed guidance on training and capacity-building: More detailed guidance is needed on developing and implementing training and capacity-building measures, such as developing tailored training programs for different court personnel and providing ongoing education and support. A stronger emphasis on either reprioritizing or retraining court staff to bring expertise on AI in-house would significantly impact their capacity to operationalize AI guidelines.
Incorporate a Robust "Human in the Loop" Principle: The Guidelines should be revised to include a clear and comprehensive "human in the loop" principle. This principle should unequivocally state that AI should not replace human judgment in judicial decision-making. Instead, AI should support and augment human decision-making, with humans retaining ultimate responsibility for decisions. This principle should be ubiquitous and apply to all uses of AI in the justice system without exceptions or loopholes. This is crucial to mitigate the risks posed by the potential for hallucinations and the "black box" nature of many AI systems, which undermines the Guidelines’ recommendation on explainability. By emphasizing human oversight and control, policies can ensure that AI is used responsibly and ethically in Canadian courts, safeguarding against the potential risks of bias, errors, and the erosion of judicial independence.
Incorporate Guidance on Change Management to Facilitate AI Adoption: Given the importance and value of courts getting AI adoption “right”, courts will need a thorough walkthrough or explanation of best practices in AI exploration and implementation. This would involve guidance on assessing organizational needs, selecting appropriate AI tools, managing risks, engaging stakeholders, providing training and support, planning and implementing the deployment of AI systems, governing AI use, and evaluating the impact of AI deployments. By incorporating these elements, the Guidelines can empower courts to explore, implement, and use AI responsibly and safely, maximizing its potential benefits while minimizing potential risks. Without such guidance, courts may struggle to adopt AI effectively, potentially selecting unsuitable AI tools, facing internal resistance to change, wasting resources through inadequate change management, or failing to secure necessary resources. This could ultimately hinder AI's successful integration and delay the realization of its benefits.
Deepen the Analysis and Understanding of Judicial Independence: The Guidelines should dedicate a comprehensive section to explaining to judges and court staff the complex interplay between AI and judicial independence, contextualizing this principle within the Canadian constitutional framework and analyzing how AI can both bolster and threaten it. This analysis should differentiate between the roles of judges and court staff, examining how AI tools can enhance their respective autonomy and efficiency while also addressing the potential risks of over-reliance on AI and the erosion of human judgment and discretion. The Guidelines should emphasize the importance of maintaining human oversight and control in all AI applications, ensuring that AI remains a tool to augment, not replace, human judgment and that judicial independence is preserved in the face of technological advancements.
Conclusion & Next Steps
The CJC Guidelines provide a valuable starting point for the responsible and ethical adoption of AI in Canadian courts. However, these Guidelines and similar policies need to be further developed to address the deficiencies identified in this assessment. By incorporating these recommendations, the CJC and individual courts and tribunals can create a more comprehensive and informative set of guidelines that will help ensure that the use of AI promotes fairness, access to justice, and the efficient administration of justice.
As a next step in this field, I am embarking on an effort to provide a much clearer and more specific answer to whether, when, and how specific use cases of artificial intelligence may impact judicial independence. The aim of this next project is to inform ongoing discussions on how courts can responsibly and safely integrate technologies like AI without jeopardizing judicial independence or the public’s perception of judicial independence. As an overriding constitutional risk, it is paramount that any plan to explore, implement, and use AI in courts is contextualized and framed by the enduring necessity to preserve judicial independence.