About the Artificial Intelligence Risk and Regulation Lab

The Artificial Intelligence Risk and Regulation Lab was founded in 2023 at the University of Victoria Faculty of Law under the umbrella of the BC Access to Justice Centre for Excellence.

Initially, the Lab focused on the implications of artificial intelligence (“AI”) implementation for access to justice and self-represented litigants. That early work demonstrated that AI implementation will have wide-reaching impacts on the legal system in Canada and abroad, and that managing the risks and regulating these technologies will be vital to preserving the integrity of Canada’s legal and justice system. The Lab began with a straightforward regulation-mapping project; however, the opportunities for meaningful and impactful research have expanded rapidly, and the Lab now seeks funding to grow its capacity to pursue these important research projects.

The Lab’s research focuses on the regulation of AI and the management of the risks of its implementation. As AI technologies evolve rapidly, it is crucial to establish robust frameworks to manage those risks and ensure responsible innovation. The Lab aims to bridge the gap between technology and law, fostering interdisciplinary research that informs implementation and regulation through evidence-based policies and methodologies.

Our research will contribute to the development of robust risk management strategies, ensuring compliance with emerging legal standards and promoting public trust in AI technologies.

Moreover, the Lab’s work is vital to Canada’s position as a leader in AI innovation. As Canada advances in AI research and development, it is imperative to address the associated risks proactively. The Lab will play a crucial role in this endeavour, helping to shape policies that balance innovation with risk management and foster a safe and equitable technological landscape.

The Lab’s primary objectives include:

•  Conducting innovative research on AI risks and regulatory frameworks;

•  Developing risk frameworks and policy recommendations for government and industry stakeholders; and

•  Promoting public awareness and understanding of AI-related legal issues.

Meet the Team


  • Lab Director

    Michael Litchfield, the Lab’s Director, is an academic, lawyer, and management consultant. Michael is also the Director of the Business Law Clinic at the University of Victoria Faculty of Law and Associate Director of ACE. His current research focuses on law reform and the regulation and management of the risks of AI implementation. In the private sector, Michael regularly speaks on AI policy and regulatory compliance and advises organizations on these issues. His private-sector work as a lawyer and management consultant has focused on providing corporate governance and risk management advice to clients in a wide array of sectors.

  • Research Fellow

    Daniel James Escott is the Lab’s first Research Fellow, in association with ACE, and has an extensive background in AI regulation, legal process engineering, and access to justice. He previously clerked at the Federal Court and is currently pursuing an LLM at Osgoode Hall Law School, where he is writing a thesis on the impact of technology in legal processes on access to justice. Daniel authored the Federal Court’s Notice on the Use of Artificial Intelligence in Court Proceedings and its Interim Principles and Guidelines on the Court’s Use of Artificial Intelligence, and he now advises courts and other organizations on the use of AI in law, experience that uniquely qualifies him in the field of AI Risk and Regulation.

    Prior to his work in artificial intelligence, Daniel was the Lead Researcher on Technology and Access to Justice at the Canadian Institute for the Administration of Justice.