Legal Notes: Integration increases between law and AI

John Bleasby

Artificial Intelligence, in the form of Large Language Models (LLMs) and chatbots, continues to make an impact in every profession. Law is no exception.

Recent developments within Canadian and British legal circles suggest increased integration, although with caution.

In late December, the Federal Court of Canada made its position clear.

“The Court will not use AI, and more specifically automated decision-making tools, in making its judgments and orders, without first engaging in public consultations,” it says.

That amounts to what Marco Falco, partner with Torkin Manes LLP, describes as “essentially a moratorium on the use of AI by the Court.”

Meanwhile, Ontario will allow legal teams to use AI under rules 61.11 and 61.12 of the province’s Rules of Civil Procedure. However, anyone doing so must accompany their written submissions (factum) with confirmation that the “person signing the certificate is satisfied as to the authenticity of every (legal) authority listed in the factum.”

“The inaccuracies and bias inherent in AI adjudication are only beginning to be understood,” says Falco. “Lawyers who rely on LLMs to assist in the drafting of legal submissions will bear the consequences of AI hallucinations and of providing a false representation to the Court.”

Other provinces can be expected to adopt similar rules in the near future.  

The United Kingdom is now allowing Justices to use AI to help them produce legal rulings.

In December, the Courts and Tribunals Judiciary (the judges, magistrates, tribunal members and coroners who administer, interpret and apply the laws enacted by Parliament) issued an eight-page handbook, Guidance for Judicial Office Holders, outlining the restrictions under which justices in England and Wales can use AI systems.

The guidance begins with warnings.

“Public AI chatbots do not provide answers from authoritative databases,” the guide says. “They generate new text using an algorithm based on the prompts they receive and the data they have been trained upon. This means the output which AI chatbots generate is what the model predicts to be the most likely combination of words (based on the documents and data that it holds as source information). It is not necessarily the most accurate answer.”

The judiciary reminds justices about their professional obligations regarding confidentiality and privacy, the need to ensure accountability and accuracy, and possible AI bias.

“Judicial office holders are personally responsible for material which is produced in their name. AI tools are a poor way of conducting research to find new information you cannot verify independently. The current public AI chatbots do not produce convincing analysis or reasoning.”

Geoffrey Vos, head of civil justice in England and Wales, told Reuters the guidance was necessary. He explained AI “provides great opportunities for the justice system. But, because it is so new, we need to make sure that judges at all levels understand what it does, how it does it and what it cannot do.”

Another concern raised is the risk that litigants without legal guidance will begin to trust chatbots for their own legal purposes.

“AI chatbots are now being used by unrepresented litigants,” the guidance says. “They may be the only source of advice or assistance some litigants receive. Litigants rarely have the skills independently to verify legal information provided by AI chatbots and may not be aware that they are prone to error.”

Smaller companies might also be attracted to packaged platforms that provide a pre-vetted inventory of legal resources, both to improve internal corporate knowledge and to potentially reduce the cost of outside counsel.

The problem is accuracy. A recent Stanford University study found AI “hallucination” rates can range from 69 per cent to 88 per cent when responding to specific legal queries.

“These models often lack self-awareness about their errors and tend to reinforce incorrect legal assumptions and beliefs,” the study states. “These findings raise significant concerns about the reliability of LLMs in legal contexts, underscoring the importance of careful, supervised integration of these AI technologies into legal practice.”

John Bleasby is a Coldwater, Ont.-based freelance writer. Send comments and Legal Notes column ideas to editor@dailycommercialnews.com
