The recent Québec decision Specter Aviation Limited v. Laprade marks a watershed moment: for the first time in Québec, a litigant has been sanctioned for submitting pleadings tainted by “hallucinated” legal authorities generated by artificial intelligence.

What happened

The defendant, a 74-year-old self-represented litigant, used generative AI tools to draft his challenge to a motion to ratify an arbitration award. His motion record contained multiple references to non-existent case law, “decisions” that were never rendered. The court found that these AI-generated submissions breached the fundamental duty of candour owed to the court.

The Court’s response

Under article 342 of the Code of Civil Procedure (Québec), the court characterized the conduct as a “substantial breach” justifying sanctions. It imposed a penalty of CAD 5,000 on the litigant, a sanction described as both punitive and deterrent. The court did not declare a blanket ban on AI. Rather, it reiterated that AI-generated content remains subject to the same obligations of human oversight, verification, and good faith as any other filing, regardless of whether the party is represented or self-represented.

Broader Canadian context & the Choi case

Specter Aviation did not arise in a vacuum. Over the past eighteen months, Canadian courts have confronted a growing number of “AI hallucination” disputes:

In Lloyd’s Register Canada Ltd. v. Choi (2025 FC 1233), a self-represented litigant attempted to rely on AI-generated authorities in a motion record. The Federal Court struck the record and ordered its removal from the court file. Other tribunals and courts have similarly condemned the submission of fictitious authorities as undermining not only the immediate case, but the integrity of the justice system itself.

Put simply, even in the face of resource constraints or a lack of legal representation, using AI is not a licence to submit nonsense.

What the decision means for practitioners, litigants, and AI-users

Courts expect real, verifiable authorities. AI may assist with drafting, but every citation must be verified manually and independently; the fact that a citation came from AI does not relieve the filer of responsibility for it. AI-enabled access to justice also has limits. For self-represented litigants, AI may look attractive, but the risk of “hallucinations” means that relying on AI alone can backfire badly.

For lawyers, the use of AI is not itself novel, but its misuse engages the same duties of honesty, diligence, and candour that have always applied. The earlier decision in Ko v. Li (2025 ONSC 2965) offers a sobering illustration of the risks that arise when counsel fails to verify authorities. Governance and risk management are now essential: clear policies, mandatory verification of citations, disclosure when AI is used, and internal audits of filings.



Disclaimer: The content of this blog is for general information only and does not constitute legal advice. 
