Vancouver Lawyer’s Use of AI in Legal Proceedings Sparks Ethics Debate

How does AI fit into the legal profession?

Vancouver lawyer Chong Ke recently found herself at the centre of a controversy over the ethics of using artificial intelligence (AI) in legal proceedings. While representing businessman Wei Chen in a child custody case, Ke filed an application containing fabricated cases generated by ChatGPT, in what is believed to be the first known instance of AI-generated material making its way into a Canadian courtroom.

Ke had filed an application to allow Chen to travel with his children to China. The application cited two cases as precedent: one in which a mother took her 7-year-old child to India for six weeks, and another in which a mother’s application to travel with her 9-year-old child to China was approved. However, it was soon discovered that these cases did not exist; they had been fabricated by ChatGPT.

Allegedly, Ke asked ChatGPT to find relevant cases that could apply to her client’s circumstances. OpenAI’s chatbot generated three results, two of which Ke then used in the application. When lawyers for Nina Zhang, Chen’s ex-wife, were unable to locate the referenced cases, Ke realized her mistake. She attempted to withdraw the two cases and quietly substitute a new list of real cases without informing the opposition. Zhang’s lawyers then demanded copies of the two original cases, leaving Ke no choice but to disclose her mistake directly. She wrote a letter acknowledging her actions, calling the error “serious” and expressing her regret. In an affidavit, Ke later admitted her “lack of knowledge” of the risks associated with using AI, saying it greatly embarrassed her to “[discover] that the cases were fictions.”

Justice David Masuhara, who presided over the case, wrote in his ruling that “citing fake cases in court filings…is an abuse of process and is tantamount to making a false statement to the court,” adding that the improper use of AI could ultimately lead to a miscarriage of justice. Masuhara ordered Ke to review her files and disclose whether AI had been involved in any other materials she had submitted to the court.

Fraser MacLean, the lead counsel for Ke’s opposition, also emphasized the serious dangers of AI-generated content: “what’s scary about these AI hallucinations is they’re not creating citations and summaries that are ambiguous, they look 100 per cent real.” He added that it is important to be “vigilant” in verifying the validity of a legal citation.

Although Masuhara found Ke’s apology to be sincere, she will be held liable for the costs incurred by Zhang’s lawyers in remedying the confusion. The judge also acknowledged that she was suffering the effects of “significant negative publicity” following her misconduct. The Law Society of BC has issued a warning to Ke, affirming lawyers’ ethical obligation to ensure accuracy amid the growing use of AI tools. In addition to covering her opposition’s costs, Ke will also face an investigation by the Law Society of BC.

While Ke’s AI-generated content was removed before it could have any significant impact on court proceedings, the case underscores the ethical risks surrounding the use of AI in the legal field. Discussions are already underway about the diligence lawyers must exercise when using AI tools in their work and the need for clear guidelines to prevent potential abuses of process. Thompson Rivers University law librarian Michelle Terriss commented that the ruling sets a new precedent, indicating that “[these] issues are front and centre in the minds of the judiciary and that lawyers really can’t be making these mistakes.”

Lawyers have an ethical duty to understand the risks and benefits of the AI tools they use. But as the use of AI grows, new questions about its place in the legal field are emerging, including whether a lawyer can ethically bill a client for work performed by an AI tool, and whether using AI to handle court materials breaches confidentiality. The latter is especially concerning because most AI tools, including ChatGPT, do not guarantee the confidentiality of user inputs. In fact, OpenAI’s terms of service state that a user’s exchanges with the program “may be reviewed” by OpenAI employees in order to improve the system, and that the responsibility for maintaining confidentiality lies with users themselves.

While AI can significantly improve tasks such as electronic discovery, litigation analysis, and legal research, concerns persist about biases and prejudices in these systems, in addition to the potential for legal fabrication. Bias in AI is common and stems from how the tools are trained. Microsoft’s conversational chatbot Tay, for instance, was found to mirror discriminatory viewpoints fed to it in users’ conversations.

These biases have already reached the legal field. A prominent example is the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) system, an AI algorithm many US judges have used in bail and sentencing decisions. Investigations revealed that, in assessing whether a past offender would re-offend, the system disproportionately generated “false positives” for people of colour (wrongly flagging them as likely to re-offend) and “false negatives” for white people (wrongly flagging them as unlikely to re-offend). The issue lies in how AI is trained: many systems are programmed to “quantify the world as it is right now, or as it has been in the past, and [to] continue to repeat that, because it’s more efficient,” says AI and robotics expert Professor Kristen Thomasen of UBC.

While the future of AI in the legal field and its ethical implications remain uncertain, many legal and AI experts, including Professor Thomasen and Justice Masuhara, have weighed in, maintaining that an AI system could never “truly replace the work of a lawyer” and that “generative AI is still no substitute for the professional expertise that the justice system requires of lawyers.”