Computer science professors receive new CIFAR grants to address AI safety in Canada and the Global South

Professors Maura R. Grossman, Yuntian Deng, and Wenhu Chen join the Solution Networks to create trustworthy and innovative AI systems

Wednesday, November 19, 2025

Three Waterloo computer science professors are at the forefront of two new research initiatives that are developing cutting-edge, inclusive, and trustworthy AI systems.

The inaugural research initiatives — Solution Networks — are funded through the Canadian Institute for Advanced Research’s (CIFAR) Canadian AI Safety Institute (CAISI) Research Program. In 2024, the federal government launched the CAISI Research Program as part of its AI safety strategy.


L to R: Professors Maura R. Grossman, Yuntian Deng, and Wenhu Chen join the Solution Networks tackling AI safety in the legal system and linguistic inequality.

“CIFAR’s Solution Networks provide a unique approach to trustworthy AI research and development, bringing together exceptional teams of interdisciplinary researchers — who might not otherwise cross paths — to address issues of global importance, but more importantly, to design, develop and implement solutions,” says Elissa Strome, Executive Director, Pan-Canadian AI Strategy at CIFAR. “Core to the work of both of these Solution Networks is exploring ways to mitigate the potential harms of AI to people in Canada and around the world.”

Professors Maura R. Grossman, Yuntian Deng, and Wenhu Chen are part of the inaugural Solution Networks. Each network will receive $700,000 over two years; the two networks will tackle, respectively, AI safety in the legal system and linguistic inequality.

Safeguarding Courts from Synthetic AI Content

Professor Grossman is co-directing the first Solution Network alongside University of Toronto’s Professor Ebrahim Bagheri. The team also includes Professor Deng as well as legal and computer science researchers from the University of Ottawa, Western University, and the University of British Columbia.

The team is tackling one of the most pressing challenges in today’s justice system: AI-generated content. For example, AI can be used to forge evidence, including images, videos, and audio recordings, with little time, effort, or expense.

“Courts are currently ill-equipped to distinguish authentic from inauthentic AI-generated evidence, and the consequences of errors in this area can be devastating to litigants, particularly in criminal and family matters,” says Professor Grossman.

Moreover, lawyers and self-represented litigants are using AI to produce court documents, which are critical to legal proceedings. Many AI models are prone to hallucinations, generating incorrect or even non-existent case citations that are then presented as precedent.

“To date, there have been over 500 cases that have involved lawyers citing fictitious opinions or making other misrepresentations of fact and law,” adds Professor Grossman.  

With the rise of AI, judges and juries may have their decisions swayed by AI-generated images, audio, and videos. Unfortunately, hiring an expert to vet AI-generated evidence is expensive, delays the trial process, and is simply infeasible for all but the largest cases.

The team’s solution is to create a free, open-source, and user-friendly system that can help identify AI-generated content. The tool could help restore trust in courtroom evidence and, most importantly, benefit self-represented litigants and court officers, who don’t always have access to high-quality legal resources.

“A core theme of my research is understanding how machines generate information, and how we can reliably detect those signals. The legal system is one of the places where the stakes are highest. I'm excited to work with this interdisciplinary team to build tools that help courts distinguish real from synthetic content in a way that is practical, transparent, and aligned with legal norms,” says Professor Deng.

Mitigating Dialect Bias

Despite their recent and widespread adoption, large language models (LLMs) do not accommodate non-standard English well. They sometimes misinterpret these dialects as toxic or offensive, leading to issues such as censorship on social media or discrimination in delivery services.

Professor Chen is co-developing an AI system tailored for Nigerian Pidgin English (NPE), which is spoken by more than 140 million people across West Africa. He and his teammates are developing the first-ever bias and safety benchmarks for NPE as part of a wider open-source audit and mitigation toolkit. Their system could account for the language’s unique nuances, allowing NPE users to voice their thoughts without being wrongly flagged or penalized.

“I am thrilled to receive the funding to help mitigate the dialect bias. I will work with my colleagues from Canada and Africa to push this direction forward,” says Professor Chen.

Inclusivity is core to the team’s research. Their data sets and LLMs will be evaluated by a citizen network in Nigeria, creating a locally grounded and culturally representative AI system.

Ultimately, their technology could empower more people across the Global South to use AI. It could also pioneer AI tools for other non-standard English dialects, including those spoken by immigrant and Indigenous communities.