A recent court case in South Africa has raised significant concerns about the use of artificial intelligence in legal practice. In Mavundla v MEC: Department of Co-Operative Government and Traditional Affairs KwaZulu-Natal, the legal team submitted nine case authorities to the High Court, only for seven of them to turn out to be non-existent, most likely generated by an AI chatbot such as ChatGPT. The court deemed this conduct “irresponsible and unprofessional” and referred the matter to the Legal Practice Council, the body that regulates legal practitioners in South Africa, for further investigation.
This is not an isolated incident. A similar episode occurred in Parker v Forsyth in 2023, although there the judge found no intent to mislead. The Mavundla ruling, however, signals a shift: courts are becoming increasingly intolerant of legal professionals who misuse AI tools.
Legal academics have been researching the implications of AI in legal education and practice. Although generative AI can enhance efficiency, its irresponsible use can lead to severe professional consequences. The Mavundla case serves as a stark reminder for law schools to prepare students adequately for the ethical challenges posed by AI.
The advocate involved in the Mavundla case admitted to not verifying the sources cited, relying instead on research conducted by a junior colleague. This colleague claimed to have accessed the material through an online research tool. Although she denied using ChatGPT, her situation mirrored global incidents where lawyers unknowingly submitted AI-generated content as legitimate legal citations.
In another notable case from the United States, Park v Kim, an attorney included non-existent case law generated by ChatGPT in her court filings and acknowledged the source during the proceedings. Similarly, in Canada, Zhang v Chen involved a lawyer who filed documents containing fabricated case authorities created by the same tool. The judge in the Mavundla case was clear: however advanced AI technology becomes, lawyers bear ultimate responsibility for the accuracy of the sources they present in court. Neither ignorance nor workload pressure can justify such negligence.
The case also highlighted the importance of appropriate supervision within law firms. The judge criticized the supervising attorney for failing to review the documents before submission. This incident underscores a broader ethical principle: senior lawyers must ensure that junior colleagues are adequately trained and supervised.
As the legal profession evolves, the Mavundla case serves as a wake-up call for universities. If established practitioners can fall victim to AI-generated misinformation, students are equally susceptible. Generative AI tools like ChatGPT can be invaluable for summarizing cases, drafting arguments, and analyzing complex texts. However, they can also produce confidently incorrect information that appears credible.
The risks for students are twofold. First, excessive reliance on AI can hinder the development of essential research skills. Second, submitting unverified AI-generated content may amount to academic or professional misconduct, resulting in disciplinary measures at educational institutions and long-lasting damage to one’s reputation in the legal field.
In light of these challenges, the authors advocate for law schools to embrace a proactive approach to AI education. Instead of prohibiting AI tools, institutions should focus on teaching students how to use them responsibly. This includes fostering “AI literacy”—the ability to question, verify, and contextualize AI-generated information. Students should learn to treat AI systems as assistants, rather than authorities.
Legal educators can integrate AI literacy into existing courses on research methodology, professional ethics, and legal writing. Exercises could involve verifying AI-generated summaries against real judgments or examining the ethical implications of relying on machine-generated arguments.
Teaching responsible AI use goes beyond avoiding courtroom embarrassment; it is essential for maintaining the integrity of the justice system. The Mavundla case illustrates how one candidate attorney’s uncritical application of AI led to professional scrutiny and reputational damage for her firm. Moreover, the financial implications are significant, as courts can impose costs on lawyers found guilty of serious professional misconduct.
The use of AI in legal practice is inevitable; the challenge lies not in whether it should be used, but in how it can be integrated responsibly. Law schools have a crucial opportunity—and an ethical responsibility—to prepare future lawyers for a landscape where technology and human judgment must work in harmony. Speed and convenience cannot replace the fundamental values of accuracy and integrity. As AI becomes a standard component of legal research, tomorrow’s lawyers must be trained not only to prompt AI but also to think critically about its outputs.