AI in the Courtroom: A Top Lawyer’s Blunder and the New Reality for Australian Law

16 August 2025

A senior Victorian lawyer’s apology to a Supreme Court Justice has sent a shockwave through the Australian legal community, highlighting the high-stakes reality of using artificial intelligence in practice. In August 2025, Rishi Nathwani KC, a respected King’s Counsel, was forced to take “full responsibility” for filing submissions in a murder case that were riddled with AI-generated errors. The document included entirely fabricated quotes from legislative speeches and cited Supreme Court cases that simply did not exist.

This wasn’t a minor clerical error; it caused a 24-hour delay in the case and drew a sharp rebuke from Justice James Elliott, who stressed that “the ability of the court to rely upon the accuracy of submissions made by counsel is fundamental to the due administration of justice”. The incident serves as a critical case study for every practitioner in Australia, from junior solicitors to silks. It perfectly captures the dual narrative of AI in law today: a story of incredible, transformative potential set against a backdrop of serious professional risk. While headlines focus on the perils, a quieter revolution is underway, with AI adoption in the legal profession soaring from 22% to 80% in just one year. The challenge for Australian lawyers is not whether they should use AI, but how to do so without compromising their professional duties.

The Perils: “Hallucinations” and the Judicial Hammer

The central problem exposed by the Nathwani case is the phenomenon of AI “hallucinations”—instances where the technology generates plausible-sounding but completely false information. For a profession built on the bedrock of accuracy and precedent, this is a profound threat. Research from Stanford University found that specialised legal AI models can hallucinate in one out of every six queries.

The Victorian murder case demonstrates the real-world consequences. Justice Elliott’s associates discovered the errors when they were unable to locate the cited cases and asked the defence team for copies. The lawyers admitted that the citations “do not exist” and that the submissions contained “fictitious quotes”. Troublingly, they explained they had “checked that the initial citations were accurate and wrongly assumed the others would also be correct”. This highlights a critical vulnerability: selective verification of AI-generated work is a recipe for disaster.

Courts across the globe are responding with increasing severity. While US federal judges have issued fines of up to $5,000 for submitting fictitious AI research, British courts have warned that such actions could amount to contempt of court or even perverting the course of justice, an offence carrying a potential life sentence. In his judgment, Justice Elliott pointedly reminded the parties that the Supreme Court of Victoria had released guidelines for AI use, stating, “It is not acceptable for artificial intelligence to be used unless the product of that use is independently and thoroughly verified”. This sets a clear and non-negotiable standard for all Victorian legal practitioners.

The Promise: A Revolution in Efficiency and Practice

Despite these significant risks, the reason for AI’s meteoric rise in law firms is the staggering efficiency it offers. The technology is fundamentally reshaping how legal work is done, promising to reduce costs and free up lawyers to focus on high-value strategic thinking.

The statistics are compelling. For an average litigation matter, AI-assisted tools can slash legal research time from 17-28 hours down to a mere 3-5.5 hours. In the labour-intensive field of document review, some AI implementations have achieved cost savings of up to 99.97%, reducing six hours of a lawyer’s work to just minutes. This is not a distant future; it is happening now. A recent survey found that 89% of legal professionals see practical applications for AI in their own work, and 55% feel excited or hopeful about its integration.

This transformation is powered by sophisticated, professional-grade tools. Platforms like Thomson Reuters’ CoCounsel and LexisNexis’ Lexis+ AI are not general-purpose chatbots. They are built on curated databases of authoritative legal content, which significantly reduces the risk of hallucinations compared to systems trained on the open internet. These tools can understand natural language queries, summarise cases, draft contracts, and analyse thousands of documents for key clauses or patterns, revolutionising everything from due diligence to discovery.

The Path Forward: Best Practices for Victorian Lawyers

The lesson from the Nathwani case is not to abandon AI, but to manage it with rigorous professionalism. The legal profession’s core ethical duties of competence, confidentiality, and diligence apply with full force to AI-assisted work. For Victorian lawyers navigating this new landscape, a clear set of best practices is emerging.

  1. Embrace the Verification Imperative: As Justice Elliott made clear, every single piece of AI-generated content must be independently and thoroughly verified. This means checking every case citation against the original source, fact-checking every claim, and ensuring all legal reasoning is sound. There are no shortcuts (a minimal sketch of what a systematic citation checklist might look like follows this list).
  2. Invest in Competence and Training: Under professional conduct rules, lawyers must be competent in the tools they use. This requires dedicated training to understand not only an AI tool’s capabilities but also its limitations and potential for error.
  3. Prioritise Professional-Grade Tools: Avoid using general consumer AI systems for substantive legal work. Instead, choose professional platforms that are built on authoritative, verified legal content and designed specifically for the profession.
  4. Maintain Human Oversight: AI is a tool to augment a lawyer’s expertise, not replace it. The ultimate responsibility for the accuracy and quality of all legal work remains with the practitioner.
  5. Communicate Transparently with Clients: Building trust requires open communication about how AI is being used in a client’s matter. This includes explaining the safeguards in place to protect confidentiality and ensure the quality of the work.
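
To make the verification imperative concrete, the sketch below shows, in Python, one way a practice might operationalise point 1: extract anything that looks like a case citation from a draft and emit a checklist on which every entry starts as unverified. Everything in it is an illustrative assumption rather than a description of any real tool: the function names, the workflow, and the two regular expressions (covering only simplified medium-neutral citations such as “[2025] VSC 123” and report-series citations such as “(1992) 175 CLR 1”) are hypothetical, and a human must still confirm each entry against an authorised report or court database.

    import re

    # Illustrative, deliberately simplified patterns for two Australian citation styles.
    # Real citation formats are far more varied; these regexes are assumptions for the sketch.
    MEDIUM_NEUTRAL = re.compile(r"\[(?:19|20)\d{2}\]\s+[A-Z]{2,6}\s+\d+")       # e.g. [2025] VSC 123
    REPORT_SERIES = re.compile(r"\((?:19|20)\d{2}\)\s+\d+\s+[A-Z]{2,5}\s+\d+")  # e.g. (1992) 175 CLR 1

    def extract_citations(text: str) -> list[str]:
        """Return every citation-like string in a draft, in order, deduplicated."""
        found = MEDIUM_NEUTRAL.findall(text) + REPORT_SERIES.findall(text)
        return list(dict.fromkeys(found))

    def verification_checklist(text: str) -> str:
        """Emit a checklist on which every citation starts UNVERIFIED.

        The point is procedural rather than technical: no citation is treated
        as checked until a person has located it in an authoritative source.
        """
        lines = ["Citation verification checklist", "=" * 31]
        for cite in extract_citations(text):
            lines.append(f"[ ] UNVERIFIED  {cite}  -- confirm in an authorised report or court database")
        if len(lines) == 2:
            lines.append("No citations detected -- check manually; automated extraction is not exhaustive.")
        return "\n".join(lines)

    if __name__ == "__main__":
        draft = (
            "As held in Smith v Jones [2025] VSC 123 and affirmed in "
            "Mabo v Queensland (No 2) (1992) 175 CLR 1, the principle applies."
        )
        print(verification_checklist(draft))

The deliberate design choice is that nothing in the sketch can mark a citation as verified: software can only surface candidates for human review, mirroring Justice Elliott’s standard that the product of AI use be “independently and thoroughly verified” by the practitioner.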

The AI revolution is here, and it is reshaping the practice of law in Australia. The firms and practitioners who thrive will be those who harness its incredible power responsibly, integrating this new technology with the timeless professional values of diligence, accuracy, and unwavering integrity.

A Slap on the Wrist for a King’s Counsel? Don’t Make Us Laugh.

Let’s be blunt: the apology from Rishi Nathwani KC for submitting AI-generated fantasies to the Supreme Court is not nearly enough. It’s a professional embarrassment that exposes a deep-seated arrogance at the top of the legal profession. This wasn’t a “teachable moment” about new technology; it was an abject failure of basic, day-one competence from a lawyer whose title and exorbitant fees are supposed to guarantee the exact opposite.

When a client pays a King’s Counsel upwards of $5,000 a day, they are not paying for someone to outsource their thinking to a chatbot and then fail at the most elementary task of checking the output. Verifying case law isn’t a high-level skill; it’s a foundational duty taught to every first-year law student. To file fabricated quotes and non-existent cases in a murder trial is not a simple mistake; it’s a dereliction of duty that delayed justice and undermined the court’s authority.

The excuse offered—that the team “checked the initial citations were accurate and wrongly assumed the others would also be correct”—is perhaps the most damning part of this whole affair. An assumption? From a KC? That is an admission of pure negligence. It’s the kind of lazy, corner-cutting sloppiness that would see a junior solicitor hauled over the coals, yet from a silk, it’s met with an apology and a headline. This isn’t a rookie mistake; it’s a systematic failure of professional judgment from someone who should, and is paid to, know better.

An apology does not cut it. Where is the accountability? This incident demands more than a red-faced moment in court. It warrants a formal investigation by the Victorian Legal Services Board. If the prestigious “KC” designation is to mean anything more than a licence to charge higher fees, it must come with consequences for such profound lapses in professionalism.

This case sets a dangerous precedent. If a senior barrister can be caught taking the mickey with such sloppy work and walk away with little more than public embarrassment, it sends a clear message to the entire profession: the standards don’t matter as much as your title. The legal community must demand better. Public trust in the justice system depends on it.


This article was researched and compiled by Manus AI, drawing on reporting from ABC News (abc.net.au/news) and The Associated Press (AP). The final analysis and editorial refinement were provided by Gemini (gemini.google.com).