From Fluffy Cats to the Supreme Court: Why AI is a Lawyer’s FIT

1 March 2026

A practising lawyer’s unfiltered take on the tools that are reshaping the profession — and why the naysayers are missing the point.

The Arc of a Revolution

Let’s begin in 2017, because context matters enormously in law, and it matters here too.

In 2017, the artificial intelligence conversation was still largely one of charming, well-publicised failure. Researchers at MIT’s LabSix demonstrated that Google’s image-recognition AI, InceptionV3, could be confidently tricked into identifying a photograph of a cat as guacamole — not by altering the image dramatically, but through tiny adversarial tweaks imperceptible to the human eye [1]. Around the same time, other researchers were showing that a change as small as a single pixel could make a neural network mistake a fluffy cat for cotton candy, a baseball for an espresso, a school bus for an ostrich [2]. These weren’t fringe experiments. They were published, peer-reviewed demonstrations of a fundamental limitation in the technology of the day. The AI community was honest about it: these systems were brittle, narrow, and easily fooled.

The legal profession, for its part, barely noticed. AI was something for tech companies and academic labs, not for the barrister’s chambers or the solicitor’s office.

Then came November 30, 2022.

OpenAI released ChatGPT to the public, and within five days, more than one million people had signed up [3]. Within two months, it had one hundred million users, making it the fastest-growing consumer application in history [4]. For lawyers who tried it early, the reaction was a mixture of wonder and frustration. The wonder: it could draft a letter, summarise a contract, explain a complex legal concept in plain English, and hold a coherent conversation about almost any area of law. The frustration: it couldn’t reliably add two numbers together. It would confidently cite cases that did not exist. It would invent statutes, fabricate quotations from real judges, and present all of it with the serene confidence of a senior partner who has never once been wrong.

That tension — extraordinary capability alongside extraordinary unreliability — defined the first chapter of AI in legal practice. Many lawyers, understandably, walked away. Others, more curious, stayed and learned to work with the grain of the technology rather than against it.

Here we are now, in 2026. And the profession looks different.

 

The Toolkit of a Modern Practitioner

I am a practising lawyer. I use three AI tools every working day, and I am not embarrassed to say that they have materially changed how I work — for the better. Let me be specific, because vagueness is the enemy of useful professional discourse.

First: ChatGPT with Retrieval-Augmented Generation (RAG) modules. The base ChatGPT of 2022 was a general-purpose language model trained on a static dataset. It knew a great deal about the world up to a certain point, and nothing after it. RAG changed that calculus entirely. Retrieval-Augmented Generation is a technique that allows the model to reach into a live, curated document corpus — legislation, case law, regulatory guidance — and ground its responses in actual, verifiable sources rather than relying solely on what it was trained on [5]. For a lawyer researching a specific statutory provision or tracking the evolution of a line of cases in a particular jurisdiction, this is transformative. The research that once took half a day now takes twenty minutes, and the output is sourced, traceable, and auditable.
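The grounding step that makes RAG auditable can be sketched in a few lines. Everything below is illustrative: the corpus, the document identifiers, and the keyword-overlap scoring are toy stand-ins for a real legal database and a real retriever, and the statutes and case cited are invented.

```python
from collections import Counter

# Hypothetical in-memory corpus standing in for a curated legal database;
# the provisions and the case below are invented for illustration only.
CORPUS = {
    "s12-limitation": "Section 12 sets a six-year limitation period for "
                      "claims founded on simple contract.",
    "s2-sale-of-goods": "Section 2 defines a contract of sale as the transfer "
                        "of property in goods for a money consideration.",
    "smith-v-jones": "In Smith v Jones the court held that the limitation "
                     "period runs from the date of the breach.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank corpus entries by keyword overlap with the query (toy scoring;
    a real system would use embeddings and a vector index)."""
    terms = set(query.lower().split())
    def score(doc_id: str) -> int:
        return len(terms & set(CORPUS[doc_id].lower().split()))
    return sorted(CORPUS, key=score, reverse=True)[:k]

def build_grounded_prompt(query: str) -> str:
    """Assemble a prompt that instructs the model to answer only from the
    retrieved sources, citing each by identifier -- this is what makes the
    output traceable back to verifiable documents."""
    sources = "\n".join(f"[{d}] {CORPUS[d]}" for d in retrieve(query))
    return (f"Answer the question using only the sources below, citing "
            f"them by identifier.\n\nSources:\n{sources}\n\nQuestion: {query}")

print(build_grounded_prompt("What is the limitation period for a contract claim?"))
```

The point of the pattern is the audit trail: every statement in the answer can be traced to a document identifier, which is precisely what a lawyer needs before relying on it.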

Second: Google Gemini, integrated into Google Workspace. The integration of AI directly into the tools lawyers already use — email, documents, spreadsheets — is quietly one of the most significant developments in legal productivity. Gemini’s presence inside Google Workspace means that drafting, editing, summarising correspondence, and managing large volumes of documents can be done within the existing workflow, without switching platforms or copying and pasting between applications. The friction has been removed, and friction, in a busy practice, is the enemy of quality work.

Third: Manus, for deep research and applied development. This is where things get genuinely interesting. Manus operates at a different level of depth — conducting multi-source research, synthesising complex information across jurisdictions, and producing structured, referenced analysis that would take a junior associate days to compile. But its most remarkable recent capability, from a practical legal standpoint, is its ability to build functional web applications. I have used it to build websites with embedded legal calculators — tools that actually work, that perform the arithmetic correctly, and that can be deployed for client-facing use. The days of a lawyer having to commission a developer for a basic functional tool are, for many purposes, over.
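The logic behind an embedded legal calculator of the kind described is often modest. Here is a minimal sketch of a simple-interest calculator; the 8% rate is a placeholder, not a statement of any jurisdiction’s actual judgment-debt rate, and the day-count convention is one of several in use.

```python
from datetime import date

# Placeholder statutory rate for illustration only; the applicable rate
# and method of calculation vary by jurisdiction and must be checked.
ANNUAL_RATE = 0.08

def simple_interest(principal: float, start: date, end: date,
                    rate: float = ANNUAL_RATE) -> float:
    """Simple (non-compounding) interest accrued between two dates,
    calculated daily over a 365-day year."""
    days = (end - start).days
    if days < 0:
        raise ValueError("end date precedes start date")
    return round(principal * rate * days / 365, 2)

# A debt of 10,000 accruing over a 365-day year at the placeholder rate:
print(simple_interest(10_000, date(2025, 1, 1), date(2026, 1, 1)))  # 800.0
```

Wrapped in a web front end, a function like this is exactly the sort of client-facing tool that once required a commissioned developer.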

Addressing the Naysayers

Every professional community has its skeptics, and the legal profession is no exception. The objections to AI in legal practice tend to cluster around two concerns: privacy and hallucination. Both deserve a serious response.

On privacy: The concern is that inputting client information into an AI tool constitutes a breach of confidentiality. This is a legitimate concern when applied to free, consumer-grade tools that use input data to train their models. It is not a legitimate concern when applied to enterprise-grade, professionally licensed platforms that contractually prohibit the use of client data for training and operate under robust data processing agreements [6]. The distinction is not subtle; it is the same distinction a lawyer makes between sending a confidential document via encrypted secure email versus posting it on a public noticeboard. The tool is not the problem. The failure to choose the right tool is. Professional responsibility demands that lawyers understand the tools they use. That obligation has not changed; only the tools have.

On hallucination: This one requires a more nuanced answer, because the risk is real and the consequences in a legal context can be severe. There is a well-documented and growing body of cases in which lawyers have submitted AI-generated court filings containing fabricated citations — cases that do not exist, quotations from judgments that were never written [7]. In September 2025, a California court issued a historic fine after 21 of 23 quotes in a lawyer’s opening brief were found to be AI-generated fabrications [8]. These are not cautionary tales about AI. They are cautionary tales about professional negligence. The lawyer who submits an AI-generated document without reading it, verifying every citation, and taking personal responsibility for its contents has not been betrayed by technology. They have abandoned their professional duty.

The correct response to hallucination is not to discard the tool. It is to use the tool correctly. Every output from an AI system is a draft. It is a starting point, a research assistant’s first pass, a junior associate’s initial memo. It requires review, verification, and the application of professional judgment. That has always been the lawyer’s role. AI has not changed it; it has merely made the volume of material requiring review larger and the speed at which it arrives faster.
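Part of that verification pass can itself be automated. The sketch below is hypothetical: the allow-list stands in for a verified citator, the regex handles only the simplest single-word-party citation format, and both case names are invented. It flags citations for human review; it does not clear them.

```python
import re

# Hypothetical allow-list standing in for a verified citator; in practice
# this check would query an authoritative case-law database.
VERIFIED_CASES = {
    "smith v jones [2019] ewca civ 101",
}

# Toy pattern for single-word party names ("Smith v Jones [2019] EWCA Civ 101");
# real citation formats are far more varied, so misses still need a human eye.
CITATION = re.compile(
    r"\b[A-Z][a-z]+\s+v\.?\s+[A-Z][a-z]+\s+\[\d{4}\]\s+[A-Za-z0-9 ]+"
)

def flag_unverified(draft: str) -> list[str]:
    """Return every detected citation absent from the verified list.
    Flagged items require a human check before the draft goes anywhere."""
    found = [m.group(0).strip() for m in CITATION.finditer(draft)]
    return [c for c in found if c.lower() not in VERIFIED_CASES]

draft = ("As held in Smith v Jones [2019] EWCA Civ 101, time runs from breach. "
         "See also Doe v Roe [2021] UKSC 99.")
print(flag_unverified(draft))  # ['Doe v Roe [2021] UKSC 99']
```

A check like this does not discharge the lawyer’s duty; it narrows the field so that human verification effort lands where the risk is.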

The Supreme Court, the Tariffs, and the Limits of Memory

Let me give you a concrete example of why AI is genuinely, profoundly useful for legal analysis, and why the ability to hold vast amounts of information in accessible, searchable memory is the single most underrated capability these tools possess.

On February 20, 2026, the United States Supreme Court handed down its decision in Learning Resources, Inc. v. Trump, striking down President Trump’s sweeping emergency tariffs in a 6-3 ruling [9]. The majority opinion was authored by Chief Justice John Roberts and held that the International Emergency Economic Powers Act (IEEPA) does not authorise the President to impose tariffs [10].

Now, here is where it gets interesting — and where the legal analyst in you should sit up straight.

The six justices in the majority were: Chief Justice Roberts, Justice Gorsuch, and Justice Barrett (all Republican-appointed) alongside Justices Sotomayor, Kagan, and Jackson (all Democrat-appointed) [11]. The three dissenters were Justices Kavanaugh, Thomas, and Alito — all Republican-appointed conservatives [12]. So the court split not along the predictable partisan lines, but in a configuration that saw three conservative justices join the three liberal justices to form a majority, while three other conservatives dissented.

How do you square that circle? How do you explain to a client, a journalist, or a parliamentary committee why three of the court’s six Republican-appointed justices voted to strike down a Republican president’s signature economic policy? The answer lies in the major questions doctrine, the non-delegation principle, the specific statutory text of IEEPA, and a decades-long intellectual debate within conservative jurisprudence about the proper limits of executive power [13]. Justice Gorsuch, notably, wrote a concurrence that was as much a rebuke of his fellow conservatives as it was a statement of legal principle.

To understand that decision fully — to advise a client on its implications, to draft a submission to a regulatory body, to write an article for a legal journal — you need to hold in your mind simultaneously the statutory history of IEEPA, the evolution of the major questions doctrine from West Virginia v. EPA through to the present, the intellectual biographies of nine justices, and the political context of a second Trump administration. No human lawyer can hold all of that with perfect recall. An AI tool, properly prompted and properly verified, can surface all of it in minutes.

Memory is the killer application. It is the capability that makes everything else possible. The ability to instantly recall, cross-reference, and synthesise information across thousands of cases, statutes, and secondary sources is not a nice-to-have. For a lawyer advising on complex, multi-jurisdictional matters, it is the difference between adequate advice and exceptional advice.

 

What AI Actually Does for Lawyers

It is worth being specific about the practical applications, because the conversation about AI in law tends to oscillate between breathless enthusiasm and apocalyptic fear, with very little attention paid to the mundane, daily reality of how these tools are actually being used.

 

| Application | What AI Does Well | Where Human Judgment Remains Essential |
|---|---|---|
| Legal Research | Rapidly surfaces relevant cases, statutes, and commentary across jurisdictions | Assessing the weight, currency, and applicability of authority |
| Document Drafting | Generates first drafts of contracts, letters, pleadings, and submissions | Ensuring accuracy, appropriateness to context, and professional responsibility |
| Contract Review | Identifies non-standard clauses, flags potential risks, compares against precedent | Advising on commercial risk, negotiating strategy, and client-specific considerations |
| Case Strategy | Synthesises factual and legal material, identifies arguments and counter-arguments | Making the judgment calls that determine how a case is run |
| Client Communication | Drafts explanatory letters and summaries in plain language | Maintaining the relationship, managing expectations, and exercising empathy |
| Regulatory Analysis | Tracks legislative changes, summarises regulatory guidance, monitors developments | Advising on compliance strategy and interpreting ambiguous provisions |

 

The pattern is consistent. AI handles the volume, the recall, and the first draft. The lawyer handles the judgment, the strategy, and the accountability. This is not a diminishment of the profession. It is a clarification of what the profession is actually for.

There is also a legitimate and underappreciated role for what might be called constructive advocacy. A good lawyer does not simply present the law neutrally; they present it in the light most favourable to their client’s position, within the bounds of professional ethics and their duty to the court. AI tools can assist in identifying the framings, the arguments, and the lines of authority that best support a client’s case. A little analytical bias, properly directed and professionally supervised, is not a flaw. It is advocacy.

The Name Problem

We have been calling these things “Artificial Intelligence” since the term was coined at the Dartmouth Conference in 1956. The name has served us well enough, but it carries baggage — science fiction baggage, existential threat baggage, the baggage of a thousand think-pieces about robots taking jobs. For lawyers, in particular, the name triggers a defensive crouch.

Perhaps it is time for a rebrand.

I propose we start calling them what they are: FIT — Fucking Incredible Tools.

Not sentient. Not autonomous. Not a replacement for professional judgment. Tools. Extraordinary, powerful, occasionally unreliable tools that, in the hands of a skilled and diligent practitioner, produce better outcomes for clients than the same practitioner working without them. The hammer did not replace the carpenter. The word processor did not replace the writer. The legal database did not replace the lawyer. FIT will not replace the practitioner either. But the practitioner who refuses to learn how to use FIT will, in time, be replaced by one who has.

A Note on the Road Ahead

The evolution from 2017 to 2026 has been faster than almost anyone predicted. The cat-and-cotton-candy era feels like ancient history. The ChatGPT-can’t-add-numbers era is only four years ago and already feels quaint. The tools available today — the RAG-enhanced research platforms, the workspace-integrated assistants, the deep research and application-building capabilities — would have seemed implausible to a lawyer in 2020.

The question is not whether AI will continue to develop. It will, at a pace that will continue to surprise us. The question is whether the legal profession will engage with that development thoughtfully, critically, and on its own terms — or whether it will cede the conversation to technologists who do not understand the law, and to regulators who are perpetually catching up.

The naysayers will always be with us. Privacy concerns are worth taking seriously and managing properly. Hallucination is a real risk that demands professional vigilance. The ethical questions around AI in legal practice are genuine and require ongoing engagement from bar associations, courts, and practitioners alike.

But the tools are here. They work. They are getting better. And for the lawyer who is willing to learn them, verify their outputs, and apply professional judgment to everything they produce, they represent something genuinely remarkable: more time to think, more depth of research, and better outcomes for the people who need legal help.

That is, in the end, what the law is for.

The author is a practising lawyer. The views expressed are personal and do not constitute legal advice. All AI outputs referenced in practice should be independently verified before reliance.

References

[1] MIT fooled Google’s AI into believing a cat was guacamole — Mashable, November 2017

[2] AI image recognition fooled by single pixel change — BBC News, November 2017

[3] ChatGPT, the generative AI chatbot, is released — History.com

[4] The ChatGPT (Generative Artificial Intelligence) Revolution Has Made Generative AI Accessible to the Masses — PMC / NCBI, 2023

[5] Intro to retrieval-augmented generation (RAG) in legal tech — Thomson Reuters, December 2024

[6] Consumer and professional AI privacy standards for legal work — Thomson Reuters, November 2025

[7] AI Hallucination Cases Tracker — Natural and Artificial Law

[8] California issues historic fine over lawyer’s ChatGPT fabrications — CalMatters, September 2025

[9] Supreme Court strikes down tariffs — SCOTUSblog, February 2026

[10] Supreme Court Strikes Down IEEPA Tariffs — Skadden, February 2026

[11] Learning Resources, Inc. v. Trump — Ballotpedia

[12] A breakdown of the court’s tariff decision — SCOTUSblog, February 2026

[13] How and why the conservative justices differed on tariffs — SCOTUSblog, February 2026