Three years ago, on Nov. 30, 2022, OpenAI quietly released a
free large language model (LLM) chatbot that could seemingly answer any
question you had. Chatbots existed before, but they sounded, well, like chatbots. The brilliance of ChatGPT is that it can process complex information
to create new content. Older, rule-based chatbots mostly performed predefined
tasks depending on the input. An engineer would have to pre-program the chatbot to recognize user queries and provide corresponding canned responses.
This presented a crucial limitation because no one could predict every possible
query and response in advance. That is why the rudimentary chatbots found on many online shopping platforms so often misunderstand your question and ultimately connect you to a live agent.
ChatGPT has no such limitation. It is built on an artificial neural network and trained on vast amounts of human-generated data, including text, images, audio and video, to learn the patterns in that information. When responding to a user request, the chatbot uses those learned patterns to predict and create new content. In other words, it's designed to talk like us. And ChatGPT seems to have no question it cannot answer, no problem it cannot solve, not because it has the expertise or is trained to answer those questions or solve those problems, but because it is designed to generate a response based on your prompt, which provides the context.
ChatGPT's success became proof of concept for generative AI:
People were not only ready for it, they were enamored of it. And this AI boom has affected every industry, including the legal sector.
It is hardly surprising that a profession that constantly demands faster and
cheaper work product would turn to a technology promising exactly that. Indeed,
the companies behind the traditional legal databases, Thomson Reuters (parent of Westlaw) and RELX (parent of LexisNexis), have worked tirelessly to bring these promised fruits to
lawyers through products like Westlaw's CoCounsel and
LexisNexis's Lexis+ AI. Brand-new platforms like Harvey AI have also begun
offering law-specific AI tools that threaten to disrupt the legal technology
sector.
Despite these heavily advertised time-saving tools, some lawyers
remain skeptical of generative AI, preferring to simply ignore its existence
and rely on good old-fashioned Boolean searches and
human juniors to do grunt work. And headlines support their caution. There has
been no shortage of sanctions against lawyers and law firms for including hallucinated case citations, invented quotes, and outright false propositions
in their briefs. But reading these widely publicized examples as a general
indictment against generative AI absolves lawyers of their responsibilities. As
the California Court of Appeal recently held: "Simply stated, no brief,
pleading, motion, or any other paper filed in any court should contain any citations--whether provided by generative AI or any
other source--that the attorney responsible for submitting the pleading
has not personally read and verified." Noland v. Land of the Free, L.P.,
114 Cal.App.5th 426, 431 (2025). It is that simple: while technology has no doubt transformed the landscape of legal practice, our ethical duties remain the same.
A lawyer has a fundamental duty to provide competent representation to clients. Model Rule 1.1 specifies that "Competent
representation requires the legal knowledge, skill, thoroughness and
preparation reasonably necessary for the representation." Model R. of Pro.
Conduct R. 1.1 (2023). Lawyers who rely on AI-generated content without
independently verifying the output will likely fall short of this duty. See ABA
Formal Opinion 512 at 3-4 ("a lawyer's reliance on, or submission of, a GAI
tool's output--without an appropriate degree of independent verification or
review of its output--could violate the duty to provide competent representation
as required by Model Rule 1.1."). Meanwhile, Model Rule 3.3 prohibits lawyers
from knowingly making a false statement of fact or law to a tribunal or failing
to correct a false statement of material fact or law previously made. And Model
Rules 5.1 and 5.3 specifically require lawyers who have supervisory authority
over other lawyers or managerial authority within a firm ("senior lawyers") to
make reasonable efforts to ensure that the firm and its lawyers conform to the
Rules of Professional Conduct. In certain circumstances, the rules make senior lawyers responsible for other lawyers' violations and require them to ensure that non-lawyers' conduct is compatible with their own professional obligations. The rules are clear: A lawyer is responsible whether a false
citation arises from a junior attorney's human error or an AI tool's
hallucination.
The problems do not stop at hallucinations. Recent advancements
in hyper-realistic image generation, like Google's Nano Banana Pro, put the
authenticity of evidence presented in legal proceedings at risk. What if a
client uses these tools to fabricate evidence in their favor? We are operating
in a minefield.
Since we cannot put generative AI back in its box, how do we
navigate the vastness ahead? We call for a return to the basics. Every lawyer,
aspiring lawyer, and law student should reread their applicable ethics rules
and ensure they understand what is required when using AI. Law firms should
offer recurring training as the technology develops, covering not only how the technology works but also how it intersects with the lawyer's professional
responsibilities. Attorneys, young and old, should learn or relearn the foundational skills of legal research and writing without the assistance of AI, both so they can think for themselves and so they know what good work product looks like when verifying what AI has produced. Law firms should also establish and
publicize best practices to safeguard against AI errors. For example, to ensure the accuracy of case citations, junior lawyers could be required to send PDFs of
each opinion cited, and senior lawyers could set aside time to verify the
accuracy of the cited cases. Finally, lawyers should keep up to date with the various AI tools and their respective strengths and weaknesses.
Your new co-counsel may be AI, but the counsel responsible will
still be you.