This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission. Please click "Reprint" to order presentation-ready copies to distribute to clients or use in commercial marketing materials or for permission to post on a website.

Law Practice,
Ethics/Professional Responsibility

Dec. 6, 2023

AI in legal practice: New California Bar guidelines

Merri A. Baldwin

Shareholder, Rogers Joseph O'Donnell

Phone: (415) 956-2828

Email: mbaldwin@rjo.com

Baldwin is a shareholder in the San Francisco office of Rogers Joseph O'Donnell. She is a former chair of the State Bar of California Committee on Professional Responsibility and Conduct and is a member of the California Lawyers Association Legal Ethics Committee. She also teaches professional responsibility at the University of California, Berkeley School of Law.

In the year since the large language model chatbot ChatGPT debuted, the legal profession has become consumed with the possibility of using generative artificial intelligence (AI) in the practice of law. Such tools consist of deep learning models that train on vast amounts of data and generate text, images and other content based on that training. This enables sophisticated outputs that come closer than ever before to human thinking and understanding, as well as significant efficiencies in processing large volumes of data. The possible applications for AI in legal practice appear to be almost limitless and raise significant questions about what it means to practice law.

Law firms have begun to develop their own large language model AI tools, and legal vendors have incorporated AI into their products, with new tools (and possible applications) emerging every day. Risks emerged early on: two lawyers in New York were sanctioned in May 2023 for submitting a brief purportedly citing published authorities that were in fact fake cases (or "hallucinations") made up by ChatGPT, and for failing to promptly disclose that fact to the court. Courts are grappling with how to deal with the use of AI, and an increasing number of courts now require lawyers to disclose any use of AI in drafting court filings.

Legal technology experts expect the use of AI in legal practice to expand exponentially (and not only because ChatGPT is said to be able to generate essay responses sufficient to pass the bar exam). A Wolters Kluwer study released in November 2023 reported that 73% of lawyers expect to integrate generative AI into their legal work in the next 12 months. Commentators have raised ethical concerns, including potential breaches of client confidentiality, the possibility of biased outputs resulting from bias in the underlying training data, questions about when client disclosure is required, and possible breaches of the duty of competence and the standard of care caused by inaccurate output.

The California Bar addressed this dynamic environment in November 2023 by adopting guidelines for the use of artificial intelligence, "Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law." This was an important development: California is the first state to approve regulatory guidance for the use of AI. The guidelines are non-binding and do not implement changes to any professional rules or regulations, although the Bar will continue to review and consider the regulation of AI in the legal profession. Additional issues to be explored include whether the unauthorized practice of law needs to be more clearly defined, whether generative AI products should be licensed, and possible rules regarding AI and the bar exam.

The Bar's AI guidelines are organized by the ethical duties implicated in the use of AI, which enhances the practical application of the analysis. The guidance emphasizes the duty of competence, and in particular Rule 1.1, comment [1], requiring a lawyer to "keep abreast of the changes in the law and its practice, including the benefits and risks associated with relevant technology." The guidelines make clear that lawyers have a duty to understand to a "reasonable degree" how AI technology works and its risks, and they emphasize the importance of the lawyer's exercise of independent judgment, training and skill. "A lawyer's professional judgment cannot be delegated to generative AI and remains the lawyer's responsibility at all times." Also important: a lawyer must "critically review, validate and correct" any AI output to ensure accuracy. But that obligation means more than simply detecting and eliminating "false AI-generated results": it also requires that a lawyer ensure both the inputs and the outputs accurately "reflect[] and support[] the interests and priorities of the client." This requirement could apply to the decision whether to use AI in connection with a client matter as well as to how it is used.

With regard to confidentiality, it is important that lawyers understand the extent to which an AI product may share with third parties information that a user inputs, including prompts as well as uploaded documents. "A lawyer may not input any confidential information of the client into any generative AI solution that lacks adequate confidentiality and security protections." Equally straightforward is the guidance with respect to the duty to comply with the law: a lawyer needs to examine AI-specific laws, privacy laws, cross-border data rules and intellectual property law, must comply with those laws, and must counsel clients to do so in their use of AI.

Certain recommendations are more specific. Applying Rule 5.1 (Duty to Supervise), the guidance states that supervisory lawyers and law firm managers "should establish clear policies" regarding the use of generative AI and take steps to ensure that any such use by lawyers or nonlawyer staff complies with the ethical rules. To ensure compliance with Rule 3.3 (Candor to the Court), a lawyer must review all AI outputs before submission to a court and correct errors and misleading statements. The guidance does not contain specific direction as to when a lawyer has a duty to inform a client as to the lawyer's use of AI but suggests the lawyer "should consider" disclosure to the client where the lawyer intends to use AI in the client's representation.

The guidance also addresses the issue of how attorneys may charge for AI, or, rather, how attorneys may not charge. Noting that use of generative AI may enable a lawyer to "more efficiently create work product," the guidelines state that a lawyer "must not charge hourly fees for the time saved by using generative AI." This is correct as applied to a customary hourly fee arrangement. However, in certain circumstances lawyers may be able to reach agreements with clients on fee structures other than an hourly rate that reflect the efficiencies created by using AI, particularly where the law firm has invested significant resources in developing the AI tool. This will likely be an evolving area of lawyer concern.

While the Bar's AI guidance is non-binding, it helps set the current standard of practice regarding the use of generative AI. Law firms and lawyers have no choice but to pay close attention to this fast-evolving technology and its implications for the practice of law.


