This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission.

self-study / Legal Ethics

May 15, 2023

Programming professionalism: the ethics of artificial intelligence

Wendy L. Patrick

Wendy is a California lawyer, past chair and advisor of the California State Bar Ethics Committee (Committee on Professional Responsibility and Conduct), and past chair of the San Diego County Bar Association Legal Ethics Committee. Any opinions expressed here are her own and do not reflect those of her employer. This article does not constitute legal advice.

Within virtually every industry today, artificial intelligence (AI) is a hot topic. Cutting-edge and controversial, it has become a household buzzword due to the increasing number of services conducted through efficient automation. Yet one of the things that distinguishes AI is that it has moved beyond automation and can now think and learn.

But can it practice law? The tentative answer seems to be, "not yet."

Legal Ethics and AI

Although AI has proven over the years to be a helpful way to enhance the speed and accuracy of tasks - such as legal research - it cannot replace judgment, morality, or chemistry with clients, court, counsel, or colleagues. And in front of a jury, a lawyer's silver tongue remains a unique, individualized feature of skilled advocacy. Yet even the most talented trial lawyers may nonetheless improve the speed and efficiency of some of the more mundane aspects of legal work through automating services. But which ones, and at what cost?

The use of AI in the practice of law implicates a variety of ethical rules. Not surprisingly, the first one deals with the obligation to know how to use AI in the first place: the duty of competence.

The current, revised rule 1.1 Competence was approved by the Supreme Court, effective March 22, 2021. It states that a lawyer shall not "intentionally, recklessly, with gross negligence, or repeatedly fail to perform legal services with competence." In the second part of the rule, "competence" is defined as not only learning and skill, but also the "mental, emotional, and physical ability reasonably necessary for the performance of such service." Law firms, bar associations, and law schools will no doubt increasingly incorporate AI into required curricula to equip lawyers to become competent not only in legal knowledge, but also in the use of AI.

Rule 5.1 Responsibilities of Managerial and Supervisory Lawyers requires leadership to ensure the competence and ethical savvy of the lawyers they supervise. It requires those with managerial authority to "make reasonable efforts to ensure that the firm has in effect measures giving reasonable assurance that all lawyers in the firm comply with these rules and the State Bar Act." Rule 5.3 extends this duty to non-lawyer members of the firm, ensuring applicable measures give "reasonable assurance that the nonlawyer's conduct is compatible with the professional obligations of the lawyer."

AI may also help lawyers work faster in some aspects of legal practice, which can facilitate compliance with rule 1.3, Diligence. That rule states that a lawyer shall not "intentionally, repeatedly, recklessly or with gross negligence fail to act with reasonable diligence in representing a client." It defines "reasonable diligence" as acting "with commitment and dedication to the interests of the client," as well as ensuring one does not "neglect or disregard, or unduly delay a legal matter" that has been entrusted to the lawyer.

From the Chatroom to the Courtroom

Using AI to prepare legal briefs and arguments implicates California rule 3.3, Candor Toward the Tribunal, which states in subdivision (a) that a lawyer shall not "knowingly make a false statement of fact or law to a tribunal or fail to correct a false statement of material fact or law previously made to the tribunal by the lawyer," or "offer evidence known to be false." California Business and Professions Code section 6068(d) contains similar language, advising lawyers they must never mislead a court "by artifice or false statement."

Complying with the duty of candor requires lawyers to know when they should augment AI-related research with good old-fashioned fact checking. "Robot briefs" are not (yet) reliable enough to submit to a court without verifying both the citations and applicable legal analysis. And if mistakes are made and something sneaks onto the record inadvertently that is subsequently discovered to be false, rule 3.3(a) provides, "If a lawyer, the lawyer's client, or a witness called by the lawyer, has offered material evidence, and the lawyer comes to know of its falsity, the lawyer shall take reasonable remedial measures, including, if necessary, disclosure to the tribunal."

AI is Not a Charismatic Communicator

California rule 1.4 Communication with Clients, approved by the Supreme Court, effective January 1, 2023, states among other things, that a lawyer must "reasonably consult with the client about the means by which to accomplish the client's objectives in the representation," "keep the client reasonably informed about significant developments relating to the representation," and advise the client about any "relevant limitation on the lawyer's conduct when the lawyer knows that the client expects assistance not permitted by the Rules of Professional Conduct or other law." All of this requires individualized legal analysis tailored to the specific case at hand, no doubt incorporating lessons learned through other cases.

In addition, the duty to reasonably consult with clients about how to accomplish the objectives of representation can include providing information about the use of AI. When it does, in order to satisfy the spirit of rule 1.4, the lawyer has to know enough about the use of AI (relating back to rule 1.1, Competence), to be able to explain it to a client.

And rule 1.4 demands even more than a duty of communication. It requires a lawyer to explain a matter to a client "to the extent reasonably necessary to permit the client to make informed decisions regarding the representation." Lawyers must form relationships of trust with clients sufficient to allow them to determine what is "reasonably necessary" in each case. And rule 1.4(c) allows the exercise of even more discretion in that it actually permits delayed transmission of information to a client "if the lawyer reasonably believes that the client would be likely to react in a way that may cause imminent harm to the client or others." AI cannot make these subtle, interpersonal determinations, which require an authentic attorney-client relationship of trust and perception.

Note also that rule 1.6, Confidentiality, encompasses one of the hallmarks of an attorney-client relationship in its careful guarding of client confidential information. This is one of the guarantees clients anticipate and expect when retaining legal services. When lawyers input client information into an AI tool like ChatGPT to generate content, whether pleadings, arguments, or analysis, the question becomes whether they might be inadvertently sharing confidential information, and with whom. This ties back to the rule 1.1 competence requirement; lawyers should know the answer before they engage.

AI Cannot Give Individualized Legal Advice

AI is not a sentient entity that can thoughtfully reason with individual clients beyond the data it has been given. Therefore, an AI assistant cannot decide how to implement a broader, more amorphous ethics rule such as 2.1. California rule 2.1 Advisor states that when representing clients, lawyers shall "exercise independent professional judgment and render candid advice." Comment [2] presents a non-exhaustive list of considerations an AI assistant could arguably never fulfill. It allows a lawyer in rendering advice to refer to "considerations other than the law, such as moral, economic, social and political factors that may be relevant to the client's situation." This type of analysis requires human emotion, experience, and judgment.

The Personalization of Professionalism

In practicing law, important decisions and representations should be made by advocates, not artificial assistants. When representing the interests of individual clients and pursuing justice for all, the best intelligence is emotional and authentic, not synthetic.



