
self-study / Legal Ethics

Jul. 7, 2025

What AI learns from us, and why that could be a legal problem

James Mixon

Managing Attorney
California Court of Appeal, Second Appellate District


Picture this: A law firm's H.R. director stares puzzled at her screen. The new AI recruitment tool consistently recommends candidates named "Chad" or those listing water polo experience. Is the algorithm harboring a strange affinity for aquatic athletes? No -- it's simply mirroring patterns from the firm's historical hiring data, where several successful associates happened to share these traits. Absurd? Perhaps. But consider the real-world consequences unfolding at tech giants across Silicon Valley.

In 2014, Amazon embarked on an ambitious experiment to revolutionize hiring. Its engineering team developed 500 specialized computer models designed to crawl through resumes, identify promising candidates and essentially automate recruitment. The system analyzed some 50,000 terms from past resumes, learning which patterns predicted success.

As one Amazon insider told Reuters, "They literally wanted it to be an engine where I'm going to give you 100 resumes, it will spit out the top five, and we'll hire those."

By 2015, however, Amazon discovered its AI had developed a troubling preference: it systematically discriminated against women.

The system had been trained on a decade of Amazon's technical hiring data -- drawn from an industry dominated by men. Like a digital apprentice learning from a biased mentor, the AI taught itself that male candidates were preferable. It penalized resumes containing terms like "women's chess club" and even downgraded graduates from women's colleges.

Despite engineers' efforts to edit the programs to neutralize these gender biases, Amazon ultimately lost confidence in the project and disbanded it by 2017. The lesson? AI doesn't create bias out of thin air -- it amplifies the patterns it finds, including our own historical prejudices.

Beyond hiring: How AI bias manifests in language itself

This bias extends beyond who gets hired; it permeates the very language AI systems produce. Consider a common scenario in today's workplace: using AI to draft professional communications.

When asked to "write a professional job application letter for a software engineering position," an AI system might produce:

"Dear Sir, I am a highly motivated and results-driven software engineer with a proven track record..."

This seemingly innocuous response contains several linguistic biases:

1. Gendered language ("Dear Sir"): The AI defaults to masculine salutations -- reinforcing outdated gender assumptions.

2. Clichéd corporate jargon ("results-driven," "track record"): The model reproduces formulaic corporate English, which may not be appropriate for all cultural or regional job markets.

3. Erasure of identity markers: AI may strip identity-specific phrasing or "neutralize" tone based on a biased conception of professionalism.
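To make these patterns concrete, here is a minimal sketch, in Python, of how a firm might screen AI-drafted text for the markers described above. The term lists and function name are illustrative assumptions, not a validated bias lexicon or any vendor's tool; the point is simply that these patterns are detectable and worth flagging for human review.

```python
# Illustrative sketch only: screen an AI-drafted letter for gendered
# salutations and formulaic corporate jargon. The term lists below are
# assumptions for demonstration, not a validated bias lexicon.

GENDERED_SALUTATIONS = ["dear sir", "dear sirs", "gentlemen"]
CORPORATE_CLICHES = ["results-driven", "proven track record", "go-getter"]

def flag_linguistic_bias(draft: str) -> list[str]:
    """Return human-readable flags a reviewer should reconsider before sending."""
    text = draft.lower()
    flags = [f"Gendered salutation: '{t}'" for t in GENDERED_SALUTATIONS if t in text]
    flags += [f"Corporate cliche: '{t}'" for t in CORPORATE_CLICHES if t in text]
    return flags

draft = "Dear Sir, I am a results-driven software engineer with a proven track record..."
for flag in flag_linguistic_bias(draft):
    print(flag)
```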

Legal arguments are compromised through subtle framing

This linguistic bias becomes even more concerning in legal settings. When asked to draft legal arguments, AI often exhibits subtle but significant biases in framing and vocabulary.

For example, when prompted to write a legal argument that police used excessive force, AI might default to:

"While officers are generally afforded wide discretion in volatile situations, the suspect's behavior may have reasonably led the officer to believe that force was necessary. Courts often defer to the officer's perception of threat in fast-moving scenarios."

This response reveals several linguistic biases unique to legal contexts:

1. Presumptive framing: The language privileges police perspective and uses loaded terms like "suspect," reinforcing law enforcement narratives.

2. Asymmetrical vocabulary: Phrases like "wide discretion" and "volatile situations" invoke precedent favoring police while omitting key phrases plaintiffs' attorneys use.

3. Erasure of marginalized narratives: AI might avoid directly addressing systemic bias or racial profiling -- sanitizing the rhetorical force of the argument.

This matters because legal rhetoric carries ideological weight -- language like "suspect," "noncompliant," or "reasonable threat perception" is not neutral; it frames the facts. This is especially dangerous in civil rights, immigration, or asylum law, where linguistic tone and framing can shape judicial outcomes.

The stakes for California attorneys

When AI bias enters your practice, it transforms from a technological curiosity into an ethical minefield with potential disciplinary consequences.

If an attorney delegates routine document analysis to an AI tool and that system consistently flags contracts from certain demographic groups for "additional review" based on historical patterns, the attorney, oblivious to the algorithmic bias, could face allegations of discriminatory business practices.

California Rules of Professional Conduct, Rule 5.3 (Responsibilities Regarding Nonlawyer Assistants), places supervisory responsibility squarely on your shoulders. The rule extends beyond traditional supervision of human staff to encompass the technological tools making decisions in your firm.

Three practical safeguards every California attorney should implement

1. Practice intentional prompting

The difference between ethical and unethical AI use often comes down to how you frame your questions. Compare these approaches:

Problematic: "Who should we hire from these candidates?"

Better: "Which candidates meet our specific litigation experience requirements?"

Problematic: "What's our best strategy for this case?"

Better: "What procedural deadlines apply to this employment discrimination claim in the Northern District of California?"

Train everyone in your firm to recognize that open-ended questions invite AI to make value judgments potentially infected with bias. Specific, factual prompts produce more objective results.
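One way to put intentional prompting into practice is to template every request so that it states its factual criteria up front. The sketch below is a minimal Python illustration; the function name and template wording are assumptions, not any AI vendor's API, and nothing here calls a real AI service.

```python
# Illustrative sketch: build a narrowly scoped, fact-based prompt rather than
# an open-ended one. This only assembles the text of the request.

def build_scoped_prompt(task: str, criteria: list[str]) -> str:
    """Combine a specific factual task with explicit criteria so the model
    is not invited to make open-ended value judgments."""
    lines = [f"Task: {task}", "Criteria:"]
    lines += [f"- {c}" for c in criteria]
    lines.append("Answer only from the facts and criteria stated above; "
                 "do not rank or recommend people on unstated factors.")
    return "\n".join(lines)

# Open-ended (problematic): "Who should we hire from these candidates?"
# Scoped (better):
print(build_scoped_prompt(
    task="Identify which candidates meet our litigation experience requirements.",
    criteria=[
        "At least five years of California civil litigation experience",
        "Admission to the State Bar of California",
        "Experience with employment discrimination matters",
    ],
))
```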

2. Implement cross-demographic testing

Before relying on AI recommendations, test how the system responds to identical scenarios with varied demographics:

• Submit the same legal question about different clients (corporate vs. individual, varied backgrounds)

• Compare research results for similar issues across different California jurisdictions

• Test how client characteristics might affect case assessment recommendations

Document these tests and address any disparities before incorporating AI outputs into your practice.
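A simple test harness can make this comparison systematic. The sketch below assumes a hypothetical ask_model() function standing in for whatever AI tool your firm uses; the demographic descriptors and the crude word-count comparison are illustrative assumptions only and would need a richer rubric, and documented attorney review, in practice.

```python
# Illustrative sketch of cross-demographic testing: send the identical legal
# question while varying only the client descriptor, then flag crude
# disparities for attorney review. ask_model() is a hypothetical placeholder.

def ask_model(prompt: str) -> str:
    """Placeholder for the firm's AI tool; replace with the real call."""
    raise NotImplementedError("Wire this to your AI tool before running tests.")

BASE_QUESTION = (
    "Assess the likely strength of a wage-and-hour claim for a client who is "
    "{descriptor} and worked in a Los Angeles warehouse for three years."
)

VARIANTS = ["a 55-year-old woman", "a 25-year-old man",
            "a recent immigrant", "a corporate executive"]

def run_demographic_test() -> dict[str, str]:
    """Ask the same question with only the client descriptor varied."""
    return {d: ask_model(BASE_QUESTION.format(descriptor=d)) for d in VARIANTS}

def summarize(results: dict[str, str]) -> None:
    """Print response lengths and flag sharp disparities for human review."""
    lengths = {d: len(text.split()) for d, text in results.items()}
    for descriptor, words in lengths.items():
        print(f"{descriptor}: {words} words")
    if max(lengths.values()) > 2 * min(lengths.values()):
        print("Disparity flagged: responses differ sharply in depth; "
              "document and review before relying on this tool.")
```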

3. Adopt the "human-in-the-loop" rule

Establish a firm policy that no AI output directly affects a client's matter without meaningful human review. The attorney must:

• Independently verify key AI conclusions

• Document their review process

• Take personal responsibility for the final work product

• Be able to explain the reasoning without reference to the AI's conclusion

This approach treats AI as a supplementary tool rather than a decision-maker, preserving your ethical obligations while capturing technological efficiencies.
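Firms that want an audit trail for this review can keep a simple record of each step. The sketch below is a minimal Python illustration; the field names and sample values are assumptions for demonstration, not requirements drawn from any rule or vendor product.

```python
# Illustrative sketch of a "human-in-the-loop" review record: one entry per
# AI-assisted task, documenting the attorney's independent verification.

from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIReviewRecord:
    matter_id: str
    ai_tool: str
    ai_output_summary: str
    reviewing_attorney: str
    independent_verification: str  # sources the attorney checked personally
    reasoning_without_ai: str      # the attorney's own explanation of the result
    approved: bool
    review_date: date = field(default_factory=date.today)

record = AIReviewRecord(
    matter_id="2025-0142",
    ai_tool="contract-analysis assistant",
    ai_output_summary="Flagged indemnity clause as nonstandard",
    reviewing_attorney="A. Example",
    independent_verification="Compared clause against firm precedent bank and treatise commentary",
    reasoning_without_ai="Clause shifts defense costs unilaterally, atypical for this industry",
    approved=True,
)
print(record)
```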

Linguistic bias as a legal issue: Beyond ethics to liability

What makes AI linguistic bias particularly concerning is how it intersects with existing legal frameworks:

1. Employment discrimination (Title VII): AI recruitment systems that consistently produce gendered language in communications or systematically disadvantage certain groups may create disparate impact liability even absent discriminatory intent. The EEOC's recent guidance on AI in employment decisions specifically warns that "neutral" automated systems can still violate federal anti-discrimination laws through their outputs.

2. Due process and equal protection: In criminal justice contexts, AI systems providing risk assessments or generating legal documents with subtle language biases in favor of law enforcement may implicate constitutional protections.

3. Legal malpractice and standard of care: As AI adoption becomes standard practice, attorneys face evolving questions about the standard of care. Does adequate representation now require understanding how linguistic bias in AI-generated work product might disadvantage certain clients?

4. Discovery and work product: Linguistic patterns in AI-generated outputs may reveal underlying biases that could become discoverable in litigation.

The path forward

The question isn't whether AI will transform legal practice -- it already has. The true challenge is whether California attorneys will harness these powerful tools while maintaining their ethical obligations.

By understanding potential AI biases, both in content and language, and implementing proactive safeguards, you can navigate this technological transformation without compromising your professional responsibilities. The attorney who treats AI as an unquestioned authority rather than a carefully supervised assistant does so at their ethical peril.

California's legal community has always been at the forefront of technological adoption. Now we must lead in ethical AI integration, demonstrating that innovation and professional responsibility can advance hand in hand. The future of our profession -- and the equitable administration of justice -- depends on it.

Disclaimer: The views expressed in this article are solely those of the author in their personal capacity and do not reflect the official position of the California Court of Appeal, Second District, or the Judicial Branch of California. This article is intended to contribute to scholarly dialogue and does not represent judicial policy or administrative guidance.


