This is the property of the Daily Journal Corporation and fully protected by copyright. It is made available only to Daily Journal subscribers for personal or collaborative purposes and may not be distributed, reproduced, modified, stored or transferred without written permission.

Technology,
Labor/Employment

May 15, 2026

The new frontier of age discrimination: When 'AI fluency' becomes the new dog whistle

AI is now a leading cause of U.S. layoffs, and employers who use neutral-sounding criteria like "AI fluency" to push out older workers--or who eliminate their roles only to hand the work to younger employees running AI tools--may be building an age discrimination case against themselves.

Benjamin Heller

Counsel
RFZ Law LLP



When the executive outplacement firm Challenger, Gray & Christmas released its April 2026 jobs report last week, the numbers landed hard. U.S.-based employers announced 83,387 job cuts--a 38% increase from March--and for the second month in a row, artificial intelligence was the leading reason cited. Employers attributed 21,490 of those planned layoffs--roughly 26% of all April cuts--directly to AI and automation, with the technology sector leading all industries in job cuts.

But the headline numbers, alarming as they are, tell only part of the story. As CNN Business Tech Editor Lisa Eadicicco reported this week, AI is not simply replacing entire positions--it is restructuring them: automating certain tasks while preserving others, and recalibrating who performs which functions. The result is that a company can eliminate an older employee's role, deploy an AI tool to handle the automated portions, and hire a younger employee to manage the rest. The older worker may not have lost their job entirely to a machine--they may have lost it, at least in part, to a younger person operating one.

That pattern--older workers replaced by younger ones--did not begin with AI. In its 2024 report, "High Tech, Low Inclusion," the Equal Employment Opportunity Commission found that workers over 40 in the high-tech workforce lost ground between 2014 and 2022. More pointedly, the agency found that discrimination charges filed by tech workers were more likely to involve age than charges filed in other sectors. It is worth noting that the Age Discrimination in Employment Act (ADEA) protects any employee 40 years of age or older. AI has not created this legal landscape so much as it has introduced new vocabulary and new mechanisms through which age-based employment decisions will now be made--and challenged.

The legal framework: When AI-driven displacement is, and is not, actionable

In a vacuum, replacing an employee's function with artificial intelligence does not automatically give rise to an employment discrimination claim. Courts, such as the California Court of Appeal in the 1994 Martin v. Lockheed Missiles & Space Co. decision, have long recognized that technological innovation, business restructuring, and economic necessity can constitute legitimate, non-discriminatory justifications for workforce changes--and employers have successfully invoked those defenses in age discrimination cases arising from prior waves of automation. The printing press did not give rise to discrimination claims. Neither, by itself, does a large language model.

The analysis changes, however, when additional facts enter the picture demonstrating disparate treatment or disparate impact.

Age-coded language as evidence of discriminatory intent

The new vocabulary this moment has produced--terms like "AI fluency"--is a useful place to start, because it illustrates precisely how age-coded language operates in the modern workplace. The term "AI fluency" began as a legitimate framework for measuring technological competency. In a recent article for Harvard Business Impact, Jeff Pacheco defined "AI fluency" as the proficiency of frequent generative AI users who demonstrate a strong understanding of its capabilities. There is a growing risk, however, that such terms--or, more specifically, an alleged lack of "AI fluency"--are being used as something else: labels applied disproportionately to older workers as pretext to justify their exclusion from opportunities, their removal from roles, and their termination. Age-coded language, consistently applied to exclude members of a protected class, does not become legal simply because it sounds innocuous--a principle that a case resolved just this week makes vividly clear.

On May 8, 2026, the United States District Court for the Northern District of California granted final approval to a $50 million class action settlement in Curley v. Google--not an age discrimination case, but a racial discrimination one, and a direct illustration of how neutral-sounding criteria can still give rise to claims of discriminatory impact in practice. The lawsuit alleged that Google engaged in a systematic pattern of racial discrimination: "underleveling" Black employees, paying them less than non-Black counterparts, denying advancement and retaliating against those who raised concerns. Among the specific practices alleged were Google's use of a "cultural fit" interview to assess candidates' "Googleyness"--a criterion the plaintiffs characterized as a plain dog whistle for race discrimination, used to screen out well-qualified Black candidates who had otherwise performed strongly throughout the process. The allegations underscore a principle that applies with equal force in the age discrimination context: facially neutral language, consistently applied to exclude a protected class, does not become legal simply because it sounds innocuous.

Courts have recognized that the use of dog whistle terminology can be admissible evidence of discriminatory intent in support of discrimination claims. In the 2010 decision Marlow v. Chesterfield County School Bd., a federal district court in Virginia held that referring to a young employee as a "digital native" and an older employee as a "digital immigrant" in a PowerPoint presentation was evidence of age discrimination. As another example, in Hoglund v. Sierra Nevada Memorial-Miners Hospital, decided by the California Court of Appeal in 2024, a supervisor's comments alone--such as expressing a preference for hiring "babies" because younger employees were easier to train--supported a judgment finding age discrimination under California's Fair Employment and Housing Act (FEHA).

Together, the allegations in Curley v. Google and the holdings in Marlow and Hoglund illustrate that executives or managers who deploy age-coded language--characterizing older employees as lacking AI adaptability or suggesting that younger workers are inherently better suited to an AI-driven environment--may be creating evidence of age discrimination. Whether such language appears in a performance review, a restructuring memo, or an offhand remark, courts have shown they are willing to let juries decide what it means.

Assumptions about adaptability and unequal access to training

Age-coded language is often the surface expression of a deeper assumption--that older workers are less capable of adapting to new technology. When an employer acts on that assumption, the conduct that follows can itself constitute age discrimination, independent of whatever language was used to describe the decision.

In the 1997 case O'Mary v. Mitsubishi Electronics America, Inc., the California Court of Appeal reversed a defense verdict after finding that the trial court improperly excluded a vice president's statement that senior executives had discussed a policy of getting rid of managers over 40 and replacing them with younger employees, who he assumed were more "aggressive." The court held that this evidence went directly to the employer's discriminatory motive, underscoring that when an employer acts on the assumption that age predicts capability or drive, that is precisely the stereotyping age discrimination laws are designed to prohibit.

Courts have extended that principle directly to training. In Vogl v. Arrow Pattern & Foundry Co., a 1994 case out of the Northern District of Illinois, the court denied the employer's motion for summary judgment, citing evidence that the employer had deliberately excluded an employee from computerized equipment training based on his age, while training younger workers. The Vogl court reasoned that by withholding training on the basis of age, the employer may have effectively engineered the employee's inability to perform his job and then used that manufactured deficiency as a pretext for his termination.

That reasoning is particularly significant in the AI context. The Harvard Business Impact research on AI fluency discussed above shows that the skill is built through hands-on experimentation and organizational support--not through innate generational aptitude. An employer who assumes older workers are less capable of adapting to AI tools, withholds equal access to training or other opportunities on that basis, and then cites their lack of AI fluency as a performance deficiency has done two things at once: acted on an assumption the law prohibits, and manufactured the very deficiency it will later rely on. Offering AI training, certification programs, and upskilling opportunities to younger employees while withholding them from older ones is not a neutral business decision--it is potential evidence of discrimination.

Elimination, reorganization and the transfer of duties to younger workers

A distinct question arises when a reorganization redistributes duties or a restructuring assigns the same functions to different employees. Courts have found that when the work of an older employee ends up in younger hands, the label the employer places on the transition matters far less than the substance of what occurred.

In the 1998 decision Bedell v. American Yearbook Co., a federal court sitting in the District of Kansas held that a prima facie case of age discrimination under the ADEA can be established when a younger employee is trained to perform at least a portion of an older employee's redistributed job duties, even if the employer claims the tasks were eliminated by automation rather than reassigned. In that case, a production control clerk with twenty years of service had her position allegedly automated by a new computer system while a younger employee was simultaneously trained to perform at least part of her responsibilities--creating a factual dispute sufficient to survive summary judgment. Similarly, in the 1979 case Moore v. Sears, Roebuck & Co., a federal district court held that an employee whose position was technically abolished could still establish a constructive "replacement" for purposes of the ADEA if her duties were reallocated principally to younger employees. The absence of a one-for-one replacement, the Moore court reasoned, does not foreclose an inference of age discrimination.

That framework maps directly onto the AI reorganization scenario now playing out across industries: a company eliminates a senior employee's role, cites AI as the reason, and a younger employee absorbs what the older worker was doing--now executed through AI tools. Bedell and Moore demonstrate how courts look beyond what the employer calls the restructuring to examine where the work actually went and who is now doing it.

The retaliation risk: AI's impact beyond termination

The Curley v. Google complaint raises another dimension worth attention--a reminder that AI's legal impact on employees does not begin and end with textbook discrimination. Among the many claims the plaintiffs asserted was that employees who surfaced concerns through internal reporting channels faced professional retaliation such as pretextual performance plans and exclusion from opportunities. That pattern is not unique to race discrimination and is entirely predictable wherever employees are disadvantaged on the basis of a protected characteristic.

An older worker who raises concerns about AI-driven displacement, questions why reskilling is not being offered equitably, or challenges whether reorganization decisions are tracking age is engaging in protected activity under both the ADEA and applicable state laws, such as California's FEHA. The retaliation that follows need not be termination--it can come through exclusion from new AI initiatives, negative performance reviews tied to alleged lack of "AI fluency," or being quietly managed out. Each of those actions, taken in response to a protected complaint, is independently actionable and may be layered on top of an underlying discrimination claim, expanding potential liability and damages.

"AI fluency" today, litigation tomorrow

The intersection of artificial intelligence and age discrimination is not a problem forming on the horizon. It is here. AI has now been the leading cited cause of layoffs for two consecutive months, a trend that follows the EEOC's findings on the erosion of older workers' standing in the tech sector over more than a decade. While courts already have a well-developed body of precedent that applies to analogous past scenarios, the law will become more fully fleshed out as it relates specifically to AI, as employees inevitably bring age discrimination claims amid what appears to be an accelerating wave of job losses.


