
Technology,
Real Estate/Development,
Civil Rights

Aug. 12, 2025

How algorithmic bias keeps renters out and puts fair housing to the test

Bias baked into tenant-screening algorithms is locking out Black, Latino, and immigrant renters, testing whether fair housing laws can withstand political rollbacks in the AI era.

Gary W. Rhoades

Civil Rights Attorney Specializing in Fair Housing

Email: garyrhoades2323@gmail.com

UC Davis School of Law (King Hall); Davis, CA

Gary Rhoades was previously the Litigation Director of the Southern California Housing Rights Center and a Deputy City Attorney for Santa Monica, drafting and enforcing the City's laws against housing discrimination and tenant harassment. Gary is currently on the editorial board for the ABA's Human Rights Magazine and Special Counsel for the City of Inglewood's Housing Protection Department.



Few laws protected tenants from housing discrimination before the federal Fair Housing Act (FHA) became law in April of 1968. With overt, in-your-face racism, landlords routinely and explicitly refused to rent to Black applicants. Black families bore the brunt of that discrimination: they were forced to spend more time and money to find housing and often ended up in overcrowded, unsafe neighborhoods.

In Los Angeles, rental housing discrimination, along with redlining and racially restrictive covenants, led to segregation. By the time Dr. Martin Luther King visited here in March of 1968 to tout the fair housing bill, banks, developers and landlords had disfigured Los Angeles into one of the nation's most segregated cities.

Entrenched segregation leaves an impact easily discerned in data today, including current and prior addresses, zip codes, civil filings (eviction attempts and tenants' attempts to get repairs), arrests and convictions. Forged in segregation, riddled with misidentification problems and overrepresenting persons of color, these datasets reflect the past's ugly bias. But they also figure in deciding who gets housing in the present, as landlords increasingly rely on tenant screening software whose algorithms are trained with, or otherwise rely on, the tainted records to produce a predictive risk score. Records flawed by the old in-your-face discrimination now abet behind-your-back discrimination whenever the decision to give an applicant of color a worse score transpires inside a computer.

Algorithms impact access to millions of apartments nationwide. How did tenant selection get to the point where technology exacerbates rather than curbs housing discrimination? Is the FHA at age 57 strong enough to resist a president who is both hostile to fair housing and eager to unleash most AI tools except those used to protect civil rights?

In Los Angeles, tenants report algorithm-related denials but legal claims largely focus on credit reporting. On the East Coast, however, and at the center of a landmark class action lawsuit, the ordeal of Massachusetts resident Mary Louis illustrates algorithmic bias's fair housing impact.

Mary learned of an apartment in one of the City of Malden's safe neighborhoods, with two bathrooms and an in-unit washer and dryer. It was a housing opportunity that checked every box on her wish list. She submitted an application to Granada Highlands, a housing provider that used a tenant screening software program called SafeRent.

Mary, who is Black and uses a housing voucher to pay for approximately 69% of her rent, had proof that she had paid her rent on time for sixteen years. However, according to her lawsuit, Mary's application was denied because SafeRent's algorithm gave her a low score that ignored the value of her voucher and spotless tenancy record. The lawsuit also alleged that the algorithm "assigns disproportionately lower SafeRent Scores to Black and Hispanic rental applicants compared to white rental applicants."

After SafeRent closed the door on Mary, all she knew was that she had to spend more time and money applying elsewhere and then settled for a home with less room, in a less safe neighborhood, at a higher rent. But because of a lack of transparency, what SafeRent used in calculating the score was a mystery.

Granada Highlands claimed it was a mystery to them, too. The software's inner workings, inside the so-called "black box," are often unknown to the user. Worse yet, in some cases even the creators of such algorithms claim to have remained behind, down in the dark lab, neither following nor knowing where their creatures go.

The Frankenstein dangers of data-centric technology have long captured our imagination, as I related in a pre-election article for Human Rights Magazine last year. (Gary Rhoades, "Ghosts in the Machine: How Past and Present Biases Haunt Algorithmic Tenant Screening Systems," Human Rights Magazine, June 2024.) In 1968, the same year the Fair Housing Act was passed, the world met a sentient AI entity named HAL 9000 aboard a spaceship in the film "2001: A Space Odyssey." Speaking in its iconic voice of eerie calm, HAL also decided, for the sake of the mission, to close the door against someone who wanted in.

Writers such as Kazuo Ishiguro have addressed the issue. Ishiguro's novel "Klara and the Sun" (2021) explores integration of AI into humanity and society. Ishiguro told WIRED Magazine, "I think there is this issue about how we could really hardwire the prejudices and biases of our age into black boxes and won't be able to unpack them."

On the documentary side, "Coded Bias" (2020) shows how Joy Buolamwini, an MIT-trained computer scientist and founder of the Algorithmic Justice League, discovered algorithmic bias in facial recognition systems and, working with Cathy O'Neil, mathematician and author of "Weapons of Math Destruction," fought to expose it.

AI's notoriety has since blown up. With the advent of ChatGPT and other products, algorithms have been writing magazine articles, researching new drugs, making music and practicing law. In the current AI boom, algorithms have seemingly advanced into our lives past points of no return, and accordingly, from roughly 2022 to the end of 2024, lawmakers, enforcement agencies (especially California's), and civil rights and consumer rights attorneys ramped up their examination of AI's pitfalls. AI's roles in decision-making in housing, employment, policing and privacy were also featured in lawsuits, new laws and congressional hearings.

If a collective Skynet-type AI entity had run a tracking algorithm before Donald Trump's inauguration, it would have sensed a fast-growing legal shield against algorithmic harm forming around tenants like Mary Louis, along with employees and consumers.

Mary's story is emblematic, but to see how systems perpetuate bias from another angle, consider how tenant screening software is conceptualized and built -- from first mission briefing to marketed algorithm -- by peeking in on the creation of a hypothetical tenant screening program.

A software team is gathered and told that landlords want a quick, easy-to-use product -- one that predicts which applicants might be unreliable tenants so they can be rejected immediately. The sales team is on standby, ready to swap out "unreliable" for the more provocative "unsafe." Whether the software will merely use fixed algorithms or be truly AI-enhanced -- generative AI -- the sales team drafts puffery to promote a scoring system that boasts magical and infallible powers.

If the software team chooses the AI-enhanced route, it then writes instructions for the computer to sift through training data until it finds a way to predict and score how unsafe an applicant will be. Emphasis will be on combing government and private repositories holding information about prior addresses, zip codes, arrests, convictions, credit scores, eviction attempts and other civil litigation, and now, in 2025, immigration-related markers such as visa overstays. Reporting by groups such as the Urban Institute confirms the fallible nature of such records (including poor data matching), the failure to curb the length of look-back periods, and the overrepresentation of persons of color, especially in segregated cities.

Despite the records' shortcomings and dangers, the team lets the set of algorithms either use the records to judge a particular applicant or, potentially worse, train on those tainted records until it derives a mysterious formula for scoring applicants, a formula that is the basis of software to be sold to landlords across the nation. Information about the formula is proprietary, claim the creators, another aspect of the black box.
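What that looks like in practice can be made concrete with a toy sketch. The Python snippet below is purely hypothetical -- synthetic data, invented field names and a deliberately simple model, not any vendor's actual product -- but it illustrates the mechanism described above: a model that never sees race can still score one group worse when zip codes and eviction-filing histories, themselves shaped by segregation, are the training inputs.

```python
# Hypothetical illustration only: synthetic data and invented field names.
# Shows how a screening model trained on historically skewed records
# (eviction filings concentrated in certain zip codes) can penalize
# applicants by proxy, even though "race" is never a model input.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Protected class membership (never given to the model).
group = rng.integers(0, 2, n)               # 0 = group A, 1 = group B

# Segregation: group B applicants are concentrated in a few zip codes.
zip_code = np.where(group == 1,
                    rng.integers(0, 3, n),  # zips 0-2
                    rng.integers(3, 10, n)) # zips 3-9

# Historical record: eviction *filings* (not outcomes) were far more
# common in the segregated zip codes, regardless of tenant behavior.
past_filing = rng.random(n) < np.where(zip_code < 3, 0.30, 0.08)

# Label the model trains on: "denied or filed against by past landlords".
denied_before = past_filing | (rng.random(n) < 0.05)

X = np.column_stack([zip_code, past_filing]).astype(float)
model = LogisticRegression().fit(X, denied_before)

risk = model.predict_proba(X)[:, 1]
print("mean risk score, group A:", round(risk[group == 0].mean(), 3))
print("mean risk score, group B:", round(risk[group == 1].mean(), 3))
```

The point is not the particular numbers but the proxy effect: strip the protected characteristic from the inputs and the skewed history carries it in anyway.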

Even with the challenges of proving bias that is hidden in software and filtered through third-party purveyors, fair housing law can still protect tenants and potential tenants from algorithmic bias. The federal and California fair housing laws prohibit a wide array of discrimination, up and down the chain of housing decisions. When the 1968 fair housing law finally prohibited the in-your-face intentional discrimination that had spread through Los Angeles' neighborhoods, new and subtle tactics by landlords emerged to thwart the law, forcing housing advocates to conduct testing or use statistics to prove that seemingly race-neutral policies had a discriminatory impact.

Advocates breathed a sigh of relief in 2015 when the U.S. Supreme Court in Texas Department of Housing and Community Affairs v. Inclusive Communities Project, Inc. upheld the use of statistical impact analysis in fair housing cases. Specifically, the Court ruled that under the disparate impact theory and the FHA, practices are unlawful even without evidence of intent if they "significantly disadvantage a protected class," unless the provider can prove the policy is necessary and no less discriminatory alternative exists. (See Texas Dept. of Housing & Community Affairs v. Inclusive Communities Project, 576 U.S. 519 (2015).)
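The statistical showing the Court preserved is, at bottom, a comparison of rates. The arithmetic below is only an illustration with hypothetical numbers -- courts weigh statistical significance and context, not a single bright-line ratio -- but it captures the basic disparate impact measurement:

```python
# Illustrative arithmetic only; applicant and denial counts are hypothetical.
applicants = {"white": 1000, "black": 400, "hispanic": 350}
denials    = {"white":  120, "black": 130, "hispanic": 100}

rates = {g: denials[g] / applicants[g] for g in applicants}
for g, r in rates.items():
    print(f"{g:>8}: denial rate {r:.1%}")

# One common screen: compare each group's approval rate to the most
# favored group's (analogous to the EEOC "four-fifths" guideline used
# in employment cases).
approval = {g: 1 - rates[g] for g in rates}
best = max(approval.values())
for g, a in approval.items():
    print(f"{g:>8}: approval ratio vs. best group = {a / best:.2f}")
```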

Despite the Inclusive Communities decision, the first Trump Administration issued a HUD regulation that made algorithmic discrimination claims harder to prove, including exemptions for housing providers who relied on a third party for the screening product and an effective safe harbor against impact claims unless malicious intent was clear. That rule was stayed nationwide by court order, and by 2023 the Biden Administration had committed to aggressive enforcement of impact standards. Biden's HUD scuttled Trump's regulation and implemented its own to further prohibit algorithmic discrimination.

Now it's Trump's turn again. It does not take expensive AI-enhanced software to predict that Trump will be an unsafe tenant in the White House with respect to, among other things, stopping algorithmic housing bias. Trump has feared and loathed housing rights efforts since at least 1973. That's when the U.S. Department of Justice filed its first successful major Fair Housing Act lawsuit against Trump himself and his father, alleging a vast discrimination scheme against Black tenants in their New York empire of 14,000 apartments.

His administration's current plans include dismantling the disparate impact theory, expanding AI usage in housing, and reducing oversight and funding at HUD, the CFPB and the FTC -- all while targeting "woke" AI tools that might help enforce civil rights laws. There are also proposals afoot in the Republican-led Congress to amend the Gramm-Leach-Bliley Act in ways that would weaken states' ability to address violations of related consumer protection and privacy laws, including a preemption of such laws and their enforcement.

Along with the anti-immigrant rhetoric and brutal ICE removal tactics, Trump's second term also includes a controversial expansion of immigration enforcement technology that will likely impact tenant screening. In April 2025, ICE awarded Palantir Technologies a $29.8 million contract to build "ImmigrationOS," a platform designed to track visa overstays, prioritize certain "removals" and integrate enforcement data in near real-time. The "risk" analysis draws on criteria including country of origin, biometric history and social media. While intended for government enforcement, these datasets could be accessed by landlords or screening firms who purchase such access. Thus, an applicant whose lawful rental history should clear scoring, but who appears in ICE data as a "visa overstay" or "migratory target," may receive a lower score or otherwise be denied. The result is yet another screening risk for immigrant tenants, whose national origin is protected under the FHA and whose immigration status is protected in California. (See Gary Rhoades, "Unscrupulous landlords and attorneys who help them target immigrants could pay the price," Daily Journal, April 29, 2025.)

While the creation and use of a screening program with a statistically significant effect of excluding minorities should still be unlawful under the FHA (and under California's stronger laws), enforcement must still weather the Trump onslaught. Advocates and lawmakers must challenge proposed bills and regulations that attempt to weaken the fair housing laws. Test cases like Mary Louis' against SafeRent -- which recently settled for $2.3 million and comprehensive injunctive relief -- must continue to be filed to hold the line, often incorporating other laws such as the Fair Credit Reporting Act and those protecting data privacy. Also, software itself can be enlisted to further fair housing, either by auditing screening tools for error and bias probability, or by going on offense with data scrapers that identify biased tenant screening systems.
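One form such self-auditing could take is a routine that compares score distributions and denial rates across groups before a product ships. The following is a minimal sketch, assuming a vendor has (or statistically estimates) group labels for a held-out test sample; every field name, threshold and number here is invented for illustration.

```python
# Hypothetical audit sketch: compare a screening model's outputs across
# groups in a held-out test sample. Names and thresholds are invented.
import numpy as np

def audit(scores: np.ndarray, groups: np.ndarray, deny_threshold: float) -> dict:
    """Report mean score and denial rate per group, plus the largest gap."""
    report = {}
    for g in np.unique(groups):
        s = scores[groups == g]
        report[str(g)] = {
            "mean_score": float(s.mean()),
            "denial_rate": float((s >= deny_threshold).mean()),
        }
    rates = [v["denial_rate"] for v in report.values()]
    report["max_denial_gap"] = max(rates) - min(rates)
    return report

# Example run with synthetic scores for two groups:
rng = np.random.default_rng(1)
scores = np.concatenate([rng.normal(0.45, 0.1, 500), rng.normal(0.55, 0.1, 500)])
groups = np.array(["A"] * 500 + ["B"] * 500)
print(audit(scores, groups, deny_threshold=0.6))
```

A gap in denial rates does not by itself prove a violation, but a vendor or regulator running this kind of check would at least know where to look.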

In her interview in "Coded Bias," O'Neil reaffirms that algorithms use the past to make predictions about the future, and that they do so in a way that perpetuates the inequalities of the past. But then she points past the machines: "It's not what AI is doing to us, but what the powerful are doing to us using AI." Her words haunt the neighborhoods of cities like this one, harking back to the racist actions of powerful but long-dead owners and developers, whose poison data-centric technology can now carry forward. Ultimately, we must ensure that our laws and enforcement agencies confront and thwart that legacy.


