
Landmark Case Challenges AI In Housing
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
In a landmark case highlighting the potential harms of AI decision-making in housing, SafeRent, an AI-powered tenant screening company, has agreed to a $2.3 million settlement and to stop scoring rental applicants who use housing vouchers.
In a Rush? Here are the Quick Facts!
- SafeRent rejected Mary Louis’s rental application despite a strong reference from her landlord.
- A lawsuit alleged SafeRent’s scoring discriminated against Black and Hispanic renters using vouchers.
- Federal agencies are monitoring the case as AI regulation in housing remains limited.
The lawsuit, brought by tenants Mary Louis and Monica Douglas, alleged that the algorithm used by SafeRent disproportionately discriminated against Black and Hispanic renters who relied on housing vouchers, violating the Fair Housing Act, as first reported by The Guardian.
Mary Louis, a security guard in Massachusetts, was among over 400 renters affected by SafeRent’s controversial system. Despite receiving a glowing reference from her landlord of 17 years and using a low-income housing voucher guaranteeing partial rent payment, her application was rejected.
The rejection came after SafeRent assigned her a score of 324, far below the management company’s minimum requirement of 443. SafeRent provided no explanation for the score and no process to appeal it, as reported by The Guardian.
The lawsuit, filed in 2022, accused SafeRent of using an opaque scoring system that factored in irrelevant financial data, such as credit card debt, while ignoring the guaranteed payments provided by housing vouchers, said The Guardian.
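SafeRent’s actual model is proprietary and was never disclosed; the lawsuit describes only its alleged behavior. As a purely hypothetical sketch of the mechanism the plaintiffs described, the Python below scores an applicant on credit data while giving zero weight to a voucher’s guaranteed rent, then applies the 443 cutoff cited in the case. Every function name, weight, and feature here is invented for illustration.

```python
# Hypothetical illustration only: SafeRent's real model is undisclosed.
# The weights and feature names are invented; only the 443 cutoff comes
# from the case as reported.

def screening_score(credit_score: int, credit_card_debt: float,
                    voucher_covers_rent: bool) -> float:
    """Toy tenant score that weighs credit data but ignores vouchers."""
    score = credit_score * 0.6                  # credit history dominates
    score -= min(credit_card_debt / 100, 150)   # penalty for card debt
    # voucher_covers_rent is never used: the guaranteed rent payment
    # carries zero weight, which is the core flaw the suit alleged.
    return score

MINIMUM_SCORE = 443  # the management company's cutoff cited in the case

applicant = screening_score(credit_score=580, credit_card_debt=8_000,
                            voucher_covers_rent=True)
print(f"score={applicant:.0f}, accepted={applicant >= MINIMUM_SCORE}")
# Output: score=268, accepted=False -- a reliable rent-payer with weak
# credit falls below the cutoff even though a voucher guarantees the rent.
```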
Studies show that Black and Hispanic renters are more likely to have lower credit scores and rely on vouchers than white applicants, exacerbating existing inequalities, as reported by the National Consumer Law Center.
Louis described her frustration with the algorithm’s lack of context. “I knew my credit wasn’t good. But the AI doesn’t know my behavior – it knew I fell behind on paying my credit card but it didn’t know I always pay my rent,” she said to The Guardian.
The settlement, approved on November 20, is notable not only for its financial component but also for mandating operational changes.
SafeRent can no longer use a scoring system or recommend tenancy decisions for applicants using housing vouchers without independent validation by a third-party fair housing organization. Such adjustments are rare in settlements involving tech companies, which typically avoid altering core products, noted The Guardian.
“Removing the thumbs-up, thumbs-down determination really allows the tenant to say: ‘I’m a great tenant’,” said Todd Kaplan, an attorney representing the plaintiffs, as reported by The Guardian.
The case underscores growing concerns about the use of AI in foundational aspects of life, including housing, employment, and healthcare.
A 2024 Consumer Reports survey revealed widespread discomfort with algorithmic decision-making, particularly in high-stakes areas. Critics argue that these systems often rely on flawed statistical assumptions, leading to discriminatory outcomes.
Kevin de Liban, a legal expert on AI harms, noted that companies face little incentive to create equitable systems for low-income individuals. “The market forces don’t work when it comes to poor people,” he said, emphasizing the need for stronger regulations, as reported by The Guardian.
“To the extent that this is a landmark case, it has a potential to provide a roadmap for how to look at these cases and encourage other challenges,” Kaplan said, though experts caution that litigation alone cannot hold companies accountable, as reported by The Guardian.
For renters like Louis, however, the settlement represents a hard-fought victory, paving the way for fairer treatment of those reliant on housing assistance programs.

UK Considers New Legal Protections Against AI Cloning Of Celebrities
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
The UK government is exploring new legal measures to protect celebrities and public figures from having their likenesses mimicked by AI, as reported by Politico.
In a Rush? Here are the Quick Facts!
- Proposed changes could allow AI companies to use copyrighted works unless rights holders opt out.
- A new “personality right” could protect public figures from AI mimicking their likeness.
- The creative industry opposes changes, claiming they may undermine originality and copyright protection.
These potential changes come amid a larger debate over the country’s evolving copyright laws, which have drawn significant opposition from the creative industries.
Ministers are preparing to launch a consultation on proposed updates to the UK’s copyright regime. One of the key aspects of the consultation is a plan that would allow AI companies to use copyrighted works for training their models, unless the rights holders explicitly opt out, reports Politico.
This initiative is seen as a way to attract AI investment to the UK, but it has sparked a backlash from artists and other content creators who argue that it could harm their ability to control and profit from their intellectual property, as reported by Politico in an earlier article.
To address these concerns, the government is considering introducing a new “personality right.” This would grant individuals, particularly those who rely on their public image, additional legal protections against AI tools using their likenesses without permission, said Politico.
The new right would also aim to combat the growing threat of malicious deepfakes, the AI-generated images and videos that realistically but falsely depict real individuals. Such personality rights already exist in some other regions, including parts of the United States, says Politico.
However, the proposal is unlikely to ease the broader concerns of the creative sector, which argues that the government’s changes to copyright law could undermine originality.
Notable figures such as author Kate Mosse and musician Paul McCartney have expressed their opposition, warning that the plans could diminish the value of creative works, as reported by Politico.
Critics contend that the proposed “opt-out” model for AI training is unfair to content creators, who should be given more control over how their works are used. The sector argues that a system where rights holders must explicitly “opt in” to AI training would be fairer, noted Politico.
International industry bodies have also raised alarms. The Copyright Alliance, a US-based group, warned that any weakening of copyright protection in the UK could discourage both UK and US creators from investing in British creative industries, as reported by Politico.
Despite the criticism, the government insists that the consultation will explore a range of options. Culture Secretary Lisa Nandy emphasized that ministers have not yet decided on the final approach, acknowledging the need to balance the interests of both AI development and the creative sectors, as reported by Politico.
While the consultation will seek feedback on these issues, many questions remain, particularly about how rights holders would signal their opt-out and prevent AI companies from using their works.
Politico says that the industry group TechUK has stressed the need for a clear commercial licensing framework to ensure that AI development can proceed without stifling innovation in other sectors.