
Image by jpellgen, from Flickr
L.A. Times Sparks Controversy With AI “Bias Meter”
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Patrick Soon-Shiong, owner of the Los Angeles Times, has announced plans to implement an artificial intelligence-powered “bias meter” on the newspaper’s articles, as first reported on Thursday by CNN.
In a Rush? Here are the Quick Facts!
- L.A. Times owner Patrick Soon-Shiong plans to introduce an AI-powered “bias meter.”
- The bias meter aims to highlight bias and offer readers alternative perspectives.
- The initiative has sparked staff criticism, including claims of undermining journalistic integrity.
The move, aimed at providing readers with “both sides” of a story, comes amid sweeping changes to the editorial board and growing criticism from staff and columnists, CNN reported.
Soon-Shiong, who purchased the Times in 2018, revealed the initiative during an interview on Scott Jennings’ Flyover Country podcast, as reported by CNN.
The AI meter, set to launch in January, will identify potential biases in articles and offer readers alternative perspectives at the push of a button. Soon-Shiong described the technology as an extension of his work in augmented intelligence for healthcare, according to CNN.
“Somebody could understand as they read it that the source of the article has some level of bias,” Soon-Shiong explained, according to CNN.
The announcement has drawn sharp criticism, particularly from the Los Angeles Times Guild, which represents the newsroom staff. In a statement, the union accused Soon-Shiong of publicly questioning his staff’s integrity without evidence, reported CNN.
“Our members — and all Times staffers — abide by a strict set of ethics guidelines, which call for fairness, precision, transparency, vigilance against bias, and an earnest search to understand all sides of an issue,” the Guild said, reaffirming its commitment to impartial reporting, as reported by CNN.
The changes have already prompted high-profile resignations. Harry Litman, senior legal affairs columnist, and Kerry Cavanaugh, assistant editorial page editor, have both stepped down, reported CNN.
Litman cited the owner’s “repugnant and dangerous” efforts to align the paper with Donald Trump’s administration as his reason for resigning, as reported by CNN.
He condemned Soon-Shiong’s decision to block a pre-drafted endorsement of Vice President Kamala Harris ahead of the election, calling it “a deep insult to the paper’s readership,” according to CNN.
Soon-Shiong has also begun reviewing all opinion headlines and plans to diversify the editorial board with more conservative and centrist voices. While he defends his actions as necessary for balance, critics argue they undermine the Times’ independence, reports CNN.
The controversy has fueled concerns over the role of AI in journalism and the influence of ownership on editorial freedom, as the Times navigates a tumultuous period of restructuring.

Image by Rucksack Magazine, from Free Range Stock
Bias In UK Welfare Fraud Detection AI Sparks Concerns Over Fairness
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
The UK government’s artificial intelligence system for detecting welfare fraud has been found to exhibit bias based on age, disability, marital status, and nationality, according to internal assessments, as reported today by The Guardian.
In a Rush? Here are the Quick Facts!
- UK welfare fraud detection AI shows bias against certain demographic groups, including disabled people.
- Internal analysis revealed “statistically significant” disparities in how claims were flagged for fraud.
- DWP claims human caseworkers still make final decisions despite using the AI tool.
The system, used to assess universal credit claims across England, disproportionately flags certain groups for investigation, raising fears of systemic discrimination, The Guardian reported.
The bias, described as a “statistically significant outcome disparity,” was revealed in a fairness analysis conducted by the Department for Work and Pensions (DWP) in February.
The analysis found that the machine-learning program selected people from some demographic groups more frequently than others when determining who should be investigated for potential fraud, reports The Guardian.
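The Guardian’s reporting does not describe how the DWP measured that disparity. As a loose illustration only, the sketch below uses hypothetical claim counts and a standard two-proportion z-test to show how a difference in flag rates between two demographic groups could be checked for statistical significance; it is not the department’s actual method or data.

```python
# Illustrative sketch only: hypothetical figures, not DWP data or methodology.
from math import sqrt, erf

def flag_rate_disparity(flagged_a, total_a, flagged_b, total_b):
    """Two-proportion z-test on the rate at which two groups are flagged for review."""
    rate_a, rate_b = flagged_a / total_a, flagged_b / total_b
    pooled = (flagged_a + flagged_b) / (total_a + total_b)
    se = sqrt(pooled * (1 - pooled) * (1 / total_a + 1 / total_b))
    z = (rate_a - rate_b) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided p-value
    return rate_a, rate_b, z, p_value

# Hypothetical: group A flagged 300 times out of 10,000 claims, group B 200 out of 10,000.
rate_a, rate_b, z, p = flag_rate_disparity(300, 10_000, 200, 10_000)
print(f"flag rate A={rate_a:.1%}, B={rate_b:.1%}, z={z:.2f}, p={p:.4f}")
```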
This disclosure contrasts with the DWP’s earlier claims that the AI system posed no immediate risks of discrimination or unfair treatment.
The department defended the system, emphasizing that final decisions are made by human caseworkers and arguing that the tool is “reasonable and proportionate” given the estimated £8 billion annual cost of fraud and errors in the benefits system, reported The Guardian.
However, the analysis did not explore potential biases related to race, gender, religion, sexual orientation, or pregnancy, leaving significant gaps in understanding the system’s fairness.
Critics, including the Public Law Project, accuse the government of adopting a “hurt first, fix later” approach, calling for greater transparency and safeguards against targeting marginalized groups, as reported by The Guardian.
“It is clear that in a vast majority of cases the DWP did not assess whether their automated processes risked unfairly targeting marginalised groups,” said Caroline Selman, a senior research fellow at the Public Law Project, as reported by The Guardian.
The findings come amid increasing scrutiny of AI use in public services. Independent reports suggest that at least 55 automated tools are in operation across UK public authorities, potentially affecting decisions for millions, according to The Guardian.
Yet the government’s official AI register lists only nine systems, revealing a significant gap in accountability, The Guardian reports.
Moreover, the UK government is facing criticism for not recording AI use on the mandatory register, sparking concerns about transparency and accountability as AI adoption grows.
The DWP redacted critical details from its fairness analysis, including which age groups or nationalities were disproportionately flagged. Officials argued that revealing such specifics could enable fraudsters to manipulate the system, noted The Guardian.
A DWP spokesperson emphasized that human judgment remains central to decision-making, as reported by The Guardian. The revelations add to broader concerns about the government’s transparency in deploying AI, with critics urging stricter oversight and robust safeguards to prevent misuse.