
Image by Emiliano Vittoriosi, from Unsplash

Ex-OpenAI Researcher And Whistleblower Found Dead

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

A former OpenAI researcher turned whistleblower, Suchir Balaji, 26, was found dead in a San Francisco apartment, authorities confirmed, as first reported by The Mercury News.

In a Rush? Here are the Quick Facts!

  • Former OpenAI researcher Suchir Balaji was found dead in a San Francisco apartment.
  • Balaji’s death on November 26 was ruled a suicide with no signs of foul play.
  • Balaji publicly criticized OpenAI’s practices, including its data-gathering methods, before his death.

Police discovered Balaji’s body on November 26 after receiving a welfare check request. The San Francisco medical examiner’s office ruled the death a suicide, and investigators found no signs of foul play, the BBC reported.

In the months leading up to his death, Balaji had publicly criticized OpenAI’s practices. The company is currently facing multiple lawsuits over its data-gathering methods.

I recently participated in a NYT story about fair use and generative AI, and why I’m skeptical “fair use” would be a plausible defense for a lot of generative AI products. I also wrote a blog post (https://t.co/xhiVyCk2Vk) about the nitty-gritty details of fair use and why I… — Suchir Balaji (@suchirbalaji) October 23, 2024

In a recent interview with the New York Times, Mr. Balaji said he saw the threats posed by AI as immediate and significant. He argued that ChatGPT and similar chatbots are undermining the commercial viability of individuals, businesses, and internet services that originally created the digital data used to train these systems.

OpenAI, Microsoft, and other companies maintain that training their AI systems on internet data falls under the “fair use” doctrine.

The doctrine weighs four factors, and these companies assert they meet the criteria, arguing that their systems significantly transform copyrighted works and do not compete directly in the same market as those works.

Mr. Balaji disagreed. He contended that systems like GPT-4 make complete copies of their training data. While companies like OpenAI can program these systems to either replicate that data or produce entirely new outputs, the reality, he said, lies somewhere in between, as reported by The Times.

Mr. Balaji published an essay on his personal website, offering what he describes as a mathematical analysis to support this claim. “If you believe what I believe, you have to just leave the company,” he said, as reported by The Times.

According to Mr. Balaji, the technology violates copyright law because it often directly competes with the works it was trained on. Generative models, designed to mimic online data, can substitute for nearly anything on the internet, from news articles to online forums, reported The Times.

Balaji’s death occurred just one day after a court filing identified him as a person whose professional files OpenAI would review in connection with a lawsuit filed by several authors against the startup, noted Forbes.

Beyond legal concerns, Mr. Balaji warned that AI technologies are degrading the internet. As these tools replace existing services, they often generate false or entirely fabricated information — a phenomenon researchers call “hallucinations.” He believed this shift is changing the internet for the worse, reported The Times.

Bradley J. Hulbert, an intellectual property lawyer, noted that current copyright laws were established long before the advent of AI and that no court has yet ruled on whether technologies like ChatGPT violate these laws, as reported by The Times.

He emphasized the need for legislative action. “Given that A.I. is evolving so quickly,” he said, “it is time for Congress to step in.” Mr. Balaji concurred, stating, “The only way out of all this is regulation,” reported The Times.


Image by Khay Edwards, from Unsplash

Landmark Case Challenges AI In Housing

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

In a landmark case highlighting the potential harms of AI decision-making in housing, SafeRent, an AI-powered tenant screening company, has agreed to a $2.3 million settlement and to halt its scoring system.

In a Rush? Here are the Quick Facts!

  • SafeRent rejected Mary Louis’s rental application despite a strong reference from her landlord.
  • A lawsuit alleged SafeRent’s scoring discriminated against Black and Hispanic renters using vouchers.
  • Federal agencies are monitoring the case as AI regulation in housing remains limited.

The lawsuit, brought by tenants Mary Louis and Monica Douglas, alleged that the algorithm used by SafeRent disproportionately discriminated against Black and Hispanic renters who relied on housing vouchers, violating the Fair Housing Act, as first reported by The Guardian.

Mary Louis, a security guard in Massachusetts, was among over 400 renters affected by SafeRent’s controversial system. Despite receiving a glowing reference from her landlord of 17 years and using a low-income housing voucher guaranteeing partial rent payment, her application was rejected.

The rejection came after SafeRent assigned her a score of 324, far below the management company’s minimum requirement of 443. No explanation for the score or appeal process was provided, as reported by The Guardian.

The lawsuit, filed in 2022, accused SafeRent of using an opaque scoring system that factored in irrelevant financial data, such as credit card debt, while ignoring the guaranteed payments provided by housing vouchers, said The Guardian.

Studies show that Black and Hispanic renters are more likely to have lower credit scores and rely on vouchers than white applicants, exacerbating existing inequalities, as reported by the National Consumer Law Center.

Louis described her frustration with the algorithm’s lack of context. “I knew my credit wasn’t good. But the AI doesn’t know my behavior – it knew I fell behind on paying my credit card but it didn’t know I always pay my rent,” she said to The Guardian.

The settlement, approved on November 20, is notable not only for its financial component but also for mandating operational changes.

SafeRent can no longer use a scoring system or recommend tenancy decisions for applicants using housing vouchers without independent validation by a third-party fair housing organization. Such adjustments are rare in settlements involving tech companies, which typically avoid altering core products, noted The Guardian.

“Removing the thumbs-up, thumbs-down determination really allows the tenant to say: ‘I’m a great tenant’,” said Todd Kaplan, an attorney representing the plaintiffs, as reported by The Guardian.

The case underscores growing concerns about the use of AI in foundational aspects of life, including housing, employment, and healthcare.

A 2024 Consumer Reports survey revealed widespread discomfort with algorithmic decision-making, particularly in high-stakes areas. Critics argue that these systems often rely on flawed statistical assumptions, leading to discriminatory outcomes.

Kevin de Liban, a legal expert on AI harms, noted that companies face little incentive to create equitable systems for low-income individuals. “The market forces don’t work when it comes to poor people,” he said, emphasizing the need for stronger regulations, as reported by The Guardian.

“To the extent that this is a landmark case, it has a potential to provide a roadmap for how to look at these cases and encourage other challenges,” Kaplan said, though experts caution that litigation alone cannot hold companies accountable, as reported by The Guardian.

For renters like Louis, however, the settlement represents a hard-fought victory, paving the way for fairer treatment of those reliant on housing assistance programs.