OpenAI Whistleblower’s Death Sparks Controversy: Family Demands Answers As Police Probe Nears End

Image by ishmael daro, from Flickr


  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

The death of former OpenAI researcher and whistleblower Suchir Balaji continues to stir controversy, with his family pushing for transparency as the San Francisco Police Department (SFPD) prepares to release its final report by the end of February.

In a Rush? Here are the Quick Facts!

  • His family disputes the suicide ruling, citing anomalies in an independent autopsy.
  • Balaji had accused OpenAI of copyright violations, becoming a key witness in a lawsuit.
  • The SFPD investigation remains open, with a final report expected by end of February.

Balaji, a 26-year-old software engineer who publicly criticized OpenAI’s alleged copyright violations, was found dead in his Hayes Valley apartment on November 26, 2024. Authorities initially ruled his death a suicide, but his family and independent experts have raised serious doubts.

Balaji’s parents, Poornima Ramarao and Balaji Ramamurthy, have been vocal in their quest for answers, as reported in a detailed article by Fortune. On January 31, they filed a lawsuit against the SFPD, demanding the release of the full investigative report into their son’s death.

Ramarao has also taken her concerns public, appearing on The Tucker Carlson Show in January and launching a social media campaign that has garnered millions of views.

Her posts, which claim her son was “murdered,” have drawn attention from high-profile figures, including Elon Musk, who tweeted on December 29, 2024, “This doesn’t seem like a suicide.”

The SFPD has maintained that no evidence of foul play was found during the initial investigation, as stated in a February 7 update. However, the case remains open, and the Office of the Chief Medical Examiner (OCME) has declined to comment, as reported by Fortune.

Meanwhile, an independent autopsy conducted by forensic pathologist Dr. Joseph Cohen in December revealed anomalies, including an “atypical” bullet trajectory and a contusion on the back of Balaji’s head, raising questions about the suicide ruling, as reported by Fortune.

Balaji, a former OpenAI researcher who helped develop the GPT-4 model, had become a whistleblower months before his death. In October 2024, he publicly accused OpenAI of copyright violations and was later named a key witness in The New York Times’ landmark copyright lawsuit against the company.

His death has fueled widespread speculation and conspiracy theories, particularly within the tech community, where concerns about AI ethics and corporate power run high.

As the SFPD’s final report looms, Balaji’s family and friends remain in limbo. “We will take this to the public,” Ramarao told Fortune. “We will be taking it everywhere. We will even send it to President Trump.”

For now, the world waits for answers, hoping the February report will bring clarity to a tragedy that has become a flashpoint in the debate over AI’s future.


Meta Emails Reveal Torrenting Of Pirated Books For AI Training

Image by Nokia621, from Wiki Commons


  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Newly unsealed emails have surfaced in an ongoing copyright lawsuit against Meta, providing what book authors are calling the “most damning evidence” yet, as first reported by Ars Technica.

In a Rush? Here are the Quick Facts!

  • Meta torrented 81.7 terabytes of pirated books from shadow libraries like LibGen and Z-Library.
  • Internal emails show Meta employees raised legal concerns about torrenting and seeding copyrighted material.
  • Meta allegedly concealed torrenting by avoiding Facebook servers and minimizing seeding activity.

Ars Technica reports that the authors allege Meta illegally trained its AI models on pirated books, and that the emails reveal internal concerns about the legality of torrenting and seeding copyrighted material.

Last month, Meta admitted to torrenting a controversial dataset known as LibGen, which contains tens of millions of pirated books.

However, details remained unclear until the unredacted emails were made public.

According to the authors’ court filing, Meta torrented “at least 81.7 terabytes of data across multiple shadow libraries through the site Anna’s Archive, including at least 35.7 terabytes of data from Z-Library and LibGen.” Additionally, “Meta also previously torrented 80.6 terabytes of data from LibGen.”

“The magnitude of Meta’s unlawful torrenting scheme is astonishing,” the authors’ filing stated, noting that even “vastly smaller acts of data piracy—just .008 percent of the amount of copyrighted works Meta pirated—have resulted in Judges referring the conduct to the US Attorneys’ office for criminal investigation.”

Ars Technica notes that the emails also reveal internal unease among Meta employees. In April 2023, research engineer Nikolay Bashlykov wrote, “Torrenting from a corporate laptop doesn’t feel right,” adding a smiley emoji.

He expressed concern about using Meta IP addresses “to load through torrents pirate content.” By September 2023, Bashlykov had dropped the humor, consulting Meta’s legal team and warning that “using torrents would entail ‘seeding’ the files—i.e., sharing the content outside, this could be legally not OK.”

Despite these warnings, authors allege that Meta continued torrenting and seeding pirated content, even attempting to conceal its activities.

Ars Technica reports that internal messages show that Meta avoided using Facebook servers to download the dataset to “avoid” the “risk” of anyone “tracing back the seeder/downloader,” as described by researcher Frank Zhang.

Michael Clark, a Meta executive, also admitted in a deposition that settings were modified “so that the smallest amount of seeding possible could occur.”

The authors now argue that Meta staff involved in the torrenting decision must be deposed again, as the new evidence allegedly “contradicts prior deposition testimony.”

For instance, while CEO Mark Zuckerberg claimed no involvement in using LibGen for AI training, unredacted messages suggest the “decision to use LibGen occurred” after “a prior escalation to MZ.”

Ars Technica reports that Meta has maintained that its AI training on LibGen constitutes “fair use” and denied any unlawful distribution of the authors’ works. However, the torrenting revelations have complicated its defense, allowing authors to expand their claims of direct copyright infringement.

As the case proceeds, Meta faces mounting scrutiny over its handling of copyrighted material, with the authors determined to hold the tech giant accountable for what they describe as a “massive unlawful torrenting scheme.”