
Controversial AI Paper Withdrawn After MIT Investigation
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
In a rush? Here are the quick facts:
- MIT disavowed a widely circulated AI research paper by a former student.
- The paper claimed AI boosted lab discoveries but lowered scientists’ satisfaction.
- MIT cited lack of confidence in the paper’s data and conclusions.
The paper had gained wide attention for claiming that using an AI tool in a materials science lab resulted in more discoveries but also made scientists feel less satisfied with their work.
MIT released a statement Friday saying it “has no confidence in the provenance, reliability or validity of the data and has no confidence in the veracity of the research contained in the paper.” The university did not name the student, citing privacy laws, but the author has been identified by The Wall Street Journal (WSJ) as Aidan Toner-Rodgers. He is no longer at MIT.
Toner-Rodgers presented the paper, titled “Artificial Intelligence, Scientific Discovery, and Product Innovation,” at a major economics conference and posted it online last year. It was praised at the time by MIT economists Daron Acemoğlu, who won the 2024 Nobel Prize, and David Autor, who said he was “floored” by the findings, as previously reported by the WSJ.
But in January, a computer scientist questioned the lab’s existence and how the AI tool worked. Unable to resolve the doubts, Acemoğlu and Autor alerted MIT, which then conducted a confidential review, as reported by the WSJ.
Following that, the university requested that the paper be removed from both the academic journal where it had been submitted and from the public preprint site arXiv. The WSJ reported that MIT refused to specify what the paper’s errors were, and said it based this decision on “student privacy laws and MIT policy.”
MIT emphasized that protecting the integrity of research is vital, saying the paper “should be withdrawn from public discourse” to avoid spreading incorrect claims about AI’s impact.
The incident has heightened existing worries about the application of generative AI in scientific research. The increasing adoption of ChatGPT and similar tools in academic work has led experts to warn about the rising danger of AI-generated content.
Specifically, AI-generated text, data, and images can be fabricated without leaving detectable signs of manipulation, making fraudulent work difficult to identify. Researchers believe that AI-generated content may already be entering journals without detection, threatening the trustworthiness of scientific literature.

Spotify Hosted Fake Podcasts Selling Prescription Drugs
Spotify is facing backlash after hundreds of fake podcasts promoted illegal drug sales, raising concerns about AI abuse and lax platform moderation.
In a rush? Here are the quick facts:
- Spotify hosted fake podcasts selling prescription drugs without prescriptions.
- Some podcasts promoted opioids like Oxycodone and Vicodin.
- Over 200 pages were removed after media exposure.
Spotify is facing criticism for hosting numerous fake podcasts that advertised prescription drugs without medical authorization.
Business Insider (BI) and CNN investigations revealed that these so-called podcasts were little more than 10-second audio clips, some entirely silent, potentially serving as fronts for illegal online pharmacies.
The content blatantly broke Spotify’s own rules and, in some cases, violated federal law. One podcast claimed, “Buy tramadol online in just one click […] without a prescription with legal delivery in the USA,” as reported by BI. CNN notes that others had titles like “My Adderall Store” and promoted opioids like Oxycodone and Vicodin.
The platform took down more than 200 of these pages following Business Insider’s reporting and user reports on social media. Even after those removals, CNN found additional podcasts still live. A Spotify spokesperson told CNN and BI that the company works continuously to identify and remove violating content across its service.
Experts say the rise of AI tools has made it easier than ever to create these scam podcasts. Many used robotic voices or just cover art with clickable links, bypassing Spotify’s automated moderation systems. According to CNN, some had been active for months before removal.
Katie Paul from the Tech Transparency Project warned that “most platforms lack accountability for user-generated content like these fake podcasts,” adding that voice-based content is especially hard to moderate.
The fake shows directed users to websites that promised to deliver Adderall and Methadone without prescriptions, though neither BI nor CNN could verify that purchases actually went through. Under U.S. law, prescription medications may be dispensed only by authorized medical practitioners.
The incident has sparked new demands for stronger platform monitoring, as families and officials, alarmed by recent teen overdose deaths from internet-bought pills, worry about the growing threat of online counterfeit drug sales.