Image by Freepik

Generative AI Sparks Alarm In Science As Fake Data Threatens Credibility

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

In a Rush? Here are the Quick Facts!

  • Generative AI enables rapid creation of realistic yet fake scientific data and images.
  • Researchers struggle to detect AI-generated images due to lack of obvious manipulation signs.
  • AI-generated figures may already be in scientific journals.

AI-generated images are raising major concerns among researchers and publishers, as new generative AI tools make it alarmingly easy to create fake scientific data and images, as reported by Nature.

This advancement threatens the credibility of academic literature, with experts fearing a surge in AI-driven, fabricated studies that may be difficult to identify.

Jana Christopher, an image-integrity analyst at FEBS Press in Germany, emphasizes that the rapid evolution of generative AI is raising growing concerns about its potential for misuse in science.

“The people that work in my field — image integrity and publication ethics — are getting increasingly worried about the possibilities that it offers,” Christopher said, as reported by Nature.

She notes that, while some journals may accept AI-generated text under certain guidelines, images and data generated by AI are seen as crossing a line that could deeply impact research integrity, as noted by Nature.

Detecting these AI-created images has become a primary challenge, says Nature. Unlike previous digital manipulations, AI-generated images often lack the usual signs of forgery, making it hard to prove any deception.

Image-forensics specialist Elisabeth Bik and other researchers suggest that AI-produced figures, particularly in molecular and cell biology, could already be present in published literature, as reported by Nature.

Tools such as ChatGPT are now regularly used for drafting papers, and are often identifiable by typical chatbot phrases left unedited, but AI-generated images are far harder to spot. Responding to these challenges, technology companies and research institutions are developing detection tools, noted Nature.

AI-powered tools like Imagetwin and Proofig are leading the charge by training their algorithms to identify generative AI content. Proofig’s co-founder Dror Kolodkin-Gal reports that their tool successfully detects AI images 98% of the time, but he notes that human verification remains crucial to validate results, said Nature.

In the publishing world, journals like Science use Proofig for initial scans of submissions, and publishing giant Springer Nature is developing proprietary tools, Geppetto and SnapShot, for identifying irregularities in text and images, as reported by Nature.

Other organizations, such as the International Association of Scientific, Technical and Medical Publishers, are also launching initiatives to combat paper mills and ensure research integrity, as reported by Nature.

However, experts warn that publishers must act quickly. Scientific-image sleuth Kevin Patrick worries that, if action lags, AI-generated content could become yet another unresolved problem in scholarly literature, as reported by Nature.

Despite these concerns, many remain hopeful that future technology will evolve to detect today’s AI-generated deceptions, offering a long-term solution to safeguard academic research integrity.

Image by Jay Rogers, from Flickr

Apple Faces First EU Fine Under Digital Markets Act Over App Store Practices

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

In a Rush? Here are the Quick Facts!

  • The fine targets Apple’s failure to allow alternative payment options outside its App Store.
  • The penalty follows a €1.8 billion fine in the Spotify case earlier this year.
  • The EU’s DMA aims to prevent anti-competitive behavior before it harms markets.

Apple Inc. is set to face a significant fine under the European Union’s Digital Markets Act (DMA) for anticompetitive practices related to its App Store, as first reported today by Bloomberg. The fine marks the first enforcement of the new rules targeting big tech companies accused of monopolistic behavior.

The European Commission is preparing the penalty after Apple allegedly failed to allow app developers to direct users to alternative, cheaper deals outside the App Store, as noted by Bloomberg. Reuters reports that sources suggest the fine is expected to be issued this month, though the timing could change.

This move comes after a similar €1.8 billion fine was imposed on Apple earlier this year for blocking Spotify from promoting cheaper subscriptions outside of Apple’s platform, said Bloomberg.

The DMA is designed to prevent anti-competitive behavior before it can harm the market. The penalty, expected to be issued soon, could be followed by additional periodic fines if Apple fails to comply with the new regulations, reported Bloomberg.

Under the DMA, regulators can fine tech giants up to 10% of their global annual sales, with higher penalties for repeated violations. Additionally, the EU has forced Apple to allow third parties to access iPhones’ payment chips, opening up competition to Apple Pay, noted Bloomberg.

Apple has not commented on the potential fine, and the European Commission declined to provide further details, as reported by Bloomberg.