
Photo by Tati Odintsova on Unsplash
Australia To Ban Social Media For Children Under 16 In World-Leading Initiative
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
In a Rush? Here are the Quick Facts!
- The legislation is expected to take effect by the end of next year
- The Australian government is testing an age-verification system to block anyone under 16 from social media platforms
- Instagram, Facebook, TikTok, X, and likely YouTube are among the platforms affected by the legislation
Australian Prime Minister Anthony Albanese today announced plans for legislation that would ban social media for children and teenagers under 16.
“Social media is doing harm to our kids, and I’m calling time on it,” said the Prime Minister during a morning press conference.
According to Reuters, Albanese called the measure a world-leading package and expects it to become law by the end of next year.
The government is testing an age-verification system to block children from social media platforms, one of the strictest measures of its kind anywhere in the world.
Once the system is deployed and the law takes effect in Australia next year, there will be no exemptions, even for parents who want to allow their children access to these platforms.
Experts have previously warned about the risks of this initiative. Australian professors, parents, and teenagers have expressed concerns about isolation, especially for minorities such as members of the LGBTQIA+ community and immigrants with family abroad, who build connections and relationships through social media platforms.
The Prime Minister highlighted the risks these platforms and their addictive algorithms pose to children's physical and mental health. Albanese mentioned how misogynistic content aimed at boys and harmful body-image messages aimed at girls could affect them during their development.
According to Al Jazeera, the initiative would involve multiple social media platforms. Michelle Rowland, Australia’s Minister for Communications, said that Meta’s Facebook and Instagram, TikTok, X, and likely YouTube are among the networks affected by the new measure.
Apple recently introduced a safety feature in iMessage that lets children under 13 report nude images sent to them.

Image by Freepik
Generative AI Sparks Alarm In Science As Fake Data Threatens Credibility
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
In a Rush? Here are the Quick Facts!
- Generative AI enables rapid creation of realistic yet fake scientific data and images.
- Researchers struggle to detect AI-generated images due to lack of obvious manipulation signs.
- AI-generated figures may already be in scientific journals.
AI-generated images are raising major concerns among researchers and publishers, as new generative AI tools make it alarmingly easy to create fake scientific data and images, as reported by Nature.
This advancement threatens the credibility of academic literature, with experts fearing a surge in AI-driven, fabricated studies that may be difficult to identify.
Jana Christopher, an image-integrity analyst at FEBS Press in Germany, emphasizes that the rapid evolution of generative AI is fueling growing concern about its potential for misuse in science.
“The people that work in my field — image integrity and publication ethics — are getting increasingly worried about the possibilities that it offers,” Christopher said, as reported by Nature.
She notes that, while some journals may accept AI-generated text under certain guidelines, images and data generated by AI are seen as crossing a line that could deeply impact research integrity, as noted by Nature.
Detecting these AI-created images has become a primary challenge, says Nature. Unlike previous digital manipulations, AI-generated images often lack the usual signs of forgery, making it hard to prove any deception.
Image-forensics specialist Elisabeth Bik and other researchers suggest that AI-produced figures, particularly in molecular and cell biology, could already be present in published literature, as reported by Nature.
Tools such as ChatGPT are now regularly used for drafting papers, identifiable by typical chatbot phrases left unedited, but AI-generated images are far harder to spot. Responding to these challenges, technology companies and research institutions are developing detection tools, noted Nature.
AI-powered tools like Imagetwin and Proofig are leading the charge by training their algorithms to identify generative AI content. Proofig’s co-founder Dror Kolodkin-Gal reports that their tool successfully detects AI images 98% of the time, but notes that human verification remains crucial to validate results, according to Nature.
In the publishing world, journals like Science use Proofig for initial scans of submissions, and publishing giant Springer Nature is developing proprietary tools, Geppetto and SnapShot, for identifying irregularities in text and images, as reported by Nature.
Other organizations, such as the International Association of Scientific, Technical and Medical Publishers, are also launching initiatives to combat paper mills and ensure research integrity, as reported by Nature.
However, experts warn that publishers must act quickly. Scientific-image sleuth Kevin Patrick worries that, if action lags, AI-generated content could become yet another unresolved problem in scholarly literature, as reported by Nature.
Despite these concerns, many remain hopeful that future technology will evolve to detect today’s AI-generated deceptions, offering a long-term solution to safeguard academic research integrity.