
Photo by Alexander Shatov on Unsplash
Discord Tests Face Scans And ID Requests For Age Verification
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
The social platform Discord is testing a new age verification system in Australia and the United Kingdom, scanning users’ faces and requesting official government identification to access certain content.
In a rush? Here are the quick facts:
- Discord is testing a new age verification system in Australia and the UK.
- Users are required to provide an official government ID or accept a face scan across all devices.
- The company hasn’t disclosed whether it will expand the feature to other countries or how long the requirement will remain in place.
According to GameSpot, the verification process on the platform, which is popular among gamers, covers every device, including PlayStation 5 and Xbox. The company hasn’t disclosed how long this method will be tested or whether it will be expanded to other countries.
Users trying to access content that Discord’s filters have flagged as “harmful” or sensitive receive a pop-up request to verify their age with an official ID, such as a driver’s license, or a face scan. Within a few minutes of submitting the information, the user receives a message assigning them to an age group.
Once Discord assigns a user to an age group, it doesn’t request verification again unless the user asks for re-verification, for example after moving into an older age group.
Discord published a “How to verify your Age Group” guide in its support section with more details about the update.
“Some Discord settings and content are designed for certain age groups,” states the document. “We’re experimenting with a streamlined way for you to verify your age group when you try to access those settings or content.”
Users found to be below the minimum required age could have their accounts banned, but they have the right to appeal if they consider the decision unfair.
Age verification for social media platforms has been the subject of heated debate in recent months. Australia recently became the first country in the world to officially ban social media for children under 16, but a recent study revealed that children can easily bypass the age verification systems used by popular platforms such as Facebook, Instagram, TikTok, Snap, Reddit, and Discord. The UK, meanwhile, has been investigating social media platforms over children’s data privacy and recently became the first country to criminalize AI-generated child abuse content.
Discord’s new update could set an example for other platforms and mark a shift toward an industry that complies more closely with government requirements for child protection.

Image by Sangharsh Lohakare, from Unsplash
DNA Sequencing May Become Prime Target For Hackers
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Next-generation DNA sequencing is revolutionizing medicine—but a new study warns it’s increasingly vulnerable to cyberattacks that could compromise health and privacy.
In a rush? Here are the quick facts:
- Genomic data can be de-anonymized through public genealogy databases.
- AI tools used in genetic analysis are vulnerable to adversarial attacks.
- Attacks on genomic data could cause misdiagnoses or flawed treatments.
Next-generation sequencing (NGS) has revolutionized genomics by enabling rapid, cost-effective analysis of DNA and RNA.
Its applications span personalized medicine, cancer diagnostics, and forensic science, with millions of genomes already sequenced and projections estimating that 60 million people will have undergone genomic analysis by the end of 2025.
Yet as NGS adoption accelerates, so do the associated cyber-biosecurity risks. A recent study published in IEEE Access highlights the growing threats throughout the NGS workflow—from raw data generation to analysis and reporting—and stresses the urgency of securing sensitive genetic information.
Despite the transformative potential of NGS, the rapid expansion of genomic data has exposed serious vulnerabilities.
Genomic datasets can reveal a person’s disease predispositions, ancestral background, and family relationships. This has made them attractive targets for cybercriminals, who exploit vulnerabilities in sequencing software, data-sharing protocols, and cloud infrastructure.
The research analyzed security threats across the entire sequencing process. During data generation, for example, researchers found that malicious code can be encoded into synthetic DNA; when a sequencer processes that DNA, the resulting malware can compromise the software systems that control it.
The researchers also point to a privacy risk from “re-identification attacks,” in which attackers link anonymized genetic data to public genealogy databases to reveal individuals’ identities.
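The study itself does not publish attack code; as a purely illustrative sketch (all names, fields, and records below are invented), a re-identification attack amounts to a record-linkage join between an “anonymized” dataset and a public one that share quasi-identifiers:

```python
# Toy illustration of re-identification by record linkage.
# All data is fabricated; real attacks use richer quasi-identifiers
# (e.g. surnames inferred from Y-chromosome markers plus age and region).

anonymized_genomes = [
    # (record_id, birth_year, zip_code, inferred_surname)
    ("G-001", 1984, "90210", "Miller"),
    ("G-002", 1991, "10001", "Chen"),
]

public_genealogy = [
    # (full_name, birth_year, zip_code, surname)
    ("Alice Miller", 1984, "90210", "Miller"),
    ("Bob Chen", 1991, "10001", "Chen"),
]

def reidentify(genomes, genealogy):
    """Link anonymized genomic records to named individuals by matching
    quasi-identifiers (birth year, ZIP code, inferred surname)."""
    matches = {}
    for rec_id, byear, zcode, surname in genomes:
        for name, g_byear, g_zcode, g_surname in genealogy:
            if (byear, zcode, surname) == (g_byear, g_zcode, g_surname):
                matches[rec_id] = name
    return matches

print(reidentify(anonymized_genomes, public_genealogy))
# {'G-001': 'Alice Miller', 'G-002': 'Bob Chen'}
```

The point is that “anonymized” genomic records carry enough correlated attributes that a simple join against public data can undo the anonymization.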
Furthermore, the hardware and software used in sequencing are vulnerable—if the equipment or updates are compromised, hackers can gain unauthorized access.
During quality checks and data preparation, attackers might tamper with the data, producing incorrect analysis results. Ransomware is another threat: cybercriminals can lock important files and demand payment to unlock them.
During analysis, threats can target the cloud platforms and AI tools used for genomic work. DDoS attacks could disrupt analysis systems, while insiders with access to the data might leak or manipulate it.
The researchers say that even AI tools like DeepVariant, used to analyze genetic variations, can be tricked by adversarial inputs, leading to wrong conclusions about genetic data.
In the final stage, attackers could inject false information into clinical reports, potentially leading to wrong diagnoses or poor treatment decisions.
The researchers stress that these risks are real. The recent cyberattack on Synnovis, which handles blood testing for NHS England, exposed sensitive patient data, as reported by the BBC. Other attacks, on companies including 23andMe and Octapharma Plasma, have disrupted research and put patient information at risk.
The study concludes by emphasizing the need for closer collaboration between cybersecurity experts, bioinformaticians, and policymakers to build a secure framework for genomic data.