
Image by DC Studio, from Freepik
EU’s World-First Artificial Intelligence Act Takes Effect
- Written by Kiara Fabbri, Former Tech News Writer
Starting today, the European Union’s Artificial Intelligence Act (AI Act) comes into force, marking a turning point in how artificial intelligence (AI) technologies are regulated within the EU.
The European AI Act is the first comprehensive law regulating artificial intelligence. Its primary goal is to ensure that AI used in the EU is trustworthy, safe, and respects fundamental human rights. It also aims to create a favorable environment for AI innovation and investment within the EU.
The Act categorizes AI systems based on their risk level:
- Minimal risk: AI such as spam filters or recommendation systems poses little risk and is largely unregulated.
- Limited risk: AI such as chatbots must disclose that it is AI, and the use of deepfakes or biometric data must be labeled.
- High risk: AI used in critical areas such as recruitment, loan approval, or autonomous robots faces strict requirements, including data quality checks, human oversight, and cybersecurity measures.
- Unacceptable risk: AI used for manipulating human behavior, social scoring, or certain biometric applications is banned outright.
European Commission President Ursula von der Leyen stated, as reported by France 24: “With our artificial intelligence act, we create new guardrails not only to protect people and their interests, but also to give business and innovators clear rules and certainty.”
Several advisory bodies will be established to support the enforcement process. The European Artificial Intelligence Board will ensure consistent application of the AI Act across EU countries and facilitate cooperation. A scientific panel will provide expert advice, including warnings about potential risks in general-purpose AI. Additionally, a stakeholder forum will offer input on the Act’s implementation.
Companies that violate the AI Act face substantial fines, with the most severe penalties for banned AI applications.
The majority of the AI Act’s rules will come into full effect on August 2, 2026. However, the bans on unacceptable-risk AI practices will apply sooner. To prepare for full implementation, the EU Commission is encouraging voluntary adoption of the Act’s principles through the AI Pact.
Companies breaching the EU AI Act face fines of up to 35 million euros ($41 million) or 7% of global annual revenue, whichever is higher, for the most serious violations, and up to 7.5 million euros or 1.5% of global annual revenue for lesser breaches.
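To illustrate how the “whichever is higher” penalty cap works in practice, here is a minimal sketch (the function name and the example revenue figure are ours, for illustration only):

```python
def max_penalty_eur(global_annual_revenue_eur: float) -> float:
    """Upper bound of the fine for the most serious (banned-practice)
    violations: 35 million euros or 7% of global annual revenue,
    whichever is higher."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# A company with 1 billion euros in global annual revenue:
# 7% of revenue (70M euros) exceeds the 35M-euro floor.
print(max_penalty_eur(1_000_000_000))  # 70000000.0
```

The fixed amount acts as a floor, so small companies cannot escape with a trivially small percentage-based fine, while large companies face a penalty that scales with revenue.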
The AI Act is likely to serve as a model for other regions around the world as they develop their own AI regulations.

Image by firmbee, from Unsplash
Google Updates Policies to Combat Nonconsensual Explicit Deepfakes
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Yesterday, Google announced significant policy updates aimed at helping individuals control explicit “deepfakes”: AI-generated images and videos depicting people in sexually explicit contexts without their consent.
As one of the key updates, people can request the removal of non-consensual fake explicit imagery from Search. Google has made this process simpler and more effective. When a request is successful, Google’s systems will also filter all explicit results on similar searches and remove duplicates.
Additionally, to combat harmful content, Google has updated its ranking systems to prioritize high-quality information and push explicit fake content lower in search results. For searches that include a person’s name and specifically seek explicit content, high-quality, non-explicit content will be surfaced instead. Google says these changes have reduced exposure to explicit fake images on such searches by over 70%.
Google is also working to better distinguish real, consensual explicit content from fake explicit content. Sites with high volumes of removed fake explicit imagery will be demoted.
Supporting the urgency of these updates, an independent researcher recently shared a study with WIRED revealing the extensive reach of nonconsensual deepfake videos. The study found that at least 244,625 videos have been uploaded to the top 35 websites hosting deepfake porn over the past seven years. Notably, 113,000 of these videos were uploaded in the first nine months of 2023 alone, a 54% increase over the total uploaded in all of 2022, underscoring the growing prevalence of this harmful content.
A WIRED report in March revealed that Google received over 13,000 requests to remove links to explicit deepfakes from a dozen popular websites. The tech giant complied with around 82% of these requests, a statistic that demonstrates both the volume of harmful content and the demand for its removal.
Personal stories highlight the severe impact of deepfakes. Yesterday, The New York Times reported the story of Sabrina Javellana, a rising political star in Florida, who discovered deepfakes of herself circulating online. Facing ongoing harassment, Javellana stepped down from politics and chose a less public role.
In another case, the BBC reported this week that a family claims West Yorkshire Police failed to protect their 12-year-old daughter. Bullies had posted a deepfake explicit image of her on Snapchat. Initially, the police told the family that nothing could be done because Snapchat is US-based. The image traumatized the girl and spread widely.
Google’s policy updates are a significant step toward addressing the deepfake crisis. However, the issue’s complexity requires ongoing efforts and collaborations to fully tackle the problem.