Image by Freepik

Loopholes In EU AI Bans Could Allow Police To Use Controversial Tech

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

The EU’s new AI Act aims to regulate AI but faces criticism over loopholes, exemptions, and corporate influence.

In a Rush? Here are the Quick Facts!

  • Critics argue the law has loopholes, especially for law enforcement and migration authorities.
  • Exemptions allow AI practices like real-time facial recognition and emotion detection in some cases.
  • Digital rights groups warn the law’s exceptions weaken protections against misuse.

The European Union’s new AI Act marks a significant step in regulating artificial intelligence. The world-first legislation bans certain “unacceptable” uses of AI technology, aiming to protect citizens and preserve democratic values.

Among the prohibitions are predictive policing, scraping facial images from the internet for recognition, and using AI to detect emotions from biometric data. However, critics argue that the law contains significant loopholes, particularly when it comes to policing and migration authorities.

While the AI Act bans certain AI uses in principle, it includes exemptions that could allow European police and migration authorities to continue utilizing controversial AI practices, as first reported by Politico.

For instance, real-time facial recognition in public spaces, although largely banned, can still be allowed in exceptional cases, such as serious criminal investigations.

Similarly, the detection of emotions in public settings is prohibited, but exceptions could be made for law enforcement and migration purposes, raising concerns about the potential use of AI to identify deception at borders.

The law’s broad exemptions have raised alarm among digital rights groups. A coalition of 22 organizations warned that the AI Act fails to adequately address concerns regarding law enforcement’s use of the technology.

“The most glaring loophole is the fact that the bans do not apply to law enforcement and migrational authorities,” said Caterina Rodelli, EU policy analyst at Access Now, as reported by Politico.

The EU’s AI Act also bans the use of AI for societal control, a measure introduced to prevent AI from being used to undermine individual freedoms or democracy.

Brando Benifei, an Italian lawmaker involved in drafting the legislation, explained that the goal is to avoid AI technologies being exploited for “societal control” or the “compression of our freedoms,” as reported by Politico.

According to Politico, this stance was influenced by high-profile incidents like the Dutch tax authorities’ controversial use of AI to detect fraud in 2019, which wrongfully accused some 26,000 people.

In that case, the authorities used an algorithm to flag potential childcare benefits fraud, but its flaws led to widespread misidentifications and damaged innocent citizens’ lives.

The controversy surrounding this event played a major role in shaping the law’s restrictions on predictive policing and other forms of AI misuse.

Meanwhile, a report by Corporate Europe Observatory (CEO) raises concerns about the influence of Big Tech companies on the development of EU AI standards. The report reveals that over half of the members of the Joint Technical Committee on AI (JTC21), responsible for setting AI standards, represent corporate or consultancy interests.

This corporate influence has raised alarms that the EU’s AI Act could be undermined by industry interests that prioritize profitability over ethical considerations. Additionally, civil society and academic representatives face financial and logistical challenges in participating in the standard-setting process.

The report highlights the lack of transparency and democratic accountability within standard-setting organizations like CEN and CENELEC, sparking concerns about the fairness and inclusivity of the standards development process.

While the AI Act puts the EU at the forefront of global AI regulation, the ongoing debate over its loopholes suggests that balancing innovation with safeguarding human rights will be a delicate task moving forward.

Image by Mika Baumeister, from Unsplash

Google Play’s Security: 2.36 Million Apps Blocked And 158,000 Developer Accounts Banned

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Google Play’s 2024 security update emphasizes the importance of AI-driven threat detection, developer compliance, and multi-layered protections to maintain a secure platform for users and developers alike.

In a Rush? Here are the Quick Facts!

  • Over 1.3 million apps were restricted from accessing excessive user data in 2024.
  • Google Play Protect scanned 200 billion apps daily to defend against mobile threats.
  • Enhanced fraud protection blocked over 36 million risky installation attempts globally.

One of the key developments in Google Play’s 2024 security efforts was the expanded use of AI to improve threat detection. AI now plays a central role in identifying harmful apps and streamlining the review process for developers.

Over 92% of human reviews for harmful apps are now AI-assisted, allowing for quicker identification and removal of apps that violate security standards.

In 2024, this approach helped prevent 2.36 million policy-violating apps from being published on Google Play and resulted in the banning of over 158,000 developer accounts attempting to distribute malicious apps.

Google also took steps to address privacy concerns, working with developers to limit unnecessary access to sensitive user data. In 2024, the company prevented 1.3 million apps from accessing excessive data and required greater transparency around data handling practices.

This included introducing new developer guidelines and a “Data Deletion” option, which gives users more control over the data apps collect and retain.

To further bolster security, Google introduced tools to help developers protect their apps from fraud and abuse.

The Play Integrity API, for instance, allows developers to detect and prevent tampering and fraud within their apps, while the Google Play SDK Index provides insight into the security of third-party software development kits (SDKs) used by developers.
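In practice, the on-device half of that check is a single token request that the app’s backend later decodes and verifies with Google Play. The Kotlin sketch below illustrates roughly how a developer might call the Play Integrity API for this; the wrapper function, its callback parameters, and the server-generated nonce handling are assumptions for illustration rather than details from Google’s report.

```kotlin
import android.content.Context
import com.google.android.play.core.integrity.IntegrityManagerFactory
import com.google.android.play.core.integrity.IntegrityTokenRequest

// Hypothetical wrapper: asks Google Play for an integrity token that a backend
// can later decode to check whether the app binary and device look genuine.
fun requestIntegrityVerdict(
    context: Context,
    nonce: String,                 // server-generated, single-use value to prevent replay
    onToken: (String) -> Unit,     // e.g. forward the opaque token to your backend
    onError: (Exception) -> Unit   // e.g. fall back to additional fraud checks
) {
    val integrityManager = IntegrityManagerFactory.create(context)

    integrityManager
        .requestIntegrityToken(
            IntegrityTokenRequest.builder()
                .setNonce(nonce)
                .build()
        )
        .addOnSuccessListener { response -> onToken(response.token()) }
        .addOnFailureListener { error -> onError(error) }
}
```

The token itself is opaque on the device; the verdict about tampering or fraud only becomes readable after the backend decodes it through Google Play, which is what makes the check difficult to spoof from a modified app.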

In addition to these measures, Google Play’s security protections, including Google Play Protect, continued to evolve. Play Protect scans apps for malicious behavior, not only within the Play Store but also on apps installed from external sources.

In 2024, Google Play Protect identified over 13 million new malicious apps from sources outside of the Play Store.

Google also worked with governments and industry partners to address broader security concerns. New fraud protection pilots, launched in nine countries, helped protect millions of devices from risky installation attempts.

Moreover, Google collaborated with organizations such as Microsoft and Meta to develop new security standards aimed at strengthening mobile app security across the industry.

Despite these advancements, Google acknowledges that maintaining a safe environment requires constant vigilance and collaboration with a range of stakeholders to adapt to new and evolving security challenges.