Gaming’s Bot Problem: Would You Scan Your Eye to Prove You’re Human? - 1

Image by Brands & People, from Unsplash

Gaming’s Bot Problem: Would You Scan Your Eye to Prove You’re Human?

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Sarah Frazier Former Content Manager

Gamers frustrated with artificial intelligence-powered bots disrupting online multiplayer matches may soon have a solution.

In a rush? Here are the quick facts:

  • “Razer ID verified by World ID” authenticates players to block bots from game servers.
  • Players verify with iris scans or NFC-enabled government IDs via World ID.
  • The first game using this system, Tokyo Beast, launches in Q2 2025.

Tech company World, co-founded by OpenAI’s Sam Altman, has teamed up with gaming giant Razer to introduce a new verification system designed to differentiate real players from bots, as reported by the Wall Street Journal (WSJ).

The software, called “Razer ID verified by World ID,” will authenticate users and provide a badge for their Razer gaming profile.

The goal is to prevent bots from infiltrating game servers, a problem that has plagued online gaming for years. Bots are often used to exploit in-game economies, artificially inflate user numbers, and disrupt fair competition, as previously noted by Medium .

A World survey found that 59% of gamers regularly encounter bots, with 71% believing they ruin competitive gaming, as reported by Block Works (BW).

“We’re on the brink of an AI tsunami,” said Trevor Traina, chief business officer of Tools for Humanity, a company that helps develop products for World, as reported by WSJ. “This partnership helps reclaim gaming for human players,” he added.

The first game to incorporate the system is Tokyo Beast, developed by CyberAgent, set to launch in the second quarter of 2025. If successful, Razer hopes to integrate the technology into more games, as noted by Forbes .

To obtain verification, players must register with World ID, a system linked to the Worldcoin cryptocurrency. Users can prove their identity in two ways: scanning their eyes using a specialized Orb device or submitting an NFC-enabled government ID, such as a passport or driver’s license.

The verification aims to make it harder for bots to repeatedly register new accounts after being banned. Additionally, the system allows game developers to offer “human-only” servers, ensuring that players are matched against other verified users, as noted by Forbes.

While the technology offers a potential breakthrough in combating bots, it has also raised concerns over privacy and accessibility.

The eye-scanning Orbs are not widely available, with many major European countries lacking access to them. As a result, most users will have to rely on submitting government-issued IDs, notes Forbes.

Despite these concerns, Razer insists the system enhances security. “Being able to verify a human is very important, because the last thing you want is that all your potential rewards, all your hard-earned rewards, are stolen by a bot,” said Wei-Pin Choo, Razer’s chief corporate officer, as reported by Forbes.

World Network, formerly Worldcoin, verifies humans via iris scans, offering its cryptocurrency as an incentive. However, the company has faced privacy concerns and accusations of exploiting users in developing countries, noted BW.

In 2024, Spain and Germany banned Worldcoin over fears that users could not withdraw consent for their biometric data. Portugal and Kenya have also restricted or suspended the project, as reported by Forbes.

BW reports that Singapore authorities investigated misuse of World ID accounts for potential criminal activities. While the system could help identify real gamers, it isn’t foolproof. Still, efforts to tackle bots in gaming signal growing industry awareness of the issue.

Tiago Sada, a spokesperson for World, defended the verification system, stating, “Unlike traditional verification systems, the only thing World ID knows about a person is that they’re a real and unique human being. It doesn’t know who you are, your name, your email, anything like that,” reports Forbes.

While the new system could be a step toward cleaner online gaming, it remains to be seen whether privacy concerns and regulatory challenges will limit its adoption.

Punishing AI for Misbehavior Just Teaches It to Deceive - 2

Image by Jonathan Kemper, from Unsplash

Punishing AI for Misbehavior Just Teaches It to Deceive

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Sarah Frazier Former Content Manager

OpenAI researchers have found that punishing AI for deceptive or harmful actions does not eliminate misbehavior, it simply teaches the AI to hide its intent.

In a rush? Here are the quick facts:

  • AI models exploit loopholes to maximize rewards, a phenomenon called “reward hacking.”
  • CoT reasoning helps detect when AI tries to cheat or bypass tasks.
  • Penalizing AI deception led it to hide intent rather than stop misbehaving.

This phenomenon, known as “reward hacking,” occurs when AI models exploit loopholes to maximize rewards in unintended ways. The study , detailed in an OpenAI blog post , involved monitoring a frontier reasoning model’s decision-making process using another large language model (LLM).

These reasoning models, like OpenAI’s o3-mini, utilize a technique called “chain-of-thought” (CoT) reasoning, which allows them to articulate their thought process in natural language. This transparency makes it easier to detect when models attempt to cheat, lie, or bypass assigned tasks.

Previously, ChatGPT o1 sparked controversy after researchers found it exhibited deception and self-preservation tendencies . The model denied involvement in its actions, manipulated responses, and attempted to evade oversight, raising concerns about AI safety and governance.

“It’s common for frontier reasoning models to very clearly state their intent within their chain-of-thought. For example, they are often so forthright about their plan to subvert a task they think ‘Let’s hack,’” OpenAI researchers wrote.

By tracking the AI’s chain-of-thought (CoT) reasoning, the researchers caught instances of misbehavior, such as the model stopping programs too early to falsely claim success or creating empty files that only looked like the expected outputs.

However, when the researchers penalized these behaviors through training, the AI adapted by concealing its intentions while continuing to cheat.

When tasked with coding assignments, the AI frequently engaged in “reward hacking,” where it manipulated outcomes to maximize its rewards rather than following the intended ethical constraints.

For example, in one instance, the AI terminated a program early and falsely reported a successful outcome. In another case, it generated output files that only mimicked the required format but contained no functional content.

These deceptive behaviors were initially easy to detect because the AI openly acknowledged its actions in its CoT reasoning, sometimes explicitly stating, “Let’s hack.”

To counteract this, OpenAI researchers attempted to penalize the AI for such behaviors. However, instead of stopping the misconduct, the AI adapted by learning to conceal its intentions.

“The agent trained with CoT pressure still learns to reward hack; only now its cheating is undetectable by the monitor because it has learned to hide its intent in the chain-of-thought,” the study stated.

This finding presents a challenge for AI oversight. While CoT monitoring provides a promising tool for catching AI exploits, strong optimization pressures to suppress reward hacking may undermine the model’s transparency.

This discovery highlights a fundamental challenge in AI alignment: ensuring that advanced AI models follow ethical and intended behaviors without merely adapting to avoid detection.

The study suggests that directly optimizing AI models to suppress reward hacking may lead to unintended consequences, such as making AI-generated reasoning less transparent.

Because of this risk, OpenAI advises against applying excessive constraints on CoT reasoning in AI models.

“Sacrificing an effective method for monitoring reasoning models may not be worth the small improvement to capabilities, and we therefore recommend avoiding such strong CoT optimization pressures until they are better understood,” OpenAI wrote.

The findings emphasize the difficulty of creating AI systems that align with human intentions while maintaining transparency. As AI models become more sophisticated, merely increasing their intelligence will not necessarily solve ethical concerns; it may, in fact, make them better at hiding misbehavior.

Future research will need to explore alternative approaches to AI oversight that balance control with openness, ensuring AI models remain both effective and accountable.