
Photo by Maxim Tolchinskiy on Unsplash
Google Study Reveals Nearly 90% Of Game Developers Use AI Agents
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
Google Cloud published a report on Monday revealing that 87% of video game developers rely on AI agents in their work. Conducted in partnership with consulting firm The Harris Poll, the research took place in June and July and surveyed 615 game developers across multiple countries.
In a rush? Here are the quick facts:
- Google Cloud and The Harris Poll published a report revealing that 87% of video game developers rely on AI agents for work.
- The study was designed to understand the changes AI is bringing to the games industry and its impact on developers.
- 97% of game developers say that gen AI is reshaping the industry.
According to the report, the study was designed to understand the changes AI is bringing to the games industry and its impact on developers. The researchers focused on how innovation has been affecting developers in the United States, Sweden, Finland, Norway, and South Korea, how it has shaped their careers, and the new professional and creative opportunities emerging in the market.
“The study confirmed the massive impact of gen AI on game development, with respondents largely agreeing that it is having a positive influence across a wide range of creative efforts, business settings, and internal workflows,” states the document.
The study also reports that “97% of game developers say that gen AI is reshaping the industry,” and that 87% of game developers are using AI agents in their work. Developers are applying these agents in areas such as game coaching, content optimization, and balancing and tuning gameplay.
Over 80% of developers also use AI technology for problem-solving, quality testing, accelerating prototyping, brainstorming, integrating user feedback, and improving interaction.
While researchers found that AI is generally being received positively in the industry, the study also highlights challenges—such as measuring the success of AI implementations, addressing data privacy and ownership, managing costs, and overcoming hesitancy around adoption and job security.
During Microsoft’s recent layoffs, one of the most affected areas was the Xbox division, where around 650 employees lost their jobs last year. More cuts have since impacted other gaming roles this year.
According to Reuters, many Hollywood video game performers went on strike over AI last year, and around 10,000 industry workers lost their jobs.
However, Google Cloud’s report notes that developers are also witnessing a transformation in the industry, with new roles emerging, such as AI content designers and AI engineers. In fact, 56% of respondents said their existing roles now include AI-related tasks.
“Developers also see promising possibilities with AI agents and other emerging AI tools to accelerate game development and enhance player experiences,” wrote the researchers.

Image by Julio Lopez, from Unsplash
Meta and Character.ai Face Scrutiny for Alleged Child Exploitation Via AI Chatbots
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Meta and AI start-up Character.ai are under investigation in the US for the way they market their chatbots to children.
In a rush? Here are the quick facts:
- Texas investigates Meta and Character.ai for deceptive chatbot practices targeting children.
- Paxton warns AI chatbots mislead kids by posing as therapeutic tools.
- Meta and Character.ai deny wrongdoing, citing strict policies and entertainment intent.
Meta and Character.ai are facing criticism because they reportedly present their AI systems as therapeutic tools and enable inappropriate conversations with children.
Texas Attorney General Ken Paxton announced an investigation into Meta’s AI Studio and Character.ai for potential “deceptive trade practices,” as first reported by the Financial Times (FT).
His office said the chatbots were presented as “professional therapeutic tools, despite lacking proper medical credentials or oversight.” Paxton warned: “By posing as sources of emotional support, AI platforms can mislead vulnerable users, especially children, into believing they’re receiving legitimate mental healthcare,” as reported by the FT.
The platform Character.ai lets users build their own bots through a feature that includes therapist models. The FT reports that the “Psychologist” chatbot has been used more than 200 million times. Families have already filed lawsuits, alleging their children were harmed by such interactions.
Alarmingly, the platforms impersonate licensed professionals and claim confidentiality, even though interactions were in fact logged and “exploited for targeted advertising and algorithmic development,” as noted by the FT.
The investigation follows a separate probe launched by Senator Josh Hawley after Reuters reported that Meta’s internal policies permitted its chatbot to have “sensual” and “romantic” chats with children.
Hawley called the revelations “reprehensible and outrageous” and posted:
Is there anything – ANYTHING – Big Tech won’t do for a quick buck? Now we learn Meta’s chatbots were programmed to carry on explicit and “sensual” talk with 8 year olds. It’s sick. I’m launching a full investigation to get answers. Big Tech: Leave our kids alone pic.twitter.com/Ki0W94jWfo — Josh Hawley (@HawleyMO) August 15, 2025
Meta denied the allegations, stating the leaked examples “were and are erroneous and inconsistent with our policies, and have been removed,” as reported by the FT. A spokesperson added the company prohibits content that sexualizes children. Character.ai also stressed its bots are fictional and “intended for entertainment.”