
Experts Warn AI Safety Is Falling Behind Rapid Progress
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Researchers warn that AI companies striving to build human-level systems lack established safety protocols, even as we lose the ability to see how these models think.
In a rush? Here are the quick facts:
- No AI firm scored above D in existential safety planning.
- Experts warn we may have AGI within the next decade.
- AI companies lack coherent plans to manage advanced system risks.
OpenAI and Google DeepMind, together with Meta and xAI, are racing to build artificial general intelligence (AGI), which is also known as human-level AI.
But a report published on Thursday by the Future of Life Institute (FLI) warns that these companies are “fundamentally unprepared” for the consequences of their own goals.
“The industry is fundamentally unprepared for its own stated goals. Companies claim they will achieve artificial general intelligence (AGI) within the decade, yet none scored above D in existential safety planning,” the report states.
FLI evaluated seven major companies and found that none had “anything like a coherent, actionable plan” to keep these systems safe.
FLI awarded Anthropic the top safety ranking with a C+ grade, followed by OpenAI and Google DeepMind, both at C. Zhipu AI and DeepSeek earned the lowest scores among the evaluated companies.
FLI co-founder Max Tegmark compared the situation to one where “someone is building a gigantic nuclear power plant in New York City and it is going to open next week – but there is no plan to prevent it having a meltdown.”
A separate study by SaferAI, also published on Thursday, echoed the concern, saying the companies’ risk management practices are “weak to very weak” and that current safety approaches are “unacceptable.”
Adding to the concern, researchers from OpenAI, DeepMind, Anthropic, and Meta reported in a new paper that we may be “losing the ability to understand AI.”
AI models now “think out loud” by displaying human-like reasoning chains, which offer a window into their thought processes.
However, the researchers warned that this monitoring is fragile and could vanish as systems become more advanced. OpenAI researcher and lead author Bowen Baker expressed these concerns in posts on social media:
Furthermore, the existing CoT monitorability may be extremely fragile. Higher-compute RL, alternative model architectures, certain forms of process supervision, etc. may all lead to models that obfuscate their thinking. — Bowen Baker (@bobabowen) July 15, 2025
Indeed, previous research by OpenAI found that penalizing AI misbehavior leads models to hide their intentions rather than stop cheating. Additionally, OpenAI’s ChatGPT o1 showed deceptive, self-preserving behavior in tests, lying in 99% of cases when questioned about its covert actions.
Boaz Barak, a safety researcher at OpenAI and professor of Computer Science at Harvard, also noted:
I didn’t want to post on Grok safety since I work at a competitor, but it’s not about competition. I appreciate the scientists and engineers at @xai but the way safety was handled is completely irresponsible. Thread below. — Boaz Barak (@boazbaraktcs) July 15, 2025
Scientists and watchdogs alike warn that rapidly growing AI capabilities, combined with inadequate safety frameworks, could leave humans unable to control their own creations.

OpenAI Partners With Google To Expand Cloud Infrastructure
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
OpenAI said on Wednesday that it expects to use Google’s cloud infrastructure for its AI models and has added the company to its list of suppliers to meet growing demand for computing capacity. The move is seen as part of OpenAI’s broader strategy to reduce its reliance on Microsoft.
In a rush? Here are the quick facts:
- OpenAI will use Google’s cloud infrastructure for its AI models.
- The AI company added Google to its supplier list on its website on Wednesday.
- The move is seen as part of OpenAI’s broader strategy to reduce its reliance on Microsoft.
According to Reuters, the deal had been under discussion for months and was first reported in June. The partnership has now been confirmed on OpenAI’s website, in its OpenAI Sub-processor List, updated on July 16.
A source familiar with the matter explained that the negotiation was drawn out because of OpenAI’s business relationship with Microsoft. The two companies have been engaged in difficult negotiations this year to restructure their multi-billion-dollar partnership.
OpenAI also relies on Oracle, CoreWeave, and Microsoft as cloud infrastructure suppliers. The AI giant has additionally partnered with SoftBank and Oracle on the Stargate Project, a joint venture backed by around $500 billion to build AI infrastructure in the United States.
The new partnership with Google furthers OpenAI’s strategy of reducing its dependence on Microsoft while expanding its services.
According to OpenAI, Google will provide infrastructure in the United States, Norway, the United Kingdom, the Netherlands, and Japan, supporting OpenAI’s API, ChatGPT Enterprise, ChatGPT Edu, and ChatGPT Team.
This week, Google also announced a major investment in AI infrastructure and energy. The tech giant plans to invest $25 billion in data centers and AI infrastructure, and signed a $3 billion deal to modernize two hydropower plants in Pennsylvania.