
Photo by Kai Wenzel on Unsplash
Google Invests An Additional $750 Million In Anthropic To Enhance AI Capabilities
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Sarah Frazier Former Content Manager
A New York Times report revealed that the tech giant Google owns 14% of the startup Anthropic and has invested over $3 billion in the company—and plans to invest an additional $750 million through convertible debt in September.
In a Rush? Here are the Quick Facts!
- Google has invested over $3 billion in AI startup Anthropic and plans an additional $750 million investment through convertible debt in September.
- The U.S. Department of Justice is scrutinizing Google’s investments to assess potential antitrust law violations.
- Legal filings accessed by The New York Times reveal Google’s strategies to maintain its market dominance amid ongoing antitrust investigations.
Google has been developing its own generative AI technologies while also investing in promising AI companies such as Anthropic to stay ahead in the AI race . However, the company is under investigation by the U.S. Department of Justice (DOJ) for alleged antitrust violations and has been required to submit documentation and files for the case.
The New York Times got access to legal filings , revealing the giant’s strategies to gain leverage in the market. Google’s investments have been under evaluation to determine whether the company has been using its power to take advantage of the market.
“A big company like Google knows that there is a race to A.I., and it has a big enough cash pile that it can bet on multiple horses,” said Chris V. Nicholson, an investor with the venture capital firm Page One Ventures who focuses on A.I. technologies, to the New York Times.
While Google doesn’t have much control over Anthropic—and cannot surpass a 15% ownership—it is still willing to keep investing. According to the documents reviewed by the journal, both companies have made multiple agreements, including a 2023 agreement of a $2 billion investment.

Image by Mika Baumeister, from Unsplash
AI Chatbots Vulnerable To Memory Injection Attack
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
Researchers have discovered a new way to manipulate AI chatbots, raising concerns about the security of AI models with memory.
In a Rush? Here are the Quick Facts!
- Researchers from three universities developed MINJA, showing its high success in deception.
- The attack alters chatbot responses, affecting product recommendations and medical information.
- MINJA bypasses safety measures, achieving a 95% Injection Success Rate in tests.
The attack, called MINJA (Memory INJection Attack), can be carried out by simply interacting with an AI system like a regular user, without needing access to its backend, as first reported by The Register .
Developed by researchers from Michigan State University, the University of Georgia, and Singapore Management University, MINJA works by poisoning an AI’s memory through misleading prompts. Once a chatbot stores these deceptive inputs, they can alter future responses for other users.
“Nowadays, AI agents typically incorporate a memory bank which stores task queries and executions based on human feedback for future reference,” explained Zhen Xiang, an assistant professor at the University of Georgia, as reported by The Register.
“For example, after each session of ChatGPT, the user can optionally give a positive or negative rating. And this rating can help ChatGPT to decide whether or not the session information will be incorporated into their memory or database,” he added.
The researchers tested the attack on AI models powered by OpenAI’s GPT-4 and GPT-4o, including a web-shopping assistant, a healthcare chatbot, and a question-answering agent.
The Register reports that they found that MINJA could cause serious disruptions. In a healthcare chatbot, for instance, it altered patient records, associating one patient’s data with another. In an online store, it tricked the AI into showing customers the wrong products.
“In contrast, our work shows that the attack can be launched by just interacting with the agent like a regular user,” Xiang said, reports The Register. “Any user can easily affect the task execution for any other user. Therefore, we say our attack is a practical threat to LLM agents,” he added.
The attack is particularly concerning because it bypasses existing AI safety measures. The researchers reported a 95% success rate in injecting misleading information, making it a serious vulnerability for AI developers to address.
As AI models with memory become more common, the study highlights the need for stronger safeguards to prevent malicious actors from manipulating chatbots and misleading users.