
Image by wayhomestudio, from Freeik
Supporter AI Glitch Exposes The Risks of Replacing Workers With Automation
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
A support bot for AI startup Cursor made up a login policy, sparking confusion, user backlash, and raising serious concerns over automated service.
In a rush? Here are the quick facts:
- Users canceled subscriptions after misleading AI response.
- Cofounder confirmed it was an AI hallucination.
- AI-powered support systems save labor costs but risk damaging trust.
Anysphere, the AI startup behind the popular coding assistant Cursor, has hit a rough patch after its AI-powered support bot gave out false information, triggering user frustration and subscription cancellations, as first reported by Fortune .
Cursor, which launched in 2023, has seen explosive growth—reaching $100 million in annual revenue and attracting a near $10 billion valuation. But this week, its support system became the center of controversy when users were mysteriously logged out while switching devices.
A Hacker News user shared the strange experience, revealing that when they reached out to customer support, a bot named “Sam” responded with an email saying the logouts were part of a “new login policy.”
There was just one problem: that policy didn’t exist. The explanation was a hallucination—AI-speak for made-up information. No human was involved.
As news spread through the developer community, trust quickly eroded. Cofounder Michael Truell acknowledged the issue in a Reddit post, confirming it was an “incorrect response from a front-line AI support bot.” He also noted the team was investigating a bug causing the logouts, adding, “Apologies about the confusion here.”
But for many users, the damage was done. “Support gave the same canned, likely AI-generated response multiple times,” said Cursor user Melanie Warrick, co-founder of Fight Health Insurance. “I stopped using it—the agent wasn’t working, and chasing a fix was too disruptive.”
Experts say this serves as a red flag for overreliance on automation. “Customer support requires a level of empathy, nuance, and problem-solving that AI alone currently struggles to deliver,” warned Sanketh Balakrishna of Datadog.
Amiran Shachar, CEO of Upwind, said this mirrors past AI blunders, like Air Canada’s chatbot fabricating a refund policy. “AI doesn’t understand your users or how they work,” he explained. “Without the right constraints, it will ‘confidently’ fill in gaps with unsupported information.”
Security researchers are now warning that such incidents could open the door to more serious threats. A newly discovered vulnerability known as MINJA (Memory INJection Attack) demonstrates how AI chatbots with memory can be exploited through regular user interactions, essentially poisoning the AI’s internal knowledge.
MINJA allows malicious users to embed deceptive prompts that persist in the model’s memory, potentially influencing future conversations with other users. The attack bypasses backend access and safety filters, and in testing showed a 95% success rate.
“Any user can easily affect the task execution for any other user. Therefore, we say our attack is a practical threat to LLM agents,” said Zhen Xiang, assistant professor at the University of Georgia.
Yet despite these risks, enterprise trust in AI agents is on the rise. A recent survey of over 1,000 IT leaders found that 84% trust AI agents as much as or more than humans. With 92% expecting measurable business outcomes within 18 months, and 79% prioritizing agent deployment this year, the enterprise push is clear—even as privacy concerns and hallucination risks remain obstacles
While AI agents promise reduced labor costs, a single misstep can harm customer trust. “This is exactly the worst-case scenario,” one expert told Fortune.
The Cursor case is now a cautionary tale for startups: even the smartest bots can cause real damage if left unsupervised.

Image by Emiliano Vittoriosi, from Unsplash
Saying “Please” To AI Could Be Burning Millions in Energy
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
Experts say politeness to AI shapes its tone, while OpenAI’s Sam Altman admits the courtesy is costing millions in electricity.
In a rush? Here are the quick facts:
- Sam Altman says politeness to AI costs “tens of millions” in electricity.
- 12% use manners to appease AI in case of a future uprising.
- AI data centers already consume about 2% of global energy.
Saying “please” and “thank you” to AI chatbots might sound like harmless manners—or even pointless. But it’s actually costing tech giants millions, and some say it’s a habit worth keeping.
OpenAI CEO Sam Altman recently confirmed that courtesy isn’t free. Using “please” and “thank you” with AI chatbots appears to be nothing more than polite manners, yet, as reported in a recent article by Futurism , it might result in millions of dollars in losses for tech companies.
User @tomiinlove posted on X that he wondered about the electricity expenses OpenAI incurred from users thanking their models. To which OpenAI CEO Sam Altman replied with:
tens of millions of dollars well spent–you never know — Sam Altman (@sama) April 16, 2025
Microsoft’s Kurtis Beavers, a director on the design team for Microsoft Copilot, says in an interview with WorkLab, that while AI doesn’t have feelings, it responds more collaboratively when users set a respectful tone.
Because generative AI models are trained on human conversations, they reflect the politeness, professionalism, and clarity of your input. “It’s a conversation,” Beavers notes, and the user guides the tone. Saying “please” and “thank you” not only improves the chatbot’s responses.
Futurism reports that a survey from late 2024 revealed that 67% of U.S. users are polite to their chatbots. Of those, 55% said they do it because “it’s the right thing to do,” while 12% admitted it’s just in case of an AI uprising.
A Washington Post investigation together with researchers from the University of California studied the environmental impact of messages produced by AI. The energy consumption from sending one AI-assisted email weekly throughout a year equals 7.5 kWh which matches the power usage of nine Washington D.C. households during one hour.
Although AI etiquette may seem minor, it highlights a larger issue: our digital behavior has real-world energy consequences. Indeed notes that the data centers running these AI tools already use about 2% of global electricity—a figure expected to rise sharply as AI becomes more integrated into daily life.