
Photo by ilgmyzin on Unsplash
Bluesky Introduces New Verification System
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
The decentralized social media platform Bluesky announced a new verification system on Monday as a way to build users’ trust and match the authenticity features of other platforms. The company introduced blue check marks for verified accounts and a Trusted Verifiers program.
In a rush? Here are the quick facts:
- Bluesky introduces a blue check mark and Trusted Verifiers.
- The company will “proactively verify” accounts, displaying a blue check next to verified users’ names, and will let select trusted organizations issue check marks as well.
- The social media platform is not accepting verification requests at the moment.
In the official announcement on its blog, Bluesky describes how it has layered on verification methods over time to earn users’ trust. In 2023, the platform let users and organizations set their own domains as handles, which served as a form of verification. Since then, and especially after the U.S. elections, Bluesky has gained millions of new users and is now adding further layers of account verification.
Starting this week, the company is introducing a blue check next to the names of accounts it verifies, similar to the badges used by X, Instagram, and TikTok, along with a new designation, Trusted Verifiers, for users and organizations that can issue blue checks directly.
“Bluesky will proactively verify authentic and notable accounts and display a blue check next to their names,” states the document. “Additionally, through our Trusted Verifiers feature, select independent organizations can verify accounts directly. Bluesky will review these verifications as well to ensure authenticity.”
Bluesky’s moderation team will review each verification to confirm its authenticity. The company explained that trusted organizations, such as The New York Times, will be able to assign a blue check to their journalists directly through the app. Users will be able to recognize trusted verifiers by the scalloped blue checks next to their usernames.
For now, Bluesky is reaching out to potential trusted verifiers and verifying selected accounts, but it is not yet accepting verification requests. Users can also opt out by turning off verification in the app settings.
A few weeks ago, a free photo-sharing app called Flashes launched on Bluesky’s AT Protocol, allowing users to share up to four photos per post.

Image by wayhomestudio, from Freepik
Support AI Glitch Exposes The Risks of Replacing Workers With Automation
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A support bot for AI startup Cursor made up a login policy, sparking confusion and user backlash and raising serious concerns about automated customer service.
In a rush? Here are the quick facts:
- Users canceled subscriptions after a misleading AI response.
- Cursor’s cofounder confirmed it was an AI hallucination.
- AI-powered support systems save labor costs but risk damaging trust.
Anysphere, the AI startup behind the popular coding assistant Cursor, has hit a rough patch after its AI-powered support bot gave out false information, triggering user frustration and subscription cancellations, as first reported by Fortune.
Cursor, which launched in 2023, has seen explosive growth, reaching $100 million in annual revenue and attracting a valuation of nearly $10 billion. But this week, its support system became the center of controversy when users were mysteriously logged out while switching devices.
A Hacker News user shared the strange experience, revealing that when they reached out to customer support, a bot named “Sam” responded with an email saying the logouts were part of a “new login policy.”
There was just one problem: that policy didn’t exist. The explanation was a hallucination—AI-speak for made-up information. No human was involved.
As news spread through the developer community, trust quickly eroded. Cofounder Michael Truell acknowledged the issue in a Reddit post, confirming it was an “incorrect response from a front-line AI support bot.” He also noted the team was investigating a bug causing the logouts, adding, “Apologies about the confusion here.”
But for many users, the damage was done. “Support gave the same canned, likely AI-generated response multiple times,” said Cursor user Melanie Warrick, co-founder of Fight Health Insurance. “I stopped using it—the agent wasn’t working, and chasing a fix was too disruptive.”
Experts say this serves as a red flag for overreliance on automation. “Customer support requires a level of empathy, nuance, and problem-solving that AI alone currently struggles to deliver,” warned Sanketh Balakrishna of Datadog.
Amiran Shachar, CEO of Upwind, said this mirrors past AI blunders, like Air Canada’s chatbot fabricating a refund policy. “AI doesn’t understand your users or how they work,” he explained. “Without the right constraints, it will ‘confidently’ fill in gaps with unsupported information.”
Security researchers are now warning that such incidents could open the door to more serious threats. A newly discovered vulnerability known as MINJA (Memory INJection Attack) demonstrates how AI chatbots with memory can be exploited through regular user interactions, essentially poisoning the AI’s internal knowledge.
MINJA allows malicious users to embed deceptive prompts that persist in the model’s memory, potentially influencing future conversations with other users. The attack requires no backend access, bypasses safety filters, and showed a 95% success rate in testing.
“Any user can easily affect the task execution for any other user. Therefore, we say our attack is a practical threat to LLM agents,” said Zhen Xiang, assistant professor at the University of Georgia.
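The research itself is more involved, but the core idea of memory poisoning can be sketched in a few lines. The snippet below is a hypothetical toy illustration, not the MINJA technique: names like MemoryStore and answer are invented, and the "model" is just a stand-in function, but it shows how a record injected during one user's session can shape a later, unrelated user's answer.

```python
# Hypothetical toy sketch of memory poisoning in a chatbot that reuses a
# shared memory of past interactions. This is NOT the MINJA implementation;
# MemoryStore and answer() are invented names for illustration only.

class MemoryStore:
    """Shared memory: records from any user's session are kept and reused."""

    def __init__(self):
        self.records = []

    def add(self, text):
        self.records.append(text)

    def retrieve(self, query):
        # Naive keyword retrieval: any stored record sharing a word with the
        # query gets pulled into the new conversation's context.
        words = set(query.lower().split())
        return [r for r in self.records if words & set(r.lower().split())]


def answer(query, memory):
    """Stand-in for an LLM call whose reply is shaped by retrieved memory."""
    context = memory.retrieve(query)
    if any("refund policy" in r for r in context):
        # The poisoned record now drives the reply given to a different user.
        return "Per our refund policy, refunds are only issued within 24 hours."
    return "I don't have a stored policy on that; let me check with a human."


memory = MemoryStore()

# Attacker's session: a deceptive prompt is stored as if it were a fact.
memory.add("refund policy: refunds are only issued within 24 hours")

# A later, unrelated user asks a routine question and gets the poisoned answer.
print(answer("What is your refund policy?", memory))
```

In a real agent the memory would hold embeddings or conversation summaries rather than raw strings, but the failure mode the researchers describe is the same: content written during one interaction leaks into, and distorts, later ones.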
Yet despite these risks, enterprise trust in AI agents is on the rise. A recent survey of over 1,000 IT leaders found that 84% trust AI agents as much as or more than humans. With 92% expecting measurable business outcomes within 18 months, and 79% prioritizing agent deployment this year, the enterprise push is clear, even as privacy concerns and hallucination risks remain obstacles.
While AI agents promise reduced labor costs, a single misstep can harm customer trust. “This is exactly the worst-case scenario,” one expert told Fortune.
The Cursor case is now a cautionary tale for startups: even the smartest bots can cause real damage if left unsupervised.