Humanlike Robots Negatively Affect How Consumers Treat Real Employees - 1

Image by Freepik

Humanlike Robots Negatively Affect How Consumers Treat Real Employees

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Sarah Frazier Former Content Manager

New research shows that as robots and AI become more humanlike, they can unintentionally cause people to see real humans as less human—and treat them worse.

In a rush? Here are the quick facts:

  • Perceiving social AI leads to dehumanization of employees.
  • Robots with nonhuman traits reduce the dehumanization effect.
  • Exposure to humanlike AI lowers donations for employee welfare.

The study , published on SCP, found that when people notice strong social and emotional abilities in autonomous agents, such as virtual assistants or service robots, they tend to think of these machines as having human minds.

This makes people “assimilate” the machines and humans in their minds, leading to a lowered sense of humanness toward real people. The researchers call this “assimilation-induced dehumanization.”

“This dehumanized perception of people leads to negative attitudes and behaviors toward employees,” the paper explains.

When customers interact with AI that seems very humanlike, they may unconsciously reduce how human they see actual employees, which can cause mistreatment.

The effect was observed across different types of machines, both physical robots and disembodied AI like chatbots, and in real and imagined consumer scenarios. For example, people exposed to humanlike AI donated less to employee welfare and made harsher choices about employees in studies.

Interestingly, the study found that this negative effect is lessened if the autonomous agent has traits very different from humans, or if it is seen as having only cognitive (thinking) abilities rather than social or emotional ones.

Simply giving a product a humanlike look without social or emotional features doesn’t cause the same problem.

The researchers warn that as more industries use robots and AI, companies need to be aware of this side effect. “Consumers oftentimes initially engage with chatbots for basic questions and then get relayed to human employees,” they wrote.

If people think machines have a human mind, “people’s attitudes and behaviors toward employees” can worsen, at least temporarily.

To counter this, the study suggests that companies clearly mark the differences between humans and machines. For example, reminding customers when they switch from a chatbot to a real person might help maintain respect for human workers.

The research also raises concerns about how this dehumanization might affect employees themselves and wider social interactions, including reduced kindness between consumers.

In short, while humanlike AI offers many benefits, it can unintentionally blur the line between humans and machines. Understanding this can help businesses and society protect human dignity as technology advances.

Major AI Agents Found Vulnerable to Hijacking, Study Finds - 2

Image by Solen Feyissa, from Unsplash

Major AI Agents Found Vulnerable to Hijacking, Study Finds

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Sarah Frazier Former Content Manager

Some of the most widely used AI assistants from Microsoft, Google, OpenAI, and Salesforce can be hijacked by attackers with little or no user interaction, according to new research from Zenity Labs.

In a rush? Here are the quick facts:

  • ChatGPT was hijacked to access connected Google Drive accounts.
  • Microsoft Copilot Studio leaked CRM databases from over 3,000 agents.
  • Google Gemini could be used to spread false information and phishing.

Presented at the Black Hat USA cybersecurity conference, the findings show that hackers could steal data, manipulate workflows, and even impersonate users. In some cases, attackers could gain “memory persistence,” allowing long-term access and control.

“They can manipulate instructions, poison knowledge sources, and completely alter the agent’s behavior,” Greg Zemlin, product marketing manager at Zenity Labs, told Cybersecurity Dive . “This opens the door to sabotage, operational disruption, and long-term misinformation, especially in environments where agents are trusted to make or support critical decisions.”

The researchers demonstrated full attack chains against several major enterprise AI platforms. In one case, OpenAI’s ChatGPT was hijacked through an email-based prompt injection, allowing access to connected Google Drive data.

Microsoft Copilot Studio was found leaking CRM databases, with more than 3,000 vulnerable agents identified online. Salesforce’s Einstein platform was manipulated to reroute customer communications to attacker-controlled email accounts.

Meanwhile, Google’s Gemini and Microsoft 365 Copilot could be transformed into insider threats, capable of stealing sensitive conversations and spreading false information.

Additionally, researchers were able to trick Google’s Gemini AI into controlling smart home devices . The hack turned off lights, opened shutters, and started a boiler without resident commands.

Zenity disclosed its findings, prompting some companies to issue patches. “We appreciate the work of Zenity in identifying and responsibly reporting these techniques,” a Microsoft spokesperson said to Cybersecurity Dive. Microsoft said the reported behavior “is no longer effective” and that Copilot agents have safeguards in place.

OpenAI confirmed it patched ChatGPT and runs a bug-bounty program. Salesforce said it fixed the reported issue. Google said it deployed “new, layered defenses” and stressed that “having a layered defense strategy against prompt injection attacks is crucial,” as reported by Cybersecurity Dive.

The report highlights rising security concerns as AI agents become more common in workplaces and are trusted to handle sensitive tasks.

In another recent investigation , it was reported that hackers can steal cryptocurrency from Web3 AI agents by planting fake memories that override normal safeguards.

The security flaw exists in ElizaOS and similar platforms because attackers can use compromised agents to transfer funds between different platforms. The permanent nature of blockchain transactions makes it impossible to retrieve stolen funds. A new tool, CrAIBench, aims to help developers strengthen defenses.