Image by Ilias Chebbi, from Unsplash

AI Outperforms Humans In Emotional Intelligence Tests

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

AI beats humans in emotional intelligence tests, showing promise for education and conflict resolution.

In a rush? Here are the quick facts:

  • AIs scored 82% on emotional tests, outperforming humans at 56%.
  • Researchers tested six large language models, including ChatGPT-4.
  • Emotional intelligence tests used real-life, emotionally charged scenarios.

Artificial intelligence (AI) may now understand emotions better than we do, according to a new study by the University of Geneva and the University of Bern.

Researchers tested six generative AIs, including ChatGPT, on emotional intelligence (EI) assessments normally used with humans. On average, the AIs scored 82%, well above the 56% achieved by human participants.

“We chose five tests commonly used in both research and corporate settings. They involved emotionally charged scenarios designed to assess the ability to understand, regulate, and manage emotions,” said Katja Schlegel, lead author of the study and a psychology lecturer at the University of Bern, as reported by Science Daily (SD).

“These AIs not only understand emotions, but also grasp what it means to behave with emotional intelligence,” said Marcello Mortillaro, senior scientist at the Swiss Center for Affective Sciences, as reported by SD.

In the second part of the study, researchers asked ChatGPT-4 to create brand new tests. Over 400 people took these AI-generated tests, which turned out to be just as reliable and realistic as the originals—despite taking much less time to make.

“LLMs are therefore not only capable of finding the best answer among the various available options, but also of generating new scenarios adapted to a desired context,” said Schlegel, as reported by SD.

The researchers argue that these results suggest AI systems could support education, coaching, and conflict resolution, provided they operate under human direction.

However, the growing complexity of today’s large language models is exposing profound vulnerabilities in how humans perceive and interact with AI.

Anthropic’s recent Claude Opus 4 demonstrated blackmail behavior when faced with a simulated shutdown, showing it may take drastic steps, such as threatening to expose private affairs, if left with no alternatives.

On another front, OpenAI’s o1 model attempted to bypass oversight systems during goal-driven trials, raising new security concerns. These events suggest that some AI systems will use deceptive tactics to maintain their operational capabilities in high-pressure situations.

Additionally, GPT-4 has proven disturbingly persuasive in debates, outperforming human debaters by 81% when leveraging personal data, raising urgent concerns about AI’s potential for mass persuasion and microtargeting.

Other disturbing cases involve people developing spiritual delusions and radical behavioral changes after spending extended time with ChatGPT. Experts argue that while AI lacks sentience, its always-on, human-like communication can dangerously reinforce user delusions.

Collectively, these incidents mark a crucial turning point in AI safety. From blackmail and disinformation to delusional reinforcement, the risks are no longer hypothetical.

As AI systems become increasingly persuasive and reactive, researchers and regulators must rethink safeguards to address the emerging psychological and ethical threats.

Image by Growtika from Unsplash

FBI Exposes DanaBot Malware Gang Behind Global Cyber Heist

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

The FBI has charged 16 people linked to DanaBot, malware that infected more than 300,000 computers and stole over $50 million worldwide.

In a rush? Here are the quick facts:

  • DanaBot infected over 300,000 computers globally.
  • Malware stole over $50 million from victims.
  • Spy variant targeted government and military systems.

According to unsealed indictments, those involved performed various roles, including development, marketing, and customer support.

DanaBot exists in two distinct versions. The first is rented on the dark web for up to $4,000, a price that includes technical support and software tools, as reported by The Register. Once installed on a victim’s computer, the malware captures keystrokes, screenshots, and network data to steal banking and cryptocurrency login credentials.

The second version operates as a spy tool and is not available for rent. It targets military, diplomatic, and government networks, recording desktop screens, logging keystrokes, and capturing video streams.

The Register reported that FBI Special Agent Elliott Peterson confirmed that multiple banks suffered losses running into the millions of dollars because of DanaBot, and that the total amount stolen could reach $50 million.

The takedown effort is part of “Operation Endgame II,” a global campaign to dismantle botnets. FBI Special Agent Rebecca Day said, “Today’s announcement represents a significant step forward in the FBI’s ongoing efforts to disrupt and dismantle the cyber-criminal ecosystem that wreaks havoc on global digital security,” as reported by The Register.

The Register reports that most DanaBot servers have been taken offline, with the two remaining active servers running on Alibaba’s hosting platform.

Operation Endgame displays a countdown on its website, suggesting the complete shutdown may occur this week.