
Image by Oleksandr Chumak, from Unsplash

Kimsuky Data Breach Reveals South Korean Government Targets

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

In a rush? Here are the quick facts:

  • North Korean hacker group Kimsuky suffered a major data breach.
  • Hackers ‘Saber’ and ‘cyb0rg’ leaked 8.9GB of Kimsuky’s data.
  • Leak includes phishing logs targeting South Korean government domains.

Two hackers calling themselves ‘Saber’ and ‘cyb0rg’ stole and publicly leaked Kimsuky’s internal data, criticizing the group for its political motives and greed, as first reported by BleepingComputer (BC).

“Kimsuky, you are not a hacker. You are driven by financial greed, to enrich your leaders, and to fulfill their political agenda,” the hackers wrote in a message published in the latest issue of Phrack, as noted by BC.

“You steal from others and favour your own. You value yourself above the others: You are morally perverted,” the message reads.

The leaked data, totaling 8.9GB and hosted on the Distributed Denial of Secrets website, exposes Kimsuky’s tools and some of its stolen information, which could reveal previously unknown hacking campaigns.

BC reports that among the data are phishing logs targeting South Korean government domains like dcc.mil.kr (Defense Counterintelligence Command), spo.go.kr, and korea.kr, as well as popular platforms such as daum.net, kakao.com, and naver.com.

The leak also includes the full source code of South Korea’s Ministry of Foreign Affairs email system, “Kebi,” along with lists of university professors and citizen certificates, as noted by BC.

Tools uncovered include phishing site generators with evasion tricks, live phishing kits, unknown binary files, and hacking utilities like Cobalt Strike loaders and reverse shells.

Additionally, BC says that the dump reveals Chrome browsing histories connected to suspicious GitHub accounts, VPN purchases, and hacking forums. There are also signs of activity linked to Taiwanese government and military websites, as well as internal SSH connections.

While some of these details were previously known, the leak connects Kimsuky’s tools and operations in new ways, effectively exposing their infrastructure. Security experts say the breach may cause short-term disruptions but is unlikely to stop Kimsuky’s activities long-term.

BC says it is attempting to reach security researchers to verify the leak’s authenticity and will update its reporting as new information becomes available.


Image by Freepik

Humanlike Robots Negatively Affect How Consumers Treat Real Employees

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

New research shows that as robots and AI become more humanlike, they can unintentionally cause people to see real humans as less human—and treat them worse.

In a rush? Here are the quick facts:

  • Perceiving AI as social and emotional leads to the dehumanization of employees.
  • Robots with nonhuman traits reduce the dehumanization effect.
  • Exposure to humanlike AI lowers donations for employee welfare.

The study, published in SCP, found that when people notice strong social and emotional abilities in autonomous agents, such as virtual assistants or service robots, they tend to think of these machines as having human minds.

This makes people “assimilate” the machines and humans in their minds, leading to a lowered sense of humanness toward real people. The researchers call this “assimilation-induced dehumanization.”

“This dehumanized perception of people leads to negative attitudes and behaviors toward employees,” the paper explains.

When customers interact with AI that seems very humanlike, they may unconsciously reduce how human they see actual employees, which can cause mistreatment.

The effect was observed across different types of machines, both physical robots and disembodied AI like chatbots, and in both real and imagined consumer scenarios. For example, in the studies, people exposed to humanlike AI donated less to employee welfare and made harsher decisions affecting employees.

Interestingly, the study found that this negative effect is lessened if the autonomous agent has traits very different from humans, or if it is seen as having only cognitive (thinking) abilities rather than social or emotional ones.

Simply giving a product a humanlike look without social or emotional features doesn’t cause the same problem.

The researchers warn that as more industries use robots and AI, companies need to be aware of this side effect. “Consumers oftentimes initially engage with chatbots for basic questions and then get relayed to human employees,” they wrote.

If people think machines have a human mind, “people’s attitudes and behaviors toward employees” can worsen, at least temporarily.

To counter this, the study suggests that companies clearly mark the differences between humans and machines. For example, reminding customers when they switch from a chatbot to a real person might help maintain respect for human workers.

The research also raises concerns about how this dehumanization might affect employees themselves and wider social interactions, including reduced kindness between consumers.

In short, while humanlike AI offers many benefits, it can unintentionally blur the line between humans and machines. Understanding this can help businesses and society protect human dignity as technology advances.