

DOJ Busts North Korean Tech Job Scam Using Stolen U.S. Identities

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

The U.S. Justice Department dismantled a North Korean scheme that used stolen American identities to infiltrate tech jobs and fund the Kim regime.

In a rush? Here are the quick facts:

  • North Koreans used stolen U.S. IDs to land tech jobs remotely.
  • DOJ seized 200 computers across 16 states in crackdown.
  • Two Americans charged with aiding North Korean impersonation scheme.

The U.S. Department of Justice (DOJ) uncovered a significant operation in which North Korean workers used stolen American identities to obtain remote tech positions at U.S. companies, as first reported by WIRED.

In their announcement on Monday, authorities revealed that they had searched 29 “laptop farms” across 16 states, seizing 200 computers and taking control of 21 websites and 29 financial accounts tied to the scheme.

The workers stole more than 80 American identities to get jobs at over 100 companies, funneling their earnings to the North Korean government. Two Americans from New Jersey, Kejia Wang and Zhenxing Wang, face charges for creating fake identities and establishing remote access points for the impersonators. Only Zhenxing Wang has been arrested.

“Whenever you have a laptop farm like this, that’s the soft underbelly of these operations. Shutting them down across so many states, that’s massive,” said Michael Barnhart, an investigator at security firm DTEX, as reported by WIRED.

The Wangs obtained private information from more than 700 Americans to enable North Koreans to create false identities. The stolen credentials originated from criminal forums operating on the dark web.

Barnhart noted, “They have a stable of these […] they’re just going to piggyback [on data breaches] because it’s already out there.”

The fake workers penetrated multiple high-profile companies during their operations. WIRED reported that a California defense contractor suffered a breach in which an impersonator accessed AI-related data covered by export regulations.

North Korean hackers stole more than $900,000 from cryptocurrency firms, with $740,000 coming from an Atlanta-based company, as reported by the DOJ.

While this crackdown is a major blow to the operation, Barnhart warns, “This is going to put a heavy dent in what they’re doing. But as we adapt, they adapt.”



Opinion: AI In Warfare—The Tech Industry’s Quiet Shift Toward the Battlefield

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

The debate over autonomous weapons, tech security policies, and AI ethics in the military has been ongoing, but recent days have brought major developments. Leaders at OpenAI and DeepSeek, and even Spotify’s founder, have announced new agreements to work with governments on defense technologies and strategic AI.

Tensions around the use of artificial intelligence in warfare have intensified in the past few days. This month, several tech companies announced new strategic partnerships with governments to develop defense projects. And, as with much in the AI space, there’s been a sharp shift in recent months in how AI is being approached for military and weapons development.

Just days ago, OpenAI and the U.S. government announced a $200 million deal to develop AI-powered defense tools. Details remain scarce, with officials emphasizing “administrative operations” as the primary application.

Meanwhile, Swedish entrepreneur and Spotify founder Daniel Ek has backed the German company Helsing by leading a €600 million investment round. Helsing, which originally focused on software technology, is now moving into drones, submarines, and aircraft development.

Reuters recently revealed that DeepSeek is helping China’s military and intelligence operations. A senior U.S. official said the AI startup has been helping China navigate the challenges of the U.S.-China trade war, and that its open-source model is aiding the Chinese government in surveillance operations.

Tech giants are collaborating with governments in ways we’re not used to seeing—at least not so publicly—and they’re getting involved in activities that traditionally haven’t been part of their role, like senior tech executives joining the U.S. Army Reserve.

The Army’s Detachment 201: Executive Innovation Corps is an effort to recruit senior tech executives to serve part-time in the Army Reserve as senior advisors to help guide rapid and scalable tech solutions to complex problems. ⤵️ https://t.co/95LjcCmbYe — U.S. Army Reserve (@USArmyReserve) June 24, 2025

What’s going on?

A Shift in Speech

Tech companies went from “We would never use AI for military purposes” to “Maybe we will silently delete this clause from our policies” to “Great news, we are now building AI-powered weapons for the government!”

At least, that’s how it appears to the attentive observer.

Not long ago, AI giants seemed proud to declare they would never support military applications, but something changed. Google is a great example.

In 2017, the U.S. Department of Defense launched Project Maven, the Algorithmic Warfare Cross-Functional Team, an initiative to integrate AI into military operations. Google was initially involved, but internal protests—driven by employee concerns over ethics—prompted the company to withdraw temporarily.

Last year, amid another push toward military work, almost 200 Google DeepMind employees urged the company to drop its military contracts.

“Any involvement with military and weapon manufacturing impacts our position as leaders in ethical and responsible AI, and goes against our mission statement and stated AI Principles,” wrote the concerned employees.

This time, Google’s response was to wait and then quietly update its AI ethics guidelines, removing the pledge that it would never develop AI technology that could cause harm. Demis Hassabis, Google’s AI head, explained that the company was simply adapting to changes in the world.

While Google’s case illustrates the evolving relationship between AI and military use, it’s just one example of a broader industry-wide shift toward serving defense objectives.

AI Is Reshaping the Military and Defense Sector

The launch of Project Maven, or, as some might call it, “when the U.S. government realized machine learning could be extremely useful in warfare,” revealed one of the reasons the U.S. government is interested in AI.

AI systems’ abilities to process massive amounts of data, identify objects on the battlefield, and analyze imagery are especially appealing in the defense sector.

Enhanced Analyses, Beyond Human Capabilities

Since 2022, both Ukraine and Russia have been integrating AI systems into their military operations.

The Ukrainian government has partnered with tech companies and deployed multiple strategies to make the most of AI systems. It recently processed 2 million hours of battlefield footage—the equivalent of roughly 228 years of video—to train AI models for military tasks. How many humans would they need to analyze that much data?
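As a quick sanity check on that figure, the conversion is simple arithmetic (a back-of-the-envelope calculation, not a number from the report itself):

```python
# Rough conversion of 2 million hours of footage into years of continuous video.
hours_of_footage = 2_000_000
hours_per_year = 24 * 365          # ignoring leap years for a rough estimate
years = hours_of_footage / hours_per_year
print(f"{years:.0f} years")        # -> 228 years
```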

“This is food for the AI: If you want to teach an AI, you give it 2 million hours (of video), it will become something supernatural,” explained Oleksandr Dmitriev, founder of the non-profit digital platform OCHI. The footage can be used to optimize weapons performance and improve combat tactics.

Another system, Avengers, is an AI-powered intelligence platform developed by Ukraine’s Ministry of Defense Innovation Center; it processes live video from drones and identifies up to 12,000 enemy units weekly.
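Neither Avengers nor its models are public, so any implementation detail is guesswork. Still, the general shape of such a pipeline (sampling frames from live drone feeds and tallying detections) can be sketched. Everything below, from the function name to the stand-in detector, is an illustrative assumption rather than the Ministry's actual code:

```python
# Illustrative sketch of a live drone-feed detection loop. The `detect`
# callable is a stand-in for whatever object-detection model a real
# platform would use; nothing here reflects the actual Avengers system.
import cv2

def count_detections(stream_url: str, detect, sample_every: int = 30) -> int:
    """Sample frames from a video stream and tally detected objects.
    `detect` maps a frame (a numpy array) to a list of bounding boxes."""
    cap = cv2.VideoCapture(stream_url)
    total = 0
    frame_idx = 0
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % sample_every == 0:  # skip frames to keep inference cheap
            total += len(detect(frame))
        frame_idx += 1
    cap.release()
    return total
```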

Drones: A Hot Commodity on the Battlefield

Drones on the battlefield—often referred to as “killing machines”—are currently among the most valuable technologies in modern warfare due to their autonomy, precision, and low cost. These robots allow warring nations to carry out high-impact strikes without risking human pilots and at a fraction of the traditional expense.

By May this year, Russia had deployed over 3,000 Veter kamikaze drones in Ukraine. These systems are capable of identifying targets and executing attacks autonomously.

Just days ago, Ukrainian soldiers deployed the Gogol-M drone, a “mothership” drone that can travel up to 300 kilometers, carry other drones, evade radar by flying at low altitudes, and scan the ground beneath it to detect and attack enemy troops.

According to The Guardian, each attack using this powerful drone costs around $10,000, whereas a strike with a slightly older missile system would have cost between $3 million and $5 million.
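In rough numbers, that makes each drone strike 300 to 500 times cheaper than the missile alternative (simple arithmetic on the reported figures, not an official comparison):

```python
# Cost ratio implied by The Guardian's figures (back-of-the-envelope only).
drone_strike = 10_000
missile_low, missile_high = 3_000_000, 5_000_000
print(missile_low // drone_strike, missile_high // drone_strike)  # -> 300 500
```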

The brand-new startup Theseus quickly raised $4.3 million after its young founders shared a post on the social media platform X last year saying they had built a drone for less than $500 that could fly without a GPS signal.

we designed, 3d printed and built a <$500 drone with that calculates GPS coordinates without a signal using a camera + google maps in 24h pic.twitter.com/8P2QoQMNbW — Ian Laffey (@ilaffey2) February 18, 2024
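The tweet doesn't explain the method, but the general technique it gestures at, visual geolocation, is well known: match the drone camera's view against georeferenced satellite imagery and read the position off the map. The sketch below is a minimal, heavily simplified illustration of that idea using OpenCV template matching; the function, parameters, and coordinate assumptions are all mine, not Theseus's:

```python
# Minimal sketch of GPS-denied visual geolocation: match a downward-facing
# camera frame against a georeferenced satellite mosaic. Assumes both images
# are grayscale and share scale and orientation; a real system would first
# rectify the frame using altitude and heading data. Illustrative only.
import cv2

def locate_in_map(frame_gray, mosaic_gray, origin_latlon, deg_per_px):
    """Return an estimated (lat, lon) and a match-confidence score."""
    result = cv2.matchTemplate(mosaic_gray, frame_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(result)
    # Centre of the best-matching window, in mosaic pixel coordinates.
    cx = top_left[0] + frame_gray.shape[1] / 2
    cy = top_left[1] + frame_gray.shape[0] / 2
    lat0, lon0 = origin_latlon       # lat/lon of the mosaic's top-left corner
    return (lat0 - cy * deg_per_px, lon0 + cx * deg_per_px), score

# Hypothetical usage, with images loaded via cv2.imread(..., cv2.IMREAD_GRAYSCALE):
# (lat, lon), confidence = locate_in_map(frame, mosaic, (48.5, 35.0), 1e-5)
```

Raw template matching is brittle to lighting, season, and viewpoint changes, which is presumably why production systems layer it with feature matching and inertial data.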

Although drone technology is not yet as precise as some developers hope—especially when affected by weather conditions that reduce its “visibility”—it has shown great potential in the sector.

A Hard-to-Reach Global Consensus

It’s not just countries at war or the world’s major powers that are developing new AI-powered technologies for defense. Many nations have been integrating AI into cybersecurity efforts and autonomous weapons development for years. This isn’t just a 2025 phenomenon.

Since 2014, the United Nations has been trying to get member nations to agree on regulatory frameworks, without success.

Over 90 nations recently gathered at the U.N. General Assembly in New York to discuss the future of AI-controlled autonomous weapons and their regulation. They did not reach consensus, and the General Assembly has so far passed only a non-binding 2023 resolution warning of the need to address lethal autonomous weapons systems (LAWS).

The big debate now is whether to implement a global framework. Many countries agree on the need for new global guidelines that can regulate private AI companies and nations. Others, such as the U.S., China, Russia, and India, prefer to keep current international law and let each nation independently create new rules according to its local needs—or interests. And we’ve just witnessed how chaotic the process of creating new AI regulations can be, even at the state level in California.

Tech Companies Increasingly Involved

Activists such as Laura Nolan of Stop Killer Robots worry about the lack of safety measures and legal frameworks governing tech companies’ development of autonomous weapons and military AI software.

“We do not generally trust industries to self-regulate … There is no reason why defence or technology companies should be more worthy of trust,” Nolan told Reuters.

In 2024, researchers revealed that Chinese institutions had been using Meta’s open-source large language model Llama for military purposes. The Pentagon reached a deal with Scale AI to develop Thunderforge, an AI project to modernize military decision-making. And OpenAI partnered with Anduril, a military contractor that supplies the U.S. military, the UK, Ukraine, and Australia.

Defense startups have also grown in Europe, gaining ground not only in the development of new technologies and projects but also in attracting top talent.

A Complicated Development

Another factor closely tied to tech companies’ involvement in national defense strategies is nationalism. More and more software developers and AI experts are choosing to work on projects that align with their ideals and cultural roots rather than simply chasing higher salaries. Some have even turned down U.S. jobs offering twice the pay—at companies such as Google or OpenAI—to join European ventures like Helsing.

The threads of politics, technology, nationalism, and ideological battles are becoming increasingly intertwined—often leaving behind considerations of ethics, morality, and humanism.

Recent developments make it clear that tech giants are playing a huge role in military and national defense efforts around the world. The development of autonomous weapons and war-related technologies is advancing at an ultra-fast pace, while the United Nations’ efforts to establish international agreements and regulations for the future of humanity appear increasingly sidelined.

Without international agreements—and with ambitious tech companies backed by governments to develop the world’s most powerful weapons using AI—what does the future hold for humanity in the years to come?