Image by DC Studio, from Freepik

Hackers Target Aerospace With Fake Job Offers and Hidden Malware

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

In a Rush? Here are the Quick Facts!

  • The campaign uses LinkedIn profiles and fake companies to deliver SnailResin malware.
  • The malware bypasses antivirus by hiding in legitimate cloud services like GitHub.
  • The campaign has targeted organizations since September 2023, constantly changing tactics.

A recent cyber campaign, known as the “Iranian Dream Job” campaign, is targeting employees in the aerospace, aviation, and defense sectors by promising attractive job offers.

Cybersecurity firm ClearSky revealed that this campaign is the work of a group linked to the Iranian hacking organization known as “Charming Kitten” (also referred to as APT35).

The campaign aims to infiltrate targeted companies and steal sensitive information by tricking individuals into downloading malicious software disguised as job-related materials.

ClearSky says that the “Dream Job” scam involves fake recruiter profiles on LinkedIn, often using bogus companies to lure victims into downloading malware. The malware in question, called SnailResin, infects the victim’s computer, enabling the hackers to gather confidential data and monitor activities within the network.

ClearSky notes that these hackers have refined their techniques, such as using genuine cloud services like Cloudflare and GitHub to hide malicious links, making detection challenging.
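
ClearSky does not detail the filtering these links defeat, but the problem is easy to sketch. In the hypothetical Python check below, a payload staged on a trusted cloud host passes the same domain-reputation test as a legitimate project; the domain list and URLs are invented for illustration.

# Hypothetical illustration (not ClearSky's analysis): a malicious file
# staged on a trusted cloud host shares its domain with millions of
# legitimate files, so a simple domain-reputation filter cannot tell
# them apart. Domains and URLs below are made up for the example.
from urllib.parse import urlparse

TRUSTED_DOMAINS = {"github.com", "raw.githubusercontent.com", "cloudflare.com"}

def passes_domain_filter(url: str) -> bool:
    """Return True if the URL's host is on the trusted-domain list."""
    return urlparse(url).hostname in TRUSTED_DOMAINS

# Both URLs pass, even though the first could point at a staged payload.
print(passes_domain_filter("https://raw.githubusercontent.com/someorg/tools/main/setup.zip"))
print(passes_domain_filter("https://github.com/python/cpython"))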

Interestingly, the Iranian hackers have adopted tactics from North Korea’s Lazarus Group, which pioneered the “Dream Job” scam back in 2020. By mirroring Lazarus’ approach, the Iranian hackers mislead investigators, making it harder to trace the attacks back to them.

ClearSky explains that the attack uses a method called DLL side-loading, which allows malware to infiltrate a computer by posing as a legitimate software file. This technique, along with the use of encrypted files and complex coding, helps the hackers bypass common security measures.
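
ClearSky does not share detection tooling, but the mechanism behind DLL side-loading can be illustrated. The hypothetical Python sketch below flags DLLs sitting next to an application that share a name with a DLL shipped in the Windows system directory, the typical setup for this technique; the paths and the heuristic are assumptions for illustration, not the group’s actual method.

# Minimal, hypothetical sketch of spotting a side-loading setup: Windows'
# default DLL search order checks the application's own folder before
# System32, so a rogue DLL named after a system DLL and dropped beside a
# signed EXE gets loaded in its place. Paths are illustrative and the
# script assumes it is running on Windows.
import os

SYSTEM32 = os.path.join(os.environ.get("SystemRoot", r"C:\Windows"), "System32")

def sideload_candidates(app_dir: str) -> list[str]:
    """List DLLs in app_dir whose names shadow DLLs shipped in System32."""
    system_dlls = {n.lower() for n in os.listdir(SYSTEM32)
                   if n.lower().endswith(".dll")}
    return [os.path.join(app_dir, n) for n in os.listdir(app_dir)
            if n.lower().endswith(".dll") and n.lower() in system_dlls]

if __name__ == "__main__":
    for path in sideload_candidates(r"C:\Program Files\SomeApp"):
        print("possible side-loading DLL:", path)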

According to ClearSky, the malware successfully evades many antivirus programs, with only a few security tools able to identify it. Since September 2023, Iran’s “Dream Job” campaign has adapted and evolved, regularly updating its tactics and malware to stay one step ahead of cybersecurity defenses, says ClearSky.

Major cybersecurity firms, including Mandiant, have detected its activity across various countries, especially in the Middle East, notes ClearSky. The firm highlights the campaign’s persistence and sophistication, noting that its structure changes frequently, making it a constant threat to the targeted industries.

ClearSky warns that organizations in aerospace, defense, and similar high-stakes sectors should stay vigilant and adopt proactive measures to combat these types of attacks.

By educating employees about the risks of phishing and fake job offers and implementing robust security protocols, companies can help reduce vulnerability to these highly deceptive cyber threats.

Image by Glashier, from Freepik

University Of Chicago’s Glaze And Nightshade Offer Artists A Defense Against AI

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

In a Rush? Here are the Quick Facts!

  • Glaze and Nightshade protect artists’ work from unauthorized AI training use.
  • Glaze masks images to prevent AI from replicating an artist’s style.
  • Nightshade disrupts AI by adding “poisoned” pixels that corrupt training data.

Artists are fighting back against exploitative AI models with Glaze and Nightshade, two tools developed by Ben Zhao and his team at the University of Chicago’s SAND Lab, as reported today by MIT Technology Review.

These tools aim to protect artists’ work from being used without consent to train AI models, a practice many creators see as theft. Glaze, downloaded over 4 million times since its release in March 2023, masks images by adding subtle changes that prevent AI from learning an artist’s style, says MIT.
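
Glaze’s actual perturbations are computed adversarially against feature extractors, and the tool itself is not open source; the toy Python sketch below only illustrates the underlying idea of a pixel-level change bounded tightly enough to be invisible to people, with random noise standing in for Glaze’s optimized perturbation.

import numpy as np

def cloak(image: np.ndarray, epsilon: float = 2.0, seed: int = 0) -> np.ndarray:
    """Toy stand-in for cloaking: bound each 8-bit channel change by epsilon.
    Glaze optimizes its perturbation against a style extractor; random noise
    here only demonstrates how small the visible change can be."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-epsilon, epsilon, size=image.shape)
    return np.clip(image.astype(np.float64) + noise, 0, 255).astype(np.uint8)

artwork = np.full((256, 256, 3), 128, dtype=np.uint8)  # stand-in for real art
protected = cloak(artwork)
print(int(np.abs(protected.astype(int) - artwork.astype(int)).max()))  # at most 2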

Nightshade, an “offensive” counterpart, further disrupts AI models by introducing invisible alterations that can corrupt AI learning if used in training, as noted by MIT.

The tools were inspired by artists’ concerns about the rapid rise of generative AI, which often relies on online images to create new works. MIT reports that fantasy illustrator Karla Ortiz and other creators have voiced fears about losing their livelihoods as AI models replicate their distinct styles without permission or payment.

For artists, posting online is essential for visibility and income, yet many have considered removing their work to keep it from being scraped for AI training, a step that would hinder their careers, as noted by MIT.

Nightshade, launched a year after Glaze, delivers a more aggressive defense, reports MIT. By adding “poisoned” pixels to images, it disrupts AI training, causing the models to produce distorted results if these images are scraped.
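
Nightshade’s actual method, which shifts what a model learns a concept looks like, is more sophisticated than any short example can show, but the leverage a small poisoned fraction has over training can be sketched with a toy fit. Everything below is invented for illustration.

# Toy illustration (not Nightshade's algorithm): corrupting even 5% of a
# training set visibly skews what a simple model learns. Nightshade applies
# this idea to image pixels and concepts rather than to labels.
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 100)
y = 2.0 * x + rng.normal(0, 0.05, 100)        # clean data: roughly y = 2x

clean_slope = np.polyfit(x, y, 1)[0]

poisoned = y.copy()
poisoned[np.argsort(x)[-5:]] = -5.0           # corrupt the 5 largest-x samples
poisoned_slope = np.polyfit(x, poisoned, 1)[0]

print(f"slope learned from clean data:    {clean_slope:.2f}")
print(f"slope learned from poisoned data: {poisoned_slope:.2f}")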

Nightshade’s symbolic effect has resonated with artists, who see it as poetic justice: if their work is stolen for AI training, it can damage the very systems exploiting it.

MIT reports that the tools have faced some skepticism, as artists initially worried about data privacy. To address this, SAND Lab released a version of Glaze that operates offline, ensuring no data transfer and building trust with artists wary of exploitation.

The lab has also recently expanded access by partnering with Cara, a new social platform that prohibits AI-generated content, as noted by MIT.

Zhao and his team aim to shift the power dynamic between individual creators and AI companies.

By offering tools that protect creativity from large corporations, Zhao hopes to empower artists to maintain control over their work and redefine ethical standards around AI and intellectual property, says MIT.

The effort is gaining momentum, but some experts caution that the tools may not offer foolproof protection, as hackers and AI developers explore ways to bypass these safeguards, as noted by MIT.

With Glaze and Nightshade now accessible for free, Zhao’s SAND Lab continues to lead the charge in defending artistic integrity against the expanding influence of AI-driven content creation.