
Image by Kevin Ku, from Unsplash

Hackers Selling Stolen Military And Defense Contractor Credentials For $10

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

A new report by cybersecurity firm Hudson Rock reveals a troubling security breach affecting the U.S. military, federal agencies, and top defense contractors, including Lockheed Martin, Boeing, and Honeywell.

In a Rush? Here are the Quick Facts!

  • Malware steals VPN access, email logins, and multi-factor authentication session cookies.
  • 398 Honeywell employees were infected, exposing internal systems and third-party credentials.
  • U.S. Navy personnel had their login details stolen, risking military system breaches.

The report claims that employees in these organizations have been infected with “infostealer” malware, which collects login credentials, email access, and other sensitive data.

Unlike traditional cyberattacks that involve hacking into networks, infostealer malware waits for a user to unknowingly download an infected file—often a game mod, pirated software, or a malicious email attachment.

Once installed, the malware collects login details, browsing history, and stored passwords. Cybercriminals then sell this stolen data for as little as $10 per compromised computer on underground markets, as detailed in the report.

The stolen information includes credentials for VPNs, government email accounts, and classified procurement systems. Even multi-factor authentication (MFA) can be bypassed using stolen session cookies, allowing hackers to gain unauthorized access to secure systems.
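To see why a stolen session cookie defeats MFA, consider this minimal Python sketch (the URL, cookie name, and token value are hypothetical placeholders, not details from the report): once a server issues a session cookie after a successful login and MFA check, it typically trusts any request presenting that cookie, so an attacker who replays it never faces the MFA prompt.

```python
# Minimal illustrative sketch (hypothetical URL and cookie name):
# why a stolen session cookie sidesteps MFA. The server issued this
# cookie to the victim *after* they completed login and MFA, so any
# request carrying it is treated as already authenticated.
import requests

# Value exfiltrated from the victim's browser by infostealer malware.
stolen_cookie = {"session_id": "<exfiltrated-session-token>"}

# The attacker replays the cookie; no password or MFA prompt is involved.
response = requests.get(
    "https://portal.example.com/dashboard",  # hypothetical internal portal
    cookies=stolen_cookie,
    timeout=10,
)

# A 200 response here would mean the hijacked session was accepted.
print(response.status_code)
```

This is why short session lifetimes and invalidating sessions after a suspected compromise matter: the cookie, not the MFA check, is what the server trusts on every subsequent request.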

The report highlights that employees at some of the most critical U.S. defense companies have been affected. One case study shows that 398 Honeywell employees had their credentials leaked, exposing internal portals and software tools. A further 472 third-party accounts connected to Microsoft, Cisco, and SAP were also compromised.

Beyond the private sector, the U.S. Army and Navy have also been targeted, with at least 30 Navy personnel having their login credentials and browsing history stolen. Even the FBI and Government Accountability Office (GAO) have been impacted, raising concerns about national security risks.

Hudson Rock warns that these breaches don’t just affect the individual companies involved. Many organizations work together in the defense industry, meaning a security breach in one company can expose its entire network of partners, suppliers, and government agencies.

While cybersecurity measures exist to monitor and detect such breaches, experts stress the importance of prevention.

The researchers say that companies and government agencies must enforce stronger cybersecurity practices, including stricter download policies, improved employee training, and enhanced malware detection tools.


Photo by Saúl Bucio on Unsplash

AI-Generated Errors in Court Papers Lead to Legal Trouble for Lawyers

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

A report shared by Reuters yesterday reveals that AI’s hallucinations—errors and made-up information created by generative AI models—are causing legal problems in courts in the United States.

In a Rush? Here are the Quick Facts!

  • Morgan & Morgan sent an email to more than 1,000 lawyers warning about the risks of AI.
  • A recent case in which lawyers suing Walmart admitted to citing fake, AI-generated cases has raised alarms in the legal community.
  • The use of chatbot hallucinations in court statements has become a recurring issue in recent years.

This month, the law firm Morgan & Morgan sent an email warning over 1,000 lawyers about the risks of using chatbots and fake cases generated by artificial intelligence.

A few days ago, two lawyers in Wyoming admitted to including fake cases generated by AI in a court filing for a lawsuit against Walmart, and a federal judge threatened to sanction them.

In December, Stanford professor and misinformation expert Jeff Hancock was accused of using AI to fabricate citations in a court declaration he submitted in defense of Minnesota’s 2023 law criminalizing the use of deepfakes to influence elections.

Cases like these have recurred over the past few years, creating legal friction and extra work for judges and litigants. Morgan & Morgan and Walmart declined to comment on the issue.

Generative AI has been helping lawyers cut research time, but its hallucinations can carry significant costs. Last year, a Thomson Reuters survey found that 63% of lawyers had used AI for work and 12% used it regularly.

Last year, the American Bar Association reminded its 400,000 members of the attorney ethics rules, which require lawyers to stand by all the information in their court filings, and noted that this extends to AI-generated content, even when the error is unintentional, as in Hancock’s case.

“When lawyers are caught using ChatGPT or any generative AI tool to create citations without checking them, that’s incompetence, just pure and simple,” Andrew Perlman, dean of Suffolk University’s law school, told Reuters.

A few days ago, the BBC also published a report warning about fake quotes generated by AI and the broader problems AI tools pose for journalism.