
Image by Billy Freeman, from Unsplash
Construction Sector At Risk As Hackers Exploit FOUNDATION Software
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- Hackers are exploiting default passwords in FOUNDATION software, widely used in the construction industry.
- Public database access through the mobile app creates risks.
- Huntress suggests immediate password changes and security fixes.
Security researchers at Huntress announced today that they discovered a hacking campaign targeting companies using FOUNDATION Accounting Software, a popular program in the construction industry.
The hackers are taking advantage of a simple weakness: many companies haven’t changed the default passwords that come with the software.
Normally, databases like the one used by FOUNDATION Accounting Software are kept private and protected by a firewall or VPN.
However, FOUNDATION’s mobile app feature allows public access to the database through a specific TCP port, leaving it exposed to attack.
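To gauge exposure, an administrator can simply test whether that port answers from outside the network. The sketch below is a minimal check in Python; the hostname is a placeholder, and port 4243 is the listener Huntress’s public write-up associates with the mobile app feature, so your installation may differ.

```python
import socket

# Hypothetical hostname -- replace with your server's public address.
HOST = "foundation.example.com"
# 4243 is the TCP port Huntress's write-up links to the mobile-app
# database listener; your installation may use a different one.
PORT = 4243

try:
    with socket.create_connection((HOST, PORT), timeout=5):
        print(f"WARNING: {HOST}:{PORT} is reachable -- the database "
              f"port appears exposed to this network.")
except OSError:
    print(f"{HOST}:{PORT} is not reachable from here.")
```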
Once inside, the attackers can take control of the system and run commands that let them steal information or cause damage.
Huntress observed that the attack was automated, hitting multiple companies in just a few minutes. In one case, attackers made over 35,000 attempts before finally getting access.
To protect against this threat, Huntress recommends that all companies using FOUNDATION immediately change the default passwords, avoid exposing the software to the public internet, and disable certain risky features that hackers can exploit.
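Huntress’s public write-up on this campaign describes the backing database as Microsoft SQL Server, with command execution reaching the operating system through its xp_cmdshell feature. Assuming that setup, the sketch below shows what the first and last of those recommendations might look like in practice; the connection string, account name, and password are placeholders, and the vendor’s own guidance should take precedence.

```python
import pyodbc

# All connection details and credentials below are placeholders.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=localhost;"                 # hypothetical server
    "UID=sa;PWD=old-default-password;"  # the credential being rotated
    "TrustServerCertificate=yes;",
    autocommit=True,  # sp_configure/RECONFIGURE must run outside a transaction
)
cur = conn.cursor()

# 1. Replace the default password on the administrative login.
cur.execute("ALTER LOGIN [sa] WITH PASSWORD = 'a-long-unique-passphrase'")

# 2. Disable xp_cmdshell, the feature that lets SQL queries run OS commands.
cur.execute("EXEC sp_configure 'show advanced options', 1; RECONFIGURE;")
cur.execute("EXEC sp_configure 'xp_cmdshell', 0; RECONFIGURE;")

conn.close()
```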
The researchers said they first identified the malicious activity targeting FOUNDATION last week.
Huntress has already taken action by isolating affected machines and notifying customers who may be at risk. Although the vulnerability is a serious concern, taking these security measures can prevent further attacks.
FOUNDATION did not respond to Recorded Future News’ (RFN) request for comment by the time of publication on Tuesday.
While the extent of the damage caused by these attacks remains unclear, RFN notes that it is crucial for affected companies to investigate and take steps to mitigate any potential harm.

Image from Freepik
Google To Flag AI-Generated Images In Search
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- Google is rolling out a tool to identify AI-edited images in Search results.
- Google joined the C2PA to create standards that trace digital content origins.
- The system relies on the C2PA standard, but adoption is limited among companies and tools.
Google announced today that it plans to roll out changes to Google Search to make it clearer which images in results were generated or edited using AI tools.
The tech giant is leveraging a technology called “Provenance” to identify and label such images, aiming to enhance user transparency and combat the spread of misinformation.
Google explained that Provenance technology can determine if a photo was captured with a camera, altered by software, or created entirely by generative AI.
This information will be made available to users through the “About this image” feature, providing them with more context and helping them make informed decisions about the content they consume.
To bolster its efforts, Google joined the Coalition for Content Provenance and Authenticity (C2PA) as a steering committee member earlier this year. The C2PA has been working to develop standards for tracing the history of digital content, including images and videos.
However, TechCrunch (TC) notes that only images with “C2PA metadata” will be flagged as AI-manipulated in Google Search.
As noted by The Verge, only a limited number of generative AI tools and cameras, such as those from Leica and Sony, support C2PA specifications.
Additionally, TC notes that C2PA metadata, like any form of metadata, can be removed, damaged, or become unreadable. Many popular AI tools, like Flux, which powers xAI’s Grok chatbot, don’t include C2PA metadata, partly because their developers haven’t adopted the standard.
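That fragility is easy to see at the byte level. C2PA provenance travels as ordinary metadata (JUMBF boxes carried in a JPEG’s APP11 segments), so a short script can check whether an image carries it at all, and anything that strips those segments erases the signal Google would rely on. The sketch below is a presence heuristic only, not a validator; real verification means checking the manifest’s cryptographic signatures with a proper tool.

```python
import sys

def has_c2pa_metadata(path: str) -> bool:
    """Heuristic: does this JPEG carry a C2PA/JUMBF metadata segment?"""
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):   # not a JPEG
        return False
    i = 2
    while i + 4 <= len(data) and data[i] == 0xFF:
        marker = data[i + 1]
        seglen = int.from_bytes(data[i + 2:i + 4], "big")
        payload = data[i + 4:i + 2 + seglen]
        # C2PA manifests ride in APP11 (0xEB) segments as JUMBF boxes,
        # and label themselves "c2pa".
        if marker == 0xEB and (b"jumb" in payload or b"c2pa" in payload):
            return True
        if marker == 0xDA:                 # start of scan: image data begins
            break
        i += 2 + seglen
    return False

if __name__ == "__main__":
    print(has_c2pa_metadata(sys.argv[1]))
```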
While this initiative shows promise in combating harmful deepfake content, its success hinges on widespread adoption of the C2PA standard by camera manufacturers and generative AI developers.
However, even with C2PA in place, malicious actors can still remove or manipulate an image’s metadata, potentially undermining Google’s ability to accurately detect AI-generated content.