
Image by DC Studio, from Freepik
New Hacker Group Found Hiding in Legitimate Websites
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
“Curly COMrades,” a hacker group with advanced espionage tactics, is targeting governments and energy companies in Eastern Europe.
In a rush? Here are the quick facts:
- The hackers steal passwords to keep breaking into systems.
- They use a special backdoor to stay hidden on computers.
- Stolen data is sent through real but hacked websites.
Bitdefender Labs has identified a new hacker group, “Curly COMrades,” believed to be operating in support of Russian interests and targeting nations undergoing political change. Since mid-2024, the group has attacked judicial and government bodies in Georgia and an energy company in Moldova.
The hackers’ main goal is to “maintain long-term access to target networks and steal valid credentials.” They repeatedly tried to extract the NTDS database, which stores Windows user passwords, and dump LSASS memory to recover login details, possibly in plain text.
The “Curly COMrades” operation depends on establishing robust access points using tools such as Resocks, SSH, and Stunnel. The attackers deploy MucorAgent, a custom backdoor that hides its access by hijacking the CLSIDs of Windows’ .NET Native Image Generator (NGEN) tasks. Because NGEN tasks fire at irregular, system-determined intervals, this persistence method is difficult to detect.
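Bitdefender has not published MucorAgent’s exact registry footprint, but CLSID hijacking in general works by repointing a COM class’s registered handler at an attacker-controlled DLL. The sketch below is a hypothetical detection heuristic (not Bitdefender’s method): it enumerates per-user COM registrations on Windows and flags in-process handlers living outside the usual system directories.

```python
# Hypothetical heuristic: audit per-user COM registrations for possible
# CLSID hijacks. Illustrative only, not Bitdefender's detection logic.
import winreg

def list_user_com_servers():
    """Yield (clsid, dll_path) for every InprocServer32 under HKCU.

    Per-user CLSID entries take precedence over machine-wide ones at COM
    activation time, which is what makes this branch a persistence spot.
    """
    root = r"Software\Classes\CLSID"
    with winreg.OpenKey(winreg.HKEY_CURRENT_USER, root) as clsids:
        i = 0
        while True:
            try:
                clsid = winreg.EnumKey(clsids, i)
            except OSError:
                break  # no more subkeys
            i += 1
            try:
                sub = rf"{root}\{clsid}\InprocServer32"
                with winreg.OpenKey(winreg.HKEY_CURRENT_USER, sub) as k:
                    dll, _ = winreg.QueryValueEx(k, "")  # default value
                    yield clsid, dll
            except OSError:
                continue  # no in-process server registered

if __name__ == "__main__":
    for clsid, dll in list_user_com_servers():
        # Flag handlers outside the usual system paths for manual review.
        if not dll.lower().startswith((r"c:\windows", r"c:\program files")):
            print(f"review: {clsid} -> {dll}")
```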
The attackers hide their operations by sending stolen data and remote commands through compromised legitimate websites, mixing malicious traffic with typical network activity. Bitdefender says “it’s very likely that what we’ve observed is just a small part of a much larger network of compromised web infrastructure they control.”
Citing insufficient evidence, Bitdefender declined to link the group to any known threat actor. The researchers coined the name from technical indicators (the group’s heavy use of ‘curl.exe’ and its ‘COM object’ hijacking), a choice meant to avoid glamorizing cybercrime.
The investigation began when suspicious proxy-tool activity raised red flags, ultimately uncovering a much larger espionage operation. Given the group’s tactics and persistence, the researchers consider it a serious threat to high-value political and infrastructure targets.

Image by Justin Lane, from Unsplash
Scientists Hide Light Codes To Expose Fake AI Videos
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Cornell researchers have developed a new technique to help fact-checkers detect fake or manipulated videos: embedding secret watermarks in light itself.
In a rush? Here are the quick facts:
- Cornell researchers developed light-based watermarks to detect fake or altered videos.
- The method hides secret codes in nearly invisible lighting fluctuations.
- Watermarks work regardless of the camera used to record footage.
The researchers explain that the method hides information in nearly invisible fluctuations of lighting at important events or key locations, from press conferences to entire buildings.
These fluctuations, unnoticed by the human eye, are captured in any video filmed under the special lighting, which can be programmed into computer screens, photography lamps, or existing built-in fixtures.
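Cornell has not released the encoding itself, so purely as a toy illustration: one simple way to drive such a light is with a secret, low-amplitude pseudorandom brightness sequence. In the sketch below, the key, amplitude, and frame count are all illustrative assumptions, not values from the paper.

```python
# Toy illustration of a keyed light code: modulate a lamp's brightness
# with a secret, low-amplitude pseudorandom sequence. This is NOT the
# Cornell team's published encoding, just the general idea.
import numpy as np

def light_code(secret_key: int, n_frames: int, amplitude: float = 0.01) -> np.ndarray:
    """Per-frame brightness offsets, e.g. +/-1% around nominal output."""
    rng = np.random.default_rng(secret_key)  # seeded by the secret key
    return amplitude * rng.choice([-1.0, 1.0], size=n_frames)

# Brightness schedule for a 10-second clip at 30 fps under key 42:
brightness = 1.0 + light_code(secret_key=42, n_frames=300)
```

Anyone filming under such a light records the modulation whether they know it or not; only someone holding the key can later check for it.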
“Video used to be treated as a source of truth, but that’s no longer an assumption we can make,” said Abe Davis, assistant professor of computer science at Cornell, who conceived the idea.
“Now you can pretty much create video of whatever you want. That can be fun, but also problematic, because it’s only getting harder to tell what’s real,” Davis added.
Traditional watermarking techniques modify video files directly, requiring cooperation from the camera or AI model used to create them. Davis and his team bypassed this limitation by embedding the code in the lighting itself, ensuring any real video of the subject contains the hidden watermark, no matter who records it.
Each coded light produces a low-fidelity, time-stamped “code video” of the scene under slightly different lighting. “When someone manipulates a video, the manipulated parts start to contradict what we see in these code videos,” Davis explained.
“And if someone tries to generate fake video with AI, the resulting code videos just look like random variations,” Davis added.
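Under the same toy model, verification reduces to correlating the footage’s per-frame brightness against the sequence the secret key predicts. Again, this is a sketch with assumed names and thresholds, not the published algorithm.

```python
# Toy verification: correlate per-frame brightness of recorded footage
# against the sequence the secret key predicts. Genuine footage lit by
# the coded lamp should correlate strongly; AI-generated or spliced
# frames should look like uncorrelated noise.
import numpy as np

def light_code(secret_key: int, n_frames: int, amplitude: float = 0.01) -> np.ndarray:
    """Same keyed generator as in the embedding sketch above."""
    rng = np.random.default_rng(secret_key)
    return amplitude * rng.choice([-1.0, 1.0], size=n_frames)

def verify(frames: np.ndarray, secret_key: int, threshold: float = 0.5):
    """frames: (n_frames, height, width) grayscale values in [0, 1]."""
    observed = frames.mean(axis=(1, 2))        # per-frame average brightness
    observed -= observed.mean()                # drop the constant component
    expected = light_code(secret_key, len(observed))
    r = np.corrcoef(observed, expected)[0, 1]  # normalized correlation
    return r > threshold, r

# Simulated check: a clip lit by the keyed lamp passes, random video fails.
rng = np.random.default_rng(0)
real = 0.5 + light_code(42, 300)[:, None, None] + 0.001 * rng.normal(size=(300, 8, 8))
fake = 0.5 + 0.01 * rng.normal(size=(300, 8, 8))
print(verify(real, secret_key=42))  # (True, r near 1)
print(verify(fake, secret_key=42))  # (False, r near 0)
```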
Project lead Peter Michael explained that the team drew on studies of human perception to design light codes the eye cannot pick up. Each code mimics the natural “noise” already present in lighting, making it hard to detect without the secret key. Programmable lights can be coded in software, while older lamps can be retrofitted with a chip about the size of a postage stamp.
The team successfully ran up to three separate codes on different lights within the same scene, which makes the watermarks significantly harder to forge. The system also proved effective outdoors and across a range of skin tones.
Still, Davis warns the battle against misinformation is far from over. “This is an important ongoing problem,” he said. “It’s not going to go away, and in fact, it’s only going to get harder.”