
New Dark Pink APT Group Attacks the Government Bodies and Militaries in Asia-Pacific
- Written by Ari Denial, Cybersecurity & Tech Writer
Dark Pink is an advanced persistent threat (APT) group that uses spear phishing techniques to target various entities across Asia-Pacific and Europe.
Between June and December 2022, Dark Pink launched a series of attacks against several Asian countries, including Vietnam, Cambodia, Indonesia, the Philippines, and Malaysia, as well as Bosnia and Herzegovina in Europe.
“Group-IB’s early research into Dark Pink has revealed that these threat actors are leveraging a new set of tactics, techniques, and procedures rarely utilized by previously known APT groups. They leverage a custom toolkit, featuring TelePowerBot, KamiKakaBot, and Cucky and Ctealer information stealers (all names dubbed by Group-IB) with the aim of stealing confidential documentation held on the networks of government and military organizations,” said a Group-IB malware analyst.
According to reports, the initial vector of Dark Pink’s attacks was spear phishing campaigns in which the operators impersonated job applicants. Dark Pink can also infect USB devices connected to compromised computers, and it can access messaging apps installed on those machines.
The security team reported that the threat actors had created PowerShell scripts to communicate between victims’ machines and their infrastructure, and that they used the Telegram API to communicate with infected hosts.
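Because the Telegram Bot API is an ordinary public HTTPS service, tooling that uses it as a command-and-control channel produces traffic to a predictable host and path. As an illustrative sketch only (this is not Dark Pink's actual code; the token and method names are the standard, publicly documented Bot API ones), here is the endpoint pattern such bot-based tooling calls, which defenders can use to write network detection rules:

```python
def telegram_bot_api_url(bot_token: str, method: str = "sendMessage") -> str:
    """Build the public Telegram Bot API endpoint URL for a given method.

    Both benign bots and bot-based implants call this same pattern; the
    token here is a hypothetical placeholder, never a real credential.
    """
    return f"https://api.telegram.org/bot{bot_token}/{method}"


# Outbound HTTPS requests matching this host/path pattern are what network
# defenders can alert on when Telegram traffic is not expected in the fleet.
print(telegram_bot_api_url("123456:EXAMPLE-TOKEN"))
# → https://api.telegram.org/bot123456:EXAMPLE-TOKEN/sendMessage
```

Because the host (`api.telegram.org`) is fixed by the API, blocking or alerting on it at the proxy is a low-cost mitigation in environments where Telegram has no business use.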
“Countries of the Asia-Pacific region have long been the target of advanced persistent threat (APT) groups. Earlier Group-IB research found that this region has often been a “key arena” of APT activity, and a mixture of nation-state threat actors from China, North Korea, Iran, and Pakistan have been tied to a wave of attacks in the region. More often than not, the primary motive for APT attacks in the Asia-Pacific (APAC) region is not financial gain, but rather espionage,” Group-IB officials noted.
In their research report (published in January 2023), the Group-IB security analysts reported that the Dark Pink APT group is still active. They are investigating the issue further to determine its scope, and they suggested organizations take the precautions below to prevent hacking:
- Use business email protection tools.
- Introduce a cybersecurity culture in the workspace.
- Limit file-sharing access to confidential resources.
- Use only reputable, trusted tools to get work done.

Cyber Threat Actors Are Using ChatGPT to Code Deployable Malware
- Written by Ari Denial, Cybersecurity & Tech Writer
Hackers are using ChatGPT, the brainchild of OpenAI, to write malicious code and deploy malware.
As per the reports, less experienced cybercriminals are utilizing ChatGPT to easily create malware strains that can be used for different cybercrimes. Hackers are also using the AI chatbot to create dark websites, steal personal files, obtain bank account credentials, and prepare other fraudulent schemes.
ChatGPT can provide step-by-step instructions that help hackers replicate malware and ransomware strains. In a recent experiment, cybersecurity researchers ethically hacked a website in under 45 minutes using a hacking script generated by ChatGPT.
“Just as it can be used for good to assist developers in writing code for good, it can (and already has) been used for malicious purposes,” said Matt Psencik, Director, Endpoint Security Specialist at Tanium.
“A couple of examples I’ve already seen are asking the bot to create convincing phishing emails or assist in reverse engineering code to find zero-day exploits that could be used maliciously instead of reporting them to a vendor,” he added.
Hackers are exploiting ChatGPT to create malicious scripts for cybercrime. The files are then sold or shared on the dark web and other underground community forums.
When reporters asked OpenAI for clarification, a spokesperson said — “Threat actors may use artificial intelligence and machine learning to carry out their malicious activities. OpenAI is not responsible for any abuse of its technology by third parties.”
“While we’ve made efforts to make the model refuse inappropriate requests, it will sometimes respond to harmful instructions or exhibit biased behavior. We’re using the Moderation API to warn or block certain types of unsafe content, but we expect it to have some false negatives and positives for now. We’re eager to collect user feedback to aid our ongoing work to improve this system,” they added.
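The Moderation API mentioned in the statement returns a JSON body whose `results` array carries a boolean `flagged` verdict per input. As a minimal sketch, assuming only that documented response shape (the sample payload below is hypothetical, not a real API response), a client might check the verdict like this:

```python
import json


def is_flagged(moderation_response: str) -> bool:
    """Return True if any result in a Moderation API JSON response is flagged.

    Assumes the documented shape: {"results": [{"flagged": bool, ...}, ...]}.
    """
    data = json.loads(moderation_response)
    return any(result.get("flagged", False) for result in data.get("results", []))


# Hypothetical sample response, for illustration only:
sample = json.dumps({"results": [{"flagged": True, "categories": {"hate": False}}]})
print(is_flagged(sample))  # → True
```

As OpenAI's statement notes, such classifiers produce false negatives and false positives, so a `flagged == False` result is not a guarantee that content is safe.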