
Kimsuky Hacking Group Adopts Malwareless Phishing, Evading Detection Systems
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Kimsuky uses malwareless phishing tactics, Russian email services, and convincing sites to target researchers, institutions, and financial organizations, evading detection.
In a Rush? Here are the Quick Facts!
- Kimsuky uses malwareless phishing tactics to bypass major EDR detection systems.
- The group shifted from Japanese to Russian email services for phishing campaigns.
- Attacks rely on convincing emails impersonating public and financial institutions.
Researchers in South Korea have uncovered a shift in the tactics of the North Korean hacking group Kimsuky, which has begun employing malwareless phishing attacks designed to bypass major Endpoint Detection and Response (EDR) systems, as reported by Cyber Security News (CSN).
This group, active for several years, has targeted researchers and organizations that focus on North Korea. Its evolving strategies aim to evade detection and increase the success rate of its campaigns.
CSN reports that a significant change in Kimsuky’s approach involves its email attack methods. Previously, the group relied heavily on Japanese email services for its phishing campaigns.
However, recent findings reveal a transition to Russian email services, making it more challenging for targets to identify suspicious communications and avoid potential compromises, says CSN.
Kimsuky has increasingly adopted malwareless phishing attacks, relying on carefully crafted URL-based phishing emails that lack malware attachments, rendering them harder to detect, according to CSN.
These emails often impersonate entities such as electronic document services, email security managers, public institutions, and financial organizations.
The group’s emails are highly sophisticated, frequently incorporating familiar financial themes to increase their credibility and the likelihood of user engagement, says CSN.
Reports have identified Kimsuky’s use of domains from “MyDomain[.]Korea,” a free Korean domain registration service, to create convincing phishing sites, notes CSN.
A timeline of activities detailed by Genians highlights the group’s gradual shift in domain usage, beginning with Japanese and US domains in April 2024, moving to Korean services by May, and eventually adopting fabricated Russian domains by September, says CSN.
These Russian domains, linked to a phishing tool called “star 3.0,” were registered to support the group’s campaigns. A file associated with these attacks, named “1.doc,” was flagged on VirusTotal, with some anti-malware services identifying it as connected to Kimsuky, reports CSN.
Interestingly, the group’s use of the “star 3.0” mailer ties back to earlier campaigns identified in 2021. At that time, the mailer was discovered on the website of Evangelia University, a US-based institution, and was linked to North Korean threat actors in reports by Proofpoint.
The evolving tactics of Kimsuky emphasize the need for vigilance among potential targets.
Cybersecurity experts recommend heightened scrutiny of suspicious communications, particularly those related to financial matters, and the adoption of advanced endpoint defenses.
Staying informed about the group’s methods and updating security policies in response to emerging threats are crucial for protecting sensitive information.

ChatGPT Struggles with Accurate Citations, Raising Concerns for Publishers
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
ChatGPT’s frequent citation errors, even with licensed content, undermine publisher trust and highlight risks of generative AI tools misrepresenting journalism.
In a Rush? Here are the Quick Facts!
- ChatGPT often fabricates or misrepresents citations, raising concerns for publishers.
- Researchers found 153 of 200 citations were incorrect, undermining trust in ChatGPT.
- ChatGPT sometimes cites plagiarized sources, rewarding unlicensed content over original journalism.
A recent study from Columbia Journalism School’s Tow Center for Digital Journalism has cast a critical spotlight on ChatGPT’s citation practices, revealing significant challenges for publishers relying on OpenAI’s generative AI tool.
The findings suggest that publishers face potential reputational and commercial risks due to ChatGPT’s inconsistent and often inaccurate sourcing, even when licensing deals are in place.
The study tested ChatGPT’s ability to attribute quotes from 200 articles across 20 publishers, including those with licensing deals and those in litigation against OpenAI, as reported this week by Columbia Journalism Review (CJR).
Despite OpenAI’s claims of providing accurate citations, the chatbot returned incorrect or partially incorrect responses in 153 of the 200 instances. Only seven times did it acknowledge its inability to locate the correct source, often opting to fabricate citations instead.
Examples include ChatGPT falsely attributing a quote from the Orlando Sentinel to Time and referencing plagiarized versions of New York Times content from unauthorized sources.
Even when publishers allowed OpenAI’s crawlers access, citations were often misattributed, such as linking to syndicated versions rather than the original articles.
Mat Honan, editor-in-chief of MIT Technology Review, expressed skepticism over ChatGPT’s transparency, noting that its responses could mislead users unfamiliar with AI’s limitations.
CJR notes that OpenAI defends its efforts, highlighting tools for publishers to manage content visibility and pledging to improve citation accuracy.
However, the Tow Center found that enabling crawlers or licensing content does not ensure accurate representation, with inconsistencies spanning both participating and non-participating publishers.
ChatGPT’s inaccuracies in referencing publisher content can erode trust in journalism and harm publishers’ reputations. When it misattributes or misrepresents articles, audiences may struggle to identify original sources, diluting brand recognition.
Even publishers permitting OpenAI’s crawlers or holding licensing agreements are not immune to these errors, highlighting systemic flaws. ChatGPT’s tendency to offer confident answers rather than admit gaps in its knowledge misleads users and undermines transparency.
Such practices could distance audiences from credible news sources, incentivize plagiarism, and weaken the visibility of high-quality journalism. These consequences jeopardize the integrity of information-sharing and trust in digital media platforms.