The Rise of GhostGPT: Cybercrime’s New Weapon
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Artificial intelligence has revolutionized the way we approach everyday tasks, but it has also created new tools for cybercriminals. GhostGPT, an uncensored AI chatbot, is the latest example of this darker side of AI technology, as reported in a recent analysis by Abnormal.
In a Rush? Here are the Quick Facts!
- GhostGPT is an uncensored AI chatbot used for malware creation and phishing scams.
- It bypasses ethical guidelines by using jailbroken or open-source AI language models.
- GhostGPT is sold via Telegram, offering fast responses and no activity logs.
Unlike traditional AI models that are bound by ethical guidelines, GhostGPT removes those restrictions entirely, making it a powerful tool for malicious purposes. Abnormal reports that GhostGPT operates by connecting to a jailbroken version of ChatGPT or an open-source LLM, stripping away the safeguards that typically block harmful content.
Sold on platforms like Telegram, the chatbot is accessible to anyone willing to pay a fee. It promises fast processing, no activity logs, and instant usability, features that make it particularly appealing to those engaging in cybercrime.
A researcher, speaking anonymously to Dark Reading, revealed that the authors offer three pricing tiers for the large language model: $50 for one week, $150 for one month, and $300 for three months.
Abnormal's researchers explain that the chatbot's capabilities include generating malware, crafting exploit tools, and writing convincing phishing emails. For instance, when prompted to create a fake DocuSign phishing email, GhostGPT produced a polished, deceptive template designed to trick unsuspecting victims.
While promotional materials for the tool suggest it could be used for cybersecurity purposes, its focus on activities like business email compromise scams makes its true intent clear.
What sets GhostGPT apart is its accessibility. Unlike more complex tools that require advanced technical knowledge, this chatbot lowers the barrier to entry for cybercrime.
Newcomers can purchase it and begin using it immediately, while experienced attackers can refine their techniques with its unfiltered responses. The absence of activity logs further enables users to operate without fear of being traced, making it even more dangerous.
The implications of GhostGPT go beyond the chatbot itself. It represents a growing trend of weaponized AI that is reshaping the cybersecurity landscape. By making cybercrime faster, easier, and more efficient, tools like GhostGPT pose significant challenges for defenders.
Recent research shows that AI can generate up to 10,000 malware variants that evade detection 88% of the time. Meanwhile, researchers have uncovered vulnerabilities in AI-powered robots that allow hackers to trigger dangerous actions, such as crashes or weaponization, raising serious security concerns.
As GhostGPT and similar chatbots gain traction, the cybersecurity community is locked in a race to outpace these evolving threats. The future of AI will depend not only on innovation but also on the ability to prevent its misuse.

AI-Automation Risks Deepening Inequality Across UK, Report Says
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
The automation of millions of jobs in the UK risks deepening inequality unless the government steps in to support workers and small businesses, according to a report by the Institute for the Future of Work (IFOW).
In a Rush? Here are the Quick Facts!
- Automation disproportionately affects regions like the North East and parts of Wales.
- SMEs struggle with resources to implement responsible automation practices.
- Workers report anxiety due to AI-driven surveillance and routinization of tasks.
Automation has already transformed many sectors, with 80% of surveyed firms adopting AI and robotic technologies for both physical tasks, such as warehouse management, and cognitive tasks, such as data analysis.
For example, industries such as retail have increasingly adopted automated checkout systems, while manufacturing continues to integrate robotic assembly lines. However, this rapid transformation has led to new pressures on workers, including concerns about job security and wellbeing.
The three-year report, based on a survey of 1,000 businesses, revealed that some major employers had implemented tools to ease the impact of automation and AI on staff.
However, many smaller businesses struggled to understand how these technologies would reshape the workplace and what skills or training employees would need to adapt in the coming decade.
The report highlights stark regional disparities in how automation impacts jobs. For instance, areas such as London and the South East, which already benefit from better infrastructure and innovation ecosystems, have seen more positive outcomes from automation.
In contrast, regions like the North East and parts of Wales face challenges, with fewer resources to adapt to technological change. These regions experience higher rates of job displacement in routine roles, such as administrative support and low-skill manufacturing.
Small and medium-sized enterprises (SMEs), which employ a significant portion of the UK workforce, are also struggling to adapt. Many SMEs lack the expertise and resources to govern and implement automation responsibly.
For example, family-run businesses in traditional sectors like agriculture or construction often face barriers to adopting advanced AI tools, leaving them at risk of falling behind.
The IFOW report also raises concerns about the effects of automation on job quality. While automation can reduce repetitive tasks, it has in some cases led to routinization and reduced discretion in roles.
Workers in warehouses using AI-driven monitoring systems, for example, report feeling increased pressure and anxiety due to constant surveillance. Similarly, automation in delivery services, such as algorithmically managed driver routes, can leave workers with little flexibility or control over their schedules.
To address these challenges, the report calls for government investment in skills training, better regional data collection, and stronger protections for workers. For example, providing reskilling programs focused on digital literacy and advanced communication skills could help workers transition into higher-skill roles.
Enhanced employment rights and worker engagement in decision-making processes are also essential to fostering trust in new technologies.