
AI Faces Data Crisis: Musk Warns Of Exhausted Human Knowledge
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
Artificial intelligence companies have exhausted the available supply of human knowledge for training their models, Elon Musk said during a livestreamed interview, as reported by The Guardian.
In a Rush? Here are the Quick Facts!
- Elon Musk says AI firms have exhausted human knowledge for model training.
- Musk suggests “synthetic data” is essential for advancing AI systems.
- AI hallucinations complicate the use of synthetic data, risking errors in generated content.
The billionaire suggested that firms must increasingly rely on “synthetic” data—content generated by AI itself—to develop new systems, a method already gaining traction. “The cumulative sum of human knowledge has been exhausted in AI training. That happened basically last year,” Musk said, as reported by The Guardian.
This marks a significant challenge for AI models like GPT-4, which rely on massive datasets sourced from the internet to learn patterns and predict text.
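To see in miniature what that pattern-learning involves, here is a deliberately tiny sketch: a bigram model that counts which word follows which in a corpus and samples continuations from those counts. The corpus and function names are illustrative, and real systems like GPT-4 use neural networks trained on billions of documents, but the dependence on large volumes of human-written text is the same.

```python
# Minimal sketch of pattern-learning from text: a toy bigram model.
# Everything here is illustrative; production LLMs work very differently,
# but both learn "what tends to come next" from their training data.
from collections import Counter, defaultdict
import random

corpus = "the cumulative sum of human knowledge has been exhausted in ai training".split()

# Count how often each word follows each other word in the corpus.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Sample a next word in proportion to observed frequencies."""
    counts = transitions[word]
    if not counts:
        return random.choice(corpus)  # fallback for unseen words
    words, weights = zip(*counts.items())
    return random.choices(words, weights=weights)[0]

print(predict_next("human"))  # -> "knowledge", the only observed follower
```

With one sentence of training data, the model can only parrot it back; the more varied text it sees, the more plausible its predictions become, which is why internet-scale corpora, and their exhaustion, matter.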
Musk, who founded xAI in 2023, highlighted synthetic data as the primary solution for advancing AI. However, he cautioned about the risks associated with the practice, particularly AI “hallucinations,” where models generate inaccurate or nonsensical information, as reported by The Guardian.
The Guardian notes that leading tech companies, including Meta and Microsoft, have adopted synthetic data for their AI models, such as Llama and Phi-4. Google and OpenAI have also incorporated this approach.
For example, Gartner estimates that 60% of the data used for AI and analytics projects in 2024 was synthetically generated, as reported by TechCrunch.
Additionally, training on synthetic data offers significant cost savings. TechCrunch notes that AI startup Writer claims its Palmyra X 004 model, developed using almost entirely synthetic sources, cost just $700,000 to create.
In comparison, estimates suggest a similar-sized model from OpenAI would cost around $4.6 million to develop, according to TechCrunch. Yet while synthetic data enables continued model refinement, experts warn of potential drawbacks.
The Guardian reported that Andrew Duncan, director of foundational AI at the Alan Turing Institute, noted that reliance on synthetic data risks “model collapse,” where outputs lose quality over time.
“When you start to feed a model synthetic stuff you start to get diminishing returns,” Duncan said, adding that biases and reduced creativity could also arise.
The growing prevalence of AI-generated content online poses another concern. Duncan warned that such material might inadvertently enter training datasets, further compounding the challenges, as reported by The Guardian.
Duncan referenced a study published in 2022 that predicted high-quality text data for AI training could be depleted by 2026 if current trends persist. The researchers also projected that low-quality language data might run out between 2030 and 2050, while low-quality image data could be exhausted between 2030 and 2060.
Furthermore, a more recent study published in July warns that AI models risk degradation as AI-generated data increasingly saturates the internet. Researchers found that models trained on AI-generated outputs produce nonsensical results over time, a phenomenon termed “model collapse.”
This degradation could slow AI advancements, emphasizing the need for high-quality, diverse, and human-generated data sources.
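The dynamic behind model collapse can be sketched with a toy experiment: fit a simple statistical model to data, then train each successive “generation” only on synthetic samples drawn from the previous generation’s fit. This is a minimal illustration under strong simplifying assumptions (the “model” is just a Gaussian), not the methodology of the cited study.

```python
# Toy illustration of "model collapse": each generation is trained only on
# synthetic samples from the previous generation's fitted model. Rare tail
# values gradually stop being reproduced and the estimated parameters drift,
# so later generations see an increasingly distorted version of the original
# human-generated data. Purely illustrative, not the cited study's method.
import random
import statistics

random.seed(0)
data = [random.gauss(0.0, 1.0) for _ in range(500)]  # the original "human" data

for generation in range(1, 11):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation:2d}: mean={mu:+.3f}, stdev={sigma:.3f}")
    # The next generation never sees the original data, only synthetic output.
    data = [random.gauss(mu, sigma) for _ in range(500)]
```

Because each generation inherits only what the previous one happened to sample, information about the true distribution is progressively lost, which is the intuition behind Duncan’s warning about diminishing returns.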
Watch Stagwell’s CEO Mark Penn interview Elon Musk at CES! https://t.co/BO3Z7bbHOZ — Live (@Live) January 9, 2025
FunkSec: The AI-Enhanced Ransomware Group On The Rise
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
The FunkSec ransomware group has quickly emerged as one of the most notorious cybercriminal organizations.
In a Rush? Here are the Quick Facts!
- AI allows FunkSec to evolve tools rapidly, even with operators lacking technical expertise.
- FunkSec combines political rhetoric with criminal activity.
- Rust-coded ransomware from FunkSec resists reverse engineering, complicating countermeasures.
First surfacing in late 2024, FunkSec caused a stir by publishing data from over 85 victims within a single month, surpassing other ransomware groups, as detailed today in an analysis by Check Point.
But what makes FunkSec particularly concerning is its use of AI to develop advanced malware, making it easier for even inexperienced cybercriminals to create sophisticated tools. Indeed, recent research indicates that AI-generated malware variants can evade detection 88% of the time.
The report notes that the group operates in a space between hacktivism and cybercrime, leaving experts puzzled about their true intentions. While some of their activities seem motivated by political or social causes, the group also demands ransoms from their victims, which Check Point describes as a hallmark of traditional cybercrime.
FunkSec’s rapid rise has sparked widespread concern, particularly due to their aggressive tactics and the large volume of targets they’ve hit. FunkSec uses “double extortion” tactics, where they steal and encrypt victims’ data, threatening to release it publicly unless a ransom is paid.
In a twist, FunkSec has even offered their ransomware as a service to other cybercriminals, allowing anyone with minimal technical knowledge to use their tools for personal gain. This has led to a surge in attacks across the globe.
Similarly, Moonlock’s 2024 Threat Report includes forum screenshots showing hackers using AI to develop macOS-targeted malware step-by-step. Even inexperienced users are leveraging these tools to generate code, build malware, and extract sensitive data, underscoring AI’s troubling role in enabling cybercrime.
Check Point says that one of the most alarming aspects of FunkSec’s operations is their use of AI-assisted malware development. Unlike traditional ransomware, which is typically created by highly skilled hackers, FunkSec’s malware is powered by AI, allowing it to evolve rapidly.
This use of AI could explain why the group’s malware is so sophisticated, even though the operators appear to have limited technical expertise. The AI-driven tools not only help refine their ransomware but also assist in creating custom malware and attack strategies, making them a powerful threat to businesses and individuals alike.
FunkSec’s ransomware is written in Rust, a language whose compiled binaries are harder to reverse engineer than those of more common languages, adding to the difficulty of fighting back against their attacks.
While FunkSec claims to target entities aligned with specific political causes, many of the leaked datasets they publish have been recycled from previous hacktivist operations, casting doubt on the authenticity of their disclosures. This mix of political rhetoric and criminal activity complicates efforts to understand FunkSec’s true motivations.
Check Point suggests that the group’s main objective seems to be gaining visibility and recognition. Indeed, their data leak site and custom malware have earned them a growing following on cybercrime forums, where they discuss techniques and share their latest exploits.
FunkSec has gained visibility by associating itself with various hacktivist movements, but their increasing reliance on AI for cybercrime raises important questions about the future of ransomware and the evolving role of AI in cyberattacks.
As ransomware groups continue to use AI to enhance their capabilities, security experts are being forced to rethink how they assess and respond to these threats. The rapid pace of development and the blurred line between political activism and cybercrime make FunkSec a particularly complex and dangerous entity in the world of cybersecurity.