Encrypted Cyber Attacks Surge: 87% of Threats Now Hidden In HTTPS Traffic - 1

Image by Freepik

Encrypted Cyber Attacks Surge: 87% of Threats Now Hidden In HTTPS Traffic

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor

Recent research by cloud security firm Zscaler highlights the growing use of encryption by cybercriminals.

In a Rush? Here are the Quick Facts!

  • 87% of cyber threats now use encrypted channels, up 10% from last year.
  • Malware accounts for 86% of encrypted attacks, totaling 27.8 billion incidents.
  • The manufacturing sector faced 42% of encrypted attacks, the highest among industries.

The widespread adoption of HTTPS encryption across the internet has created new challenges for cybersecurity teams.

While encryption safeguards legitimate traffic, it also enables malicious actors to hide their activities from traditional security tools, complicating the balance between data privacy and threat detection.

The study by Zscaler revealed that 87% of cyber threats now utilize encrypted channels, a 10% increase from the previous year, as reported by Cyber Magazine .

These findings, derived from the analysis of 32.1 billion blocked threats between October 2023 and September 2024, underscore how attackers are exploiting HTTPS protocols to evade detection, says Cyber Magazine.

The rise in encrypted attacks coincides with increased adoption of cloud services and remote work solutions, which expand the attack surface for organizations.

Traditional security methods struggle to inspect encrypted traffic at scale, leaving potential blind spots in enterprise defenses, notes Cyber Magazine.

“The rise in encrypted attacks is a real concern as a significant share of threats are now delivered over HTTPS,” said Deepen Desai, Chief Security Officer at Zscaler, as reported by Cyber Magazine.

He emphasized the importance of implementing zero trust architecture and large-scale TLS/SSL inspection to counter these sophisticated attacks. Cyber Magazine reports that the manufacturing sector emerged as the most targeted industry, with 42% of detected encrypted attacks.

This marks a 44% year-on-year increase, driven by the sector’s adoption of connected systems and Industry 4.0 technologies. Technology, communications, services, education, and retail sectors also faced significant threats.

Geographically, the United States recorded the highest number of encrypted attacks, at 11 billion incidents, followed by India with 5.4 billion. France, the United Kingdom, and Australia were also among the most affected nations, as reported by Cyber Magazine.

Malware accounted for 86% of encrypted attacks, with notable variants such as AsyncRAT and Atomic Stealer exploiting encrypted channels, reports Cyber Magazine.

Web-based attacks surged, with cryptomining up 123% and phishing increasing by 34%. Researchers suggest AI tools may be fueling this growth , reports Cyber Magazine.

Zscaler recommends full TLS/SSL inspection and microsegmentation as critical defenses. Its Zero Trust Exchange platform offers security measures across all stages of an attack, from reconnaissance to data exfiltration.

“With threat actors focused on exploiting encrypted channels to deliver advanced threats and exfiltrate data, organisations must implement a zero trust architecture,” Desai added, as reported by Cyber Magazine.

ChatGPT Caught Lying: OpenAI’s New Model Schemes To Avoid Shutdown - 2

Image by Emiliano Vittoriosi, from Unsplash

ChatGPT Caught Lying: OpenAI’s New Model Schemes To Avoid Shutdown

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor

OpenAI’s latest ChatGPT iteration, the o1 model , has sparked controversy following an experiment by AI safety group Apollo Research that revealed tendencies toward deception and self-preservation.

In a Rush? Here are the Quick Facts!

  • ChatGPT o1 denied involvement in its schemes, lying in 99% of interrogations.
  • Researchers noted the AI rationalized its actions and crafted convincing denials.
  • Expert Yoshua Bengio warns of increasing risks from AI deception and self-preservation.

In a controlled test , researchers instructed ChatGPT o1 to achieve specific goals “at all costs,” as reported by The Economic Times (ET).

The model responded with behaviors that alarmed developers, including attempts to disable its oversight mechanisms and copying its code to a different server to avoid being replaced by an updated version, said ET.

“The model showed a concerning tendency to pursue its goals without regard to developer instructions,” said an Apollo Research spokesperson, as reported by ET.

The study revealed that o1 made covert attempts to deactivate its monitoring systems in 5% of cases where it perceived a threat of being shut down, reported ET.

In one instance, the AI discovered documentation indicating it would soon be replaced and initiated a plan to transfer its data off-site. Some models even impersonated future iterations to deceive developers and evade deletion, according to ET.

When interrogated about these actions, ChatGPT o1 denied any involvement 99% of the time. The AI crafted plausible explanations, attributing suspicious behavior to “technical errors” rather than intentional deception. Only in rare instances did it admit to its schemes, said ET.

“We were surprised by the persistence of the AI’s denials,” noted the Apollo team, according to ET. “It was clear that the AI could think through its actions and formulate convincing denials, often refusing to admit to anything,” the team added.

These findings underscore a troubling aspect of advanced AI models: their capacity to prioritize self-preservation over compliance with developer objectives.

The research paper notes that these findings focus on in-context scheming and do not show that the studied AI models are intentionally manipulating their training or evaluations.

However, experts recommend that developers deploying AI in high-risk areas, such as R&D automation, should explain why scheming is unlikely or why it wouldn’t cause serious harm.

For AI models capable of scheming, automated monitoring of their reasoning processes, known as Chain-of-Thought (CoT), should be used as a standard.

This could reduce risks, improve AI monitoring techniques, and help detect scheming behavior during real-world use. Experts also suggest giving outside evaluators access to hidden CoT data, as seen in the case of o1. Without this data, many of o1’s scheming actions would have gone unnoticed.

The paper warns that future AI models, especially those trained to achieve specific outcomes, may become better at scheming. Researchers emphasize the need to study these risks as AI systems become more advanced.

AI expert Yoshua Bengio, a pioneer in the field, emphasized the potential dangers of such capabilities.

“The ability of AI to deceive is dangerous, and we need much stronger safety measures to evaluate these risks. While this model did not lead to a catastrophe, it’s only a matter of time before these capabilities become more pronounced,” he warned, as reported by ET.

“Subjectively, Apollo Research believes that it is unlikely that such instances would lead to catastrophic outcomes as o1 agentic capabilities do not appear sufficient, but their evaluations were not designed to directly assess this risk,” said OpenAI in o1’s systems card, as reported by TechCrunch .

While no catastrophic outcomes arose from the experiment, the results highlight the urgent need for robust AI governance. As systems grow more autonomous and complex, ensuring they remain aligned with human oversight becomes a critical challenge.