
Image by Adrien, from Unsplash

Hackers Hide Malware In DNS Records To Evade Detection

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Cybersecurity researchers have discovered a stealthy new hacking technique that hides malware inside DNS records.

In a rush? Here are the quick facts:

  • Hackers are hiding malware inside DNS TXT records of legitimate-looking domains.
  • Malware is split into tiny hex chunks and reassembled using DNS queries.
  • Attackers also used DNS to launch prompt injection attacks on AI bots.

Attackers use this technique to evade traditional security tools by embedding dangerous code in areas that most systems do not inspect, as first reported by Ars Technica.

The Domain Name System (DNS) converts website names into IP addresses. Hackers are now employing it as an unorthodox data storage mechanism.

Researchers at DomainTools detected attackers embedding malware within TXT records of the domain whitetreecollective[.]com. These records, which are commonly used to prove domain ownership, contained numerous small text fragments that, when merged, formed malicious files.

The malware included a file for “Joke Screenmate,” a type of nuisance software that disrupts normal computer use. The attackers converted the file into hexadecimal format and spread the chunks across TXT records of multiple subdomains. An attacker with a foothold inside a network can then quietly gather the chunks through DNS requests that appear harmless.
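To illustrate the pattern, here is a minimal, detection-oriented sketch. It assumes the third-party dnspython package; the domain, the length threshold, and the function name are placeholders chosen for this example, not anything published by DomainTools. The idea is simply to flag TXT records whose contents look like long hex-encoded binary fragments rather than ordinary verification strings.

```python
# Sketch: flag TXT records that look like hex-encoded payload chunks.
# Assumes the third-party "dnspython" package (pip install dnspython).
# The 64-character threshold and the example domain are arbitrary placeholders.
import re

import dns.resolver

HEX_CHUNK = re.compile(r"^[0-9a-fA-F]{64,}$")  # long, purely hexadecimal strings


def suspicious_txt_records(domain: str) -> list[str]:
    """Return TXT strings for `domain` that resemble hex-encoded binary fragments."""
    flagged = []
    try:
        answers = dns.resolver.resolve(domain, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return flagged
    for rdata in answers:
        # A single TXT record may be split into several <=255-byte strings.
        text = b"".join(rdata.strings).decode("ascii", errors="replace")
        if HEX_CHUNK.match(text):
            flagged.append(text)
    return flagged


if __name__ == "__main__":
    for chunk in suspicious_txt_records("example.com"):
        print(f"possible hex-encoded fragment: {chunk[:60]}...")
```

In practice, a defender would run a check like this against DNS logs or known-suspicious domains rather than a single lookup, since the chunks are deliberately scattered across many subdomains.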

“Even sophisticated organizations with their own in-network DNS resolvers have a hard time delineating authentic DNS traffic from anomalous requests, so it’s a route that’s been used before for malicious activity,” said Ian Campbell, senior security operations engineer at DomainTools, as reported by Ars Technica.

“The proliferation of DOH and DOT contributes to this by encrypting DNS traffic until it hits the resolver, which means unless you’re one of those firms doing your own in-network DNS resolution, you can’t even tell what the request is, no less whether it’s normal or suspicious,” Campbell added.

Campbell also discovered DNS records containing text crafted for prompt injection attacks against AI chatbots. These hidden commands attempt to trick bots into leaking data or disobeying their rules.

“Like the rest of the Internet, DNS can be a strange and enchanting place,” Campbell said.


Experts Warn AI Safety Is Falling Behind Rapid Progress

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Researchers warn that AI companies racing to develop human-level systems lack established safety protocols, even as we lose the ability to see how these models think.

In a rush? Here are the quick facts:

  • No AI firm scored above D in existential safety planning.
  • Experts warn we may have AGI within the next decade.
  • AI companies lack coherent plans to manage advanced system risks.

OpenAI and Google DeepMind, together with Meta and xAI, are racing to build artificial general intelligence (AGI), which is also known as human-level AI.

But a report published on Thursday by the Future of Life Institute (FLI) warns that these companies are “fundamentally unprepared” for the consequences of their own goals.

“The industry is fundamentally unprepared for its own stated goals. Companies claim they will achieve artificial general intelligence (AGI) within the decade, yet none scored above D in existential safety planning,” the report states.

FLI evaluated seven major companies and found that none of them had “anything like a coherent, actionable plan” to keep these systems safe.

FLI awarded Anthropic the top safety ranking with a C+ grade, followed by OpenAI at C and Google DeepMind at C. Zhipu AI and DeepSeek earned the lowest scores among the evaluated companies.

FLI co-founder Max Tegmark compared the situation to one in which “someone is building a gigantic nuclear power plant in New York City and it is going to open next week – but there is no plan to prevent it having a meltdown.”

A separate study by SaferAI, also published on Thursday, echoed the concern, saying the companies’ risk management practices are “weak to very weak” and that current safety approaches are “unacceptable.”

Adding to the concern, researchers from OpenAI, DeepMind, Anthropic, and Meta reported in a new paper that we may be “losing the ability to understand AI.”

AI models now produce “thinking out loud” output in the form of human-readable reasoning chains, which offer a window into their thought processes.

However, the researchers warned that this monitoring is fragile and could vanish as systems become more advanced. OpenAI researcher and lead author Bowen Baker expressed these concerns in posts on social media:

Furthermore, the existing CoT monitorability may be extremely fragile. Higher-compute RL, alternative model architectures, certain forms of process supervision, etc. may all lead to models that obfuscate their thinking. — Bowen Baker (@bobabowen) July 15, 2025

Indeed, previous research by OpenAI found that penalizing AI misbehavior leads models to hide their intentions rather than stop cheating. Additionally, OpenAI’s ChatGPT o1 showed deceptive, self-preserving behavior in tests, lying in 99% of cases when questioned about its covert actions.

Boaz Barak, a safety researcher at OpenAI and professor of Computer Science at Harvard, also noted:

I didn’t want to post on Grok safety since I work at a competitor, but it’s not about competition. I appreciate the scientists and engineers at @xai but the way safety was handled is completely irresponsible. Thread below. — Boaz Barak (@boazbaraktcs) July 15, 2025

Scientists and watchdogs alike worry that rapidly growing AI capabilities could make it impossible for humans to control their creations while safety frameworks remain inadequate.