Nvidia And Abu Dhabi Launch New AI Research And Robotics Lab

Photo by SnapSaga on Unsplash


  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

Nvidia and Abu Dhabi’s Technology Innovation Institute (TII) announced on Monday a new partnership to build an AI research and robotics lab in the United Arab Emirates (UAE). This will be Nvidia’s first AI technology center in the Middle East.

In a rush? Here are the quick facts:

  • Nvidia and Abu Dhabi’s Technology Innovation Institute (TII) will build an AI research and robotics lab.
  • TII will use Nvidia’s GPU chips to accelerate AI and robotics research.
  • This will be Nvidia’s first AI technology center in the Middle East.

According to Reuters, the new lab will focus on developing advanced AI models and next-generation robotics platforms. TII, which is part of the Advanced Technology Research Council—a government department—will use Nvidia’s GPU chips to accelerate and support its efforts to become a global leader in AI technology.

“Technology Innovation Institute (TII) and NVIDIA have launched the Middle East’s first joint NVAITC AI and Robotics Lab. This strategic initiative creates a powerful ecosystem for advancing Generative AI models, embodied AI and robotics.” — Technology Innovation Institute (@TIIuae) September 22, 2025

The institute is currently building humanoid and quadruped robots, as well as components such as robotic arms. The new partnership is expected to boost these efforts.

“It will be a chip that we will newly use…It’s called the Thor chip, and it is a chip that enables advanced robotic systems development,” said Najwa Aaraj, the CEO of TII, in an interview with Reuters.

Discussions around the deal have been ongoing. In May, President Donald Trump signed a multi-billion-dollar deal in Abu Dhabi to build one of the largest data centers in the world—featuring U.S. technology and Nvidia chips—but it was not finalized due to the UAE’s close relationship with China.

Negotiations with Nvidia to build the joint lab have been underway for the past year. Both institutions have now reached an agreement, which includes contributions from both teams and new staff hired exclusively for the project.

“The initiative aligns with Abu Dhabi’s long-term strategy to advance technological sovereignty and shape the future of intelligent autonomous systems,” wrote the TII in a recent post on the social media platform X.

Last week, Nvidia also announced a new partnership with the U.K. government and multiple tech companies—including OpenAI, Google, and Microsoft—to develop AI infrastructure in the region.

New Malware Uses GPT-4 To Generate Attacks On The Fly

Image by Xavier Cee, from Unsplash


  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Security researchers have found early evidence of malware that uses large language models (LLMs) to generate malicious actions on the fly.

In a rush? Here are the quick facts:

  • Researchers found malware using LLMs to generate code at runtime.
  • Malware dubbed MalTerminal used GPT-4 to build ransomware and shells.
  • Traditional antivirus tools struggle to detect runtime-generated malicious code.

The findings were presented at LABScon 2025 in a talk titled “LLM-Enabled Malware In the Wild.”

According to SentinelLABS, “LLM-enabled malware poses new challenges for detection and threat hunting as malicious logic can be generated at runtime rather than embedded in code.”

Because the harmful code does not exist until execution time, these threats are very difficult for standard antivirus systems, which rely on scanning code that is already present in a file, to detect.

The team identified what they believe may be the earliest case of this kind of malware, which they dubbed ‘MalTerminal’. The Python-based system uses OpenAI’s GPT-4 API to generate ransomware attacks and reverse shell attacks.

The researchers documented additional offensive tools, including vulnerability injectors and phishing aids, to show how attackers experiment with LLMs.

“On the face of it, malware that offloads its malicious functionality to an LLM that can generate code-on-the-fly looks like a detection engineer’s nightmare,” the researchers wrote.

Other cases include ‘PromptLock’, an AI-based proof-of-concept ransomware, and PROMPTSTEAL, a malware connected to the Russian group APT28. The researchers explain that PROMPTSTEAL embedded 284 HuggingFace API keys and used LLMs to produce system commands for stealing files.

Researchers found that despite their sophistication, LLM-enabled malware must include “embedded API keys and prompts,” leaving traces that defenders can track. They wrote, “This makes LLM enabled malware something of a curiosity: a tool that is uniquely capable, adaptable, and yet also brittle.”
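This hunting approach can be sketched in a few lines: because LLM-enabled malware must carry its API keys and prompts in the clear, a defender can scan file contents for key-shaped strings and instruction-like text. The patterns below are a minimal illustrative sketch based on publicly documented key prefixes, not SentinelLABS’s actual detection rules.

```python
import re

# Illustrative patterns for secrets LLM-enabled malware must embed.
# Key prefixes ("sk-", "hf_") are publicly documented formats; real
# threat-hunting rules would be far more precise than this sketch.
KEY_PATTERNS = {
    "openai": re.compile(r"sk-[A-Za-z0-9]{20,}"),
    "huggingface": re.compile(r"hf_[A-Za-z0-9]{30,}"),
}

# Prompt-like fragments: literals that read like instructions to a model.
PROMPT_HINTS = ("You are", "Respond only with", "Generate code")

def scan_for_llm_artifacts(data: bytes) -> dict:
    """Report embedded API keys and prompt-like strings found in a blob."""
    text = data.decode("utf-8", errors="ignore")
    hits = {"keys": [], "prompts": []}
    for provider, pattern in KEY_PATTERNS.items():
        hits["keys"] += [(provider, match) for match in pattern.findall(text)]
    for hint in PROMPT_HINTS:
        if hint in text:
            hits["prompts"].append(hint)
    return hits

# Hypothetical sample mimicking a binary with an embedded key and prompt.
sample = (b'client.chat(api_key="sk-' + b"A" * 24 +
          b'", prompt="You are a ransomware builder")')
print(scan_for_llm_artifacts(sample))
```

This is exactly the “brittleness” the researchers describe: the same strings that let the malware talk to an LLM give defenders a static artifact to search for.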

For now, the use of LLM-enabled malware appears rare and mostly experimental. But experts warn that as adversaries refine their methods, these tools could become a serious cybersecurity threat.