
Photo by Dima Solomin on Unsplash

Meta Launches Smartglasses With AI Assistant And Neural Band

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

Meta unveiled its latest and most advanced smart glasses, the Meta Ray-Ban Display, on Wednesday at its annual Connect event in California. The “AI glasses” come with an electromyography (EMG) wristband, called the Meta Neural Band, whose sensor lets users control the device through hand gestures.

In a rush? Here are the quick facts:

  • Meta launched its latest and most advanced smartglasses, Meta Ray-Ban Display, for $799.
  • The new device comes with a wristband, Meta Neural Band, which allows users to control the AI glasses with gestures.
  • The company now offers three categories of AI glasses: camera AI glasses, display AI glasses, and augmented-reality (AR) glasses.

According to Meta’s official announcement, the new Meta Ray-Ban Display integrates microphones, cameras, speakers, and a full-color, high-resolution display backed by AI technology. The smart glasses will be sold together with the Meta Neural Band, starting at $799, on September 30 in the United States.

Meta Ray-Ban Display + Meta Neural Band = our most advanced pair of AI glasses. Ever. pic.twitter.com/PlrVcwbprN — Meta (@Meta) September 18, 2025

Meta emphasized that the display is designed to get out of the way when not needed and is optimized for short interactions controlled through the intuitive wristband.

“Meta Neural Band is so effortless, it makes interacting with your glasses feel like magic,” the announcement states. “It replaces the touchscreens, buttons, and dials of today’s technology with a sensor on your wrist, so you can silently scroll, click, and, in the near future, even write out messages using subtle finger movements.”

The Meta Neural Band’s battery is expected to last up to 18 hours, and the band is made of Vectran, a strong and flexible material.

The tech giant also introduced new AI-powered features for the device. The company explained that users will be able to view images and text on the glasses, with context-aware adjustments based on their surroundings. Interactions can be performed by swiping with the thumb or by giving voice commands.

The AI glasses will also support messaging, including WhatsApp messages and videos from social media, and take video calls that show the wearer’s point of view. A new pedestrian navigation system provides a visual map on the display, letting users find their way without checking their smartphones.

Meta also clarified that it now offers three categories of smart glasses: camera AI glasses, developed along with Ray-Ban and Oakley; AR glasses, introduced last year with the Orion prototype; and the new category, display AI glasses, introduced with the release of the new Meta Ray-Ban Display.


Image by Solen Feyissa, from Unsplash

DeepSeek AI Model Praised But Security Flaws Raise Concerns

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

DeepSeek’s AI model R1 impresses with low-cost reasoning skills, yet some researchers argue that it produces dangerous code when prompts involve regions and groups that are politically sensitive in China.

In a rush? Here are the quick facts:

  • DeepSeek’s R1 AI model was trained for just $294,000.
  • R1 excels at reasoning tasks like math and coding.
  • CrowdStrike found DeepSeek produced unsafe code for politically sensitive groups.


The U.S. stock market experienced a major disruption when the R1 model became available to the public in January. Scientific American (Sci Am) reports that the first peer-reviewed study of R1 was published in Nature this week.

The research reported that R1 was trained for just $294,000, while competitors spent tens of millions of dollars.

“This is a very welcome precedent,” said Lewis Tunstall of Hugging Face, who reviewed the paper. Ohio State University’s Huan Sun agreed, saying, “Going through a rigorous peer-review process certainly helps verify the validity and usefulness of the model,” reported Sci Am.

DeepSeek says R1 excels at “reasoning” tasks like math and coding by using reinforcement learning, a process that rewards the system for solving problems on its own.

But alongside the praise, the U.S. security firm CrowdStrike has flagged security issues, as reported by The Washington Post.

The testing revealed that DeepSeek produced less secure or even harmful code when users requested information about regions and groups that China opposes, such as Tibet, Taiwan, and the banned spiritual movement Falun Gong.

When DeepSeek was asked to generate code for the Islamic State, 42.1 percent of its answers were unsafe, and even when it did provide code, that code often contained security flaws that left systems vulnerable to hacking.

Experts warn that deliberately flawed code is subtler than back doors but equally risky, potentially enabling unauthorized access or manipulation of critical systems.

“This is a really interesting finding,” said Helen Toner of Georgetown University, as reported by The Post. “That is something people have worried about — largely without evidence.” CrowdStrike warned that inserting flaws may make targets easier to hack.

The Post says that DeepSeek did not respond to requests for comment. Despite growing recognition of its technical achievements, the company now faces tough questions about whether politics influences the safety of its code.