
Image by Brett Jordan, from Unsplash
Meta Launches Standalone AI App to Compete with ChatGPT
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Meta launched a standalone AI assistant app using Llama 4, offering personalized voice chat, smart glasses support, and optional social sharing features.
In a rush? Here are the quick facts:
- Meta launched a standalone AI app powered by its Llama 4 model.
- The app personalizes replies using your Facebook and Instagram data.
- The Meta AI app integrates with Ray-Ban smart glasses and the web.
Meta announced that it has released a new standalone app for its AI assistant, Meta AI, aiming to rival ChatGPT and Google’s Gemini. Previously available only inside WhatsApp, Instagram, Facebook, and Messenger, Meta AI can now be accessed directly through its own app.
The app lets users ask questions, generate images, and hold voice conversations with improved voice functionality. The assistant also remembers user preferences, such as food choices and hobbies, to deliver increasingly tailored responses over time.
According to Meta, the platform can deliver better personalized responses because it already holds extensive user data from years of Facebook and Instagram activity. The company says that as Meta AI learns about a user, its responses become more useful. These personalization features are currently available only to users in the U.S. and Canada.
In the app's Discover feed, users can view and share creative AI interactions, such as asking the assistant to describe them in emojis. All data remains private unless users explicitly choose to share it.
Conversations begun on one device can continue seamlessly on another, including Meta's AI-powered Ray-Ban glasses; the same features are also accessible through the web-based version of the service.
Experts advise users to remain vigilant when using these tools. As TechCrunch notes, the assistant expands Meta's data collection, which the company leverages to support its advertising business model.
Meta has announced it will use public posts and chatbot interactions from EU users to train its AI, despite earlier privacy concerns that delayed the rollout. In response to pressure from regulators and advocacy groups, Meta now provides an opt-out form for users who object to data usage.
While Meta claims its AI investment will help better reflect Europe's languages and cultures, the project continues to raise ethical concerns. Meta also previously faced criticism for launching AI-generated profiles on Instagram and Facebook, which many users found deceptive and manipulative.
Meta plans to test a paid version of the AI later this year. For now, the free app is available in select countries, including the U.S., Canada, Australia, and New Zealand.

man programming in the dark
AI Code Packages Open Doors For Hackers, Study Finds
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
AI-generated code often includes fake software libraries, creating new opportunities for hackers to exploit supply chains and compromise users across development platforms.
In a rush? Here are the quick facts:
- AI code generators hallucinate non-existent software dependencies.
- 440,000 hallucinated packages found in 576,000 AI-generated code samples.
- Open-source models hallucinate 4x more than commercial ones.
New research indicates that AI-generated code creates substantial security vulnerabilities that threaten the software supply chain. The study, first reported by Ars Technica, found that large language models (LLMs) like those behind ChatGPT generate references to fictional code dependencies that hackers can exploit for malicious purposes.
Ars reports that the researchers evaluated 16 widely used AI models by generating 576,000 code samples. The analysis revealed roughly 440,000 hallucinated package references, meaning they pointed to code libraries that do not exist.
These fabricated dependencies create a significant security risk. Ars reports that attackers can identify package names that AI models repeatedly suggest and upload malicious packages under those names. A developer who unknowingly installs the malicious code can hand the attacker control of their system.
“Once the attacker publishes a package under the hallucinated name, containing some malicious code, they rely on the model suggesting that name to unsuspecting users,” explained Joseph Spracklen, a Ph.D. student at the University of Texas at San Antonio and lead researcher, as reported by Ars.
“If a user trusts the LLM’s output and installs the package without carefully verifying it, the attacker’s payload, hidden in the malicious package, would be executed on the user’s system,” Spracklen added.
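A simple first precaution is to confirm that a suggested dependency actually exists on the official registry before installing it. The following is a minimal illustrative sketch in Python that queries PyPI's public JSON API, which returns HTTP 404 for unregistered names; the package names used here are hypothetical, and a registered name is not automatically a safe one, since an attacker may already have claimed it.

```python
import json
import urllib.error
import urllib.request

def exists_on_pypi(name: str) -> bool:
    """Return True if `name` is registered on PyPI.

    PyPI's public JSON API answers with HTTP 404 for
    package names that do not exist.
    """
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            json.load(resp)  # parse to confirm a well-formed response
        return True
    except urllib.error.HTTPError as err:
        if err.code == 404:
            return False
        raise  # other HTTP errors do not prove absence

# Hypothetical names, for illustration only.
for pkg in ["requests", "totally-made-up-helper-lib"]:
    print(pkg, "->", "registered" if exists_on_pypi(pkg) else "NOT on PyPI")
```

A check like this catches names that resolve to nothing, which is the hallucination case the study measured; it does not replace reviewing a package's maintainers, history, and source before trusting it.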
A related technique, known as a dependency confusion attack, tricks software into selecting a dangerous package version instead of the intended one, as reported by Ars. In previous proof-of-concept testing, dependency confusion attacks reached major technology companies, including Apple, Microsoft, and Tesla.
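A common defense against this class of attack is to pin dependencies to exact versions and cryptographic hashes, so the installer rejects anything unexpected. The sketch below, using a hypothetical local file path, shows how to compute the sha256 digest that pip's `--require-hashes` mode verifies at install time.

```python
import hashlib
from pathlib import Path

def pip_hash_line(artifact_path: str, name: str, version: str) -> str:
    """Build a hash-pinned requirements.txt line for a downloaded artifact.

    With `pip install --require-hashes -r requirements.txt`, pip refuses
    any file whose sha256 digest differs from the pinned value.
    """
    digest = hashlib.sha256(Path(artifact_path).read_bytes()).hexdigest()
    return f"{name}=={version} --hash=sha256:{digest}"

# Hypothetical artifact, e.g. fetched first with `pip download requests==2.32.3`.
print(pip_hash_line("requests-2.32.3-py3-none-any.whl", "requests", "2.32.3"))
```

Hash pinning cannot stop a developer from pinning a malicious package in the first place, but it does prevent an installer from silently swapping in a different artifact later.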
The researchers found that open-source models, like CodeLlama, hallucinated packages more often than commercial models: open models produced false package references at a rate of about 22%, while commercial models stayed at 5% or less. JavaScript code showed more hallucinations than Python, which the researchers attribute to JavaScript's larger and more complex package ecosystem.
According to the study, these are not one-off mistakes: many fake package names reappeared across repeated tests, which makes them more dangerous because attackers can target them reliably. Ars explains that attackers could exploit these recurring names by uploading malware under them and waiting for developers to install it unknowingly.