
Image by Stephen Phillips, from Unsplash
MatrixPDF Malware Targets Gmail Users with Malicious PDFs
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A new cyber threat named MatrixPDF allows attackers to transform ordinary PDF files into phishing and malware distribution tools targeting Gmail users.
In a rush? Here are the quick facts:
- MatrixPDF turns ordinary PDFs into phishing and malware delivery tools.
- It uses overlays, JavaScript, and fake prompts to bypass Gmail filters.
- Clicking “Open Secure Document” can steal credentials or download malware.
The tool bypasses email filters using three techniques: overlays, clickable prompts, and embedded JavaScript, as first detailed by security researchers at Varonis.
“Cybercriminals don’t need to look for new exploits when they can weaponize what people already trust,” the researchers say. PDF files are trusted by users, and attackers exploit that trust to steal credentials or deliver malware.
MatrixPDF modifies real PDF documents by adding deceptive “Secure Document” alerts, content blurring, customized icons, and JavaScript execution.
Attackers redirect users to phishing sites and malware download locations through payload URLs embedded in the malicious PDFs, which open when users click. Other options include simulating system dialogs or alert messages to guide the user.
The researchers explain that there are two main attack methods. The first uses email PDF previews in Gmail. The PDF appears normal because Gmail does not run JavaScript.
The preview displays a blurred page with an alert prompting users to click “Open Secure Document,” which in turn opens a malicious URL in the browser. The researchers say this evades Gmail’s antivirus sandbox because the download is treated as a user-initiated web request.
The second method uses PDF-embedded JavaScript within desktop and browser PDF readers. The script automatically retrieves malware when users open the file or respond to a system prompt.
Most users see a security warning when opening such files, but researchers say many click “Allow,” believing it is a necessary step to view the document.
AI-powered email security systems can identify MatrixPDF attacks in attachments by detecting unusual file structures, dangerous URLs, and hidden scripts.
These systems also simulate the attack in a sandbox environment to detect the “Open Secure Document” prompt and stop the message before it reaches the inbox. “AI-powered defenses can detect and block the entire attack process before it reaches your inbox,” Varonis argued.
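Defenses of this kind typically start by inspecting a PDF’s internal structure for the features that MatrixPDF-style lures depend on. The following is a minimal sketch, assuming the open-source pypdf library (the scan_pdf helper and the sample filename are illustrative, not part of Varonis’ tooling), of how a scanner might flag document-open actions, embedded JavaScript, and external link annotations in an attachment.

```python
# Minimal sketch: flag PDF features that phishing lures of this kind rely on,
# such as document-open actions, embedded JavaScript, and external URI links.
# Assumes the open-source pypdf library; scan_pdf and the filename are illustrative.
from pypdf import PdfReader


def scan_pdf(path: str) -> list[str]:
    findings = []
    reader = PdfReader(path)
    root = reader.trailer["/Root"].get_object()  # the document catalog

    # A document-level /OpenAction runs as soon as the file is opened.
    if "/OpenAction" in root:
        findings.append("document-level /OpenAction present")

    # A /Names -> /JavaScript tree holds document-wide JavaScript.
    names = root.get("/Names")
    if names is not None and "/JavaScript" in names.get_object():
        findings.append("embedded document JavaScript")

    # Link annotations with /URI or /JavaScript actions can redirect the reader.
    for page_num, page in enumerate(reader.pages, start=1):
        annots = page.get("/Annots")
        for annot in (annots.get_object() if annots is not None else []):
            action = annot.get_object().get("/A")
            if action is None:
                continue
            action = action.get_object()
            subtype = action.get("/S")
            if subtype == "/URI":
                findings.append(f"page {page_num}: external link to {action.get('/URI')}")
            elif subtype == "/JavaScript":
                findings.append(f"page {page_num}: annotation-triggered JavaScript")
    return findings


if __name__ == "__main__":
    for finding in scan_pdf("suspicious_attachment.pdf"):
        print(finding)
```

A production filter would go further, resolving link targets and opening the file in a sandbox as Varonis describes, but even these structural checks can surface the fake “Open Secure Document” pattern.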

Image by gibblesmash asdf, from Unsplash
Law Gap Leaves Police Unable To Fine Autonomous Cars
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Police in Northern California were left baffled after pulling over a self-driving Waymo taxi that made an illegal U-turn: with no driver behind the wheel, there was no one to fine.
In a rush? Here are the quick facts:
- Police pulled over a Waymo taxi for an illegal U-turn.
- No driver was present, so officers could not issue a ticket.
- California law currently allows tickets only to human drivers.
The San Bruno Police Department said officers were conducting a DUI operation early Saturday morning when the Waymo robotaxi turned in front of them. “That’s right … no driver, no hands, no clue,” read a social media post showing an officer peering into the empty vehicle, as reported by the AP.
Officers contacted Waymo to report the “glitch” and said, “Hopefully the reprogramming will keep it from making any more illegal moves,” as reported by the LA Times.
San Bruno Sgt. Scott Smithmatungol explained that current law only allows police to ticket a human driver for moving violations. “Citation books don’t have a box for ‘robot,’” he said as reported by the AP.
A new California law taking effect next year will allow authorities to report autonomous vehicle violations to the Department of Motor Vehicles, though details on penalties are still being worked out, according to the LA Times.
Waymo spokesperson Julia Ilina said the company’s autonomous system is closely monitored by regulators. “We are looking into this situation and are committed to improving road safety through our ongoing learnings and experience,” Ilina told the AP.
Waymo currently operates in Phoenix, Los Angeles, San Francisco, and nearby suburbs, including San Bruno. “It blew up a lot bigger than we thought,” Smithmatungol added about the viral post, as reported by the AP.
The incident demonstrates that California needs to update its laws because autonomous vehicles are starting to appear on public roads.
The LA Times notes that the new law faces criticism for being inadequate, while Waymo points to data showing its vehicles are involved in 79% fewer airbag-deployment crashes and 80% fewer injury-causing crashes than human-driven vehicles.
This is not the first incident. Last year, a Waymo robotaxi collided with a Serve delivery robot in West Hollywood. The taxi hit the bot after misjudging timing; no damage occurred, but the event raised questions about autonomous vehicle safety and liability.