
Image by Brian J. Tromp, from Unsplash
Fake Ledger Live Apps Are Stealing Crypto
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
Cybercriminals are using fake Ledger Live apps and phishing alerts to steal seed phrases, deploying malware that silently drains crypto wallets across platforms.
In a rush? Here are the quick facts:
- Fake Ledger Live apps steal seed phrases to drain crypto wallets.
- At least four malware campaigns have mimicked Ledger Live since August 2024.
- Hackers use phishing pop-ups to trick users into entering 24-word seed phrases.
Cybercriminals are using fake versions of Ledger Live — the app used to manage crypto on Ledger wallets — to steal seed phrases and drain users’ funds. Moonlock Lab revealed that since August 2024, at least four active malware campaigns have targeted Ledger Live with phishing attacks.
Initially, fake apps could only steal notes and wallet data. Today, however, they trick users into giving away their 24-word seed phrase. One tactic, seen in Atomic macOS Stealer (AMOS), involves a fake security alert that asks users to “verify” their seed phrase. Once entered, the phrase is sent directly to the hackers.
The shift began with the “Odyssey” malware by a hacker named Rodrigo. According to Moonlock, since March 2025, Odyssey has bypassed Ledger Live’s defenses with a phishing page that urges users to enter their seed to fix a “critical error.”
Rodrigo’s method set off a chain reaction. Another hacker, @mentalpositive, claimed their malware now includes an “anti-Ledger” module, but two samples of their code showed no major changes beyond a new server address and a rename from “JENYA” to “SHELLS.”
Meanwhile, a new campaign discovered by Jamf Threat Labs involved an undetectable Mac installer that loads a fake Ledger Live interface. The stealer silently grabs passwords, files, and wallet data using a mix of Python and AppleScript.
AMOS has also adopted Rodrigo’s phishing scheme. Victims are tricked into launching a terminal file that bypasses Apple’s security checks, allowing the malware to run. If the malware detects a real system rather than a virtual machine, it sends stolen files and credentials, including data from Binance and TonKeeper, to a remote server.
With more hackers copying this approach, crypto users are urged to avoid entering seed phrases into apps or pop-ups.

Image by Michael Förtsch, from Unsplash
New Orleans Police Secretly Used Facial Recognition to Monitor Streets For Two Years
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
For two years, New Orleans police secretly used live facial recognition cameras to track suspects in real time, in what appears to be the first program of its kind in a major U.S. city.
In a rush? Here are the quick facts:
- Program violated 2022 city ordinance limiting facial recognition use.
- At least 34 arrests resulted, including for nonviolent crimes.
- Police failed to report use of facial recognition to city council.
An investigation by The Washington Post revealed that the surveillance system, run with the help of a private nonprofit called Project NOLA, performed public street scans and sent mobile alerts to officers about potential matches.
The system operated without public knowledge and violated a 2022 city ordinance that restricts facial recognition to violent-crime investigations and does not authorize general surveillance.
“This is the facial recognition technology nightmare scenario that we have been worried about,” said Nathan Freed Wessler from the ACLU, as reported by The Post. “This is the government giving itself the power to track anyone — for that matter, everyone — as we go about our lives walking around in public,” he added.
The Post reports that since early 2023, the program led to the arrest of at least 34 individuals, including people charged with nonviolent offenses. Officers often didn’t mention the use of facial recognition in their reports, and none of the cases appeared in the department’s required disclosures to the city council.
Police Chief Anne Kirkpatrick halted the program in April after a captain raised legal concerns. “We’re going to do what the ordinance says […] and if we find that we’re outside of those things, we’re going to stop it, correct it and get within the boundaries of the ordinance,” she said, as reported by The Post.
The city is now reviewing how the technology was used and discussing updates to the ordinance. Kirkpatrick supports the legal implementation of facial recognition technology when it operates transparently.
“Can you have the technology without violating and surveilling?” she asked, reports The Post. “Yes, you can. And that’s what we’re advocating for.”
There are no federal rules regulating facial recognition use by local police. But critics warn the tech can lead to wrongful arrests and civil rights violations, especially when used in secret.
Failure to disclose facial recognition use during arrests raises broader concerns about fairness and transparency across the United States.
An earlier investigation revealed more than 1,000 criminal cases across 15 states where police did not reveal that facial recognition technology was used.
Police departments typically avoided disclosing the software’s use by attributing evidence collection to other investigative methods or eyewitness testimony. This lack of transparency prevents defendants from contesting potentially flawed evidence, a serious concern given that facial recognition systems have repeatedly been shown to misidentify people of color, women, and older adults at higher rates.
A similar case in Detroit drew national attention after Robert Williams was wrongfully arrested in 2020 due to faulty facial recognition. His lawsuit led to new police rules requiring independent evidence beyond algorithmic matches. The reforms aim to prevent wrongful arrests and address racial bias in AI systems.