
Court order by Nick Youngson CC BY-SA 3.0 Pix4free

ByteDance Sues Intern For $1.1 Million In Damages From AI Breach

  • Written by Andrea Miliani, Former Tech News Expert

The Chinese giant ByteDance, TikTok’s parent company, is suing its former intern, Tian Keyu, for $1.1 million in damages over an AI breach. ByteDance alleges that Tian sabotaged its AI model training by making unauthorized changes to the code.

In a Rush? Here are the Quick Facts!

  • ByteDance is suing its former intern Tian Keyu for attacking the company’s AI model training infrastructure.
  • The tech giant is seeking $1.1 million in damages, an unusually large sum for a company to claim from an individual former employee.
  • The lawsuit was filed at the Haidian District People’s Court in Beijing.

According to Reuters, ByteDance is accusing Tian of deliberately attacking the company’s AI model training infrastructure and has filed a lawsuit with the Haidian District People’s Court in Beijing, China.

The information was revealed by the international news agency today, and ByteDance has declined to comment on the case. The intern, a postgraduate student at Peking University, has not commented or issued any public statement yet.

While it is common in China for companies to sue employees, lawsuits seeking such a large sum from an individual are rare.

According to The Guardian, the incident happened in August, and the company fired the intern for sabotage, claiming that he ‘maliciously interfered’ with the AI project.

The news went viral and was widely discussed on social media. At the time, ByteDance issued a public statement calling the rumors “exaggerations,” including claims that 8,000 graphics processing units (GPUs) were compromised and that the losses ran into the tens of millions of dollars.

Users on Reddit debated multiple theories about what could have happened and whether Tian was guilty. “Apparently, he implanted a backdoor into checkpoint models (unsafe pickle) to gain access to systems and then used this to sabotage colleagues’ work,” wrote one user. “Quite some heavy lifting for an intern,” added another.
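Whether or not that Reddit account is accurate, the “unsafe pickle” theory is at least technically plausible: Python’s pickle format, widely used for saving machine learning checkpoints, can execute arbitrary code when a file is loaded. The sketch below is a generic illustration of that risk; the file name and payload are hypothetical and unrelated to ByteDance’s actual systems.

```python
import os
import pickle

# Generic illustration of why loading untrusted pickle checkpoints is risky.
# The file name and payload here are hypothetical.

class MaliciousPayload:
    def __reduce__(self):
        # pickle records this callable and its arguments when saving; on load,
        # it calls os.system(...) -- i.e., attacker-chosen code runs as a side
        # effect of simply opening the "checkpoint".
        return (os.system, ("echo arbitrary code executed at load time",))

# Attacker hides the payload inside what looks like an ordinary checkpoint.
with open("checkpoint.pkl", "wb") as f:
    pickle.dump({"weights": [0.1, 0.2, 0.3], "extra": MaliciousPayload()}, f)

# Victim loads the checkpoint; the embedded command runs during unpickling.
with open("checkpoint.pkl", "rb") as f:
    state = pickle.load(f)
```

This is why frameworks such as PyTorch warn against loading checkpoints from untrusted sources with plain pickle, and why formats like safetensors, which store only tensor data, are increasingly recommended.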


Image by Petar Milošević, from Wikimedia Commons

AI Clones Fool Bank Security

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

AI voice cloning breached Santander and Halifax accounts, exposing flaws in voice ID security. Experts urge stronger fraud prevention amid rapid AI advancements.

In a Rush? Here are the Quick Facts!

  • Voice ID security systems were bypassed using basic audio playback devices.
  • The cloned voice used a phrase required for authentication: “my voice is my password.”
  • Banks claim voice ID is more secure than traditional authentication methods.

In a BBC investigation, reporter Shari Vahl successfully used an AI-generated version of her voice to breach her bank accounts at Santander and Halifax.

Vahl’s experiment involved cloning her voice using audio from a previous radio interview. The AI voice, when played through standard speakers, fooled both banks’ voice recognition systems, which typically use phrases like “my voice is my password” to verify identity.

The implications are profound. Although voice ID is marketed as a robust security measure, this test demonstrates clear vulnerabilities.

Santander told the BBC, “We have not seen any fraud as a result of the use of voice ID and are confident that it provides greater levels of security than traditional knowledge-based authentication methods.”

Halifax, which described voice ID as an optional feature, similarly asserted, “We are confident that it offers a higher level of security compared to traditional knowledge-based authentication methods,” as reported by the BBC.

Cybersecurity expert Saj Huq expressed concern about the ease with which the AI voice penetrated these systems. He noted that the rapid advancement of AI makes such breaches increasingly plausible.

However, certain conditions must be met for the fraud to succeed: the attacker needs access to the victim’s registered phone, and the device must be unlocked. While this makes such attacks challenging, they are far from impossible.

This experiment underscores the urgent need for stronger defenses as AI technologies evolve. Though no fraud linked to voice ID has been reported yet, the risks are clear.

The story raises a critical question for the future of banking: how secure is secure enough in an era of sophisticated AI-driven scams?