WeTransfer Faces Criticism Over Terms Of Service Amid AI Training Concerns

Photo by Joey Huang on Unsplash

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

In a rush? Here are the quick facts:

  • WeTransfer was forced to change its terms of service updates after user backlash.
  • Customers complained about the platform’s policy updates.
  • The tech company clarified that it does not use customers’ data for AI training.

One widely shared post captured the backlash:

“How is this acceptable, @WeTransfer? You’re not a free service, I pay you to shift my big artwork files. I DON’T pay you to have the right to use them to train AI or print, sell and distribute my artwork and set yourself up as a commercial rival to me, using my own work. 😡” — Sarah McIntyre (@jabberworks), July 15, 2025

Many of the posts from concerned users went viral, prompting WeTransfer to clarify its position and revise the policy again.

“We don’t use machine learning or any form of AI to process content shared via WeTransfer, nor do we sell content or data to any third parties,” a WeTransfer spokeswoman told the BBC.

WeTransfer explained that the clause had been added while the company was exploring the possibility of using AI to improve content moderation and detect harmful content.

The company said it revised the terms again on Tuesday because the original language “may have caused confusion,” and that it has now “made the language easier to understand.”

New AI Model Stops Voice Cloning with “Machine Unlearning”

Image by Vecstoc, from Freepik

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

South Korean researchers have developed a way to make AI voice generators “forget” how to imitate specific people’s voices.

In a rush? Here are the quick facts:

  • The method reduces voice-mimicry accuracy by over 75%.
  • Allowed voices still work, with only 2.8% performance loss.
  • The system needs 5 minutes of audio to forget a speaker.

The “machine unlearning” system aims to curb the misuse of voice-cloning technology by scammers and deepfake creators.

Current zero-shot text-to-speech (ZS-TTS) models require only a few seconds of audio to create a realistic imitation of anyone’s voice. “Anyone’s voice can be reproduced or copied with just a few seconds of their voice,” said Jong Hwan Ko, a professor at Sungkyunkwan University, as reported by MIT Technology Review.

This opens the door to serious privacy and security concerns, such as impersonation and fraud.
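
To make the low barrier concrete, here is a minimal sketch of zero-shot voice cloning using the open-source Coqui TTS library and its publicly available XTTS v2 model; the file paths are placeholders, and this is one illustrative ZS-TTS system rather than the models examined in the paper.

```python
# Illustrative only: zero-shot voice cloning with the open-source Coqui TTS
# library and its XTTS v2 model. File paths are placeholder names; this is
# one publicly available ZS-TTS system, not the models studied in the paper.
from TTS.api import TTS

# Load a pretrained zero-shot TTS model (weights download on first run)
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")

# A short reference clip is enough to condition the generated voice
tts.tts_to_file(
    text="This sentence is spoken in the cloned voice.",
    speaker_wav="reference_clip.wav",  # a few seconds of the target speaker
    language="en",
    file_path="cloned_output.wav",
)
```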

Ko’s research team developed Teacher-Guided Unlearning (TGU), which they present as the first system that trains AI models to forget how to produce specific people’s voices. Rather than blocking requests with filters (so-called “guardrails”), the researchers explain in their paper, the technique modifies the model itself so that the forgotten voice becomes inaccessible. A minimal sketch of the idea follows below.
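
Here is a hedged sketch of that teacher-guided idea, assuming a PyTorch setup: a frozen copy of the original model acts as the teacher, and the student is trained to match the teacher’s output for a random speaker whenever a forgotten voice is requested, while matching the teacher exactly for allowed voices. All class and variable names are hypothetical stand-ins, not the authors’ code, and the tiny network stands in for a real ZS-TTS model.

```python
# Sketch of teacher-guided unlearning (TGU-style) with hypothetical names.
import torch
import torch.nn as nn

class TinyTTS(nn.Module):
    """Toy stand-in for a ZS-TTS model: (text, speaker embedding) -> audio features."""
    def __init__(self, text_dim=16, spk_dim=8, out_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(text_dim + spk_dim, 64), nn.ReLU(), nn.Linear(64, out_dim)
        )

    def forward(self, text, spk):
        return self.net(torch.cat([text, spk], dim=-1))

teacher = TinyTTS()                      # frozen copy of the original model
student = TinyTTS()                      # the model being "unlearned"
student.load_state_dict(teacher.state_dict())
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)
mse = nn.MSELoss()

def unlearning_step(text, spk, forget: bool) -> float:
    """One TGU-style update: forgotten speakers are steered toward a random
    voice (teacher output for a random speaker embedding); allowed speakers
    are kept close to the teacher's original behavior."""
    with torch.no_grad():
        if forget:
            random_spk = torch.randn_like(spk)   # random-voice target
            target = teacher(text, random_spk)
        else:
            target = teacher(text, spk)          # preserve allowed voices
    loss = mse(student(text, spk), target)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage: alternate forgotten / allowed batches
text, spk = torch.randn(4, 16), torch.randn(4, 8)
print("forget loss:", unlearning_step(text, spk, forget=True))
print("retain loss:", unlearning_step(text, spk, forget=False))
```

Training toward a random-voice target, rather than simply penalizing similarity to the forgotten speaker, is what makes the post-unlearning output look like an ordinary (but wrong) voice instead of degraded audio.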

When prompted to generate speech in a forgotten voice, the updated AI model returns a random voice instead. This randomness, the researchers argue, proves that the original voice has been successfully erased. In tests, the AI was 75% less accurate at mimicking the removed voice, yet performance for allowed voices dropped only slightly (by 2.8%).
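
As a hedged illustration of how such mimicry accuracy is commonly measured (not necessarily the paper’s exact protocol), one standard approach compares speaker-verification embeddings of the generated audio against the real speaker; a large similarity drop for forgotten voices and a small one for allowed voices would mirror the reported numbers. The tensors below are toy stand-ins for embeddings from a real speaker-verification model.

```python
# Toy sketch: speaker-embedding cosine similarity as an erasure signal.
import torch
import torch.nn.functional as F

def speaker_similarity(gen_emb: torch.Tensor, ref_emb: torch.Tensor) -> float:
    """Cosine similarity between speaker embeddings (higher = closer mimicry)."""
    return F.cosine_similarity(gen_emb, ref_emb, dim=-1).mean().item()

ref = torch.randn(1, 192)                  # embedding of the real speaker
before = ref + 0.1 * torch.randn(1, 192)   # pre-unlearning: close mimicry
after = torch.randn(1, 192)                # post-unlearning: random voice

print("before unlearning:", speaker_similarity(before, ref))
print("after unlearning: ", speaker_similarity(after, ref))
```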

The method requires only five minutes of audio from each speaker to complete the unlearning process. The early-stage work shows significant promise, according to experts. “This is one of the first works I’ve seen for speech,” said Vaidehi Patil, a PhD student at UNC-Chapel Hill, as reported by MIT Technology Review.