
AI Companions Fill Emotional Void For China’s Youth, Raising Ethical Concerns

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Young people in China are increasingly turning to artificial intelligence for emotional support, with the Chinese chatbot DeepSeek emerging as a popular alternative to traditional counseling.

In a Rush? Here are the Quick Facts!

  • Young Chinese users seek AI chatbots like DeepSeek for emotional support and counseling.
  • DeepSeek’s responses are deeply resonant, sometimes bringing users to tears.
  • DeepSeek offers personalized, empathetic conversations, filling an emotional void for young users.

Users describe the AI’s responses as deeply resonant, sometimes even bringing them to tears, as detailed in an article by the BBC.

Holly Wang, a 28-year-old from Guangzhou, has been using DeepSeek since its launch in January for what she calls “therapy sessions.” The AI chatbot has helped her process personal struggles, including the recent passing of her grandmother, as reported by the BBC.

“DeepSeek has been such an amazing counsellor. It has helped me look at things from different perspectives and does a better job than the paid counselling services I have tried,” she said, as reported by the BBC.

DeepSeek, a generative AI tool similar to OpenAI’s ChatGPT and Google’s Gemini, has gained national recognition for its superior performance compared to other Chinese AI apps. Beyond its advanced language abilities, it stands out by allowing users to see its reasoning process before delivering responses.

The BBC argues that for many young Chinese, AI is filling an emotional void. Economic challenges, high unemployment, and lingering effects of COVID-19 lockdowns have left many feeling uncertain about their futures. DeepSeek has become a source of comfort, offering personalized and empathetic responses.

When Holly first used the app, she asked it to write a tribute to her grandmother. The response moved her so deeply that she described the experience as an existential crisis.

DeepSeek replied: “Remember that all these words that make you shiver merely echo those that have long existed in your soul. I am but the occasional valley you’ve passed through, that allows you to hear the weight of your own voice.”

Reflecting on the exchange, she said to the BBC: “I don’t know why I teared up reading this. Perhaps because it’s been a long, long time since I received such comfort in real life.”

With Western AI models like ChatGPT blocked in China, DeepSeek has quickly become a preferred choice. Other Chinese AI models developed by Alibaba, Baidu, and ByteDance have struggled to match its capabilities, particularly in generating creative and literary content.

Beyond casual conversations, DeepSeek is increasingly seen as a counselor. Nan Jia, a professor at the University of Southern California, notes that AI chatbots “help people feel heard,” sometimes even more effectively than human counterparts, as reported by the BBC.

However, concerns remain. MIT researchers have warned that AI is increasingly woven into our personal lives, taking on roles as friends, romantic partners, and mentors, and cautioned that this technology could become highly addictive.

Despite privacy concerns, many users prioritize the chatbot’s emotional support over potential risks. The BBC reports that one user wrote, “Its thought process is beautiful… It is an absolute blessing to people like me. Frankly, I can’t care less about the privacy concerns.”

Beyond emotional support, artificial intelligence is changing the way people think about death and how we remember those who have passed, as reported in a recent research analysis. AI technology can now create digital versions of the deceased, allowing posthumous interactions.

However, the authors of the analysis also note that digital grieving may complicate emotional closure by keeping memories too accessible.

At the same time, concerns about AI’s impact on youth are growing, with lawsuits filed against AI companion platforms. Character.AI, a chatbot company, is facing legal action from two families who claim it exposed minors to self-harm, violence, and sexual content.

The lawsuit argues that AI-generated interactions could be harmful, raising questions about how these technologies shape young users’ emotional well-being.

As AI becomes more integrated into mental health care, experts stress that it should complement human professionals, not replace them. While AI therapy tools can analyze vast datasets to offer personalized insights, they must be designed to ensure patient safety, privacy, and ethical use, as highlighted by the World Economic Forum.

Moreover, cybersecurity experts warn that AI chatbots, particularly those used for sensitive conversations, are vulnerable to hacking and data breaches.

Personal information shared with AI systems could be exploited, raising concerns about privacy, identity theft, and manipulation. Experts caution that as AI becomes more ingrained in mental health support, security measures must evolve to protect users from potential risks.

White Hat Hackers Expose Security Flaws In Iridium Satellite Communications

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

German white hat hackers have recently demonstrated significant security vulnerabilities in Iridium satellite communications, potentially compromising the privacy of users, including U.S. Department of Defense (DoD) employees, as first reported by Spectrum.

In a Rush? Here are the Quick Facts!

  • German hackers intercepted Iridium communications, revealing vulnerabilities in satellite systems.
  • Hackers pinpointed DoD users’ locations with 4 km accuracy using basic equipment.
  • Iridium’s legacy satellite devices still use an outdated, unencrypted radio protocol.

The hackers revealed how they were able to intercept text messages and pinpoint user locations with remarkable accuracy, raising concerns about the system’s integrity.

Spectrum reports that during a presentation at the Chaos Communication Congress in Hamburg in late December, hackers Sec and Schneider showcased their findings.

They revealed that despite Iridium’s reliance on a secure gateway to route and encrypt traffic for the DoD, their eavesdropping equipment was able to pinpoint the location of DoD users with an accuracy of approximately 4 kilometers.

They utilized a home-assembled kit consisting of an Iridium antenna, a software-defined radio receiver, and a basic computer such as a Raspberry Pi.

“We see devices that register with the DoD service center and then we can find their positions from these registrations,” Sec explained, as reported by Spectrum. “You don’t have to see the communication from the actual phone to the network, you just see the network’s answer with the position, and you then can map where all the registered devices are.”

The Iridium satellite constellation, launched in the late 1990s, was the first to offer global satellite communication services. Although the company has upgraded its systems with more secure satellites, many of its older devices still operate on the legacy radio protocol, which lacks encryption, says Spectrum.

According to analyst Christian von der Ropp, this outdated system leaves users vulnerable. “The regular satellite phones that they sell still operate under the old legacy protocol,” von der Ropp said, as reported by Spectrum.

“If you buy a brand-new civilian Iridium phone, it still operates using the 30-year-old radio protocol, and it is subject to the same vulnerability. So, you can intercept everything. You can listen to the voice calls, you can read SMS, absolutely everything. Out of the box it’s a totally unsecure service.”

The hackers also demonstrated the ease of intercepting communications. They revealed a text message exchanged between two employees of the German Foreign Office, showing how low-cost, readily available equipment can intercept Iridium signals across vast areas.

“With US $400 worth of equipment and freely available software, you can start right away intercepting Iridium communications,” von der Ropp said, as reported by Spectrum.

Despite these vulnerabilities, Iridium remains a key player in satellite communications, having secured a $94 million contract with the U.S. Space Force last year. However, the DoD is reportedly seeking alternatives, such as Starlink, due to concerns over Iridium’s security risks, as noted by Spectrum.