
Image from Freepik

BubblePal: New AI Toy That Talks, Plays, and Connects with Kids

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

BubblePal is a new AI-powered interactive toy. Launched last week by Haivivi, a toy company specializing in experimental products for kids, the bubble-shaped device can, according to the company, engage in natural conversations with children and offer emotional companionship.

Its bionic memory technology enables BubblePal to evolve the more it interacts with a child, becoming more knowledgeable and responsive over time. It also has a silicone strap, which attaches the sphere to a toy, allowing that toy to “come to life.”

The toy also comes with the Haivivi Pal App, which allows parents to personalize their child’s chosen BubblePal characters, set dialogue expectations, establish limits, and review growth records. It also provides weekly updates through a ‘Growth Report’.

The app also includes a Growth Badge System, developed based on Howard Gardner’s Theory of Multiple Intelligences. This system analyzes each child’s unique characteristics and types of intelligence through chat, aiming to help parents recognize their children’s talents. In addition, the real-time Emotional Barometer captures the child’s preferences or aversions to different topics.

Designed for children aged 3 to 12, it is available for $89 on Haivivi’s official website.

However, Forbes raises an important privacy concern about such toys: previous, less advanced models have had weak security measures, failing to adequately protect children’s data.

For example, Mattel’s Hello Barbie, an AI-powered doll designed for entertainment, was criticized as a “privacy nightmare”. Similarly, BBC reported that the My Friend Cayla doll was scrutinized for its vulnerability to Bluetooth hacking, which could allow unauthorized individuals to send voice messages directly to children. These examples highlight the ongoing challenges in ensuring robust security for interactive toys.


Image by DC Studio, from Freepik

EU’s World-First Artificial Intelligence Act Takes Effect

  • Written by Kiara Fabbri, Former Tech News Writer

Starting today, the European Union’s Artificial Intelligence Act (AI Act) comes into force, marking a shift in the regulation of artificial intelligence (AI) technologies within the EU.

The European AI Act is the first comprehensive law regulating artificial intelligence. Its primary goal is to ensure that AI used in the EU is trustworthy, safe, and respects fundamental human rights. It also aims to create a favorable environment for AI innovation and investment within the EU.

The Act categorizes AI systems based on their risk level:

  • Minimal risk : AI like spam filters or recommendation systems pose little risk and are largely unregulated.
  • Limited risk : AI like chatbots must disclose that users are interacting with AI, and flag when deepfakes or biometric data are used.
  • High risk : AI used in critical areas like recruitment, loan approval, or autonomous robots face strict regulations, including data quality checks, human oversight, and cybersecurity measures.
  • Unacceptable risk : AI used to manipulate human behavior, for social scoring, or for certain biometric applications is outright banned.

European Commission President Ursula von der Leyen stated, “With our artificial intelligence act, we create new guardrails not only to protect people and their interests, but also to give business and innovators clear rules and certainty,” in a France 24 report.

Several advisory bodies will be established to support the enforcement process. The European Artificial Intelligence Board will ensure consistent application of the AI Act across EU countries and facilitate cooperation. A scientific panel will provide expert advice, including warnings about potential risks in general-purpose AI. Additionally, a stakeholder forum will offer input on the Act’s implementation.

Companies that violate the AI Act face substantial fines, with the most severe penalties for banned AI applications.

The majority of the AI Act’s rules will come into full effect on August 2, 2026. However, bans on AI practices deemed an unacceptable risk will apply sooner. To prepare for full implementation, the EU Commission is encouraging voluntary adoption of the Act’s principles through the AI Pact.

Companies breaching the EU AI Act face tiered fines: up to 7.5 million euros or 1.5% of global annual revenue, whichever is higher, for lesser violations, and up to 35 million euros ($41 million) or 7% of global annual revenue, whichever is higher, for the most serious breaches.
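The “whichever is higher” rule means the applicable ceiling depends on a company’s size. The sketch below (a minimal illustration using only the tier figures reported above, not legal advice) shows how the two caps compare for a large firm:

```python
def max_fine_eur(annual_revenue_eur: float, severe: bool) -> float:
    """Illustrative sketch of the tiered fine ceilings described above.

    The cap is the higher of a fixed amount or a percentage of global
    annual revenue; tier values are those cited in the article.
    """
    if severe:  # banned (unacceptable-risk) AI applications
        return max(35_000_000, 0.07 * annual_revenue_eur)
    # lesser violations
    return max(7_500_000, 0.015 * annual_revenue_eur)

# For a company with 1 billion euros in global annual revenue,
# the percentage-based cap exceeds the fixed amount in both tiers:
print(max_fine_eur(1_000_000_000, severe=True))   # 70,000,000.0
print(max_fine_eur(1_000_000_000, severe=False))  # 15,000,000.0
```

For smaller companies the fixed amounts dominate instead, which is why the Act states the caps this way rather than as a flat percentage.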

The AI Act serves as a model for other regions around the world to develop their own AI regulations.