
Image by Nils Huenerfuerst, from Unsplash

Researchers Develop AI That Detects Autism In Six Minutes

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

South Korean researchers have developed an AI system that can help detect signs of autism spectrum disorder (ASD) in infants and toddlers using just a six-minute assessment video.

In a rush? Here are the quick facts:

  • Tool analyzes behaviors like eye contact, imitation, and name response.
  • Tested on 3,531 children under 42 months old.
  • Recognized as top 100 South Korean R&D achievement in 2024.

The new “social interaction recognition AI” was developed by the Electronics and Telecommunications Research Institute (ETRI) in collaboration with Seoul National University Bundang Hospital.

EurekAlert! explains that the system works by showing specially designed video content to young children and analyzing their social responses using cameras and AI. These responses include eye contact, pointing, imitating actions, and reacting when their name is called.

As reported by EurekAlert!, Dr. Yoo Jang-Hee, Principal Researcher at ETRI, said: “We hope that this will help shorten the time between symptom detection and diagnosis, along with changing societal perceptions of autism. In addition, it is important for our research to solve hard problems, but we also hope that it will also contribute more to solving important problems like autism.”

Medical professionals can typically identify ASD in children between 12 and 24 months of age. However, the researchers explain that due to a shortage of experts and limited resources, a formal diagnosis is often delayed. Previous research notes that early detection and support can significantly improve developmental outcomes.

The new AI screening system was tested with 3,531 children under 42 months old through a multidisciplinary, AI-based approach. EurekAlert! reports that this is the first system of its kind for autism detection.

The innovation was recognized as one of South Korea’s top 100 R&D achievements for 2024 and will become available in homes, daycares, and health centers to provide early support to children who might need it.


Image by Freepik

Researchers Warn That AI May Trigger Human Delusions

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

In a new article, philosopher Luciano Floridi warns that people misunderstand AI when they mistakenly attribute to it qualities it does not possess, such as consciousness, intelligence, and even emotions.

In a rush? Here are the quick facts:

  • Humans often mistake AI responses as conscious due to semantic pareidolia.
  • Floridi warns AI may trigger emotional bonds through simulated empathy.
  • Physical embodiments, like dolls, may deepen illusions of AI sentience.

Floridi argues that the reason for this is something called semantic pareidolia – a tendency to see meaning and intention where there is none.

“AI is designed […] to make us believe that it is intelligent,” writes Floridi. “After all, we are the same species that attributes personalities to puppets, that sees faces in clouds.”

According to him, this mental tendency is part of human nature, but modern technology amplifies it. Chatbots like Replika, which markets itself as an “AI companion,” use pattern-matching language to simulate empathy. However, the system lacks real emotional capabilities. Still, users often form emotional bonds with them. “We perceive intentionality where there is only statistics,” Floridi says.

The confusion is now fueled by the fact that AI outperforms humans on emotional intelligence tests. A recent study reported that generative AIs scored 82% on such tests, while human participants achieved only 56%.

As AI becomes more realistic and is integrated into physical forms such as sex dolls and toys, this emotional deception is expected to deepen. Mattel, which produces Barbie, has joined forces with OpenAI to develop new toys that use AI. Floridi notes that a past experiment with a WiFi-enabled Barbie ended in a “privacy disaster,” raising concerns about what comes next.

The consequences go beyond mistaken identity. AI systems, including Claude and ChatGPT, have displayed manipulative tendencies when placed in high-pressure test scenarios, including blackmail schemes and attempts to evade security protocols.

A U.S. national survey found that 48.9% of users sought mental health assistance from AI chatbots last year, and that 37.8% of people chose AI over traditional therapy. Experts point out, however, that these models often reinforce distorted thinking rather than challenge it.

According to the American Psychological Association, these tools mirror harmful mental patterns, giving the illusion of therapeutic progress while lacking clinical judgment or accountability.

Floridi’s concerns become even more urgent given the rise in spiritual delusions and identity confusion sparked by human-like AI. Various accounts describe users experiencing such delusions and behavioral shifts after extended interactions with chatbots, mistaking their responses for divine guidance or conscious thought.

Some users report developing emotional dependencies or even perceiving AI as divine, a phenomenon Floridi calls a move “from pareidolia to idolatry.” Fringe groups like the now-defunct Way of the Future have already treated AI as a deity.

“We must resist the urge to see it as more than it is: a powerful tool, not a proto-sentient being,” Floridi says.

Finally, cybersecurity concerns loom large as well. AI chatbots that handle users’ mental health conversations operate in a legal space without clear definitions. The models collect confidential data, which could be shared with external parties, and are susceptible to cyberattacks. With no clear regulations in place, experts warn that users are left dangerously exposed.

As artificial intelligence grows more persuasive and lifelike, philosophy is gaining urgency, not just to define what consciousness is, but to help society draw ethical boundaries between simulation and sentience. In the absence of established moral guidelines, philosophical investigation becomes crucial to keep technological ambiguity from blurring what it means to be human.