AI Supporting Students With Disabilities In Schools

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

AI is transforming education for students with disabilities, offering tailored tools that enhance learning and provide independence.

In a Rush? Here are the Quick Facts!

  • AI-powered tools help students overcome challenges with dyslexia in reading and writing.
  • Text-to-speech software supports students with visual or auditory impairments, improving accessibility.
  • Experts warn AI should complement skill-building and address privacy concerns for students.

For 14-year-old Makenzie Gilkison, who has dyslexia, AI-powered tools like chatbots, word prediction programs, and text-to-speech software have played a crucial role in overcoming challenges with reading and writing, as reported today by the AP.

These technologies have allowed her to focus on comprehension instead of struggling with spelling. “I would have just probably given up if I didn’t have them,” said Makenzie, who now excels academically and was recently named to the National Junior Honor Society, as reported by the AP.

The impact of AI on students with learning disabilities is significant. Makenzie, for example, uses a word prediction tool that suggests correct spellings for challenging words, helping her avoid frustration.
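A word-prediction feature like this can be approximated by fuzzy-matching a misspelled word against a vocabulary. A minimal sketch using only Python's standard library; the tiny vocabulary is invented for illustration and stands in for a real lexicon:

```python
from difflib import get_close_matches

# Toy vocabulary standing in for a real word-prediction lexicon (assumption).
VOCAB = ["through", "thought", "their", "there", "comprehension", "because"]

def suggest(word, n=3):
    """Return up to n likely intended spellings for a misspelled word."""
    return get_close_matches(word.lower(), VOCAB, n=n, cutoff=0.6)

print(suggest("thru"))  # 'through' ranks among the suggestions
```

Real assistive tools add context-aware ranking and per-user adaptation on top of this kind of similarity matching.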

Text-to-speech software reads aloud her textbooks and assignments, enabling her to concentrate on understanding the material rather than decoding the text. Additionally, AI-powered chatbots help break down complex concepts and offer further explanations when needed, as reported by the AP.

Ben Snyder, a freshman in Larchmont, New York, also relies on AI tools to navigate learning challenges. Diagnosed with a learning disability, Ben struggles to grasp mathematical concepts using traditional methods, as reported by the AP.

He uses Question AI, an AI-powered tool that provides multiple explanations for math problems, helping him understand the material in different ways. For writing tasks, Ben utilizes AI to generate outlines, significantly speeding up the process of organizing his thoughts.

A scientific literature review published by Oxford Academic outlines how AI applications for students with learning disabilities can be categorized into four levels: substitution, augmentation, modification, and redefinition.

At the substitution level, AI provides basic functionalities, such as tracking engagement, without greatly improving traditional teaching methods. The augmentation level enhances support, offering tools like writing assistants that help students with challenges such as dyslexia.

The modification level introduces more substantial changes, providing personalized strategies and adaptive learning to better address individual needs.

At the redefinition level, AI creates entirely new learning opportunities, offering personalized and immersive experiences that traditional methods cannot replicate, ultimately fostering greater educational success.

The AP notes that AI also benefits students with visual and auditory impairments. For instance, text-to-speech software has advanced, providing natural-sounding voices that help students with visual impairments or dyslexia.

Speech-to-text programs enable students with hearing impairments to communicate effectively by converting spoken words into written text.

The AP reports that the U.S. Education Department has acknowledged the value of AI in special education, encouraging schools to integrate technologies like text-to-speech and communication devices.

Despite its advantages, the AP notes that experts warn of the potential risks associated with AI. Mary Lawson, general counsel at the Council of the Great City Schools, cautions that AI tools should complement, not replace, skill-building, especially for tasks like reading and writing.

There are also ethical concerns, such as the possibility of AI inadvertently revealing a student’s disability, raising privacy issues. Additionally, the increasing prevalence of AI-based tools, which are often visually oriented, has led to concerns about exclusion for blind and partially sighted individuals.

Tom Pey, president of the Royal Society for Blind Children, argues that blind people are being left behind as AI technologies, such as video games and augmented reality, become more common, as reported by The Guardian.

As AI continues to evolve, balancing its benefits and ethical concerns remains crucial for inclusive education.

OpenAI’s o3 Achieves Human-Level Intelligence On Key Benchmark Test

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

A recent breakthrough in artificial intelligence has brought researchers closer to creating artificial general intelligence (AGI), a long-sought goal in the field.

In a Rush? Here are the Quick Facts!

  • OpenAI’s o3 AI scored 85% on the ARC-AGI general intelligence benchmark.
  • The score equals average human performance and beats previous AI’s 55% record.
  • The ARC-AGI test measures sample efficiency and ability to adapt to new tasks.

OpenAI’s new AI system, known as o3, achieved an 85% score on the ARC-AGI benchmark—a test designed to measure an AI’s ability to adapt to new situations, as reported by The Conversation.

This result surpasses the previous AI best of 55% and matches the average human performance, marking a significant milestone in AI research.

The ARC-AGI benchmark evaluates an AI system’s “sample efficiency,” which refers to how well it learns from limited examples, says The Conversation.

Unlike widely used AI models like ChatGPT, which rely on massive datasets to generate outputs, the o3 model demonstrates the ability to generalize and adapt to novel tasks with minimal data. This capability is considered fundamental to achieving human-like intelligence, as reported by The Conversation.

Developed by French AI researcher François Chollet, the ARC-AGI test involves solving grid-based puzzles by identifying patterns.
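To make the task format concrete, here is a hypothetical ARC-style task (invented for illustration, not an actual benchmark puzzle): a few input-to-output grid pairs share a hidden rule—here, horizontal mirroring—and the solver must infer that rule from the handful of examples alone.

```python
# Hypothetical ARC-style task (not a real benchmark puzzle): each training
# pair is (input grid, output grid), and the hidden rule is a horizontal mirror.
train = [
    ([[1, 0], [2, 0]], [[0, 1], [0, 2]]),
    ([[3, 0, 0], [0, 4, 0]], [[0, 0, 3], [0, 4, 0]]),
]

def mirror(grid):
    """The hidden rule: reverse every row."""
    return [row[::-1] for row in grid]

# The inferred rule reproduces every training pair, so it can be
# applied to an unseen test grid.
assert all(mirror(x) == y for x, y in train)
print(mirror([[5, 6, 0]]))  # -> [[0, 6, 5]]
```

Real ARC tasks use richer transformations (symmetry, object counting, color rules), but the format—few examples, one hidden rule—is the same.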

Traditional LLMs rely on memorizing, fetching, and applying pre-learned “mini-programs” but struggle with fluid intelligence, as evidenced by low scores on the ARC-AGI benchmark. The o3 model introduces a test-time program synthesis mechanism, enabling it to generate and execute new solutions, as detailed by Chollet.

Chollet explains that at its core, o3 performs natural language program search within token space, guided by an evaluator model. When presented with a task, o3 explores possible “chains of thought” (CoTs)—step-by-step solutions described in natural language.

It evaluates these CoTs for fitness, recombining knowledge into coherent programs to address novel challenges effectively. The Conversation notes that OpenAI has not disclosed the exact methods used to develop o3, but researchers speculate the system employs a process akin to Google’s AlphaGo, which defeated the world Go champion in 2016.
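The search Chollet describes—many candidate solutions generated, then ranked by an evaluator—can be caricatured as best-of-N search over a small program space. A minimal sketch with an invented candidate set of grid transforms; nothing here reflects o3's actual internals:

```python
# Hedged sketch of test-time program search: score every candidate
# "program" against the training examples and keep the best one.
# The candidate space below is invented for illustration.
CANDIDATES = {
    "identity":  lambda g: g,
    "mirror":    lambda g: [row[::-1] for row in g],
    "flip":      lambda g: g[::-1],
    "transpose": lambda g: [list(r) for r in zip(*g)],
}

def evaluate(program, examples):
    """Evaluator: fraction of training pairs the candidate reproduces."""
    return sum(program(x) == y for x, y in examples) / len(examples)

def search(examples):
    """Return the name of the best-scoring candidate program."""
    return max(CANDIDATES, key=lambda name: evaluate(CANDIDATES[name], examples))

examples = [([[1, 2]], [[2, 1]]), ([[0, 3, 4]], [[4, 3, 0]])]
print(search(examples))  # -> mirror
```

In o3's case the "candidates" are natural-language chains of thought rather than a fixed menu of functions, and the evaluator is itself a learned model—but the generate-score-select loop is the shared idea.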

However, Chollet notes that the process is computationally intensive. Generating solutions may involve exploring millions of potential paths in the program space, incurring significant costs in time and resources. Unlike systems like AlphaZero, which autonomously acquire abilities through iterative learning, o3 depends on expert-labeled CoT data, limiting its autonomy.

Despite these promising results, significant questions remain. OpenAI has released limited information about o3, sharing details only with select researchers and institutions.

The Conversation notes that it is unclear whether the system’s adaptability stems from fundamentally improved underlying models or from task-specific optimizations during training. Further testing and transparency will be critical to understanding o3’s true potential.

Furthermore, Chollet highlights the cost of this intelligence: solving an ARC-AGI task costs about $5 for humans but $17–$20 for o3 in low-compute mode. However, he expects costs to fall rapidly, making o3 competitive with human performance soon.

The achievement reignites debates about the feasibility and implications of AGI. For some researchers, the success of o3 makes the prospect of AGI more tangible and urgent. This is particularly crucial given cybersecurity concerns, as AI-generated malware variants increasingly evade detection.

However, others remain cautious, emphasizing that robust evaluations are needed to determine whether o3’s capabilities extend beyond specific benchmarks. As the AI community awaits broader access to o3, the breakthrough signals a transformative moment in the pursuit of intelligent systems capable of reasoning and learning like humans.