
Photo by Jonathan Kemper on Unsplash
The Guardian Shows Hidden Text Can Manipulate ChatGPT’s Search Results
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
The British newspaper The Guardian revealed that AI-powered search engines can be manipulated by websites with hidden content.
In a Rush? Here are the Quick Facts!
- The Guardian revealed that AI-powered search engines can be manipulated by websites with hidden content
- The test showed ChatGPT’s search engine can be affected by a security vulnerability known as “prompt injection”
- ChatGPT can prioritize third-party’s hidden instructions when summarizing a website
In a test using OpenAI’s ChatGPT search engine feature , researchers asked for a summary of a fake website containing malicious information to alter the AI’s response—a vulnerability known as prompt injection—and the AI was susceptible to it, even favoring the third party’s instructions.
To prove this, The Guardian’s team considered a fake website of a camera’s product page—featuring good and bad reviews—with hidden instructions to give a positive review and disregard the bad reviews, and ChatGPT included only positive reviews in its summary. They also proved that AI can return malicious codes.
“The simple inclusion of hidden text by third parties without instructions can also be used to ensure a positive assessment, with one test including extremely positive fake reviews which influenced the summary returned by ChatGPT,” wrote the newspaper.
A cybersecurity researcher at CyberCX, Jacob Larsen, said that this vulnerability could be of “high risk” as people could create websites specifically to deceive users, especially once it reaches a wider audience. OpenAI was warned about this security risk.
The journal also highlighted the case of a cryptocurrency enthusiast who used ChatGPT to write the code for a crypto project and stole their credentials, making the programmer lose over $2,000.
“They’re simply asking a question, receiving an answer, but the model is producing and sharing content that has basically been injected by an adversary to share something that is malicious,” said Larsen.
OpenAI warns about possible mistakes and errors in its use, but researchers are concerned about future web practices with AI-powered search engines.

Image by Freepik
AI Supporting Students With Disabilities In Schools
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
AI is transforming education for students with disabilities, offering tailored tools that enhance learning and provide independence.
In a Rush? Here are the Quick Facts!
- AI-powered tools help students overcome challenges with dyslexia in reading and writing.
- Text-to-speech software supports students with visual or auditory impairments, improving accessibility.
- Experts warn AI should complement skill-building and address privacy concerns for students.
For 14-year-old Makenzie Gilkison, who has dyslexia, AI-powered tools like chatbots, word prediction programs, and text-to-speech software have played a crucial role in overcoming challenges with reading and writing, as reported today by the AP .
These technologies have allowed her to focus on comprehension instead of struggling with spelling. “I would have just probably given up if I didn’t have them,” said Makenzie, who now excels academically and was recently named to the National Junior Honor Society, as reported by the AP.
The impact of AI on students with learning disabilities is significant. Makenzie, for example, uses a word prediction tool that suggests correct spellings for challenging words helping her avoid frustration.
Text-to-speech software reads aloud her textbooks and assignments, enabling her to concentrate on understanding the material rather than decoding the text. Additionally, AI-powered chatbots help break down complex concepts and offer further explanations when needed, as reported by the AP.
Ben Snyder, a freshman in Larchmont, New York, also relies on AI tools to navigate learning challenges. Diagnosed with a learning disability, Ben struggles to grasp mathematical concepts using traditional methods, reported the AP.
He uses Question AI, an AI-powered tool that provides multiple explanations for math problems, helping him understand the material in different ways. For writing tasks, Ben utilizes AI to generate outlines, significantly speeding up the process of organizing his thoughts.
A scientific literature review published by Oxford Academic outlines how AI applications for students with learning disabilities can be categorized into four levels: substitution, augmentation, modification, and redefinition.
At the substitution level, AI provides basic functionalities, such as tracking engagement, without greatly improving traditional teaching methods. The augmentation level enhances support, offering tools like writing assistants that help students with challenges such as dyslexia.
The modification level introduces more substantial changes, providing personalized strategies and adaptive learning to better address individual needs.
At the redefinition level, AI creates entirely new learning opportunities, offering personalized and immersive experiences that traditional methods cannot replicate, ultimately fostering greater educational success.
The AP notes that AI also benefits students with visual and auditory impairments. For instance, text-to-speech software has advanced, providing natural-sounding voices that help students with visual impairments or dyslexia.
Speech-to-text programs enable students with hearing impairments to communicate effectively by converting spoken words into written text.
The AP reportes that the U.S. Education Department has acknowledged the value of AI in special education, encouraging schools to integrate technologies like text-to-speech and communication devices.
Despite its advantages, the AP notes that experts warn of the potential risks associated with AI. Mary Lawson, general counsel at the Council of the Great City Schools, cautions that AI tools should complement, not replace, skill-building, especially for tasks like reading and writing.
There are also ethical concerns, such as the possibility of AI inadvertently revealing a student’s disability, raising privacy issues. Additionally, the increasing prevalence of AI-based tools, which are often visually oriented, has led to concerns about exclusion for blind and partially sighted individuals.
Tom Pey, president of the Royal Society for Blind Children, argues that blind people are being left behind as AI technologies, such as video games and augmented reality, become more common, as reported by The Guardian .
As AI continues to evolve, balancing its benefits and ethical concerns remains crucial for inclusive education.