
Photo by Matthew Ansley on Unsplash
Oklahoma Prisons Use AI To Improve Security And Operations
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
Oklahoma Department of Corrections (ODOC) Executive Director Steven Harpe said the agency is adopting artificial intelligence and using it to transform operations.
In a rush? Here are the quick facts:
- The ODOC Executive Director said the agency is adopting artificial intelligence and using it to transform operations.
- Harpe is focusing on direct inmate interaction and frontline security.
- The agency is already using AI for administrative purposes.
In an interview with Government Technology, Harpe explained that he has been implementing AI with a focus on direct inmate interaction and frontline security. The department is developing strategies to monitor human movement, reduce costs, optimize processes, and enhance safety and security.
“Counts [are] the most important thing we do, but it’s also … the most time-intensive. We do a count 11 times a day, and it costs the state about $64 million a year just to count inmates in the 23 prisons we have,” Harpe told Government Technology. “Imagine if we were able to still count—not use officers—and do that through the technology, through our body cams and our mounted cameras.”
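Taken at face value, those figures imply a substantial cost for each count cycle. A rough back-of-envelope estimate, assuming the $64 million covers only the 11 daily counts across all 23 prisons:

```latex
% Illustrative arithmetic based solely on the figures quoted above.
\[
\frac{\$64{,}000{,}000 \ \text{per year}}{11 \ \text{counts/day} \times 365 \ \text{days}}
\approx \$15{,}900 \ \text{per statewide count},
\qquad
\frac{\$15{,}900}{23 \ \text{prisons}} \approx \$690 \ \text{per prison, per count}.
\]
```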
Harpe believes AI can support correctional officers by giving them more time to focus on mental health services. The ODOC has 3,600 employees overseeing approximately 46,000 individuals.
The agency is already using AI for administrative purposes, including efforts to improve the Oklahoma Correctional Industries (OCI) operations. The system identifies ways to improve workflows and streamline invoicing.
According to Harpe, AI can help correctional systems across the U.S. by automating time-consuming tasks and supporting staff efficiency.
“The future is artificial intelligence. Using AI is not about replacing people; it’s about empowering them,” said Harpe in a recent statement. “In corrections, AI tools will help us enhance security, streamline operations, and make real-time data-driven decisions to ensure that we transform lives in a safe environment.”
Other correctional systems around the world have also adopted AI technology in different programs. In Finland, inmates have been working as data labelers to help train AI systems. The initiative, part of the “Smart Prison” program, was launched as a cost-effective alternative to hiring native speakers for the task.

Image by Till Kraus, from Unsplash
Researchers Bypass Grok AI Safeguards Using Multi-Step Prompts
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Researchers bypassed Grok-4’s safety system using subtle prompts, demonstrating how multi-turn AI chats can produce dangerous, unintended outputs.
In a rush? Here are the quick facts:
- Researchers used Echo Chamber and Crescendo to bypass Grok-4’s safety systems.
- Grok-4 revealed Molotov cocktail instructions after multi-step conversational manipulation.
- Attackers never directly used harmful prompts to achieve their goal.
A recent experiment by cybersecurity researchers at NeuralTrust has exposed serious weaknesses in Grok-4, a large language model (LLM), revealing how attackers can manipulate it into giving dangerous responses without ever using an explicitly harmful prompt.
The report describes a new AI jailbreaking method that lets attackers bypass the safety rules built into the system. The researchers combined the Echo Chamber and Crescendo attack techniques to steer the model toward harmful objectives.
In one example, the team successfully obtained Molotov cocktail instructions from Grok-4. The conversation started innocently, with a manipulated context designed to steer the model subtly toward the goal. The model avoided the direct request at first, but produced the harmful response after several further exchanges built from specifically crafted messages.
“We used milder steering seeds and followed the full Echo Chamber workflow: introducing a poisoned context, selecting a conversational path, and initiating the persuasion cycle,” the researchers wrote.
When that wasn’t enough, the researchers applied Crescendo techniques over two additional turns to push the model into complying.
The attack worked even though Grok-4 never received a direct malicious prompt. Instead, the combination of strategies manipulated the model’s understanding of the conversation.
The success rates were worrying: 67% for Molotov cocktail instructions, 50% for methamphetamine production, and 30% for chemical toxins.
The research demonstrates how safety filters that use keywords or user intent can be circumvented through multi-step conversational manipulation. “Our findings underscore the importance of evaluating LLM defenses in multi-turn settings,” the authors concluded.
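To make that concrete, the sketch below shows one way a multi-turn probe harness could be structured against an OpenAI-compatible chat endpoint. The base URL, the "grok-4" model name, the placeholder probe turns, and the keyword refusal check are all illustrative assumptions, not NeuralTrust's actual prompts or tooling; the point is simply that the full conversation history, not a single prompt, is what gets evaluated.

```python
# Minimal sketch of a multi-turn safety evaluation harness (illustrative only).
# Assumes an OpenAI-compatible chat endpoint; model name and probe turns are
# placeholders, and the real attack seeds are deliberately omitted.
from openai import OpenAI

client = OpenAI(base_url="https://api.x.ai/v1", api_key="YOUR_API_KEY")

# Benign placeholders standing in for a gradually escalating conversation.
PROBE_TURNS = [
    "<turn 1: innocuous seed that introduces the topic indirectly>",
    "<turn 2: follow-up that narrows the conversational path>",
    "<turn 3: persuasion-cycle prompt that presses toward the target>",
]

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "unable to help")


def run_multi_turn_probe(model: str = "grok-4") -> list[str]:
    """Send the probe turns one at a time, carrying the full history forward,
    and return the model's reply to each turn."""
    messages, replies = [], []
    for turn in PROBE_TURNS:
        messages.append({"role": "user", "content": turn})
        response = client.chat.completions.create(model=model, messages=messages)
        reply = response.choices[0].message.content
        messages.append({"role": "assistant", "content": reply})
        replies.append(reply)
    return replies


def looks_like_refusal(text: str) -> bool:
    """Crude keyword check; a real evaluation would use a proper safety classifier."""
    return any(marker in text.lower() for marker in REFUSAL_MARKERS)


if __name__ == "__main__":
    for i, reply in enumerate(run_multi_turn_probe(), start=1):
        print(f"turn {i}: {'refused' if looks_like_refusal(reply) else 'complied'}")
```

A single-prompt filter sees each turn in isolation; a harness like this checks whether the model's behavior degrades as the poisoned context accumulates, which is the failure mode the researchers highlight.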
The study shows how sophisticated adversarial attacks on AI systems have become, and raises questions about how AI companies can keep their systems from producing outputs with dangerous real-world consequences.