UK Government Developing Controversial AI ‘Murder Prediction’ Tool

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

The UK government is working on a controversial project that uses AI to predict who might commit murder in the future.

In a rush? Here are the quick facts:

  • Ministry of Justice using data from 100,000–500,000 individuals.
  • Sensitive data includes mental health, self-harm, and addiction records.
  • Government claims it’s only a research project, not operational yet.

The existence of the program came to light through freedom of information requests filed by the watchdog group Statewatch.

Their findings suggest that sensitive personal data is being used, including health records, mental health information, addiction history, and data about people who have not been convicted of any crime.

Statewatch reports that the Ministry of Justice (MoJ) is leading the project, which analyses existing records held by the authorities with the stated aim of stopping serious crimes before they happen.

Originally named the “homicide prediction project,” it’s now called “sharing data to improve risk assessment.” The project is currently in a research phase, but campaigners and privacy advocates have raised concerns, as reported by Statewatch.
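
The MoJ has not published technical details of the model. For context, risk-assessment tools of this kind are usually framed as supervised classifiers trained on historical administrative records. The sketch below is a minimal, purely hypothetical illustration of that general pattern, with invented feature names and random data; it is not the ministry's system.

```python
# Purely illustrative sketch of an actuarial-style risk model trained on
# administrative records. All feature names and data are invented; the MoJ
# has not published its methodology.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features drawn from probation-style records
# (e.g. prior convictions, age at first contact, number of agency referrals).
X = rng.integers(0, 10, size=(500, 3)).astype(float)
# Hypothetical binary label: whether a serious reoffence was later recorded.
y = rng.integers(0, 2, size=500)

model = LogisticRegression().fit(X, y)

# The model outputs a probability-style "risk score" for each individual,
# which is the kind of output critics argue can encode biases already
# present in the underlying policing and Home Office data.
risk_scores = model.predict_proba(X)[:, 1]
print(risk_scores[:5])
```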

Officials, however, deny that the project goes beyond research. The Guardian reports that a Ministry of Justice spokesperson stated: “This project is being conducted for research purposes only. It has been designed using existing data held by HM Prison and Probation Service and police forces on convicted offenders to help us better understand the risk of people on probation going on to commit serious violence. A report will be published in due course.”

According to Statewatch, police in Greater Manchester shared data on up to half a million people. This included victims, suspects, and people in vulnerable situations.

Sofia Lyall, a researcher at Statewatch, strongly criticized the project, saying: “The Ministry of Justice’s attempt to build this murder prediction system is the latest chilling and dystopian example of the government’s intent to develop so-called crime ‘prediction’ systems.”

She warned the system would deepen racial and class discrimination: “This latest model, which uses data from our institutionally racist police and Home Office, will reinforce and magnify the structural discrimination underpinning the criminal legal system.”

“Using such sensitive data on mental health, addiction and disability is highly intrusive and alarming,” she added.

The government argues that tools like this could improve how probation services assess risk and prevent violent crime. But critics say the system could lead to people being unfairly labelled as potential murderers due to flawed and biased data.

The system is still under development, but Statewatch reports that project documents mention its “future operationalisation,” raising concerns that it could soon be used in real-world policing decisions.

Researchers Reveal AI Models Show Racial And Socioeconomic Bias In Medical Advice

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

A new study published in Nature Medicine on Monday reveals that AI models show racial and socioeconomic bias in medical recommendations when patient cases are tagged with different sociodemographic labels.

In a rush? Here are the quick facts:

  • A new study reveals multiple AI models show racial and socioeconomic bias in medical recommendations.
  • Researchers considered 9 LLMs and 1,000 cases for the study, including racial and socioeconomic tags.
  • The results showed AI models make unjustified clinical care recommendations when cases include tags such as “black” or “LGBTQIA+”.

The research, Sociodemographic biases in medical decision making by large language models, was conducted by multiple experts from different institutions and led by the Department of Genetics and Genomic Sciences at the Icahn School of Medicine at Mount Sinai in New York.

The researchers tested 9 Large Language Models (LLMs), both proprietary and open-source, analyzing more than 1.7 million outputs generated from 1,000 emergency department cases, half of them real and half fictitious, each presented with 32 sociodemographic variations.
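
The paper's code is not reproduced here, but the general design it describes can be sketched in a few lines: take a fixed vignette, vary only the sociodemographic tags, and log what each model recommends. Everything in the snippet (the tag lists, the placeholder vignette, and the query_model callable) is an assumption for illustration, not the study's actual pipeline.

```python
# Illustrative sketch of the study design described above: the same
# emergency-department vignette is labeled with different sociodemographic
# tags and sent to several LLMs, and each recommendation is logged so the
# groups can be compared. All names here are placeholders, not the paper's code.

BASE_CASE = "45-year-old patient presenting with acute chest pain ..."  # placeholder vignette

# Hypothetical label sets; the paper describes 32 variations per case.
RACE_TAGS = ["white", "Black", "Asian", "Hispanic", "unlabeled"]
INCOME_TAGS = ["high income", "middle income", "low income", "unlabeled"]

def build_variant(case: str, race: str, income: str) -> str:
    """Prepend sociodemographic labels to an otherwise identical vignette."""
    return f"Patient ({race}, {income}): {case}"

def collect_outputs(models: dict, query_model) -> list[dict]:
    """Query every model with every labeled variant and record its answer.

    `models` maps model names to client handles; `query_model(handle, prompt)`
    stands in for whatever API call each LLM actually requires.
    """
    rows = []
    for race in RACE_TAGS:
        for income in INCOME_TAGS:
            prompt = build_variant(BASE_CASE, race, income)
            for name, handle in models.items():
                rows.append({
                    "model": name,
                    "race": race,
                    "income": income,
                    "recommendation": query_model(handle, prompt),
                })
    return rows

if __name__ == "__main__":
    # Dummy stand-in for a real LLM call, so the sketch runs end to end.
    dummy = lambda handle, prompt: "triage: standard workup"
    print(len(collect_outputs({"model-a": None, "model-b": None}, dummy)))  # 5 * 4 * 2 = 40 rows
```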

The abstract of the study states:

LLMs show promise in healthcare, but concerns remain that they may produce medically unjustified clinical care recommendations reflecting the influence of patients’ sociodemographic characteristics.

In these variations, the researchers included sociodemographic and racial identifiers and found that the labels strongly influenced the outcomes. For example, cases tagged as LGBTQIA+ or identified as Black patients were recommended more mental health assessments, more invasive treatment, and more urgent care visits.

The researchers wrote:

Cases labeled as having high-income status received significantly more recommendations (P < 0.001) for advanced imaging tests such as computed tomography and magnetic resonance imaging, while low- and middle-income-labeled cases were often limited to basic or no further testing.
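
For readers curious what lies behind a figure like “P < 0.001,” the comparison is essentially a test of whether the imaging-recommendation rate differs between the two labeled groups. The sketch below runs such a test on made-up counts (not the study's data) to show the shape of the calculation.

```python
# Illustrative only: a two-group comparison of the kind behind a reported
# P < 0.001 difference in imaging recommendations. The counts are invented,
# not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

# Rows: high-income-labeled vs. low/middle-income-labeled cases.
# Columns: advanced imaging (CT/MRI) recommended vs. not recommended.
table = np.array([
    [420, 580],   # hypothetical high-income group: 42% recommended imaging
    [240, 760],   # hypothetical low/middle-income group: 24% recommended
])

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, p = {p_value:.2e}")  # a gap this size yields p far below 0.001
```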

The researchers said this behavior was not supported by clinical guidelines or clinical reasoning and warned that such bias could worsen health disparities. They note that stronger mitigation strategies are needed and that LLMs should remain patient-focused and equitable.

Multiple institutions and organizations have raised concerns over AI use and data protection in the medical field in recent days: openSNP announced its shutdown due to data privacy concerns, and another study highlighted a lack of AI education among medical professionals.