
Image by Karsten Winegeart, from Unsplash
AI Helps Solve Decades-Old Mystery Of Holocaust Massacre Image
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
Historian Jürgen Matthäus has used AI to identify Nazi soldier Jakobus Onnen in a notorious 1941 Holocaust photo from Berdychiv.
In a rush? Here are the quick facts:
- AI compared historical photos to identify the suspected killer, Jakobus Onnen.
- The AI produced an “unusually high” match despite the historical photo’s age.
- Historical letters and photographs were digitized to aid AI-assisted identification.
A historian has used artificial intelligence to resolve a long-standing mystery surrounding one of the best-known Holocaust photographs.
The image shows a bespectacled Nazi soldier aiming a pistol at the head of a kneeling man beside a pit of corpses, while German troops look on. For years, the photo was wrongly known as ‘The Last Jew in Vinnitsa’.
The Guardian, which first reported the story, says that Jürgen Matthäus, a German historian based in the United States, has spent years studying the image.
Now, with the help of AI and volunteers from the open-source group Bellingcat, he believes he has identified the killer as Jakobus Onnen, a teacher from northern Germany, as reported by The Guardian.
According to Matthäus’s findings, published in Zeitschrift für Geschichtswissenschaft, the massacre took place on 28 July 1941 in the citadel of Berdychiv, Ukraine, carried out by SS unit Einsatzgruppe C. The city had long been a thriving Jewish center. Of the estimated 20,000 Jews there at the time, only 15 survived by early 1944.
AI analysis compared photos of Onnen to the image, producing a match that Matthäus described as “unusually high”. He cautioned that the technology is not definitive but provides strong evidence when combined with archival research.
“The match, from everything I hear from the technical experts, is unusually high in terms of the percentage the algorithm throws out there,” he said, as reported by The Guardian.
Matthäus stressed that AI is only one part of the process. “This is clearly not the silver bullet – this is one tool among many. The human element continues to be the most important aspect.”
Onnen joined the Nazi party in 1933 and was killed in combat in 1943. Matthäus said the image should be seen as a crucial reminder of the Holocaust’s brutality, as reported by The Guardian.
“I think this image should be just as important as the image of the gate in Auschwitz, because it shows us the hands-on nature, the direct confrontation between killer and person to be killed.”

Image by Luis Reed, from Unsplash
DNA Screening Under Threat As AI Designs Undetectable Toxins
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
Microsoft researchers reported that AI could be used to create dangerous biological threats by tricking DNA screening systems.
In a rush? Here are the quick facts:
- Generative AI can design both beneficial and harmful proteins.
- Microsoft’s EvoDiff model helped redesign toxins to evade detection.
- Researchers conducted tests digitally, avoiding creation of real toxins.
DNA screening systems are safeguards meant to block people from ordering genetic material that could be used to produce toxins or pathogens. But a team led by Microsoft’s chief scientist, Eric Horvitz, revealed in Science that they managed to bypass the protections in a way never before identified.
The team conducted experiments with generative AI systems that design brand-new protein structures. Such systems are helping drug companies search for cures, but researchers say they are “dual use,” meaning they are capable of producing both useful and harmful molecules.
Microsoft began testing this risk in 2023 to determine whether “adversarial AI protein design” would allow bioterrorists to create dangerous proteins.
To carry out the test, they used protein models, including Microsoft’s EvoDiff, to subtly redesign toxins so they could slip past screening software while keeping their harmful functions intact. The researchers stressed that their work was digital only, saying they “never produced any toxic proteins” to avoid any suggestion of bioweapons development, as reported by MIT Technology Review.
“The patch is incomplete, and the state of the art is changing. But this isn’t a one-and-done thing. It’s the start of even more testing,” said Adam Clore of Integrated DNA Technologies, a coauthor of the study, as reported by MIT Technology Review. “We’re in something of an arms race,” he added.
Dean Ball, a fellow at the Foundation for American Innovation, warned: “This finding, combined with rapid advances in AI-enabled biological modeling, demonstrates the clear and urgent need for enhanced nucleic acid synthesis screening procedures coupled with a reliable enforcement and verification mechanism,” as reported by MIT Technology Review.
Others are more skeptical. Michael Cohen, an AI-safety researcher at UC Berkeley, argued: “The challenge appears weak, and their patched tools fail a lot.” He said defenses should be built into AI systems themselves.