
Image by Andrea Ferrario, from Unsplash
Allianz Life Data Breach Exposes 1.1 Million Customers
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
A massive cyberattack on Allianz Life has exposed the personal data of 1.1 million customers in the United States, according to breach notification site Have I Been Pwned.
In a rush? Here are the quick facts:
- Hackers accessed Salesforce systems via malicious OAuth apps.
- Stolen data includes emails, addresses, phone numbers, and tax IDs.
- ShinyHunters leaked 2.8 million records from Allianz Salesforce databases.
The attack, which took place in mid-July, targeted the company’s Salesforce customer management system.
Allianz Life, the U.S. subsidiary of Germany’s Allianz SE, revealed that hackers stole data belonging to the “majority” of its 1.4 million customers during July.
BleepingComputer notes that the company employs about 2,000 staff in the United States, while its parent company, one of the world’s largest insurers, serves millions of customers worldwide.
According to BleepingComputer, the stolen information includes “email addresses, names, genders, dates of birth, phone numbers, and physical addresses.” BleepingComputer confirmed with several affected individuals that their leaked data, including tax IDs, was accurate.
Hackers linked to the ShinyHunters extortion group are believed to be behind the breach. They reportedly tricked employees into granting access to a malicious OAuth app connected to Allianz’s Salesforce instance.
Once inside, attackers stole roughly 2.8 million data records, including those of customers, brokers, financial advisors, and wealth management companies. Databases were later leaked online as part of extortion campaigns.
“Allianz Life had previously said that hackers stole personal information of most of its 1.4 million U.S. customers, financial professionals and select employees,” Reuters reported. The company confirmed that “some selected Allianz Life employees” were also impacted.
An Allianz spokesperson said the investigation is ongoing and the company “couldn’t offer any additional comment at this time,” noted BleepingComputer. However, Reuters reports that Allianz has promised “dedicated resources, including two years of identity monitoring services, to assist impacted individuals.”
The breach is part of a wider campaign of Salesforce-targeted attacks that also hit Google, Qantas, Adidas, Dior, Tiffany & Co., Chanel, and HR giant Workday.

Image by ODISSEI, from Unsplash
Nearly Half of Online Survey Responses May Come From AI, Study Finds
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Online behavioural research, long regarded as a reliable way to study human psychology, now faces a major problem: participants are using AI tools such as chatbots to generate their responses.
In a rush? Here are the quick facts:
- 45% of Prolific survey participants pasted AI-generated responses.
- Researchers found chatbot text often looked “overly verbose” or “non-human.”
- Experts call the issue “LLM Pollution,” threatening behavioural research validity.
Researchers at the Max Planck Institute for Human Development in Berlin recently investigated how widespread the problem is on platforms such as Prolific, which pays volunteers to complete surveys.
“The incidence rates that we were observing were really shocking,” says lead researcher Anne-Marie Nussberger, as reported by New Scientist (NS).
In one test, 45 per cent of participants appeared to paste chatbot-generated content into an open-ended response box. The replies often showed telltale signs, such as “overly verbose” or “distinctly non-human” language.
“From the data that we collected at the beginning of this year, it seems that a substantial proportion of studies is contaminated,” Nussberger said to NS.
To detect suspicious responses, her team introduced hidden traps. Basic reCAPTCHAs flagged 0.2 per cent of users, a more advanced version caught 2.7 per cent, an invisible text prompt that asked for the word “hazelnut” snared 1.6 per cent, and banning copy-pasting revealed another 4.7 per cent.
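The “hazelnut” trap described above can be sketched in a few lines of Python. This is a hypothetical illustration, not code from the study: the assumption is that the survey page embeds an instruction, hidden from human readers (e.g. via CSS) but visible to any chatbot the question is pasted into, telling the model to include the word “hazelnut”; any free-text answer that echoes the word is then flagged. All names below are illustrative.

```python
# Hypothetical sketch of the hidden-prompt ("hazelnut") trap.
# Assumption: the survey embeds an instruction invisible to humans but
# picked up by a language model, asking it to include the trap word.
HONEYPOT_WORD = "hazelnut"

def flag_ai_responses(responses):
    """Return the subset of free-text responses that echo the trap word,
    which a human respondent (who never saw the hidden prompt) would not use."""
    return [r for r in responses if HONEYPOT_WORD in r.lower()]

answers = [
    "I usually shop online about twice a month.",
    "As requested, hazelnut. I shop online roughly twice a month.",
]
print(flag_ai_responses(answers))  # flags only the second answer
```

A real deployment would of course pair this with the other signals the team used (reCAPTCHAs and copy-paste blocking), since a single trap word catches only the most careless delegation to a chatbot.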
The problem has evolved into what experts now call “LLM Pollution,” which extends beyond cheating. The study identifies three AI interference patterns: Partial Mediation (AI assists with wording or translation), Full Delegation (AI completes entire studies), and Spillover (humans change their behaviour because they anticipate AI presence).
“What we need to do is not distrust online research completely, but to respond and react,” says Nussberger, calling on platforms to take the problem seriously, as reported by NS.
Matt Hodgkinson, a research ethics consultant, warns to NS: “The integrity of online behavioural research was already being challenged […] Researchers either need to collectively work out ways to remotely verify human involvement or return to the old-fashioned approach of face-to-face contact.”
Prolific declined to comment to NS.