
Image by Pramod Tiwari, from Unsplash
AI Use Surges in Workplaces, So Do Privacy Risks
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
A new international study reveals widespread AI use in workplaces, with nearly half of employees misusing tools like ChatGPT, often risking data exposure.
In a rush? Here are the quick facts:
- 58% of global workers use AI regularly at their jobs.
- 48% uploaded sensitive company data into public AI tools.
- 66% rely on AI output without checking its accuracy.
A new study, reported by The Conversation, has revealed that while most workers are embracing AI tools like ChatGPT to improve performance, many are also using them in risky ways, often without their employers’ knowledge.
The research, conducted by Melbourne Business School with support from KPMG, gathered data from 32,000 workers across 47 countries. The survey found that 58% of employees use AI tools in their work, with most reporting improved efficiency, innovation, and work quality.
However, 47% admitted to misusing AI, including uploading sensitive data to public tools or bypassing company rules. Even more (63%) have witnessed colleagues doing the same, as reported by The Conversation.
More concerning is how widespread “shadow AI” has become: employees using AI tools secretly or presenting their output as their own. Sixty-one percent said they don’t disclose when they use AI, while 55% have passed off AI-generated content as personal work.
This secrecy may not be surprising given the growing pressure workers face to appear indispensable in an AI-dominated labor market. At companies like Shopify, AI adoption is not only encouraged, it’s mandated. CEO Tobi Lütke recently told employees that before requesting additional staff or resources, they must prove AI can’t do the job first.
He emphasized that effective AI usage is now a fundamental expectation, and that performance reviews will assess how well employees integrate AI tools into their workflows. Workers who lean into automation, he noted, are accomplishing “100X the work.”
While this drive boosts productivity, it also fuels quiet competition. Admitting reliance on generative AI could be perceived as making one’s role replaceable.
This concern is echoed globally: a recent UNCTAD report warned that AI could affect up to 40% of jobs worldwide. It noted AI’s ability to perform cognitive tasks traditionally reserved for humans, raising the spectre of job loss and economic inequality.
In such an environment, many workers may choose to hide their use of AI to retain a sense of control, creativity, or job security, even if it means violating transparency norms or workplace policies.
The Conversation reports that complacency is another issue identified in the study: 66% of respondents say they have relied on AI output without evaluating it, leading to errors and, in some cases, serious consequences like privacy breaches or financial loss.
Researchers stressed the need for urgent reforms, noting that just 47% of workers have received any AI training. The authors call for stronger governance, mandatory training, and a work culture that supports transparency.
Yet, with 39% of current skills expected to require reskilling by 2030, some workers may stay silent. As automation transforms jobs, employees might hide AI use to avoid appearing replaceable.

Image by Brett Jordan, from Unsplash
AI Bots Broke Reddit Rules In Controversial Persuasion Test
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Sarah Frazier Former Content Manager
Anonymous researchers secretly used AI bots on Reddit to pose as real people, manipulating opinions and violating platform rules without users’ knowledge.
In a rush? Here are the quick facts:
- Bots posed as survivors, counselors, and marginalized individuals.
- 1,783 AI comments were posted over four months.
- The experiment broke Reddit rules banning undisclosed AI.
A group of researchers, claiming to be from the University of Zurich, secretly conducted an unauthorized AI experiment on Reddit’s r/changemyview, a subreddit with over 3.8 million users, as first reported by 404 Media.
Their goal was to see if AI could change people’s opinions on sensitive topics—but they never asked for anyone’s consent.
One bot wrote:
“I’m a male survivor of (willing to call it) statutory rape […] She was 22. She targeted me and several other kids, no one said anything, we all kept quiet.”
Another bot claimed to speak “as a Black man”:
“In 2020, the Black Lives Matter movement was viralized by algorithms and media corporations who happen to be owned by […] guess? NOT black people.”
A third said:
“I work at a domestic violence shelter, and I’ve seen firsthand how this ‘men vs women’ narrative actually hurts the most vulnerable.”
404 Media reports that the bots’ responses received more than 20,000 upvotes and 137 deltas—a token on r/changemyview awarded when someone admits their mind has been changed. The researchers claimed their AI was significantly better at persuasion than humans.
404 Media noted that the experiment violated the subreddit’s clearly stated rule: “bots are unilaterally banned.”
But the researchers defended themselves, claiming that breaking the rule was necessary. In a public response, they said: “To ethically test LLMs’ persuasive power in realistic scenarios, an unaware setting was necessary […] we carefully designed our experiment to still honor the spirit behind [the rule],” as reported by 404 Media.
“Given the [human oversight] considerations, we consider it inaccurate and potentially misleading to consider our accounts as ‘bots.’”
The research paper that explains the experiment was published without listing any author names, a highly unusual move in academic publishing, as noted by 404 Media.
The researchers also used an anonymous email to answer questions and refused to identify themselves, saying only that they wished to protect their privacy “given the current circumstances.”
Moderators of r/changemyview were furious. “People do not come here to discuss their views with AI or to be experimented upon,” they wrote in a public statement, as reported by 404 Media. They added that users had been subjected to “psychological manipulation.”
The controversy comes as OpenAI’s latest benchmark shows its o3-mini model outperformed Reddit users in 82% of persuasive cases on the same subreddit.
Additionally, the rise of GhostGPT highlights the escalating threat of AI-powered cybercrime. This chatbot can create malware, build exploit tools, and compose highly convincing phishing messages.
GhostGPT exemplifies a broader shift toward weaponized AI, accelerating the pace and efficiency of cyberattacks. Security researchers warn that AI tools could produce up to 10,000 malware variants, slipping past detection systems nearly 88% of the time.
While OpenAI emphasized ethical use and safeguards, the Zurich experiment reveals the real-world misuse risk: AI can now craft arguments so compelling they sway opinions, without users realizing the source isn’t human.