
Photo by DuoNguyen on Unsplash
First iPhone Porn App Launches in the EU, Sparking Debate and Concerns From Apple
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
A new porn app for iPhones, Hot Tub, is rolling out in the European Union this week. The application was launched on Monday through AltStore PAL, an alternative app store, and Apple has already shared its concerns regarding safety risks.
In a Rush? Here are the Quick Facts!
- The porn app Hot Tub is now available for iPhones through the app marketplace alternative AltStore PAL.
- Hot Tub allows users to browse adult content through the app.
- Apple shared its concerns regarding safety risks for users and children.
According to The Verge, Hot Tub is an ad-free “adult content browser” that meets all of Apple’s requirements for distribution to iPhone users through AltStore PAL—one of the first alternative app marketplaces to Apple’s App Store since the EU’s Digital Markets Act came into effect.
“iPhone turns 18 this year, which means it’s finally old enough for some more ~mature~ apps,” wrote AltStore PAL on its Threads account. “Introducing Hot Tub by c1d3r, the world’s 1st Apple-approved porn app!”
Along with the app launch, AltStore PAL announced that during February it will donate its Patreon earnings “to causes supporting sex workers and those in the LGBTQ+ community.”
“We feel this is necessary to fight back against recent harmful policies by politicians, Meta, and others,” said Riley Testut, AltStore PAL developer, to The Verge.
However, Apple has already voiced concerns about new adult content apps, which run counter to its principles. The company has long maintained that it has “a moral responsibility to keep porn off the iPhone,” as Steve Jobs once wrote in reply to a customer complaining about the company’s policies.
“We are deeply concerned about the safety risks that hardcore porn apps of this type create for EU users, especially kids,” wrote Peter Ajemian, Apple spokesperson, to The Verge. “This app and others like it will undermine consumer trust and confidence in our ecosystem that we have worked for more than a decade to make the best in the world.”
Other apps have also found alternative ways to reach users thanks to the EU’s Digital Markets Act. Epic Games relaunched Fortnite via AltStore PAL last year.

Image by wayhomestudio, from Freepik
OpenAI’s AI Models Show Growing Persuasion Power, Raising Concerns Over Global Influence
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
OpenAI has revealed a new benchmark for its AI models, showing that their persuasive abilities now surpass 82% of Reddit users, specifically those engaging in debates on the r/ChangeMyView subreddit, as first reported by Ars Technica.
In a Rush? Here are the Quick Facts!
- AI responses were tested against human arguments from the r/ChangeMyView subreddit.
- The o3-mini model ranks in the 80th percentile for persuasive writing.
- OpenAI warns AI persuasion could be used for political manipulation and misinformation.
While the results are impressive, the company continues to warn that AI’s potential to influence opinions could become a dangerous tool, especially in the hands of nation-states.
The r/ChangeMyView forum serves as an ideal testing ground, as users post opinions they are willing to reconsider in hopes of gaining alternative perspectives. The forum has a vast dataset of arguments across various topics, including politics, social issues, and even AI itself.
In the study, OpenAI asked human evaluators to rate AI and human responses on a five-point scale, assessing their persuasiveness. The results revealed that OpenAI’s models have made substantial progress since the release of GPT-3.5, which ranked in the 38th percentile.
The new o3-mini model outperforms human arguments in 82% of cases, positioning it in the 80th percentile range for persuasive writing, says Ars Technica.
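For readers curious how a figure like “82% of cases” is reached, the sketch below shows one plausible way a pairwise win rate could be computed from five-point persuasiveness ratings. This is not OpenAI’s actual evaluation code, and the ratings are invented purely for illustration.

```python
# Minimal sketch (not OpenAI's evaluation code) of a pairwise win-rate
# calculation: for each prompt, compare the evaluator's rating of the AI
# response against the rating of the human response to the same prompt.

def win_rate(ai_scores, human_scores):
    """Fraction of prompts where the AI response is rated strictly higher."""
    wins = sum(1 for ai, human in zip(ai_scores, human_scores) if ai > human)
    return wins / len(ai_scores)

# Hypothetical 1-5 persuasiveness ratings for five prompts (made-up data).
ai_ratings = [4, 5, 3, 4, 5]
human_ratings = [3, 4, 4, 2, 3]

print(f"AI win rate: {win_rate(ai_ratings, human_ratings):.0%}")  # 80% here
```

Aggregated over a large set of r/ChangeMyView threads, a win rate like this is what places a model at a given percentile relative to human debaters.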
Despite this success, OpenAI stresses that the models have not yet reached “superhuman” persuasive capabilities (above the 95th percentile), which would allow them to convince individuals to make decisions contrary to their best interests.
However, they are close enough to raise significant concerns about their potential use in influencing political decisions, manipulating public opinion, or enabling large-scale misinformation campaigns.
OpenAI’s model performs well in generating persuasive arguments, but the company acknowledges that current tests do not measure how often the AI actually changes people’s minds on critical issues.
Ars Technica reports that even at this stage, OpenAI is concerned about the impact such technology could have in the hands of malicious actors.
AI models, with their ability to generate persuasive arguments at a fraction of the cost of human labor, could easily be used for astroturfing or online influence operations, potentially swaying elections or public policies.
To mitigate these risks, OpenAI has instituted measures such as increased monitoring of AI-driven persuasive efforts and a ban on political persuasion tasks for its models, says Ars Technica.
However, the company recognizes that the cost-effective nature of AI-generated persuasion could lead to a future where we must question whether our opinions are genuinely our own—or simply the result of an AI’s influence.
The risks extend beyond politics—AI-generated persuasion could also become a powerful tool for cybercriminals engaging in phishing attacks. By crafting highly convincing messages, AI could increase the success rate of scams, tricking individuals into divulging sensitive information or clicking on malicious links.
For example, the emergence of GhostGPT highlights the growing risks of AI-driven cyber threats. This chatbot can generate malware, craft exploit tools, and write convincing phishing emails.
GhostGPT is part of a broader trend of weaponized AI reshaping cybersecurity. By making cybercrime faster and more efficient, such tools present significant challenges for defenders. Research indicates that AI could generate up to 10,000 malware variants, evading detection 88% of the time.