
OpenAI And News Corp Announce New Partnership
- Written by Andrea Miliani, Former Tech News Expert
OpenAI and the American mass media company News Corp announced a new partnership last Wednesday. The multi-year agreement will allow OpenAI to access and use content from major publications including The Wall Street Journal, New York Post, The Times, The Sun, and The Daily Telegraph.
“We’re joining forces with News Corp to support the highest journalistic standards and enrich our products with its premium journalism,” states OpenAI in the press release. Besides giving OpenAI access to archived and current content from numerous publications it owns, News Corp will also “share journalistic expertise to help ensure the highest journalism standards.”
“Our partnership with News Corp is a proud moment for journalism and technology,” said Sam Altman, CEO of OpenAI. “Together, we are setting the foundation for a future where AI deeply respects, enhances, and upholds the standards of world-class journalism.”
Robert Thomson, News Corp’s chief executive, said he was “delighted” with the new agreement and sees it as “the beginning of a beautiful friendship.”
According to Fast Company, Fox News (part of Fox Corporation, but linked to News Corp through media mogul Rupert Murdoch) will not be included in the deal. OpenAI also clarified that “the partnership does not include access to content from any of News Corp’s other businesses.”
Just a few days ago, OpenAI also announced a new partnership with Reddit. As reported by Fast Company, OpenAI has already built alliances with The Financial Times, Dotdash Meredith, Shutterstock, and international publishers like Prisa Media and Axel Springer.
This new journalistic alliance arrived weeks after eight newspapers sued OpenAI and Microsoft for using their content without permission to feed chatbots, seeking compensation for the use of the information in their articles.
OpenAI appears to be striking agreements to shield itself from future lawsuits over the content used to train its generative AI models, while building alliances that improve its services and strengthen its position in the generative AI race.

Slack Faces Criticism For Using Private Data To Train AI Without Clear Opt-Out
- Written by Andrea Miliani, Former Tech News Expert
Last week, Slack users complained on the social news website Hacker News and on X about the company’s AI training methods, which use private data without asking for explicit permission and make it difficult to opt out. The discussions went viral, raising awareness and concern, and in turn prompted Slack to update its AI privacy principles.
Customers quoted and criticized the company’s instructions for opting out of data sharing for AI training: “To opt out, please have your Org or Workspace Owners or Primary Owner contact our Customer Experience team at feedback@slack.com with your Workspace/Org URL and the subject line ‘Slack Global model opt-out request.’”
“They want to make it as difficult and painful as possible,” said one user. “My paid workspace opt-out confirmation just came through. One down. Several to go,” said another. Users expressed concern about the use of their private information and their trust in Slack.
The company’s clarification in an opt-out email response shared on Hacker News, which stated that data is only used for “machine learning models for things like channel and emoji recommendations and search results,” sparked even more debate. “How can anyone in their right mind think building AI for emoji selection is a remotely good use of time?” wrote one user.
After the backlash, Slack shared an announcement about updates to its AI privacy principles, “Privacy Principles: Search, Learning and Artificial Intelligence.”
In the document, Slack acknowledged customers’ concerns and explained that it uses “industry-standard, privacy-protective machine learning techniques” and does not train large language models (LLMs) on customer data. Slack emphasized that its data collection serves machine learning models rather than generative AI models, and assured users that private information is not shared with third parties, so there is no risk of leaks across workspaces.
Slack provided details and examples of how its machine learning models use aggregated data for features like search ranking, and emphasized that they “do not access original message content in DMs, private channels or public channels to make these suggestions.” The company added that it uses LLMs only in Slack AI, a separate add-on product that doesn’t use customer data.
However, the use of customer data to train AI and machine learning models is still enabled by default. The process quoted and criticized by users remains the same: customers who want to opt out must send an email explicitly requesting that Slack stop using their workspace data.