LinkedIn Faces Lawsuit For Allegedly Sharing User Messages To Train AI Models

Image by Stock Snap, from Unsplash


  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

LinkedIn is facing a class-action lawsuit from Premium users who allege the platform shared their private messages with third parties to train generative AI models without proper consent, as reported by Reuters.

In a Rush? Here are the Quick Facts!

  • Plaintiffs accuse LinkedIn of quietly updating its privacy policy in September.
  • The lawsuit seeks $1,000 per user for federal data privacy violations.
  • LinkedIn denies all allegations, dismissing them as “false claims with no merit.”

The lawsuit, filed in federal court in San Jose, California, claims LinkedIn introduced a privacy setting in August allowing users to opt in or out of data sharing.

The complaint accuses LinkedIn of deliberately violating its promise to use user data solely to enhance the platform, suggesting the company sought to minimize public and legal scrutiny, as reported by Reuters.

The suit was filed on behalf of Premium users who sent or received InMail messages and had their data shared before the September policy update.

The lawsuit alleges that LinkedIn breached its contractual promises by sharing Premium customers’ private messages with third parties to train generative AI models, as reported by The Register.

These messages could contain sensitive information about employment, intellectual property, compensation, and personal matters, raising serious privacy concerns.

The lawsuit focuses particularly on Premium customers—those subscribing to tiers like Premium Career, Premium Business, Sales Navigator, and Recruiter Lite—who are subject to the LinkedIn Subscription Agreement (LSA), noted The Register.

This agreement makes specific privacy commitments, including a clause in Section 3.2 promising not to disclose Premium customers’ confidential information to third parties, as noted by The Register.

The lawsuit claims LinkedIn violated this clause, breaching the US Stored Communications Act, contract terms, and California’s unfair competition laws.

However, The Register notes that the plaintiffs do not present evidence that InMail contents were shared. Instead, the complaint speculates that LinkedIn included these messages in AI training data.

It bases this assumption on LinkedIn’s alleged unannounced policy changes and its failure to publicly deny accessing InMail messages for training purposes, as reported by The Register.

Plaintiffs are seeking damages for breach of contract, violations of California’s unfair competition law, and $1,000 per user under the federal Stored Communications Act, as reported by Reuters.

Human Brain Processing Can Inspire Next-Gen AI Systems, Researchers Say

Image by AltumCode, from Unsplash


  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Research published on Jan. 22 in Nature suggests that human information processing can serve as a model for training next-generation AI systems.

In a Rush? Here are the Quick Facts!

  • Efficient AI could impact sectors like space exploration, health, and surveillance.
  • The study explores new memory technologies for scalable neuromorphic computing systems.
  • Neuromorphic computing offers energy-efficient solutions as AI’s electricity consumption is projected to double by 2026.

The study brought together over a dozen researchers worldwide, including Cory Merkel, associate professor of computer engineering at Rochester Institute of Technology. Merkel specializes in neuromorphic computing, a brain-inspired approach aimed at enhancing processing power and energy efficiency in AI applications.

“The ability to have efficient AI on constrained devices will also open the door to many new application domains in areas like brain-computer interfacing, space exploration, health monitoring technologies, and autonomous surveillance systems, for example,” Merkel explained in the university’s press release.

His work addresses the growing demand for AI systems tailored to size-, weight-, and power-constrained environments, such as wearable devices, smartphones, robots, drones, and satellites. Neuromorphic computing promises significant gains in processing capability while reducing mass storage requirements.

The researchers highlight how neuromorphic systems leverage bio-intelligence principles identified by neuroscientists, offering a model for faster and more efficient computational networks.

Merkel and Suma George Cardwell, a senior researcher at Sandia National Laboratories, also explored emerging memory technologies, such as RRAM and spintronics, for mass storage in neuromorphic systems. These technologies show potential for scalable solutions and effective handling of device variabilities.

As AI’s electricity consumption is projected to double by 2026, researchers view neuromorphic computing as a promising solution. They highlighted that the field is at a “critical juncture,” with scalability becoming a crucial measure of progress.

Neuromorphic computing presents a path toward creating more efficient, energy-conscious AI systems for the future.