Image by Vikasss, from Pixabay

AI Disinformation Had No Impact On 2024 European Elections, Report Finds

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • AI had no effect on European election outcomes.
  • AI disinformation reinforced existing political views.
  • Misinformation and confusion damaged trust in sources.

AI-generated disinformation and deepfakes had no impact on the results of the 2024 UK, European Union (EU), and French elections, according to a new report by the Centre for Election Technology and Security (CETaS).

Despite widespread concerns about AI manipulation, the study found that most AI-enabled disinformation reinforced existing political beliefs rather than swaying undecided voters.

Nevertheless, the report raises concerns over the broader consequences of AI use, especially regarding the ethical challenges it presents in democratic processes.

The report identified 16 instances of AI-fueled viral disinformation in the UK election, and 11 cases during the EU and French elections. Most of these cases, the study argues, merely reinforced pre-existing political views.

However, the aftermath of these incidents revealed a wider pattern of harm: many people were left unsure whether AI-generated content was real, which damaged trust in online sources.

The report states that some politicians used AI in campaign ads without proper labeling, encouraging dishonest election practices.

The report also notes that AI-generated satire, often mistaken for genuine content, misled voters, exposing a new type of risk to election integrity.

The report highlighted the role of both domestic actors and foreign interference in spreading AI-driven misinformation. However, it emphasized that traditional methods, like bot-driven astroturfing and disinformation spread by human influencers, had a far greater impact on voters than AI content.

While the influence of AI was minor in terms of election results, CETaS warns of the growing risks as AI technology becomes more accessible.

The report calls for legal and regulatory bodies to address these challenges, proposing the need to balance free speech with combating AI-driven disinformation. It also stresses the importance of clear labeling of AI-generated political content to prevent unethical campaigning practices.

The final report from CETaS, due in November 2024, will focus on AI’s role in the U.S. election and offer long-term recommendations to protect democratic processes from AI-related threats.

The briefing concludes by acknowledging the potential positive applications of AI. According to the report, AI gave political candidates new ways to strengthen their connection with voters, for example through synthetic online personas.

Additionally, generative AI assisted fact-checkers in prioritizing misleading claims made by candidates, helping them determine which ones needed urgent attention.

Image by Airam Dato-on, from Pexels

LinkedIn Using User Data To Train AI Models Without Clear Consent

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • LinkedIn used U.S. user data for AI without clear notice.
  • An opt-out existed, but LinkedIn didn’t update its privacy policy initially.
  • LinkedIn’s delayed update reflects growing global concerns about AI data use.

LinkedIn, the professional networking platform, has faced criticism for using user data to train its AI models without explicitly informing users beforehand.

LinkedIn users in the U.S. have an opt-out toggle in their settings revealing that LinkedIn collects personal data to train “content creation AI models.” The setting is absent for users in the EU, EEA, and Switzerland, likely due to those regions’ stricter data privacy laws.

On a help page, LinkedIn explains that its generative AI models are used for tasks like writing assistant features.

Users can opt out of having their data used for AI training by navigating to the “Data for Generative AI Improvement” section under the Data privacy tab in their account settings.

Turning off the toggle will stop LinkedIn from using personal data for future AI model training, though it does not undo training that has already occurred.

Companies typically announce changes like this in their privacy policy first, giving users the chance to adjust their settings or leave the platform if they disagree. This time, however, LinkedIn began using the data before updating its policy.

This comes amid intensifying global scrutiny of how AI systems process personal data.

A recent study from MIT revealed that a growing number of websites are restricting the use of their data for AI training.
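
Restrictions of the kind the MIT study tracked are commonly declared in a site’s robots.txt file. As a minimal sketch (the domain here is a placeholder, and GPTBot is the user agent OpenAI’s crawler identifies itself with), Python’s standard urllib.robotparser can check whether a site disallows such a crawler:

    from urllib import robotparser

    # Sketch: check whether a site's robots.txt disallows a known
    # AI training crawler. "example.com" is a placeholder domain.
    rp = robotparser.RobotFileParser()
    rp.set_url("https://example.com/robots.txt")
    rp.read()  # fetch and parse the robots.txt file

    # can_fetch() returns False when the named user agent is
    # disallowed from the given path by the site's rules.
    print(rp.can_fetch("GPTBot", "https://example.com/"))

A site that wants to keep its pages out of AI training sets would add a “User-agent: GPTBot” block with “Disallow: /” to its robots.txt, which makes the check above return False.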

Additionally, Ireland’s Data Protection Commission (DPC) recently concluded legal proceedings against X regarding its AI tool, after the company agreed to comply with previous restrictions on using EU/EEA user data for AI training.

The incident underscores the increasing importance of transparency and user consent in AI development: as the technology advances, companies are expected to be clear about their data practices and to obtain explicit permission from users before using their information for training.

The incident also highlights growing tensions between AI companies and data owners, as more organizations demand greater control over how their data is used.