
Image by Studiogstock, from Freepik
Report Highlights Privacy Concerns And Data Retention Practices Of Social Media
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- FTC report reveals major privacy concerns.
- Companies retain extensive data indefinitely.
- Lack of transparency in data practices noted.
A comprehensive report published today by the Federal Trade Commission (FTC) reveals significant privacy concerns surrounding social media and video streaming services.
The report notes that as these platforms become integral to daily life, they also build infrastructure for mass commercial surveillance, raising questions about user data privacy and market competition.
The FTC initiated its investigation in December 2020, issuing orders to nine major companies to disclose their data collection and usage practices.
The report highlights troubling trends: many companies amass extensive data on users and non-users alike, often retaining this information indefinitely.
This data includes personal details, online behaviors, and even demographic information purchased from data brokers.
Such practices can pose serious risks to privacy, with many firms lacking clear policies on data minimization and retention. In some cases, companies merely de-identify data rather than delete it upon user requests.
The advertising ecosystem also raises red flags. Many firms utilize personal data for targeted advertising, employing tracking technologies that consumers may not fully understand.
This opaque system complicates users’ ability to comprehend how their data is utilized for marketing purposes, often without their explicit consent.
Algorithmic decision-making plays a significant role in shaping user experiences, with companies leveraging AI and data analytics to recommend content and target ads.
However, users typically lack control over how their data is employed in these automated systems, particularly regarding sensitive inferences made about them. The report notes a lack of transparency and accountability in these processes, leading to potential harms, especially among children and teens.
While many companies claim to protect minors by adhering to the Children’s Online Privacy Protection Act (COPPA), the report criticizes these efforts as inadequate.
Firms often assert there are no child users on their platforms, disregarding the reality that children do access these services.
The FTC’s findings suggest a need for urgent reforms in the digital landscape. Recommendations include implementing stricter privacy protections, enhancing transparency around data usage, and ensuring greater safeguards for young users.
The report calls for Congress to enact comprehensive federal privacy legislation to establish robust consumer data rights.
As the digital ecosystem continues to evolve, the FTC’s report highlights how the substantial market power of online platforms can lead to practices that significantly impact consumers.
This underscores the need for careful examination of data practices in relation to competition and consumer privacy.

Image by Vikasss, from Pixabay
AI Disinformation Had No Impact On 2024 European Elections, Report Finds
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- AI had no effect on European election outcomes.
- AI disinformation reinforced existing political views.
- Misinformation and confusion damaged trust in sources.
AI-generated disinformation and deepfakes had no impact on the results of the 2024 UK, European Union (EU), and French elections, according to a new report by the Centre for Emerging Technology and Security (CETaS).
Despite widespread concerns about AI manipulation, the study found that most AI-enabled disinformation reinforced existing political beliefs rather than swaying undecided voters.
Nevertheless, the report raises concerns over the broader consequences of AI use, especially regarding the ethical challenges it presents in democratic processes.
The report identified 16 instances of AI-fueled viral disinformation in the UK election, and 11 cases during the EU and French elections. Most of these cases, the study argues, merely reinforced pre-existing political views.
However, these AI incidents left a troubling aftermath: many people were unsure whether the content they encountered was AI-generated or real, and that confusion damaged trust in online sources.
The report states that some politicians used AI in campaign ads without proper labeling, encouraging dishonest election practices.
In another finding, the report notes that the rise of AI-generated satire, often mistaken for real content, further misled voters, revealing a new type of risk to election integrity.
The report highlighted the role of both domestic actors and foreign interference in spreading AI-driven misinformation. However, it emphasized that traditional methods, like bot-driven astroturfing and disinformation spread by human influencers, had a far greater impact on voters than AI content.
While the influence of AI was minor in terms of election results, CETaS warns of the growing risks as AI technology becomes more accessible.
The report calls for legal and regulatory bodies to address these challenges, proposing the need to balance free speech with combating AI-driven disinformation. It also stresses the importance of clear labeling of AI-generated political content to prevent unethical campaigning practices.
The final report from CETaS, due in November 2024, will focus on AI’s role in the U.S. election and offer long-term recommendations to protect democratic processes from AI-related threats.
The briefing concludes by acknowledging the potential positive applications of AI. The report notes that AI gave candidates new ways to connect with voters, for example through synthetic online personas.
Additionally, generative AI assisted fact-checkers in prioritizing misleading claims made by candidates, helping them determine which ones needed urgent attention.