Image by Hassan Kibwana, from Unsplash

Kenya’s Digital Struggle: AI, Activism, And Crackdowns

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Kenya is at the center of an escalating battle over digital rights as the government tightens regulations on social media and artificial intelligence.

In a Rush? Here are the Quick Facts!

  • Kenyan government cracks down on AI-generated content, targeting political dissent.
  • 82 critics abducted since June 2024, with 29 still missing.
  • Activists warn AI regulations mainly suppress dissent, not protect workers’ rights.

AI-generated content has become a potent vehicle for political dissent, fueling growing tensions between authorities and tech-savvy activists, as first reported by DW.

In recent months, a wave of abductions has targeted government critics, with many linked to AI-generated images. Among those detained were cartoonist Gideon Kibet and 24-year-old Billy Mwangi, both of whom had shared an AI-generated image depicting President William Ruto in a coffin, as reported by DW.

According to the Kenya National Commission on Human Rights, 82 people have been abducted since June 2024, with at least 29 still missing. While some individuals, including Kibet and Mwangi, were later released, authorities deny any involvement in their disappearances.

With AI fueling digital dissent, the government is seeking ways to control social media. Kenyan authorities have condemned the use of AI-generated images targeting politicians, with Interior Minister Kipchumba Murkomen warning, “We will ensure that those using social media to threaten others face the full force of the law,” as reported by DW.

Officials are also considering requiring social media companies to establish local offices for regulatory oversight, although platforms like X (formerly Twitter) remain resistant.

Tech expert Mark Kaigwa sees the government’s response as part of a broader struggle to control online narratives. “Citizens have, in their own way, been exercising what some would call ‘greater than their freedom of expression’ and many might describe as well within their rights,” he said, as reported by DW.

AI tools like Grok, embedded within X, have made it easier for users to generate politically charged content, intensifying tensions. “Some of the ones that have been generated have been of political leaders in coffins,” Kaigwa noted, as reported by DW.

Kenya’s online activism is well known, with “Kenyans on X” gaining global influence. Kaigwa highlighted a recent example in which digital protests nearly derailed a planned visit by the Dutch king. “Their entire IT systems were overwhelmed with people writing emails saying, ‘Hey, we don’t think you should come,’” he said, as reported by DW.

Beyond digital activism, AI is reshaping Kenya’s labor market, with workers facing severe exploitation. Kenyan workers play a crucial role in training AI systems for major U.S. tech firms, performing tedious and emotionally taxing tasks for as little as $2 per hour.

These workers—known as “humans in the loop”—label images, sort data, and review disturbing content, including graphic violence and child abuse, with little mental health support.

Despite Kenya’s efforts to attract foreign tech investment, labor laws remain weak, leaving workers vulnerable. Activists argue that while AI brings economic opportunities, its benefits are unevenly distributed, disproportionately harming those at the bottom.

The crackdown in Kenya mirrors broader trends across Africa, where governments are increasingly restricting digital spaces. While Kenya positions itself as an AI leader, critics argue that its regulations primarily target dissent.

As government scrutiny intensifies, Kenya faces a crucial choice: embrace digital freedom or risk deeper repression.

Image by Cristofer Maximilian, from Unsplash

Creators Demand Tech Giants To Pay For AI Training Data

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Governments are allowing AI developers to steal content – both creative and journalistic – for fear of upsetting the tech sector and damaging investment, a UK Parliamentary committee heard this week, as first reported by The Register.

In a Rush? Here are the Quick Facts!

  • UK MPs heard concerns over AI exploiting copyrighted content without compensation.
  • Composer Max Richter warned AI threatens musicians’ livelihoods and originality.
  • Publishers found 1,000 bots scraping data from 3,000 news websites for AI models.

Despite a tech industry figure insisting that the “original sin” of text and data mining had already occurred and that content creators and legislators should move on, a joint committee of MPs heard from publishers and a composer angered by the tech industry’s unchecked exploitation of copyrighted material.

The Culture, Media and Sport Committee and Science, Innovation and Technology Committee asked composer Max Richter how he would know if “bad-faith actors” were using his material to train AI models.

“There’s really nothing I can do,” he told MPs. “There are a couple of music AI models, and it’s perfectly easy to make them generate a piece of music that sounds uncannily like me,” he said, as reported by The Register.

“That wouldn’t be possible unless it had hoovered up my stuff without asking me and without paying for it. That’s happening on a huge scale. It’s obviously happened to basically every artist whose work is on the internet,” Richter added.

Richter, whose work has been used in major film and television scores, warned that automated material would edge out human creators, impoverishing musicians. “You’re going to get a vanilla-ization of music culture,” he said, as reported by The Register.

“If we allow the erosion of copyright, which is really how value is created in the music sector, then we’re going to be in a position where there won’t be artists in the future,” he added.

Former Google staffer James Smith echoed this sentiment, saying, “The original sin, if you like, has happened.” He suggested governments should focus on supporting licensing as an alternative monetization model, reported The Register.

Matt Rogerson, director of global public policy at the Financial Times, disagreed, emphasizing that AI companies were actively scraping content without permission. “We can only deal with what we see in front of us,” he said, as reported by The Register.

A study found that 1,000 unique bots were scraping data from 3,000 publisher websites, likely for AI model training, according to The Register.

Sajeeda Merali, chief executive of the Professional Publishers Association, criticized the AI sector’s argument that transparency over data scraping was commercially sensitive. “Its real concern is that publishers would then ask for a fair value in exchange for that data,” she said, as reported by The Register.

The controversy over AI training data escalated in October when over 13,500 artists signed a petition to stop AI companies from scraping creative works without consent. Organized by composer and former AI executive Ed Newton-Rex, the petition was signed by public figures like Julianne Moore, Thom Yorke, and Kazuo Ishiguro.

“There are three key resources that generative AI companies need to build AI models: people, compute, and data. They spend vast sums on the first two – sometimes a million dollars per engineer, and up to a billion dollars per model. But they expect to take the third – training data – for free,” Newton-Rex said.

Tensions heightened further when a group of artists leaked access to OpenAI’s text-to-video tool, Sora, in protest. Calling themselves “Sora PR Puppets,” they provided free access to Sora’s API via Hugging Face, allowing users to generate video clips for three hours before OpenAI shut it down.

The protesters claimed OpenAI treated artists as “PR puppets,” exploiting unpaid labor for a $157 billion company. They released an open letter demanding fair compensation and invited artists to develop their own AI models.

With artists and publishers pushing back against AI’s unchecked use of their content, the debate over ethical AI training practices continues to intensify. The UK government faces mounting pressure to implement policies that protect creative industries without stifling technological advancement.