
Photo by Luke Porter on Unsplash

Google And Meta Urge Australia To Delay Social Media Ban

  • Written by Andrea Miliani, Former Tech News Expert

On Tuesday, Google and Meta urged the Australian government to delay the bill that would ban social media networks for children under 16.

In a Rush? Here are the Quick Facts!

  • Google and Meta are asking the Australian government for more time to evaluate the consequences of the ban.
  • The tech giants want to consider the results of the age verification system trial first.
  • X and TikTok also shared concerns about the new bill, which was submitted last week.

According to Reuters, the tech giants explained that more time is needed to evaluate the bill’s impact and that the government should wait for the age-verification trial results before implementing the law.

The initiative to ban social media for children and teenagers, led by Australian Prime Minister Anthony Albanese, represents one of the strictest measures worldwide. There are many concerns and doubts about whether it is the best approach to alleviating the risks and challenges this vulnerable population is facing. Albanese’s administration is determined to pass the bill, which is expected to become law by the end of the year.

The bill was introduced last week, and submissions were open for only one day. During that window, Google and Meta shared their recommendations, including waiting for the results of the age verification system trial, which will test government identification and biometrics to determine users’ ages, a technology that has not been implemented before.

“In the absence of such results, neither industry nor Australians will understand the nature or scale of age assurance required by the bill, nor the impact of such measures on Australians,” Meta said, adding that the bill in its current form is “inconsistent and ineffective.”

Under the proposed bill, social media platforms must comply with the new system and could be fined up to $32 million for breaches.

TikTok also weighed in, saying the bill was not clear enough and that it was concerned about its impact. X argued that the measure would infringe on children’s human right to access the internet and on their freedom of expression.

The Senate is expected to deliver a report soon.

Earlier this week, Australia’s misinformation bill was abandoned after it failed to gain enough support in the Senate.


Image by OER Africa, from Flickr

Disturbing Content And Low Pay: Kenyan Workers Speak Out On AI Jobs Exploitation

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Kenyan workers training AI face exploitation: low pay, emotional distress from disturbing content, and lawsuits against tech firms for poor conditions.

In a Rush? Here are the Quick Facts!

  • Kenyan workers label images and videos for AI systems at $2 per hour.
  • Workers face emotional distress from disturbing content like violence and pornography.
  • Kenyan government’s incentives haven’t improved wages or working conditions for local workers.

Kenyan workers are being exploited by major U.S. tech companies to train artificial intelligence (AI) systems, performing laborious tasks for wages far below local living standards, according to workers and activists, as detailed in a report by CBS News.

These workers, known as “humans in the loop,” are vital in teaching AI algorithms. They sort, label, and sift vast data sets to train AI for companies like Meta, OpenAI, Microsoft, and Google. This essential, fast-paced work is often outsourced to regions like Africa to reduce costs, says CBS.

These “humans in the loop” are found not just in Kenya, but also in countries like India, the Philippines, and Venezuela—places with large, low-wage populations of well-educated but unemployed individuals, points out CBS.

Naftali Wambalo, a Kenyan worker in Nairobi, spends his days labeling images and videos for AI systems. Despite holding a degree in mathematics, Wambalo finds himself working long hours for just $2 per hour, reports CBS.

He says that he spends his day categorizing images of furniture or identifying the race of faces in photos to help train AI algorithms. “The robots or the machines, you are teaching them how to think like human, to do things like human,” he said, as reported by CBS.

The job, however, is far from easy. Wambalo, like many AI workers, is assigned to projects by Meta and OpenAI that involve reviewing disturbing content, such as graphic violence, pornography, and hate speech—experiences that leave a lasting emotional impact.

“I looked at people being slaughtered, people engaging in sexual activity with animals. People abusing children physically, sexually. People committing suicide,” said Wambalo to CBS.

The demand for workers in AI training continues to rise, but wages remain shockingly low. For instance, SAMA, an American outsourcing company employing over 3,000 workers, is contracted by Meta and OpenAI, as reported by CBS.

According to documents CBS obtained, OpenAI agreed to pay SAMA $12.50 per hour per worker, far more than the $2 the workers actually received. However, SAMA asserts that this wage is fair for the region.

Civil rights activist Nerima Wako-Ojiwa argues that these jobs are a form of exploitation. She describes them as cheap labor, with companies coming to the region, promoting the jobs as opportunities for the future, but ultimately exploiting workers, as reported by CBS.

Workers are often given short-term contracts—sometimes only lasting a few days—with no benefits or long-term job security.

The Kenyan government has pushed to attract foreign tech firms by offering financial incentives and promoting lenient labor laws, but these efforts have not resulted in better pay or working conditions for local workers, as noted by CBS.

The emotional toll of the content workers are forced to review is another significant concern.

Fasica, one of the AI workers, told CBS: “I was basically reviewing content which are very graphic, very disturbing contents. I was watching dismembered bodies or drone attack victims. You name it. You know, whenever I talk about this, I still have flashbacks.”

SAMA declined an on-camera interview with CBS. Meta and OpenAI stated their commitment to safe working conditions, fair wages, and mental health support.

CBS reports that another U.S. AI training company facing criticism in Kenya is Scale AI, which runs the website Remotasks. Workers on the platform are paid per task, but the company has sometimes withheld payment, citing policy violations. One worker told CBS that there is no recourse.

As complaints grew, Remotasks shut down in Kenya. Activist Nerima Wako-Ojiwa highlighted how Kenya’s outdated labor laws leave workers vulnerable to exploitation.

Nerima Wako-Ojiwa added, “I think that we’re so concerned with ‘creating opportunities,’ but we’re not asking, ‘Are they good opportunities?’ ”