
Image by Viktor Krč, from Unsplash

Reskilling Essential As 39% Of Skills Set To Become Obsolete By 2030

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor

The global labor market is bracing for dramatic changes by 2030 as technological advancements, economic pressures, demographic shifts, and climate adaptation drive transformations.

In a Rush? Here are the Quick Facts!

  • 60% of employers expect digital access to transform business by 2030.
  • AI and information processing lead technological trends, cited by 86% of employers.
  • AI-driven roles like machine learning experts are growing, while clerical jobs are declining sharply.

The World Economic Forum (WEF) on Wednesday published The Future of Jobs Report 2025. Compiled from the perspectives of over 1,000 global employers representing 14 million workers across 22 industries, the report highlights critical trends shaping the future of work.

Broadening digital access is expected to be the most transformative trend in the global job market, with 60% of employers predicting it will reshape their businesses by 2030.

Key technologies driving change include AI and information processing, highlighted by 86% of employers, followed by robotics and automation (58%) and energy generation and storage (41%).

These advancements are fueling rapid growth in technology-related jobs, such as AI specialists, machine learning experts, big data analysts, and cybersecurity professionals. However, roles like cashiers, ticket clerks, and administrative assistants are set to decline sharply as automation takes over repetitive tasks.

WEF also highlights the economic pressures reshaping the workforce. Rising living costs rank as the second-most impactful trend, with half of employers expecting significant changes to their operations.

Furthermore, AI and tech innovations are not only transforming jobs but also reshaping the skills needed to succeed.

By 2030, 39% of workers’ skills are predicted to become obsolete. Analytical thinking remains the most sought-after skill, valued by 70% of employers, alongside AI and big data proficiency, networks and cybersecurity knowledge, and adaptability.

Reskilling will be crucial, as 59% of the workforce will need new training. Of this group, 29% can upskill in their current roles, and 19% can transition to new positions, while 11% risk job loss due to skill gaps.

The report suggests that generative AI’s main impact on jobs will be in “augmenting” human skills through collaboration, rather than replacing them.

However, the BBC noted how some workers have already been replaced by AI. Companies like Dropbox and Duolingo have cited AI as a reason for recent layoffs.

Finally, the WEF notes how geopolitical tensions and geoeconomic fragmentation are also impacting the job market. Trade restrictions and reshoring strategies affect 34% of businesses, driving demand for cybersecurity roles and human-centered skills like leadership and resilience.

Big Tech’s Influence On EU AI Standards Raises Concerns

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor

Big Tech companies hold considerable sway over the development of EU standards for AI, according to a report by the campaign group Corporate Europe Observatory (CEO).

In a Rush? Here are the Quick Facts!

  • Over half of JTC21 members represent corporate or consultancy interests, not civil society.
  • Corporations prioritize light AI regulations, undermining safeguards for fundamental rights.
  • Civil society and academia struggle to participate in AI standard-setting due to barriers.

The report raises concerns about the inclusivity and fairness of the standard-setting process, which underpins the EU’s newly approved AI Act.

The AI Act, which became law in August 2024, adopts a risk-based approach, categorising AI systems by risk levels. While high-risk systems, such as those in healthcare and judicial applications, face stricter requirements, specific guidelines remain undefined, as noted by the CEO.

Over half (55%) of the 143 members of the Joint Technical Committee on AI (JTC21), established by European standardization bodies CEN and CENELEC, represent corporate or consultancy interests.

The report criticizes most standard-setting organizations, like CEN and CENELEC, for being private and lacking democratic accountability. These bodies often charge high fees and operate with limited transparency, making it difficult for civil society to participate.

Civil society representation reflects these barriers: only 23 representatives (16%) in Corporate Europe Observatory’s sample were from academia, and just 13 (9%) represented civil society.

While the EU’s Annex III organizations advocate for societal interests, they lack voting rights and resources in comparison to corporate participants. Oracle, a major tech corporation, has publicly lauded its involvement in AI standardisation, claiming its efforts ensure “ethical, trustworthy, and accessible” standards, as reported by the CEO.

Bram Vranken, a researcher at CEO, expressed concern about this delegation of public policymaking.

“The European Commission’s decision to delegate public policymaking on AI to a private body is deeply problematic. For the first time, standard setting is being used to implement requirements related to fundamental rights, fairness, trustworthiness and bias,” he said, as reported by Euronews.

CEN and CENELEC did not disclose participants in their AI standards development. CEO’s requests for participant lists from the JTC21 committee were met with silence, and their Code of Conduct enforces secrecy about participant identities and affiliations.

CEO described these rules as overly restrictive, preventing open discussion about committee membership.

In response to the report, the European Commission stated that harmonised standards would undergo assessment to ensure they align with the objectives of the AI Act. Member States and the European Parliament also retain the right to challenge these standards, Euronews reported.

In conclusion, legislators have delegated the task of operationalising these standards to private organisations. Civil society groups and independent experts, often underfunded and outnumbered, are struggling to counterbalance corporate dominance.

This imbalance risks undermining protections against discrimination, privacy violations, and other fundamental rights, leaving Europe’s AI governance largely shaped by industry interests.