Big Tech’s Influence on EU AI Standards Raises Concerns

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Big Tech companies hold considerable sway over the development of EU standards for AI, according to a report by the campaign group Corporate Europe Observatory (CEO).

In a Rush? Here are the Quick Facts!

  • Over half of JTC21 members represent corporate or consultancy interests, not civil society.
  • Corporations prioritize light AI regulations, undermining safeguards for fundamental rights.
  • Civil society and academia struggle to participate in AI standard-setting due to barriers.

The report raises concerns about the inclusivity and fairness of the standard-setting process, which underpins the EU’s newly approved AI Act.

The AI Act, which became law in August 2024, adopts a risk-based approach, categorising AI systems by risk levels. While high-risk systems, such as those in healthcare and judicial applications, face stricter requirements, specific guidelines remain undefined, as noted by the CEO.

Over half (55%) of the 143 members of the Joint Technical Committee on AI (JTC21), established by European standardization bodies CEN and CENELEC, represent corporate or consultancy interests.

The report criticizes most standard-setting organizations, like CEN and CENELEC, for being private and lacking democratic accountability. These bodies often charge high fees and operate with limited transparency, making it difficult for civil society to participate.

Meanwhile, civil society faces logistical and financial barriers to participation. Only 23 representatives (16%) in Corporate Europe Observatory’s sample were from academia, and just 13 (9%) represented civil society.

While the EU’s Annex III organizations advocate for societal interests, they lack voting rights and resources in comparison to corporate participants. Oracle, a major tech corporation, has publicly lauded its involvement in AI standardisation, claiming its efforts ensure “ethical, trustworthy, and accessible” standards, as reported by the CEO.

Bram Vranken, a researcher at CEO, expressed concern about this delegation of public policymaking.

“The European Commission’s decision to delegate public policymaking on AI to a private body is deeply problematic. For the first time, standard setting is being used to implement requirements related to fundamental rights, fairness, trustworthiness and bias,” he said, as reported by Euro News.

CEN and CENELEC did not disclose participants in their AI standards development. CEO’s requests for participant lists from the JTC21 committee were met with silence, and their Code of Conduct enforces secrecy about participant identities and affiliations.

CEO described these rules as overly restrictive, preventing open discussion about committee membership.

In response to the report, the European Commission stated that harmonised standards would undergo assessment to ensure they align with the objectives of the AI Act. Member States and the European Parliament also retain the right to challenge these standards, reported Euro News.

In conclusion, legislators have delegated the task of operationalising these standards to private organisations. Civil society groups and independent experts, often underfunded and outnumbered, are struggling to counterbalance corporate dominance.

This imbalance risks undermining protections against discrimination, privacy violations, and other fundamental rights, leaving Europe’s AI governance largely shaped by industry interests.

Credit: Clayton Metz/Virginia Tech

Study Reveals How Neurotransmitters React To Emotional Words In The Brain

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

A team of scientists has released a new study revealing, for the first time, how neurotransmitters in the human brain behave when processing emotionally charged words.

In a Rush? Here are the Quick Facts!

  • A study led by Virginia Tech scientists reveals, for the first time, how neurotransmitters respond to language with emotional content in areas such as the thalamus and cortex.
  • Dopamine and serotonin can also be released in the human brain when processing the emotional meaning of words.
  • The research could help expand studies on decision-making and mental health.

The study, published in the journal Cell Reports, provides new information on how humans interact with language and its impact on decision-making and mental health. The research was conducted by more than 20 researchers from multiple institutions and led by Virginia Tech scientists.

“The common belief about brain chemicals, like dopamine and serotonin, is that they send out signals related to the positive or negative value of experiences,” said computational neuroscientist Read Montague, professor at the Fralin Biomedical Research Institute at VTC and co-senior author of the study, in a public statement. “Our findings suggest that these chemicals are released in specific areas of the brain when we process the emotional meaning of words.”

Dr. Montague explained that their research suggests that the brain systems that originally evolved to help humans react to positive or negative stimuli in the environment may also be involved in processing language, highlighting the critical role words play in survival.

This study is the first to map and track the release of serotonin, dopamine, and norepinephrine in the brain as people interact with and respond to language and its complex dynamics.

Credit: Batten et al/Cell Reports

“The emotional content of words is shared across multiple transmitter systems, but each system fluctuates differently,” said Dr. Montague. “There’s no single brain region handling this activity, and it’s not as simple as one chemical representing one emotion.”

To reach these conclusions, the scientists took multiple measurements in patients undergoing deep brain stimulation surgery for various conditions. The patients were shown emotionally charged words—drawn from the Affective Norms for English Words (ANEW) database—on a screen while the scientists used electrodes to track neurochemical activity in the thalamus and cortex.

“The surprising result came from the thalamus,” said William “Matt” Howe, an assistant professor with the School of Neuroscience of the Virginia Tech College of Science. “This region hasn’t been thought to have a role in processing language or emotional content, yet we saw neurotransmitter changes in response to emotional words. This suggests that even brain regions not typically associated with emotional or linguistic processing might still be privy to that information.”

This research could inform future studies on decision-making and mental health, while also deepening our understanding of human behaviors related to language. A few weeks ago, another study revealed that humans preferred AI-generated poems, believing they contained more “human-like” language.