Image by Ecole polytechnique from Wikimedia Commons

UK, US, Canada Form AI, Cybersecurity Partnership

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • Partnership aims to enhance defence capabilities in cybersecurity and AI.
  • Collaborative efforts focus on resilient systems and trustworthy AI technologies.
  • Agreement addresses rapid technological advancements and evolving defence challenges.

The UK, US, and Canada today announced a new trilateral partnership to advance research in cybersecurity and artificial intelligence (AI).

The agreement, formalized by the UK’s Ministry of Defence, the US Defense Advanced Research Projects Agency (DARPA), and Canada’s Department of National Defence, aims to enhance defence capabilities across the three nations.

The Defence Science and Technology Laboratory (Dstl) will lead the UK’s efforts, while Defence Research and Development Canada (DRDC) will represent Canada.

This collaboration focuses on developing new technologies, methodologies, and tools to address real-world security challenges. Key areas include AI, resilient systems, and information domain technologies.

The new partnership aligns with a recent United Nations report calling for global governance of AI. The UN report highlights AI’s positive impact across sectors, and emphasizes the risks of unchecked development, such as algorithmic bias and privacy threats.

It stresses the need for a coordinated global framework to ensure AI’s benefits are equitably distributed and its risks managed.

Dr. Nick Joad from the UK Ministry of Defence highlighted the significance of these international partnerships in driving forward research in AI and cybersecurity.

DARPA Director Stefanie Tompkins emphasized the strength of collective collaboration, stating that working together enhances each country’s capabilities.

Among the initiatives already underway is the Cyber Agents for Security Testing and Learning Environments (CASTLE) programme, which trains AI to autonomously defend networks against cyber threats.

Other areas of interest include human-AI teaming in military contexts, developing trustworthy AI systems, and improving the resilience and security of information systems.

The partnership is driven by the rapid pace of technological advancement and the need for robust defence strategies in a shifting geopolitical landscape.


Image from Wikimedia Commons

Singapore Considers New Law to Combat Deepfakes in Elections

  • Written by Kiara Fabbri Former Tech News Writer
  • Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • Singapore plans to allow candidates to flag deepfakes.
  • Proposed bill empowers officers to correct misleading content.
  • The upcoming election highlights the challenge of manipulated media.

In a move to enhance public trust in its electoral process, Singapore is contemplating legislative changes that would allow candidates to flag deepfake videos of themselves during elections, as reported today by the SCMP.

The SCMP notes that Senior Minister of State for Digital Development and Information, Janil Puthucheary, announced this proposal during the Festival of Ideas at the Lee Kuan Yew School of Public Policy.

The upcoming general election will see Singapore join a growing number of jurisdictions addressing the challenge of manipulated media.

The proposed Elections (Integrity of Online Advertising) (Amendment) Bill would empower the Returning Officer to issue corrective directions to publishers or service providers when digitally altered content misrepresents candidates.

Additionally, candidates could publicly clarify the authenticity of their statements, with penalties for non-compliance, including fines and imprisonment.

Puthucheary highlighted the potential for AI-driven tools to manipulate voter perceptions, underscoring the bill’s focus on safeguarding the integrity of information in the electoral landscape.

Interestingly, a recent report by the Centre for Election Technology and Security found that AI-generated disinformation and deepfakes did not impact the outcomes of the 2024 European elections, as most AI-enabled disinformation reinforced existing political beliefs rather than swaying undecided voters.

However, the report raises ethical concerns about AI’s role in democracy, noting instances of misleading AI-generated content and the need for clear labeling in political advertising.

The concern about the impact of AI on democracy was also outlined in a recent United Nations report, which emphasizes the need for a global framework to monitor and govern AI.

This initiative follows Singapore’s previous efforts to combat misinformation, including the Protection from Online Falsehoods and Manipulation Act (Pofma) and the Foreign Interference (Countermeasures) Act.

As the global conversation around deepfakes intensifies, Singapore’s legislative efforts could serve as a test case for maintaining electoral integrity and public confidence in democratic institutions.