Canada’s Privacy Commissioner Pushes For Digital ‘Right To Be Forgotten’

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

Canada’s Privacy Commissioner says people have the right, in some cases, to have personal information removed from online search results; however, Google is refusing to comply.

In a rush? Here are the quick facts:

  • The case involves a dropped criminal charge still showing in Google search results.
  • The commissioner ruled Google violated Canada’s privacy law, PIPEDA, by refusing de-listing.
  • Google argues de-listing should be balanced with freedom of expression and information rights.

The case revolves around an individual who faced a criminal charge that was eventually dismissed. Despite this, articles about the charge continue to appear whenever the person’s name is searched on Google.

The individual argued that this has caused them serious harm, including “physical assault, lost employment opportunities, and severe social stigma,” according to a news release by The Privacy Commissioner published on Wednesday.

The release reports that Privacy Commissioner Philippe Dufresne investigated the complaint and found that Google violated the Personal Information Protection and Electronic Documents Act (PIPEDA), a federal privacy law.

The Privacy Commissioner ordered Google to remove the articles from search results so users could no longer find them through name-based searches. However, the articles would still remain online and could be found in other ways.

“Individuals have the right, under Canadian privacy law, to have information about them de-listed from online searches for their name in certain circumstances when there is a significant risk of harm that outweighs the public interest in that information remaining accessible through such a search,” said Dufresne.

The Commissioner stressed that this right applies only in “limited circumstances,” such as when the information is outdated, inaccurate, relates to minors, or poses risks to dignity and safety. His office noted that it is considering “all available options to secure Google’s compliance with the act.”

Google, however, has pushed back. A spokesperson said to CBC that the company is reviewing the report but is “strongly of the view that consideration of a so-called ‘right to be forgotten’ must be appropriately balanced with the freedom of expression and access to information rights of Canadians, the news media and other publishers, and therefore should be determined and defined by the courts.”

The battle over whether Canadians have a digital “right to be forgotten” has been ongoing since the original complaint was filed in 2017. Courts have repeatedly ruled that Google’s search engine is covered by privacy law, but the company still refuses to comply with the Commissioner’s recommendation.

Anthropic Reveals Hacker Used Its Chatbot for Cyberattacks

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Sarah Frazier, Former Content Manager

The AI company Anthropic released a new cybersecurity report on Wednesday, revealing that malicious actors have used its AI model, Claude, for sophisticated criminal operations. The startup said that one hacker recently targeted around 17 organizations, leveraging a new technique known as “vibe hacking.”

In a rush? Here are the quick facts:

  • Anthropic revealed that hackers have been using its AI model Claude for sophisticated criminal operations.
  • The startup shared a case in which a malicious actor operating from North Korea used Claude Code in ransomware attacks targeting around 17 organizations.
  • The startup warns about emerging “vibe hacking” cases.

According to Anthropic’s announcement, the company has implemented several security measures to prevent misuse of the technology, but cybercriminals have found ways to exploit it.

In one of the reported cases, the startup disclosed a major criminal operation in which a hacker with basic coding skills used Claude Code, an agentic coding tool, to carry out a fraudulent scheme originating from North Korea.

“AI models are now being used to perform sophisticated cyberattacks, not just advise on how to carry them out,” states the announcement. “Criminals with few technical skills are using AI to conduct complex operations, such as developing ransomware, that would previously have required years of training.”

In the main case presented, the hacker used Claude Code to steal data from 17 organizations, including emergency services, government agencies, healthcare providers, and religious institutions. The malicious actor then extorted victims, in some cases demanding over $500,000.

Anthropic reported that its AI agent was used to decide what data to exfiltrate, draft extortion messages, and even suggest ransom amounts tailored to each victim.

“The actor used AI to what we believe is an unprecedented degree,” states the announcement. “This represents an evolution in AI-assisted cybercrime.”

The company said that as soon as it detected the malicious operations, it blocked the accounts and developed new screening and detection tools to prevent similar cases in the future. More details were included in the full report.

Anthropic also warned about emerging “vibe hacking” techniques. “We expect attacks like this to become more common as AI-assisted coding reduces the technical expertise required for cybercrime,” noted the company.

Finally, Anthropic highlighted the growing integration of AI into existing cyber schemes, citing examples such as the North Korean IT workers scam reported in April, in which hackers stole the identities of American citizens to secure remote jobs in the United States.