
Image by Freepik

Growing Link Between Cybercriminals And State Hackers Raises Security Concerns

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

In a Rush? Here are the Quick Facts!

  • Most cyberattacks today are financially motivated, with criminals using ransomware and scams.
  • Hospitals are increasingly targeted, with data leaks doubling in the past three years.
  • Russia, Iran, China, and North Korea use cybercrime to support espionage and financial goals.

The growing link between cybercriminals and state hackers is particularly evident in state-backed groups' use of cybercrime tactics to conceal espionage activities and fund their operations.

Google-owned cybersecurity firm Mandiant highlighted this development on Tuesday, noting that financially motivated cybercrime now dominates online threats, accounting for most of the malicious activity detected by security teams.

In 2024, Mandiant responded to nearly four times more financially motivated cyber intrusions than those linked to nation-states. However, researchers warn that while cybercrime often receives less attention from national security experts, its impact can be just as severe as espionage-related attacks.

“A hospital disrupted by a state-backed group using a wiper and a hospital disrupted by a financially motivated group using ransomware have the same impact on patient care,” Mandiant researchers wrote.

This concern is especially relevant as cybercriminals increasingly target healthcare institutions, with data leak incidents in the sector doubling over the past three years.

Beyond direct threats, cybercriminal groups are also enabling state-backed hacking efforts. Nation-states are increasingly purchasing cyber capabilities from these groups or co-opting them for espionage and disruptive operations.

Russia, for instance, has relied on cybercriminal expertise in its cyber warfare against Ukraine. The Russian military intelligence unit APT44, also known as Sandworm, has reportedly used malware from cybercrime networks to conduct cyberattacks.

Similarly, RomCom, a group historically focused on financial cybercrime, has been involved in espionage operations against the Ukrainian government since 2022, as reported by the researchers.

This pattern extends beyond Russia. Iranian hacking groups deploy ransomware for financial gain while simultaneously conducting espionage. Chinese espionage groups often engage in cybercrime to supplement their income.

Mandiant emphasized that alongside law enforcement efforts, systemic solutions such as bolstering cybersecurity education and resilience are necessary to curb the growing cybercrime ecosystem.

As cybercrime and espionage continue to converge, experts warn that the threat landscape will become even more complex, demanding stronger global coordination to combat both financially and politically driven cyber threats.


Photo by Annie Spratt on Unsplash

Google Introduces New Machine Learning Model To Estimate User Age

  • Written by Andrea Miliani, Former Tech News Expert
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Google announced new digital protection tools on Wednesday, including a new AI tool being tested in the United States to estimate users' ages and detect whether a user is under 18.

In a Rush? Here are the Quick Facts!

  • The new machine-learning tool is expected to estimate users’ age and detect those under 18.
  • Google said it will help provide more “age-appropriate” experiences.
  • The AI tool is being tested in the United States, but the company expects to expand it to more countries soon.

The tech giant explained that the new machine learning technology will be tested to prevent underage users from accessing inappropriate content and to address one of the company's biggest challenges: determining users' ages.

“This year we’ll begin testing a machine learning-based age estimation model in the U.S.,” wrote Jen Fitzpatrick, Senior Vice President at Google, in a shared document. “This model helps us estimate whether a user is over or under 18 so that we can apply protections to help provide more age-appropriate experiences. We’ll bring this technology to more countries over time.”

The new AI model could help provide more personalized experiences, as part of the company’s efforts to build a safe environment for users of all ages. Fitzpatrick also highlighted previous initiatives such as the restriction of sensitive ad content, a SafeSearch Filter for children, and its “teen wellbeing” protection on YouTube.

According to The Verge, YouTube CEO Neal Mohan revealed on Tuesday that the new AI tool will use existing data about users, such as videos watched, how long they have had an account, and sites visited, and will notify them of settings changes when it detects that a user could be under 18. If the user is actually an adult, they will need to verify their age with official documentation, such as a government ID.
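Purely as an illustration of the kind of signal-based classification The Verge describes, the sketch below shows a toy logistic scorer in Python. The feature names, weights, and threshold are assumptions invented for the example and do not reflect Google's actual model.

```python
# Illustrative sketch only: a toy age-estimation classifier over behavioral
# signals similar to those described above (account age, viewing history,
# sites visited). Feature names, weights, and the threshold are invented for
# this example and are not Google's model.
import math

def estimate_is_minor(signals: dict) -> bool:
    """Return True if the toy model estimates the user is under 18."""
    # Hand-picked weights: positive values push the score toward "minor".
    weights = {
        "account_age_years": -0.6,     # older accounts suggest an adult
        "kids_content_ratio": 2.5,     # share of watch time on kids' content
        "late_night_activity": -0.8,   # frequent late-night use suggests an adult
    }
    bias = -0.2
    score = bias + sum(weights[k] * signals.get(k, 0.0) for k in weights)
    probability_minor = 1 / (1 + math.exp(-score))  # logistic squash to [0, 1]
    return probability_minor > 0.5

# Example: a young-looking usage pattern triggers the "under 18" path, which
# in the article corresponds to notifying the user of settings changes.
profile = {"account_age_years": 0.5, "kids_content_ratio": 0.7, "late_night_activity": 0.1}
if estimate_is_minor(profile):
    print("Apply age-appropriate settings and notify the user.")
else:
    print("Keep adult defaults.")
```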

Besides the AI model, Google also announced an update to Google Family Link, its parental control service, adding tools to help kids focus, let families set School Time, and manage contacts approved by parents.

A few days ago, Google also changed its AI safety policies in a different area. The tech giant updated its AI ethics guidelines, removing language that had ruled out using AI to develop weapons or surveillance tools.