
Image by rawpixel.com, from Freepik

Google Issues Warning Of Second Cyberattack Wave Targeting 2.5 Billion Gmail Users

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Google has alerted its 2.5 billion Gmail users to an impending second wave of cyber threats this holiday season, urging vigilance against phishing and malware attacks. Google warns that attackers are persistent and typically intensify their efforts during this period.

In a Rush? Here are the Quick Facts!

  • Google warns of a second wave of phishing and malware scams targeting Gmail.
  • Gmail has blocked over 99.9% of spam, phishing, and malware this holiday season.
  • Scams this season include fake invoices, celebrity endorsements, and extortion threats.

Since mid-November, Google has observed a “massive surge” in email traffic, increasing the challenge of maintaining inbox security. Despite these threats, Google noted a 35% drop in phishing attacks compared to last year’s holiday season.

However, Gmail remains a prime target due to its vast user base, prompting Google to invest heavily in security measures that block over 99.9% of spam, phishing, and malware.

Google’s recent blog post highlighted significant improvements in security, with Gmail users reporting one-third fewer scams during the first month of the holiday season than in 2023. Google’s systems blocked millions of additional harmful messages before they reached users’ inboxes.

While Gmail’s robust systems block the majority of threats, scammers are adapting their tactics, making user awareness critical. Google warns that this season has seen a surge in three types of scams. Invoice scams use fake billing emails designed to provoke a dispute; during the ensuing exchange, scammers pressure victims into making payments.

Celebrity scams exploit the names of famous individuals, either by impersonating them or falsely claiming endorsements for products, tricking users with “too good to be true” promises. Extortion scams take a more menacing approach, using personal details such as home addresses to issue threats of harm or exposure unless demands are met.

Additionally, Check Point researchers recently revealed that cybercriminals are now exploiting Google Calendar and Google Drawings for phishing attacks. Fake calendar invitations redirect users to malicious links designed to steal sensitive information.

Meanwhile, Forbes reported a rise in AI-driven phishing scams. Attackers use AI to mimic Google support, creating hyper-realistic calls and emails that deceive even experienced users.

In one case, a scammer combined fake recovery notifications, a spoofed Google phone number, and an AI-generated call to trick a Microsoft consultant. Another incident involved a phishing attempt exploiting a false claim about a family member’s death to approve fraudulent account recovery.

To stay secure, Google advises users to slow down and carefully evaluate emails, especially those that create urgency or fear. Verifying the authenticity of messages and senders is essential, as is refusing to share sensitive information or make payments under pressure.


Image by Solen Feyissa, from Unsplash

Contractors Warn New Google Guidelines Could Affect Gemini’s Accuracy On Sensitive Topics

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

A recent shift in internal guidelines at Google has raised concerns over the accuracy of its Gemini AI, particularly when it comes to handling sensitive or highly specialized topics.

In a Rush? Here are the Quick Facts!

  • Google contractors can no longer skip prompts outside their expertise for Gemini evaluation.
  • Contractors now rate AI responses they don’t fully understand, noting lack of expertise.
  • Contractors previously skipped prompts on complex topics like cardiology or rare diseases.

Contractors working on the Gemini project, who are tasked with evaluating the accuracy of AI-generated responses, can no longer skip prompts outside their domain expertise. This change, first reported by TechCrunch, could potentially impact the reliability of information provided by the AI on topics such as healthcare, where precise knowledge is crucial.

TechCrunch notes that contractors at GlobalLogic, an outsourcing firm owned by Hitachi, were previously tasked with evaluating AI responses based on factors like “truthfulness” and were allowed to bypass prompts outside their expertise.

For example, if asked to evaluate a technical question about cardiology, a contractor with no scientific background could skip it.

However, under the new guidelines, contractors are now instructed to evaluate responses to all prompts, including those requiring specialized knowledge, and note any areas where they lack expertise, as reported by TechCrunch.

The new rule has led to concerns about the quality of ratings for complex topics. Contractors, often without the necessary background, are now tasked with judging AI responses on issues such as rare diseases or advanced mathematics.

In internal correspondence seen by TechCrunch, one contractor expressed frustration and questioned the logic behind eliminating the skip option: “I thought the point of skipping was to increase accuracy by giving it to someone better?”

TechCrunch reports that the updated guidelines allow contractors to skip prompts in only two cases: when the prompt or response is incomplete, or when it contains harmful content that requires special consent to evaluate.

This restriction has raised alarms among those working on Gemini, who worry that the AI could produce inaccurate or misleading information in highly sensitive areas.

TechCrunch reports that Google has not provided a detailed response to the concerns raised by contractors.

However, a spokesperson emphasized to TechCrunch that the company is “constantly working to improve factual accuracy in Gemini.” They further clarified that while raters provide valuable feedback across multiple factors, their ratings do not directly impact the algorithms but are used to gauge overall system performance.

Mashable noted that the report questions the rigor and standards Google claims to apply when testing Gemini for accuracy.

In the “Building responsibly” section of the Gemini 2.0 announcement, Google stated that it is “working with trusted testers and external experts and performing extensive risk assessments and safety and assurance evaluations.”

While there is a reasonable emphasis on evaluating responses for sensitive and harmful content, less attention seems to be given to responses that, while not harmful, are simply inaccurate, as noted by Mashable.