
Romantic AI-Powered Chatbots Raise Significant Privacy Concerns

  • Written by Shipra Sanganeria, Cybersecurity & Tech Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

Privacy concerns around AI-powered romantic partners are on the rise as more people use these free or paid apps, a new study by the Mozilla Foundation has revealed.

In its analysis of 11 romantic chatbots, the foundation found a string of security and privacy issues. Despite claiming to protect user privacy, these apps collect and store large amounts of sensitive, often invasive, personal information.

According to the foundation, 90% of the surveyed apps failed to meet Mozilla’s Minimum Security Standards for safeguarding users’ personal data. Among the findings:

  • Almost half (45%) allowed users to create weak passwords, including the one-character password “1” (see the sketch after this list).
  • Most (73%) of these apps didn’t disclose how they manage security vulnerabilities, a notable gap amid growing cyber threats powered by generative AI.
  • Most (64%) haven’t published clear information about encryption and whether they use it.
  • Except for EVA AI Chat Bot & Soulmate, all the apps (90%) may sell or share users’ personal information. On average, the analyzed apps contained 2,663 trackers per minute, Mozilla said.
  • Around half (45%) of the apps don’t let users delete their personal data. As Romantic AI put it, “communication via the chatbot belongs to software,” the study revealed.
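
For illustration, here is a minimal password-strength check of the kind Mozilla’s finding suggests these apps lacked. This is a hypothetical Python sketch; the length threshold and character-class rules are assumptions chosen for illustration, not any surveyed app’s actual policy.

    import re

    def is_weak_password(password: str, min_length: int = 12) -> bool:
        """Return True if the password fails a basic strength policy.

        Hypothetical policy for illustration only; the thresholds are
        assumptions, not rules drawn from any of the surveyed apps.
        """
        if len(password) < min_length:
            return True
        # Require at least one character from each class below.
        required_classes = [
            r"[a-z]",    # lowercase letter
            r"[A-Z]",    # uppercase letter
            r"\d",       # digit
            r"[^\w\s]",  # symbol
        ]
        return any(re.search(pattern, password) is None
                   for pattern in required_classes)

    # "1", the password Mozilla found some apps accepted, fails immediately.
    assert is_weak_password("1")
    assert not is_weak_password("Correct-Horse-Battery-7")

Even a check this small would reject the single-character password the study highlights.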

The stored or harvested personal data includes chat content (user conversations), financial information, device and network data, health data, gender identity and sexual preferences, sexual health, prescriptions, contacts, and audio and visual information.

Among the surveyed apps, Mozilla singled out one chatbot in particular: CrushOn.AI, a Not Safe For Work (NSFW) platform known to collect unnecessary yet highly sensitive personal information.

Mozilla also noted that the origins of the companies running these apps are often unclear. To prevent misuse of their personal information, users should therefore limit their use of these apps and avoid revealing personal details while chatting with them.


GoldPickaxe Malware Harvests Personal and Facial Biometric Data to Scam Victims

  • Written by Shipra Sanganeria, Cybersecurity & Tech Writer
  • Fact-Checked by Justyn Newman, Former Lead Cybersecurity Editor

In a first for iOS devices, security researchers have identified a new banking trojan, dubbed ‘GoldPickaxe,’ that can create deepfakes using stolen facial biometrics.

Available for both Android and iOS devices, the new malware strain is suspected to belong to ‘GoldFactory,’ a Chinese threat group responsible for the ‘GoldDigger’ and ‘GoldKefu’ malware strains. According to researchers at Group-IB, the current targets are mainly in the APAC region, particularly Vietnam and Thailand.

Active since October 2023, the malware relies on various social engineering techniques, including impersonating government and banking organizations, to lure victims into sharing personal information.

According to Thailand Banking Sector CERT (TB-CERT), the threat actors pose as legitimate government agencies or officials to trick victims into installing fraudulent apps.

For instance, trojan-laden Android apps such as ‘Digital Pension,’ promoted via the popular messaging app LINE, are installed from fake corporate websites or counterfeit Google Play pages.

The distribution chain for iOS devices is different: there, the cybercriminals either leveraged Apple’s TestFlight platform or lured victims into installing a Mobile Device Management (MDM) profile through fraudulent websites. These tactics gave the hackers control over the targets’ devices.

Once installed, the malware “prompts the victim to record a video as a confirmation method in the fake application. The recorded video is then used as raw material for the creation of deepfake videos facilitated by face-swapping artificial intelligence services,” Group-IB revealed.

Additional capabilities attributed to the malware include intercepting SMS messages, harvesting personal data, requesting identity documents, and proxying traffic through the target’s device.

Group-IB researchers believe the facial recognition data is ultimately used to access victims’ bank accounts. They also believe the hackers carry out the fraud from their own devices rather than the targets’, an assessment the Thai police have since corroborated.

In conclusion, the security researchers stated that GoldFactory has “well-defined processes, operational maturity, and demonstrate[s] an increased level of ingenuity. Their ability to simultaneously develop and distribute malware variants tailored to different regions shows a worrying level of sophistication.”