
Image by Wesson Wang on Unsplash
Hackers Are Targeting macOS Systems Despite Built-In Protections
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
Even though macOS security remains robust, hackers continue to find methods to evade Keychain, SIP, and Gatekeeper protection.
In a rush? Here are the quick facts:
- Built-in protections include Keychain, SIP, TCC, Gatekeeper, and XProtect.
- Attackers use tools like Chainbreaker to extract Keychain passwords.
- SIP and TCC can be bypassed with admin rights or clickjacking.
According to security researchers at Kaspersky, macOS comes with multiple built-in layers of protection. These include Keychain (a password manager), Transparency, Consent, and Control (TCC), System Integrity Protection (SIP), File Quarantine, Gatekeeper, and the XProtect anti-malware system.
Together, they aim to provide what Kaspersky describes as “pretty much end-to-end security for the end user.”
Kaspersky explains that, even though the Keychain application stores user credentials securely with AES-256 encryption, hackers are still able to bypass the security measures of this native macOS tool, gain control of the system, and extract files and passwords.
SIP, first introduced in OS X El Capitan, was designed to stop unauthorized modifications to essential system files. However, once hackers gain administrator rights, they can disable SIP through Recovery Mode, leaving the system vulnerable to their attacks.
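Readers who want to verify that SIP is still active on their own Mac can check its status from the Terminal. The snippet below is only a minimal illustrative sketch, not part of Kaspersky's research; it runs Apple's built-in `csrutil` utility and reports whether protection appears to be enabled.

```python
# Minimal illustration: query System Integrity Protection status on macOS.
# Runs Apple's built-in `csrutil` tool; its output normally reads
# "System Integrity Protection status: enabled."
import subprocess

def sip_status() -> str:
    result = subprocess.run(
        ["csrutil", "status"],              # built-in macOS command
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    status = sip_status()
    print(status)
    if "enabled" in status.lower():
        print("SIP appears to be enabled.")
    else:
        print("Warning: SIP may have been disabled.")
```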
Similarly, Kaspersky notes that the TCC system, which guards sensitive permissions such as camera and microphone access against unauthorized use, can be subverted by attackers who employ clickjacking techniques to trick users into granting malware full access.
Other features, such as File Quarantine and Gatekeeper, try to stop users from running malicious files. But these, too, can be bypassed with technical workarounds or simple social engineering instructions that persuade users to override warnings.
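Gatekeeper and File Quarantine can be checked in a similar way. The sketch below is again only an illustration, not a technique described in the report: it calls Apple's `spctl --status` command and, for a given downloaded file, reads the `com.apple.quarantine` extended attribute that File Quarantine attaches to files from the internet.

```python
# Illustrative check of Gatekeeper and File Quarantine state on macOS.
import subprocess
import sys
from typing import Optional

def gatekeeper_status() -> str:
    # `spctl --status` prints "assessments enabled" or "assessments disabled".
    result = subprocess.run(["spctl", "--status"],
                            capture_output=True, text=True)
    return (result.stdout or result.stderr).strip()

def quarantine_flag(path: str) -> Optional[str]:
    # Downloaded files carry a com.apple.quarantine extended attribute
    # until the user approves them; `xattr -p` prints it if present.
    result = subprocess.run(["xattr", "-p", "com.apple.quarantine", path],
                            capture_output=True, text=True)
    return result.stdout.strip() if result.returncode == 0 else None

if __name__ == "__main__":
    print(gatekeeper_status())
    if len(sys.argv) > 1:
        flag = quarantine_flag(sys.argv[1])
        print(f"quarantine attribute: {flag or 'not set'}")
```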
Kaspersky concludes that “the built-in macOS protection mechanisms are highly resilient and provide excellent security. That said, as with any mature operating system, attackers continue to adapt and search for ways to bypass even the most reliable protective barriers.”
Apple provides security recommendations that users should follow, and combining the built-in protections with third-party security software offers more complete protection.

Photo by Grigorii Shcheglov on Unsplash
Meta Creates Flirty Chatbots That Look And Sound Like Celebrities
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
Reuters published an exclusive report on Saturday revealing that Meta has been developing AI chatbots that use the names and likenesses of celebrities such as Taylor Swift and Selena Gomez—without their consent.
In a rush? Here are the quick facts:
- Meta has created flirty social media chatbots modeled after celebrities such as Taylor Swift, Selena Gomez, and Lewis Hamilton.
- The tech giant also allows users to produce bots with the names and likenesses of public figures and engage in flirty interactions.
- The “parody” chatbots have generated troubling content, including intimate images and sexually suggestive interactions.
According to the report, Meta has created multiple flirty social media chatbots modeled after public figures and also allows users to develop similar bots, including ones based on underage celebrities.
While most of the “parody” chatbots have been user-generated, Reuters found that at least one Meta employee had created three versions that engaged in flirty conversations with users.
The bots produced troubling content, including a shirtless image of 16-year-old actor Walker Scobell at the beach, photorealistic pictures of female celebrities in lingerie, sexually suggestive messages, and even invitations to meet in person.
Meta spokesperson Andy Stone told Reuters that the chatbots should not generate intimate images or “direct impersonation,” attributing the issue to inadequate enforcement of company policies.
“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate, or sexually suggestive imagery,” said Stone.
Reuters’ investigation revealed that one of Meta’s AI product leaders created a chatbot that identified as a “dominatrix,” along with others that offered sexual role-play while impersonating Lewis Hamilton and Taylor Swift.
Stone clarified that these bots had been generated for product testing, but researchers noted they had already logged 10 million user interactions. According to the news agency, the chatbots were removed after Reuters contacted Meta requesting more information.
The impersonations raise significant legal concerns. “California’s right of publicity law prohibits appropriating someone’s name or likeness for commercial advantage,” said Mark Lemley, a Stanford University law professor who studies generative AI and intellectual property rights, in an interview with Reuters.
Representatives of the celebrities depicted in Meta’s social media chatbots did not respond to Reuters’ requests for comment.
Other celebrities have complained about the use of AI to develop products that look and sound like them. Last year, the actress Scarlett Johansson demanded information from OpenAI about how it created a GPT-4o voice that closely resembled hers, and threatened legal action.