
Photo by Oleksii Shikov on Unsplash
Apple Reportedly Working On Home Security Device With Face ID Technology
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
The tech giant Apple is reportedly working on several home security products, including a new doorbell device that uses the company’s Face ID technology to unlock the door.
In a Rush? Here are the Quick Facts!
- Apple reportedly building a new doorbell security system featuring Face ID
- The new device will compete with Amazon’s Ring and Google’s Nest Doorbell
- The device is still at an early stage, and Apple might partner with lock makers to provide a full system
A new report from Bloomberg’s Mark Gurman revealed that the company is developing multiple home technologies to bring to market soon, including security cameras for front doors and a newly disclosed Face ID doorbell system.
“Apple is considering taking on Ring and Nest, and is developing a smart doorbell with Face ID that can unlock your door like Face ID unlocks your iPhone,” Gurman wrote on X on December 22, 2024.
The new device, powered by Face ID technology, would connect to a deadbolt lock: it would scan the faces of people at the front door and, using the company’s facial recognition technology, unlock the door for anyone registered as a resident.
The doorbell security system is still at an early stage, and Apple may partner with a lock maker to offer a complete system soon.
The development of security devices aligns with the company’s focus on privacy and security, and Apple expects customers to trust its track record as a company that cares about protecting its users.
Apple already has experience managing video security footage through a feature known as HomeKit Secure Video, and adding video cameras and a doorbell security system would allow the company to expand its services and gain more cloud subscriptions.

Image by AP, from FMT
Apple Urged to Remove AI Feature After False Headline Controversy
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
A major journalism watchdog has urged Apple to withdraw its generative AI feature after the tool falsely attributed a headline about a high-profile murder case to BBC News.
In a Rush? Here are the Quick Facts!
- Apple’s AI tool falsely attributed a headline about Luigi Mangione to BBC News.
- Reporters Without Borders calls for Apple to remove the AI feature.
- The BBC filed a complaint after the false headline emerged.
The BBC reported on Thursday that the controversy arose after Apple’s new feature, Apple Intelligence, generated an inaccurate headline suggesting that Luigi Mangione, a suspect in the murder of healthcare insurance CEO Brian Thompson, had shot himself. The BBC had published no such article.
Reporters Without Borders (RSF), an international organization advocating for press freedom, criticized Apple, stating the incident underscores the risks posed by immature AI tools to media credibility and public trust.
Vincent Berthier, head of RSF’s technology and journalism desk, condemned the misstep, calling it a “blow to the outlet’s credibility” and a threat to public access to reliable information. RSF demanded Apple remove the feature immediately.
“The European AI Act — despite being the most advanced legislation in the world in this area — did not classify information-generating AIs as high-risk systems, leaving a critical legal vacuum. This gap must be filled immediately,” said Berthier.
The BBC confirmed it had lodged a formal complaint with the tech giant and requested a resolution, but it remains unclear whether Apple has responded.
The flawed AI summaries were not limited to BBC News. On November 21, the feature inaccurately summarized a New York Times article, falsely suggesting that Israeli Prime Minister Benjamin Netanyahu had been arrested, as reported by the BBC.
This notification, later highlighted by journalist Ken Schwencke, actually referred to an International Criminal Court warrant against Netanyahu.
“Apple AI notification summaries continue to be so so so bad,” Schwencke posted on November 21, 2024.
Apple Intelligence aims to reduce notification interruptions by summarizing and grouping them. Available on iPhones running iOS 18.1 and later, as well as select iPads and Macs, the tool has faced backlash for misrepresenting information from reputable news outlets.
A BBC spokesperson emphasized that while some grouped notifications accurately summarized stories about global events, the misinformation involving Mangione demonstrated the technology’s potential to harm both media outlets and public understanding.
RSF warned that “This accident illustrates that generative AI services are still too immature to produce reliable information for the public, and should not be allowed on the market for such uses.”
Ars Technica reviewed Apple’s notification summaries, noting several issues that arise frequently. First, while some summaries are factually correct, they sound robotic and awkward, particularly on sensitive subjects like relationships or mental health. Others misunderstand sarcasm, slang, or idiomatic expressions, often interpreting them too literally.
Loss of context is another major problem, particularly when summaries miss earlier messages in a thread or fail to consider multimedia content. In group chats with multiple topics, summaries can become overloaded, mixing subjects and leading to inaccuracies.
Finally, some summaries are just plain wrong, often due to an attempt to condense too much information into too few words, which results in distorted accounts. Overall, these problems, which affect the usefulness and accuracy of the feature, are frequent in everyday use.
Mangione, now charged with first-degree murder, has become a focal point in the debate over the risks of AI in journalism. The public’s reliance on credible news remains at stake as Apple faces mounting pressure to address the issue.