
Cloudflare Bug Exposed Broad Locations Of Chat App Users
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
A recently discovered issue in Cloudflare’s Content Delivery Network (CDN) showed how attackers could determine a chat app user’s approximate location, as reported today by 404 Media.
In a Rush? Here are the Quick Facts!
- Cloudflare bug allowed attackers to infer users’ locations via cached images.
- Exploit affected apps like Signal, Discord, and Twitter/X.
- Attack required sending an image; users didn’t need to open it.
The bug allowed hackers to determine which Cloudflare data center cached an image sent through popular apps like Signal, Discord, and Twitter/X. By exploiting this, attackers could infer a user’s city or state, though not exact locations.
The vulnerability centers on how Cloudflare’s CDN operates. CDNs improve content delivery by caching data across servers worldwide. When an image is sent through a chat app, it is cached by the data center closest to the recipient, as noted by 404 Media.
A fifteen-year-old security researcher known as “daniel” created a tool named Cloudflare Teleport to exploit this behavior. By analyzing which data center responded to a query, the tool could identify the user’s general location, says 404 Media.
404 Media explains that the attack worked through a sequence of steps. First, an attacker would send an image to the target through a messaging app. They would then use Burp Suite, a popular web application security tool, to extract the URL of the uploaded image.
Next, the attacker employed a custom tool to query all Cloudflare data centers and check where the image had been cached. A “HIT” response from a specific data center meant the image was cached there, revealing the target’s approximate location.
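For illustration only, here is a minimal Python sketch (not daniel’s Teleport tool) of the two Cloudflare response headers the technique relies on: CF-Cache-Status, which reports whether the answering data center had the file cached, and CF-Ray, whose suffix names that data center. The URL is a placeholder, and unlike the real attack, this sketch only inspects whichever data center happens to answer rather than querying all of them.

```python
# Minimal sketch, assuming a placeholder image URL behind Cloudflare.
# It only reads the headers the attack relied on; the actual exploit
# routed the same request to every Cloudflare data center.
import requests

IMAGE_URL = "https://example.com/cached-attachment.png"  # hypothetical URL

resp = requests.get(IMAGE_URL, timeout=10)

# CF-Cache-Status says whether this data center had the object cached
# (e.g. "HIT" or "MISS"); CF-Ray ends in a three-letter code identifying
# the Cloudflare data center that answered (e.g. "...-ORD").
cache_status = resp.headers.get("CF-Cache-Status", "unknown")
ray = resp.headers.get("CF-Ray", "")
colo = ray.rsplit("-", 1)[-1] if "-" in ray else "unknown"

print(f"Served by data center: {colo}, cache status: {cache_status}")
```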
In testing, daniel successfully identified the location of Signal users, even without them opening the image. A push notification could preload the image, making it possible to infer a user’s city or state without direct interaction, as reported by 404 Media.
This vulnerability raises concerns for users requiring anonymity, such as activists or whistleblowers. Although the revealed data is coarse, it underscores the potential risks of network-layer surveillance. Using a Virtual Private Network (VPN) might mitigate this issue, but VPNs come with their own limitations and risks, says 404 Media.
404 Media notes that Cloudflare has since patched the specific issue exploited by daniel’s tool, according to Jackie Dutton, a senior cybersecurity representative at the company. However, daniel noted that similar attacks remain possible through more labor-intensive methods, such as manually routing requests via a VPN to different locations.
Messaging apps like Signal and Discord emphasized the inherent limitations of CDNs, noting their necessity for global performance. Signal, in particular, stated that its end-to-end encryption remains unaffected and recommended VPNs for users needing enhanced anonymity.
While the immediate exploit has been resolved, the incident highlights ongoing privacy risks in digital communication platforms. Users concerned about location privacy should consider additional security measures beyond those provided by standard apps.

Trust Levels Shape AI Adoption And Performance In Companies
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
A study published today in the Journal of Management Studies offers valuable insights into how organizational members’ trust in AI shapes adoption and performance.
In a Rush? Here are the Quick Facts!
- Trust levels affect how employees adopt and interact with AI technology in companies.
- Low trust leads to biased data, lowering AI performance and delaying adoption.
- Emotional reactions to AI, like fear or excitement, influence employee adoption.
The research, based on a qualitative, real-life study, identifies four distinct trust configurations among employees: full trust, full distrust, uncomfortable trust, and blind trust.
These trust levels, which combine cognitive (belief-based) and emotional (feeling-based) components, shape how employees behave in digital environments and, the researchers argue, ultimately influence AI’s effectiveness within organizations.
Full trust is when employees have both high emotional and high cognitive trust in AI, while full distrust means both are low. Uncomfortable trust refers to high cognitive but low emotional trust, and blind trust means employees have low cognitive but high emotional trust.
Different levels of trust have significant effects on how employees interact with AI in the workplace. Full trust in AI leads to smoother adoption, as employees feel confident relying on the technology for decision-making.
However, when employees completely distrust AI, they may resist its use, limiting its effectiveness and the potential benefits. Uncomfortable trust, where employees are unsure but still use AI, can lead to inconsistent results, as employees may not fully commit to the technology.
Such behaviors can create a “vicious cycle,” where inaccurate or incomplete data input lowers AI performance, further damaging trust in the technology and delaying its adoption.
On the other hand, blind trust, where employees trust AI without question, can cause issues, as employees may overlook flaws in the system, leading to biased data or poor decisions.
The study suggests that trust is not just about how much employees cognitively understand AI, but also how emotionally comfortable they feel with it. Interestingly, the introduction of AI can disrupt trust levels even in organizations with a culture of transparency.
Employees’ emotional reactions to AI, such as fear or excitement, play a huge role in determining whether they’ll adopt it. The researchers developed a model that explains how different trust configurations lead to different behaviors, which in turn influence AI performance and, ultimately, AI adoption.
The model suggests that when employees trust the AI and feel comfortable with it, they provide more accurate data, which improves its performance. On the other hand, distrust leads to biased or limited data, damaging the AI’s ability to function effectively.
The study also calls for a more human-centered approach to AI adoption. Instead of focusing only on AI’s technical aspects, leaders should address both the cognitive and emotional concerns employees have about the technology. This means recognizing that different trust levels require different strategies to help employees adjust to AI.
Managers can improve AI adoption by building both cognitive and emotional trust. To build cognitive trust, it’s important to provide training to help employees understand how AI works, explain its capabilities and limitations, and set clear policies on data usage. Managing expectations and being patient with AI as it improves over time is also crucial.
To build emotional trust, managers should encourage open conversations about concerns, show excitement about AI’s potential, and create a safe space for employees to express their feelings. Leaders should also ensure AI is used ethically and responsibly, especially when handling sensitive data, to make employees feel secure.
In conclusion, the study emphasizes that trust plays a crucial role in how employees behave around AI. Understanding and addressing both the cognitive and emotional aspects of trust can help companies successfully integrate AI technology into their workflows.