
Image by Mohamed Hassan, from Pxhere
Balancing AI’s Promise And Perils: UN Advocates For Global Framework
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- UN report calls for global governance of AI technologies.
- AI offers significant benefits but risks widening digital divides.
- Without oversight, AI development could lead to missed opportunities.
A report released by the United Nations on Thursday highlights the need for a global framework to monitor and govern AI. The report calls for the international body to oversee a coordinated effort, addressing both the opportunities and risks presented by AI.
AI is rapidly transforming the world, with potential benefits ranging from scientific breakthroughs and optimized energy grids to improvements in public health and agriculture.
For example, the recently developed AI model FireSat aims to enhance early fire detection, helping to improve response efforts.
The report also highlights AI’s transformative impact across a wide range of scientific disciplines. While some current claims about AI may be overhyped, others have already been validated, and its long-term potential appears promising.
For instance, AI-powered technologies like a robot designed for early lung cancer detection are now operational, and AI models have shown greater accuracy in diagnosing Alzheimer’s disease compared to traditional clinical markers.
The report emphasizes that AI could play a crucial role in advancing the United Nations’ Sustainable Development Goals (SDGs). However, without proper governance, these benefits may not be distributed equitably, with many countries, particularly in the Global South, being left behind.
According to the report, the current lack of global governance poses significant risks. One of the major concerns is the increasing digital divide, which could limit AI’s benefits to only a handful of states, corporations, and individuals.
The report also points out that the unchecked development of AI could lead to missed opportunities, as trust in the technology may erode without proper regulations and oversight.
Beyond equity issues, AI presents a host of challenges, including algorithmic bias, the spread of disinformation, and threats to privacy and security.
The report highlights growing concerns about AI systems that operate independently of human control, such as autonomous weapons, and the impact of AI on the global job market.
As AI systems become more powerful and opaque, traditional regulatory systems struggle to keep up.
The UN report notes that while many governments, companies, and international organizations have developed ethical frameworks and principles for AI governance, there is no comprehensive global system in place.
The lack of coordination has left many countries out of key conversations, with representation skewed heavily toward a small number of nations. The report argues that a truly global effort is required to ensure accountability and equitable access to AI’s benefits.
Ultimately, the report stresses that AI governance should not be left to the private sector or individual governments alone, as the technology transcends national borders.

Photo by Accuray on Unsplash
Hackers Share Stolen Data From India’s Largest Health Insurer via Telegram
- Written by Andrea Miliani Former Tech News Expert
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
In a Rush? Here are the Quick Facts!
- Hackers stole data from Star Health and shared it via Telegram chatbots
- Reuters got access to more than 1,500 documents and confirmed the authenticity of a selection of them
- Star Health and Allied Insurance claims data is secure
Hackers shared data stolen from Star Health, India’s largest health insurer, through chatbots on Telegram, according to a recent exclusive by Reuters.
The data, which included medical reports, was offered for sale through the platform. The malicious actors also provided samples via the chatbots, which would supply documents to anyone who asked to view them.
Star Health and Allied Insurance assured Reuters that sensitive information had not been compromised and that customers’ data remains secure, although it did report unauthorized access to authorities in India. Even so, the news agency was able to download documents through the chatbots that included tax details, names, phone numbers, test results, addresses, copies of IDs, and medical diagnoses.
During its research, Reuters was able to download 1,500 files, including recent records from July this year. “If this bot gets taken down watch out and another one will be made available in a few hours,” the bot wrote in a welcome message on the platform.
One of the chatbots provides PDF documents, while the other offers up to 20 samples of sensitive information drawn from 31.2 million data sets in just a few clicks. One of the documents retrieved by Reuters included details of the “treatment of the one-year-old daughter of policyholder Sandeep TS”; the information was confirmed as accurate by TS, who had not been notified by the health insurer about the leak.
The chatbots were reported and taken down within 24 hours, but new ones kept reappearing.
“The sharing of private information on Telegram is expressly forbidden and is removed whenever it is found. Moderators use a combination of proactive monitoring, AI tools, and user reports to remove millions of pieces of harmful content each day,” said Remi Vaughn, a spokesperson from Telegram.
This story comes just weeks after Telegram’s CEO Pavel Durov was arrested in France over allegations that the platform’s insufficient moderation and safety measures allowed criminal activity to take place on it.