
Image by American Life League’s photostream, from Flickr

Department Of Health And Human Services Rolls Out AI Tool

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

The Department of Health and Human Services (HHS) has introduced ChatGPT for all staff members to improve efficiency, while cautioning employees to handle confidential information with care.

In a rush? Here are the quick facts:

  • Rollout overseen by CIO Clark Minor, former Palantir employee.
  • Sensitive data like SSNs and bank details cannot be input.
  • Concerns remain over bias in AI systems affecting patient care.

HHS distributed an email to all staff members announcing that ChatGPT would be rolled out for all staff across the organization. The email, from Deputy Secretary Jim O’Neill, titled “AI Deployment,” is part of an initiative led by Clark Minor, HHS Chief Information Officer, who previously worked at Palantir.

“Artificial intelligence is beginning to improve health care, business, and government,” the email reads, as first reported by 404 Media.

“Our department is committed to supporting and encouraging this transformation. In many offices around the world, the growing administrative burden of extensive emails and meetings can distract even highly motivated people from getting things done. We should all be vigilant against barriers that could slow our progress toward making America healthy again.”

The email adds,

I’m excited to move us forward by making ChatGPT available to everyone in the Department effective immediately. Some operating divisions, such as FDA and ACF [Administration for Children and Families], have already benefitted from specific deployments of large language models to enhance their work, and now the rest of us can join them. This tool can help us promote rigorous science, radical transparency, and robust good health. As Secretary Kennedy said, ‘The AI revolution has arrived.’

HHS staff are instructed to log in using government email addresses and can ask ChatGPT questions, refine answers, and consult multiple perspectives.

“Of course, you should be skeptical of everything you read, watch for potential bias, and treat answers as suggestions. Before making a significant decision, make sure you have considered original sources and counterarguments. Like other LLMs, ChatGPT is particularly good at summarizing long documents,” the email says.

Minor has “taken precautions to ensure that your work with AI is carried out in a high-security environment,” the email adds, noting that most internal data, including procurement-sensitive information, can be entered safely.

It warns, however, that ChatGPT “is currently not approved for disclosure of sensitive personally identifiable information (such as SSNs and bank account numbers), classified information, export-controlled data, or confidential commercial information subject to the Trade Secrets Act.”

The rollout comes amid broader federal efforts to integrate AI and raises concerns about bias in AI systems, especially in programs like Medicare and Medicaid that determine patient eligibility for treatment.


Image by James Yarema, from Unsplash

Study Finds 75% of Popular Free Apps Collect Excessive Data

  • Written by Kiara Fabbri, Former Tech News Writer
  • Fact-Checked by Sarah Frazier, Former Content Manager

A new study shows that three out of four of the most popular free apps in the United States collect excessive amounts of user data.

In a rush? Here are the quick facts:

  • 75% of top free U.S. apps collect excessive user data.
  • Messenger, Pinterest, and Lyft ranked as most intrusive apps.
  • Researchers flagged deceptive design techniques pressuring users to share data.

Tenscope researchers analyzed the 100 most-downloaded free applications on the Apple App Store for their 2025 App Privacy Index. The results show that 75% of these applications track users across different platforms without their knowledge.

The study identified Messenger, Pinterest, and Lyft as the three applications that gather the most user data. Tenscope’s invasiveness index rates Messenger as the most intrusive application, with a score of 100 points.

The researchers say that Messenger collects more than twenty times as much data as the most private apps.

The study also shows that some applications operate perfectly well with minimal data collection. The privacy-focused designs of ParentSquare and Microsoft Edge earned scores of 4 and 11, respectively, demonstrating that privacy-friendly design is achievable.

“Good design empowers users, but what we found is a landscape where design is often used to manipulate them,” said Jovan Babovic, Creative Director and Co-founder of Tenscope.

“This report isn’t just a list; it’s a call for greater transparency and a guide for consumers to reclaim control of their digital identity,” he added.

The report details how certain applications use deceptive design methods to obtain user data, employing confusing permission requests and convoluted settings to pressure users into sharing more data than necessary.

“The highest-scoring apps have one thing in common: their business model relies on knowing as much about you as possible,” Babovic explained. “What this list proves is that data collection is a choice, not a necessity,” he added.