
Image by Vladimir Fedotov, from Unsplash
Should AI Influence End-of-Life Medical Choices? Experts Discuss
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
AI is already revolutionizing healthcare in areas like imaging and diagnostics. However, as AI technology continues to advance, there are growing discussions about its potential role in end-of-life medical decision-making.
In a Rush? Here are the Quick Facts!
- End-of-life decisions should reflect patients’ wishes and be medically appropriate.
- AI can help provide prognostic information and assist in decision-making processes.
- AI may help with incapacitated patients who lack advance directives but has limitations.
Rebecca Weintraub Brendel, director of the Harvard Medical School Center for Bioethics, recently discussed the ethical implications of using AI in critical decision-making in a Harvard Medical School press release.
Weintraub Brendel emphasized that end-of-life choices ultimately reflect patients’ wishes, provided they are competent to make those decisions and the choices are medically appropriate.
But complications arise when a patient is unable to express their desires due to illness. In these cases, understanding both the cognitive and emotional implications of the decision becomes essential.
For instance, patients with progressive neurological conditions, such as ALS, may eventually reach a point where they are prepared to make end-of-life decisions. Conversely, individuals with cancer often experience significant shifts in their mindset once symptoms are addressed, leading them to reconsider their choices.
“People sometimes say, ‘I would never want to live that way,’ but they wouldn’t make the same decision in all circumstances,” noted Weintraub Brendel.
The conversation then turned to younger patients facing life-altering injuries. “When we’re faced with something that alters our sense of bodily integrity, our sense of ourselves as fully functional human beings, it’s natural, even expected, that our capacity to cope can be overwhelmed,” said Weintraub Brendel.
However, many individuals, even those suffering from severe injuries, report an improved quality of life over time, highlighting the importance of resilience and hope.
Weintraub Brendel also discussed the potential role of AI in helping patients navigate these tough decisions. AI systems could offer valuable insights into what might be expected during the progression of a chronic illness or how a person may cope with pain.
With its ability to process vast amounts of data, AI could provide prognostic information, helping clinicians and patients better understand potential outcomes and make informed decisions. “AI could give us a picture that could be helpful,” she explained.
One of the more contentious issues, however, is the use of AI when patients are incapacitated and lack advance directives. In such cases, medical teams often rely on assumptions about what the patient would have wanted.
“I’m less optimistic about the use of large language models for making capacity decisions or figuring out what somebody would have wanted. To me it’s about respect. We respect our patients and try to make our best guesses, and realize that we all are complicated, sometimes tortured, sometimes lovable, and, ideally, loved,” Weintraub Brendel argues.
Weintraub Brendel stresses that “having a better prognostic sense of what might happen is really important,” but cautions against over-relying on AI without acknowledging the complexity of human values.
Despite its potential, Weintraub Brendel is wary of AI’s role in making ethical decisions. “We can’t abdicate our responsibility to center human meaning in our decisions, even when based on data,” she stated. While AI can assist in diagnostics and provide valuable insights, the final decision-making should remain a human responsibility.
Ultimately, the integration of AI into healthcare, particularly in end-of-life decisions, requires careful ethical consideration.
As Weintraub Brendel put it, “We have to ask, ‘How do we do that and follow our values of justice, care, respect for persons?’” As technology advances, the balance between human judgment and AI’s capabilities will continue to shape the future of medical care.

Steam Users Warned After Downloading Malware-Laden Game PirateFi
- Written by Kiara Fabbri Former Tech News Writer
- Fact-Checked by Justyn Newman Former Lead Cybersecurity Editor
A free-to-play game on Steam called PirateFi has been found to carry malware that can steal your online information. The game, which was released on February 8, 2025, was flagged by users and antivirus software as dangerous, triggering a warning from Steam itself.
In a Rush? Here are the Quick Facts!
- PirateFi is a free-to-play game that infects PCs with dangerous malware.
- Malware steals browser cookies, giving hackers access to online accounts.
- Steam warned affected users to run antivirus scans and check for suspicious software.
The malware, once installed, targets users’ browser cookies, allowing the hacker behind PirateFi to hijack online accounts, as first reported by PCMag.
This means your social media, email, and even banking accounts could be at risk. Steam has advised affected users to run antivirus scans and check their computers for unfamiliar software.
SteamDB posted on X on February 12, 2025: “A game called PirateFi released on Steam last week and it contained malware. Valve have removed the game two days ago. Users that played the game have received the following email: pic.twitter.com/B98BFs0WbK”
One user who tried to run PirateFi said their antivirus flagged the game as a trojan, specifically “Trojan.Win32.Lazzzy.gen.”
The malware downloads itself onto the computer and hides under the filename Howard.exe. From there, it begins stealing browser cookies, giving the hacker access to various accounts, as reported by PCMag.
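For readers following Steam's advice to check for unfamiliar software, a minimal sketch like the one below could scan a directory tree for the reported filename. The function name and the single-filename heuristic are illustrative assumptions, not part of Steam's or PCMag's guidance, and a filename match (or its absence) is no substitute for a full antivirus scan:

```python
import os

def find_suspicious_files(root, names=("Howard.exe",)):
    """Walk a directory tree and return the paths of any files whose
    name matches one of the given suspicious names (case-insensitive)."""
    lowered = {n.lower() for n in names}
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for fname in filenames:
            if fname.lower() in lowered:
                hits.append(os.path.join(dirpath, fname))
    return hits
```

Running it over a user profile directory (for example, `find_suspicious_files(os.path.expanduser("~"))`) would surface any file named Howard.exe, though malware can of course rename or re-hide itself.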
The damage doesn’t stop there. Several users who downloaded PirateFi reported that their accounts were hijacked.
One user mentioned that their Microsoft account was stolen, while another saw $20 taken from their Roblox account and scam messages sent to friends. Even Steam points were stolen and used to buy awards for fake bot accounts.
The game has been circulating in various ways, including through a job offer posted in a Telegram chat, claiming to pay $17 an hour for an in-game chat moderator position. A PCMag reader investigated and discovered the job offer was a trick to get users to download PirateFi onto their computers.
Users have also noticed that the screenshots for PirateFi on Steam seem to be copied from another game, raising further suspicions about the game’s legitimacy.
Steam has urged users to reinstall Windows to ensure the malware is fully removed, as it may be difficult to completely clean your system otherwise. According to SteamDB, PirateFi may have been downloaded by over 800 users.
Valve, the company behind Steam, has not yet commented on how the malware-laden game made it onto its platform.