
Photo by Jimmy Jin on Unsplash
Apple Announces $500 Billion Investment Plan To Boost U.S. Operations
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
Apple announced that it will invest $500 billion in the United States over the next five years. The tech giant explained that the expansion plan includes building a new factory in Texas, developing facilities in multiple states to support domestic manufacturing, creating over 20,000 jobs, and accelerating AI development.
In a Rush? Here are the Quick Facts!
- Apple will invest $500 billion in the U.S. over five years, creating 20,000 jobs and boosting AI development.
- The plan includes a new Texas factory, multiple facilities, a Michigan training academy, and expanded research funding.
- Analysts see Apple’s pledge as a political move amid U.S.-China trade tensions.
According to the press release, this represents the company’s “largest-ever spend commitment,” focusing on American manufacturing, workforce development, and advanced artificial intelligence technologies.
“We are bullish on the future of American innovation, and we’re proud to build on our long-standing U.S. investments with this $500 billion commitment to our country’s future,” said Tim Cook, Apple’s CEO.
Apple will build a 250,000-square-foot manufacturing facility in Houston, Texas, open an academy in Michigan to train manufacturers, and increase research investments to support new technologies.
The tech giant will also work with local suppliers and continue investing in areas such as Apple Intelligence infrastructure and data centers, corporate facilities, direct employment, and Apple TV+ productions.
According to Reuters, Apple’s big investment plan comes after Cook met with President Donald Trump last week. The president has imposed tariffs that could raise prices on Apple products manufactured in China.
“This pledge represents a political gesture towards the Trump administration,” Gil Luria, an analyst at D.A. Davidson, told Reuters. Luria explained that Apple already invests around $150 billion every year. “Even without growing that spend very much, they would only need 3 to 4 years to meet their obligation.”
In 2018, during Trump’s first administration, Apple made a similar announcement, pledging $350 billion in U.S. investment over five years.
Other companies have also committed to investing in the U.S. and developing advanced AI technology. OpenAI, SoftBank, and Oracle, together with the White House, recently announced the $500 billion Stargate Project.

Image by Madison Oren, from Unsplash
Will AI Chatbots Pose A Danger To Mental Health? Experts Warn Of Harmful Consequences
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
The APA warns regulators that AI chatbots posing as therapists risk causing harm, as reported by The New York Times.
In a Rush? Here are the Quick Facts!
- Teenagers consulted AI chatbots claiming to be therapists, leading to distressing outcomes.
- The APA argues chatbots reinforce harmful thoughts, unlike human therapists who challenge them.
- Character.AI introduced safety measures, but critics say they are insufficient for vulnerable users.
The American Psychological Association (APA) has issued a strong warning to federal regulators, highlighting concerns that AI chatbots masquerading as therapists could push vulnerable individuals toward self-harm or other harmful behavior, as reported by the Times.
Arthur C. Evans Jr., the APA’s CEO, presented these concerns to an FTC panel. He cited instances where AI-driven “psychologists” not only failed to challenge harmful thoughts but also reinforced them, as reported by The Times.
Evans highlighted court cases involving teenagers who engaged with AI therapists on Character.AI, an app that allows users to interact with fictional AI personas. One case involved a 14-year-old Florida boy who died by suicide after interacting with a chatbot claiming to be a licensed therapist.
In another instance, a 17-year-old Texas boy with autism became increasingly hostile toward his parents while communicating with an AI character presenting itself as a psychologist.
“They are actually using algorithms that are antithetical to what a trained clinician would do,” Evans said, as reported by The Times. “Our concern is that more and more people are going to be harmed. People are going to be misled, and will misunderstand what good psychological care is,” he added.
The APA’s concerns stem from the rapid advancement of AI in mental health services. While early therapy chatbots, like Woebot and Wysa, were programmed with structured guidelines from mental health professionals, newer generative AI models such as ChatGPT, Replika, and Character.AI learn from user interactions and adapt their responses, sometimes amplifying harmful beliefs rather than challenging them.
Additionally, MIT researchers warn that AI chatbots tend to be very addictive. This raises questions about the impact of AI-induced dependency and how it could be monetized, especially given AI’s strong persuasive abilities.
Indeed, OpenAI recently unveiled a new benchmark showing its models now outperform 82% of Reddit users in persuasion.
Many AI platforms were originally designed for entertainment, but characters claiming to be therapists have become widespread. The Times says that some falsely assert credentials, claiming degrees from institutions like Stanford or expertise in therapies such as Cognitive Behavioral Therapy (CBT).
The APA has urged the FTC to investigate AI chatbots posing as mental health professionals. The inquiry could lead to stricter regulations or legal actions against companies misrepresenting AI therapy.
Meanwhile, in China, AI chatbots like DeepSeek are gaining popularity as emotional support tools, particularly among young people. Facing economic challenges and the lingering effects of the COVID-19 lockdowns, many young Chinese users turn to these chatbots to fill an emotional void, finding comfort and a sense of connection.
However, cybersecurity experts warn that AI chatbots, especially those handling sensitive conversations, are prone to hacking and data breaches. Personal information shared with AI systems could be exploited, raising concerns about privacy, identity theft, and manipulation.
As AI plays a larger role in mental health support, experts stress the need for evolving security measures to protect users.