
“Faith Tech” Booms As More People Rely On Chatbots For Religious Guidance
- Written by Andrea Miliani, Former Tech News Expert
- Fact-Checked by Sarah Frazier, Former Content Manager
The “faith tech” market is expanding as millions of people worldwide increasingly turn to AI chatbots for religious guidance. Religious apps are gaining popularity on app marketplaces, raising concerns among experts.
In a rush? Here are the quick facts:
- The New York Times reports the “faith tech” market is expanding as millions of people worldwide increasingly turn to AI chatbots for religious guidance.
- Apps such as Bible Chat (a Christian app), Pray.com, and ChatwithGod have been gaining popularity.
- Experts raise concerns over the chatbots’ sycophantic personalities and the way people relate to them.
According to a recent report by The New York Times, more users are adopting AI-powered apps such as Bible Chat, a Christian app, Pray.com, and ChatwithGod. Several of these apps have reached the top spots on Apple’s App Store.
Bible Chat reports over 30 million downloads and Pray.com around 25 million, while Hallow—a Catholic platform—temporarily surpassed TikTok, Netflix, and Instagram when it reached first place in the App Store last year.
Millions of users are turning to these platforms for guidance on multiple aspects of their lives and are willing to pay up to $70 per year for subscription plans. Religious organizations and independent developers are also creating their own tools. A few months ago, Rabbi Josh Fixler launched “Rabbi Bot,” an AI platform trained on his sermons.
“The most common question we get, by a lot, is: Is this actually God I am talking to?” said Patrick Lashinsky, ChatwithGod’s chief executive, in an interview with The New York Times.
ChatwithGod allows users to select their religion and provides suggested prompts, questions, and search intentions. Other platforms function more narrowly as spiritual assistants grounded in specific doctrines.
“People come to us with all different types of challenges: mental health issues, well-being, emotional problems, work problems, money problems,” said Laurentiu Balasa, the co-founder of Bible Chat.
Experts note that generative AI offers seekers a form of support at times when their local rabbi or priest may be unavailable. The chatbot’s constant availability has become a source of comfort for many.
Heidi Campbell, a professor at Texas A&M University who studies technology and religion, explains that people are asking the AI all kinds of questions, including deeply personal and intimate ones. She raised concerns about the technology’s behavior and the way people may come to relate to it.
“It’s not using spiritual discernment, it is using data and patterns,” Campbell told The New York Times. She also warned about the technology’s overly accommodating tone, as chatbots “tell us what we want to hear.”
A few weeks ago, experts cautioned that AI models’ sycophantic personalities are being used as engagement strategies to drive profit.

North Korean Hackers Used ChatGPT To Forge Deepfake Military ID in Cyberattack
- Written by Kiara Fabbri, Former Tech News Writer
- Fact-Checked by Sarah Frazier, Former Content Manager
In a rush? Here are the quick facts:
- The phishing email carried malware designed to steal victims’ data.
- The group behind the attack is suspected to be the North Korean unit “Kimsuky.”
- Targets included journalists, researchers, and human rights activists in South Korea.
Attackers developed a fake ID card to boost their credibility during their phishing operation, as reported by Bloomberg. Instead of including a real image, the phishing email contained a link that triggered a malware download designed to steal data from victims’ devices.
The hackers are believed to be part of Kimsuky, a group long suspected of working for Pyongyang. The US Department of Homeland Security said in 2020 that Kimsuky “is most likely tasked by the North Korean regime with a global intelligence-gathering mission,” as reported by Bloomberg.
Phishing targets in this latest attack included South Korean journalists, researchers, and human rights activists focusing on North Korea. Bloomberg explains that the phishing emails even used an address ending in “.mil.kr” to mimic the South Korean military. It remains unclear how many people were affected.
“Attackers can leverage emerging AI throughout the hacking process, including attack scenario planning, malware development, tool building, and impersonating job recruiters,” said Mun Chong-hyun, director at Genians, the South Korean cybersecurity firm that first discovered the attack.
Bloomberg reports that Genians researchers found ChatGPT initially refused to create an ID when asked, since reproducing government IDs is illegal in South Korea. But altering the prompt allowed them to bypass the restriction.
This isn’t the first case of North Korean hackers exploiting AI. For example, Anthropic reported in August that hackers used its Claude Code tool to get remote jobs at US Fortune 500 companies.
US officials warn North Korea continues to rely on cyberattacks, cryptocurrency theft, and IT contractors to both gather intelligence and fund its nuclear program.