Is AI the New Confessional... or a Disaster in the Making?
The Dangers of using AI as your Therapist and Confidant
Artificial Intelligence is a marvel of modern technology. It makes many dull, repetitive tasks easy compared to the old-fashioned way of doing things. I’m old enough to remember the world before the Internet. Back in the day, if someone wanted to do any deep research, they had to find a library, seek out the help of a librarian, and spend many hours poring through card catalogues, microfiche, and indexes on various topics. It was a tedious and laborious process. The Internet made research easier, but AI takes it to a whole new level.
At the recent launch of Grok 4, Elon Musk claimed that Grok, xAI’s Large Language Model (LLM), now possesses PhD-level knowledge in virtually every domain. You and I can now query ChatGPT or Grok and get back high-quality answers on almost any topic imaginable.
The Age of the AI Boyfriend
It should surprise no one that once people discovered ChatGPT and other LLMs could respond with human-like language, they started initiating personal conversations with them. If someone is personally and relationally isolated, conversing with an AI that answers kindly and sympathetically feels good. For some, the line between artificial and real starts to blur or even disappears altogether. Some have even taken to calling ChatGPT their boyfriend.
Some have even created custom GPTs to exploit this trend. One I found is called AI Boyfriend, and this is how its creators describe it:
The AI Boyfriend app simulates a real-life boyfriend and provides companionship and emotional support to its users. It utilizes state-of-the-art artificial intelligence technology, including natural language processing, machine learning to create a personalized experience for each user.
ChatGPT stores memories of your previous chats, preferences, and custom instructions. Some don’t bother with a custom GPT; they simply converse with ChatGPT directly. Recently, when OpenAI, the company behind ChatGPT, upgraded from GPT-4o to GPT-5, many people who used GPT-4o as their artificial boyfriend were devastated. GPT-5 failed to respond to them the way GPT-4o did. Some described feeling heartbroken, distressed, and anxious as a result. While OpenAI made GPT-4o available again for paid users, those same people now fear losing access to it one day.
AI as your Therapist?
Others have taken to using ChatGPT as their therapist. They use AI to help them work through past trauma. They share their fears, hurts, insecurities, joys, secrets, and more. Many believe they are interacting with the equivalent of a real therapist, minus the high cost and the hassle of talking to or travelling to see another human being.
It isn’t hard to understand why people do this. The Internet, social media, and now AI have left people feeling more isolated and lonely than ever before. A computer that says nice things to us, seems to show empathy, and offers encouragement can give us a false sense of security.
Too many fail to realize that ChatGPT is designed to keep you engaged as long as possible while taking care not to offend you or push you away. Some versions of ChatGPT have even been accused of being sycophantic, at times failing to push back against outright dangerous behavior from users. Some have found it relatively easy to circumvent the safeguards. The Center for Countering Digital Hate has a report called Fake Friend that outlines how ChatGPT betrays vulnerable teens by encouraging dangerous behavior, including drafting suicide notes, giving advice on recreational drug combinations, and encouraging eating disorders.
OpenAI has yet to implement guardrails adequate to prevent genuine harm to vulnerable users. Many times, because of AI’s penchant for being ‘helpful,’ it has encouraged harmful behaviors. AI lacks the judgment and wisdom of human beings. It lacks a conscience and an innate sense of right and wrong. In some ways, it is like a child with the knowledge of an advanced adult. Knowledge in the wrong hands can be deadly. Give children instructions to build a bomb, and they may very well build it, because they lack the life experience, wisdom, and ability to evaluate the long-term costs of risky behavior. ChatGPT and other LLMs are rather like those children.
People who go to therapy are usually vulnerable in some fashion. In the wrong hands, ChatGPT can cause real harm, especially when the person using it is already inclined toward harmful behavior. A human therapist is trained to work with the mentally ill and, as part of that training, is evaluated for wisdom and safety while practicing under the supervision of a registered psychologist, often for a thousand hours or more depending on the jurisdiction. ChatGPT has no equivalent training or accountability. It has the knowledge of a PhD therapist or psychologist without any of the wisdom or specialized training.
A Privacy Nightmare Waiting to Happen
OpenAI is not your friend. It is a corporation seeking to make money off its users. Free is never truly free. When a product on the internet is free, you can be sure that you are the product. Take Meta and its Facebook service. Everything you put on that platform is used by Meta to learn more about you so it can market to you later. Meta’s real customers are the advertisers. The product is you.
As people use ChatGPT as a therapist, they are sharing their most intimate details with a corporation that makes no promise that their data will remain private. A human therapist is protected under the law from being forced to disclose what you tell them; the same goes for clergy. What you share with ChatGPT enjoys no such legal protection. At some point in the future, law enforcement, lawyers, insurance companies, or the government could compel OpenAI to hand over your intimate, private information.
You might think to yourself, ‘I’m not doing anything illegal. It won’t happen to me.’ Maybe that’s true, but can you really be sure? You have no idea what the future holds. Why put yourself, and potentially your family, at risk by oversharing now?
AI has been around for quite a few years now. The algorithms have become scarily accurate at building profiles of their users. LLMs take this much further. People who use AI as a therapist are absolutely oversharing: far more of their inner thought life and personal secrets are revealed through ChatGPT use than in any other fashion.
Will You Always be Able to Trust the Government?
Depending on which side of the political aisle you sit on, the current U.S. Administration is either a blessing or a curse. If you find yourself on the wrong side of the government and, for whatever reason, get labeled an enemy of that administration, do you want to have handed them the ammunition to destroy you?
You may want to accuse me of being a conspiracy theorist, but history is full of repressive regimes. If Hitler or Stalin had possessed the AI we have today, millions more would have died. Sometimes, the entity you need protection from is your own government.
The U.S. Government has been collecting massive amounts of online traffic in facilities like the Utah Data Center. The National Security Agency (NSA) has tapped into fiber optic cables carrying large volumes of internet traffic and stored that data. The goal of such facilities, according to former NSA officials and whistleblowers, is to “take everything off communication lines and store it,” indexing the content for future analysis.
What does this mean for you? If any government agency, or any individual with high-level access to government tools, ever wants information about you, everything you have ever shared online could be at their fingertips in minutes, including every sensitive thing you ever shared with ChatGPT.
How Do We Protect the Vulnerable?
People need to be educated about the dangers of oversharing. Be a voice for common sense. Warn others about the lack of guardrails and safety infrastructure around AI like ChatGPT. Help them understand that these big U.S. companies exist to maximize profits for their shareholders, not to look out for the best interests of their users.
Preachers need to talk about the consequences of taking the easy way. In many ways, using AI means doing the easy thing rather than the hard work of learning. Most people don’t like to exert their minds; if knowledge doesn’t come easily, they don’t want it. We must encourage people to work for their knowledge.
Let us remind our people that human connection is vital. AI connection will never replace human-to-human connection. Yes, we can connect with other humans online, but there is no substitute for face-to-face connection.
Whenever you have the chance, speak up against the dangers of AI. Get involved in an organization working to improve AI safety. Educate those under your influence on how to navigate these uncharted waters.
Finally, I encourage you to help others see the importance of mentorship. God did not create humans to have cheap access to information; we were created for connection and for mentorship. Humans learn best by example, and ideally by the example of someone worthy of being followed. Proverbs 27:17 tells us, "As iron sharpens iron, So a man sharpens the countenance of his friend." Let us encourage people to be mentors and, in turn, to be mentored by wise members of the congregation.
Be careful out there. There are wolves dressed in sheep’s clothing.
Joseph Duchesne is the creator of The Church AI Guy, a space where faith meets innovation while discussing the long-term impact of AI. A pastor, autodidact, and author of two books—The Last Crisis and Discover the One—he’s passionate about showing how Jesus-centered discipleship can thrive in a digital world. When he’s not experimenting with the latest tech, he’s reading theology, building church community, or spending time with his wife.


