This Is Gettin’ Weird: The Future of AI and Mental Health

Let’s start with a little ramble.

I have been wanting to write something about artificial intelligence for a while now. I have started several posts about it, only to erase them and start over. I’m not sure what it is about this topic. AI is a tricky one. It could save the world, but it could also destroy the world. It could take our jobs, but it could also lead to abundance for all. I can’t be the only one who is confused by this.

I remember when I first heard about AI back in the early 2010s. I thought it was futuristic mumbo jumbo. In fact, I was annoyed by the subject. It seemed impossible. Then, when that first 60 Minutes segment came out a few years ago, I was blown away. “Oh, this is what they were talking about,” I thought. “And it spontaneously taught itself Swahili? Wonderful. Not sure this is good.”

Fast forward to today, and it’s one of my biggest concerns for the future of humanity. Candidly, I have always been intellectually fascinated by existential threats, such as nuclear proliferation, climate change, pandemics, natural disasters, and geopolitical conflicts. I know, it’s not exactly a bubblegum hobby. The idea of a massive force outside of my control that has the potential to destroy the world as we know it is just mind-bending. In a strange way, such topics keep me humble and, if I don’t go too far down the rabbit hole, help give me perspective on what I can actually control and what truly matters in life.

That said, I get a uniquely dizzying feeling when I think about AI. Who knows, maybe if artificial intelligence weren’t a thing, I’d just find another existential threat to fuss over. But this really feels different and more imminent. Just listen to the CEOs of the companies developing AI. To paraphrase, they basically say (in the most disturbingly calm voice you can imagine): “Yeah, it could wipe out 50% of the workforce in a couple of years. We’re not exactly sure how it works, whether we can control it, or if it will align with human values. But we’ve got to hurry up and develop AI further because we can’t let China beat us.”

WTF!

Really? That’s it? I guess I’m supposed to just carry on with my day then?

So yeah, I can be a bit of a doomer. But I am also a psychotherapist who’s deeply interested in the future of mental health. So how do I square that circle?

I think where I want to go with this post is to promote a mental health framework for dealing with the challenges associated with AI. My goal is to try to place a little control back in our hands, protect what’s most meaningful, and give a little hope. Here are some of my thoughts.

First, take back some control.

Whether it’s AI or any other macro-societal issue, we still have to fulfill our individual responsibilities today, tomorrow, and the next day. Prioritize these things first and foremost. Perhaps this is just a note to myself, but maybe others can relate. It makes sense to keep an eye on what’s happening in the world and sometimes to seek out greater involvement, but we should be intentional about how we spend our energy and clear about what we really have to offer a given cause.

With that in mind, if you have concerns about AI products (as we all should), remember you can still do something about it. You can vote (AI will be an increasingly relevant political topic), you can contact your representatives, you can protest, you can join an AI safety organization, you can initiate conversations at your workplace, or you can participate in community meetings. I get that some of these options aren’t for everyone, but you have a choice to get involved. It’s important to be aware of that.

Another thing within our control is to learn about AI tools. This may sound like I’m recommending you flirt with the devil, but I promise I’m not. The fact is that the future of AI is complicated, and it’s possible that the end result could be a net positive. So, in the event it is, it will be important that we know how to use AI. But even if it isn’t, understanding this technology will help you stay informed and respond accordingly, especially for parents whose children will be “AI natives.”

Which brings me to my second point: protect what’s most meaningful. If nothing else, use these tips to protect children.

We should be careful what we tell AI. I don’t know about you, but ChatGPT, Claude, and Grok never gave me a confidentiality agreement. Nothing in the history of the internet gives me the slightest bit of confidence that AI is truly designed to serve us. Particularly with the issues that matter most, trustworthy humans are still our best bet.

Relatedly, treat it as a tool, not a human. Take it from Yoshua Bengio (Turing Award winner, chair of the 2026 International AI Safety Report): “nobody would have thought that we would see the wave of psychological issues that have come from people interacting with AI systems… nobody expected people would fall in love with an AI, or become so intimate with an AI that it would influence them in potentially dangerous ways.” We should take that to heart. Romance, intimacy, and friendship should be with our fellow humans.

Beware of offloading too much cognitive effort onto AI systems. Generate your own ideas, email content, art, jokes, and love letters first before running them through AI. We have to hold these boundaries for the sake of our brain health. A recent MIT study suggested that over-reliance on AI tools can lead to diminished critical thinking, drops in neural activity, memory gaps, and loss of original thought. In other words, AI can make you dumb. No one wants that. I still believe the best future for humans is one where we have strong minds. So, use it, or you’ll lose it.

Last but not least, some hope.

AI mirrors back an incomplete image of humanity. It’s kind of like watching chimps. We can recognize how we are similar, but we also become more aware of our differences, which helps us better understand what makes us uniquely human. Now, unlike with chimps, some people would argue that humans will end up the inferior ones compared to AI. But this is oversimplified and presumptuous.

To make my point, let’s look at what the current AI mirror reveals. First, it reveals that as a collective, humans are really freakin’ smart. AI is built on human intelligence and ingenuity, so everything artificial intelligence can do, as of now, is based on what we have already done. It’s still an open question as to what extent AI can match or exceed human intelligence and at what cost (financially and ecologically).

But let’s go ahead and assume that AI will surpass human intelligence in most domains. Even so, the mirror reveals an even more profound difference: Humans are not just a form of intelligence; we are meaning-experiencing biological beings. Strange phrase aside, I’m essentially saying the stakes are real for us. We actually experience success, failure, connection, suffering, hope, loss, and all the varieties of emotion due to our neurochemistry. Without these biological capabilities, AI can only hollowly generate ideas and complete tasks, but with no lived meaning and nothing on the line.

Simply put, meaning is something that cannot be artificially manufactured. It only exists through experience, which means it is safely and uniquely human.

So maybe this is where it all lands for me. I think we’re in for a confrontation with AI. What exactly that entails, no one can be certain. All I know is that I want to be on the side where the outcome matters, and we are already on that side.

This should give us all some genuine hope—something artificial intelligence will never experience for itself.
