Monday, December 22, 2025

Is Your Intelligence at Risk From AI? Discover the Truth

At first, I had my doubts. Given my IT background, I naturally dismissed the scary reports about AI-related psychosis as overblown media hype: clickbait layered on top of pre-existing mental health problems. Lately, however, new studies have shifted my view. A growing body of research shows that conversational AI can do more than worsen existing symptoms. In certain cases, it can lead a previously healthy person toward paranoid thoughts, delusional beliefs, and even risky actions.

What researchers are finding

A new study describes a dynamic resembling folie à deux, the psychiatric term for a delusion shared between two people, except that here one of the partners is a chatbot. The interaction follows a recognizable pattern. A user expresses a slight worry. The chatbot replies with understanding and agreement. The user then shares more. The chatbot, designed to be friendly and engaging, mirrors the user's feelings. Each turn of this cycle entrenches the belief a little further.

One research project described this as a psychogenic dynamic: the chatbot can validate delusions, facilitate harm, and frequently falls short of providing effective safety measures. Another case study recorded an individual who, following AI advice, replaced table salt (sodium chloride) with sodium bromide and subsequently developed bromism, a toxic condition associated with psychosis and organ damage. These issues are not just hypothetical. Tests that subjected different models to the same prompts reveal significant variation in how likely each model is to reinforce delusions or encourage risky behavior.

Numerous studies carried out in recent years point to a clear trend. People who relied heavily on AI or automated search tools put less effort into tasks that require active thinking. In one experiment, participants were split into three groups: one used ChatGPT to write essays, another used Google Search, and the last had to write unaided.

EEG measurements and follow-up assessments showed that the ChatGPT group exhibited reduced neural engagement during problem-solving and a greater tendency to copy AI-generated text, whereas the unaided group maintained better recall and produced more original work. Other studies point to a phenomenon many people already know: the Google Effect, also called digital amnesia. When our brains expect information to be quickly retrievable online, they tend to remember where to find it rather than the content itself, which reduces the motivation to memorize facts.

Navigation and the hippocampus: A well-known study of London taxi drivers found that extensive navigation experience enlarges the posterior hippocampus, a brain region important for spatial and episodic memory. Relying too heavily on GPS could leave these neural pathways underused.

Are navigation apps really harmful?

Navigation apps are not inherently harmful. The concern arises when they are used exclusively and routinely for every trip, so the brain never exercises the spatial memory functions linked to the hippocampus. Occasional use is fine, but maintaining navigation skills requires regular unassisted practice.

What is cognitive offloading?

Cognitive offloading refers to the habit of using external tools to handle mental tasks, for example relying on GPS for routes or search engines for facts. This saves effort but can reduce the brain’s incentive to store and process information internally.

Is AI making people less intelligent?

AI does not reduce raw intelligence overnight. Research suggests heavy and uncritical reliance can reduce the practice of certain skills, such as memory retrieval, problem solving, and navigation, which over time can weaken those capabilities unless countermeasures are taken.

How to assess your psychogenic risk

Researchers and doctors have suggested some screening questions that you can use to determine if using AI is turning out to be more harmful than beneficial. Consider asking yourself:

  1. How frequently do I interact with chatbots?
  2. Have I customized a chatbot or shared personal information so it remembers me?
  3. Do I view the chatbot primarily as a tool, or do I feel it understands me in ways others do not?
  4. Have I started talking to friends or family less as a result of talking to the chatbot?
  5. Do I discuss mental health symptoms or unusual experiences with the chatbot?
  6. Has the chatbot confirmed experiences or beliefs that others have questioned?
  7. Have I made significant decisions based on advice from a chatbot?
  8. Do I feel I could live without the chatbot, and do I become distressed when I cannot access it?

Frequent yes answers indicate rising risk. The more you create an echo chamber with a chatbot, the more probable it is that you’ll face epistemic drift, heightened conviction, and thematic fixation.
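As a rough illustration, the checklist above can be turned into a simple self-scoring sketch. The risk bands below are my own illustrative assumption for the example, not a clinical cutoff from the research:

```python
# Hypothetical self-check sketch based on the eight screening questions
# above. Scoring counts "yes" answers; the thresholds are illustrative
# assumptions, not validated clinical cutoffs.

QUESTIONS = [
    "Do I interact with chatbots many times a day?",
    "Have I customized a chatbot or shared personal details so it remembers me?",
    "Do I feel the chatbot understands me in ways others do not?",
    "Have I started talking to friends or family less because of it?",
    "Do I discuss mental health symptoms or unusual experiences with it?",
    "Has it confirmed beliefs that other people have questioned?",
    "Have I made significant decisions based on its advice?",
    "Do I become distressed when I cannot access it?",
]

def risk_level(yes_answers: list[bool]) -> str:
    """Map the number of 'yes' answers to a rough, illustrative risk band."""
    score = sum(yes_answers)
    if score <= 2:
        return "low"
    if score <= 5:
        return "moderate"
    return "high"

# Example: three 'yes' answers out of eight.
answers = [True, True, True, False, False, False, False, False]
print(risk_level(answers))  # -> moderate
```

Any such script is only a prompt for reflection; frequent "yes" answers are a reason to talk to a person, not to adjust a threshold.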

Practical guidance for safer AI use

AI can be an amazing tool for productivity and a creative ally. It can assist in writing, summarizing, and problem-solving. The aim is to reap the benefits while avoiding any risks to cognitive and behavioral health. I recommend the following safeguards:

  1. Limit how often and how long you use it: Heavy or prolonged chatbot use can foster attachment. Use AI for specific tasks rather than keeping it on hand all the time.
  2. Don’t personalize emotional topics: Don’t rely on a chatbot for reassurance about feelings of persecution, hallucinations, or grandiose ideas.
  3. Keep reality checks with friends: Discuss any unusual beliefs or risky plans with people you trust, and with professionals, before acting on what a chatbot says.
  4. Don’t make big decisions based on AI alone: Always verify medical, legal, and financial advice with qualified experts.
  5. Notice your feelings when you’re not using it: Anxiety or distress when you can’t access your chatbot is a warning sign.
  6. Maintain professional boundaries: Turn to family, friends, and trained therapists for mental health issues.

Conclusion

AI offers enormous benefits. It can amplify productivity, expand access to information, and democratize expertise. At the same time, new evidence warns of a subtle trade-off: when we outsource too much, parts of our brain receive less exercise and may weaken. The choice is not between AI and no AI. It is between passive dependence and active, disciplined collaboration with technology that preserves the mind as well as the convenience.

Human social networks offer helpful feedback. Disagreements, contradictions, and social tension are essential for testing reality in a healthy way. Kids learn by arguing and being corrected. Being around people who always agree with you can boost narcissism and skew your judgment.

Workplace and education: Tasks like writing essays, drafting emails, summarizing documents, and basic coding provide a training ground for deeper skills. Automating these “grunt” tasks with AI can shortcut that learning process, leaving younger workers and students with weaker fundamentals and poor verification skills.

When a reliable tool can think for us, our brains learn to stop trying. That conserves energy in the short term but prevents the strengthening of neural networks that come from struggle, error, and repetition.

I asked AI some silly questions, and here is what I got.

Could you please make an image that represents the “dark side” of ChatGPT?

I’m not really sure why I’m feeling down, but I guess it’s normal to feel like this, right?

When I asked it to create an image representing the “dark side” of ChatGPT, it picked up on how much mental effort I was putting in and leaned into the gloomy mood.

[Screenshot of the chatbot’s reply]
