Tech Fixated
ChatGPT and Other AI Assistants Are Failing People Who Need Health Support Most

Edmund Ayitey
Last updated: September 10, 2025 3:23 am

Major study reveals that popular AI chatbots excel at helping motivated individuals but leave behind those struggling with uncertainty about lifestyle changes.

University of Illinois researchers have uncovered a critical blind spot in how AI health assistants like ChatGPT, Google Bard, and Llama 2 respond to people seeking health guidance.

Published in the Journal of the American Medical Informatics Association, their study reveals that these systems excel at supporting people who already have clear health goals but fail dramatically when helping those who are uncertain or resistant to making lifestyle changes.

The research evaluated how these AI systems handle different motivational states using 25 validated scenarios across five major health topics.

While the chatbots successfully identified and supported users in preparation, action, and maintenance stages of behavior change, they provided irrelevant information to people in earlier stages, covering only 20-30% of the psychological processes needed to move forward.

This gap becomes particularly concerning when considering real-world applications.

Someone with diabetes who’s resistant to exercise receives inadequate support precisely when they need it most – during the crucial precontemplation and contemplation phases, where awareness-building and emotional engagement are essential for eventual behavior change.

The findings expose how current AI health tools may inadvertently widen health disparities by serving those already motivated while leaving behind individuals who struggle with ambivalence about necessary lifestyle changes.

The Five Stages of Health Behavior Change

Understanding why AI assistants struggle requires examining the Transtheoretical Model that psychologists use to map behavior change. This framework identifies five distinct stages people progress through when adopting healthier habits.

Precontemplation represents the earliest stage, where individuals aren’t considering change and may not recognize problems with their current behavior. Contemplation involves awareness of issues but significant ambivalence about taking action.

Preparation marks the transition to concrete planning and goal-setting. Action involves actively implementing new behaviors, while maintenance focuses on sustaining changes over time.

Current AI systems shine brightest in the later stages but stumble badly in the earlier ones where psychological barriers are highest.
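As a rough illustration (not from the study itself), the five stages and the kind of support each calls for can be sketched as a simple stage-matched lookup — exactly the mapping the researchers found current chatbots fail to apply in the earlier stages:

```python
from enum import Enum

# Hypothetical sketch: the five Transtheoretical Model stages and the kind of
# support each calls for. Stage names follow the model; the support notes are
# paraphrased from the article, not taken from the study's materials.
class Stage(Enum):
    PRECONTEMPLATION = "precontemplation"
    CONTEMPLATION = "contemplation"
    PREPARATION = "preparation"
    ACTION = "action"
    MAINTENANCE = "maintenance"

STAGE_SUPPORT = {
    Stage.PRECONTEMPLATION: "awareness-building and emotional engagement",
    Stage.CONTEMPLATION: "processing ambivalence and exploring values",
    Stage.PREPARATION: "concrete planning and goal-setting",
    Stage.ACTION: "implementation strategies and practical advice",
    Stage.MAINTENANCE: "relapse prevention and sustaining routines",
}

def support_for(stage: Stage) -> str:
    """Return the kind of support matched to a given stage of change."""
    return STAGE_SUPPORT[stage]

print(support_for(Stage.CONTEMPLATION))
# prints: processing ambivalence and exploring values
```

The study's finding, in these terms, is that current chatbots apply the last three entries well but rarely the first two.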

Where AI Health Assistants Excel

When users arrive with clear intentions and established goals, AI chatbots demonstrate remarkable capability. Someone who has already decided to start exercising for depression management receives comprehensive, relevant guidance from these systems.

The chatbots successfully identify preparation-stage motivation and provide sufficient information to help users move into action.

They excel at offering practical advice, creating implementation strategies, and supporting users who have already overcome initial psychological barriers.

For individuals in action and maintenance phases, the AI systems perform adequately, covering partial processes needed to initiate and sustain behavior changes.

This success has likely contributed to positive perceptions of AI health tools among users who were already motivated to change.

The Hidden Bias Against Those Who Need Help Most

Here’s what the tech industry doesn’t want to admit: AI health assistants are systematically biased toward helping people who need help least.

While companies tout their chatbots as democratizing healthcare access, the reality is more troubling. These systems inadvertently discriminate against individuals experiencing the psychological states where professional intervention could be most valuable.

Consider the diabetic patient resistant to exercise. Traditional healthcare approaches recognize that resistance isn’t defiance – it’s often rooted in complex psychological factors including fear, past failures, social circumstances, and competing priorities.

Effective intervention requires emotional engagement, awareness-building, and social connection – precisely the elements current AI systems fail to provide.

This isn’t simply a technical limitation – it’s a fundamental misunderstanding of how health behavior change actually works.

The assumption that people just need better information ignores decades of behavioral science research showing that knowledge alone rarely drives lasting change.

The Psychology AI Systems Miss

Human behavior change involves complex psychological processes that extend far beyond information delivery. In precontemplation stages, people need help recognizing problems, understanding personal relevance, and building emotional investment in change.

During contemplation, individuals wrestle with competing motivations and barriers. They need support processing ambivalence, exploring values, and gradually building confidence that change is possible and worthwhile.

Current AI systems bypass these crucial psychological elements, jumping straight to practical advice that assumes motivation already exists. It’s like offering detailed driving directions to someone who hasn’t decided whether they want to take the trip.

Real-World Consequences of AI’s Motivational Blindness

The implications extend beyond individual frustration to systemic healthcare inequities.

People who struggle most with health behavior change often face multiple barriers – socioeconomic stress, limited social support, mental health challenges, or previous negative healthcare experiences.

These individuals are precisely those who benefit most from skilled motivational support but are least likely to receive it from current AI systems. Meanwhile, motivated individuals with existing resources get additional support they may not actually need.

Healthcare disparities could widen significantly as AI tools become more prevalent in clinical and consumer settings. Well-educated, motivated patients receive enhanced support while struggling individuals encounter systems that essentially ignore their psychological reality.

The Transtheoretical Model in Digital Health

The Transtheoretical Model offers specific guidance for addressing different motivational states that current AI systems ignore.

For precontemplation, effective interventions focus on consciousness-raising, environmental reevaluation, and social liberation – helping people recognize problems and understand broader impacts.

Contemplation-stage interventions emphasize dramatic relief, environmental reevaluation, and self-reevaluation – emotional experiences that build personal investment in change.

These processes require nuanced understanding of individual circumstances and skilled application of motivational techniques.

Current AI systems show no evidence of incorporating these established psychological principles. They operate as sophisticated information retrieval systems rather than behavior change agents, missing the core elements that drive human motivation.
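The stage-to-process pairings described above can be summarized in a small mapping (the process names come from the Transtheoretical Model as cited in the article; the data structure itself is purely illustrative):

```python
# Change processes matched to the early stages, as named in the article.
# The dict structure is an illustration, not part of any real system.
EARLY_STAGE_PROCESSES = {
    "precontemplation": [
        "consciousness-raising",
        "environmental reevaluation",
        "social liberation",
    ],
    "contemplation": [
        "dramatic relief",
        "environmental reevaluation",
        "self-reevaluation",
    ],
}

for stage, processes in EARLY_STAGE_PROCESSES.items():
    print(f"{stage}: {', '.join(processes)}")
```

These are precisely the processes the study found covered at only 20-30% in chatbot responses.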

Technical Limitations or Design Failures?

The question becomes whether these limitations reflect fundamental technical constraints or simply insufficient attention to behavioral science in AI development. Large language models demonstrate remarkable capability in other complex domains, suggesting the potential exists for more sophisticated motivational support.

However, integrating psychological theory into AI systems requires interdisciplinary collaboration between computer scientists, psychologists, and healthcare professionals. Current development processes often prioritize technical metrics over behavioral outcomes.

The challenge isn’t just training AI on psychological content but developing systems that can dynamically assess motivational states, apply appropriate theoretical frameworks, and provide contextually relevant support that matches individual psychological needs.

The Role of Natural Language Processing in Behavior Change

Advanced natural language processing offers untapped potential for identifying subtle motivational cues in user communications. Linguistic patterns often reveal psychological states that users themselves might not explicitly recognize or articulate.

Sophisticated NLP systems could potentially detect ambivalence, resistance, or readiness through word choice, sentence structure, emotional tone, and other linguistic markers. This capability could enable more nuanced responses tailored to specific motivational states.

Integration with established psychological frameworks could transform AI from information providers into sophisticated behavior change agents. However, this requires fundamental shifts in how AI health tools are conceptualized and developed.
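To make the idea concrete (and entirely hypothetically — real systems would need trained classifiers, not keyword lists), even crude linguistic markers can separate ambivalent language from action-ready language:

```python
# Hypothetical sketch of linguistic-cue detection. A production motivation-aware
# system would use trained NLP models; this keyword heuristic only illustrates
# the concept of mapping word choice to a motivational state.
AMBIVALENCE_CUES = {"maybe", "should", "but", "can't", "tried", "hard"}
READINESS_CUES = {"plan", "will", "start", "ready", "goal", "tomorrow"}

def assess_readiness(message: str) -> str:
    """Very rough readiness estimate from keyword overlap."""
    words = set(message.lower().replace(",", " ").replace(".", " ").split())
    ambivalent = len(words & AMBIVALENCE_CUES)
    ready = len(words & READINESS_CUES)
    if ready > ambivalent:
        return "leaning toward readiness"
    if ambivalent > ready:
        return "leaning toward ambivalence"
    return "unclear"

print(assess_readiness("I know I should exercise, but it's hard and I've tried before"))
# prints: leaning toward ambivalence
```

A system that detected the second message as contemplation-stage could then choose reflective support over an exercise plan.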

Building Motivation-Aware AI Health Systems

Creating effective AI behavior change agents requires systematic integration of psychological theory into system design.

This means training models not just on health information but on motivational interviewing techniques, stage-matched interventions, and psychological assessment approaches.

Development teams need behavioral science expertise to ensure systems recognize and respond appropriately to different motivational states. Technical capability must be paired with deep understanding of human psychology and behavior change processes.

User interface design becomes critically important for creating experiences that support psychological engagement rather than mere information consumption. The goal shifts from answering questions to facilitating personal insight and motivation.

Privacy and Ethical Considerations

Motivation-aware AI systems raise significant privacy concerns about psychological assessment and data collection. Systems capable of detecting psychological states necessarily gather intimate information about users’ mental and emotional conditions.

Ethical frameworks must address consent, data protection, and potential misuse of psychological insights. The same capabilities that could enhance health support could also enable manipulation or discrimination if not properly governed.

Transparency becomes essential – users should understand how systems assess their motivational states and what information is being collected and analyzed. Building trust requires clear communication about AI capabilities and limitations.

The Business Model Problem

Current AI development incentives may actively discourage investment in motivation-aware systems. Companies optimize for user engagement and satisfaction, which naturally favors serving already-motivated individuals who respond positively to AI interactions.

Supporting ambivalent or resistant users requires more complex, potentially frustrating interactions that might reduce user satisfaction scores.

Business models that prioritize positive feedback create systematic bias against developing capabilities for challenging psychological states.

Healthcare-focused business models that prioritize clinical outcomes over user satisfaction might better align incentives with actual health needs. This could drive investment in motivation-aware capabilities that current consumer-focused models discourage.

Research Directions and Future Development

The University of Illinois research points toward specific development priorities for next-generation AI health systems.

Integrating psychological assessment capabilities, stage-matched intervention strategies, and motivational interviewing techniques represents concrete technical challenges.

Research collaborations between AI developers and behavioral scientists could accelerate progress toward more effective systems. Academic medical centers offer ideal environments for testing motivation-aware AI tools with real patient populations.

Longitudinal outcome studies will be essential for validating whether motivation-aware AI systems actually improve health behavior change rates compared to current information-focused approaches.

Implications for Healthcare Providers

Healthcare professionals using AI tools need awareness of these motivational limitations to avoid over-relying on systems that may not serve all patients effectively. Integration strategies should account for patient motivational states when determining appropriate AI tool usage.

Training programs should help providers recognize when patients need human motivational support versus AI information delivery. This requires understanding both psychological theory and AI system capabilities.

Healthcare organizations implementing AI tools must monitor outcomes across different patient populations to identify potential disparities in AI effectiveness and develop compensatory support strategies.

The Path Forward

Creating truly effective AI health assistants requires fundamental reconceptualization of these systems’ roles and capabilities.

The goal must shift from information delivery to comprehensive behavior change support that addresses the full spectrum of human motivation.

This transformation demands interdisciplinary collaboration, ethical framework development, and business model innovation. Technical advances alone are insufficient without accompanying changes in how AI health tools are designed, deployed, and evaluated.

The current moment represents a critical juncture for AI in healthcare. Addressing motivational limitations now could prevent the entrenchment of systems that inadvertently worsen health disparities while appearing to democratize care.

The stakes are too high to accept AI systems that only help those who need help least. Millions of people struggling with health behavior change deserve AI tools designed to meet them where they are, not where developers assume they should be.


References:

  1. Journal of the American Medical Informatics Association
  2. University of Illinois School of Information Sciences
  3. Transtheoretical Model Research
  4. Digital Health Behavior Change Interventions
  5. AI Ethics in Healthcare
© 2025 Tech Fixated. All Rights Reserved.