As healthcare systems wrestle with the promises and risks of AI, the launch of ChatGPT Health in January this year marks an inflection point, not just in how AI is used, but in how people engage with their own health.
At its best for the average user, it could be a powerfully informed, 24/7 assistant, providing the health advice people so often seek. At its very worst, it risks encouraging self-diagnosis and amplifying health anxiety.
Between these two scenarios lie the defining questions that will determine whether this really could be the next step for AI and personalised healthcare: How accurately and reliably will it be able to answer queries from users? And will it support or contradict health systems?
So, what is ChatGPT Health?
On January 7th 2026, OpenAI announced the launch of ChatGPT Health, which, alongside standard responses, allows people to upload their medical records and health documents, as well as connect data from wellness apps to enable more personalised support. The launch of ChatGPT Health has long been in the making – in fact, ChatGPT receives over 40 million daily health-related questions, with over 5% of all ChatGPT messages globally relating to health. i
With this context, the question now is not whether people will use it, but rather, should they use it, and how can it be governed to ensure it is used for good?
The Gaps ChatGPT Health Could Fill
Medical Information Support
OpenAI has been clear that ChatGPT Health is about “supporting, not replacing, care from clinicians”. The tool aims to help users understand complicated medical documents, prepare for medical appointments, navigate costs and coverage, and promptly answer questions on a range of health plans or problems.
The value of such support may be particularly pronounced in systems where medical insurance and administrative tasks make up a larger share of patients’ medical burden. OpenAI estimates that almost two million messages every week relate to health queries, suggesting substantial unmet demand for navigational assistance rather than clinical advice alone.i
Access and Timing
A further advantage lies in availability. Around seven in ten health-related questions are asked outside standard clinical working hours, during moments of worry when services are closed, inaccessible, or unaffordable.i In systems like the NHS, where GP waiting times remain high, appointments are scarce, and targets are frequently missed, ChatGPT Health could act as an interim layer of support – if there is a recognised layer of quality control.
The Risks: Support, Influence and Doctors
The doctor problem
The crucial factor in the success of ChatGPT Health, as with all AI support tools, is its fundamental ability to provide accurate information. ChatGPT Health was trained over two years with input from over 260 physicians across 60 countries, drawing on over 600,000 points of feedback.i It is hard to predict how well this training will serve ChatGPT Health in practice; its accuracy will require ongoing monitoring and regulation.
However, even if we accept the quality of this input and training, and therefore the tool’s ability to provide accurate responses, two features of ChatGPT Health create a particular challenge. Firstly, generative AI does more than relay information: it frames it. The conversational way in which prompts are answered can feel persuasive, even when no advice is intended. Small choices in wording and tone can affect how serious a symptom sounds, how risky a result appears, or how urgent an action feels.
Secondly, ChatGPT can only work with the context it is given, and when that context is provided entirely by the user, it immediately suffers from user input bias. Some symptoms may be emphasised while others are completely omitted. The system’s output, however accurate in isolation, rests on a context painted by the user – a potentially incomplete picture.
This is where the enduring value of doctors and healthcare professionals becomes most evident. Doctors combine contextual judgment, breadth of knowledge, and empathy, enabling them to interpret not only clinical information but the human circumstances in which it is embedded.
What does this all mean?
ChatGPT Health represents a new piece in both the evolving AI landscape and the broader story of personalised healthcare. Unlike previous innovations, where technology was a background force shaping outcomes at scale, this tool places the puzzle piece directly in the hands of individuals.
The potential benefits are significant. ChatGPT Health could act as an intelligent triaging tool, easing pressure on healthcare systems and helping individuals navigate complex medical information. Yet its success will depend on maintaining a careful balance: supporting users without inadvertently contradicting or undermining the clinical guidance and trust embodied by healthcare professionals.
The global picture adds further nuance. Differences in regulation, data access, and public trust will shape how, when, and where such tools are adopted. How systems in different countries respond, and how users integrate AI into their health journey, will offer an important test case for the future of personalised healthcare and the role AI might play alongside, rather than in place of, human expertise.
Ultimately, ChatGPT Health is not just a technological development; it is a reflection of how much responsibility, trust, and engagement we are willing to place in AI as a partner in our own care. Its trajectory will reveal as much about societal attitudes toward technology as it does about the tool itself.