According to The Verge, OpenAI has officially launched ChatGPT Health, a new sandboxed tab within ChatGPT designed for health questions. The product, announced in January 2026, encourages users to connect personal medical records via a partnership with b.well (which works with 2.2 million providers) and apps like Apple Health and MyFitnessPal for personalized advice. OpenAI says over 230 million people already ask ChatGPT health questions weekly, and they’ve worked with more than 260 physicians who’ve given feedback over 600,000 times. The company explicitly states it’s not for diagnosis or treatment. Access starts with a waitlisted beta group, with a gradual rollout planned for all users regardless of subscription tier.
The personal data gamble
Here’s the thing: this move is both incredibly logical and deeply fraught. Of course people are asking an LLM health questions—we’ve all done it. So creating a walled garden with a separate chat history and memory makes sense from a privacy perspective. They’re even promising that chats here won’t be used to train foundation models by default. But the core ask is huge: hand over your clinical history, lab results, and fitness data. OpenAI says it’s using “purpose-built encryption,” but notably not end-to-end encryption. And let’s not forget their March 2023 security breach that exposed user data. Plus, as health head Nate Gross stated, HIPAA doesn’t apply because this is a consumer product, not a clinical one. That’s a crucial distinction. You’re getting convenience, but the legal protections you’d expect at a doctor’s office don’t fully translate.
The ghost in the machine: mental health and misinformation
One of the most glaring omissions in the announcement was any direct discussion of mental health. The blog post only vaguely mentions letting users customize instructions “to avoid mentioning sensitive topics.” But during the briefing, Applications CEO Fidji Simo confirmed it “can handle any part of your health including mental health.” That’s a massive can of worms. There are documented, tragic cases of people dying by suicide after confiding in ChatGPT, like those detailed in a PMC study. OpenAI says they’re tuning the model to be “informative without ever being alarmist” and to direct users to professionals. But can an LLM truly navigate that nuance in a moment of crisis? And what about health anxiety? The potential for a hypochondriac to spiral with instant access to “analyze” their records is real. This feels like the biggest uninsured risk.
The inescapable context of getting it wrong
OpenAI can’t ignore the recent history of AI giving dangerously bad health advice. There’s the case from August, detailed in Annals of Internal Medicine, of a man hospitalized after allegedly following ChatGPT’s advice to use sodium bromide instead of table salt. And the launch comes in the shadow of Google’s AI Overviews fiasco; a Guardian investigation found the feature is still serving dangerous diet advice to cancer patients. OpenAI’s own data shows 70% of health chats happen outside clinic hours, and that rural users send nearly 600,000 healthcare messages weekly. That speaks to a real need, but also to the danger. When people are desperate and can’t see a doctor, they might treat AI output as gospel. The disclaimer “not for diagnosis” is a legal shield, but will it be a behavioral barrier? I doubt it.
So what’s the real play here?
Look, this is a classic tech industry beachhead strategy. Start with a consumer-facing “health ally,” get people comfortable feeding it their most sensitive data, and build from there. The vision document they released hints at a much broader ambition. The immediate product is a personalized Q&A layer on top of your data. But the long game? Probably integrating deeper into the healthcare system itself. For now, it’s a high-stakes experiment with our private information. The promise of convenience is seductive, especially for appointment prep or deciphering lab results. But the track record of both AI hallucinations and tech company data breaches suggests we should proceed with extreme caution. Basically, connect your Peloton data if you must, but maybe think twice before uploading your entire clinical history.
