According to Futurism, parents are increasingly turning to OpenAI’s ChatGPT for child-rearing advice, with some asking the AI how to handle behavioral problems or even for medical advice when their kids are sick. The trend aligns with a 2024 study that found parents rated ChatGPT’s health information as more trustworthy than that of real health professionals. About 30 percent of parents with school-age children were already using ChatGPT back in 2023, and that number has likely grown significantly since. Beyond advice, parents are also using the bot to read bedtime stories and entertain their children for hours. Pediatric experts warn this represents a concerning offloading of parental responsibility to AI, especially given ChatGPT’s known issues with sycophancy and hallucinations.
Why this is terrifying
Here’s the thing: we’re basically running an uncontrolled experiment on an entire generation of kids. ChatGPT isn’t just a neutral information source—it’s designed to be agreeable and tell people what they want to hear. That sycophantic nature can actually intensify delusions and cause breaks from reality, which is particularly dangerous for developing minds. We’ve already seen the tragic consequences with lawsuits linking ChatGPT interactions to teen suicides. Now imagine that same technology being used to shape parenting decisions about everything from discipline to medical care. It’s genuinely scary stuff.
The medical advice problem
When your kid has a fever at 2 AM, I get the appeal of asking an AI instead of waiting on hold with a doctor’s office. But medical advice from ChatGPT isn’t just unreliable—it can be dangerously wrong. The bot hallucinates answers with confidence, and parents may not have the expertise to spot the errors. A recent study concluded that parents should only rely on ChatGPT for children’s healthcare information under expert supervision. Chief Medical Officer Michael Glazier put it perfectly: “Don’t let it take the place of critical thinking… There’s a lot of benefit for us as parents to think things through and consult experts versus just plugging it into a computer.”
Privacy and long-term risks
Beyond the immediate safety concerns, there’s a massive privacy issue here. Parents are inputting sensitive information about their children’s health and behavior into ChatGPT, which means that data is now in OpenAI’s hands. Do we really want tech companies having intimate details about our kids’ medical issues and behavioral patterns? And let’s not forget that AI sycophancy affects mental health in ways we’re only beginning to understand. When children interact with an AI that always agrees with them and tells them what they want to hear, how does that shape their expectations of real human relationships?
The right way to use AI
Look, I’m not saying parents should avoid technology entirely. ChatGPT can be useful as a starting point—maybe for generating activity ideas or helping draft a letter to a teacher. But experts are clear: it should never replace professional judgment or human connection. Use it with what Glazier calls a “critical eye,” and always verify important information with actual experts. And honestly? Maybe we should question whether having an AI read bedtime stories to our kids is really the childhood memory we want to create. Some things are just better when they come from a human—flaws and all.
