According to Android Authority, OpenAI is piloting group chats in ChatGPT using shareable links that can be forwarded until 20 people join. The feature runs on the GPT‑5.1 Auto model and works across all ChatGPT plans, including Free, Plus, and Pro. Users can add others to existing conversations or start new ones, with group chats appearing in a dedicated sidebar section. OpenAI claims ChatGPT now has “social behaviors” that let it decide when to respond or stay quiet in group conversations. The rollout is currently limited to users in Japan, New Zealand, South Korea, and Taiwan who are signed into their accounts. To protect privacy, group chats don’t use personal ChatGPT memory, and conversations aren’t saved as memories.
Chaos by design
Okay, so anyone with the link can forward it to anyone else? That’s basically inviting chaos. We’re talking about AI conversations where up to 20 random people could pile into what started as a private chat. Sure, the creator can remove people, but by then the damage might be done. It’s like handing out keys to your house and hoping nobody makes copies.
Here’s the thing: this feels like OpenAI desperately trying to create sticky social features rather than solving actual user problems. They’re throwing spaghetti at the wall to see what sticks in their race against Google and Meta. But group chats with AI? That’s a fundamentally different beast than human-only messaging.
AI social dynamics
The company says it has taught ChatGPT “social behaviors” – it can decide when to respond, when to stay quiet, and even react with emojis. But think about that for a second. An AI trying to navigate the complex dynamics of human group conversations? That’s a recipe for awkwardness at best and complete disruption at worst.
And it gets weirder – ChatGPT can reference profile photos to create “fun personalized images” within the group. So now we have AI generating images of people based on their profile pictures in group chats? What could possibly go wrong there? Basically, we’re handing the AI social context it might completely misinterpret.
Privacy paradox
OpenAI claims it’s protecting privacy by not using personal memory in group chats and not saving conversations as memories. But let’s be real – the moment you have 20 people in a chat, privacy goes out the window. Anyone can screenshot, anyone can share sensitive information, and the AI itself is processing all of it.
The official announcement calls this a “small first step toward shared experiences,” which sounds like corporate speak for “we’re not sure how this will actually work in practice.” They’re using four countries as their guinea pigs before potentially unleashing this on the wider world.
Desperate for engagement
Look, I get why OpenAI is doing this. They need to prove ChatGPT isn’t just a fancy question-answering tool but a platform people actually live in. But forcing social features feels like putting a square peg in a round hole. Remember when every app tried to become a social network? Most of those efforts failed spectacularly.
And honestly, do people really want AI chiming in on their private group chats? Sometimes you just want to talk to humans without an algorithm deciding it has something valuable to add. This feels like a solution in search of a problem – and the chaotic link-sharing approach suggests they haven’t really thought through the social consequences.
