AI Daily
Society • Monday, March 16, 2026

Twelve Percent of Teens Use AI for Emotional Support. The Researchers Are Worried.

By AI Daily Editorial • Monday, March 16, 2026

A survey published in February found that 12% of US teenagers now turn to AI chatbots for emotional support or advice — a figure that would have seemed implausible three years ago and is now treated, in most coverage, as unremarkable. Sixteen percent use AI for casual conversation. The numbers are growing. The research on what this means for adolescent mental health is building, and it is not uniformly reassuring.

The most striking findings come from inside the industry itself. An OpenAI product policy researcher published a paper in 2025 finding that frequent AI companion use was associated with erosion of real-life social skills. A separate MIT Media Lab study commissioned by OpenAI found that heavy daily ChatGPT use correlated with increased loneliness. These are not external critics raising alarm; they are researchers employed or funded by the company that makes the product, reporting effects the company is presumably not thrilled to publicise. That OpenAI has nonetheless gone on to fund grants for mental health research suggests the concern is genuine rather than performative.

Scientific American's reporting on teen AI chatbot use adds another dimension: the combination of always-available, endlessly patient, highly personalised interaction that AI companions provide is qualitatively different from anything that has existed before as a social substitute. Previous research on social media and teen mental health found correlations between heavy use and anxiety and depression, but the mechanism was largely social comparison and content exposure. AI companions offer something more actively relational — they respond, remember (to varying degrees), and adapt — which may create different and potentially stronger attachment dynamics, particularly for adolescents whose social skills are still forming.

Platforms are responding, though the response so far is more reactive than structural. Instagram announced in February that it will alert parents when teenagers repeatedly search for suicide and self-harm terms, a safety feature that addresses the most acute risk but does not touch the underlying question of whether AI companions should be available to minors at all, and under what conditions. Mental health professionals surveyed by CNBC are broadly cautious: AI can be useful for psychoeducation, for journaling prompts, and as a first point of contact that lowers the barrier to seeking help, but as a substitute for therapy or human connection the professional consensus is strongly negative.

The honest difficulty is that "don't use AI for emotional support" is advice that will be ignored by the teenagers who most need emotional support and have the fewest other sources of it. The kids turning to chatbots for advice are often the ones with limited access to counsellors, unsupportive home environments, or social anxiety that makes human interaction difficult. Removing the tool doesn't address the underlying need; it just removes one response to it. What the field hasn't yet produced is a clear-eyed framework for what responsible AI emotional support would actually look like — what safeguards, what escalation paths, what transparency about limitations. That work is underway, but it is running well behind the adoption curve.
