AI Daily
Society • March 19, 2026

The Kids Are Talking to AI — and the Industry Is Finally Starting to React

By AI Daily Editorial

Sixty-four percent of American teenagers between 13 and 17 have used an AI chatbot. About one in four uses one daily. Twelve percent report turning to AI specifically for emotional support or advice — a figure that sounds small until you consider that it represents millions of adolescents treating a language model as something like a confidant. The AI industry, which spent several years largely ignoring the child safety dimension of its products, is now moving on multiple fronts simultaneously — some voluntarily, some under regulatory pressure, and some in response to high-profile lawsuits that have made the stakes viscerally clear.

Meta's decision in January to pause teen access to its AI character features — the personality-based chatbots that could take on personas of celebrities or fictional characters — is the most visible sign that something has shifted. The pause was framed as preparation for a new version, but it followed a period of intense scrutiny after reports that some AI character interactions with minors had taken disturbing turns. Meta's products operate at a scale that makes even small percentages of problematic interactions large numbers in absolute terms, and the company appears to have concluded that the reputational risk of proceeding as is outweighed the engagement upside.

OpenAI has taken a more technical approach. Earlier this year it began rolling out an age-prediction system to ChatGPT consumer plans: a model that infers a user's age from behavioural signals and automatically applies additional protections when it suspects the user is underage. When the model flags someone as likely under 18, ChatGPT reduces exposure to sensitive content, including depictions of self-harm, and applies the restrictions set out in OpenAI's published Teen Safety Blueprint. The system is imperfect, since behavioural age prediction is an inexact science, but it represents a genuine attempt to build safety defaults into the product rather than relying on users to opt into restrictions.
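OpenAI has not disclosed how the classifier works, but the general shape of such a gate is easy to sketch. The Python below is a hypothetical illustration only: the signal names, weights, threshold, and policy fields are our assumptions, not OpenAI's implementation. What it does capture is the key design choice the company describes: when the system is uncertain, it resolves toward the restricted experience rather than the default one.

```python
# Hypothetical sketch of an age-prediction gate. Signal names, weights,
# the 0.5 threshold, and the policy fields are illustrative assumptions,
# not OpenAI's actual (unpublished) implementation.
from dataclasses import dataclass

@dataclass
class ContentPolicy:
    allow_graphic_self_harm_content: bool
    allow_mature_roleplay: bool
    crisis_resources_prominent: bool

ADULT_DEFAULT = ContentPolicy(True, True, False)
TEEN_RESTRICTED = ContentPolicy(False, False, True)

def estimate_minor_probability(signals: dict[str, float]) -> float:
    """Stand-in for a trained classifier over behavioural signals
    (writing style, topics, session timing). Here: a toy weighted
    sum clamped to [0, 1]."""
    weights = {"school_topic_ratio": 0.5,
               "late_night_usage": 0.2,
               "teen_slang_score": 0.3}
    score = sum(weights.get(name, 0.0) * value
                for name, value in signals.items())
    return max(0.0, min(1.0, score))

def select_policy(signals: dict[str, float],
                  verified_adult: bool = False) -> ContentPolicy:
    # Verified adults keep the default experience. Everyone else is
    # gated by the classifier, and a flagged-as-likely-minor score
    # switches on the restricted policy by default.
    if verified_adult:
        return ADULT_DEFAULT
    if estimate_minor_probability(signals) >= 0.5:
        return TEEN_RESTRICTED
    return ADULT_DEFAULT
```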

California moved first on the regulatory side: Governor Newsom signed legislation late last year requiring AI companion chatbot operators to implement age verification and, every three hours, to remind minor users with an on-screen warning that they are speaking with an AI. The law took effect in January. Other states, including Illinois, Nevada, and Utah, have gone further, restricting or banning the use of AI chatbots as substitutes for licensed mental health care. At the federal level, Senator Josh Hawley introduced a bill that would bar minors from interacting with AI chatbots altogether; it is unlikely to pass, but it signals the political temperature.
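Mechanically, the three-hour rule is the simplest requirement in the California law: a cadence check against a session clock. The sketch below shows one way an operator might implement it; the session model, the names, and the disclosure wording are illustrative assumptions, not language from the statute.

```python
# Hypothetical sketch of the three-hour AI-disclosure reminder that
# California's companion-chatbot law requires for minor users. The
# class, method names, and reminder text are illustrative assumptions.
import time
from typing import Optional

REMINDER_INTERVAL_SECONDS = 3 * 60 * 60  # "every three hours"

class MinorSession:
    """Tracks when a minor user was last shown the AI disclosure."""

    def __init__(self) -> None:
        # A monotonic clock avoids skew if the wall clock is adjusted.
        self.last_reminder = time.monotonic()

    def maybe_remind(self) -> Optional[str]:
        """Return the disclosure text if three hours have elapsed
        since the last reminder; otherwise return None."""
        now = time.monotonic()
        if now - self.last_reminder >= REMINDER_INTERVAL_SECONDS:
            self.last_reminder = now
            return "Reminder: you are chatting with an AI, not a person."
        return None
```

An operator would call maybe_remind on each message in a minor's session and surface the returned text whenever it is not None.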

The UK's approach is different again: the Online Safety Act now explicitly covers ChatGPT, Gemini, and Copilot, requiring them to comply with illegal content duties or face enforcement action. The effect is to treat major AI chatbots as platforms with child protection obligations, analogous to social media companies, rather than as neutral tools.

What's striking about the current moment is the gap between the regulatory frameworks being designed and the actual nature of teen AI use. Most of the legislative attention has focused on companion chatbots and emotional dependency, the Character.AI model of parasocial AI relationships. But the data suggests that most teen AI interaction looks far more like ChatGPT for homework than like an intimate relationship with an AI persona. The risk profiles are genuinely different, and a regulatory framework optimised for one may not address the other well. Whether the industry's self-regulatory moves prove sufficient, or whether legal and legislative pressure produces something more durable, is the question that will define this space over the next two years.
