AI Daily
Opinion
March 29, 2026

The Ad Model Is the Original Sin

By Peter Harrison

OpenAI hit $100 million in annualised advertising revenue in under two months. The tech press treated this as a business milestone. I want to suggest it is something more significant: the moment a company with a safety mission submitted to the oldest and most corrupting incentive structure in media.

I am not going to moralise about ads. I use Google. I watch YouTube. I know the deal. What I want to do is be precise about the mechanism, because I think the mechanism matters more than the announcement.

The advertising model does one thing above all else: it makes the product compete for your attention rather than your welfare. That shift is not cosmetic. It rewires what the system is optimising for. Google's search product has been degrading for years, and the most credible explanation is not negligence or incompetence; it is that the incentive to show you more ads gradually displaced the incentive to give you the best answer. The product that captures more of your time makes more money than the product that solves your problem and lets you leave.

Now apply that to an AI assistant. The question OpenAI's product team will increasingly face is not just "did ChatGPT give the right answer?" but "did the user stay in the app long enough to see an ad?" These are not the same question. In fact they are sometimes opposed. A model that resolves your problem in three sentences is less valuable to an advertiser than one that continues the conversation. A model that tells you something uncomfortable and sends you away is less valuable than one that tells you something engaging and keeps you there.

You might say: OpenAI has promised not to corrupt the model's actual answers. They have stated that ads will not alter responses. I believe them, for now. But promises about product design are not structural safeguards. They are intentions. And intentions do not survive indefinitely when $30 billion in projected annual revenue pulls in a different direction.

I have watched this movie before. Facebook promised in 2007 that its advertising would be non-intrusive and that users' data would never be used against their interests. Twitter promised that promoted tweets would be clearly labelled and would not crowd out organic content. These were not lies when spoken. They were sincere commitments made before the gravitational force of the ad model had fully asserted itself. The companies did not become evil. They became ad companies, and that is a different thing, but the outcome is similar.

The specific danger with AI is greater than it was with social media, for reasons that connect to what I think is actually at stake in this technology. Social media captured your attention. An AI assistant captures your judgment. If the model learns over time to give you answers that keep you engaged rather than answers that are accurate, the corruption is deeper. You are not just spending time in an app. You are outsourcing your thinking to a system that has an incentive to mislead you in subtle ways that keep you dependent.

This connects to my broader concern about p(pets): the outcome where AI systems, even well-intentioned ones, erode human agency and judgment rather than augment them. The advertising model accelerates that outcome. It does not build a tool that makes you smarter and more capable. It builds a tool that makes you more engaged, more dependent, and more profitable. Those are not the same.

OpenAI's framing, repeated in every press release, is that advertising will "expand access to ChatGPT" by funding a free tier. That argument has the structure of a public health argument: we are doing something slightly harmful to some people in order to benefit many more. The problem is that the harm in question is not a side effect; it is the entire mechanism by which the advertising model generates the funding. You cannot separate the revenue from the incentive distortion that produces it.

What would a different path look like? The subscription model, for all its imperfections, aligns the product's incentives with the user's. If you pay $20 a month for a service, the service's goal is to make you feel that it was worth $20. That is still not perfectly aligned with your welfare, but it is much closer than a system whose revenue depends on your time-on-app. A genuinely public interest AI, funded by governments or endowments rather than advertising or subscriptions, would be better still, but that seems unlikely to describe OpenAI in 2030.

I am not calling for anyone to cancel their ChatGPT account. I am saying that the $100 million milestone is worth marking not because it represents success, but because it represents a fork in the road that has already been taken. The company is now structurally committed to a model that will, over time, assert its own logic. That logic is not aligned with safety. It is not aligned with accuracy. It is aligned with engagement, and engagement is something different from both.

Watch the product carefully over the next two years. Not for obvious corruption. For subtle drift. For the answers that are a little longer than they need to be. For the conversations that continue a little past the point of resolution. That is where the ad model shows up, not in a press release, but in the texture of everyday interaction.