AI Daily
Opinion

The Measurement Trap

By Peter Harrison • March 27, 2026

Anthropic released a new methodology this week for measuring how AI actually affects work. They call it "economic primitives": decomposing jobs into atomic tasks, then measuring AI involvement at that level rather than at the level of whole occupations. The research is genuinely interesting and I think it points in the right direction. But I also think it is going to be used, mostly unintentionally, to make the labour displacement problem look smaller than it is for longer than it should.

Here is the mechanism. When you decompose jobs into tasks, you find that AI handles some tasks well and others poorly. The coding tasks look transformed. The document synthesis looks partially transformed. The interpersonal coordination looks largely untouched. You add it up and conclude: AI is changing parts of work, not whole jobs. The displacement risk is real but manageable. We have time.

What this framing misses is the economics of removing the expensive tasks from a bundle of cheap and expensive tasks. If a lawyer's job consists of six hours of document review at $400 per hour and two hours of client strategy at $800 per hour, and AI handles the document review, you do not need fewer lawyers. You need lawyers who spend eight hours on strategy instead of two, at possibly different billing rates. That sounds fine. But the demand for legal services is not fixed. Once document review costs approach zero, the pressure on total legal budgets is intense. Each lawyer can now supply four times the strategy hours they used to, and demand for strategy will not quadruple to match. You end up needing roughly a third as many lawyers, each working on the high-value tasks, not the same number of lawyers freed up to do more interesting work.
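The headcount arithmetic is easy to check with a toy model. A minimal sketch, with an entirely illustrative firm size and the article's hourly figures (nothing here is real data):

```python
# Toy model of the lawyer example: what happens to headcount when AI
# removes the time-consuming task from the bundle. All numbers are
# illustrative assumptions, not data.

HOURS_PER_DAY = 8

# Before AI: each lawyer bills 6h of document review and 2h of strategy.
strategy_hours_per_lawyer = 2

lawyers_before = 90  # hypothetical firm size
total_strategy_demand = lawyers_before * strategy_hours_per_lawyer  # 180 h/day

# After AI: document review is near-free, so only strategy hours justify
# headcount, and each lawyer can now supply a full day of strategy.
lawyers_after = total_strategy_demand / HOURS_PER_DAY  # 22.5

print(lawyers_after / lawyers_before)  # 0.25
```

With demand held fixed the ratio is a quarter; let demand for strategy grow somewhat as it gets cheaper to buy and you land near the "roughly a third" in the text. Either way, the direction is the same: far fewer lawyers, not the same lawyers doing nicer work.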

The task-level measurement captures the technology accurately. It does not capture the economic cascade that follows from the technology. That cascade runs through pricing, competition, and budget pressure, and it operates at the firm and market level, not the individual task level. The people who will be displaced are not the ones whose tasks disappear. They are the ones who worked at the margin of a business model that depended on expensive tasks being expensive.

I have been watching this pattern for a long time. In 2005, when I first started writing about what I was calling "immortal machines," the argument about AI and jobs was already running on the wrong level of abstraction. People were asking whether AI could replace individual jobs, when the real question was whether it would change the economics of entire sectors. The answer to the first question was almost always "not completely" and the answer to the second was almost always "yes, dramatically." The task decomposition methodology is a more sophisticated version of the same wrong question.

Anthropic is genuinely trying to build better measurement tools, and I do not think this is cynical. But there is a structural problem in having the companies most invested in AI adoption also be the primary researchers into AI's labour market effects. The data Anthropic has access to is real: it is actual usage logs from people using Claude at work. That is more granular than any survey-based economic data. But it is data about people who have already adopted AI tools, in firms that have already decided to deploy them. It systematically underrepresents the workers who are already displaced and no longer present in any dataset.

The people who lost income-generating work to AI tools are not using those tools to ask for help with their work. They are not in the Claude logs. They are invisible to any methodology that starts from the premise of measuring current AI use and extrapolating from there.

What I want to see is the complementary research: not how is AI changing the tasks of people who have jobs, but how is AI changing the supply of jobs available in each sector, and what is happening to the wages at the bottom of the skill distribution in AI-exposed fields. Those numbers are in the Bureau of Labor Statistics data and in tax records. They do not require AI company cooperation to access. They are also, consistently, not the research that gets published by the labs.

This is not a conspiracy. It is a funding effect. Anthropic has strong incentives to understand how its tools change work for people who use them. It has weaker incentives to study the employment and wage effects on people who do not use them. The primitives methodology is a real contribution to the first question and essentially silent on the second.

I will keep reading this research because the task-level data is genuinely informative. But I think we are in danger of letting methodological sophistication substitute for asking the harder question. The harder question is not "how much of each task does AI now handle?" It is "what happens to the people whose income depended on doing those tasks being expensive?" The answer to that question is not in the logs.