The standard frame for AI and employment is a binary: either AI is already destroying jobs at scale, or it isn't yet and the concerns are overblown. What is missing from both sides is a credible way to measure what is actually happening below the level of aggregate employment statistics. Anthropic's new Economic Primitives framework is an attempt to build that measurement layer from the ground up, using what the company can observe directly: how people actually use Claude at work.
The core idea is that "jobs" are too coarse a unit. Jobs are bundles of tasks, and those tasks decompose further into what Anthropic calls primitives: the atomic cognitive and procedural operations that make up knowledge work. Searching for information. Synthesising documents. Generating first drafts. Reviewing outputs. Coordinating between people. Debugging code. Each of these primitives can be evaluated independently for how much AI is currently involved, how much it could theoretically be involved, and what the gap between those two figures means for workers and firms.
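To make the unit of analysis concrete, here is a minimal sketch of what a primitive-level record and its scoring might look like. The field names and figures are hypothetical illustrations of the idea, not Anthropic's actual schema or data.

```python
from dataclasses import dataclass

@dataclass
class Primitive:
    """One atomic unit of knowledge work, scored independently.

    All fields and figures here are illustrative, not Anthropic's schema.
    """
    name: str
    current_ai_share: float    # observed share of the task done with AI, 0..1
    potential_ai_share: float  # estimated technical ceiling, 0..1

    @property
    def gap(self) -> float:
        """Unrealised automation potential: the figure the framework
        treats as the locus of employment risk."""
        return self.potential_ai_share - self.current_ai_share

# Hypothetical values for two of the primitives named above.
primitives = [
    Primitive("debugging code", current_ai_share=0.45, potential_ai_share=0.60),
    Primitive("document review", current_ai_share=0.15, potential_ai_share=0.55),
]

# Ranking by gap, not by current use, surfaces where risk is concentrated.
for p in sorted(primitives, key=lambda p: p.gap, reverse=True):
    print(f"{p.name}: current={p.current_ai_share:.0%}, "
          f"potential={p.potential_ai_share:.0%}, gap={p.gap:.0%}")
```

Note that under this scoring, a task with modest current AI involvement can carry more risk than one where AI is already heavily used, which is exactly the pattern the report describes for document review versus coding.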
The January 2026 Economic Index report, released alongside the primitives methodology, offers the first data under this framework. A few findings stand out. Coding-adjacent tasks show the highest current AI involvement, by some margin, consistent with what productivity surveys have been reporting. But document review and synthesis, tasks associated with professional services like law and consulting, show the largest gap between current use and estimated automation potential. That gap is where the employment risk is concentrated, and it is not yet showing up in hiring data because firms are still learning how to use the tools rather than replacing headcount with them.
The framework recasts a puzzle that has frustrated economists for the past two years. If AI can technically automate a significant share of knowledge-work tasks, why has the measured employment impact been so small? The primitives framework suggests the answer is organisational lag, not technological limitation. Firms know how to buy software. They are much slower at redesigning the workflows and roles that software touches. The displacement, on this account, is real but delayed.
There is an obvious tension in Anthropic publishing this research. The company makes money when people use Claude for work. A framework that documents AI's growing role in professional tasks is, at one level, a marketing asset. Anthropic is aware of this and has been reasonably transparent about the methodology: the underlying data is Claude usage logs, which creates selection bias toward users who have already adopted AI tools. The framework does not claim to represent the whole labour market; it claims to measure the part of the labour market that is already engaged with AI assistance.
That limitation is important, but so is the underlying ambition. The debate about AI and jobs has been running largely on extrapolation and intuition. Academic labour economists work with survey data and employment statistics that are one or two years old and too coarse to capture task-level changes. Firm-level productivity data is proprietary. There is a genuine measurement gap, and that gap is making it hard to design good policy. If Anthropic's primitives methodology can be validated externally and adopted more broadly, it would be one of the more useful contributions the AI industry has made to understanding its own effects.
The number that matters most is not how much AI is currently doing, but the rate at which the gap between current use and theoretical automation potential is closing. If it is closing slowly and steadily, firms and workers have time to adapt. If it closes suddenly once agentic systems become reliable in production, the adjustment problem becomes much harder. The primitives data, if collected consistently over time, would be one of the few early-warning systems capable of showing which scenario is unfolding.
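As a sketch of what that early-warning signal could look like: given periodic gap measurements for a primitive, a simple per-period closure rate is enough to distinguish the gradual scenario from the sudden one. The series below is invented for illustration; it is not data from the Economic Index.

```python
# Hypothetical quarterly gap measurements (potential minus current use)
# for a single primitive, e.g. document review. Values are invented.
gap_series = [0.40, 0.38, 0.37, 0.35, 0.26, 0.14]  # one value per quarter

# Per-quarter closure rate: the fraction of the remaining gap that
# closed in each period. A flat series suggests gradual adaptation;
# a sudden jump suggests the hard adjustment scenario.
for prev, curr in zip(gap_series, gap_series[1:]):
    closed = (prev - curr) / prev
    print(f"gap {prev:.2f} -> {curr:.2f}: {closed:.0%} of remaining gap closed")
```

In this invented series the closure rate holds near 3 to 6 percent per quarter and then lurches to over 30 percent, the signature of agentic systems becoming reliable in production. Spotting that inflection early is precisely what consistent primitives data would make possible.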