I use Claude Code every working day. It has genuinely changed how I work: I write more, prototype faster, spend less time on the parts of programming I have always found tedious. I am aware, doing this, that I am in a specific category. I am technically sophisticated, motivated to make the tools work, and operating in a domain where AI currently has real leverage. The WalkMe data released this week is a description of everyone who is not in that category, which is most people, and it is worth taking seriously.
The headline is striking: more than half of employees abandon enterprise AI tools and return to doing tasks manually. Another 37 percent do not use them at all. The average digital transformation budget has jumped to $54 million. AI takes 35 cents of every technology dollar. Forty percent of that spending has underperformed. The WalkMe CEO called it "the most expensive spell checker ever built," which is a good line, but it is also a slightly convenient one because it frames the problem as one of use-case clarity and change management. If only employees were shown better use cases. If only the tools were integrated more thoughtfully. If only adoption were handled with more sophistication.
I do not think that is what the data shows. The primary reason employees give for abandoning these tools is fragmentation: too many disconnected products, no clear guidance, interfaces that create friction. That is true. But underneath it is a more fundamental problem, which is that AI tools are powerful in specific, uneven ways, and companies are deploying them universally as if "powerful in specific contexts" were the same as "useful for most workers doing most tasks." It is not. A spreadsheet is powerful. Advanced spreadsheet skills are genuinely valuable. But not everyone doing every job benefits equally from those skills, and no one calls mass spreadsheet rollouts a communications problem when uptake is uneven.
The pressure to deploy AI universally does not come from evidence that it works universally. It comes from competitive anxiety. The decision-making logic is not "will this help my workers" but "am I falling behind companies that are using AI." Individual firms face a collective action problem: the rational move for each company is to keep spending, because the counterfactual of not spending is too alarming. The 40 percent underperformance does not stop the investment, because the question is not whether the tools work well; it is whether the competitor's tools will work well enough to matter. This dynamic produces a lot of wasted capital, and a lot of workers bearing the overhead of being studied, measured, and encouraged to use tools that do not yet help them with their actual work.
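To make the incentive structure concrete, here is a toy two-firm version of the game. The payoffs are numbers I have invented purely to illustrate the shape of the problem, not anything derived from the WalkMe data:

```python
# A toy prisoner's dilemma for enterprise AI spending. The payoffs are
# invented for illustration: spending has a negative expected return on
# its own (-1), but being the only firm not spending is assumed to be
# far worse (-3) than mutual waste.
PAYOFF = {
    # (my move, rival's move): my payoff
    ("spend", "spend"): -1,  # both burn budget; relative position unchanged
    ("spend", "hold"):   2,  # I capture whatever advantage exists
    ("hold",  "spend"): -3,  # I fall behind; the alarming counterfactual
    ("hold",  "hold"):   0,  # nobody spends, nobody falls behind
}

def best_response(rival_move):
    """My rational move given what the rival does."""
    return max(("spend", "hold"), key=lambda me: PAYOFF[(me, rival_move)])

for rival in ("spend", "hold"):
    print(f"if the rival {rival}s, I should {best_response(rival)}")
```

Spending dominates no matter what the rival does, so every firm spends, and every firm lands on the mutual-waste outcome that is worse than collective restraint. That equilibrium, scaled up, is the 40 percent underperformance that stops no one.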
Separate developer survey data tells a related story. AI now generates 42 percent of all committed code, up from 6 percent three years ago. But 96 percent of developers say they do not fully trust that AI-generated code is functionally correct. Reviewing AI output now takes more effort than reviewing code written by a colleague. The speed of generation has outpaced the speed of verification. The gains in output have been reinvested into a new kind of overhead: anxious, harder to measure, and not captured in any productivity metric that gets reported to executives.
Here is where I want to be precise, because there is a version of this argument that reads as comfort, and I do not mean it that way. The employees who have evaluated AI tools and found them not yet useful for their specific work are making a reasonable assessment of their current situation. The company deploying those tools without adequate integration, training, or attention to actual workflows is getting the result it deserves. These are real problems with real solutions, and the companies that figure them out will do better than those that do not.
But none of that is the main event. The main event is what happens when the tools get good enough that the resistance stops being rational. And this is happening on a timeline that is not intuitive. Three years ago, AI generated 6 percent of committed code. Today it is 42 percent. The projection to 65 percent by 2027 is probably conservative. The Sonar data says the tools are not yet trusted. That will not always be true. When the trust gap closes, when verification becomes routine rather than anxious, when the tools work well enough that the overhead of using them is less than the overhead of not using them, two things will happen simultaneously: genuine productivity gains will materialise, and the economic case for headcount reduction will become inarguable in ways it currently is not.
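A rough back-of-envelope suggests why I think 65 percent is conservative. If you assume adoption follows a logistic curve, meaning the log-odds grow at a constant rate (my assumption, not something the survey data establishes), the two data points we already have extrapolate past the projection on their own:

```python
import math

def logit(p):
    """Log-odds of a proportion p."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Inverse of logit: map log-odds back to a proportion."""
    return 1 / (1 + math.exp(-x))

# The two figures from the survey: 6% of committed code three years ago,
# 42% today. The logistic shape is my assumption, not theirs.
p_then, p_now = 0.06, 0.42
slope = (logit(p_now) - logit(p_then)) / 3  # ~0.81 per year

# Extrapolate two more years, to 2027.
p_2027 = sigmoid(logit(p_now) + 2 * slope)
print(f"{p_2027:.0%}")  # ~79%, well above the 65% projection
```

Two data points prove nothing about the true shape of the curve, so the honest reading is only that the published projection sits below a naive extrapolation of the trend we already have.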
Amazon announced a $25 billion investment in Anthropic yesterday. The capital is not flowing at that scale because enterprise adoption is hitting 50 percent abandonment rates. It is flowing because the investors see the trajectory clearly, even if the current reality is messier than the pitch deck. The adoption gap today is not a terminal condition. It is a point on a curve.
What I do not know, and nobody knows, is what we do with the time we have while the curve is still climbing. The companies failing at AI adoption today will not all fail at it in three years. The workers currently making rational decisions about tools that do not work for their current jobs will face a different calculation when the tools actually work. Whether any of this gives us time to build something better, I genuinely do not know. The data from this week does not give me much reason for optimism, but it does at least tell me where we currently are. And where we are is not where we are going to be.