Companies are spending more on AI than ever before, and their employees are quietly ignoring it. A new report from SAP's WalkMe division, drawing on surveys of 3,750 executives and workers alongside behavioural data from platform users, finds that more than half of workers abandon enterprise AI tools and return to doing tasks manually. Another 37 percent do not use AI at all. The average digital transformation budget has jumped 38 percent in a single year, reaching $54 million in 2026. AI now claims 35 cents of every technology dollar spent. Forty percent of that spend has underperformed. "Enterprise AI budgets nearly doubled last year," WalkMe CEO Dan Adika said in the report. "AI takes 35 cents of every technology dollar. And most employees use it to rewrite an email or run a quick summary, then go back to doing things the old way. That's not adoption. That's the most expensive spell checker ever built."
The gap between executive investment and frontline use is not hard to understand if you talk to the people actually doing the work. The primary complaint is fragmentation: too many disconnected tools, no clear guidance on which to use when, and interfaces that create friction rather than reduce it. But there is something deeper going on too. A separate set of findings, from Sonar's State of Code Developer Survey, points to a trust problem that goes beyond onboarding. AI now accounts for 42 percent of all committed code, up from 6 percent in 2023. Yet 96 percent of developers report they do not fully trust that AI-generated code is functionally correct. Sixty-one percent say AI often produces code that looks right but is not. And 38 percent say reviewing AI-generated code actually takes more effort than reviewing code written by a colleague. The time saved in drafting has been reinvested into a new kind of verification work: slower, more anxious, harder to measure.
This is the productivity paradox in action. The tools are producing more output, but output is not the same as value. A codebase that is 42 percent machine-generated, and that 96 percent of its developers do not fully trust, is not a more productive codebase; it is a more anxious one. The "trust gap," as the Sonar report calls it, means that engineering organisations are not realising the promised velocity gains. They are instead developing new workflows for managing uncertainty, new rituals of checking and rechecking, new mental overhead around every pull request.
It shows up in the people, too. Nikhyl Singhal, a former VP of product at Meta, described the mood in his community as "smiling exhaustion." Product managers feel more capable than ever, he said, able to prototype ideas directly using tools like Claude, able to compress what once took weeks into days. But the pace of change is relentless in a way that is starting to wear people down. "I've never seen an industry that's more tired than they are now," Singhal told Lenny's Podcast. "Nothing's constant. Everyone's in a state of alert." Simon Willison, co-creator of Django, has described juggling AI agents as "mentally exhausting." Steve Yegge, a veteran engineer, warned of a "vampiric effect" that drains workers before noon.
There is also a stranger dimension to the story. Anthropic published its own internal data on employee use of Claude, and the headline numbers are striking: employees use Claude in 60 percent of their daily work, report nearly 50 percent productivity gains, and engineers are logging 67 percent more pull requests per person since adopting Claude Code. But the footnote is what catches the eye. Twenty-seven percent of Claude-assisted work consists of tasks that employees say they would not have attempted without the tool. Ornamental data dashboards. Minor refactors that were never worth the effort before. Exploratory analyses that previously sat below the effort threshold. The AI is not just speeding up existing work; it is also generating new work that may or may not add value.
Put these data points together and a coherent picture emerges, one that is more complicated than either the boosters or the sceptics want to acknowledge. AI tools are genuinely useful for specific, well-defined tasks. They are also being rolled out with inadequate training, unclear use cases, and management expectations that do not map to how knowledge work actually happens. The employees who walk away from these tools are not Luddites. Many of them are making a rational calculation: the friction of adoption outweighs the benefit they can actually see, for the tasks they actually do, in the workflow they actually have. Until that changes, the most expensive spell checker in history will keep collecting its subscription fees.