The numbers from a new survey of 800 US business and technology leaders are striking in their consistency. Seventy-four percent of organizations are increasing AI investment. Forty-six percent say their AI initiatives have not met expectations. Only a small minority report that AI is delivering measurable business value. The survey, produced by Coastal, a consultancy specializing in Salesforce and Snowflake, in partnership with Oxford Economics, captures something that has become difficult to ignore: the gap between what enterprises are spending on AI and what they are getting out of it is not closing.
The specific bottleneck the report identifies is not the AI models themselves. It is everything around them. Seventy percent of organizations report data access or quality problems during the initial setup of AI systems. Seventy-three percent hit those same problems again while running AI in production. Only 26% of organizations began their AI initiatives with a clearly defined business problem; most started with the technology and worked backwards toward a purpose. And only one in six organizations has a dedicated AI or transformation team with clear ownership.
IBM's current strategy, on display at its Think conference, is essentially a bid to sell solutions to exactly these failures. The company has positioned its watsonx platform around what it calls "governed AI": auditability, data controls, and operational accountability designed for messy enterprise environments where governance matters as much as capability. "IBM isn't trying to win the AI hype cycle," one analyst said at the event. "They're trying to win enterprise reality." Whether that framing wins customers is an open question, but the problem it describes is real.
The human side of this stall shows up in a separate Accenture report on AI adoption in Ireland, where 64% of employees now expect to reskill due to AI. That number sounds encouraging until you reach the next one: 47% of employees say they have been expected to use new AI tools without being trained on them. Seventy-three percent of organizations in the Coastal survey say they struggle with employee adoption due to lack of trust, poor workflow fit, or unclear outputs. These are not separate problems. They are the same problem at different levels of abstraction. Organizations are acquiring AI capability faster than they are building the foundations to use it well.
The functions where this gap is sharpest are sales and marketing, where AI-driven decisions about targeting and audience segmentation are already running at scale, and where flawed inputs produce wrong outputs immediately. "The limiting factor is rarely the model itself: it's the data," Marc Fanelli of audience data firm Eyeota wrote recently. That same logic applies across the enterprise: AI amplifies what is already in the system, whether that is good data or bad.
The Coastal report concludes that enterprise AI has entered a new phase: one where the interesting question is no longer whether AI can work in principle but whether organizations can operate it at scale. The companies getting results are treating AI as an ongoing operating function with defined ownership, continuous data management, and sustained adoption work. That description sounds unremarkable. The survey data suggests most organizations are not doing it.
There is something clarifying about a 46% failure rate. It is too high to dismiss as growing pains and too consistent across industries to blame on individual missteps. The era of AI experimentation is over. What comes next requires a different kind of organizational discipline than launching a pilot, and many enterprises appear to be finding that out the hard way.