Why 95% of Enterprise AI Projects Fail: The Disconnect Between Executives and Teams

 

Dr. Ryan Ries back again in your inbox. I've been reading through some research that's got me thinking about the gap between executives and the front lines when it comes to AI.

Quick plug before we dive in:

I’m hosting an Agentic AI: Ask Us Anything session next week! Register and come with your hard-hitting agentic AI questions.

The Great AI Disconnect

A recent Wall Street Journal piece caught my attention. C-suite executives claim AI saves them more than 8 hours per week. Meanwhile, two-thirds of everyone else say they're saving less than two hours, or nothing at all.

That's a 38-percentage-point chasm between what leadership experiences and what the workforce actually gets! So, how can this be? Let’s discuss.

Why Executives Love AI (And Why That Doesn't Scale)

Here's what seems to be happening. Executives gravitate toward AI for summarization, quick research, and drafting communications. These are tasks where AI shines: condensing information, generating first drafts, and answering straightforward questions.

The catch is that those use cases don't translate well to the factory floor, the call center, or the engineering team. Front-line workers and individual contributors need AI to integrate deeply into workflows, adapt to messy real-world data, and learn from context. Generic tools like ChatGPT can't do that at scale because they don't learn from your environment or adjust to your business logic.

Research from MIT

If you know me, you know I refer to this article nonstop. But that’s because it was just so good!

MIT's report supports the claims in the WSJ article. About 95% of enterprise AI pilots are failing to deliver meaningful business impact.

The reason isn't the technology; it's how companies are deploying it. Most organizations are either:

  1. Buying off-the-shelf tools that don't, and were never built to, integrate with their systems
  2. Building internally without the specialized expertise needed
  3. Selecting the wrong use cases from the start or starting with the “low-hanging fruit” (Ruba Borno accurately addresses this in her recent LinkedIn post)

I often work with customers on GenAI Roadmapping Prioritization (GRP) workshops to help define use cases that will be successful for their business. We use this mapping to identify the opportunities most likely to deliver ROI for the customer. As we dig in, it's always interesting to find use cases I call "the money pit." At first glance, these projects look like they'll deliver value, but as you dig in further, the effort required to complete them turns out to be so extensive that the ROI ends up minimal or even non-existent.

That’s why I think sessions like the GRP are so important. MIT's data shows that purchasing specialized AI solutions from vendors and building partnerships succeeds about 67% of the time. Internal builds succeed only about one-third as often.
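To make that value-vs-effort triage concrete, here's a minimal sketch of scoring candidate use cases by estimated value against estimated cost. This is my own illustration, not an artifact of the GRP workshop itself; the use case names, dollar figures, and weights are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class UseCase:
    name: str
    annual_value_usd: float   # rough estimate of yearly value unlocked
    effort_weeks: float       # rough estimate of effort to reach production
    integration_risk: float   # 0 (plugs into existing systems) to 1 (heavy custom work)

def roi_score(uc: UseCase, weekly_cost_usd: float = 20_000) -> float:
    """Crude value-per-dollar score; 'money pits' sink toward zero as effort and risk grow."""
    estimated_cost = uc.effort_weeks * weekly_cost_usd * (1 + uc.integration_risk)
    return uc.annual_value_usd / estimated_cost

# Hypothetical candidates for illustration only
candidates = [
    UseCase("Back-office document review", 900_000, 12, 0.2),
    UseCase("Marketing copy generator", 150_000, 4, 0.1),
    UseCase("Full legacy-claims overhaul", 1_200_000, 60, 0.9),  # looks valuable, classic money pit
]

for uc in sorted(candidates, key=roi_score, reverse=True):
    print(f"{uc.name}: score {roi_score(uc):.2f}")
```

The point isn't the exact formula; it's that forcing an explicit effort and integration-risk estimate next to the value estimate is what exposes the money pits before you commit a team to them.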

The Use Case Problem

If you're starting your AI strategy by asking "What can ChatGPT do for us?", you should take a step back.

The question should be: "What specific business problem are we solving, and is AI the right tool?" Because, let's not forget, AI isn't a fix-all. There are still a ton of business use cases that are better solved by good ol' machine learning.

More than half of generative AI budgets are going toward sales and marketing tools. Yet MIT found the biggest ROI comes from back-office automation (eliminating outsourced processes, cutting agency costs, streamlining operations).

Companies are spending where it feels innovative, and not putting enough resources toward where the greatest value could be unlocked.

What Works vs. What Doesn't

AI excels at:

  • Document summarization
  • First-draft generation
  • Rapid research for knowledge workers
  • Automating repetitive back-office tasks

AI struggles with:

  • Complex problem-solving without domain expertise
  • Adapting to unique business contexts
  • Learning from proprietary data in real time
  • Tasks requiring nuanced judgment calls

Steve McGarvey, a UX designer from Raleigh, put it well in the WSJ article: he's spent entire afternoons correcting AI suggestions for accessibility issues. The tool gave him wrong answers with complete confidence. If you don't have the expertise to catch those errors, you're not saving time, and you're creating unnecessary risk.

What This Means for You

If your AI strategy is "let's see what happens when we give everyone access to ChatGPT," you're setting yourself up for disappointment.

What you need instead:

  1. Start with the business problem, not the technology. Pick one major pain point where AI can demonstrably help. This often involves processes that require a lot of manual labor, like reviewing documents and forms.

  2. Partner with experts. Find a partner like Mission who has solved similar problems in your industry. MIT's research shows that partnerships outperform internal builds by a significant margin.

  3. Focus on integration and learning. You need AI that connects to your systems, learns from your data, and adapts to your workflows. Just giving everyone access to ChatGPT won’t help you unlock the greatest value.

  4. Measure what matters. Time saved is nice. Revenue generated, costs eliminated, and risks reduced are better.

My Final Thoughts

Leadership's enthusiasm for AI is real, and leaders are finding more time in their week because of it. However, that doesn't mean the value is flowing down to the teams that need it most.

The gap between executive experience and employee reality is an execution problem. And closing this gap requires deliberate strategy, thoughtful use case selection, and partnership with people who've done this before.

If you're curious about how to bridge that gap in your organization, let's talk. I'm always interested in hearing what challenges you're facing.

 

Until next time,
Ryan

Now, time for this week's AI-generated image and the prompt I used to create it.

For some context, our internal chat has been blowing up about Clawdbot. If you haven't dug into it, it's pretty interesting, and I think Jonathan LaCour is going to talk more about it tomorrow in his newsletter (CloudHustle). We were debating who should cover Clawdbot in this week's newsletters, which is why we're facing off! He's done some pretty cool stuff with it, so make sure you check out CloudHustle tomorrow.

PROMPT: Create an image of me as a Clawdbot versus Jonathan as a Clawdbot. I've attached reference photos of both of us. We are facing off in a wrestling ring.

Ryan Vs Jonathan

 
