Fixing AI failure: Three changes enterprises should make now

Scoopico
Published: March 15, 2026
Last updated: March 15, 2026 5:26 pm

Recent reports about AI project failure rates have raised uncomfortable questions for organizations investing heavily in AI. Much of the discussion has focused on technical factors like model accuracy and data quality, but after watching dozens of AI initiatives launch, I’ve noticed that the biggest opportunities for improvement are often cultural, not technical.

Internal projects that struggle tend to share common issues. For example, engineering teams build models that product managers don’t know how to use. Data scientists build prototypes that operations teams struggle to maintain. And AI applications sit unused because the people they were built for weren’t involved in deciding what “useful” really meant.

In contrast, organizations that achieve meaningful value with AI have figured out how to create the right kind of collaboration across departments, and established shared accountability for outcomes. The technology matters, but the organizational readiness matters just as much.

Here are three practices I’ve observed that address the cultural and organizational barriers that can impede AI success.

Expand AI literacy beyond engineering

When only engineers understand how an AI system works and what it’s capable of, collaboration breaks down. Product managers can’t evaluate trade-offs they don’t understand. Designers can’t create interfaces for capabilities they can’t articulate. Analysts can’t validate outputs they can’t interpret.

The solution isn’t making everyone a data scientist. It’s helping each role understand how AI applies to their specific work. Product managers need to grasp what kinds of generated content, predictions or recommendations are realistic given available data. Designers need to understand what the AI can actually do so they can design features users will find useful. Analysts need to know which AI outputs require human validation versus which can be trusted.

When teams share this working vocabulary, AI stops being something that happens in the engineering department and becomes a tool the entire organization can use effectively.

Establish clear rules for AI autonomy

The second challenge involves knowing where AI can act on its own versus where human approval is required. Many organizations default to extremes: either bottlenecking every AI decision through human review or letting AI systems operate without guardrails.

What’s needed is a clear framework that defines where and how AI can act autonomously. This means establishing rules upfront: Can AI approve routine configuration changes? Can it recommend schema updates but not implement them? Can it deploy code to staging environments but not production?

These rules should include three elements: auditability (can you trace how the AI reached its decision?), reproducibility (can you recreate the decision path?), and observability (can teams monitor AI behavior as it happens?). Without this framework, you either slow down to the point where AI provides no advantage, or you create systems making decisions nobody can explain or control.
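As a minimal sketch of what such a framework can look like in practice (the action names and policy levels here are hypothetical, invented for illustration), the autonomy rules can be expressed as data and enforced at a single checkpoint, which also produces the audit trail the three elements require:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical autonomy policy: which actions the AI may take on its own
# and which require human approval. Anything not listed defaults to the
# safe side (human approval).
POLICY = {
    "approve_routine_config_change": "autonomous",
    "recommend_schema_update": "autonomous",   # recommend only...
    "apply_schema_update": "human_approval",   # ...a person applies it
    "deploy_to_staging": "autonomous",
    "deploy_to_production": "human_approval",
}

@dataclass
class Decision:
    action: str
    allowed: bool
    needs_human: bool
    rationale: str
    # Timestamped record: supports auditability and reproducibility.
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

AUDIT_LOG: list[Decision] = []  # every decision is traceable after the fact

def check_autonomy(action: str, rationale: str) -> Decision:
    """Single enforcement point: look up the action in the policy and
    record the outcome so the decision path can be reconstructed later."""
    level = POLICY.get(action, "human_approval")  # default to the safe side
    decision = Decision(
        action=action,
        allowed=(level != "forbidden"),
        needs_human=(level == "human_approval"),
        rationale=rationale,
    )
    AUDIT_LOG.append(decision)
    return decision

d1 = check_autonomy("deploy_to_staging", "all integration tests passed")
d2 = check_autonomy("deploy_to_production", "all integration tests passed")
print(d1.needs_human)  # False: staging deploys are autonomous
print(d2.needs_human)  # True: production still requires sign-off
```

Routing every AI action through one checkpoint like this is what makes observability cheap: teams can watch `AUDIT_LOG` (or its production equivalent, a log stream or metrics pipeline) to monitor AI behavior as it happens.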

Create cross-functional playbooks

The third step is codifying how different teams actually work with AI systems. When every department develops its own approach, you get inconsistent results and redundant effort.

Cross-functional playbooks work best when teams develop them together rather than having them imposed from above. These playbooks answer concrete questions like: How do we test AI recommendations before putting them into production? What’s our fallback procedure when an automated deployment fails – does it hand off to human operators or try a different approach first? Who needs to be involved when we override an AI decision? How do we incorporate feedback to improve the system?
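To make one of those questions concrete, here is a minimal sketch (every event and step name is invented for illustration) of how a fallback procedure from such a playbook might be encoded, so the escalation path lives in one shared place rather than in each team’s head:

```python
# Hypothetical playbook entry: what to do when an automated deployment
# fails. In practice this might live in YAML or a runbook tool; the point
# is that the hand-off rules are written down and shared across teams.
PLAYBOOK = {
    "automated_deploy_failed": [
        "retry_once",           # try a different approach first...
        "rollback",             # ...then restore the last known-good state
        "page_human_operator",  # ...and only then hand off to a person
    ],
}

def fallback_steps(event: str) -> list[str]:
    """Return the ordered fallback steps for an event, defaulting to an
    immediate human hand-off for anything the playbook does not cover."""
    return PLAYBOOK.get(event, ["page_human_operator"])

print(fallback_steps("automated_deploy_failed"))
print(fallback_steps("unknown_event"))  # uncovered events go to a human
```

The design choice worth noting is the default: an event no team anticipated escalates to a person rather than failing silently, which is exactly the kind of decision a cross-functional playbook should settle once, together.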

The goal isn’t to add bureaucracy. It’s ensuring everyone understands how AI fits into their existing work, and what to do when results don’t match expectations.

Moving forward

Technical excellence in AI remains important, but enterprises that over-index on model performance while ignoring organizational factors are setting themselves up for avoidable challenges. The successful AI deployments I’ve seen treat cultural transformation and workflow design just as seriously as technical implementation.

The question isn’t whether your AI technology is sophisticated enough. It’s whether your organization is ready to work with it.

Adi Polak is director for advocacy and developer experience engineering at Confluent.
