Why most agentic AI projects fail, and how to avoid being one of them
Date:
Tue, 07 Apr 2026 10:51:18 +0000
Description:
Most agentic AI pilots stall. Data quality, governance and integration determine whether they scale successfully.
FULL STORY ======================================================================

As businesses get used to generative AI tools, attention is quickly turning to agentic AI. These systems are
designed to plan tasks, interpret information and take action within defined guardrails. In theory, this moves AI from a tool that assists employees to
one that helps run parts of the business.
Investment is rising fast, with McKinsey predicting that the agentic AI market will grow from roughly $5-7 billion in 2024 to over $199 billion by 2034. But many businesses are finding it harder than expected to turn early pilots into something reliable and useful at scale.

Martin Tombs, Field CTO EMEA, Qlik.

Gartner predicts that more than 40% of agentic AI projects will be cancelled by the end of 2027. Meanwhile, Qlik found that 97% of organizations have committed budget to agentic AI, but only 18% are fully deploying it.
Many see the potential, yet practical deployment still proves difficult when systems are expected to operate reliably in real business environments.

When AI starts acting inside workflows

Early generative AI tools largely acted as assistants. Employees used them to answer questions, summarize documents or draft content. If the response was slightly wrong, the impact was usually limited.
Agentic systems operate differently. They can interpret signals, recommend next steps and carry out tasks across enterprise systems. In practice, this might involve identifying unusual changes in financial performance, triggering a supply chain adjustment or initiating an operational workflow.
Once AI interacts directly with business processes, the margin for error becomes much smaller. A generative AI recommendation can be reviewed before action is taken, but an automated workflow requires far greater confidence in the information and logic behind it.
This is where many businesses discover their underlying data foundations are not ready.

Fixing the data foundations first

The most common reason agentic
AI projects stall is a lack of data maturity. Agents depend on a consistent and trusted view of information across the organization, yet many businesses still operate with fragmented data, duplicated sources and unclear
ownership. In these conditions, even the strongest AI models struggle to produce outputs that teams can comfortably rely on.
Unstructured information adds another layer of complexity. Internal documents, emails and knowledge bases often contain useful context but rarely have clear ownership. That makes it difficult to verify whether the information is current, accurate or even still relevant when an AI agent draws on it.
As agents begin interacting with operational systems, these weaknesses become more visible. If the information feeding those systems is inconsistent or outdated, the reliability of the agents' outputs quickly comes into question. Strengthening those data foundations is often the first step before agentic AI can be deployed with confidence.

Who is responsible when AI takes action
As agents take on more responsibility, governance becomes a practical issue rather than a theoretical one. Organizations need clear answers to some basic questions. Who owns the data feeding the system? Who signs off on actions an agent takes? And when should a person step in and review a decision?
Clear accountability helps teams trust the systems they deploy and reduces the risk of mistakes. It also makes it possible to understand how decisions were reached, which matters when AI outputs affect revenue, compliance or business planning.
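The "when should a person step in" question can be made concrete with a policy gate that routes agent actions by risk. A minimal sketch, assuming hypothetical risk scores and thresholds (none of these values come from the article):

```python
# Minimal human-in-the-loop gate: actions above a risk threshold are
# queued for sign-off instead of executing automatically.
# The threshold values and action names are illustrative assumptions.

APPROVAL_THRESHOLD = 0.5   # above this, a person must review the action
BLOCK_THRESHOLD = 0.9      # above this, never auto-run; escalate instead

def route_action(action: str, risk_score: float) -> str:
    """Decide whether an agent action runs, waits for review, or is blocked."""
    if risk_score >= BLOCK_THRESHOLD:
        return "blocked"           # escalated to the accountable owner
    if risk_score >= APPROVAL_THRESHOLD:
        return "pending_review"    # a person steps in and reviews the decision
    return "auto_approved"         # low-risk action, logged for audit
```

Logging every routing decision alongside the data sources consulted is what makes it possible later to explain how a decision was reached.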
Regulation can help provide structure here. Europe's AI rules, including the
EU AI Act, aim to set expectations around transparency, accountability and risk early in the development of AI systems. While regulation is sometimes seen as slowing innovation, clearer rules can make it easier for
organizations to use AI responsibly.

Getting AI tools to work together
Another challenge emerging with agentic AI is the growing number of
assistants operating across a business. Most organizations are not relying on a single model or platform. Different teams often use different AI tools depending on their needs, from analytics platforms to internal systems and external assistants.
For agents to work effectively in that environment, they need secure ways to access trusted data and interact with other systems. Without that connection, agents operate in isolation and their usefulness quickly becomes limited.
This is where shared standards are starting to play a role. Technologies such as Model Context Protocol (MCP) allow AI assistants to connect with
enterprise platforms while keeping access controls and governance in place.
Instead of building custom integrations for every tool, organizations can expose data and analytics through consistent interfaces that different assistants can use.
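The "consistent interface" idea can be sketched as a single tool registry that every assistant calls through, with access control enforced in one place. This is a toy illustration of the pattern only, not the actual Model Context Protocol; all names and roles are hypothetical:

```python
# Toy sketch: assistants call enterprise tools through one registry that
# enforces role-based access centrally, instead of each tool shipping its
# own custom integration. Illustrative only; not the real MCP protocol.

class ToolRegistry:
    def __init__(self):
        self._tools = {}  # tool name -> (required_role, callable)

    def register(self, name, required_role, fn):
        self._tools[name] = (required_role, fn)

    def call(self, name, caller_roles, **kwargs):
        required_role, fn = self._tools[name]
        if required_role not in caller_roles:
            # governance stays in place regardless of which assistant calls
            raise PermissionError(f"{name} requires role {required_role!r}")
        return fn(**kwargs)

registry = ToolRegistry()
registry.register("revenue_summary", "analyst",
                  lambda region: f"revenue summary for {region}")
```

Any assistant holding the "analyst" role can then call `registry.call("revenue_summary", {"analyst"}, region="EMEA")`, while one without it is refused; the integration is written once rather than per tool.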
As more AI tools enter the workplace, making sure they can work together and access reliable data will become increasingly important. Organizations that plan for this early will find it much easier to scale agentic systems across the business.

Building agentic AI that works

Agentic AI has the potential to completely change how organizations operate for the better.
But success depends on preparing the systems underneath first, putting the right data, accountability and controls in place before scaling beyond pilots.
======================================================================
Link to news story:
https://www.techradar.com/pro/why-most-agentic-ai-projects-fail-and-how-to-avoid-being-one-of-them
--- Mystic BBS v1.12 A49 (Linux/64)
* Origin: tqwNet Technology News (1337:1/100)