Tuesday, April 21, 2026 · 9 min read

Scaling agentic AI demands a strong data foundation - 4 steps to take first

AI Agents Daily
Curated by AI Agents Daily team · Source: ZDNet AI

On April 2, 2026, McKinsey & Company published one of the most direct pieces of guidance the consulting firm has produced on autonomous AI deployment. The report identifies four coordinated steps that connect strategy, technology, and organizational structure, arguing that enterprises rushing to deploy agentic AI without addressing data foundations are setting themselves up for expensive failures. The research comes at a moment when enterprise enthusiasm for autonomous agents has clearly outpaced the foundational work required to make them trustworthy.

Why This Matters

Enterprises are spending real money on agentic AI tools right now, and most of them are doing it wrong. McKinsey's framework lands at a moment when early deployments are already producing visible failures: agents acting on bad data, systems unable to access real-time information, and governance gaps that let autonomous systems take unintended actions. The agentic AI market grew rapidly throughout 2025 and into 2026, but speed of adoption and quality of deployment are two very different things. If McKinsey's diagnosis is accurate, and the research is credible, then a significant portion of current enterprise AI budgets are funding problems, not solutions.


The Full Story

The starting point in McKinsey's analysis is a pattern most enterprise technology buyers would recognize if they were honest about it. Financial services firms, used as a primary example in the report, often deploy five or more separate AI systems to handle a single client workflow. One tool records meetings. Another summarizes conversations. A third scans for regulatory issues. A fourth handles suitability assessments. A fifth generates reports. Each tool works independently, connected to the others through custom integrations and manual workarounds. The result is that the same client conversation gets processed multiple times across different platforms, creating redundant costs, audit complexity, and security headaches that do not show up in initial procurement decisions.

McKinsey's response to this fragmentation is a four-part framework that must be executed simultaneously, not sequentially. The first step is identifying which workflows produce the most business value and redesigning those specific workflows around autonomous agents. The consulting firm is explicit that this is not about deploying AI broadly across generic processes. It is about surgical prioritization, finding the processes where agentic AI operating independently creates the greatest measurable return.

The second step is modernizing data architectures. Legacy systems built for traditional analytics or transaction processing simply cannot support what autonomous agents need. Real-time data access, streaming pipelines, lakehouse architectures, and updated database structures are all part of what McKinsey describes as prerequisites, not nice-to-haves. The firm is clear that agents making production decisions need data infrastructure built for that specific use case, not repurposed from a 2015 business intelligence stack.

The third step, enforcing data quality, is where McKinsey's guidance gets most pointed. Autonomous agents amplify the consequences of bad data because there is no human review checkpoint before the agent acts. Organizations must build monitoring systems to detect data degradation, establish quality metrics, and create feedback loops so agents can flag suspicious inputs. McKinsey's research notes that starting on data quality enforcement often reveals problems far wider than initially expected. Departments frequently maintain incompatible definitions for the same business concepts, and resolving that requires cross-functional alignment that can be politically contentious inside large organizations.
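The kind of quality gate and feedback loop described above can be sketched in a few lines. This is an illustrative example, not anything from the McKinsey report: the `DataRecord` shape, the freshness and completeness thresholds, and the flag reasons are all hypothetical.

```python
# Illustrative sketch of a data-quality gate placed in front of an agent.
# All field names and thresholds here are assumptions, not McKinsey's.
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class DataRecord:
    source: str
    fetched_at: datetime      # when the value was last refreshed
    completeness: float       # fraction of required fields populated (0..1)

MAX_STALENESS = timedelta(minutes=5)   # agents acting in real time need fresh data
MIN_COMPLETENESS = 0.95                # assumed quality metric

def quality_issues(record: DataRecord) -> list[str]:
    """Return reasons this record should be flagged rather than acted on."""
    issues = []
    if datetime.now(timezone.utc) - record.fetched_at > MAX_STALENESS:
        issues.append("stale: exceeds freshness window")
    if record.completeness < MIN_COMPLETENESS:
        issues.append("incomplete: below completeness threshold")
    return issues

# A two-hour-old, 80%-complete record fails both checks.
record = DataRecord("crm", datetime.now(timezone.utc) - timedelta(hours=2), 0.80)
print(quality_issues(record))
```

The point of the sketch is the direction of the check: the agent never sees the record until the gate passes, and the flag reasons feed a monitoring loop rather than being silently discarded.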

The fourth step is evolving operating models. This is the organizational layer, and it is frequently the hardest. Defining when an autonomous agent can act independently versus when it requires human approval demands frameworks that reflect specific business risk tolerances. Building teams capable of managing, monitoring, and improving autonomous systems requires skills that traditional IT operations did not need. McKinsey frames human-agent collaboration as a structural design challenge, not a training exercise.
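One structural pattern for the act-versus-approve boundary described above is a risk-tiered routing rule. The tiers, the autonomy ceiling, and the example actions below are hypothetical assumptions for illustration; they are not taken from the report.

```python
# Sketch of a human-approval gate keyed to business risk tolerance.
# Risk tiers, the autonomy ceiling, and action names are illustrative.
from enum import Enum

class Risk(Enum):
    LOW = 1       # e.g. drafting an internal meeting summary
    MEDIUM = 2    # e.g. sending a client-facing report
    HIGH = 3      # e.g. executing a regulated transaction

AUTONOMY_CEILING = Risk.LOW  # agent may act alone at or below this tier

def route(action: str, risk: Risk) -> str:
    """Decide whether the agent acts directly or queues for human review."""
    if risk.value <= AUTONOMY_CEILING.value:
        return f"agent executes: {action}"
    return f"queued for human approval: {action}"

print(route("draft meeting summary", Risk.LOW))
print(route("submit suitability assessment", Risk.HIGH))
```

Encoding the boundary explicitly, rather than leaving it to per-team judgment, is what makes the operating model auditable: the ceiling becomes a reviewable configuration value instead of a habit.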

Key Details

  • McKinsey published the framework on April 2, 2026, as part of its McKinsey Technology capabilities research series.
  • Financial services firms are cited as the primary example, with up to five separate AI tools deployed for a single client workflow.
  • Data architecture modernization for large enterprises typically takes 12 to 24 months and requires significant capital expenditure, according to the research.
  • The four steps are: agentify high-impact workflows, modernize data architectures, enforce data quality, and evolve operating models.
  • Early enterprise agentic AI deployments have produced consistent failures across three categories: data quality errors, real-time access limitations, and governance gaps.
  • Gartner, Forrester, and specialized technology consultants have reached similar conclusions about data infrastructure as a prerequisite, though with varying implementation emphasis.

What's Next

Enterprises that treat McKinsey's four steps as a sequential checklist rather than a simultaneous commitment will likely underdeliver on agentic AI through 2026 and into 2027. The organizations to watch are those that began data modernization programs in 2024 and 2025, because those companies now have the infrastructure runway to deploy autonomous agents at production scale. Expect the value gap between data-mature enterprises and those still operating fragmented legacy stacks to widen sharply over the next 18 months.

How This Compares

McKinsey's April 2026 framework is notable for connecting data infrastructure, governance, and organizational design in a single integrated argument. Compare that to Gartner and Forrester, whose research on agentic AI prerequisites has been thorough but tends to treat data infrastructure and organizational change as separate workstreams. McKinsey's core contribution is the insistence that all four steps must happen concurrently, not in phases. That is a harder sell to enterprise leadership, but it is a more honest diagnosis of what successful deployments actually require.

The vendor community has arrived at similar conclusions through a different path. Data infrastructure platforms, including those covered in our AI tools directory, have added governance and monitoring features specifically designed for agentic AI workloads. The vendor response and the McKinsey framework are converging on the same truth: autonomous agents are a data problem as much as they are an AI problem. What McKinsey adds is the organizational layer, the operating model evolution that vendors naturally avoid because it is not something they can sell in a software package.

It is also worth comparing this to earlier AI adoption cycles. When enterprises deployed machine learning at scale between 2018 and 2022, the common failure mode was building models without data pipelines capable of supporting them in production. Agentic AI is repeating that pattern at higher speed and higher stakes. McKinsey's framework is, in some respects, a formalized version of lessons the industry already learned the hard way once. The difference this time is that autonomous agents act, not just predict, which means the consequences of bad data foundations are operational, not just analytical.

FAQ

Q: What is agentic AI and why does it need special data infrastructure?
A: Agentic AI refers to autonomous systems that take actions, make decisions, and complete multi-step tasks without waiting for human approval at each step. Unlike traditional AI that generates recommendations, agentic systems act on those recommendations directly. That means errors from poor data quality or incomplete information produce real-world consequences immediately, which is why the data infrastructure supporting these systems must meet much higher standards than conventional analytics platforms.

Q: How long does it take to build the data foundation McKinsey recommends?
A: McKinsey's research indicates that modernizing data architectures for large enterprises typically takes between 12 and 24 months, and that timeline assumes adequate funding and executive commitment. Data quality enforcement is an ongoing discipline, not a project with a completion date. Organizations should plan for a multi-year investment rather than expecting a one-time infrastructure upgrade to be sufficient.

Q: Can a company start deploying AI agents before finishing data modernization?
A: Yes, but McKinsey's framework suggests limiting those early deployments to lower-stakes workflows where data errors carry manageable consequences. Using pilot deployments to surface data quality problems and integration gaps is a reasonable approach, provided the organization is simultaneously investing in the foundational work rather than treating pilots as a substitute for it. Check our guides section for practical starting points.

McKinsey's April 2026 framework will not be the last word on agentic AI deployment, but it is a clear-eyed assessment of where enterprises are falling short and why the technology keeps underdelivering on its promise. The organizations that treat data infrastructure as the product, rather than a supporting concern, are the ones that will have autonomous agents doing meaningful work by 2027. Subscribe to the AI Agents Daily newsletter for daily updates on AI agents, tools, and automation.

Our Take

This story matters because it signals a shift in how agentic AI is being framed: as a data and governance problem before it is a model problem. We are tracking how enterprises respond to McKinsey's demand that all four steps run concurrently, and will report on follow-up impacts as they emerge.


