
Real Challenges in Adopting AI and Data Platforms Within Legacy Environments: Lessons from Enterprise Transformation

By Bhumika Shah, AI Researcher in Enterprise Data & Organizational Transformation | Speaker | Author | Educator, University of the Cumberlands

Organizations across industries are under pressure to adopt Artificial Intelligence (AI) to improve decision-making, reduce operational friction, and unlock new business value. Yet despite heavy investment, many companies struggle to translate pilots into production-scale outcomes. A 2026 PwC CEO survey found that 56% of global CEOs had not yet observed meaningful revenue or cost benefits from their AI efforts, even as technology expectations continue to rise.¹ Similarly, Forrester analyst Biswajeet Mahapatra estimates that only 10–15% of AI projects make it into sustained production use.²

The core problem is not a lack of model performance or innovation capability; it is that AI is colliding with systems, architectures, and operating models that were never designed for it. AI maturity presupposes data maturity, and many enterprises are still working through both.

Legacy Systems: Stable, Critical, and Difficult to Modernize

Many enterprises, especially in regulated sectors such as healthcare, banking, and government, still rely on decades-old core platforms. Analyses from the U.S. Government Accountability Office show that federal agencies spend nearly 80% of their IT budgets on operations and maintenance, much of it dedicated to sustaining legacy systems,³ leaving limited room for platform modernization or cloud migration. These systems were built for transactional stability rather than real-time data streams, feature engineering, lineage tracking, or probabilistic inference workloads.

Additionally, data fragmentation is pervasive. Recent enterprise surveys show that organizations now manage data across more than 400 distinct systems and sources on average, creating significant fragmentation that impedes AI readiness.⁴ Integrating these sources into a coherent AI-ready data fabric is not merely a tooling challenge; it is an architectural and governance transformation that touches every business unit and domain.

In healthcare, interoperability remains a significant obstacle. HIMSS surveys repeatedly highlight that interoperability gaps across electronic health record systems are among the primary barriers to AI adoption, even when strong clinical or operational use cases exist.⁵ Insurance environments face similar fragmentation across claims processing, policy administration, actuarial models, and regulatory reporting systems. The common thread is that AI is being asked to operate in ecosystems that predate modern data flows by decades.

AI adoption is ultimately not a tooling problem; it is a systems problem. Legacy enterprises can evolve into intelligent enterprises, but only by aligning data, governance, technology, and culture in a shared transformation strategy.

Data Governance and the “AI Readiness” Gap

While AI discussions often center on models, organizations consistently report that data quality, observability, and governance are bigger obstacles to value realization. Governance assessments from Gartner indicate that despite widespread AI experimentation, most organizations lack the data stewardship and governance capabilities required to move from pilots to production.⁶

Lineage, consent tracking, domain definitions, and privacy policies are essential for AI in regulated environments, but few organizations have these artifacts standardized across business units.

Without this foundation, every AI use case becomes a multi-stakeholder negotiation among legal, compliance, security, and data owners. This slows deployment and reduces trust. Equally important, it makes it difficult for boards and regulators to validate models, increasing organizational risk.

From my own work in healthcare and insurance data engineering and my ongoing PhD research on how AI adoption affects data-driven culture within data teams, one pattern is consistent: governance accelerates AI deployment when it is treated as an enabler rather than an afterthought. Organizations that invest in data cataloging, lineage, and policy controls early are the ones that later scale AI without accumulating compliance debt.

The Human Dimension: Skills, Trust, and Culture

Technical constraints are only half the equation. The other half is human. Surveys on AI adoption show that workers are simultaneously positive toward AI and unsure how to trust or govern it. A 2025 KPMG study found that only 41% of workers report trusting AI, even though 61% already see benefits from using it.⁷ Meanwhile, CEO-level surveys show that most executives believe AI is strategically important, yet many acknowledge they lack the operating model and workforce skills to govern it responsibly.⁸

Inside data teams, this manifests as:

  • shadow usage of AI tools without a clear policy
  • friction between engineering and compliance teams
  • ambiguity around AI accountability
  • role confusion between data, IT, and business leadership

My PhD research suggests that success depends on more than skill acquisition. Psychological safety, cross-functional collaboration, and role clarity are critical factors in adoption. AI is not simply a technical capability; it is a shift in how decisions are made and who makes them.

Case Snapshots from Recent Transformations

Recent industry case studies highlight the interplay between legacy systems, data platforms, and cultural transformation:

  1. Insurance Claims Automation (EY): A Nordic insurer automated components of claims processing using AI and workflow redesign, but much of the effort went into integrating with legacy platforms rather than building models.⁹
  2. Hospital Readmission Prediction (Mayo Clinic/ACHE): Predictive AI demonstrated higher accuracy than traditional scoring, but success depended on embedding AI into clinician tools and addressing interoperability challenges.¹⁰ ⁵
  3. Public Sector Modernization (U.S.): Government technology reviews show that 70–80% of budgets go to legacy system maintenance,³ creating structural drag on modernization and AI adoption.

Across all three, the pattern is clear: AI value emerges only when modernization, data platforms, governance, and operating-model design are tackled together.

What Actually Works: A Practical Playbook for Scaling AI in Legacy Environments

Enterprises that successfully scale AI tend to follow a predictable maturity pattern. While terminology varies across industries, the underlying workflow is consistent.

Phase 1: Foundation: Build a Trustworthy Data Environment

AI cannot be layered on top of fragmented, low-quality, or poorly governed data. Successful programs begin by stabilizing the information supply chain.

This includes:

  • Selecting 2–3 high-value business journeys (vs. dozens of unfocused pilots)
  • Defining data products for the domains supporting those journeys (with owners, SLAs, and policies)
  • Establishing lineage, cataloging, and access controls to support auditability and consent
  • Providing sandbox environments for experimentation that do not compromise compliance

The foundational question:
“Do we have data we can trust and use responsibly?”
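One way to make these data-product commitments concrete is to encode them rather than document them. The sketch below is a minimal, illustrative contract object; all names, fields, and the purpose-check logic are hypothetical assumptions, not any specific catalog tool's API:

```python
from dataclasses import dataclass, field

@dataclass
class DataProduct:
    """Minimal data-product contract: owner, SLA, and allowed purposes
    travel with the dataset instead of living in a wiki page."""
    name: str
    domain: str
    owner: str                      # an accountable steward, not a team alias
    freshness_sla_hours: int        # maximum acceptable data staleness
    pii_fields: list = field(default_factory=list)
    allowed_purposes: list = field(default_factory=list)

    def is_usable_for(self, purpose: str) -> bool:
        # A purpose/consent check gates every downstream AI use case
        return purpose in self.allowed_purposes

# Hypothetical insurance-domain example
claims = DataProduct(
    name="claims_history",
    domain="claims",
    owner="claims-data-steward@example.com",
    freshness_sla_hours=24,
    pii_fields=["member_id", "dob"],
    allowed_purposes=["fraud_detection", "reserving"],
)

print(claims.is_usable_for("fraud_detection"))  # True
print(claims.is_usable_for("marketing"))        # False
```

Even this small amount of structure makes the multi-stakeholder negotiation described earlier checkable: a use case either matches an allowed purpose or it does not.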

Phase 2: Integration: Connect Legacy and Modern Platforms

Once the data foundation is in place, enterprises encounter the integration bottleneck, where modern AI workloads meet decades-old systems.

Work here includes:

  • Wrapping legacy systems through APIs, connectors, and event streams
  • Harmonizing cloud/on-prem data strategies
  • Addressing interoperability across business domains
  • Aligning security and privacy controls end-to-end

This phase is architectural, not analytical.
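In practice, "wrapping" a legacy system usually begins with a thin adapter that translates its native format into a typed, modern interface. The sketch below assumes a hypothetical fixed-width mainframe extract; the layout, field names, and widths are illustrative, not any real system's format:

```python
from datetime import datetime

def parse_legacy_policy_record(raw: str) -> dict:
    """Adapter for a hypothetical fixed-width policy record:
    cols 0-9 policy id, 10-17 effective date (YYYYMMDD),
    cols 18-27 premium in cents. Isolating the legacy layout
    behind one function gives modern pipelines a stable interface
    without touching the core system itself."""
    return {
        "policy_id": raw[0:10].strip(),
        "effective_date": datetime.strptime(raw[10:18], "%Y%m%d").date().isoformat(),
        "premium": int(raw[18:28]) / 100.0,   # cents -> currency units
    }

# A sample 28-character record from the hypothetical extract
raw = "POL0000123" + "20240115" + "0000125000"
event = parse_legacy_policy_record(raw)
print(event)
```

The design point is that the adapter, not the consuming AI workload, owns knowledge of the legacy format, so the mainframe can later be replaced without rewriting every downstream pipeline.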

Phase 3: Operationalization: Turn Models into Business Systems

AI remains research until it becomes reliable, auditable, and governed.

Operationalization requires:

  • Continuous model performance monitoring and drift detection
  • Retraining and rollback workflows
  • Traceability for regulators and risk committees
  • Human-in-the-loop pathways for high-stakes decisions

The key question:
“Can we run this safely and repeatably?”
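Drift monitoring does not require heavy tooling to start. The sketch below computes the Population Stability Index (PSI), a common drift metric, from scratch on synthetic data; the thresholds in the docstring are conventional rules of thumb, and the data and bin count are illustrative assumptions:

```python
import math

def population_stability_index(baseline, current, bins=10):
    """PSI between a training-time feature distribution and live traffic.
    Rule of thumb: < 0.1 stable, 0.1-0.25 worth watching,
    > 0.25 likely warrants investigation or retraining."""
    lo, hi = min(baseline), max(baseline)

    def bin_fractions(data):
        counts = [0] * bins
        for x in data:
            # clamp into [0, bins-1]; out-of-range live values land in edge bins
            i = int((x - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[max(0, min(i, bins - 1))] += 1
        return [max(c / len(data), 1e-6) for c in counts]  # avoid log(0)

    b, c = bin_fractions(baseline), bin_fractions(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

baseline = [i / 100 for i in range(100)]              # synthetic training data
shifted = [0.3 + 0.7 * i / 100 for i in range(100)]   # drifted live data

print(round(population_stability_index(baseline, baseline), 3))  # 0.0
print(round(population_stability_index(baseline, shifted), 3))
```

A metric like this, computed on a schedule and wired to the retraining and rollback workflows above, is what turns "monitoring" from a dashboard into an operational control.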

Phase 4: Enablement: Prepare People, Roles, and Processes

Technology creates no value unless work changes accordingly.

Enablement focuses on:

  • Role-specific AI literacy and skills programs
  • Usage and governance policies with escalation paths
  • Role clarity around who owns model risk and validation
  • Embedding AI champions within business functions, not just IT

This phase resolves the cultural questions of trust and adoption.

Phase 5: Value Realization: Measure, Learn, and Expand

Finally, organizations must demonstrate that AI has created measurable business value.

This includes:

  • Quantifying improvements in cycle time, cost, throughput, or quality
  • Validating that workflows and decisions actually changed
  • Communicating early wins to build momentum
  • Scaling by reuse rather than reinvention

Successful enterprises treat AI as a compounding capability, not a project.¹²

[Figure: Five-Phase Framework for Operationalizing AI in Legacy Systems]

Looking Ahead

As global bodies publish formal AI governance recommendations and regulators emphasize explainability and accountability, organizations with fragile data foundations and legacy environments face increasing risk. But for those willing to modernize platforms, strengthen governance, and invest in people, AI can generate substantial strategic advantage. The differentiator will not be the novelty of the model but the maturity of the system that surrounds it.



References

  1. PwC (2026). 29th Global CEO Survey – Leading Through Uncertainty in the Age of AI. PricewaterhouseCoopers Global.
  2. Forrester Research (2024). AI Adoption in Enterprise & Pilot Scalability Analysis. Report coverage summarized via Economic Times Technology.
  3. U.S. Government Accountability Office (2019). Federal Legacy IT Spending and Modernization Challenges. GAO-19-471.
  4. Informatica (2024). Enterprise Data Survey – State of Data Fragmentation and Integration.
  5. HIMSS (2024). State of Interoperability and Health IT Adoption Report.
  6. Gartner (2024–2025). Data Governance and Data Management Magic Quadrant Insights.
  7. KPMG (2025). U.S. Worker Trust and Attitudes Toward AI Study.
  8. PwC (2025). Responsible AI & Enterprise Risk Survey.
  9. EY (2024). Nordic Insurance Claims Transformation Case Study.
  10. ACHE Poster & HIMSS Case Evidence (2024). Predictive AI for Hospital 30-Day Readmission.
  11. BCG (2025). Beyond AI Adoption: The Full Potential Playbook for Enterprise Value.
  12. Booz Allen Hamilton (2025). AI-Driven Approaches for Modernizing Legacy Systems.