Healthcare’s Slow AI Adoption and Safety Mindset Are Costing Lives
By Christian Wernz, PhD, Lead Data Scientist at Sentara Health | Lecturer at UVA School of Data Science
In healthcare, “do no harm” is sacred. Yet ironically, this principle—meant to protect—can sometimes obstruct the very innovation that could save lives. Artificial Intelligence (AI) has the power to transform care, improving outcomes, easing clinician burden, and advancing health equity. But healthcare’s deeply ingrained caution and fragmented approach to AI adoption are leaving patients behind—and sometimes, putting them at risk.
We are not just missing out on potential. We are causing harm by standing still.
The Promise: AI That Saves Lives
AI is already transforming industries from logistics to finance. In healthcare, its promise is even more profound. AI models can detect sepsis hours before symptoms escalate, identify cancers in radiology scans that human eyes may miss, and streamline care pathways to avoid unnecessary hospitalizations. At an operational level, predictive analytics can reduce ER overcrowding and optimize staffing—two major challenges that directly affect care quality.
A 2016 BMJ study found that medical errors contribute to over 250,000 deaths each year in the United States alone. That’s not due to incompetence; it’s the result of complexity, time pressure, and data overload. AI can help.
So why aren’t we using it more?
The Reality: A Culture of Caution
Healthcare organizations are inherently risk-averse. Rightly so—we deal with human lives, and every new intervention must be scrutinized. However, there is a thin line separating caution from paralysis.
Too often, AI tools stall in perpetual pilot mode—if they even get there. Innovations get stuck in committees, burdened by overly rigid validation protocols that aim for perfection rather than progress. Even when AI is validated and available, frontline clinicians hesitate to use it due to a lack of trust, unclear workflows, or fear of liability.
It’s not uncommon to see hospitals running algorithms offline “for testing,” even after their value has long been demonstrated. Meanwhile, patients are being discharged late, misdiagnosed, or not diagnosed at all.
This isn’t just a systems issue—it’s a mindset issue.
Inaction Is Not Neutral
The underlying assumption is that not using AI is the safer bet. But this thinking is flawed.
In healthcare, inaction is a decision. And it carries consequences. When we delay deploying proven tools, we delay diagnoses. When we don’t optimize workflows, we overload already exhausted staff. When we fail to act, we allow inequities to persist in who gets access to quality care.
The safety mindset becomes self-defeating: by waiting for “perfect,” we undermine “better than now.”
Let’s put it plainly: people are dying not because we used AI recklessly, but because we didn’t use it at all.
Rethinking Safety and Speed
Other high-stakes industries offer powerful lessons. In aviation, AI provides pilots with real-time assistance. In finance, it flags fraud and adjusts portfolios instantly. These sectors don't wait for perfection; they rely on version control, feedback loops, and ongoing learning to iterate safely in the real world.
Healthcare must embrace a similar mindset. We need to shift from trying to prove AI is flawless to proving it’s better than what we do now. That doesn’t mean ignoring risk—it means recognizing that refusing to improve is a risk of its own.
True safety isn’t about standing still—it’s about learning fast.
What Needs to Change
1. Culture
We need a new ethos—one where innovation and safety are not at odds. AI doesn’t replace clinicians; it supports them. But support tools only help when they are actually used. Cultural change requires clear messaging from senior leaders, coupled with foundational education in data science and AI across the workforce. People resist what they don’t understand.
2. Governance
Hospitals must establish clear, structured pathways to move AI solutions from pilot to deployment. Today, most AI oversight committees are primarily focused on identifying risks, and the burden of progress often falls on the data science teams and a few clinical champions. To create balance, organizations need mechanisms that not only scrutinize innovation but actively advocate for its implementation.
3. Leadership
Executives must lead with urgency and clarity. Healthcare CEOs must go beyond aspirational messaging—backing innovation with the necessary budget, staffing, and strategic commitment. While some health systems are beginning to appoint Chief AI Officers, few have paired this with an operational Head of AI who can translate strategy into execution. This dual structure—common in leading tech firms—is precisely what healthcare needs to move from AI aspiration to real-world impact. Empowered by the CEO, this leadership duo is responsible for weighing risks against benefits, authorizing implementation, and ensuring delivery of clinical and operational value. Together, they serve as a counterbalance to legal and IT security functions, making sure innovation isn’t stifled by institutional inertia.
4. Policy
Policy must evolve from being a gatekeeper of the past to a steward of the future. Today’s regulatory structures often reflect a world where technologies are static, and risk is binary. But AI doesn’t work that way—it is dynamic, data-driven, and constantly improving. To support meaningful progress, we need policies that acknowledge this fluidity. That means enabling safe experimentation, demanding accountability without rigidity, and shifting from approval events to continuous oversight. When policies are vague or outdated, hospitals default to inaction—waiting for clearer signals before moving forward. This “wait and see” stance is not neutral; it quietly stalls innovation and contributes to care delays.
5. Trust
Clinicians need to be brought into the process early, or better yet, be the ones initiating AI development. When stakeholders resist AI solutions and lament the black-box nature of models, the issue is often not one of explainability, but of trust. Physicians routinely prescribe medications whose mechanisms they don't fully understand, yet they do so confidently because those treatments have been rigorously tested and vetted. AI is no different.
A Call to Action
Healthcare is full of brilliant people trying to do the right thing. But our system is optimized for slow, cautious change. That may have worked in the past—but it won’t work now. Not when technology is advancing this fast. Not when the cost of delay is measured in lives lost.
It’s time to flip the script. Responsible AI adoption doesn’t mean going slow—it means going smart.
We must move forward, not because it’s trendy or competitive—but because it’s the safest thing we can do.
In this new era, speed is safety.