Innovation Without Impact? Why Radiology Risks Missing the AI Leap

Innovation Is Everywhere—Except in Clinical Practice

AI has been a topic in radiology for more than a decade. It once dominated congresses; now it features in almost every talk, promising a revolution. But despite the noise, daily practice remains largely untouched.

Vendors highlight constant improvements in accuracy and new AI use cases. And yet, where are the radiologists who say their work has dramatically changed? Or those who feel replaced by smarter machines? More than 10 years after the first clinical AI tools, there is still no widespread, routine adoption. Why? The short answer: it depends.

Let’s start with the production side. Most users don’t build their own tools, and when they do, the results tend to work remarkably well because they know the problem inside out. More often, though, it is engineers trying to solve clinical problems. That gap often leads to a mismatch: a solution might look perfect in theory but fail in reality. The wider the gap, the higher the risk of a misfit. In a perfect market, bad products disappear and good ones rise. In healthcare, it’s more complicated.

Radiologists aren’t always decision-makers. AI vendors pitch to administrators, not end users. Sure, radiologists are consulted, but often late in the process. The result? AI tools that shine in the lab but stumble in everyday clinical use. Take outputs that appear as clunky tables in PACS, impossible to copy or paste, forcing radiologists to retype or dictate. It’s like replacing a horse-drawn carriage with a car that has no engine.

These seemingly minor yet deeply practical failures are no exception. A systematic review by Hassan et al. (2024, *Human Factors*) confirms how layered the barriers to AI adoption truly are: technical issues such as poor data integration and limited algorithm generalizability; human factors such as resistance to change and lack of trust; and organizational hurdles, from high costs to workflow disruption. The problem isn’t the promise of AI. It’s everything around it.

In contrast, look at language translation. LLMs like ChatGPT spread rapidly. Why? Because verifying the output is easy: if the translated text reads naturally and conveys the intended meaning, it works. The stakes are also lower; translation errors are rarely life-threatening. In medicine, it’s different. We rely on experience refined over decades. It’s not perfect, but it still works, at least for now.

Even where AI is available, clinical use remains low. A survey by the American College of Radiology (Allen et al., 2021, *JACR*) found that only 30% of radiologists use AI routinely. Those who do tend to value it; those who don’t remain skeptical. Trust comes from use, not from headlines.

So what holds them back? Doubts about reliability, especially bias across patient groups, modalities, or image quality. Many see no tangible benefit. And most have little influence over what technology gets purchased. When users aren’t meaningfully involved, implementation stays superficial or fails outright.

Another example of an AI revolution that worked: the financial industry. Millions of trades are processed by AI daily. Systems like JPMorgan Chase’s “LOXM” outperform human traders in speed, accuracy, and cost-efficiency. In finance, the pressure to adapt is existential. Fall behind, and you're out.

Healthcare is different. And rightly so: few sectors are as tightly regulated as medicine. No one wants half-tested tools making life-or-death decisions. But the question still stands: how can a field like radiology, with its urgent needs and clear AI use cases, fail to harness the support that’s already out there?

The Human Cost of Innovation Paralysis

Errors in diagnosis and treatment have always been part of medicine. What’s different today is how avoidable many of them have become. With technologies like AI and standardized reporting, we now have tools that can dramatically increase patient safety and care quality. And yet, by not using them—or by using them too late—we are exposing patients to unnecessary risks.

We often talk about the dangers of premature innovation. But we rarely talk about the danger of waiting too long. Delaying the implementation of mature technologies can cost lives just as surely as rushing half-baked solutions into practice.

One example of that cost is the growing global backlog in radiology reporting. A recent international pilot study by Goldberg et al. (2023, *JACR*) found that 68% of imaging centers had a delay in formally interpreting exams, even for time-critical modalities like head CT, chest CT, and chest X-rays. In some centers, up to 96% of head CTs remained unreported after a week. And even six months later, a significant number of facilities still had backlogs, posing a real risk to patient safety.

These findings point not just to overworked systems, but to missed opportunities: if scalable tools like AI were used more effectively, delays like these might no longer be the norm. If we take our professional responsibility seriously, shouldn’t we be leading this change, or at the very least shaping it? After all, who better to judge what serves the patient’s best interest than the physicians who care for them, and the patients themselves?

But the damage goes deeper than individual outcomes. Innovation is a process—and it needs real-world application to evolve. Every delay in implementation stalls progress. Promising tools remain stuck in the lab, and the data we need to guide future development is never collected. What could become exponential stays linear.

And there’s another risk we don’t talk about enough: exclusion. If physicians continue to hesitate, the system may move on without them. It’s hard to imagine healthcare innovation happening without doctors, but it’s not impossible. If we fail to engage, others will find ways forward. Technically driven actors like industry, platforms, and startups are already shaping healthcare in ways that bypass clinical input. If we leave the table, the table doesn’t disappear; someone else simply takes our seat.

That should concern even the most skeptical among us. A healthcare system without medical leadership may become more efficient—but it risks losing its humanity. It may optimize for what’s measurable, not what matters. That is not the future we want. And that’s why we, as physicians, need to be involved—not someday, but now.

What Should We Do About It?

So perhaps the real question is not whether we need innovation—but how we choose to approach it. Caution is justified, especially in healthcare. But caution must not become inertia. Delays in adopting meaningful tools don’t just affect workflows—they affect people. And while not every solution will live up to its promise, ignoring the problem won’t make it go away.

If we want to remain part of the decision-making process, we should stay engaged—critically, constructively, and with a clear view of what’s at stake.
