
Why the Pace of Clinical Adoption of AI in Medical Imaging Has Been Slow


By Steven Blumer, Associate Medical Director of Radiology Informatics, UPMC

There has been much hype surrounding the potential of AI to disrupt medical imaging, yet the pace of clinical adoption of AI within medical imaging has been slow. A survey conducted in 2020 by the American College of Radiology Data Science Institute (ACR DSI) indicated that nearly one-third of radiologists were using AI and that twenty percent of practices planned to invest in AI within the next five years (https://pubmed.ncbi.nlm.nih.gov/33891859/). More current estimates of AI adoption aren't readily available, but anecdotal reports from colleagues point to continued slow clinical adoption. This raises the question: why has this been the case?

If we view the use of AI through the lens of the Gartner Hype Cycle, we have likely passed the peak of inflated expectations and started the descent into the trough of disillusionment. This is due to a misunderstanding of what AI can do in its current state within medical imaging, especially regarding image interpretation. Current narrow AI models perform one specific task rather than fully interpreting diagnostic imaging studies, which would require the human-like, higher-order reasoning of general AI that is still years away. For example, current narrow interpretive AI models can identify pneumonia and pneumothorax on chest X-rays, as well as brain bleeds and pulmonary nodules on CT scans. Some of these narrow models have performed as well as or better than radiologists, although this is not always the case. The use of narrow AI in other areas of the imaging life cycle is currently further along and has proven more beneficial to radiologists.

AI has many potential applications, but it's primarily a problem-solving tool. Without a specific use case, proving an ROI for AI adoption is difficult. Therefore, before adopting AI, it's imperative to identify and clearly delineate the problem to be solved. If no particular use case exists, investing in a technology that can prove expensive doesn't make sense.
Once a use case has been articulated, the next step is to identify a vendor whose model can potentially solve the given problem. When evaluating models, it's helpful to know whether a model has received 510(k) clearance from the FDA. According to the ACR DSI, 242 radiology models have currently been cleared by the FDA (https://aicentral.acrdsi.org). Implementing a model that has not been cleared by the FDA usually requires more work to bring into an institution, including documenting a proof of concept through a QA project, which can be a time-consuming undertaking.

When evaluating a model, it's important to determine the patient population on which the model was trained. Ideally, the patients used to train the model should resemble the population that the practice serves; otherwise, the model may not perform well in clinical practice. Modality and protocol differences between the training data and the studies the model analyzes in clinical practice can also affect performance. Many vendors offer a free trial prior to purchase, which can be used to validate the model on retrospective patient data and confirm that it performs well for the intended population. If the model doesn't perform as well as hoped, it doesn't make sense to proceed with its adoption. It's also possible that a practice won't identify a suitable model and will then need to reevaluate previously tested models as they are updated, or new models as they are released.
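As a concrete illustration, the retrospective validation step can be as simple as comparing model outputs against radiologist-confirmed ground truth and computing standard diagnostic metrics. The following is a minimal sketch in Python; the CSV file name, column names, and 0.5 threshold are hypothetical assumptions for illustration, not part of any vendor's actual interface.

```python
# Minimal retrospective validation sketch (hypothetical CSV schema).
# Assumes each row pairs a model output score with a radiologist-confirmed label.
import csv

def validate(path: str, threshold: float = 0.5) -> None:
    tp = fp = tn = fn = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            predicted = float(row["model_score"]) >= threshold  # model's probability for the finding
            actual = row["ground_truth"] == "positive"          # radiologist-confirmed label
            if predicted and actual:
                tp += 1
            elif predicted and not actual:
                fp += 1
            elif not predicted and actual:
                fn += 1
            else:
                tn += 1
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    ppv = tp / (tp + fp) if tp + fp else float("nan")
    print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} ppv={ppv:.3f}")

validate("retrospective_trial.csv")  # hypothetical export of the free-trial cohort
```

Running such a script on a local retrospective cohort gives the practice concrete numbers to weigh against the vendor's published performance before committing to a purchase.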

The security of protected health information (PHI) is another potential barrier to AI adoption. Most algorithms are either on-premises ("on-prem") or cloud-based. Cloud-based algorithms often require PHI to leave the institution's network, while on-prem solutions do not. It's therefore necessary to get input from data security staff before deploying AI, and some institutions have stringent requirements governing PHI leaving their networks, which can make implementing cloud-based applications more challenging from a data security perspective. On the other hand, maintaining the infrastructure for on-prem applications requires resources from the institution, whereas cloud-based applications are maintained by the vendor.

Cost is also a very important consideration when deploying AI. Currently, there is only one CPT code for using AI to analyze vertebral compression fractures, and it is an experimental code that is not eligible for reimbursement. Reimbursement for AI used clinically to triage strokes can be obtained by applying for funding through the New Technology Add-On Payment (NTAP) program administered by CMS. However, this is one limited use case, and obtaining reimbursement for the clinical use of AI in medical imaging is clearly the exception, not the rule. It's therefore especially important that practices be able to demonstrate an ROI when asking administration for funding.
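In practice, that ROI case often reduces to simple break-even arithmetic. The sketch below uses entirely made-up numbers; actual license fees, study volumes, and time savings vary widely by practice and vendor.

```python
# Illustrative ROI / break-even arithmetic with hypothetical figures.
annual_license_cost = 75_000        # hypothetical annual subscription ($)
studies_per_year = 40_000           # hypothetical annual study volume
minutes_saved_per_study = 1.5       # hypothetical radiologist time saved per study
radiologist_cost_per_minute = 8.0   # hypothetical fully loaded cost ($/min)

annual_savings = studies_per_year * minutes_saved_per_study * radiologist_cost_per_minute
roi = (annual_savings - annual_license_cost) / annual_license_cost
print(f"estimated annual savings: ${annual_savings:,.0f}")
print(f"estimated ROI: {roi:.0%}")
```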

Once a model is deployed in clinical practice, its performance needs to be monitored over time, because models can drift or decay with continued use. When models drift, they lose predictive power, which can adversely impact patient care. It's therefore important to develop a process for monitoring model performance and to define the metrics used to evaluate it. Vendors should be given feedback on how their models perform and whether they have decayed.
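One lightweight way to operationalize this is to track the model's agreement with final radiologist reports over a rolling window and raise a flag when agreement falls below an expected baseline. The sketch below is one possible approach; the window size, baseline, and tolerance are illustrative assumptions rather than established standards.

```python
# Sketch of ongoing performance monitoring: track the model's agreement with
# final radiologist reports over a rolling window and flag possible drift.
from collections import deque

class DriftMonitor:
    def __init__(self, window: int = 500, baseline: float = 0.90, tolerance: float = 0.05):
        self.outcomes = deque(maxlen=window)  # 1 = model agreed with report, 0 = disagreed
        self.baseline = baseline              # agreement rate expected from validation
        self.tolerance = tolerance            # allowed dip before alerting

    def record(self, model_positive: bool, report_positive: bool) -> None:
        self.outcomes.append(1 if model_positive == report_positive else 0)

    def agreement(self) -> float:
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else float("nan")

    def drifting(self) -> bool:
        # Only alert once the window holds enough cases to be meaningful.
        return (len(self.outcomes) == self.outcomes.maxlen
                and self.agreement() < self.baseline - self.tolerance)

monitor = DriftMonitor()
monitor.record(model_positive=True, report_positive=True)  # fed from daily worklist data
if monitor.drifting():
    print(f"Possible drift: agreement {monitor.agreement():.1%} is below baseline")
```

Whatever the exact mechanism, the key design choice is that monitoring runs continuously on routine clinical data rather than relying on a one-time validation.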

It is also important to monitor for any potential impact of AI applications on other radiology IT systems, such as the PACS, RIS, and EMR. This becomes especially important when many separate AI applications have been deployed outside a single platform, since there is a theoretical risk that the models will degrade the performance of other radiology IT systems as well as of other AI models.

Currently, many practices and institutions don't have protocols in place for AI adoption. Without a process, adopting this technology can be difficult, time-consuming, and poorly coordinated, requiring a clinical champion to coordinate with stakeholders in finance, operations, IT, and data security, as well as clinical leadership. For this reason, many institutions are adopting a data governance approach, which brings these stakeholders together and allows for a more unified and coordinated approach to AI adoption. This should make it easier and quicker to adopt AI, although that remains to be seen.

The emergence of AI marketplaces may also help improve the adoption of AI in medical imaging. A marketplace can lower the costs and administrative burden of implementing AI because it effectively serves as a one-stop shop, providing access to many models instead of requiring each to be purchased separately from different vendors. This means that many tasks, such as purchasing, contracting, and implementation, need to be done only once instead of multiple times. It also allows for greater collaboration between institutions and vendors for testing and validation purposes.

The various challenges and obstacles discussed above have likely contributed to the slow pace of clinical adoption of AI in medical imaging, especially in the domain of image interpretation. However, new practices and paradigms for AI adoption, such as governance structures in institutions and marketplace offerings from vendors, may help drive the future clinical adoption of AI in medical imaging.