The Last Mile: Policy Considerations for the Adoption of AI in Healthcare

By Mark Hoffman, Chief Research Information Officer, Children’s Mercy Kansas City
Current health practice is primarily based on heuristic decision trees that are established from centuries of medical experience and scientific evidence. A patient presents with a set of symptoms. A highly trained provider reviews the patient history, physical exam and findings from lab results, imaging studies and other evaluations. The provider compares this information to their formal education, their practical training, their personal experience, perhaps consults with their colleagues or reviews recent literature, and then recommends a course of action for the patient. If the initial recommendation doesn’t resolve the concern, they may engage specialists. Human beings are present in every aspect of this process, from generating the scientific evidence included in the formal education to performing the diagnostic tests and establishing payment policies that influence health decisions.

The emergence of new technologies, including the various forms of machine learning and deep learning, is challenging the healthcare paradigm described above. Recent work in machine learning has demonstrated that algorithms can process images and detect pneumonia with accuracy comparable to that of a radiologist. Substantial progress is being made in the development of algorithms that can process electronic health record (EHR) data and identify patients at risk of developing conditions such as sepsis. The growth of genomic medicine requires computational aids to assist in the recognition of DNA variants that may be clinically significant. Healthcare workforce issues, the rising cost of healthcare and the limits of human cognition and perception combine to justify this impressive computational work. Often the end goal of this work is represented as “next generation decision support.”

The “last mile” for these advances to be incorporated into routine healthcare practice (especially beyond primarily academic settings) requires resolving significant gaps in current policies and practices for incorporating decision support into patient care. For example, in current practice the adoption of clinical decision support (CDS) rules in a clinical setting typically requires the review and approval of a committee. The committee assesses the logic of the rule, evaluates the literature for the evidence basis of the logic, reviews the results of local testing and reaches a conclusion about the readiness of the rule to be implemented in production. When the CDS logic is a relatively simple “If [Condition A and B], then [action C]” that can be tested and reproduced with carefully constructed test patients, this process works well. When the CDS logic is more opaque, as often happens with the results of deep learning, this evaluation process is unlikely to provide information that will reassure the committee that the CDS logic will be safe and effective. As another example, an algorithm optimized to identify pneumonia may miss a lung malignancy that a radiologist would note. Other concerns include the quality of data used to train algorithms, whether diverse populations are accurately represented in the data, and how to evaluate reproducibility in the era of dynamically adjusting algorithms.
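The “If [Condition A and B], then [action C]” pattern described above can be sketched as a transparent rule. This is a minimal illustration only, with hypothetical field names and thresholds, not a clinical recommendation; its point is that a committee can trace the logic directly and verify it against constructed test patients, which is exactly what becomes difficult with an opaque deep-learning model.

```python
def sepsis_screen_alert(patient: dict) -> bool:
    """Fire an alert when both conditions hold (hypothetical thresholds)."""
    condition_a = patient["temperature_c"] >= 38.3   # Condition A: fever
    condition_b = patient["heart_rate_bpm"] > 90     # Condition B: tachycardia
    return condition_a and condition_b               # Action C: raise the alert

# Carefully constructed test patients make the rule reproducible to review:
assert sepsis_screen_alert({"temperature_c": 39.0, "heart_rate_bpm": 110}) is True
assert sepsis_screen_alert({"temperature_c": 36.8, "heart_rate_bpm": 72}) is False
```

Every branch of a rule like this can be enumerated and tested before production use; a deep-learning classifier offers no comparable enumeration, which is why the existing committee review process struggles with it.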

Until these concerns can be addressed, current practice and policy frameworks will struggle to incorporate the creative work emerging from the data science community. One resolution is to clearly articulate the goals and role of the computational system. A simple change in labeling can help reset the tone. The acronym “AI” is primarily used to mean “Artificial Intelligence.” This expression is widely used in the technical community, the media and discussions of the risks and benefits of this emerging area. A parallel definition for “AI” is gaining traction: “Augmented Intelligence.” This alternative has the benefit of clearly conveying an assistive role. Where “artificial” can emphasize the non-human aspect of AI, “augmented” keeps the human expert in the center. For example, many diagnostic activities stretch the limits of human perception. It is no longer possible for a clinical trainee to know every clinically significant DNA variant. Many subtle pathologies present early indications that elude pathologists and radiologists but can be flagged by image analysis. When algorithms are used to triage results or to provide secondary review, the human expert remains in the loop but is able to work more effectively and efficiently. This approach is often the intent of development work in clinical data science but may be lost in the details of working out the technical analysis.

Successful marathon runners (not me, by the way!) plan for the “last mile” early in their training. Likewise, the ultimate success of AI in healthcare requires collaborative work between clinicians, data scientists and local and national policy makers. Establishing clarity about the role of AI in clinical decision making requires investment in defining the policy framework needed to insert these new capabilities into routine clinical practice. Data scientists should engage in conversations with administrators that begin with, “What would it take for you to be comfortable that this manages risk and promotes better outcomes?” Policy makers should seek and expect clear answers to questions about quality control, risk and reproducibility. Developing a risk evaluation framework for the adoption of AI applications is also useful. For example, an application that augments the patient scheduling process is likely to be lower risk than a diagnostic algorithm. Substantial testing is still warranted for administrative AI applications because healthcare always has complex exceptions that may not have been represented in the initial training data.

AI has significant potential to improve the quality and efficiency of many aspects of healthcare. Early consideration of the “last mile,” the local and national policy considerations needed to insert new technologies into routine practice, will be an important factor in the ultimate success of these promising new capabilities.
