Artificial intelligence (AI) buzz is everywhere these days. The infusion of AI into our daily lives is becoming more evident by the moment, and ever more prevalent: perhaps the hallmark of impeccable automation. Email spam filters, online purchasing preferences, self-driving cars, automatic toothbrushes (!), health diagnostics…AI, not long ago limited to esoteric academia and deep-pocketed corporations, is now nearly ubiquitous.
Cloud-based AI platforms now provide low-cost access to immense computing power and development pathways to the masses. Declining financial barriers to entry, paired with investor belief that the AI hype is real, make it almost unimaginable for any new technology strategy to ignore an AI angle.
What is AI?
The translation of AI research into practice has accelerated in the past decade largely due to “machine learning.” Machine learning employs algorithms ranging from well-known statistical methods (such as linear and logistic regression) to more advanced methods born of an intent to automate recognition and understanding of the world around us.
“Supervised learning” applies labeled data categorized by experts to content datasets for training. “Unsupervised learning” foregoes the labeling, freeing algorithms to discover correlations without prior knowledge of the ground truth. “Deep learning” uses neural networks, computerized analogues of the human central nervous system involving several layers of “neurons,” to refine (strengthen or weaken) correlations leading to meaningful outcomes. Iterative application of deep learning techniques has revolutionized machine learning capabilities, particularly for evaluation of large datasets.
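For readers who want to see these terms in action, here is a minimal Python sketch contrasting supervised, unsupervised and (very shallow) deep learning, using the scikit-learn library on purely synthetic data. The dataset, feature counts and model sizes are illustrative assumptions, not anything drawn from clinical practice.

```python
# Toy contrast of supervised, unsupervised and "deep" learning.
# All data here are synthetic; no clinical content is implied.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPClassifier

# Synthetic "feature" vectors with expert-style labels (0/1)
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised learning: the model is trained against the expert labels
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("logistic regression accuracy:", clf.score(X_test, y_test))

# Unsupervised learning: no labels; the algorithm finds structure on its own
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster assignments (no labels used):", clusters[:10])

# "Deep" learning (very shallow here): layers of neurons whose connection
# weights are strengthened or weakened over repeated training passes
mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=2000,
                    random_state=0).fit(X_train, y_train)
print("neural network accuracy:", mlp.score(X_test, y_test))
```

The point is not the accuracy figures but what each approach is given to work with: expert labels, no labels at all, or layers of adjustable “neurons” whose connections are refined over many iterations.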
AI in medicine
AI is arguably at its buzziest in the health care sector; however, applying AI in medicine presents challenges not encountered in many other industries.
Right off the bat, AI experts face data access issues due to the compartmentalization of information behind health care firewalls and the lack of electronic health record (EHR) system interoperability. EHR data is largely unstructured narrative, requiring abstraction tools such as natural language processing to parse the necessary components for machine learning.
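As a rough illustration of what parsing that narrative involves, the sketch below pulls a few structured fields out of an invented procedure note with simple pattern matching. The note text, field names and patterns are hypothetical, and a real abstraction pipeline would lean on a dedicated clinical natural language processing toolkit rather than bare regular expressions.

```python
# A minimal sketch of turning an unstructured procedure note into
# structured fields that a machine learning pipeline could consume.
# Note text, field names and patterns are invented for illustration.
import re

note = ("Right common femoral access with a 6 Fr sheath. "
        "A 4 cm SFA lesion was treated with a 6 mm x 40 mm balloon. "
        "No immediate complications.")

patterns = {
    "sheath_size_fr": r"(\d+)\s*Fr\s+sheath",
    "lesion_size_cm": r"(\d+(?:\.\d+)?)\s*cm\s+\w*\s*lesion",
    "complications":  r"No immediate complications",
}

structured = {}
for field, pattern in patterns.items():
    match = re.search(pattern, note, flags=re.IGNORECASE)
    if field == "complications":
        structured[field] = "none documented" if match else "unknown"
    elif match:
        structured[field] = float(match.group(1))

print(structured)
# {'sheath_size_fr': 6.0, 'lesion_size_cm': 4.0, 'complications': 'none documented'}
```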
These barriers also make it dauntingly difficult to validate AI on data with enough demographic and geographic diversity to be generalizable. Many of the largest technology companies thriving on AI today have leapt confidently into the health care arena only to find a warren of disconnected data pipes and silos of protected health information.
AI in diagnostic and interventional radiology
Anyone attending a radiology conference today will see AI touted as the next revolution. Diagnostic radiology (DR), in particular, is viewed by many as low-hanging fruit for machine learning applications. With so much concentrated effort in DR, is interventional radiology (IR) likely to be swept into the same AI orbit? Remarkable achievements in radiographic imaging analysis would seem to set the stage for AI in IR, but are the mental operations and mechanical actions of an image-guided proceduralist inaccessible to machine learning?
IR has achieved a level of renown and distinction in health care for out-of-the-box thinking. Whether that serves as AI kryptonite or as yet another opportunity for AI to astonish (the way Charles Dotter did in the 1960s) remains to be seen. Optimistically, we could build AI in IR to capture IR’s distinctiveness, to shape design philosophies toward empowerment and discovery, and to evolve the practice of medicine.
The greatest upside potential for AI in IR may result from inventive blending of imaging, clinical information, procedural skills and workflow optimization. Inventiveness is core to IR but may not be enough to ensure success in an AI-heavy future. For example, the design of helpful tools like automated implant detection will benefit from co-development of meaningful metrics for assessment, as well as the means to integrate them into clinical decision support tools.
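To make “automated implant detection” slightly more concrete, the sketch below defines a toy convolutional network that labels an image patch as implant-present or implant-absent. The architecture, image size and random input are placeholder assumptions rather than a validated clinical model, and the sketch deliberately omits the training data, assessment metrics and decision-support integration discussed above.

```python
# Toy sketch of an "implant detection" image classifier.
# Architecture, sizes and the random input are placeholders only.
import torch
import torch.nn as nn

class ImplantDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 2),  # two classes: implant / no implant
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One fake 64x64 grayscale "radiograph" patch, batch size 1
patch = torch.randn(1, 1, 64, 64)
logits = ImplantDetector()(patch)
print(logits.softmax(dim=1))  # class probabilities from this untrained model
```

Even a toy like this shows why co-developed metrics matter: an untrained network will happily emit confident-looking probabilities.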
Clarifying how the value proposition will be measured from the outset will help to separate AI wheat from chaff—as well as better prepare our specialty for successful implementation of more complex AI-based tools, such as automated and individualized risk stratification. As explaining the value of IR has not traditionally been our greatest strength, now is the time to consider strategic investments, (human) learning and new paradigms for innovation.
SIR and AI
SIR members are contributing to the expansion of AI in IR through the American College of Radiology’s Data Science Institute (DSI). DSI resources and expertise could open doors for a new generation of AI researchers, and IR is one of the core areas for “AI use case” concept development (see sidebar). Initial ideas range from leveraging SIR’s standardized reporting templates for training AI algorithms (with their inherent common terminologies and data standards) to AI applications for IR workflow and quality improvement.
The AI train is roaring down the track in all aspects of modern life. Can we jump on board and steer it in directions that increase the value of IR and the quality of life for our patients and practitioners? Watch for more insights and specific applications of AI to IR in a future issue of IRQ.
What are DSI use cases?
BY THE AMERICAN COLLEGE OF RADIOLOGY
In 2018 the American College of Radiology (ACR) Data Science Institute (DSI) began publishing well-defined use cases for AI. These use cases provide guidance and vision to the medical AI community on building AI in ways that benefit the clinician and patient.
These use cases improve the odds of developing AI tools that:
- Address relevant clinical questions
- Can be implemented across multiple electronic workflow systems
- Comply with requirements to submit data to relevant registries to enable ongoing assessment
- Comply with applicable legal, regulatory and ethical requirements
The growing list of published use cases (available on the DSI website at bit.ly/2Z6j32a) currently includes the following categories: abdominal, breast imaging, cardiac, musculoskeletal, neurology, oncology, pediatric and thoracic.
“As we are working to obtain and incorporate feedback into our preliminary use cases, we are seeing a groundswell of support for the information we are providing,” said Laura Coombs, ACR senior director of informatics. “This is an exciting stage of use case development because every bit of feedback, no matter how small, has the potential to profoundly affect both the industry and clinicians’ ability to create and deploy AI technology.”
If you would like to share ideas or participate, find out more about opportunities to become involved at bit.ly/2Z93jr8.