Explainable AI
by Charles E. Kahn, Jr, MD, MS, Editor, Radiology: Artificial Intelligence
Machine learning typically needs lots of data – and in radiology, that can mean lots and lots of images.
The training of deep learning systems can require hundreds to thousands of images – even when using “transfer learning” that starts with a model pre-trained on other images.
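To make “transfer learning” concrete, here is a minimal sketch (my illustration, not part of the original column) in PyTorch: it starts from torchvision’s ImageNet-pretrained ResNet-18 and swaps in a new two-class head for a hypothetical pneumothorax/no-pneumothorax task. The frozen backbone, the learning rate, and the task itself are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a model pre-trained on other images (here, ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained feature extractor so only the new head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the final layer with a new two-class head
# (hypothetical task: pneumothorax vs. no pneumothorax).
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head's parameters are optimized; the training loop,
# which would feed in hundreds to thousands of labeled radiographs,
# is omitted here.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```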
But how many images does it take to train a radiologist?
As an example, let’s think about pneumothorax. If you’re a radiologist, think back to your own residency, or look at a radiology textbook. You probably saw a few examples:
a simple unilateral pneumothorax,
air filling an entire hemithorax,
a tension pneumothorax,
a hydropneumothorax,
a false-positive “fake out” by a rib edge,
and so on.
From relatively few images, you learned the idea of “pneumothorax” quickly. Why is that?
Part of the reason is that humans already have a mental model within which to understand pneumothorax. You understand the concepts of air, lungs, pleura, and ribs. The idea of “air inside the pleural space, outside the lung” makes sense.
The challenge for AI systems, and for deep learning systems in particular, is that they learn patterns without models: they don’t construct a “mental model” that connects what they have learned to the concepts humans understand.
Tools such as activation maps (also called saliency maps or “heat maps”) can show which parts of an image contribute most to an AI model’s prediction. For now, such maps offer one of the few ways we can “see inside the mind” of a deep learning model.
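As a sketch of how one simple variant of these maps works (my illustration, an assumption rather than any specific product’s method): a gradient-based saliency map backpropagates the model’s class score to the input pixels, so that the pixels whose change most affects the score light up. The ResNet-18 model and the random tensor standing in for a chest radiograph are placeholders.

```python
import torch
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

# A stand-in for a chest radiograph: one RGB image, 224x224 pixels,
# flagged to receive gradients.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass, then backpropagate the score of the predicted class.
scores = model(image)
top_class = scores.argmax(dim=1).item()
scores[0, top_class].backward()

# The saliency map is the gradient magnitude at each pixel, collapsed
# across color channels: bright pixels are the ones the model "used".
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
```

Overlaying `saliency` on the original image yields the familiar heat map; a radiologist can then judge whether the model is attending to the pleural line or to an irrelevant rib edge.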
One of the ongoing challenges in AI is to develop systems that can explain their reasoning, or at least allow humans to interpret their output. Among others, the Defense Advanced Research Projects Agency (DARPA) program on Explainable AI (XAI) aims to create machine learning techniques that produce more explainable models while maintaining a high level of performance, and that enable human users to understand, appropriately trust, and effectively manage AI systems.
Particularly for medical decision making, we need machine learning systems that can explain their rationale, characterize their strengths and weaknesses, and convey an understanding of how they will behave in the future.
Charles E. Kahn, Jr, MD, MS is professor and vice chair of radiology at the University of Pennsylvania, and editor of Radiology: Artificial Intelligence.
Follow him on Twitter: @cekahn, @Radiology_AI


