PATIENT SAFETY
Explainable artificial intelligence for safe intraoperative decision support
Published on November 1, 2019, in JAMA Surgery
Overview
Intraoperative adverse events are a common and important cause of surgical morbidity. Strategies to reduce these events and mitigate their consequences have traditionally focused on surgical education, structured communication, and adverse event management. Until now, however, little could be done to anticipate such events in the operating room. Advances in both intraoperative data capture and explainable artificial intelligence (XAI) techniques for processing these data pave the way for real-time clinical decision support tools that help surgical teams anticipate, understand, and prevent intraoperative adverse events.
Results
The paper primarily discusses the development and potential of explainable artificial intelligence (XAI) in surgical settings and highlights one key empirical result. In a study of Prescience, an XAI-based warning system for predicting hypoxemia during surgery, anesthesiologists using the tool predicted hypoxemia significantly more accurately than with clinical judgment alone (area under the curve, 0.78 vs 0.66; P < .001). The authors estimate that while anesthesiologists typically anticipate approximately 15% of intraoperative hypoxemic events, this could rise to roughly 30% with Prescience. Notably, about one fifth of hypoxemic-risk cases were potentially related to medications given intraoperatively, a modifiable risk factor that an early warning could help address.
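The Prescience work underlying this paper pairs a gradient-boosted tree model with SHAP feature attributions, so that each risk prediction comes with a per-case breakdown of which inputs drove it. As a rough illustration of that general pattern (not the published model), here is a minimal sketch using scikit-learn and the shap library; the feature names, synthetic data, and risk relationship below are hypothetical placeholders.

```python
# Minimal sketch of an XAI-style risk predictor with per-case explanations.
# The features, data, and label construction are synthetic placeholders;
# they do not reproduce the published Prescience model.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)

# Hypothetical intraoperative features, one row per monitoring interval.
feature_names = ["SpO2", "FiO2", "tidal_volume", "BMI", "opioid_dose"]
X = rng.normal(size=(1000, len(feature_names)))

# Synthetic label: hypoxemia risk loosely driven by low SpO2 and high
# opioid dose, plus noise.
y = ((-X[:, 0] + 0.5 * X[:, 4]
      + rng.normal(scale=0.5, size=1000)) > 1.0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes SHAP values for tree ensembles, attributing each
# individual prediction to the contribution of each input feature.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])

risk = model.predict_proba(X[:1])[0, 1]
print(f"predicted hypoxemia risk: {risk:.2f}")
# List features by magnitude of their contribution to this one prediction.
for name, contrib in sorted(zip(feature_names, shap_values[0]),
                            key=lambda t: -abs(t[1])):
    print(f"  {name}: {contrib:+.3f}")
```

A per-prediction attribution like this is what lets a clinician see, for example, that a rising risk score is being driven by a recently administered medication rather than by a fixed patient characteristic, which is precisely the kind of modifiable factor the authors highlight.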