“Explainable AI for health: where we are and how to move forward”

The first part of my talk covers research from my lab on applying explainable AI across diverse biomedical domains. I will demonstrate how explainable AI can address novel scientific questions, with a primary emphasis on understanding neurodegenerative diseases and biological age.

In the second part, we will explore the evolving landscape of explainable AI and its potential to chart new scientific directions in biomedicine, exemplified by our recent work in dermatology, emergency medicine, and precision cancer medicine. This discussion aims to highlight the advances explainable AI needs in order to tackle a wide range of real-world challenges in biomedicine.

Bio (shorter version): here

Bio (longer version):

Prof. Su-In Lee, the Paul G. Allen Professor of Computer Science at UW, earned her PhD from Stanford University in 2009 under the guidance of Prof. Daphne Koller. She joined UW in 2010 after serving as a visiting Assistant Professor in the Computational Biology Department at Carnegie Mellon University School of Computer Science. Recognized for her groundbreaking contributions to AI and biomedicine, Prof. Lee has received prestigious accolades including the National Science Foundation (NSF) CAREER Award, the International Society for Computational Biology (ISCB) Innovator Award, and the Ho-Am Prize, often referred to as the "Korean Nobel Prize," as well as designation as an American Cancer Society (ACS) Research Scholar and a Fellow of the American Institute for Medical and Biological Engineering (AIMBE). She has also received generous grants from the National Institutes of Health, the National Science Foundation, the American Cancer Society, and the Chan Zuckerberg Initiative (CZI).

Her research aims to conceptually and fundamentally advance how AI can be integrated with biomedicine by addressing novel, forward-looking scientific questions enabled by advances in AI. For example, when the primary focus of AI applications in biomedicine was on making accurate predictions with machine learning (ML) models, her team focused instead on why a given prediction was made, developing novel AI principles and techniques (e.g., SHAP) that improve the interpretability of ML models and apply to a broad spectrum of problems beyond biomedicine.

Her recent research spans a broad spectrum of problems, including developing explainable AI (also known as interpretable ML) techniques, identifying the causes and treatments of challenging diseases such as cancer and Alzheimer's disease, and developing and auditing clinical AI models.