“Explainable AI: where we are and how to move forward for cancer pharmacogenomics”
In the first part of the talk, I will present several research projects from my lab on explainable AI applied to biomedical problems, exemplifying how it addresses new scientific questions, makes new biological discoveries from data, informs clinical decisions, and even opens new research directions in biomedicine.
In the second part of the talk, I will show that explainable AI needs to evolve and improve to solve real-world problems in computational biology and medicine, through a deep dive into our cancer pharmacogenomics project, led by our Ph.D. student Joseph Janizek in collaboration with Prof. Kamila Naxerova at Harvard Medical School.
Prof. Su-In Lee is a Paul G. Allen Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She completed her PhD in 2009 at Stanford University with Prof. Daphne Koller in the Stanford Artificial Intelligence Laboratory. Before joining the UW in 2010, Lee was a Visiting Assistant Professor in the Computational Biology Department at Carnegie Mellon University School of Computer Science. She has received the National Science Foundation CAREER Award and been named an American Cancer Society Research Scholar. She has received generous grants from the National Institutes of Health, the National Science Foundation, and the American Cancer Society.
Her research aims to conceptually and fundamentally advance how AI can be integrated with biomedicine by addressing novel, forward-looking scientific questions enabled by AI advances. For example, when the primary focus of AI applications in biomedicine was on making accurate predictions using machine learning (ML) models, her team uniquely focused on why a certain prediction was made, developing novel AI principles and techniques (e.g., SHAP) to improve the interpretability of ML models, applicable to a broad spectrum of problems beyond biomedicine. Her recent research spans developing explainable AI (a.k.a. interpretable ML) techniques, identifying the causes and treatments of challenging diseases such as cancer and Alzheimer’s disease, and developing and auditing clinical AI models.