Lee Lab of AI for bioMedical Sciences (AIMS)
The goal of the AIMS lab, led by Prof. Su-In Lee, is to conceptually and fundamentally advance how AI is integrated with biomedicine by addressing novel, forward-looking, and stimulating scientific questions enabled by AI advances. For example, when the primary focus of AI applications in biomedicine was on making accurate predictions with machine learning (ML) models, we uniquely focused on why a model makes a certain prediction, developing novel AI principles and techniques (e.g., SHAP) that improve the interpretability of ML models and apply to a broad spectrum of problems beyond biomedicine.
Modern, complex, black-box ML models such as deep neural networks have become standard tools in biomedical research, but their opacity and consequent lack of interpretability remain a bottleneck to the widespread adoption of AI in biomedicine and beyond. On their own, these models do not answer key questions such as the molecular basis of complex phenotypes, design strategies for therapeutics, or the reasoning behind clinical decisions. This challenge gave rise to explainable AI (XAI), also known as interpretable ML.
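As a brief illustration of what a feature-attribution technique like SHAP offers, the sketch below uses the open-source shap Python package to explain a single model prediction. The demo dataset and tree model are illustrative stand-ins, not a biomedical application or the lab's actual pipeline.

```python
# A minimal sketch of the kind of explanation SHAP provides, using the
# open-source shap package. The dataset and model here are illustrative
# stand-ins, not the lab's biomedical workflows.
import shap
import xgboost

# A small demo dataset bundled with shap (census income prediction).
X, y = shap.datasets.adult()
model = xgboost.XGBClassifier().fit(X, y)

# SHAP assigns each feature an additive contribution to one prediction,
# relative to the model's baseline output.
explainer = shap.Explainer(model)
shap_values = explainer(X)

# Visualize why the model scored the first individual the way it did.
shap.plots.waterfall(shap_values[0])
```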
The AIMS lab's research spans a broad spectrum of problems, advancing: (A) AI/ML - developing XAI principles and techniques; (B) Biology - identifying the causes and treatments of challenging diseases such as cancer and Alzheimer's disease; and (C) Clinical medicine & healthcare - developing and auditing clinical AI models. See our latest publications.
We are a group of passionate researchers, including CSE Ph.D. students and MSTP (M.D./Ph.D.) students. Our lab is interdisciplinary, with backgrounds ranging from computer science, statistics, and mathematics to molecular & cell biology and clinical medicine (M.D. in progress or completed).
Two papers were accepted to ICLR'23. Congratulations to Chris Lin, Hugh Chen, Chanwoo Kim, and Ian Covert!
The Annual CMB Symposium will be held on 1/11.
Prof. Lee shares her view on future trends in digital health with GeekWire.
Wei Qiu (CSE PhD)'s paper on explainable prediction of all-cause mortality is published in Communications Medicine.
Joe and Alex (MD and CSE PhD)'s work on XAI-based model auditing is highlighted in Nature. See the highlight here.
Hugh Chen (CSE PhD)'s paper on explaining a series of models is published in Nature Communications.