Interpretable and Explainable Deep Learning
In this seminar, Mrs. Aya Abdelsalam Ismail will give an overview of interpreting neural networks, with a particular focus on using Deep Neural Networks (DNNs) to track and predict changes over time.
DNNs are proving to be highly accurate alternatives to conventional statistical and analytical methods, especially for problems with many variables (genes, RNA molecules, proteins, etc.) and many interactions among them. Still, practitioners in scientific and research fields such as bioinformatics are often hesitant to use DNN models because they can be difficult to interpret.
During the event, Mrs. Ismail will:
- highlight the limitations of existing saliency-based interpretability methods for Recurrent Neural Networks and offer ways of overcoming these challenges.
- describe a framework for evaluating saliency methods on time series data, using multiple metrics to assess how well a given method detects importance over time.
- show how to apply that evaluation framework to different saliency-based methods across diverse models.
- offer solutions for improving the quality of saliency methods on time series data using a two-step temporal saliency rescaling (TSR) approach, which first calculates the importance of each time step and then the importance of each feature within those time steps (a minimal sketch follows this list).
- talk about how interpretations can be improved using a novel training technique known as saliency-guided training (also sketched below).
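To make the two-step TSR idea concrete, here is a minimal sketch in PyTorch. It assumes a classifier over inputs of shape (1, time, features), uses plain gradient saliency as the base attribution, and masks with a zero baseline; the names `saliency`, `tsr`, and the threshold `alpha` are illustrative, not taken from the talk.

```python
import torch

def saliency(model, x):
    """Plain gradient saliency |d y_c / d x| for the predicted class c."""
    x = x.clone().requires_grad_(True)
    out = model(x)                             # (1, num_classes)
    out[0, out.argmax()].backward()
    return x.grad.abs().squeeze(0)             # (time, features)

def tsr(model, x, baseline=0.0, alpha=0.0):
    """Two-step Temporal Saliency Rescaling (sketch).

    Step 1: importance of each time step = change in the saliency map
            when that whole time step is masked.
    Step 2: for time steps above the threshold, importance of each
            feature = change in the saliency map when that single
            entry is masked; the final score is the product of the two.
    """
    base_map = saliency(model, x)              # (T, F)
    T, F = base_map.shape

    # Step 1: per-time-step relevance.
    time_rel = torch.zeros(T)
    for t in range(T):
        x_m = x.clone()
        x_m[0, t, :] = baseline                # mask the whole time step
        time_rel[t] = (base_map - saliency(model, x_m)).abs().sum()

    # Step 2: per-feature relevance at relevant time steps only.
    scores = torch.zeros(T, F)
    for t in range(T):
        if time_rel[t] <= alpha:               # skip unimportant time steps
            continue
        for f in range(F):
            x_m = x.clone()
            x_m[0, t, f] = baseline            # mask a single feature
            feat_rel = (base_map - saliency(model, x_m)).abs().sum()
            scores[t, f] = time_rel[t] * feat_rel
    return scores
```

The same rescaling can wrap any base saliency method (integrated gradients, DeepLIFT, etc.) in place of the plain gradient used here.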
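Saliency-guided training follows a different route: instead of post-hoc rescaling, the model is trained so that low-saliency inputs genuinely do not matter. One published formulation masks the lowest-gradient input entries each step and penalises the KL divergence between predictions on the original and masked inputs. The sketch below follows that idea; the masking fraction `k_frac`, the weight `lam`, and the zero baseline are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def saliency_guided_step(model, x, y, optimizer, k_frac=0.5, lam=1.0):
    """One training step of saliency-guided training (sketch).

    Mask the k lowest-saliency input entries, then penalise divergence
    between predictions on the original and masked inputs, so the model
    learns to ignore features it claims are unimportant.
    """
    # Gradient saliency of the true-class logits w.r.t. the inputs.
    x_g = x.clone().requires_grad_(True)
    model(x_g).gather(1, y.unsqueeze(1)).sum().backward()
    sal = x_g.grad.abs().flatten(1)            # (batch, time * features)

    # Replace the k least-salient entries with a zero baseline.
    k = int(k_frac * sal.shape[1])
    idx = sal.argsort(dim=1)[:, :k]
    x_masked = x.flatten(1).clone()
    x_masked.scatter_(1, idx, 0.0)
    x_masked = x_masked.view_as(x)

    # Joint loss: usual cross-entropy + KL between the two predictions.
    optimizer.zero_grad()
    logits = model(x)
    loss = F.cross_entropy(logits, y) + lam * F.kl_div(
        F.log_softmax(model(x_masked), dim=1),
        F.softmax(logits, dim=1),
        reduction="batchmean",
    )
    loss.backward()
    optimizer.step()
    return loss.item()
```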
Mrs. Aya Abdelsalam Ismail is a Ph.D. candidate at the University of Maryland. Her research focuses on the interpretability of neural models, long-term forecasting in time series, and applications of deep learning in neuroscience and health informatics.
Upcoming Events
- The 2025 AACI Catchment Area Data Excellence (CADEx) Conference: January 29-31, 2025
- Innovation and AI in Oncology: January 29, 2025
- NCI Symposium on Translational Technologies for Global Health: March 19-20, 2025