iLunch Talk (Oct 16): Modelling Connectivity: An Alternative Approach to Neural Network Compression
What: Modelling Connectivity: An Alternative Approach to Neural Network Compression
When: Friday October 16, 12 noon – 1:00 pm
Where: Zoom Meeting ID: 854 8826 5959 Password: 986329
Who: Madan Ravi Ganesh
Affiliation: Ph.D. Candidate at the University of Michigan and member of the Sekeh Lab (UMaine).
Abstract: Larger and deeper neural network models have become the modus operandi for tackling real-world problems. However, the focus on large-capacity DNNs runs counter to the requirements of their hardware implementations. Neural network compression via pruning has emerged as a popular approach to bridge the gap between exorbitantly large theoretical models and their slimmer hardware counterparts, while maintaining a desired level of performance. Most approaches to neural network pruning apply deterministic constraints to the learned weight matrices, either by evaluating a filter’s importance using appropriate norms or by modifying the objective function with sparsity constraints. While these offer a useful way to approximate the contributions of individual filters, they either ignore the dependency between layers or solve a needlessly harder optimization problem. In this talk, I propose an alternative approach to neural network pruning that uses Conditional Mutual Information (CMI) under a probabilistic framework. In this work, I use CMI as a measure of connectivity between filters of adjacent layers across the entire DNN, which can then be used to prune filters that contribute less information to subsequent layers. Expanding on this further, I show how ideas from the original weight-based approaches can be combined with our newly proposed probabilistic framework to yield a highly effective hybrid pruning solution.
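To give a flavor of the idea, the sketch below scores filters of one layer by an estimate of their conditional mutual information with the next layer's response, given the remaining filters. This is only an illustrative toy, not the speaker's actual method: it assumes jointly Gaussian activations (so CMI reduces to a partial-correlation formula), and all function names, the pooled next-layer summary, and the toy data are hypothetical choices made for the example.

```python
import numpy as np

def gaussian_cmi(x, y, z):
    """Estimate I(X; Y | Z) under a joint-Gaussian assumption,
    via the partial correlation of x and y given z (illustrative only)."""
    # Residualize x and y on z with least squares, then correlate residuals.
    z1 = np.column_stack([z, np.ones(len(z))])
    rx = x - z1 @ np.linalg.lstsq(z1, x, rcond=None)[0]
    ry = y - z1 @ np.linalg.lstsq(z1, y, rcond=None)[0]
    r = np.corrcoef(rx, ry)[0, 1]
    return -0.5 * np.log(1.0 - r ** 2)

def rank_filters_by_cmi(acts_l, acts_next):
    """Score each filter in layer l by its CMI with a pooled next-layer
    response, conditioned on the remaining filters of layer l."""
    n_filters = acts_l.shape[1]
    y = acts_next.mean(axis=1)  # crude pooled next-layer activation
    scores = []
    for j in range(n_filters):
        others = np.delete(acts_l, j, axis=1)
        scores.append(gaussian_cmi(acts_l[:, j], y, others))
    return np.argsort(scores)  # least-informative filters first

# Toy demo: 500 samples, 8 filters in layer l; the next layer depends
# mostly on filters 0 and 1, so the rest should rank as prunable.
rng = np.random.default_rng(0)
acts_l = rng.standard_normal((500, 8))
acts_next = (2.0 * acts_l[:, :1] + 1.5 * acts_l[:, 1:2]
             + 0.05 * rng.standard_normal((500, 1)))
order = rank_filters_by_cmi(acts_l, acts_next)
print("pruning order (least informative first):", order)
```

In this toy setup, filters 0 and 1 carry nearly all the information about the next layer, so they land at the end of the pruning order; a real pipeline would estimate CMI nonparametrically and apply this across every pair of adjacent layers.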
Bio: Madan Ravi Ganesh is a Ph.D. candidate at the University of Michigan, co-advised by Dr. Jason J. Corso and Dr. Salimeh Yasaei Sekeh (UMaine). He obtained his M.Sc. degree in Computer Vision from the University of Michigan in 2016 and his B.E. degree in Electronics and Communication from MSRIT, Bangalore, in 2013. Madan’s interests lie in understanding the inner workings of deep neural networks, the analysis and development of video-based DNN architectures, and efficient memory usage in deep learning.
Host: School of Computing and Information Science, University of Maine
For questions: email@example.com