Modern problems require complex models: larger and deeper neural network models have become the modus operandi for tackling real-world tasks. However, deploying large-capacity DNNs introduces runtime issues and steep memory requirements. Neural network compression via pruning has emerged as a popular approach that resolves the computational complexity challenges of DNNs while maintaining a desired level of performance. There are two broad approaches to DNN pruning: 1) applying a deterministic constraint on the weight matrices, which benefits from ease of implementation and the learned structure of the weight matrix, and 2) using a probabilistic framework aimed at maintaining the flow of information between layers, which leverages the connections between filters and their downstream impact. While deterministic approaches offer a useful way to approximate filter contributions, they either ignore the dependency between layers or solve a needlessly harder optimization objective. In this talk, I propose a novel approach to neural network pruning that harnesses Conditional Mutual Information (CMI) under a probabilistic framework. I then use an adaptive CMI measure of connectivity between layers and show how ideas from the original weight-based methods and our newly proposed probabilistic framework can be combined into a highly effective hybrid pruning solution. Finally, I further exploit the weight matrices by defining a sensitivity criterion for filters that measures the strength of their contributions to the following layer and highlights critical filters that must be protected from pruning. I show that our proposed approach is over 17x faster than the nearest comparable method and outperforms all existing pruning approaches on three standard Dataset-DNN benchmarks: CIFAR10-VGG16, CIFAR10-ResNet56, and ILSVRC2012-ResNet50.
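To give a flavor of the downstream-sensitivity idea mentioned above, the following is a minimal sketch, not the speaker's exact criterion: it scores each filter of a convolutional layer by the magnitude of the next layer's weights that consume its output channel, protects the highest-scoring filters, and nominates the weakest for pruning. All function names and the choice of an l1-norm score are illustrative assumptions.

```python
import numpy as np

def filter_sensitivity(next_layer_weights: np.ndarray) -> np.ndarray:
    """Hypothetical sensitivity score for each filter of layer l.

    next_layer_weights: weights of layer l+1, shape (out_ch, in_ch, k, k).
    Returns one score per in_ch, i.e., per filter of layer l, computed as
    the l1-norm of all layer-(l+1) kernel entries reading that channel.
    """
    return np.abs(next_layer_weights).sum(axis=(0, 2, 3))

def select_filters_to_prune(next_layer_weights: np.ndarray,
                            prune_ratio: float = 0.3,
                            protect_quantile: float = 0.9) -> np.ndarray:
    """Pick low-sensitivity filters to prune, shielding critical ones."""
    scores = filter_sensitivity(next_layer_weights)
    # Filters above the protection quantile are never pruned.
    protected = scores >= np.quantile(scores, protect_quantile)
    order = np.argsort(scores)                 # weakest contributors first
    budget = int(prune_ratio * scores.size)    # how many filters to remove
    prune = [i for i in order if not protected[i]][:budget]
    return np.array(prune)

# Example: layer l+1 has 64 output channels reading 32 channels from layer l.
rng = np.random.default_rng(0)
w_next = rng.standard_normal((64, 32, 3, 3))
print(select_filters_to_prune(w_next, prune_ratio=0.25))
```

The talk's actual method ranks filters with an adaptive CMI connectivity measure rather than a plain weight norm; this sketch only illustrates the general protect-then-prune pattern.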
Salimeh Yasaei Sekeh is an Assistant Professor of computer science at the School of Computing and Information Science (SCIS), University of Maine. Prior to this, she was a Postdoctoral Research Fellow in the Electrical Engineering and Computer Science Department, University of Michigan, Ann Arbor, working with Alfred O. Hero. She held a CAPES-PNPD funded Postdoctoral Fellowship at the Federal University of Sao Carlos (UFSCar), Brazil, in 2014 and 2015, and was a Visiting Scholar at the Polytechnic University of Turin, Italy, between 2011 and 2013. Salimeh is the Director of Sekeh Lab, and her recent research interests are in machine learning, large-scale data science, and statistical signal processing. Her primary focus is on the design, improvement, and analysis of deep learning techniques, with an emphasis on deep network compression. Her interests include multi-class classification problems, graph-based learning, data mining, high-dimensional network structure learning, practical applications of machine learning to real-time problems, multi-agent trajectory prediction, and network interaction analysis.