Mount Sinai Health System

Machine Learning and Data Science Seminar: Developing Efficient Convolutional Networks and Training them at Scale

Abstract: Convolutional networks constitute the core of state-of-the-art approaches to a range of problems in computer vision. Typical networks comprise tens or even hundreds of convolutional layers with learned filters, which require substantial computational and memory resources. In this talk, I will introduce a new network architecture, called multi-scale DenseNets (MSDNets), that allows for the training of a cascade of classifiers at intermediate layers of the network. This allows us to train a single network that, at prediction time, dynamically decides how much of the network to evaluate: for "easy" images, only a small part of the network is evaluated, whilst for "difficult" images, the full, high-quality network is used. MSDNets achieve state-of-the-art performance on image-classification benchmarks at much lower computational cost. I will also present results of experiments in which we train convolutional networks on billions of weakly supervised web images. The resulting networks achieve the current state of the art on the ImageNet image-classification benchmark.
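To make the "early exit" idea in the abstract concrete, below is a minimal, runnable PyTorch sketch of a CNN with intermediate classifiers whose inference stops as soon as one exit is confident enough. It is not the MSDNet architecture itself (which additionally uses multi-scale feature maps and dense connectivity); the layer sizes, module names, and the 0.9 confidence threshold are illustrative assumptions.

```python
# Sketch of anytime, early-exit prediction: "easy" inputs leave at an early
# classifier, "difficult" inputs are processed by the full network.
# Layer sizes and the confidence threshold are assumptions, not MSDNet's.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EarlyExitCNN(nn.Module):
    def __init__(self, num_classes=10, threshold=0.9):
        super().__init__()
        self.threshold = threshold  # softmax confidence needed to exit early
        # Three convolutional stages of increasing depth.
        self.stage1 = nn.Sequential(nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.stage2 = nn.Sequential(nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                                    nn.MaxPool2d(2))
        self.stage3 = nn.Sequential(nn.Conv2d(64, 128, 3, padding=1), nn.ReLU())
        # One classifier head attached after each stage.
        self.exits = nn.ModuleList([nn.Linear(32, num_classes),
                                    nn.Linear(64, num_classes),
                                    nn.Linear(128, num_classes)])

    @staticmethod
    def _head(features, classifier):
        # Global-average-pool the feature map, then classify.
        pooled = F.adaptive_avg_pool2d(features, 1).flatten(1)
        return classifier(pooled)

    def forward(self, x):
        # Training mode: return logits from every exit so all heads get gradient.
        f1 = self.stage1(x)
        f2 = self.stage2(f1)
        f3 = self.stage3(f2)
        return [self._head(f, head)
                for f, head in zip((f1, f2, f3), self.exits)]

    @torch.no_grad()
    def predict_anytime(self, x):
        # Inference on a single image: evaluate stages one at a time and stop
        # at the first exit whose confidence exceeds the threshold.
        f = x
        for stage, head in zip((self.stage1, self.stage2, self.stage3), self.exits):
            f = stage(f)
            probs = F.softmax(self._head(f, head), dim=1)
            conf, pred = probs.max(dim=1)
            if conf.item() >= self.threshold:
                return pred, conf
        return pred, conf  # fall back to the final (full-network) prediction


if __name__ == "__main__":
    model = EarlyExitCNN().eval()
    image = torch.randn(1, 3, 32, 32)  # one dummy RGB image
    pred, conf = model.predict_anytime(image)
    print(f"predicted class {pred.item()} with confidence {conf.item():.2f}")
```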

Tuesday, May 21 at 2:00pm to 3:15pm

CSM Building, Davis Auditorium, 1470 Madison Avenue, New York, NY 10029