Spring 2018

  ECE 539: Advanced Topics in DSP: Deep Learning

    Time: Tuesday 3:20pm-6:20pm
    Location: CoRE 538

    Vishal M. Patel
    Email: vishal.m.patel@rutgers.edu
    Office: CoRE 508

    Office Hours: By appointment

    Peter Meer
    Email: meer@soe.rutgers.edu
    Office: CoRE 519

    Office Hours: During the day of the lecture

     Tentative Schedule
Prof. Meer
From where we start
Convolutional neural net
Introduction to machine learning
Numerical problems
Principal component analysis
Prof. Patel
Eigenfaces for recognition by Turk & Pentland

Eigenfaces vs. Fisherfaces: Recognition Using Class Specific Linear Projection by Belhumeur et al.

Discriminant analysis for recognition of human face images by Etemad & Chellappa

A Tutorial on Support Vector Machines for Pattern Recognition by Burges

Robust Face Recognition via Sparse Representation by Wright et al.

Prof. Patel
Features: LBP, HOG, SIFT, Gabor
Intro to Deep Learning
Face Description with Local Binary Patterns: Application to Face Recognition by Ahonen et al.

Histograms of Oriented Gradients for Human Detection by Dalal & Triggs

Face Recognition by Elastic Bunch Graph Matching by Wiskott et al.

Distinctive Image Features from Scale-Invariant Keypoints by Lowe

Prof. Patel
Vishwanath Sindagi
Visualizing CNNs
Intro to Caffe, Torch, Theano, TensorFlow, Paddle
(R) ImageNet Classification with Deep Convolutional Neural Networks by Krizhevsky et al.

Very Deep Convolutional Networks for Large-Scale Image Recognition by Simonyan & Zisserman

Visualizing and Understanding Convolutional Networks by Zeiler & Fergus

(R) Efficient Processing of Deep Neural Networks: A Tutorial and Survey by Sze et al.
Poojan Oza (2)
Matt Purri (3)
Keith Papa (1)
Denoising autoencoders
Variational autoencoders
Conditional Variational Auto-Encoder
(1) Deep Learning, Chapter 14

(1)(R)Extracting and Composing Robust Features with Denoising Autoencoders by Vincent et al.

(1)Stacked Denoising Autoencoders: Learning Useful Representations in a Deep Network with a Local Denoising Criterion by Vincent et al.

(2)Tutorial on Variational Autoencoders by Doersch

(2)Auto-Encoding Variational Bayes by Kingma and Welling

(2)(R)Attribute2Image: Conditional Image Generation from Visual Attributes by Yan et al.

(3)(R)Dynamic Routing Between Capsules by Sabour et al.

(3)Matrix Capsules with EM Routing by Hinton et al.
Rong Zhang (1)
Hairong Wang (2)
Luyang Liu (3)
Boltzmann Machines
Restricted Boltzmann Machines
Deep Belief Networks
Student-Teacher Networks
(1)(R)Reducing the Dimensionality of Data with Neural Networks by Hinton & Salakhutdinov

(1)(R)Boltzmann Machines by Hinton

(2)A Fast Learning Algorithm for Deep Belief Nets by Hinton et al.

(2)A Practical Guide to Training Restricted Boltzmann Machines by Hinton

(3)(R)Distilling the Knowledge in a Neural Network by Hinton et al.

(3)Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results by Tarvainen & Valpola

(3)FitNets: Hints for Thin Deep Nets by Romero et al.

Pengyu Guo (1)
Kangkang Li (2)
Thomas Shyr (3)
Deep Residual Networks
DenseNets
(1)(R)Deep Residual Learning for Image Recognition by He et al.

(1)Residual Networks Behave Like Ensembles of Relatively Shallow Networks by Veit et al.

(2)(R)Identity Mappings in Deep Residual Networks by He et al.

(2)Aggregated Residual Transformations for Deep Neural Networks by Xie et al.

(3)(R)Densely Connected Convolutional Networks by Huang et al.

(3)Image Super-Resolution Using Dense Skip Connections by Tong et al.
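The residual connection at the center of these readings can be sketched in a few lines of NumPy (an illustrative toy, not the papers' architecture; the ReLU branch and weight shapes are arbitrary choices):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """y = x + F(x): the identity shortcut lets gradients bypass F."""
    h = relu(x @ w1)       # residual branch F(x): linear map, ReLU, linear map
    return x + h @ w2      # add the identity shortcut

rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w1, w2 = rng.standard_normal((4, 4)), rng.standard_normal((4, 4))
y = residual_block(x, w1, w2)
# With the residual branch zeroed out, the block is exactly the identity:
assert np.allclose(residual_block(x, np.zeros((4, 4)), np.zeros((4, 4))), x)
```

The identity shortcut is why very deep stacks of such blocks remain trainable: a block can default to a no-op instead of having to learn the identity.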

Blerta Lindquist (1)
Jingxuan Chen (2)
Feng Rong (3)
Saeed Soori (4)
Recurrent Neural Networks (RNNs)
Long Short-Term Memory networks (LSTMs)
(1)(R)Long Short-Term Memory by Hochreiter & Schmidhuber

(2)How to Construct Deep Recurrent Neural Networks by Pascanu et al.

(1)Unsupervised Learning of Video Representations using LSTMs by Srivastava et al.

(2)Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling by Chung et al.

(3)(R)Generating Sequences With Recurrent Neural Networks by Graves

(3)Long-term Recurrent Convolutional Networks for Visual Recognition and Description by Donahue et al.

(4)(R)Pixel Recurrent Neural Networks by van den Oord et al.

(4)Beyond Short Snippets: Deep Networks for Video Classification by Ng et al.

Spring break
Qike Ying (1)
Fnu Abhishek Prasanna (2)
Bangtian Liu (3)
Daohui Rong (4)
Object Detection
Face Detection
Spatial Pyramid Pooling


(1)(R)Deep neural networks for object detection by Szegedy et al.

(1)(R)Spatial pyramid pooling in deep convolutional networks for visual recognition by He et al.

(2)Fast R-CNN by Girshick

(2)Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks by Ren et al.

(3)R-FCN: Object Detection via Region-based Fully Convolutional Networks by Dai et al.

(3)You Only Look Once: Unified, Real-Time Object Detection by Redmon et al.

(4)(R)HyperFace: A Deep Multi-task Learning Framework for Face Detection, Landmark Localization, Pose Estimation, and Gender Recognition by Ranjan et al.

(4)SSH: Single Stage Headless Face Detector by Najibi et al.
Kaixiang Huang (1)
Shiva Salsabilian (2)

Prof. Meer
Deep Face Recognition

Mean Shift for Segmentation, Tracking and Motion
(1)(R)FaceNet: A Unified Embedding for Face Recognition and Clustering by Schroff et al.

(1)DeepFace: Closing the Gap to Human-Level Performance in Face Verification by Taigman et al.

(2)Web-Scale Training for Face Identification by Taigman et al.

(2)(R)Deep Learning Face Representation from Predicting 10,000 Classes by Sun et al.

(2)Deep Learning Face Representation by Joint Identification-Verification by Sun et al.

Short introduction into human color vision
Mean shift: A robust approach toward feature space analysis by Comaniciu & Meer
Slides for segmentation
The EDISON segmentation system (available for download)
Tracking through mean shift
Slides for tracking
Two needed relations
Nonlinear mean shift over Riemannian manifolds
An introduction to group theory
Slides for nonlinear mean shift for motion
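The core mean shift update behind these lectures can be sketched as follows (a toy with a flat kernel; the bandwidth h and the sample points are illustrative):

```python
import numpy as np

def mean_shift_step(x, points, h):
    """One mean shift update with a flat kernel: move x to the mean
    of all samples within bandwidth h of the current position."""
    in_window = points[np.linalg.norm(points - x, axis=1) <= h]
    return in_window.mean(axis=0) if len(in_window) else x

# Three samples near the origin and one far away; iterating from a
# nearby start converges to the local mode (the mean of the cluster).
pts = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
x = np.array([0.2, 0.2])
for _ in range(10):
    x = mean_shift_step(x, pts, h=1.0)
assert np.allclose(x, pts[:3].mean(axis=0))
```

Segmentation and tracking both iterate this update until convergence, with a kernel and bandwidth chosen for the feature space at hand.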

Mohsen Ghassemi (1)
Zahra Shakeri (3)
Kumar Siddhant (2)
Generative Adversarial Networks (GANs)
Spatial Transformer Networks
Deep Clustering

(1)(R)Generative Adversarial Nets by Goodfellow et al.

(1)Conditional Generative Adversarial Nets by Mirza and Osindero

(1)NIPS 2016 Tutorial: Generative Adversarial Networks by Goodfellow

(2)Image-to-Image Translation with Conditional Adversarial Nets by Isola et al.

(2)Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks by Zhu et al.

(2)(R)Spatial Transformer Networks by Jaderberg et al.

(3)Clustering with Deep Learning: Taxonomy and New Methods by Aljalbout et al.

(3)(R)Deep Subspace Clustering Networks by Ji et al.
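For orientation, the two-player minimax objective introduced in the (R) Goodfellow et al. paper, with D the discriminator and G the generator:

```latex
\min_G \max_D \; V(D,G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}}\!\left[\log D(x)\right]
  + \mathbb{E}_{z \sim p_z}\!\left[\log\bigl(1 - D(G(z))\bigr)\right]
```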

Fatemeh K. (1)
Krishnamohan P. (2)
Yehan Wang (3)
Fully convolutional Nets
Domain Adaptation
(1)(R)Rich feature hierarchies for accurate object detection and semantic segmentation by Girshick et al.

(1)(R)Fully Convolutional Networks for Semantic Segmentation by Long et al.

(1)U-Net: Convolutional Networks for Biomedical Image Segmentation by Ronneberger et al.

(2)Pixel Objectness by Jain et al.

(2)Visual domain adaptation: a survey of recent advances by Patel et al.

(3)(R)Unsupervised Domain Adaptation by Backpropagation by Ganin and Lempitsky

(3)Deep CORAL: Correlation Alignment for Deep Domain Adaptation by Sun and Saenko

(3)CyCADA: Cycle-Consistent Adversarial Domain Adaptation by Hoffman et al.

Xiaoyi Tang (1)
Kangning Yang (2)
Neural Turing Machines
Orderless Set Network
Meta Learning
Memory Networks
(1)(R)Neural Turing Machines by Graves et al.

(1)(R)Order Matters: Sequence to sequence for sets by Vinyals et al.

(1)Memory Networks by Weston et al.

(2)End-To-End Memory Networks by Sukhbaatar et al.

(2)Meta-Learning with Memory-Augmented Neural Networks by Santoro et al.

(2)Neural Aggregation Network for Video Face Recognition by Yang et al.

Prof. Meer
Robust estimators returning geometric functions
RANdom SAmple Consensus (RANSAC) has serious drawbacks.
The Multiple Inlier Structures Robust Estimator (MISRE) requires no tuning parameters.
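A toy RANSAC line fit makes the tuning parameters concrete (the trial count and inlier threshold that MISRE dispenses with); the data and all names here are illustrative:

```python
import numpy as np

def ransac_line(pts, n_iter=100, thresh=0.1, rng=None):
    """Fit y = a*x + b by RANSAC: repeatedly fit a line to 2 random
    points and keep the model with the most inliers within `thresh`.
    `n_iter` and `thresh` are exactly the tuning parameters at issue."""
    rng = rng if rng is not None else np.random.default_rng(0)
    best, best_inliers = None, -1
    for _ in range(n_iter):
        i, j = rng.choice(len(pts), size=2, replace=False)
        (x1, y1), (x2, y2) = pts[i], pts[j]
        if x1 == x2:
            continue                      # vertical pair: skip this sample
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        inliers = np.sum(np.abs(pts[:, 1] - (a * pts[:, 0] + b)) < thresh)
        if inliers > best_inliers:
            best, best_inliers = (a, b), inliers
    return best

# Ten exact inliers on y = 2x + 1 plus two gross outliers.
xs = np.linspace(0, 1, 10)
pts = np.vstack([np.column_stack([xs, 2 * xs + 1]),
                 [[0.5, 10.0], [0.2, -7.0]]])
a, b = ransac_line(pts)
assert abs(a - 2.0) < 1e-9 and abs(b - 1.0) < 1e-9
```

The fit recovers the true line despite the outliers, but only because `thresh` and `n_iter` were set sensibly for this data, which is the sensitivity the lecture contrasts with MISRE.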


Deep Learning, by Ian Goodfellow, Yoshua Bengio, and Aaron Courville
Neural Networks and Deep Learning, by Michael Nielsen