Course Description: This course reviews modern methods used in deep learning and neural network design. The material focuses on a broad set of techniques commonly used in state-of-the-art neural network architectures. We will cover methods used across domains, as well as network styles prevalent in specific sub-domains like computer vision, natural language processing, and social network analysis. You will learn how to use these techniques in modern frameworks like PyTorch and how to apply them to new problems. We will not review derivations of algorithms, but methods will be explained with (somewhat gentle) math. This connects what you learn to current research, so you will be able to stay abreast of future developments.
Prerequisite: DATA 602 – Introduction to Data Analysis and Machine Learning or CMSC 478 – Introduction to Machine Learning
Required Text: Edward Raff, “Inside Deep Learning: Math, Algorithms, Models,” Manning Publications (MEAP), 2021. ISBN 9781617298639
Required Software: The course will use Python 3 with the following libraries: numpy, sklearn, pandas, matplotlib, Jupyter, PyTorch. It is the student’s responsibility to have a working environment. If you’d like to install the environment locally, Anaconda is a Python distribution that includes all required libraries. The recommended option is Google’s Colab, which is available through your UMBC account. Both options are a search away.
Coursework Grade Distribution: Homework 40%, Midterm 20%, Project 40%
Course Syllabus
Week 1 Introduction to PyTorch & Foundations [Key Concepts: Automatic Differentiation, PyTorch mechanics]
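To preview the Week 1 mechanics, below is a minimal sketch of automatic differentiation in PyTorch; the function and values are illustrative, not taken from the text.

```python
import torch

# PyTorch records operations on tensors with requires_grad=True,
# then computes gradients automatically via .backward().
x = torch.tensor(2.0, requires_grad=True)
y = x ** 2 + 3 * x     # y = x^2 + 3x
y.backward()           # fills x.grad with dy/dx
print(x.grad)          # dy/dx = 2x + 3 = 7 at x = 2
```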
Week 2 Neural Network Review [Key Concepts: Loss Functions, Activation Functions, Regularization]
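A small sketch of two of the Week 2 review concepts, a classification loss and an activation function; the batch size and sizes are arbitrary stand-ins.

```python
import torch
import torch.nn as nn

logits = torch.randn(4, 3)            # 4 examples, 3 classes (random stand-in data)
targets = torch.tensor([0, 2, 1, 0])  # true class indices
loss = nn.CrossEntropyLoss()(logits, targets)  # a standard classification loss
hidden = torch.relu(torch.randn(4, 8))         # ReLU, a common activation function
```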
Week 3 Convolutional Neural Networks (CNNs) [Key Concepts: Pooling, Weight Sharing, Stride, Dilation]
Application: Image Classification
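A minimal convolution-and-pooling sketch for Week 3; the channel counts, stride, and image size are illustrative assumptions.

```python
import torch
import torch.nn as nn

# dilation=1 is the default; larger values space out the kernel taps
conv = nn.Conv2d(3, 16, kernel_size=3, stride=2, dilation=1, padding=1)
pool = nn.MaxPool2d(kernel_size=2)   # pooling halves each spatial dimension
x = torch.randn(1, 3, 32, 32)        # one 32x32 RGB image (random stand-in)
h = pool(torch.relu(conv(x)))
print(h.shape)                       # torch.Size([1, 16, 8, 8])
```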
Week 4 Recurrent Neural Networks (RNNs) [Key Concepts: Sequence Prediction Problems, Gradient Clipping]
Application: Sentiment Classification & Language Models
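A sketch of a recurrent sentiment classifier with gradient clipping for Week 4; the GRU sizes and the two-class head are assumptions for illustration.

```python
import torch
import torch.nn as nn

rnn = nn.GRU(input_size=50, hidden_size=64, batch_first=True)
head = nn.Linear(64, 2)              # e.g., negative/positive sentiment
x = torch.randn(8, 20, 50)           # 8 sequences of length 20 (stand-in embeddings)
out, _ = rnn(x)
loss = head(out[:, -1, :]).sum()     # predict from the final time step; toy loss
loss.backward()
torch.nn.utils.clip_grad_norm_(rnn.parameters(), max_norm=1.0)  # gradient clipping
```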
Week 5 Network Design: Common Design Building-Blocks [Key Concepts: Batch-Normalization, Skip-Connections, 1×1 Convolution]
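The Week 5 blocks compose naturally; here is a hedged sketch of a residual block combining batch-normalization, a 1×1 convolution, and a skip-connection (the channel count is arbitrary).

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Batch-normalization + 1x1 convolution + skip-connection, composed."""
    def __init__(self, channels):
        super().__init__()
        self.conv3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels)                          # batch-normalization
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=1)   # 1x1 convolution

    def forward(self, x):
        h = torch.relu(self.bn(self.conv3(x)))
        return x + self.conv1(h)   # skip-connection: add the input back

print(ResidualBlock(16)(torch.randn(1, 16, 8, 8)).shape)  # torch.Size([1, 16, 8, 8])
```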
Week 6 Modern Training Techniques [Key Concepts: Momentum, Hyperparameter Optimization, Dropout, Learning-Rate Annealing]
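A compact sketch tying several Week 6 techniques together; the model, loss, and schedule length are placeholder assumptions.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(),
                      nn.Dropout(p=0.5),          # dropout regularization
                      nn.Linear(32, 2))
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)     # momentum
sched = torch.optim.lr_scheduler.CosineAnnealingLR(opt, T_max=100)  # learning-rate annealing
for step in range(100):
    opt.zero_grad()
    loss = model(torch.randn(16, 10)).pow(2).mean()  # placeholder loss on random data
    loss.backward()
    opt.step()
    sched.step()   # anneal the learning rate once per step
```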
Week 7 Network Design: Embedding & Auto-Encoding [Key Concepts: PCA, Self-Supervision]
Application: Image Denoising & Information Retrieval
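A minimal auto-encoder sketch for Week 7; the 784-dimensional input (a flattened 28×28 image) and the 32-dimensional embedding are illustrative choices.

```python
import torch
import torch.nn as nn

class AutoEncoder(nn.Module):
    """Compress inputs to a small embedding, then reconstruct them."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(784, 32)   # 784 -> 32-dim embedding
        self.decoder = nn.Linear(32, 784)   # embedding -> reconstruction

    def forward(self, x):
        return self.decoder(self.encoder(x))

x = torch.randn(16, 784)                   # stand-in batch of flattened images
loss = nn.MSELoss()(AutoEncoder()(x), x)   # self-supervision: the target is the input
```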
Week 8 Region Proposal Networks [Key Concepts: Mask R-CNN]
Application: Object Detection
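Rather than building a region proposal network from scratch, Week 8 can lean on torchvision’s pre-built Mask R-CNN; this sketch assumes a recent torchvision where the weights="DEFAULT" argument is available.

```python
import torch
import torchvision

# Pre-trained Mask R-CNN; a region proposal network sits inside it.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()
with torch.no_grad():
    preds = model([torch.rand(3, 300, 400)])  # list of CxHxW images in [0, 1]
print(preds[0].keys())  # dict with boxes, labels, scores, masks
```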
Week 9 Generative Adversarial Networks (GANs) [Key Concepts: Min-Max games, Generative Models, Ethics]
Applications: Photo Manipulation, Super-Resolution
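One alternating round of the GAN min-max game, as a hedged sketch; the tiny MLPs and the shifted-Gaussian “real” data are stand-ins.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 2))  # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> real/fake logit
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(32, 2) + 3.0        # stand-in "real" data
fake = G(torch.randn(32, 16))

# Discriminator step: push real toward 1, fake toward 0.
opt_d.zero_grad()
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
d_loss.backward()
opt_d.step()

# Generator step: fool the discriminator (the other side of the min-max game).
opt_g.zero_grad()
g_loss = bce(D(fake), torch.ones(32, 1))
g_loss.backward()
opt_g.step()
```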
Week 10 Network Design: Attention Mechanisms [Key Concepts: Attention, Encoding Priors]
Application: Machine Translation
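Scaled dot-product attention, the core of Week 10, in a few lines; the sequence lengths and dimension are arbitrary.

```python
import torch
import torch.nn.functional as F

Q = torch.randn(1, 5, 64)   # 5 query positions, dimension 64
K = torch.randn(1, 7, 64)   # 7 key positions
V = torch.randn(1, 7, 64)   # 7 values, one per key
scores = Q @ K.transpose(-2, -1) / (64 ** 0.5)  # similarity of each query to each key
weights = F.softmax(scores, dim=-1)             # attention weights sum to 1 per query
out = weights @ V                               # weighted sum of values
print(out.shape)                                # torch.Size([1, 5, 64])
```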
Week 11 Network Design: Alternatives to RNNs [Key Concepts: Temporal Pooling, Transformers]
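A sketch of the Week 11 alternatives using PyTorch’s built-in Transformer encoder plus simple temporal pooling; the layer sizes are assumptions.

```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)
x = torch.randn(8, 20, 64)   # batch of 8 sequences, length 20
h = encoder(x)               # unlike an RNN, processes all time steps in parallel
pooled = h.mean(dim=1)       # temporal pooling: average over time steps
```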
Week 12 Transfer Learning [Key Concepts: Pre-Training, Fine-Tuning & Weight Freezing, Domain Adaptation]
Applications: Object Detection, NLP
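The Week 12 recipe of pre-training, weight freezing, and fine-tuning, sketched with a torchvision backbone; the 10-class head and the weights="DEFAULT" flag are assumptions for a recent torchvision.

```python
import torch.nn as nn
import torchvision

model = torchvision.models.resnet18(weights="DEFAULT")  # pre-trained backbone
for p in model.parameters():
    p.requires_grad = False                 # weight freezing: keep pre-trained features
model.fc = nn.Linear(model.fc.in_features, 10)  # new head for the target task; only it trains
```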
Week 13 Network Design 2: Advanced Design Building Blocks & Training Techniques [Key Concepts: Scale-Adaptation, Better Pooling, mixup]
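Of the Week 13 techniques, mixup is the easiest to sketch: train on convex combinations of pairs of examples and their labels. The helper below is hypothetical and assumes one-hot labels.

```python
import torch

def mixup(x, y, alpha=0.2):
    """mixup: blend pairs of examples and their (one-hot) labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample()  # mixing coefficient
    idx = torch.randperm(x.size(0))                        # random pairing of examples
    return lam * x + (1 - lam) * x[idx], lam * y + (1 - lam) * y[idx]
```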
Week 14 Project Demonstrations