Deep Learning
Summary
This course provides an introduction to deep learning on modern Intel® architecture. Deep learning has gained significant attention in the industry by achieving state-of-the-art results in computer vision and natural language processing.
By the end of this course, students will have a firm understanding of:
- Techniques, terminology, and mathematics of deep learning
- Fundamental neural network architectures: feedforward networks, convolutional networks, and recurrent networks
- How to appropriately build and train these models
- Various deep learning applications
- How to use pretrained models for best results
The course is structured around 12 weeks of lectures and exercises. Each week requires three hours to complete.
Week 1
This class recaps machine learning. Students who are already experts in machine learning can skip to the next week’s class.
Week 2
The inspiration for neural networks comes from biology. This class teaches students the basic nomenclature of deep learning: what a neuron is (and its similarity to a biological neuron), the architecture of a feedforward neural network, activation functions, and weights.
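At its core, a single artificial neuron is just a weighted sum of its inputs passed through an activation function. A minimal NumPy sketch (the input, weight, and bias values here are purely illustrative):

```python
import numpy as np

def sigmoid(z):
    # Squashes any real number into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

# Illustrative values: three inputs feeding one neuron.
x = np.array([0.5, -1.2, 3.0])   # inputs
w = np.array([0.4, 0.6, -0.1])   # one weight per input
b = 0.1                          # bias

# The neuron computes a weighted sum of its inputs plus a bias,
# then applies a nonlinear activation function.
output = sigmoid(np.dot(w, x) + b)
print(output)
```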
Week 3
This class builds on the concepts learned in Week 2: how a neural network computes an output from an input in a single forward pass, and how the network is trained on data. Learn how to calculate the loss and adjust the weights using a technique called backpropagation. Different types of activation functions are also introduced.
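As a concrete illustration of one forward pass followed by one backpropagation update, here is a minimal sketch for a single sigmoid neuron with a squared-error loss (all values are made up for the example):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([0.5, -1.2, 3.0])   # illustrative input
w = np.array([0.4, 0.6, -0.1])   # illustrative weights
b, target, lr = 0.1, 1.0, 0.5    # bias, desired output, learning rate

# Forward pass: compute the prediction and the loss.
z = np.dot(w, x) + b
y = sigmoid(z)
loss = 0.5 * (y - target) ** 2

# Backward pass: chain rule from the loss back to the weights.
dloss_dy = y - target            # d(loss)/d(y)
dy_dz = y * (1.0 - y)            # derivative of the sigmoid
grad_w = dloss_dy * dy_dz * x    # d(loss)/d(w)
grad_b = dloss_dy * dy_dz        # d(loss)/d(b)

# Gradient descent step: adjust weights against the gradient.
w -= lr * grad_w
b -= lr * grad_b
```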
Week 4
Learn techniques to improve training speed and accuracy. Identify the pros and cons of using gradient descent, stochastic gradient descent, and mini-batches. With the foundational knowledge of neural networks covered in Weeks 2 through 4, learn how to build a basic neural network using Keras* with TensorFlow* as the backend.
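A basic fully connected network in Keras* with TensorFlow* as the backend might look like the following sketch; the layer sizes, hyperparameters, and the randomly generated training arrays are placeholders:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.optimizers import SGD

# Placeholder data: 1000 samples with 20 features, 10 classes.
x_train = np.random.rand(1000, 20)
y_train = np.random.randint(0, 10, size=(1000,))

model = Sequential([
    Dense(64, activation='relu', input_shape=(20,)),
    Dense(10, activation='softmax'),
])

# Stochastic gradient descent; batch_size sets the mini-batch size.
model.compile(optimizer=SGD(learning_rate=0.01),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

model.fit(x_train, y_train, epochs=5, batch_size=32)
```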
Week 5
How can you prevent overfitting (regularization) in a neural network? In this class, learn about penalized cost functions, dropout, early stopping, momentum, and optimizers such as AdaGrad and RMSProp that help regularize a neural network.
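These techniques map directly onto Keras building blocks. A sketch, with illustrative layer sizes, hyperparameters, and placeholder data:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Dropout
from tensorflow.keras.regularizers import l2
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import RMSprop

# Placeholder data: 1000 samples with 20 features, 10 classes.
x_train = np.random.rand(1000, 20)
y_train = np.random.randint(0, 10, size=(1000,))

model = Sequential([
    # An L2 penalty adds a term to the cost function that discourages
    # large weights (a penalized cost function).
    Dense(64, activation='relu', input_shape=(20,),
          kernel_regularizer=l2(0.01)),
    # Dropout randomly zeroes a fraction of activations during training.
    Dropout(0.5),
    Dense(10, activation='softmax'),
])

model.compile(optimizer=RMSprop(learning_rate=0.001),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])

# Early stopping halts training when validation loss stops improving.
early_stop = EarlyStopping(monitor='val_loss', patience=3)
model.fit(x_train, y_train, validation_split=0.2,
          epochs=20, callbacks=[early_stop])
```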
Week 6
Learn about convolutional neural networks (CNNs) and compare them to the fully connected neural networks already introduced. Learn how to build a CNN by choosing the grid size, padding, stride, depth, and pooling.
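Each of these design choices appears as an explicit argument in Keras. A minimal sketch, assuming 28x28 grayscale inputs:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

model = Sequential([
    # 32 filters (depth), a 3x3 grid, stride 1; 'same' padding
    # keeps the spatial size of the output equal to the input.
    Conv2D(32, kernel_size=(3, 3), strides=(1, 1), padding='same',
           activation='relu', input_shape=(28, 28, 1)),
    # Pooling downsamples each feature map by taking local maxima.
    MaxPooling2D(pool_size=(2, 2)),
    Flatten(),
    Dense(10, activation='softmax'),
])
model.summary()
```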
Week 7
Using the LeNet-5* topology, learn how to apply all the CNN concepts learned in the previous lesson to the MNIST (Modified National Institute of Standards and Technology) dataset of handwritten digits. With a trained neural network, see how the primitive features learned in the first few layers generalize across image classification tasks, and how transfer learning helps.
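A LeNet-5-style model on MNIST might be sketched as follows; this follows the spirit of the topology rather than reproducing it exactly (the original used different connectivity and output layers):

```python
from tensorflow.keras.datasets import mnist
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv2D, AveragePooling2D, Flatten, Dense

# MNIST: 60,000 training images of handwritten digits, 28x28 grayscale.
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train = x_train.reshape(-1, 28, 28, 1) / 255.0
x_test = x_test.reshape(-1, 28, 28, 1) / 255.0

model = Sequential([
    Conv2D(6, (5, 5), activation='tanh', padding='same',
           input_shape=(28, 28, 1)),
    AveragePooling2D((2, 2)),
    Conv2D(16, (5, 5), activation='tanh'),
    AveragePooling2D((2, 2)),
    Flatten(),
    Dense(120, activation='tanh'),
    Dense(84, activation='tanh'),
    Dense(10, activation='softmax'),
])
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))
```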
Week 8
The deep learning literature describes many image classification topologies, such as AlexNet, VGG-16 and VGG-19, Inception, and ResNet. This week, learn how these topologies are designed and the usage scenarios for each.
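Several of these topologies ship as pretrained models in Keras, which makes it easy to inspect their designs or reuse their weights. A short sketch (the first call downloads the ImageNet weights):

```python
from tensorflow.keras.applications import VGG16, ResNet50

# VGG-16: 16 weight layers built from stacked 3x3 convolutions.
vgg = VGG16(weights='imagenet')
vgg.summary()

# ResNet-50 adds shortcut (residual) connections, which make it
# practical to train much deeper networks.
resnet = ResNet50(weights='imagenet')
```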
Week 9
One practical obstacle to building image classifiers is obtaining labeled training data. Explore how to make the most of the available labeled data using data augmentation, and implement it using Keras*.
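Keras provides `ImageDataGenerator` for this kind of on-the-fly augmentation; the specific transform ranges below are illustrative:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Each parameter defines a random transform applied at training time,
# so every epoch sees slightly different versions of the same images.
datagen = ImageDataGenerator(
    rotation_range=15,        # rotate up to 15 degrees
    width_shift_range=0.1,    # shift horizontally up to 10%
    height_shift_range=0.1,   # shift vertically up to 10%
    horizontal_flip=True,     # mirror images left-right
    zoom_range=0.1,
)

# Typical usage with image arrays already in memory
# (x_train and y_train are assumed to exist):
# model.fit(datagen.flow(x_train, y_train, batch_size=32), epochs=10)
```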
Week 10
So far, we have used images as inputs to neural networks. Image values are essentially numbers (grayscale or RGB). But how do we work with text? How can we build a neural network to work with pieces of text of variable length? How do we convert words into numerical values? Learn about recurrent neural networks (RNNs) and their application to natural language processing (NLP).
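A common Keras recipe for turning variable-length text into numeric input is tokenizing, padding, and an embedding layer feeding an RNN. A minimal sketch with made-up sentences and illustrative sizes:

```python
from tensorflow.keras.preprocessing.text import Tokenizer
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, SimpleRNN, Dense

texts = ['the movie was great', 'the plot made no sense at all']

# Map each word to an integer index.
tokenizer = Tokenizer(num_words=1000)
tokenizer.fit_on_texts(texts)
sequences = tokenizer.texts_to_sequences(texts)

# Pad so every sequence has the same length.
padded = pad_sequences(sequences, maxlen=10)

model = Sequential([
    Embedding(input_dim=1000, output_dim=16),  # word index -> dense vector
    SimpleRNN(32),                             # processes the sequence step by step
    Dense(1, activation='sigmoid'),            # e.g., a sentiment score
])
```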
Week 11
Learn more advanced topics for developing an RNN, and how the concept of recurrence can be used to solve the issues of variable sequence length and word ordering. Take out your notebook and pencil and work through the math of RNNs.
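The core recurrence is h_t = tanh(W_x x_t + W_h h_(t-1) + b): the same weights are applied at every time step, and the hidden state carries information forward. Written out in NumPy so you can follow the math (all shapes and values are illustrative):

```python
import numpy as np

T, input_dim, hidden_dim = 5, 4, 3          # sequence length and sizes
xs = np.random.randn(T, input_dim)          # one input vector per time step
W_x = np.random.randn(hidden_dim, input_dim)
W_h = np.random.randn(hidden_dim, hidden_dim)
b = np.zeros(hidden_dim)

h = np.zeros(hidden_dim)                    # initial hidden state
for x_t in xs:
    # The same weights are reused at every step; the hidden state
    # carries information forward through the sequence.
    h = np.tanh(W_x @ x_t + W_h @ h + b)
print(h)
```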
Week 12
Standard RNNs have poor memory capabilities. In NLP, it is important to have a structure that can carry some of the signal forward over many steps. Learn about long short-term memory (LSTM), which addresses this problem.
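In Keras, an LSTM layer is a drop-in replacement for a simple RNN layer; its gating is what lets the signal persist over long sequences. A sketch, reusing the illustrative sizes from the Week 10 example:

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

model = Sequential([
    Embedding(input_dim=1000, output_dim=16),
    # The LSTM's input, forget, and output gates control what is written
    # to and kept in the cell state, preserving signal over many steps.
    LSTM(32),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
```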