Deep Learning: Conv Nets

Assessment • Josiah Wang • Mathematics • University • 38 plays • Hard

10 questions
1.
MULTIPLE CHOICE
1 min • 1 pt
How is shift invariance achieved in ConvNets?
Answer explanation
The convolutional layers are shift equivariant. If an input image is shifted a little bit, the convolutional filters will produce the same response at the shifted location. The pooling layers are approximately shift invariant. For example, if an input image is shifted a little bit under a max pooling layer, the maximum value will still be the same. Overall, given an input image x, a shift S, a shift equivariant convolutional layer f, and a shift invariant pooling layer g, the ConvNet g(f(x)) is shift invariant because g(f(Sx)) = g(Sf(x)) = g(f(x)). (see Note02)
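A minimal sketch of this argument in PyTorch (not part of the original quiz; the circular padding and circular shift are my own simplification so that the equalities hold exactly rather than approximately):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# f: a random 3x3 convolution; circular padding keeps shift equivariance exact
weight = torch.randn(4, 1, 3, 3)

def f(img):
    return F.conv2d(F.pad(img, (1, 1, 1, 1), mode="circular"), weight)

# S: shift the image 2 pixels down and right; g: global max pooling
S = lambda t: torch.roll(t, shifts=(2, 2), dims=(2, 3))
g = lambda t: t.amax(dim=(2, 3))

x = torch.randn(1, 1, 16, 16)

# Equivariance of the conv layer: f(Sx) == S f(x)
print(torch.allclose(f(S(x)), S(f(x)), atol=1e-6))    # True

# Invariance of the pooled ConvNet: g(f(Sx)) == g(f(x))
print(torch.allclose(g(f(S(x))), g(f(x)), atol=1e-6))  # True
```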
2.
MULTIPLE SELECT
45 sec • 1 pt
Why do we include dropout in the network architecture?
Answer explanation
Dropout randomly removes connections between neurons in a neural network during training. It offers regularization by preventing units from relying on fixed combinations of other units' features (often termed co-adaptation), helping reduce overfitting. In addition, Monte Carlo dropout can be used as a Bayesian approximation to estimate uncertainties in neural networks (see https://arxiv.org/abs/1506.02142).
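A minimal PyTorch sketch (the model and layer sizes are illustrative only, not from the quiz) showing both uses: dropout as a regularizer during training, and Monte Carlo dropout at test time for uncertainty estimates:

```python
import torch
import torch.nn as nn

# A small illustrative classifier with dropout
model = nn.Sequential(
    nn.Linear(32, 64), nn.ReLU(),
    nn.Dropout(p=0.5),       # randomly zeroes 50% of activations during training
    nn.Linear(64, 10),
)
x = torch.randn(8, 32)

# Standard evaluation: dropout is disabled, the prediction is deterministic
model.eval()
deterministic_pred = model(x)

# Monte Carlo dropout: keep dropout active at test time and average several
# stochastic forward passes; the spread across passes estimates uncertainty
model.train()                 # keeps nn.Dropout sampling new masks
samples = torch.stack([model(x) for _ in range(20)])
mc_mean, mc_std = samples.mean(dim=0), samples.std(dim=0)
```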
3.
MULTIPLE CHOICE
1 min • 1 pt
Model Ensembling is:
4.
MULTIPLE SELECT
45 sec • 1 pt
Which of the following activation functions helps with the vanishing gradients problem?
Answer explanation
Hyperbolic tangent (tanh) activation functions have gradients in the range 0-1, with a significant proportion of the input space yielding very low gradients. As backpropagation involves the chain rule, repeatedly calculating the product of partial derivatives, the gradients passed back become vanishingly small. This does not occur in ReLUs, for example, as the gradient is either 0 or 1.
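A tiny numerical illustration (my own, not from the quiz) of why the product of tanh derivatives vanishes while ReLU's does not:

```python
import torch

# Derivative of each activation at a pre-activation of 2.5
z = torch.tensor(2.5, requires_grad=True)
torch.tanh(z).backward()
tanh_grad = z.grad.item()     # ~0.027: tanh saturates for |z| > ~2

z = torch.tensor(2.5, requires_grad=True)
torch.relu(z).backward()
relu_grad = z.grad.item()     # 1.0 for any positive input

# Backpropagation through a 20-layer chain multiplies such factors together
depth = 20
print(tanh_grad ** depth)     # ~1e-32: effectively zero, the gradient has vanished
print(relu_grad ** depth)     # 1.0: the learning signal survives
```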
5.
MULTIPLE CHOICE
45 sec • 1 pt
True or False. Two 3x3 convolutional layers have the same receptive field as one 5x5 convolutional layer, result in more non-linearities, and require fewer weights.
Answer explanation
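No explanation is given on the quiz page; a back-of-the-envelope check (my own sketch, assuming C input and C output channels and ignoring biases) supports answering True:

```python
# Receptive field of two stacked 3x3 convs (stride 1): 3 + (3 - 1) = 5,
# i.e. the same 5x5 window that a single 5x5 conv sees, but with two
# non-linearities instead of one.
receptive_field = 3 + (3 - 1)

# Weight counts for C input -> C output channels (biases ignored)
C = 64
two_3x3 = 2 * (3 * 3 * C * C)   # 73,728 weights
one_5x5 = 5 * 5 * C * C         # 102,400 weights
print(receptive_field, two_3x3, one_5x5)
```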
6.
MULTIPLE CHOICE
45 sec • 1 pt
What causes vanishing gradients?
Answer explanation
Vanishing gradients occur when the derivative of a function becomes very close to zero, meaning large changes in the input (X) cause only small changes in the output (Y). This is a problem because backpropagation computes the derivatives of the error with respect to the weights; if those derivatives are very small, the parameters barely change and the error barely decreases.
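A single-neuron illustration of this effect (my own, not from the quiz): a saturated sigmoid produces a large error but a near-zero derivative, so a gradient step barely moves the weight:

```python
import torch

w = torch.tensor(4.0, requires_grad=True)
x, target = torch.tensor(2.0), torch.tensor(0.0)

loss = (torch.sigmoid(w * x) - target) ** 2
loss.backward()

print(loss.item())                  # ~1.0: the error is large...
print(w.grad.item())                # ~1e-3: ...but dLoss/dw is nearly zero
print((w - 0.1 * w.grad).item())    # the updated weight has barely changed
```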
7.
MULTIPLE CHOICE
30 sec • 1 pt
True or False. SELUs are more likely to 'die' compared to ReLUs.
Answer explanation
ReLUs can 'die': when a unit is inactive (its input is below 0), it yields a gradient of 0, so no learning signal propagates through the deactivated unit, and the weights feeding it will not be updated by any signal that was intended to pass through it. SELUs combat this problem because they have non-zero gradients for all (finite) inputs, so they always yield a learning signal.
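A minimal check (my own sketch) comparing the two gradients at a negative pre-activation:

```python
import torch
import torch.nn.functional as F

# Gradient of each activation at a negative pre-activation
z = torch.tensor(-3.0, requires_grad=True)
F.relu(z).backward()
print(z.grad.item())   # 0.0: a deactivated ReLU passes back no learning signal

z = torch.tensor(-3.0, requires_grad=True)
F.selu(z).backward()
print(z.grad.item())   # ~0.09: SELU still passes back a small, non-zero gradient
```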
8.
MULTIPLE CHOICE
30 sec • 1 pt
Which of the following loss functions is best for classification?
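No explanation accompanies this question. For reference, cross-entropy is the standard loss for classification; a minimal PyTorch sketch (the numbers are illustrative only):

```python
import torch
import torch.nn.functional as F

# Raw scores (logits) for 2 examples over 3 classes, and the true class indices
logits = torch.tensor([[2.0, 0.5, -1.0],
                       [0.1, 1.5,  0.3]])
targets = torch.tensor([0, 2])

# Cross-entropy applies softmax to the logits, then negative log-likelihood
loss = F.cross_entropy(logits, targets)
print(loss.item())
```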
9.
MULTIPLE SELECT
1 min • 1 pt
Here we can see a figure of a single convolutional layer. Which of the following statements are true?
10.
DROPDOWN
30 sec • 1 pt
As the number of dimensions increases, data becomes more (a)