10 questions
How is shift invariance achieved in ConvNets?
Through convolutional equivariance
Through convolutional equivariance and approximate translation invariance with pooling
Through convolutional equivariance and exact pooling invariance
They exist in a higher dimensional invariant space
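A minimal sketch of the idea behind the correct option (PyTorch assumed; sizes are arbitrary): the convolution's output shifts along with the input (equivariance), and pooling over the feature map makes the final response approximately shift invariant.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 1, 8, 8)                 # a small single-channel image
x_shift = torch.roll(x, shifts=1, dims=-1)  # the same image shifted one pixel
w = torch.randn(1, 1, 3, 3)                 # one 3x3 kernel

y = F.conv2d(x, w, padding=1)
y_shift = F.conv2d(x_shift, w, padding=1)

# Equivariance: the feature map shifts with the input
# (borders differ because of padding, so compare the interior).
print(torch.allclose(torch.roll(y, 1, dims=-1)[..., 2:-2], y_shift[..., 2:-2]))

# Approximate invariance: pooling the whole map down to a scalar gives
# (nearly) the same response for both inputs.
print(y.amax().item(), y_shift.amax().item())
```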
Why do we include dropout in the network architecture?
Offers regularization and helps build deeper networks
Can help with uncertainty estimation via Monte Carlo sampling at test time
Increases the capacity of the model
Prevents vanishing gradients
None of these
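Beyond regularization, the Monte Carlo option refers to keeping dropout active at test time and averaging several stochastic forward passes. A rough sketch (PyTorch assumed; the model and sample count are illustrative):

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                      nn.Dropout(p=0.5), nn.Linear(64, 1))
x = torch.randn(1, 10)

model.train()  # keeps dropout active at inference time
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])

# Mean as the prediction, spread as a rough uncertainty estimate.
print(samples.mean(dim=0).item(), samples.std(dim=0).item())
```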
Model ensembling is:
Having multiple instances of the network(s) and averaging their responses together
Having a single instance of the network and passing the input through multiple times, each time altered in a small way
The perfect string quartet
None of the above
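A sketch of the first option (PyTorch assumed; the make_model factory is hypothetical, and in practice each member would be trained separately):

```python
import torch
import torch.nn as nn

def make_model():
    # Hypothetical factory; each call gives an independently initialized network.
    return nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))

ensemble = [make_model() for _ in range(5)]
x = torch.randn(4, 10)

with torch.no_grad():
    # Average the members' (softmaxed) responses.
    probs = torch.stack([m(x).softmax(dim=-1) for m in ensemble]).mean(dim=0)
print(probs.shape)  # torch.Size([4, 3])
```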
Which of the following activation functions helps with the vanishing gradients problem?
Sigmoid
Tanh
ReLU
SELU
Softmax
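The question hinges on how each activation's local derivative behaves for large inputs: sigmoid and tanh saturate, while ReLU and SELU pass gradient through. A quick check (PyTorch assumed):

```python
import torch

for name, f in [("sigmoid", torch.sigmoid), ("tanh", torch.tanh),
                ("relu", torch.relu), ("selu", torch.selu)]:
    x = torch.tensor(10.0, requires_grad=True)
    f(x).backward()
    print(name, x.grad.item())
# sigmoid and tanh gradients are ~0 at x = 10 (saturation);
# relu and selu keep a gradient of ~1, which mitigates vanishing gradients.
```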
True or False. Two 3x3 convolutional layers have the same receptive field as one 5x5 convolutional layer, introduce more non-linearities, and require fewer weights.
True
False
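The arithmetic behind the answer, for C input and C output channels (biases ignored; C = 64 is just an example):

```python
C = 64  # example channel count, assumed equal for input and output

two_3x3 = 2 * (3 * 3 * C * C)  # two stacked 3x3 layers: 18 * C^2 weights
one_5x5 = 5 * 5 * C * C        # one 5x5 layer:          25 * C^2 weights
print(two_3x3, one_5x5)        # 73728 < 102400

# Receptive field: a 3x3 conv on top of a 3x3 conv sees a 5x5 input window,
# and the extra layer inserts one more non-linearity between them.
```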
What causes vanishing gradients?
The Wizard Merlin
Large changes in X cause small changes in Y
Large changes in Y cause small changes in X
ReLU activations 'dying'
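The saturation option describes the mechanism: when large changes in the input X produce only small changes in the output Y, the local derivative is small, and stacking many such layers multiplies these small factors together. A sketch (PyTorch assumed; depth and width are arbitrary):

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layers = []
for _ in range(20):  # a deep stack of saturating layers
    layers += [nn.Linear(32, 32), nn.Sigmoid()]
net = nn.Sequential(*layers)

x = torch.randn(1, 32, requires_grad=True)
net(x).sum().backward()
print(x.grad.norm().item())  # tiny: each sigmoid contributes a derivative <= 0.25
```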
True or False. SELUs are more likely to 'die' than ReLUs.
True
False
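Why the answer is False: a ReLU unit stuck with negative pre-activations receives zero gradient and can stay 'dead', while SELU keeps a non-zero gradient on the negative side. A quick check (PyTorch assumed):

```python
import torch

x = torch.tensor(-3.0, requires_grad=True)
torch.relu(x).backward()
print(x.grad.item())  # 0.0: no learning signal, the unit can stay dead

x.grad = None
torch.selu(x).backward()
print(x.grad.item())  # small but non-zero: the unit can still recover
```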
Which of the following loss functions is best for classification?
L1
L2
Manhattan Distance
Negative Log-Likelihood
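A sketch of the correct choice (PyTorch assumed): negative log-likelihood over class log-probabilities, which applied to softmax outputs is exactly the usual cross-entropy classification loss.

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[2.0, 0.5, -1.0]])  # raw scores for 3 classes
target = torch.tensor([0])                 # index of the true class

loss = F.nll_loss(F.log_softmax(logits, dim=-1), target)
print(loss.item())  # same value as F.cross_entropy(logits, target)
```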
Here we can see a figure of a single convolutional layer. Which of the following statements are true?
The kernel size is 3
From this image you cannot determine the amount of padding
The number of learnable filters in the layer is 7
The number of learnable filters in the layer is 96
The amount of padding is 1
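Without reproducing the figure here, the padding statements come down to the standard size relation out = (in + 2p - k) / s + 1: once the input/output sizes, kernel size k, and stride s can be read off the figure, the padding p is determined. A worked check (the sizes below are assumptions for illustration):

```python
def conv_out(size_in, kernel, stride=1, padding=0):
    # Standard convolution output-size formula.
    return (size_in + 2 * padding - kernel) // stride + 1

# e.g. a 3x3 kernel with padding 1 and stride 1 preserves spatial size,
# and the number of learnable filters (such as 96) is the output channel count.
print(conv_out(32, kernel=3, padding=1))  # 32
```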
As the number of dimensions increases, data becomes more (a)
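The blank refers to the curse of dimensionality: with a fixed sample budget, data becomes sparser as the dimensionality grows. A small demonstration (PyTorch assumed) showing nearest-neighbour distances increasing with dimension:

```python
import torch

torch.manual_seed(0)
n = 200  # fixed number of samples

for d in [2, 10, 100, 1000]:
    X = torch.rand(n, d)               # points in the d-dimensional unit hypercube
    dists = torch.cdist(X, X)          # pairwise Euclidean distances
    dists.fill_diagonal_(float("inf"))
    # Mean nearest-neighbour distance grows with d: the space empties out.
    print(d, dists.min(dim=1).values.mean().item())
```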