Intro to ML: Neural Networks Lecture 2 Part 2

Assessment

Created by

Josiah Wang

Mathematics, Computers, Fun

University

Hard

6 questions

1.

MULTIPLE CHOICE

2 mins • 1 pt

Is the following statement True or False? A neural network's weights can be randomly initialised because gradient descent will always eventually find the optimal set of parameters, making the result invariant to the initial parameters.

Answer explanation

A neural network's weights are indeed often randomly initialised. What is wrong is the claim that gradient descent will always find the optimal set of parameters: it has no theoretical guarantee of doing so, and in practice it often does not (recall local optima from the lecture slides).
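
As a minimal sketch of why initialisation matters (a toy one-dimensional loss invented for illustration, not from the lecture), plain gradient descent started from different random points can settle in different local minima:

```python
# Toy non-convex loss with two local minima: f(w) = w^4 - 3w^2 + w.
import numpy as np

def f(w):
    return w**4 - 3 * w**2 + w

def grad_f(w):
    return 4 * w**3 - 6 * w + 1

def gradient_descent(w0, lr=0.01, steps=500):
    w = w0
    for _ in range(steps):
        w -= lr * grad_f(w)                  # plain gradient step
    return w

rng = np.random.default_rng(0)
for w0 in rng.uniform(-2.0, 2.0, size=4):    # four random initialisations
    w_star = gradient_descent(w0)
    print(f"start {w0:+.2f} -> end {w_star:+.2f}, loss {f(w_star):+.3f}")
```

Depending on where each run starts, it can end in the deeper minimum near w ≈ -1.3 or the shallower one near w ≈ 1.1, so the final loss is not invariant to the initial point.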

2.

MULTIPLE CHOICE

1 min • 1 pt

Is the following statement True or False? L1 regularisation favours few non-zero weights, whereas L2 regularisation favours small values around zero.

Answer explanation

L1 regularisation favours a sparse solution by encouraging a few non-zero weights, while L2 regularisation favours small values around zero. In practice, this means L1 regularisation tends to select only the most important features, while L2 regularisation spreads the weight magnitude across all features. The correct choice is therefore 'True'.
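
A minimal sketch of this effect, using synthetic data and scikit-learn's Lasso (L1) and Ridge (L2) models rather than anything from the quiz: only the L1 fit drives most weights to exactly zero.

```python
# Fit the same random regression problem with an L1 penalty (Lasso)
# and an L2 penalty (Ridge) and compare how many weights end up zero.
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
true_w = np.zeros(20)
true_w[:3] = [2.0, -1.5, 1.0]          # only 3 of 20 features are informative
y = X @ true_w + 0.1 * rng.normal(size=100)

l1 = Lasso(alpha=0.1).fit(X, y)
l2 = Ridge(alpha=0.1).fit(X, y)

print("L1 zero weights:", np.sum(l1.coef_ == 0))  # most of the 20 are exactly 0
print("L2 zero weights:", np.sum(l2.coef_ == 0))  # none: weights shrink but stay non-zero
```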

3.

MULTIPLE SELECT

1 min • 1 pt

Which of the following statements about dropout are correct?
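
For reference, a minimal sketch of the standard inverted-dropout formulation (an assumed implementation, since the answer options are not shown here): during training each unit is zeroed with probability p and the survivors are rescaled by 1/(1 - p); at test time the layer is the identity.

```python
import numpy as np

def dropout(a, p=0.5, training=True):
    if not training:
        return a                         # identity at test time
    rng = np.random.default_rng()
    mask = rng.random(a.shape) >= p      # keep each unit with probability 1 - p
    return a * mask / (1.0 - p)          # rescale to preserve the expected value

a = np.ones(8)
print(dropout(a, training=True))         # roughly half zeros, survivors scaled to 2.0
print(dropout(a, training=False))        # unchanged
```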

4.

MULTIPLE CHOICE

2 mins • 1 pt

A dense multi-layer perceptron layer has 300 input values and 200 output values. Assuming no bias, how many parameters does the layer contain?

Answer explanation

Since there is no bias, each of the 300 inputs is connected to each of the 200 outputs by exactly one weight, so the layer contains 300 × 200 = 60,000 parameters, all of which are learned during training.
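
The count is easy to verify in code. A minimal sketch assuming a PyTorch implementation (not part of the quiz): nn.Linear(300, 200, bias=False) holds a single 200 × 300 weight matrix.

```python
import torch.nn as nn

layer = nn.Linear(in_features=300, out_features=200, bias=False)
n_params = sum(p.numel() for p in layer.parameters())
print(n_params)            # 60000
print(layer.weight.shape)  # torch.Size([200, 300])
```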

5.

MULTIPLE CHOICE

1 min • 1 pt

If a neural network is overfitting, which of the following would not help?

Answer explanation

Introducing dropout, reducing the number of layers in the model, and increasing the size of the training data are all effective ways to address overfitting. Increasing the learning rate, however, does not reduce overfitting: the learning rate controls the optimisation step size, and raising it mainly makes the model more prone to overshooting good solutions rather than helping it generalise. It is therefore not a suitable way to combat overfitting.
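
As a minimal sketch (an architecture assumed for illustration, not from the quiz), two of the effective remedies can be written directly into the model definition: a Dropout layer and a deliberately shallow stack.

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(300, 200),
    nn.ReLU(),
    nn.Dropout(p=0.5),     # randomly zeroes activations during training
    nn.Linear(200, 10),    # shallow stack: fewer layers, fewer parameters
)
model.train()  # dropout active while fitting
model.eval()   # dropout disabled at evaluation time
```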

6.

MULTIPLE SELECT

2 mins • 1 pt

Which of the following statements are True?
