Intro to ML: Neural Networks Lecture 2 Part 2

Assessment • Josiah Wang • Mathematics, Computers, Fun • University • Hard

6 questions
1.
MULTIPLE CHOICE
2 mins • 1 pt
Is the following statement True or False? A neural network's weights can be randomly initialised, as gradient descent will always eventually find the optimal set of parameters and is therefore invariant to the initial set of parameters.
Answer explanation
The statement is False. A neural network's weights are indeed often randomly initialised, but gradient descent has no theoretical guarantee of finding the optimal set of parameters and often does not: it can get stuck in a local optimum, so the final parameters depend on the initialisation (recall local optima from the lecture slides).
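To see why the initialisation matters, here is a minimal sketch (not from the lecture; the toy objective and step size are arbitrary choices) running plain gradient descent from several random starting points on a non-convex function:

```python
import numpy as np

# Toy non-convex objective with two local minima (an assumed example):
# f(w) = w^4 - 3w^2 + w
f = lambda w: w**4 - 3 * w**2 + w
grad = lambda w: 4 * w**3 - 6 * w + 1

rng = np.random.default_rng(0)
for start in rng.uniform(-2.0, 2.0, size=5):   # random initialisations
    w = start
    for _ in range(1000):                      # plain gradient descent
        w -= 0.01 * grad(w)
    print(f"start={start:+.2f} -> w={w:+.3f}, f(w)={f(w):+.3f}")
# Different starting points settle in different local minima, so the
# final parameters are not invariant to the initialisation.
```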
2.
MULTIPLE CHOICE
1 min • 1 pt
Is the following statement True or False? L1 regularisation favours a few non-zero weights, whereas L2 regularisation favours small values around zero.
Answer explanation
L1 regularisation favours a sparse solution by encouraging only a few non-zero weights, while L2 regularisation favours small values spread around zero. In practice this means L1 tends to select only the most important features, while L2 spreads the importance across all features. The correct choice is therefore 'True'.
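A quick empirical sketch of this difference (assuming scikit-learn is available; the synthetic data and regularisation strengths are arbitrary choices):

```python
import numpy as np
from sklearn.linear_model import Lasso, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
true_w = np.zeros(20)
true_w[:3] = [3.0, -2.0, 1.5]          # only 3 features actually matter
y = X @ true_w + 0.1 * rng.normal(size=200)

lasso = Lasso(alpha=0.1).fit(X, y)      # L1 regularisation
ridge = Ridge(alpha=10.0).fit(X, y)     # L2 regularisation

print("L1 non-zero weights:", np.sum(np.abs(lasso.coef_) > 1e-6))
print("L2 non-zero weights:", np.sum(np.abs(ridge.coef_) > 1e-6))
# Typically L1 zeroes out most of the irrelevant weights (sparse
# solution), while L2 keeps all weights non-zero but small.
```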
3.
MULTIPLE SELECT
1 min • 1 pt
Which of the following statements about dropout are correct?
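The answer options are not reproduced here, but as background, a minimal sketch of "inverted" dropout, one common way the technique is implemented (all names below are illustrative): units are dropped with probability 1 − p during training and the survivors are rescaled by 1/p, so the layer is the identity at test time.

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(x, p_keep=0.8, training=True):
    """Inverted dropout: zero units with prob 1 - p_keep during training."""
    if not training:
        return x                          # identity at test time
    mask = rng.random(x.shape) < p_keep   # Bernoulli keep-mask per unit
    return x * mask / p_keep              # rescale to preserve expectation

a = np.ones(10)
print(dropout(a))                  # some zeros; survivors scaled to 1/0.8
print(dropout(a, training=False))  # unchanged at test time
```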
4.
MULTIPLE CHOICE
2 mins • 1 pt
A dense (fully connected) layer in a multi-layer perceptron has 300 input values and 200 output values. Assuming no bias, how many parameters does the layer contain?
Answer explanation
With no bias term, each of the 300 inputs is connected to each of the 200 outputs, giving 300 × 200 = 60,000 weights to be learned during training.
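This can be checked directly; a minimal sketch assuming PyTorch is installed:

```python
import torch.nn as nn

layer = nn.Linear(300, 200, bias=False)   # dense layer, no bias
n_params = sum(p.numel() for p in layer.parameters())
print(n_params)                           # 60000 = 300 * 200 weights
# With bias=True there would be 200 extra parameters (60200 in total).
```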
5.
MULTIPLE CHOICE
1 min • 1 pt
If a neural network is overfitting, which of the following would not help?
Answer explanation
Introducing dropout, reducing the number of layers, and increasing the size of the training data are all effective ways to reduce overfitting. Increasing the learning rate, however, does not address overfitting: it only changes the optimiser's step size, and a larger step is more likely to overshoot good solutions than to improve generalisation. Increasing the learning rate is therefore the correct choice.
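A sketch of how two of the helpful remedies look in code (assuming PyTorch; the architecture and dropout rate are arbitrary choices for illustration):

```python
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(300, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),     # dropout regularises by randomly zeroing units
    nn.Linear(128, 10),    # fewer/smaller layers reduce model capacity
)
# The third remedy, more training data, is a data-collection step; the
# learning rate, by contrast, only scales the optimiser's update and
# provides no regularisation.
```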
6.
MULTIPLE SELECT
2 mins • 1 pt
Which of the following statements are True?