Intro to ML: Neural Networks Lecture 1 Part 2

Assessment

Quiz

Created by Josiah Wang

Mathematics, Computers, Fun

University

24 plays

Hard

6 questions


1.

MULTIPLE CHOICE QUESTION

3 mins • 1 pt

A neuron with 3 inputs has weight vector [0.2, -0.1, 0.1]^T, a bias of b = 0 and a ReLU activation function. If the input vector is X = [0.2, 0.4, 0.2]^T, then what is the output value of the neuron? 

0.2

0.1

0.02

-0.1

Answer explanation

To find the output value of the neuron, we first calculate the dot product of the weight vector and input vector: (0.2 * 0.2) + (-0.1 * 0.4) + (0.1 * 0.2) = 0.02. Since the bias is 0, the pre-activation value is also 0.02. With a ReLU activation function, the output is the maximum of 0 and the pre-activation value, which is 0.02.
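As a quick check, the same computation can be reproduced in a few lines of NumPy (a minimal sketch; the array values come straight from the question):

import numpy as np

w = np.array([0.2, -0.1, 0.1])   # weight vector
x = np.array([0.2, 0.4, 0.2])    # input vector
b = 0.0                          # bias

z = w @ x + b                # pre-activation: 0.04 - 0.04 + 0.02 = 0.02
output = np.maximum(0.0, z)  # ReLU keeps positive values unchanged
print(output)                # ~0.02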

2.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

A neural network consisting of only linear activations is underfitting a dataset. A student only has time to change one feature of the network. Which of the following is the best option for increasing the accuracy of the model?

Add more layers to the network

Train for longer

Introduce non-linear activations

Acquire more data

None of these

Answer explanation

Introducing non-linear activations is the best option for increasing the accuracy of the model. A neural network with only linear activations is limited in its ability to learn complex patterns. Non-linear activations, such as ReLU or sigmoid, allow the network to learn more complex relationships in the data, thus improving its performance and reducing underfitting.
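The point about linear activations can also be verified directly: composing two linear layers is exactly one linear layer, so adding depth alone cannot fix the underfitting. A minimal NumPy sketch with arbitrary illustrative weights:

import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(4, 3))   # first linear layer
W2 = rng.normal(size=(2, 4))   # second linear layer
x = rng.normal(size=3)

y_deep = W2 @ (W1 @ x)         # two stacked linear layers...
y_single = (W2 @ W1) @ x       # ...equal one linear layer with weights W2 @ W1
print(np.allclose(y_deep, y_single))   # True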

3.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

Which is the best output configuration for a model tasked with predicting a patient's age given their brain MRI image?

One neuron with sigmoid

Multiple neurons with softmax

One neuron with linear output

Multiple neurons with Tanh

Answer explanation

A model predicting a patient's age from their brain MRI image requires a continuous output, as age is a continuous variable. One neuron with a linear output is the best choice, as it can produce a continuous output without any activation function constraints, unlike sigmoid, softmax, or Tanh.
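For illustration, a regression head of this kind might look like the following PyTorch-style sketch (the feature and hidden-layer sizes are assumptions, not values from the lecture):

import torch.nn as nn

# A backbone producing a feature vector, followed by a single linear
# output neuron with no activation, so the prediction can be any real
# number (e.g. an age in years).
age_regressor = nn.Sequential(
    nn.Linear(128, 64),   # assumed feature size of 128
    nn.ReLU(),
    nn.Linear(64, 1),     # one neuron, linear output
)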

4.

MULTIPLE CHOICE QUESTION

1 min • 1 pt

Which is the best output configuration for a model tasked with classifying a person's mood (e.g. angry, happy, sad) by the tone of their voice?

One neuron with sigmoid

Multiple neurons with softmax

One neuron with linear output

None of these

Answer explanation

The best output configuration for a model classifying a person's mood by their voice tone would be 'Multiple neurons with softmax'. This is because the softmax function is used for multi-class classification problems. It gives the probability of each class, and the probabilities sum to 1, which is ideal for mood classification since the mood can be angry, happy, sad, etc.
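By contrast, a mood classifier needs one output neuron per class with a softmax over them. A minimal sketch (the feature size and the set of moods below are assumptions for illustration):

import torch
import torch.nn as nn

moods = ["angry", "happy", "sad", "neutral"]   # assumed class set
classifier = nn.Sequential(
    nn.Linear(128, len(moods)),   # one neuron per class
    nn.Softmax(dim=-1),           # probabilities that sum to 1
)

features = torch.randn(1, 128)
probs = classifier(features)
print(probs.sum())   # tensor(1.) up to floating-point error

In practice the softmax is often folded into the loss (e.g. nn.CrossEntropyLoss expects raw logits), but the output configuration is the same.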

5.

MULTIPLE CHOICE QUESTION

2 mins • 1 pt

[Figure: plotted derivatives of common activation functions]

The derivatives for common activation functions are plotted. Pair the label with the correct activation function.

P: purple dotted line

B: blue solid line

G: green dashed line

R: red solid line

P: step, B: Tanh, G: sigmoid, R: ReLU

P: ReLU, B: Tanh, G: sigmoid, R: Step

P: step, B: sigmoid, G: Tanh, R: ReLU

P: Tanh, B: step, G: sigmoid, R: ReLU
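The original figure is not reproduced here, but the four derivatives can be regenerated with a short matplotlib sketch (the colours and line styles are assumptions; match them against the plot you are given):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-4, 4, 400)
sigmoid = 1 / (1 + np.exp(-x))

plt.plot(x, np.zeros_like(x), label="step derivative (0 almost everywhere)")
plt.plot(x, sigmoid * (1 - sigmoid), label="sigmoid derivative (peaks at 0.25)")
plt.plot(x, 1 - np.tanh(x) ** 2, label="tanh derivative (peaks at 1)")
plt.plot(x, (x > 0).astype(float), label="ReLU derivative (0 for x<0, 1 for x>0)")
plt.legend()
plt.show()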

6.

MULTIPLE CHOICE QUESTION

5 mins • 1 pt

[Figure: feedforward network diagram with its weights]

The following diagram represents a feedforward neural network with its corresponding weights. Each layer has ReLU activations. The weight connecting node i to node j is ω_ij. Calculate the output from one forward pass through the network with the input x̄ = [1, 0]^T.

[0,1]

[0,4]

[-6,6]

[6,6]

None of the above
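The weight diagram is not reproduced here, so the exact arithmetic cannot be shown, but the forward pass the question asks for follows the same pattern in every case: multiply by the layer's weight matrix, apply ReLU, and repeat for each layer. A sketch with placeholder weights (W1 and W2 below are hypothetical, not the ω_ij values from the diagram):

import numpy as np

def relu(z):
    return np.maximum(0.0, z)

x = np.array([1.0, 0.0])          # input from the question

# Placeholder weight matrices; substitute the ω_ij values from the diagram.
W1 = np.array([[1.0, 2.0],
               [3.0, 4.0]])
W2 = np.array([[5.0, 6.0],
               [7.0, 8.0]])

h = relu(W1 @ x)                  # hidden layer with ReLU
y = relu(W2 @ h)                  # output layer with ReLU
print(y)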