15 questions
You are given reviews of movies marked as positive, negative, and neutral. Classifying reviews of a new movie is an example of
Supervised Learning
Unsupervised Learning
Reinforcement Learning
None of these
The selling price of a house depends on many factors. For example, it depends on
the number of bedrooms, number of kitchens, number of bathrooms, the year the house was
built, and the square footage of the lot. Given these factors, predicting the selling price of
the house is an example of a ____________ task.
Binary Classification
Multilabel Classification
Simple Linear Regression
Multiple Linear Regression
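The house-price question above can be sketched in code: several numeric features, one continuous target, fitted jointly by least squares. All feature values and prices below are invented for illustration.

```python
import numpy as np

# Hypothetical data. Columns: bedrooms, bathrooms, year built, lot sq. footage.
X = np.array([
    [3, 2, 1990, 5000],
    [4, 3, 2005, 6500],
    [2, 1, 1975, 4000],
    [5, 3, 2015, 8000],
    [3, 2, 2000, 5500],
], dtype=float)
y = np.array([300_000, 450_000, 220_000, 600_000, 340_000], dtype=float)

# Multiple linear regression: price ≈ X @ w + b, solved by least squares.
A = np.hstack([X, np.ones((X.shape[0], 1))])   # append an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = A @ coef
print(pred.shape)  # one continuous prediction per house
```

Because the target is a continuous value predicted from multiple inputs, this is multiple linear regression rather than classification.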
Regarding bias and variance, which of the following statements are true? (Here ‘high’ and ‘low’ are relative to the ideal model.)
(i). Models which overfit are more likely to have high bias
(ii). Models which overfit are more likely to have low bias
(iii). Models which overfit are more likely to have high variance
(iv). Models which overfit are more likely to have low variance
(i) and (ii)
(ii) and (iii)
(iii) and (iv)
None of these
State whether the statements are True or False.
Statement A: When the hypothesis space is richer, overfitting is more likely.
Statement B: When the feature space is larger, overfitting is more likely.
False, False
True, False
True, True
False, True
What is the purpose of restricting the hypothesis space in machine learning?
It can be easier to search
It may avoid overfitting, since restricted hypotheses are usually simpler (e.g. a linear or low-order decision surface)
Both of the above
None of the above
Suppose you find that your linear regression model is underfitting the data. In such a situation, which of the following options would you consider?
You will add more features
You will start introducing higher degree features
You will remove some features
None of the above.
Consider a simple linear regression model with one independent variable (X). The output variable is Y. The equation is Y = aX + b, where a is the slope and b is the intercept. If we change the input variable (X) by 1 unit, by how much will the output variable (Y) change?
1 unit
By slope
By intercept
None
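The answer above follows directly from the equation: a unit increase in X changes Y by exactly the slope a, regardless of the intercept. A minimal sketch, with arbitrary values for a and b:

```python
# In Y = a*X + b, the intercept cancels when taking a difference,
# so a unit change in X moves Y by the slope a.
a, b = 3.0, 7.0

def predict(x):
    return a * x + b

x = 5.0
delta = predict(x + 1) - predict(x)  # change in Y for a 1-unit change in X
print(delta)  # equals the slope a
```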
You have generated data from a degree-3 polynomial with some noise. What do you expect of a model trained on this data using a degree-5 polynomial as the function class?
Low bias, high variance
High bias, low variance.
Low bias, low variance.
High bias, high variance.
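The scenario above can be sketched directly: data from a noisy cubic, fitted with both a matched degree-3 model and an over-parameterised degree-5 model. The richer class can represent the true curve (low bias) but its extra coefficients chase noise (higher variance); on the training data its error can never exceed the nested degree-3 fit. Coefficients and noise level are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2, 2, 50)
y_true = 1.0 + 2.0 * x - 0.5 * x**2 + 0.3 * x**3   # true degree-3 signal
y = y_true + rng.normal(scale=0.2, size=x.shape)   # add noise

coeffs5 = np.polyfit(x, y, deg=5)   # over-parameterised function class
coeffs3 = np.polyfit(x, y, deg=3)   # matched-capacity function class

# Nested least squares: the degree-5 training error is at most the degree-3 one.
err5 = np.mean((np.polyval(coeffs5, x) - y) ** 2)
err3 = np.mean((np.polyval(coeffs3, x) - y) ** 2)
print(err5 <= err3)
```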
What is the optimum number of principal components in the figure? (The referenced figure is not included here.)
10
20
30
40
Which of the following methods do we use to find the best-fit line for data in linear regression?
Least Square Error
Maximum Likelihood
Logarithmic Loss
Both Least Square Error and Maximum Likelihood
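The least-squares fit can be written in closed form from the data means; under Gaussian noise, maximum likelihood yields the same slope and intercept, which is why both methods are accepted above. A minimal sketch with made-up data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# Closed-form least-squares estimates for Y = slope*X + intercept.
x_mean, y_mean = x.mean(), y.mean()
slope = np.sum((x - x_mean) * (y - y_mean)) / np.sum((x - x_mean) ** 2)
intercept = y_mean - slope * x_mean
print(slope, intercept)
```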
Suppose we have n independent variables (X1, X2, …, Xn) and the dependent variable is Y. Now imagine that you are applying linear regression, fitting the best-fit line using least squares error on this data.
You find that the correlation coefficient of one of the variables (say X1) with Y is -0.95.
Which of the following is true for X1?
The relationship between X1 and Y is weak
The relationship between X1 and Y is strong
The relationship between X1 and Y is neutral
Correlation can't judge the relationship
The most popularly used dimensionality reduction algorithm is Principal Component Analysis (PCA). Which of the following is/are true about PCA?
1. PCA is an unsupervised method
2. It searches for the directions that data have the largest variance
3. Maximum number of principal components <= number of features
4. All principal components are orthogonal to each other
1 and 2
1 and 3
2 and 3
1, 2 and 3
All of the above
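The listed PCA properties can be checked with plain NumPy: no labels are used (unsupervised), the components are directions of largest variance (eigenvectors of the covariance matrix sorted by eigenvalue), there are at most as many components as features, and the components are mutually orthogonal. The data-generating matrix below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
# 100 samples, 3 features, with deliberately unequal variance per direction.
X = rng.normal(size=(100, 3)) @ np.array([[3.0, 0, 0],
                                          [1.0, 1.0, 0],
                                          [0, 0, 0.1]])

Xc = X - X.mean(axis=0)                 # center the data (no labels involved)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)  # eigh: for symmetric matrices
order = np.argsort(eigvals)[::-1]       # largest-variance direction first
components = eigvecs[:, order]

# At most (here: exactly) as many components as features, all orthonormal.
print(components.shape)                                  # (3, 3)
print(np.allclose(components.T @ components, np.eye(3))) # orthogonality
```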
What will happen when eigenvalues are roughly equal in PCA?
PCA will perform outstandingly
PCA will perform badly
Can’t Say
None of the above
In which of the following cases does PCA work better?
(i) A linear structure in the data
(ii) If the data lies on a curved surface and not on a flat surface
(iii) If variables are scaled in the same unit
(i) and (ii)
(ii) and (iii)
(i) and (iii)
(i), (ii) and (iii)