K-Nearest Neighbours (KNN) and tree-based algorithms are two of the most intuitive and easy-to-understand machine learning algorithms. Both are simple to explain and demonstrate, making them perfect for those new to the field. It is crucial for beginners to test their knowledge of these algorithms, as they are simple yet immensely powerful and are commonly asked about in interviews. Searching for and practising KNN interview questions can help one better understand the algorithm and its practical applications. In this article, we cover the top 30 KNN interview questions, or KNN MCQs, that will help you succeed.
Let us now look at some KNN interview questions that will also act as KNN practice problems for both beginners and experienced professionals.
A) TRUE
B) FALSE
Solution: A
The training phase of the algorithm consists only of storing the feature vectors and class labels of the training samples. In the testing phase, a test point is classified by assigning the label that is most frequent among the k training samples nearest to that query point, hence the higher computational cost at test time.
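Here is a minimal sketch of that lazy-learning behaviour in Python (the toy data and function names are illustrative, not part of the question): "training" is just storing the arrays, and all the distance work happens at query time.

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x_query, k=3):
    """Classify x_query by majority vote among its k nearest training points."""
    # "Training" was only storing X_train and y_train; all the work happens here.
    distances = np.linalg.norm(X_train - x_query, axis=1)  # Euclidean distance to every sample
    nearest = np.argsort(distances)[:k]                    # indices of the k closest points
    votes = Counter(y_train[nearest])                      # label counts among the neighbours
    return votes.most_common(1)[0][0]

# Toy data: two classes in 2D
X_train = np.array([[1.0, 1.0], [1.5, 2.0], [5.0, 5.0], [6.0, 5.5]])
y_train = np.array([0, 0, 1, 1])
print(knn_predict(X_train, y_train, np.array([1.2, 1.5]), k=3))  # -> 0
```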
A) 3
B) 10
C) 20
D) 50
Solution: B
The validation error is lowest when the value of k is 10, so it is best to use this value of k.
A) Manhattan
B) Minkowski
C) Tanimoto
D) Jaccard
E) Mahalanobis
F) All can be used
Solution: F
All of these distance metrics can be used with k-NN.
A) It can be used for classification
B) It can be used for regression
C) It can be used in both classification and regression
Solution: C
We can also use k-NN for regression problems. In this case, the prediction can be based on the mean or the median of the k most similar instances.
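A minimal sketch of k-NN regression, assuming Euclidean distance and made-up toy data: the prediction is simply the mean (or median) of the targets of the k nearest training points.

```python
import numpy as np

def knn_regress(X_train, y_train, x_query, k=3, use_median=False):
    """Predict a continuous target as the mean (or median) over the k nearest neighbours."""
    distances = np.linalg.norm(X_train - x_query, axis=1)
    nearest = np.argsort(distances)[:k]
    neighbour_targets = y_train[nearest]
    return np.median(neighbour_targets) if use_median else np.mean(neighbour_targets)

X_train = np.array([[1.0], [2.0], [3.0], [10.0]])
y_train = np.array([1.1, 2.1, 2.9, 10.2])
print(knn_regress(X_train, y_train, np.array([2.5]), k=3))  # mean of 1.1, 2.1, 2.9 ≈ 2.03
```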
A) 1 and 2
B) 1 and 3
C) Only 1
D) All of the above
Solution: D
The statements mentioned above are assumptions of the KNN algorithm.
A) K-NN
B) Linear Regression
C) Logistic Regression
Solution: A
The k-NN algorithm can be used for imputing missing values of both categorical and continuous variables.
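For instance, scikit-learn provides a KNNImputer that fills each missing value from the rows nearest to it; note that it works on numeric arrays, so categorical variables would need to be encoded first. A small sketch with made-up data:

```python
import numpy as np
from sklearn.impute import KNNImputer

# Each missing entry is filled using the corresponding feature values of the
# n_neighbors rows closest to it (distance measured on the non-missing features).
X = np.array([
    [1.0, 2.0, np.nan],
    [3.0, 4.0, 3.0],
    [np.nan, 6.0, 5.0],
    [8.0, 8.0, 7.0],
])
imputer = KNNImputer(n_neighbors=2)
print(imputer.fit_transform(X))
```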
A) It can be used for continuous variables
B) It can be used for categorical variables
C) It can be used for categorical as well as continuous
D) None of these
Solution: A
Manhattan distance is designed to calculate the distance between real-valued features.
A) 1
B) 2
C) 3
D) 1 and 2
E) 2 and 3
F) 1,2 and 3
Solution: A
Both Euclidean and Manhattan distances are used for continuous variables, whereas the Hamming distance is used for categorical variables.
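For instance, the Hamming distance simply counts the positions at which two categorical vectors differ; a tiny illustrative sketch:

```python
def hamming_distance(a, b):
    """Count the positions at which two equal-length categorical vectors differ."""
    return sum(x != y for x, y in zip(a, b))

print(hamming_distance(["red", "small", "round"], ["red", "large", "square"]))  # -> 2
```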
A) 1
B) 2
C) 4
D) 8
Solution: A
sqrt( (1-2)^2 + (3-3)^2) = sqrt(1^2 + 0^2) = 1
A) 1
B) 2
C) 4
D) 8
Solution: A
Manhattan distance is the sum of absolute differences, with no square root: |1-2| + |3-3| = 1 + 0 = 1.
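A quick numeric check of both computations for the points (1, 3) and (2, 3), using NumPy:

```python
import numpy as np

a, b = np.array([1, 3]), np.array([2, 3])
euclidean = np.sqrt(np.sum((a - b) ** 2))  # sqrt((1-2)^2 + (3-3)^2) = 1.0
manhattan = np.sum(np.abs(a - b))          # |1-2| + |3-3| = 1
print(euclidean, manhattan)
```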
Context 11-12:
Suppose you are given the following data, where x and y are the two input variables and Class is the dependent variable.
Below is a scatter plot which shows the above data in 2D space.
A) + Class
B) – Class
C) Can’t say
D) None of these
Solution: A
All three nearest points are of the + class, so this point will be classified as + class.
A) + Class
B) – Class
C) Can’t say
Solution: B
This point will now be classified as – class because there are four – class points and three + class points in the nearest circle.
Context 13-14:
Suppose you are given the following 2-class data, where “+” represents the positive class and “–” represents the negative class.
A) 3
B) 5
C) Both have same
D) None of these
Solution: B
5-NN will have the lowest leave-one-out cross-validation error.
A) 2/14
B) 4/14
C) 6/14
D) 8/14
E) None of the above
Solution: E
With 5-NN, the leave-one-out cross-validation accuracy is 10/14, which is not among the given options.
A) When you increase k, the bias will increase
B) When you decrease k, the bias will increase
C) Can’t say
D) None of these
Solution: A
A large k means a simpler model, and a simpler model is always considered to have high bias.
A) When you increase k, the variance will increase
B) When you decrease k, the variance will increase
C) Can’t say
D) None of these
Solution: B
A simpler model (large k) is considered a low-variance model, so decreasing k increases the variance.
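The two answers above can be checked empirically. A small sketch on synthetic data (dataset sizes and k values are arbitrary choices): a tiny k fits the training set almost perfectly but generalizes worse (high variance), while a very large k scores poorly on both sets (high bias).

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=400, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for k in (1, 15, 101):
    model = KNeighborsClassifier(n_neighbors=k).fit(X_tr, y_tr)
    # Small k: train accuracy near 1.0 with a larger train/test gap (high variance).
    # Large k: both scores sag together as the model becomes too simple (high bias).
    print(k, model.score(X_tr, y_tr), model.score(X_te, y_te))
```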
A) Left is Manhattan Distance and right is Euclidean Distance
B) Left is Euclidean Distance and right is Manhattan Distance
C) Neither left nor right is Manhattan Distance
D) Neither left nor right is Euclidean Distance
Solution: B
The left image is a graphical depiction of how Euclidean distance works, whereas the right one shows Manhattan distance.
A) I will increase the value of k
B) I will decrease the value of k
C) Noise does not depend on the value of k
D) None of these
Solution: A
To be more confident about the classifications you make, you can try increasing the value of k, which averages out the effect of noisy points.
A) 1
B) 2
C) 1 and 2
D) None of these
Solution: C
In such a case, you can use either a dimensionality reduction algorithm or a feature selection algorithm.
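Both approaches are easy to place in front of k-NN. A sketch using scikit-learn pipelines on synthetic high-dimensional data (the component and feature counts are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = make_classification(n_samples=300, n_features=50, n_informative=5, random_state=0)

# Option 1: dimensionality reduction (project onto a few principal components).
pca_knn = make_pipeline(PCA(n_components=5), KNeighborsClassifier(n_neighbors=5))

# Option 2: feature selection (keep only the most informative original features).
select_knn = make_pipeline(SelectKBest(f_classif, k=5), KNeighborsClassifier(n_neighbors=5))

print(pca_knn.fit(X, y).score(X, y), select_knn.fit(X, y).score(X, y))
```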
A) 1
B) 2
C) 1 and 2
D) None of these
Solution: C
Both statements are true and self-explanatory.
A) k1 > k2 > k3
B) k1 < k2
C) k1 = k2 = k3
D) None of these
Solution: D
The value of k is highest in k3 and lowest in k1.
A) 1
B) 2
C) 3
D) 5
Solution: B
Setting the value of k to 2 gives the lowest cross-validation accuracy. You can verify this yourself.
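You can reproduce this kind of comparison with a quick cross-validation sweep over k; a sketch on synthetic data (the dataset and the k values here are illustrative, not the ones from the question):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = make_classification(n_samples=200, random_state=0)
for k in (1, 2, 3, 5):
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5)
    print(k, scores.mean())  # pick the k with the best mean CV accuracy
```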
Note: The model has been deployed successfully, and no technical issues were found on the client side except for the model's performance.
A) It is probably an overfitted model
B) It is probably an underfitted model
C) Can’t say
D) None of these
Solution: A
An overfitted model seems to perform well on training data, but it is not generalized enough to give the same results on new data.
A) 1
B) 2
C) 1 and 2
D) None of these
Solution: C
Both options are true and self-explanatory.
A) The classification accuracy is better with larger values of k
B) The decision boundary is smoother with smaller values of k
C) The decision boundary is linear
D) k-NN does not require an explicit training step
Solution: D
Option A: This is not always true. You have to ensure that the value of k is neither too high nor too low.
Option B: This statement is not true. The decision boundary can be a bit jagged for small values of k.
Option C: This is also not true; the decision boundary is generally non-linear.
Option D: This statement is true.
A) TRUE
B) FALSE
Solution: A
You can implement a 2-NN classifier by ensembling 1-NN classifiers: for example, run a 1-NN search to find the nearest neighbour, remove it from the training set, and run 1-NN again to obtain the second neighbour.
A) The boundary becomes smoother with increasing value of K
B) The boundary becomes smoother with decreasing value of K
C) The smoothness of the boundary does not depend on the value of K
D) None of these
Solution: A
The decision boundary becomes smoother as the value of K increases, because each prediction averages over more neighbours.
A) 1
B) 2
C) 1 and 2
D) None of these
Solution: C
Both statements are true.
Context 29-30:
Suppose you have trained a k-NN model and now want to get a prediction on test data. Before getting the prediction, suppose you want to calculate the time taken by the k-NN to predict the class for the test data.
Note: Calculating the distance between two observations takes D time.
A) N*D
B) N*D*2
C) (N*D)/2
D) None of these
Solution: A
To classify a single test observation, the algorithm must compute its distance to each of the N training observations, and each distance takes D time, so the total time is N*D.
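You can observe this linear growth empirically with a brute-force distance computation (assuming no index structure such as a KD-tree is used; the sizes below are arbitrary):

```python
import time
import numpy as np

def query_time(N, num_features=20):
    """Time the N distance computations a brute-force k-NN query must perform."""
    X_train = np.random.rand(N, num_features)
    x_query = np.random.rand(num_features)
    start = time.perf_counter()
    np.linalg.norm(X_train - x_query, axis=1)  # one distance per training point
    return time.perf_counter() - start

for N in (10_000, 100_000, 500_000):
    print(N, query_time(N))  # time grows roughly linearly with N
```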
A) 1-NN >2-NN >3-NN
B) 1-NN < 2-NN < 3-NN
C) 1-NN ~ 2-NN ~ 3-NN
D) None of these
Solution: C
The training time is the same for any value of k in the KNN algorithm, since training consists only of storing the data.
Being prepared for KNN interview questions is crucial for anyone looking to enter the field of data science or machine learning. Understanding the basics of the KNN algorithm, its practical applications, and how to handle technical questions can help you demonstrate your knowledge and problem-solving skills. By practising KNN interview questions and working through example problems, you can improve your understanding and feel more confident during the interview process. With these tips in mind, you can confidently approach KNN interviews and set yourself up for success in your data science career.