
Introduction to k-Nearest Neighbors: Simplified (with implementation in Python)

Note: This article was originally published on Oct 10, 2014 and updated on Mar 27th, 2018

Introduction

In my four years in analytics, more than 80% of the models I have built have been classification models, and just 15-20% regression models. These ratios can be more or less generalized across the industry. The reason for this bias towards classification models is that most analytical problems involve making a decision. For instance: will a customer attrite or not, should we target customer X for digital campaigns, does a customer have high potential, and so on. These analyses are more insightful and link directly to an implementation roadmap. In this article, we will talk about another widely used classification technique called K-nearest neighbors (KNN). Our focus will primarily be on how the algorithm works and how the input parameter affects the output/prediction.

 

Table of Contents

  • When do we use KNN algorithm?
  • How does the KNN algorithm work?
  • How do we choose the factor K?
  • Breaking it Down – Pseudo Code of KNN
  • Implementation in Python from scratch
  • Comparing our model with scikit-learn

 

When do we use KNN algorithm?

KNN can be used for both classification and regression predictive problems. However, it is more widely used in classification problems in the industry. To evaluate any technique we generally look at 3 important aspects:

1. Ease of interpreting the output

2. Calculation time

3. Predictive Power

Let us take a few examples to place KNN on this scale:

[Figure: comparison of KNN with other models on the three parameters]

The KNN algorithm fares well across all of these parameters. It is commonly used for its ease of interpretation and low calculation time.

 

How does the KNN algorithm work?

Let’s take a simple case to understand this algorithm. Following is a spread of red circles (RC) and green squares (GS):

[Figure: scatter of red circles and green squares with an unlabelled blue star]

You intend to find out the class of the blue star (BS). BS can either be RC or GS and nothing else. The “K” in the KNN algorithm is the number of nearest neighbors we wish to take a vote from. Let’s say K = 3. Hence, we will now draw a circle centered on BS just big enough to enclose only three data points on the plane. Refer to the following diagram for more details:

[Figure: the same scatter with a circle around BS enclosing its three nearest neighbors]

The three closest points to BS are all RC. Hence, with a good confidence level, we can say that BS should belong to the class RC. Here the choice became very obvious, as all three votes from the closest neighbors went to RC. The choice of the parameter K is crucial in this algorithm. Next, we will understand the factors to be considered when choosing the best K.
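
To make the voting step concrete, here is a minimal sketch of the K = 3 vote (the coordinates below are made up for illustration and are not the points from the figures above):

import numpy as np
from collections import Counter

# Hypothetical 2D coordinates: three red circles (RC) and three green squares (GS)
points = np.array([[1.0, 1.0], [1.5, 2.0], [2.0, 1.5],   # RC
                   [5.0, 5.0], [5.5, 4.5], [6.0, 5.5]])  # GS
labels = np.array(["RC", "RC", "RC", "GS", "GS", "GS"])

bs = np.array([1.8, 1.8])  # the blue star we want to classify

# Euclidean distance from BS to every labelled point
dists = np.sqrt(((points - bs) ** 2).sum(axis=1))

# Take the K = 3 nearest neighbours and let them vote
k = 3
nearest = labels[np.argsort(dists)[:k]]
print(Counter(nearest).most_common(1)[0][0])
-> RC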

 

How do we choose the factor K?

First, let us try to understand what exactly K influences in the algorithm. In the last example, given that all 6 training observations remain constant, a given K value lets us draw boundaries for each class. These boundaries segregate RC from GS. In the same way, let’s look at the effect of the value of K on the class boundaries. Following are the different boundaries separating the two classes for different values of K.

[Figures: class boundaries between the two classes for different values of K]

If you watch carefully, you can see that the boundary becomes smoother with increasing values of K. As K increases to infinity, the prediction finally becomes all blue or all red, depending on the overall majority. The training error rate and the validation error rate are the two parameters we need to assess for different values of K. Following is the curve of the training error rate for varying values of K:

[Figure: training error rate versus K]

As you can see, the error rate at K=1 is always zero for the training sample. This is because the closest point to any training data point is itself, so the prediction is always accurate with K=1. If the validation error curve had been similar, our choice of K would have been 1. Following is the validation error curve for varying values of K:

[Figure: validation error rate versus K]

This makes the story clearer. At K=1, we were overfitting the boundaries. Hence, the error rate initially decreases and reaches a minimum. After the minimum point, it then increases with increasing K. To get the optimal value of K, you can segregate a training and a validation set from the initial dataset. Now plot the validation error curve to get the optimal value of K (as sketched below). This value of K should be used for all predictions.
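
As a minimal sketch of this procedure (using scikit-learn and an arbitrary 70/30 random split, which are my additions and not part of the original article), the validation error for a range of K values could be computed like this:

# Assumes the same iris.csv used in the implementation below
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

data = pd.read_csv("iris.csv")
X, y = data.iloc[:, 0:4], data.iloc[:, -1]

# Hold out 30% of the data as a validation sample
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=42)

# Validation error for each candidate K; pick the K with the lowest error
for k in range(1, 26):
    model = KNeighborsClassifier(n_neighbors=k).fit(X_train, y_train)
    print(k, round(1 - model.score(X_val, y_val), 3))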

 

Breaking it Down – Pseudo Code of KNN

We can implement a KNN model by following the steps below:

  1. Load the data
  2. Initialise the value of k
  3. For getting the predicted class, iterate from 1 to total number of training data points
    1. Calculate the distance between test data and each row of training data. Here we will use Euclidean distance as our distance metric since it’s the most popular method. The other metrics that can be used are Chebyshev, cosine, etc.
    2. Sort the calculated distances in ascending order based on distance values
    3. Get top k rows from the sorted array
    4. Get the most frequent class of these rows
    5. Return the predicted class

Implementation in Python from scratch

We will be using the popular Iris dataset for building our KNN model. You can download it from here.

# Importing libraries
import pandas as pd
import numpy as np
import math
import operator
#### Start of STEP 1
# Importing data 
data = pd.read_csv("iris.csv")
#### End of STEP 1

data.head() 

# Defining a function which calculates euclidean distance between two data points
def euclideanDistance(data1, data2, length):
    distance = 0
    for x in range(length):
        distance += np.square(data1[x] - data2.iloc[x])
    return np.sqrt(distance)

# Defining our KNN model
def knn(trainingSet, testInstance, k):
 
    distances = {}
    sort = {}
 
    length = testInstance.shape[1]
    
    #### Start of STEP 3
    # Calculating euclidean distance between each row of training data and test data
    for x in range(len(trainingSet)):
        
        #### Start of STEP 3.1
        dist = euclideanDistance(testInstance, trainingSet.iloc[x], length)

        distances[x] = dist[0]
        #### End of STEP 3.1
 
    #### Start of STEP 3.2
    # Sorting them on the basis of distance
    sorted_d = sorted(distances.items(), key=operator.itemgetter(1))
    #### End of STEP 3.2
 
    neighbors = []
    
    #### Start of STEP 3.3
    # Extracting top k neighbors
    for x in range(k):
        neighbors.append(sorted_d[x][0])
    #### End of STEP 3.3
    classVotes = {}
    
    #### Start of STEP 3.4
    # Calculating the most freq class in the neighbors
    for x in range(len(neighbors)):
        # Class label sits in the last column of the training set
        response = trainingSet.iloc[neighbors[x]].iloc[-1]
 
        if response in classVotes:
            classVotes[response] += 1
        else:
            classVotes[response] = 1
    #### End of STEP 3.4

    #### Start of STEP 3.5
    sortedVotes = sorted(classVotes.items(), key=operator.itemgetter(1), reverse=True)
    return(sortedVotes[0][0], neighbors)
    #### End of STEP 3.5


# Creating a dummy testset
testSet = [[7.2, 3.6, 5.1, 2.5]]
test = pd.DataFrame(testSet)
#### Start of STEP 2
# Setting number of neighbors = 1
k = 1
#### End of STEP 2
# Running KNN model
result,neigh = knn(data, test, k)

# Predicted class
print(result)
-> Iris-virginica
# Nearest neighbor
print(neigh)
-> [141]

 

Now we will try to alter the k values, and see how the prediction changes.

# Setting number of neighbors = 3 
k = 3 
# Running KNN model 
result,neigh = knn(data, test, k) 
# Predicted class 
print(result)
-> Iris-virginica
# 3 nearest neighbors
print(neigh)
-> [141, 139, 120]

# Setting number of neighbors = 5
k = 5
# Running KNN model 
result,neigh = knn(data, test, k) 
# Predicted class 
print(result)
-> Iris-virginica
# 5 nearest neighbors
print(neigh)
-> [141, 139, 120, 145, 144]

 

Comparing our model with scikit-learn

from sklearn.neighbors import KNeighborsClassifier
neigh = KNeighborsClassifier(n_neighbors=3)
neigh.fit(data.iloc[:,0:4], data['Name'])

# Predicted class
print(neigh.predict(test))

-> ['Iris-virginica']

# 3 nearest neighbors
print(neigh.kneighbors(test)[1])
-> [[141 139 120]]

We can see that both models predicted the same class (‘Iris-virginica’) and the same nearest neighbors ([141, 139, 120]). Hence, we can conclude that our model runs as expected.

 

End Notes

The KNN algorithm is one of the simplest classification algorithms. Even with such simplicity, it can give highly competitive results. The KNN algorithm can also be used for regression problems. The only difference from the methodology discussed here is that we use the average of the nearest neighbors rather than a vote from the nearest neighbors. KNN can be coded in a single line in R. I am yet to explore how we can use the KNN algorithm in SAS.
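
As a rough sketch of that regression variant (my own toy data, not code from the article), the only change is to average the neighbours’ target values instead of taking a vote; scikit-learn packages this as KNeighborsRegressor:

import numpy as np
from sklearn.neighbors import KNeighborsRegressor

# Toy 1-D regression data: y is roughly 2*x plus some noise
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0], [6.0]])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.1])

# Each prediction is the average of the targets of the k nearest neighbours
reg = KNeighborsRegressor(n_neighbors=3).fit(X, y)
print(reg.predict([[3.6]]))  # averages the targets at x = 3, 4 and 5
-> [8.03333333]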

Did you find the article useful? Have you used any other machine learning tool recently? Do you plan to use KNN in any of your business problems? If yes, share with us how you plan to go about it.

If you like what you just read and want to continue your analytics learning, subscribe to our emails, follow us on twitter or like our facebook page.

 


27 Comments

  • Harshal says:

    Useful article.
    Can you share a similar article for random forest?
    What are the limitations with data size for accuracy?

  • saurabh says:

    Good one. Please share the values of the
    red circle and green square points.

    • Tavish Srivastava says:

      Saurabh,

      The first graph is for illustration purposes. You can create a random dataset to check the algorithm.

  • DEBASHIS ROUT says:

    I am currently doing a part-time MS in BI & Data Mining. I found this article really helpful for understanding the topic in more detail and am expecting to utilize it in my upcoming project work. I need to know: do you have any article on the importance of data quality in BI, classification & decision trees?

  • saraswathi says:

    Hello
    the article is very clear and precise. I would like some clarification on the single line
    “To get the optimal value of K, you can segregate the training and validation from the initial dataset.” Do you mean that we segregate those points on the border of the boundaries for validation and keep the remaining ones for training? This is not very clear to me. Can you please elaborate?

    Thanks.

    • Tavish Srivastava says:

      Saraswathi,

      Here is what I meant: take the entire population and randomly split it into two. Now, using the training sample, score each validation observation with different k-values. The error curve will give you the best value of k.

      Hope it becomes clear now.

      • saraswathi says:

        I want to make sure I understand this correctly. Please confirm or correct.

        You say, take the entire population and split into two – are these two divisions, one for training and one for validation ? ( I am assuming so).

        So, now I use different values of K to cluster the training samples.

        I try to see where the validation samples fall in these clusters.

        I draw the error curve and choose the K with the smallest error ?

        • Tavish says:

          Saraswathi,
          Let me make it even simpler. Say you have 100 datapoints. Split this population into two samples of 70 and 30 observations. Use these 70 observations to predict for the other 30. Once you have the prediction for a particular value of k, check the misclassification against the actual values. Repeat this exercise for different values of k. Hopefully, you will get a curve similar to that shown in the article. Now choose the k for which the misclassification is lowest.

          Hope this makes it clear.

          Tavish

  • Harvey S says:

    Nice tease: “KNN can be coded in a single line on R. ”

    Can you give an example?

  • Felix says:

    Hi, great post, thanks. I would like to add that “low calculation time” is not true for the prediction phase with big, high-dimensional datasets. But it’s still a good choice in many applications.

    • Tavish says:

      Felix,

      You are probably right for cases where the distances between observations are comparable across a large dataset. But in general, populations have natural clusters, which makes the calculation faster. Let me know in case you disagree.

      Tavish

  • kabir singh says:

    I am trying to figure out churn analysis, any suggestions on where I can start looking?

    BTW following this website for 6-8 months now, you guys are doing an amazing job

  • Najma Naaz says:

    That was very helpful. Thank you! Can you please share a concise article on neural nets and deep learning as well?

  • Stonehead Park says:

    Excellent post, I appreciate your effort. 🙂

  • Soumya Shreya says:

    It is a really nice and well-explained article. I am a beginner in the field of data science and machine learning, and I find these articles really helpful for learning and understanding the algorithms. Thanks for publishing them.
    Can you suggest some datasets where I can experiment with and apply KNN?

  • Krishna says:

    How do we handle categorical features with kNN? Do we need to create dummies for them? Do you suggest any distance method other than Euclidean distance if we have a larger number of features? I feel we have to treat outliers as they may have an impact on the distances, and similarly missing values. Please share your opinion.

    • Aishwarya Singh says:

      Hi,

      Yes, you can create dummies for categorical variables in kNN.
      Apart from Euclidean distance, there are other methods that can be used to find the distance, such as Manhattan or Minkowski.
      For outlier and missing value treatment, you can refer to this article.

  • Aanish Singla says:

    IMO the limitation of KNN comes into play when the number of dimensions increases: in higher dimensions, finding neighbors which are quite close to each other in all dimensions might be tough, so the so-called neighbors might be really far apart from each other, which defeats the purpose of the algorithm.

    Kindly share your thoughts/experiences.

  • Amlesh Kanekar says:

    I found it “inspiring”. Have spent last 4 months learning linear algebra, statistics, python. This learning list was culled from analyticsvidhya.com. Now just glimpsing through your article gives me the confidence to code knn from scratch. Thank you!

    • Aishwarya Singh says:

      Hi Amlesh,

      Glad you found this useful!

      • Amlesh Kanekar says:

        I created my own dataset to experiment with KNN. When I plotted my data, the three targets/labels I have are extremely randomly distributed across the 2D plane … no clustering of the three colours is evident.
        The Iris dataset shows a fairly high degree of clustering.
        Should I continue with my dataset, or is there a concept of “so-and-so distribution does not qualify for KNN”?
        I can email a picture of my data plot if needed.