Improving Your Deep Learning Model Using Model Checkpointing - Part 1

Himanshi Singh 18 Mar, 2021 • 5 min read

Introduction

Deep learning is ubiquitous – whether it’s Computer Vision applications or breakthroughs in the field of Natural Language Processing, we are living in a deep learning-fueled world. Thanks to the rapid advances in technology, more and more people are able to leverage the power of deep learning. At the same time, it is a complex field and can appear daunting for newcomers.


A common question for everyone in this field is: how can you improve your deep learning models? Are there techniques that can help? There certainly are, and in this article I'm going to cover one of them, a technique that is very important when building a neural network: Model Checkpointing. It has two major advantages:

  • Saves the best model for us.
  • In case of system failure, not everything is lost.

We’ll discuss each one in detail. Let’s begin!

 

1. Saving the Best Model

Let's discuss what we mean by the "best model" and how it can be saved. Say this is a visualization of a model's performance:

[Figure: training and validation loss plotted against epochs]

Here the blue line represents the training loss and the orange line represents the validation loss. On the X-axis we have the number of epochs, and on the Y-axis we have the loss values. By default, when making predictions, the weights and biases stored at the very last epoch are used. So the model trains for the full specified number of epochs, which is 50 in this case, and the parameters learned during the last epoch are used to make the predictions.

[Figure: epoch vs. loss for 50 epochs of training]
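As an aside, a curve like this can be plotted directly from the History object that Keras returns from training. Here is a minimal sketch, assuming history is the object returned by model.fit with a validation set:

```python
import matplotlib.pyplot as plt

# history is the object returned by model.fit(..., validation_data=...)
plt.plot(history.history["loss"], label="training loss")
plt.plot(history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```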

But if you look closely at this particular graph, the best (lowest) validation loss occurs around epoch number 45:

[Figure: the lowest validation loss, around epoch 45]

Let me use the model history to elaborate on this a bit more. Here is the history for a model that has been trained for 50 epochs:

[Figure: model history for 50 epochs of training]

You can see the epoch numbers here, along with the training loss, training accuracy, validation loss, and validation accuracy. Let's look at the validation loss, as highlighted here:

[Figure: model history with the validation loss column highlighted]

What we generally do is take the parameters of the model at the last epoch, which is epoch 50 here, and make the predictions. In this case, the validation loss at epoch 50 is 0.629, whereas the lowest validation loss was 0.61, at epoch 45.

[Figure: model history around epoch 45, where the validation loss is lowest]
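We can also find the best epoch programmatically from the same history object. A minimal sketch, again assuming history is the object returned by model.fit (the 0.629 and 0.61 values above come from the article's training run, not from this snippet):

```python
import numpy as np

# history.history["val_loss"] holds one value per epoch
val_losses = history.history["val_loss"]
best_epoch = int(np.argmin(val_losses))  # 0-indexed
print(f"Best epoch: {best_epoch + 1}")
print(f"Lowest validation loss: {val_losses[best_epoch]:.3f}")
```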

So with model checkpointing, instead of saving the last model, i.e. the parameters of the last epoch, we save the model that produces the best results. This model is called the best model, and model checkpointing is what helps us save it.

 

2. In Case of System Failure, Not Everything Is Lost

The second useful advantage of this technique is that if your system breaks or fails during the training process, you will not lose much progress, since the model is being saved regularly. We now know that through model checkpointing we can save the best model, but you must be wondering: how do we do that? How do we know which model is the best?

To answer that, in Keras we have to define two parameters. One is "monitor" and the other is "mode".

[Figure: the "monitor" and "mode" parameters]

The first one refers to the quantity that we wish to monitor, such as the validation loss or the validation accuracy, and "mode" refers to the direction in which that quantity should improve. Let me explain this with an example. Say we wish to monitor the validation loss. In that case the mode will be "min", because we want to minimize the loss:

[Figure: monitoring validation loss with mode "min"]
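In Keras this maps directly onto the ModelCheckpoint callback. A minimal sketch (the file name "best_model.h5" is just an illustrative choice):

```python
from tensorflow.keras.callbacks import ModelCheckpoint

checkpoint = ModelCheckpoint(
    "best_model.h5",      # where to save the model (illustrative name)
    monitor="val_loss",   # the quantity we wish to monitor
    mode="min",           # lower is better for a loss
    save_best_only=True,  # keep only the best model seen so far
    verbose=1,            # log a message each time the model is saved
)
```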

Similarly, if we are monitoring the validation accuracy, the mode will be "max", since we want the maximum accuracy on the validation set.

[Figure: monitoring validation accuracy with mode "max"]

So after every epoch, we monitor either the validation loss or the validation accuracy, and save the model if the value has improved over the best one seen so far.
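The callback is then passed to model.fit, and because the best model so far is already on disk, a crash mid-training loses very little: the saved file can simply be reloaded. A minimal sketch, assuming a compiled model, training and validation arrays, and the checkpoint callback from above (for accuracy you would use monitor="val_accuracy" with mode="max" instead):

```python
from tensorflow.keras.models import load_model

# Training writes best_model.h5 at the end of any epoch where
# the validation loss improves on the best value seen so far.
model.fit(X_train, y_train,
          validation_data=(X_val, y_val),
          epochs=50,
          callbacks=[checkpoint])

# After a system failure (or simply after training), restore the
# best saved model instead of the last-epoch model.
best_model = load_model("best_model.h5")
```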

Now, these are the common steps that we perform while creating any deep learning model; we set up model checkpointing at the time of model training (see the sketch after this list):

    1. Loading the dataset
    2. Pre-processing the data
    3. Creating training and validation sets
    4. Defining the model architecture
    5. Compiling the model
    6. Training the model (this is where model checkpointing is set up)
    7. Evaluating model performance
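To make the ordering concrete, here is a condensed sketch of the whole workflow with checkpointing in place. Everything here (MNIST as the dataset, the layer sizes, the file name) is an illustrative assumption, not the article's actual code; the real implementation is covered in the next article:

```python
from tensorflow.keras.callbacks import ModelCheckpoint
from tensorflow.keras.datasets import mnist
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Sequential, load_model

# 1-2. Load and pre-process the dataset (MNIST as an illustrative choice)
(X_train, y_train), (X_test, y_test) = mnist.load_data()
X_train, X_test = X_train / 255.0, X_test / 255.0

# 3. Create training and validation sets
X_train, X_val = X_train[:-10000], X_train[-10000:]
y_train, y_val = y_train[:-10000], y_train[-10000:]

# 4. Define the model architecture
model = Sequential([
    Flatten(input_shape=(28, 28)),
    Dense(128, activation="relu"),
    Dense(10, activation="softmax"),
])

# 5. Compile the model
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 6. Train the model, with model checkpointing set up here
checkpoint = ModelCheckpoint("best_model.h5", monitor="val_loss",
                             mode="min", save_best_only=True, verbose=1)
model.fit(X_train, y_train, validation_data=(X_val, y_val),
          epochs=50, callbacks=[checkpoint])

# 7. Evaluate the best saved model, not the last-epoch model
load_model("best_model.h5").evaluate(X_test, y_test)
```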

End Notes

After reading this article, you should have an intuition for the model checkpointing technique, which can be really helpful and can do wonders if you're looking to improve your deep learning model. For the implementation of this technique, stay tuned! I'm going to cover it in the next article.

If you are looking to kick-start your data science journey and want every topic under one roof, your search stops here. Check out Analytics Vidhya's Certified AI & ML BlackBelt Plus Program.

If you have any questions, let me know in the comments section!

Himanshi Singh 18 Mar 2021

I am a data lover and I love to extract and understand the hidden patterns in the data. I want to learn and grow in the field of Machine Learning and Data Science.


Responses From Readers


Stephen Cobb 20 Mar, 2021

Some of the math is over my head but I do understand most of the concepts.
