
Incremental and Reinforced learning for Image classification

Introduction

One of the biggest challenges deep learning faces today is adding new labels to a neural model without altering its architecture or storing previous datasets. Storing data over time bloats system memory and significantly increases training time. The architecture itself is largely fixed at the first stage of training (call it Model Zero), and reusing previous learnings while adding new labels is practically impossible.

The usual solution is transfer learning, which lets us reuse the weights and biases of a trained model to train a custom model. The catch is that the previous labels aren't carried forward, so we either need all the data the system was ever trained on, or we need something out of the box.

This is where incremental learning with a hybrid neural architecture can help us attain the desired results without compromising accuracy or requiring huge volumes of data.

What if I told you that this article covers not only incremental learning, but also reinforcing new data and labels into the system on the fly?

What if you don’t need huge volumes of data to train a model?
Dream come true?

Without wasting much time, let's get right to it. The full code is laid out step by step below.

The steps below walk through the necessary code snippets and explain what each function is for.

1. Import all the necessary libraries

import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Dense, MaxPooling2D, Flatten, BatchNormalization, Reshape, LeakyReLU, multiply, Embedding, UpSampling2D, Dropout
from tensorflow.keras.models import Model, load_model, Sequential
from tensorflow.keras.optimizers import Adam
from tensorflow.keras.preprocessing.sequence import pad_sequences
import os
import numpy as np
import json
import cv2
import matplotlib.pyplot as plt
import random
from tqdm import tqdm
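
Note: the snippets that follow reference several module-level settings that the article defines elsewhere. The values below are assumptions for illustration only; adjust them to your dataset.

# Assumed module-level configuration (illustrative values, not the article's).
image_inp_shape = (28, 28)            # (width, height) passed to cv2.resize
image_inp_shape_conv = (28, 28, 1)    # input shape of the Conv2D classifier
dense_output = 28 * 28                # flattened image size used by the GAN
labels_file_path = "labels.json"      # persisted label map
model_file = "model.h5"               # persisted classifier weights
max_iter = 50                         # train_on_batch iterations per sample
latent_dim = 100                      # CGAN noise vector size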


2. We define a very simple architecture for test purposes. You can build your own architecture to suit your requirements.

def req_model(no_of_labels):
    # image_inp_shape_conv is the module-level input shape, e.g. (28, 28, 1).
    input_layer = Input(shape=image_inp_shape_conv)
    conv_1 = Conv2D(filters=64, kernel_size=(2, 2), activation='relu', padding='same')(input_layer)
    flatten_1 = Flatten()(conv_1)
    # The output layer grows with the number of labels; this is the layer we
    # will resize when a new label is added.
    out_layer = Dense(no_of_labels, activation="softmax")(flatten_1)

    model = Model(input_layer, out_layer)
    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    return model

3. Create a basic function that keeps track of your labels and updates the labels file as needed.

def labels():
    if os.path.isfile(labels_file_path):
        # Load the existing label map, e.g. {"0": "noise", "1": "cat"}.
        with open(labels_file_path, "r") as labels_file:
            labels = json.load(labels_file)
    else:
        # First run: seed the map with a placeholder "noise" label. Keys are
        # strings so they match what json.load returns on later runs.
        labels_dict = {"0": "noise"}
        with open(labels_file_path, "w") as f:
            json.dump(labels_dict, f)
        labels = labels_dict
    return labels

4. For a simple image classification problem, let's not create any fancy pre-processing. If need be, you can build your own pre-processing pipeline and swap it in.

def preprocess(image):
    # Decode the raw bytes, resize, convert to grayscale, and scale to [0, 1].
    image_arr = cv2.imdecode(np.frombuffer(image, np.uint8), -1)
    image_processed = cv2.resize(image_arr, image_inp_shape)
    image_processed = cv2.cvtColor(image_processed, cv2.COLOR_BGR2GRAY)
    image_processed = image_processed / 255.0
    # Add the channel axis expected by the Conv2D input layer.
    return image_processed.reshape(image_inp_shape_conv)

def preprocess_gan(image):
    # Same pipeline, but flattened to a vector for the dense GAN networks.
    image_arr = cv2.imdecode(np.frombuffer(image, np.uint8), -1)
    image_processed = cv2.resize(image_arr, image_inp_shape)
    image_processed = cv2.cvtColor(image_processed, cv2.COLOR_BGR2GRAY).reshape((dense_output,))
    image_processed = image_processed / 255.0
    return image_processed

5. Create a prediction function so that users can run the model.

def predict(image):
    labels_read = labels()
    # Load the saved model if one exists; otherwise start from a fresh one.
    if os.path.isfile(model_file):
        model = load_model(model_file)
    else:
        model = req_model(no_of_labels=len(labels_read))
    test_image = np.expand_dims(image, axis=0)  # add the batch axis
    results = model.predict(test_image)
    # Map the winning index back to its label name.
    predicted_label = labels_read.get(str(np.argmax(results[0])))
    return predicted_label

6. Now comes the challenging part. Essentially, the next function needs to alter the NumPy arrays holding the weights and biases of the neural architecture.

For a new label to be added, the final layer of the model needs an additional unit to accommodate it, and the previous weights need to be carried forward into the updated shape of the network.

[Image: model update functions for the GAN and the custom model]
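The original snippet survives only as an image, so here is a minimal sketch of what update_model_arc for the custom model could look like, inferred from how it is called in the training script below. model_v1 is the freshly built network with the enlarged output layer and model is the previously trained one; everything beyond those two arguments is an assumption.

def update_model_arc(model_v1, model):
    # Walk the old and new networks in lockstep and carry weights forward.
    for new_layer, old_layer in zip(model_v1.layers, model.layers):
        old_weights = old_layer.get_weights()
        new_weights = new_layer.get_weights()
        if not old_weights:
            continue  # Input/Flatten layers carry no weights
        if old_weights[0].shape == new_weights[0].shape:
            # Shapes match: reuse the trained weights unchanged.
            new_layer.set_weights(old_weights)
        else:
            # The enlarged final Dense layer: copy the old kernel and bias
            # into the first columns; the extra unit for the new label
            # keeps its fresh initialization.
            kernel, bias = new_weights
            old_kernel, old_bias = old_weights
            kernel[:, :old_kernel.shape[1]] = old_kernel
            bias[:old_bias.shape[0]] = old_bias
            new_layer.set_weights([kernel, bias])
    return model_v1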

As you can see, there is an updated script for the GAN model as well. Don't worry, we will come to that in a later step.


7. Create a training script for the model.

def train(image, ground_truth, add_label: bool):
    labels_read = labels()
    if os.path.isfile(model_file):
        if add_label:
            # A new label was added: build a network with an enlarged output
            # layer and carry the old weights forward into it.
            model_old = load_model(model_file)
            model_new = req_model(no_of_labels=len(labels_read))
            model = update_model_arc(model_v1=model_new, model=model_old)
        else:
            model = load_model(model_file)
    else:
        model = req_model(no_of_labels=len(labels_read))
    test_image = np.expand_dims(image, axis=0)  # add the batch axis
    ground_truth_arr = np.asarray([float(ground_truth)])
    for _ in tqdm(range(max_iter), desc="training req model..."):
        model.train_on_batch(test_image, ground_truth_arr)
    # Save under model_file so predict() and later training runs can find it.
    model.save(model_file)

8. Create a GAN model architecture.

Before we proceed further, let's discuss why we need a GAN at all. We can reinforce the system by feeding the samples users upload while testing back into training and validation. The question is what happens when the system is shown too many samples of a single new label.

In a single line: it would bias the system. To avoid this, we can use a conditional GAN (CGAN) that generates dummy data for all the labels already trained into the system and passes it along with the latest user data, thereby balancing the number and variety of samples the system is trained on.

[Image: generator and discriminator architecture]
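Since the architecture above is only available as an image, here is a minimal conditional GAN sketch in the style implied by the imports at the top (Embedding, multiply, LeakyReLU, and so on). The layer sizes and latent_dim are assumptions, not the article's exact values.

def build_generator(no_of_labels):
    noise = Input(shape=(latent_dim,))
    label = Input(shape=(1,), dtype='int32')
    # Condition the noise on the label via a learned embedding.
    label_embedding = Flatten()(Embedding(no_of_labels, latent_dim)(label))
    merged = multiply([noise, label_embedding])
    x = Dense(256)(merged)
    x = LeakyReLU(alpha=0.2)(x)
    x = BatchNormalization(momentum=0.8)(x)
    x = Dense(512)(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = BatchNormalization(momentum=0.8)(x)
    # Sigmoid output to match the [0, 1] range produced by preprocess_gan.
    img = Dense(dense_output, activation='sigmoid')(x)
    return Model([noise, label], img)

def build_discriminator(no_of_labels):
    img = Input(shape=(dense_output,))
    label = Input(shape=(1,), dtype='int32')
    label_embedding = Flatten()(Embedding(no_of_labels, dense_output)(label))
    merged = multiply([img, label_embedding])
    x = Dense(512)(merged)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.4)(x)
    x = Dense(512)(x)
    x = LeakyReLU(alpha=0.2)(x)
    x = Dropout(0.4)(x)
    validity = Dense(1, activation='sigmoid')(x)
    model = Model([img, label], validity)
    model.compile(optimizer=Adam(0.0002, 0.5), loss='binary_crossentropy', metrics=['accuracy'])
    return model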

[Image: stacked GAN with training pipeline]
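And a sketch of the stacked model plus one training step; again, this is a plausible reconstruction rather than the article's exact pipeline.

def build_cgan(generator, discriminator):
    noise = Input(shape=(latent_dim,))
    label = Input(shape=(1,), dtype='int32')
    img = generator([noise, label])
    # Freeze the discriminator inside the stacked model so that only the
    # generator is updated when the combined model trains.
    discriminator.trainable = False
    validity = discriminator([img, label])
    combined = Model([noise, label], validity)
    combined.compile(optimizer=Adam(0.0002, 0.5), loss='binary_crossentropy')
    return combined

def train_gan_step(generator, discriminator, combined, real_imgs, real_labels):
    batch = real_imgs.shape[0]
    noise = np.random.normal(0, 1, (batch, latent_dim))
    fake_imgs = generator.predict([noise, real_labels])
    # Train the discriminator on real and generated samples.
    discriminator.train_on_batch([real_imgs, real_labels], np.ones((batch, 1)))
    discriminator.train_on_batch([fake_imgs, real_labels], np.zeros((batch, 1)))
    # Train the generator to fool the discriminator.
    combined.train_on_batch([noise, real_labels], np.ones((batch, 1)))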


9. Let's now create the wrapper function that will train and update the models.

Training and use of the CGAN is optional. To get immediate results, skip the CGAN and comment out its training pipeline.

[Image: wrapper function for training and updating the models]
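The wrapper itself is also only shown as an image. A hypothetical version, using the functions defined so far plus an already-built generator (assumed to live at module level), might look like this:

def train_and_update(image, ground_truth, add_label, use_cgan=False):
    # Always train on the user's sample first.
    train(image, ground_truth, add_label)
    if use_cgan:
        # Replay one generated sample per previously learned label so the
        # new sample does not bias the classifier.
        labels_read = labels()
        for label_id in labels_read:
            if str(label_id) == str(ground_truth):
                continue
            noise = np.random.normal(0, 1, (1, latent_dim))
            fake = generator.predict([noise, np.asarray([[int(label_id)]])])
            train(fake.reshape(image_inp_shape_conv), label_id, add_label=False)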


10. Create a reverse label lookup so that label names can be mapped back to their indices during prediction and reinforcement.

def rev_labels(labels_dict):
    # Invert {index: name} into {name: index} for quick lookups.
    return {val: key for key, val in labels_dict.items()}

11. Now all we need to do is put everything together and handle the possible scenarios.

Scenario 1 — Your model is being trained for the first time.
Scenario 2 — Your model was trained, but requires updates because some predictions were wrong.
Scenario 3 — Your model was trained on certain labels and now a new label needs to be added.

[Image: script handling the three scenarios]
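The scenario-handling script is only available as an image; a hypothetical driver covering the three cases, built on the functions above, could look like the following:

def run(image_bytes, true_label=None):
    image = preprocess(image_bytes)
    prediction = predict(image)
    if true_label is None:
        return prediction  # plain inference, no feedback given
    labels_read = labels()
    rev = rev_labels(labels_read)
    if true_label in rev:
        if prediction != true_label:
            # Scenarios 1 and 2: known label, wrong prediction -> reinforce.
            train_and_update(image, rev[true_label], add_label=False)
    else:
        # Scenario 3: unseen label -> extend the label map and the model.
        new_id = str(len(labels_read))
        labels_read[new_id] = true_label
        with open(labels_file_path, "w") as f:
            json.dump(labels_read, f)
        train_and_update(image, new_id, add_label=True)
    return prediction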

12. And finally, let's see the results of our extensive coding.


With this, you will be able to train image classification models on the fly without needing to store huge volumes of data. This kind of system can prove extremely productive when huge numbers of users are utilizing the system and you want accuracy to improve over time.

The same architecture can be tweaked and used for a variety of use cases, such as sentiment analysis, document classification, sound analysis, etc.

I hope the tutorial helps you in developing groundbreaking technology.

Thank you…


Author

The simplest way to describe me would be a technophile. I have an extremely curious mind and a knack for tinkering with trending technologies, resolving real-world problems by visualizing them from a different perspective. At the end of the day, I want my work to create a constructive impact on society.
