Object detection is a tremendously important area of computer vision, needed for autonomous driving, video surveillance, medical applications, and many other domains.
We are grappling with a pandemic that’s operating at a never-before-seen scale. Researchers all over the globe are frantically trying to develop a vaccine or a cure for COVID-19 while doctors are just about keeping the pandemic from overwhelming the entire world. Meanwhile, many countries have found that social distancing and the use of masks & gloves help curb the situation a little.
I recently had an idea to apply my deep learning knowledge to help the current situation a little. In this article, I’ll introduce you to RetinaNet, give a little background on how it works, and then walk through its implementation.
The cherry on top? We’ll build a “face mask detector” using RetinaNet to help us in this ongoing pandemic. You can extrapolate the same idea to build an AI-enabled solution for your smart home, one that opens the gate of your building only to people who are wearing masks and gloves.
As the cost of drones decreases over time, we are seeing a large spike in the generation of aerial data. You can use the same RetinaNet model to detect objects such as vehicles (bikes, cars, etc.) or pedestrians in aerial images, or even in satellite images, to solve various business problems.
So, as you can see, the applications of object detection models are endless.
RetinaNet is one of the best one-stage object detection models and has proven to work well with dense and small-scale objects. For this reason, it has become a popular object detection model for aerial and satellite imagery.
RetinaNet was introduced by Facebook AI Research to tackle the dense detection problem. It was designed to address the shortcomings of single-shot object detectors like YOLO and SSD when dealing with extreme foreground-background class imbalance.
In essence, RetinaNet is a composite network composed of:
- a backbone network (e.g. ResNet) for feature extraction,
- a Feature Pyramid Network (FPN) built on top of the backbone,
- a classification subnetwork, and
- a regression (box localization) subnetwork.
For a better understanding, let’s look at each component of the architecture separately.
A fully convolutional network (FCN) is attached to each FPN level for object classification. As shown in the diagram above, this subnetwork consists of 3×3 convolutional layers with 256 filters, followed by a final 3×3 convolutional layer with K×A filters. Hence the output feature map is of size W×H×KA, where W and H are proportional to the width and height of the input feature map, and K and A are the number of object classes and anchor boxes respectively.
Finally, a sigmoid activation (not softmax) is used for object classification.
The reason the last convolutional layer has K×A filters is that, if there are A anchor box proposals for each position in the feature map, then each anchor box can be classified into any of K classes. So the output feature map has K×A channels or filters (for example, with K = 2 classes and A = 9 anchors, the last layer has 18 filters).
The regression subnetwork is attached to each feature map of the FPN, in parallel with the classification subnetwork. Its design is identical to that of the classification subnet, except that the last 3×3 convolutional layer has 4×A filters, resulting in an output feature map of size W×H×4A.
The reason the last convolutional layer has 4×A filters is that, in order to localize objects, the regression subnetwork produces 4 numbers for each anchor box, predicting the relative offset (in terms of center coordinates, width, and height) between the anchor box and the ground-truth box. Therefore, the output feature map of the regression subnet has 4A filters or channels.
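To make the output shapes concrete, here is a minimal sketch of the two heads for a single FPN level in Keras. This is only an illustration of the layer layout, not the fizyr keras-retinanet implementation; the function name and the defaults (K = 2 classes for mask/noMask, A = 9 anchors per location) are assumptions.

import tensorflow as tf
from tensorflow.keras import layers

def build_heads(num_classes=2, num_anchors=9, feature_channels=256):
    # One FPN feature map of size W x H x 256 (spatial dims left unspecified)
    inputs = layers.Input(shape=(None, None, feature_channels))

    # Classification subnetwork: 3x3 convs with 256 filters,
    # then a 3x3 conv with K*A filters and a sigmoid activation -> W x H x (K*A)
    x = inputs
    for _ in range(4):
        x = layers.Conv2D(256, 3, padding='same', activation='relu')(x)
    cls_out = layers.Conv2D(num_classes * num_anchors, 3, padding='same',
                            activation='sigmoid', name='classification')(x)

    # Regression subnetwork: identical design, but the last conv has 4*A filters -> W x H x (4*A)
    y = inputs
    for _ in range(4):
        y = layers.Conv2D(256, 3, padding='same', activation='relu')(y)
    reg_out = layers.Conv2D(4 * num_anchors, 3, padding='same', name='regression')(y)

    return tf.keras.Model(inputs, [cls_out, reg_out])

heads = build_heads()
heads.summary()

Running the summary shows the classification output with 18 channels (2 × 9) and the regression output with 36 channels (4 × 9), matching the W×H×KA and W×H×4A shapes described above.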
Focal Loss (FL) is an improved version of Cross-Entropy Loss (CE) that tries to handle the class imbalance problem by assigning more weight to hard or easily misclassified examples (e.g. background with noisy texture, a partial object, or the object of our interest) and down-weighting easy examples (e.g. clear background regions).
Focal loss thus reduces the loss contribution from easy examples and increases the importance of correcting misclassified ones. It is essentially an extension of the cross-entropy loss function that down-weights easy examples and focuses training on hard negatives.
To achieve this, the researchers proposed adding a modulating factor (1 − p_t)^γ to the cross-entropy loss, with a tunable focusing parameter γ ≥ 0. The RetinaNet object detection method uses an α-balanced variant of the focal loss, where α = 0.25 and γ = 2 work best.
So the focal loss can be defined as:

FL(p_t) = −α_t (1 − p_t)^γ log(p_t)

where p_t is the model's estimated probability for the ground-truth class.
The focal loss is visualized for several values of γ ∈ [0, 5] in Figure 1. Note the following properties of the focal loss:
Note: when γ = 0, FL is equivalent to CE (shown as the blue curve in Figure 1).
Intuitively, the modulating factor reduces the loss contribution from easy examples and extends the range in which an example receives a low loss.
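To make the formula concrete, here is a minimal sketch of the α-balanced focal loss for sigmoid (per-anchor, binary) outputs. This is an illustration following the definition above, not the exact loss implemented inside keras-retinanet; the defaults α = 0.25 and γ = 2 follow the paper.

import tensorflow as tf

def focal_loss(y_true, y_pred, alpha=0.25, gamma=2.0, eps=1e-7):
    # Clip predictions to avoid log(0)
    y_pred = tf.clip_by_value(y_pred, eps, 1.0 - eps)
    ones = tf.ones_like(y_pred)
    # p_t is the predicted probability of the ground-truth class
    p_t = tf.where(tf.equal(y_true, 1.0), y_pred, 1.0 - y_pred)
    alpha_t = tf.where(tf.equal(y_true, 1.0), alpha * ones, (1.0 - alpha) * ones)
    # FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t); gamma = 0 recovers cross-entropy
    return -alpha_t * tf.pow(1.0 - p_t, gamma) * tf.math.log(p_t)

# Example: a hard positive (p = 0.1) contributes far more loss than an easy negative (p = 0.05)
print(focal_loss(tf.constant([1.0, 0.0]), tf.constant([0.1, 0.05])).numpy())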
You can read about focal loss in detail in this article (link to my focal loss article), where I’ve talked about the evolution of cross-entropy into focal loss, the need for focal loss, and a comparison of focal loss with cross-entropy.
And as the cherry on top, I’ve used a couple of examples to explain why focal loss is better than cross-entropy.
Now let’s see the implementation of RetinaNet to build a face mask detector in Python.
Any deep learning model requires a large volume of training data to give good results on test data. In this article (link to my web scraping article), I’ve talked about web scraping methods to gather a large volume of images for your deep learning project.
We start by creating annotations for the training and validation dataset, using the tool LabelImg. This excellent annotation tool lets you quickly annotate the bounding boxes of the objects to train the machine learning model.
You can install it using the command below from the Anaconda command prompt:
pip install labelImg
You can annotate each JPEG file using the labelImg tool as shown below, and it’ll generate XML files with the coordinates of each bounding box. We’ll use these XML files to train our model.
import os
print(os.getcwd())
!git clone https://github.com/fizyr/keras-retinanet.git
%cd keras-retinanet/
!pip install .
!python setup.py build_ext --inplace
import numpy as np
import shutil
import pandas as pd
import os, sys, random
import xml.etree.ElementTree as ET
from os import listdir
from os.path import isfile, join
import matplotlib.pyplot as plt
from PIL import Image
import requests
import urllib.request

from keras_retinanet.utils.visualization import draw_box, draw_caption, label_color
from keras_retinanet.utils.image import preprocess_image, resize_image
pngPath = 'C:/Users/PraveenKumar/RetinaNet//maskDetectorJPEGImages/'
annotPath = 'C:/Users/PraveenKumar/RetinaNet//maskDetectorXMLfiles/'
data = pd.DataFrame(columns=['fileName', 'xmin', 'ymin', 'xmax', 'ymax', 'class'])
os.getcwd()

# read all annotation files
allfiles = [f for f in listdir(annotPath) if isfile(join(annotPath, f))]
# Parse every XML annotation file and collect the bounding boxes into the DataFrame
for file in allfiles:
    # print(file)
    if file.split(".")[1] == 'xml':
        fileName = 'C:/Users/PraveenKumar/RetinaNet/maskDetectorJPEGImages/' + file.replace(".xml", '.jpg')
        tree = ET.parse(annotPath + file)
        root = tree.getroot()
        for obj in root.iter('object'):
            cls_name = obj.find('name').text
            xml_box = obj.find('bndbox')
            xmin = xml_box.find('xmin').text
            ymin = xml_box.find('ymin').text
            xmax = xml_box.find('xmax').text
            ymax = xml_box.find('ymax').text
            # Append a row to the DataFrame for each bounding box
            data = data.append({'fileName': fileName, 'xmin': xmin, 'ymin': ymin,
                                'xmax': xmax, 'ymax': ymax, 'class': cls_name},
                               ignore_index=True)

data.shape
# Pick a random image from the DataFrame and draw its ground-truth boxes
def show_image_with_boxes(df):
    # pick a random image
    filepath = df.sample()['fileName'].values[0]
    # get all rows for this image
    df2 = df[df['fileName'] == filepath]
    im = np.array(Image.open(filepath))
    # if it's a PNG it will have an alpha channel
    im = im[:, :, :3]
    for idx, row in df2.iterrows():
        box = [row['xmin'], row['ymin'], row['xmax'], row['ymax']]
        print(box)
        draw_box(im, box, color=(255, 0, 0))
    plt.axis('off')
    plt.imshow(im)
    plt.show()

show_image_with_boxes(data)
# Check a few records of the data
data.head()
# Define labels & write them to a file
classes = ['mask', 'noMask']
with open('../maskDetectorClasses.csv', 'w') as f:
    for i, class_name in enumerate(classes):
        f.write(f'{class_name},{i}\n')

if not os.path.exists('snapshots'):
    os.mkdir('snapshots')
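The training step below also expects the annotations themselves in a CSV file (maskDetectorData.csv). Here is a minimal sketch of how the data DataFrame built earlier could be written in the path,x1,y1,x2,y2,class_name layout (no header row) that the keras-retinanet CSV generator expects; the output location '../maskDetectorData.csv' is an assumption, so adjust it to match where you run train.py from.

# Write annotations in keras-retinanet's CSV format: path,x1,y1,x2,y2,class_name (no header row).
# The output path below is illustrative; adjust it to your own setup.
data[['fileName', 'xmin', 'ymin', 'xmax', 'ymax', 'class']].to_csv(
    '../maskDetectorData.csv', index=False, header=False
)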
Note: It’s better to start with a pre-trained model instead of training a model from scratch. We’ll use a ResNet50 model that’s already pre-trained on the COCO dataset.
# Destination path for the downloaded weights (illustrative; adjust as needed)
PRETRAINED_MODEL = './snapshots/pretrained_resnet50_coco.h5'

URL_MODEL = 'https://github.com/fizyr/keras-retinanet/releases/download/0.5.1/resnet50_coco_best_v2.1.0.h5'
urllib.request.urlretrieve(URL_MODEL, PRETRAINED_MODEL)
Note: You can use the snippet of code below to train your model if you’re using Google Colab.
# Put your training data path & the file that has labels for your training data
!keras_retinanet/bin/train.py --freeze-backbone \
  --random-transform \
  --weights {PRETRAINED_MODEL} \
  --batch-size 8 \
  --steps 500 \
  --epochs 15 \
  csv maskDetectorData.csv maskDetectorClasses.csv
But if you’re training in a local Jupyter notebook or a different IDE, you can run the command below from your command prompt:
python keras_retinanet/bin/train.py --freeze-backbone --random-transform \
  --weights {PRETRAINED_MODEL} --batch-size 8 --steps 500 --epochs 15 \
  csv maskDetectorData.csv maskDetectorClasses.csv
Let’s analyze each argument passed to the script train.py:
- --freeze-backbone: freeze the backbone (ResNet50) layers so that only the head layers are trained, which helps when the dataset is small.
- --random-transform: apply random data-augmentation transforms to the images and annotations.
- --weights {PRETRAINED_MODEL}: initialize the model with the pre-trained COCO weights we downloaded.
- --batch-size 8: number of images per training step.
- --steps 500: number of steps per epoch.
- --epochs 15: number of epochs to train.
- csv maskDetectorData.csv maskDetectorClasses.csv: use the CSV generator with our annotation file and class-mapping file.
from glob import glob

model_paths = glob('snapshots/resnet50_csv_0*.h5')
latest_path = sorted(model_paths)[-1]
print("path:", latest_path)

from keras_retinanet import models

model = models.load_model(latest_path, backbone_name='resnet50')
model = models.convert_model(model)

label_map = {}
for line in open('../maskDetectorClasses.csv'):
    row = line.rstrip().split(',')
    label_map[int(row[1])] = row[0]
# Choose one image randomly from the dataset and predict on it using the trained model
def show_image_with_predictions(df, threshold=0.6):
    # choose a random image
    row = df.sample()
    filepath = row['fileName'].values[0]
    print("filepath:", filepath)
    # get all rows for this image
    df2 = df[df['fileName'] == filepath]
    im = np.array(Image.open(filepath))
    print("im.shape:", im.shape)
    # if it's a PNG it will have an alpha channel
    im = im[:, :, :3]

    # plot true boxes
    for idx, row in df2.iterrows():
        box = [row['xmin'], row['ymin'], row['xmax'], row['ymax']]
        print(box)
        draw_box(im, box, color=(255, 0, 0))

    ### plot predictions ###
    # get predictions
    imp = preprocess_image(im)
    imp, scale = resize_image(imp)
    boxes, scores, labels = model.predict_on_batch(np.expand_dims(imp, axis=0))

    # standardize box coordinates
    boxes /= scale

    # loop through each prediction for the input image
    for box, score, label in zip(boxes[0], scores[0], labels[0]):
        # scores are sorted, so we can quit as soon
        # as we see a score below the threshold
        if score < threshold:
            break
        box = box.astype(np.int32)
        color = label_color(label)
        draw_box(im, box, color=color)
        class_name = label_map[label]
        caption = f"{class_name} {score:.3f}"
        draw_caption(im, box, caption)

    plt.axis('off')
    plt.imshow(im)
    plt.show()
    return score, label

plt.rcParams['figure.figsize'] = [20, 10]
# Feel free to change the threshold as per your business requirement
score, label = show_image_with_predictions(data, threshold=0.6)
References:
- R-FCN: Object Detection via Region-based Fully Convolutional Networks – http://arxiv.org/abs/1605.06409
- Focal Loss for Dense Object Detection – https://arxiv.org/pdf/1708.02002.pdf
- How RetinaNet works (ArcGIS guide) – https://developers.arcgis.com/python/guide/how-retinanet-works/
- fizyr/keras-retinanet – https://github.com/fizyr/keras-retinanet
- Object detection in Colab with fizyr RetinaNet – https://www.freecodecamp.org/news/object-detection-in-colab-with-fizyr-retinanet-efed36ac4af3/
- https://deeplearningcourses.com/
- RetinaNet explained and demystified – https://blog.zenggyu.com/en/post/2018-12-05/retinanet-explained-and-demystified/
To conclude, we went through the complete journey of making a face mask detector with an implementation of RetinaNet. We created a dataset, trained a model, and ran inference (here is my GitHub repo for the notebook and dataset).
RetinaNet is a powerful model that uses a Feature Pyramid Network with ResNet as its backbone. I was able to get decent results for the face mask detector with a very limited dataset and very few epochs (only 6 epochs of 500 steps each). You can change the detection threshold to suit your needs.
Note: In general, RetinaNet is a good choice to start an object detection project with, in particular if you need to get good results quickly.
If you enjoyed this article, leave a few claps, it will encourage me to explore further machine learning opportunities 🙂
Praveen Kumar Anwla
I’ve been working as a Data Scientist with product-based and Big 4 audit firms for almost 5 years now. I have been working on various NLP, machine learning & cutting-edge deep learning frameworks to solve business problems. Please feel free to check out my personal blog, where I cover topics from machine learning, AI, and chatbots to visualization tools (Tableau, QlikView, etc.) and various cloud platforms like Azure, IBM & AWS.
Good information, learned a lot.
Thanks for the great article. Can RetinaNet be used for real-time detection?
Hi, wondering how the "MaskDetectorData.csv" file gets generated. Is it through a to_csv command on the dataframe "data", or some other method?
Hi Prithvi, thanks for pointing that out. maskDetectorData.csv is actually our training data, i.e. "data" in our case.