# A brief Study of Image Thresholding Algorithms

Krithika 08 Aug, 2022

## Introduction

This article briefly introduces image thresholding and the algorithms used for it. Image thresholding is a simple image segmentation technique used to convert a grayscale or RGB image into a binary image. In this article, we will look into thresholding algorithms like simple thresholding, Otsu's thresholding, and adaptive thresholding, along with a brief note on a deep learning model (U-Net) for image segmentation.

## What is Image Thresholding?

Before understanding the term Image Thresholding, let us first understand the term Image Segmentation. Image segmentation is a common technique used to divide an image into groups of pixels based on some criteria.

Image thresholding is a type of image segmentation that separates the foreground from the background in an image. In this technique, each pixel value is compared against a provided threshold value and reassigned accordingly. In computer vision, thresholding is usually done on grayscale images.

The below images show a grayscale image and the image obtained after applying thresholding on it.

## Why do we need Image Thresholding?

Let us understand the importance of image thresholding with an example-

Take a look at the images below,

Compared to the first image, the mask in the second image is clearly visible. Let's take another example,


In the first (original) image, the text is harder to read than in the second image, which we obtained after applying thresholding. So thresholding is useful for extracting text that is not clear in the original image.

Image thresholding helps us divide an image’s foreground and background, which can help to identify the objects that are not clearly visible in the images.

## Understanding different thresholding techniques

In this article, we will learn about different techniques used in image thresholding and implement those techniques using OpenCV.

## Simple Thresholding

Simple thresholding is also known as binary thresholding. This technique sets a threshold value and compares each pixel to it. If the pixel value is greater than the threshold, it is set to the maximum value; otherwise, it is set to zero.
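To make the rule concrete before turning to OpenCV, here is a minimal NumPy sketch of the same operation (the threshold value 70 below is just an example, not a recommendation):

```
import numpy as np

# Binary thresholding rule: pixels above the threshold become max_val, the rest zero
def binary_threshold(gray, thresh=70, max_val=255):
    return np.where(gray > thresh, max_val, 0).astype(np.uint8)
```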

#### Implementation of Simple Thresholding using OpenCV:

Importing necessary libraries

```
import cv2
import matplotlib.pyplot as plt
# cv2_imshow replaces cv2.imshow in Google Colab notebooks
from google.colab.patches import cv2_imshow
```

Converting a color image into grayscale

```
image = cv2.imread('/content/drive/MyDrive/AV/OpenCV/test.jpg')
cv2_imshow(image)
# Converting the color image into grayscale
orig_img = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
```

Binary Thresholding

```
# Arguments of cv2.threshold:
# cv2.threshold(grayscale image, threshold value, maximum pixel value, type of threshold)
# Output is a tuple containing the threshold value and the thresholded image

t, thresh = cv2.threshold(orig_img, 70, 255, cv2.THRESH_BINARY)
cv2_imshow(thresh)
```

Binary Inverse Thresholding

```
# Inverse binary: pixels above the threshold become 0, the rest become 255
t, thresh1 = cv2.threshold(orig_img, 70, 255, cv2.THRESH_BINARY_INV)
cv2_imshow(thresh1)
```

Truncate Thresholding

```
# Truncate: pixels above the threshold are set to the threshold value itself
t, thresh2 = cv2.threshold(orig_img, 70, 255, cv2.THRESH_TRUNC)
cv2_imshow(thresh2)
```

Threshold to zero

```
# To zero: pixels below the threshold become 0; the rest keep their original value
t, thresh3 = cv2.threshold(orig_img, 70, 255, cv2.THRESH_TOZERO)
cv2_imshow(thresh3)
```

Threshold to zero inverse

```
# To zero inverse: pixels above the threshold become 0; the rest keep their original value
t, thresh4 = cv2.threshold(orig_img, 127, 255, cv2.THRESH_TOZERO_INV)
cv2_imshow(thresh4)
```

The below image is obtained after applying simple thresholding.

## Otsu's Thresholding

One of the ways to achieve an optimal threshold is Otsu's method. In this method, we find the spread of the foreground and background pixels for every possible threshold value. The threshold that gives the least spread is taken as the optimal threshold.

#### How does Otsu's thresholding work?

The idea in Otsu's thresholding is to maximize the between-class variance, which can be defined as follows:

$$\sigma_b^2(t) = w_b(t)\, w_f(t)\, \big(\mu_b(t) - \mu_f(t)\big)^2$$

Here, $\sigma_b^2(t)$ is the between-class variance of the two classes (background and foreground) at threshold $t$.

Let $n_b(t)$ and $n_f(t)$ be the number of pixels in the background and foreground classes, respectively, and $n$ the total number of pixels in the image. Then the class weights are $w_b(t) = n_b(t)/n$ and $w_f(t) = n_f(t)/n$.

The means of the background and foreground classes, $\mu_b(t)$ and $\mu_f(t)$, are the average intensities of the pixels below and above the threshold $t$.

Otsu's algorithm calculates the between-class variance for all possible threshold values. The threshold with the highest between-class variance is taken as the optimal threshold value. Values less than the optimal threshold fall into one class, and the remaining values fall into the other.
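To make the search concrete, here is a minimal NumPy sketch of this procedure (an illustrative re-implementation, not the OpenCV internals): it scans every candidate threshold and keeps the one that maximizes the between-class variance defined above.

```
import numpy as np

def otsu_threshold(gray):
    # Histogram of the 8-bit grayscale image
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    n = hist.sum()
    bins = np.arange(256)
    best_t, best_var = 0, 0.0
    for t in range(1, 256):
        n_b, n_f = hist[:t].sum(), hist[t:].sum()
        if n_b == 0 or n_f == 0:
            continue
        w_b, w_f = n_b / n, n_f / n                   # class weights
        mu_b = (bins[:t] * hist[:t]).sum() / n_b      # background mean
        mu_f = (bins[t:] * hist[t:]).sum() / n_f      # foreground mean
        var_between = w_b * w_f * (mu_b - mu_f) ** 2  # between-class variance
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t
```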

#### Implementation of Otsu's Thresholding using OpenCV:

```
# Applying a Gaussian blur first usually gives a cleaner histogram and a better threshold
blur = cv2.GaussianBlur(orig_img, (5, 5), 0)

# With THRESH_OTSU, the threshold argument (here 128) is ignored;
# the optimal value is computed from the histogram and returned as t
t, thresh5 = cv2.threshold(blur, 128, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
print('Threshold obtained by Otsu Thresholding : ', t)
cv2_imshow(thresh5)
```

The below image was obtained after applying Otsu's binarization thresholding.

Both simple thresholding and Otsu's thresholding are global thresholding techniques: they use a single threshold value for the whole image. But a single threshold may not be sufficient, because a value that works well in one part of the image may fail in another. To overcome this limitation, adaptive thresholding can be used.

Adaptive thresholding is a local thresholding technique. It considers each pixel together with its neighborhood. The arithmetic mean or Gaussian-weighted mean of the neighborhood's pixel intensities is commonly used to calculate the local threshold, which is then used to classify the pixel. In the Gaussian mean, pixel values farther from the center of the region contribute less to the threshold, while in the arithmetic mean, all pixel values contribute equally.
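As an illustration of the arithmetic-mean variant, here is a hand-rolled sketch (the block size 11 and constant C = 2 are assumed example values, the same kind of parameters the OpenCV function takes):

```
import cv2
import numpy as np

# Mean adaptive thresholding "by hand": each pixel is compared against
# the mean of its local block minus a small constant C
def manual_mean_adaptive(gray, block_size=11, C=2, max_val=255):
    local_mean = cv2.blur(gray.astype(np.float32), (block_size, block_size))
    return np.where(gray > local_mean - C, max_val, 0).astype(np.uint8)
```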

#### Implementation of Adaptive thresholding using OpenCV:

```
# Arithmetic mean adaptive thresholding:
# the threshold for each pixel is the mean of its 11x11 neighborhood minus 2
# (block size 11 and constant 2 are typical example values)
thresh6 = cv2.adaptiveThreshold(orig_img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                cv2.THRESH_BINARY, 11, 2)
cv2_imshow(thresh6)
```
```
# Gaussian mean adaptive thresholding:
# same idea, but with a Gaussian-weighted mean of the neighborhood
thresh7 = cv2.adaptiveThreshold(orig_img, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C,
                                cv2.THRESH_BINARY, 11, 2)
cv2_imshow(thresh7)
```

The below images were obtained after applying adaptive thresholding:

## Introduction to U-Net: Deep Learning Model for Image Segmentation

In this section, we will discuss the U-Net architecture for image segmentation. U-Net was introduced for biomedical image segmentation by Olaf Ronneberger et al. With this architecture, the segmentation of images can be computed on a modern GPU in a short amount of time. U-Net uses the concept of a Fully Convolutional Network with small modifications. The model helps to localize the object in an image and find the mask of that object.

#### U-Net Architecture:

The image below shows the architecture of U-Net.

• This model gets its name from its U-shaped architecture.
• As the image shows, the architecture has two paths that form an encoder-decoder network.
• In the left (contracting) path, each block applies two convolution layers followed by a max-pooling layer.
• Each convolution is followed by the ReLU activation function.
• In the right (expanding) path, we apply transpose convolutions along with two regular convolutions.

#### Implementation of U-Net using Keras:

```
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import tensorflow_datasets as tfds
import matplotlib.pyplot as plt
import numpy as np
```
```
# Loading the Oxford-IIIT Pet dataset from TensorFlow Datasets
dataset, info = tfds.load('oxford_iiit_pet:3.*.*', with_info=True)
```
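As a quick optional check, the `info` object reports the split sizes; the Oxford-IIIT Pet dataset has 3,680 training and 3,669 test examples, which is why the test split is divided 3,000/669 further below:

```
# Optional sanity check on the split sizes reported by TFDS
print(info.splits["train"].num_examples)  # 3680
print(info.splits["test"].num_examples)   # 3669
```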
```
def resize(input_image, input_mask):
    input_image = tf.image.resize(input_image, (128, 128), method="nearest")
    input_mask = tf.image.resize(input_mask, (128, 128), method="nearest")
    return input_image, input_mask
```
```
def augment(input_image, input_mask):
    if tf.random.uniform(()) > 0.5:
        # Random flipping of the image and mask
        input_image = tf.image.flip_left_right(input_image)
        input_mask = tf.image.flip_left_right(input_mask)
    return input_image, input_mask
```
```
def normalize(input_image, input_mask):
    input_image = tf.cast(input_image, tf.float32) / 255.0
    input_mask -= 1  # shift the mask labels from {1, 2, 3} to {0, 1, 2}
    return input_image, input_mask
```
```
def load_image_train(datapoint):
    input_image, input_mask = resize(datapoint["image"], datapoint["segmentation_mask"])
    input_image, input_mask = augment(input_image, input_mask)
    return normalize(input_image, input_mask)
```
```
def load_image_test(datapoint):
    input_image, input_mask = resize(datapoint["image"], datapoint["segmentation_mask"])
    return normalize(input_image, input_mask)
```
```
train_dataset = dataset["train"].map(load_image_train, num_parallel_calls=tf.data.AUTOTUNE)
test_dataset = dataset["test"].map(load_image_test, num_parallel_calls=tf.data.AUTOTUNE)
```
```
BATCH_SIZE = 64
BUFFER_SIZE = 1000
train_batches = train_dataset.cache().shuffle(BUFFER_SIZE).batch(BATCH_SIZE).repeat()
train_batches = train_batches.prefetch(buffer_size=tf.data.AUTOTUNE)
validation_batches = test_dataset.take(3000).batch(BATCH_SIZE)
test_batches = test_dataset.skip(3000).take(669).batch(BATCH_SIZE)
```
```
def display(display_list):
    plt.figure(figsize=(15, 15))
    title = ["Input Image", "True Mask", "Predicted Mask"]
    for i in range(len(display_list)):
        plt.subplot(1, len(display_list), i + 1)
        plt.title(title[i])
        plt.imshow(tf.keras.utils.array_to_img(display_list[i]))
        plt.axis("off")
    plt.show()

# Displaying an image and its corresponding mask
sample_batch = next(iter(train_batches))
random_index = np.random.choice(sample_batch[0].shape[0])
sample_image, sample_mask = sample_batch[0][random_index], sample_batch[1][random_index]
display([sample_image, sample_mask])
```
```
# Creating 2 convolution blocks with ReLU activation function
def double_conv_block(x, n_filters):
    # Conv2D then ReLU activation, applied twice
    x = layers.Conv2D(n_filters, 3, padding="same", activation="relu", kernel_initializer="he_normal")(x)
    x = layers.Conv2D(n_filters, 3, padding="same", activation="relu", kernel_initializer="he_normal")(x)
    return x
```
```
# Creating downsampling or encoder blocks
def downsample_block(x, n_filters):
    f = double_conv_block(x, n_filters)
    p = layers.MaxPool2D(2)(f)
    p = layers.Dropout(0.3)(p)
    return f, p
```
```
# Creating upsampling or decoder blocks
def upsample_block(x, conv_features, n_filters):
    # Transpose convolution layer
    x = layers.Conv2DTranspose(n_filters, 3, 2, padding="same")(x)
    # Concatenate with the matching encoder features (skip connection)
    x = layers.concatenate([x, conv_features])
    # Dropout
    x = layers.Dropout(0.3)(x)
    # Conv2D twice with ReLU activation
    x = double_conv_block(x, n_filters)
    return x
```
```
def build_unet_model(image_size):
    # Input layer
    inputs = layers.Input(shape=image_size)
    # Creating 4 downsampling blocks
    f1, p1 = downsample_block(inputs, 64)
    f2, p2 = downsample_block(p1, 64 * 2)
    f3, p3 = downsample_block(p2, 64 * 4)
    f4, p4 = downsample_block(p3, 64 * 8)
    # Bottleneck
    bottleneck = double_conv_block(p4, 1024)
    # Creating 4 upsampling blocks
    u6 = upsample_block(bottleneck, f4, 512)
    u7 = upsample_block(u6, f3, 256)
    u8 = upsample_block(u7, f2, 128)
    u9 = upsample_block(u8, f1, 64)
    # Output layer: one channel per class, softmax over the 3 classes
    outputs = layers.Conv2D(3, 1, padding="same", activation="softmax")(u9)
    # Creating the model with Keras
    unet_model = tf.keras.Model(inputs, outputs, name="U-Net")
    return unet_model
```
```
# Creating a model with input shape (128, 128, 3)
unet_model = build_unet_model((128, 128, 3))

# Compiling the model
# Loss: sparse categorical cross-entropy (the masks hold integer class labels)
# Metrics: accuracy
unet_model.compile(optimizer="adam",
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
```
```
# Model training
TRAIN_LENGTH = info.splits["train"].num_examples
STEPS_PER_EPOCH = TRAIN_LENGTH // BATCH_SIZE
VAL_SUBSPLITS = 5
TEST_LENGTH = info.splits["test"].num_examples
VALIDATION_STEPS = TEST_LENGTH // BATCH_SIZE // VAL_SUBSPLITS
model_history = unet_model.fit(train_batches,
                               epochs=15,
                               steps_per_epoch=STEPS_PER_EPOCH,
                               validation_steps=VALIDATION_STEPS,
                               validation_data=validation_batches)
```
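Optionally, the history object returned by `fit` can be used to plot the training and validation loss curves (a small sketch using matplotlib, which is already imported above):

```
# Plot training and validation loss from the history returned by fit()
plt.plot(model_history.history["loss"], label="training loss")
plt.plot(model_history.history["val_loss"], label="validation loss")
plt.xlabel("epoch")
plt.ylabel("loss")
plt.legend()
plt.show()
```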
```
# Creating mask for predicted class
def create_mask(pred_mask):
    # Take the most likely class per pixel and keep a channel axis
    pred_mask = tf.argmax(pred_mask, axis=-1)
    return pred_mask[0][..., tf.newaxis]
```
```
# Prediction
def show_predictions(dataset=None, num=1):
    if dataset:
        for image, mask in dataset.take(num):
            pred_mask = unet_model.predict(image)
            display([image[0], mask[0], create_mask(pred_mask)])
    else:
        pred_mask = unet_model.predict(sample_image[tf.newaxis, ...])
        display([sample_image, sample_mask, create_mask(pred_mask)])
```
```
import cv2
# Resizing the color image loaded earlier to the model's input size
image1 = cv2.resize(image, (128, 128))
cv2_imshow(image1)

# Scale to [0, 1], add a batch dimension, and predict the mask
image1 = tf.expand_dims(tf.cast(image1, tf.float32) / 255.0, axis=0)
pred_mask = unet_model.predict(image1)
plt.imshow(tf.keras.utils.array_to_img(create_mask(pred_mask)))
plt.axis("off")
plt.show()
```