
Highlight elimination for vehicle detection at night time

This article was published as a part of the Data Science Blogathon.

Table of contents

  1. Introduction
  2. Methodology
  3. Further modifications
  4. Results
  5. Conclusion and future work

Introduction

Detection of vehicles on road scenes is the first step in a vision-based traffic surveillance system. There are many motion-based approaches for vision-based vehicle detection.

Background modeling is a common approach to achieve this, in which the moving vehicles are considered the foreground and stationary regions of the scene, i.e. road, are considered the background.

The extracted foreground at night time or in the dim light from background modeling techniques contains the vehicles as well as highlights.
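To make the background-modeling step concrete, here is a minimal sketch of a running-average background model with simple difference thresholding. The blending rate `alpha` and difference threshold `thresh` are illustrative values, not the ones used in the project.

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model: blend the new frame in slowly."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=30):
    """Mark pixels that differ from the background by more than `thresh`."""
    return np.abs(frame.astype(float) - bg) > thresh

# Toy grayscale example: a static "road" with one bright moving region.
bg = np.full((8, 8), 50.0)          # learned background (the road)
frame = bg.copy()
frame[2:5, 2:5] = 200.0             # a bright vehicle-sized region
mask = foreground_mask(bg, frame)
print(mask.sum())                   # 9 foreground pixels
```

At night, the mask produced this way contains both the vehicle and the road patches lit up by its headlights, which is exactly the problem the rest of the article addresses.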

As discussed above, efficient vehicle detection is crucial in many traffic surveillance projects. Much work has been done on vehicle detection in the daytime, but detecting vehicles in night-time surveillance videos is harder for several reasons. Highlights from vehicle headlights falling on the road are one major source of interference: they can inflate estimated vehicle sizes or cause erroneous vehicle tracking.

[Image: Highlight elimination, part 1 (Source)]

Having discussed the motivation behind this blog, let's understand the methodology used to solve the problem at hand.

Methodology

Let's go through the two main methods that enable efficient vehicle detection at night time:

1. Edge threshold – Observe the image below, which shows the boundary edge map of a frame, evaluated by applying a Sobel filter on the foreground mask. As can be clearly seen, the vehicle has sharp edges, whereas highlights on the road are smooth. This property is exploited in the project: foreground blocks that contain more interior edge pixels than a threshold are classified as vehicle blocks.

[Image: Highlight elimination, part 2 (Source)]
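The edge-threshold rule can be sketched as follows. The Sobel filter is written out by hand so the example needs nothing beyond numpy; the magnitude threshold and edge-pixel count threshold are illustrative, not the project's tuned values.

```python
import numpy as np

def sobel_edges(img):
    """Gradient magnitude from hand-rolled 3x3 Sobel kernels."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    ky = kx.T
    h, w = img.shape
    mag = np.zeros((h, w))
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            mag[i, j] = np.hypot((patch * kx).sum(), (patch * ky).sum())
    return mag

def block_is_vehicle(edge_map, edge_thresh=50, count_thresh=4):
    """Classify a block as vehicle if it holds enough strong edge pixels."""
    return (edge_map > edge_thresh).sum() > count_thresh

# A block with a sharp vertical boundary (vehicle edge) vs. a smooth one.
block = np.zeros((8, 8)); block[:, 4:] = 255.0
smooth = np.full((8, 8), 128.0)       # highlight: bright but edge-free
print(block_is_vehicle(sobel_edges(block)))    # True
print(block_is_vehicle(sobel_edges(smooth)))   # False
```

The smooth block illustrates why this works: a headlight highlight can be bright, but its interior produces almost no edge response.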

2. Intensity factor – A frame contains three kinds of objects: vehicles, highlights, and the dark background (since night-time videos are used), and their intensity levels differ in a very useful way. Since highlights are brighter than vehicles, an optimum threshold is applied to separate high-intensity blocks from low-intensity blocks.

[Image: Highlight elimination, part 3]
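A minimal sketch of the intensity factor, assuming 8-bit grayscale blocks; the threshold of 200 is an illustrative "optimum" value, not the one tuned in the project.

```python
import numpy as np

def classify_block(block, intensity_thresh=200):
    """Highlights reflected on the road are brighter than vehicle bodies,
    so a high mean intensity flags the block as a highlight."""
    return "highlight" if block.mean() > intensity_thresh else "vehicle"

highlight = np.full((4, 4), 240.0)   # glare patch on the road
vehicle   = np.full((4, 4), 120.0)   # duller vehicle body
print(classify_block(highlight))     # highlight
print(classify_block(vehicle))       # vehicle
```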

Further modifications

The following factors have also been included, along with the above methods, to resolve further issues in night-time vehicle detection:

1. Size constraint: A major distinguishing factor between vehicles and highlights is the number of blocks they occupy. A vehicle occupies many contiguous blocks in each direction (it can be thought of as a rectangle), whereas a highlight occupies far fewer. A threshold of 2-3 contiguous occupied blocks in each direction was applied to remove highlight blocks.
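The size constraint can be sketched as a run-length check over the grid of foreground blocks; `min_run=3` below matches the 2-3 block threshold mentioned above, but the grid layout is illustrative.

```python
import numpy as np

def passes_size_constraint(block_mask, min_run=3):
    """Keep a foreground block only if it sits in a run of at least
    `min_run` consecutive foreground blocks both horizontally and vertically."""
    keep = np.zeros_like(block_mask, bool)
    h, w = block_mask.shape
    for i in range(h):
        for j in range(w):
            if not block_mask[i, j]:
                continue
            l = j                       # extend horizontal run left/right
            while l > 0 and block_mask[i, l - 1]: l -= 1
            r = j
            while r < w - 1 and block_mask[i, r + 1]: r += 1
            u = i                       # extend vertical run up/down
            while u > 0 and block_mask[u - 1, j]: u -= 1
            d = i
            while d < h - 1 and block_mask[d + 1, j]: d += 1
            keep[i, j] = (r - l + 1 >= min_run) and (d - u + 1 >= min_run)
    return keep

grid = np.zeros((6, 6), bool)
grid[1:4, 1:4] = True     # vehicle: a 3x3 patch of foreground blocks
grid[5, 5] = True         # isolated highlight block
kept = passes_size_constraint(grid)
print(kept.sum())         # 9 -> the isolated highlight is dropped
```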

2. Lone highlights: Highlights from street lights can also interfere with vehicle detection. Lights that occupy only a small area within a foreground block are termed lone highlights. An area constraint (fewer than 5% foreground pixels in a foreground block) was applied to remove them.
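The 5% area constraint is a one-liner per block; the 16x16 block size below is an assumption for illustration.

```python
import numpy as np

def is_lone_highlight(fg_block, min_fraction=0.05):
    """Discard a foreground block whose foreground pixels cover less than
    5% of its area (the article's lone-highlight rule)."""
    return fg_block.mean() < min_fraction

block_size = (16, 16)                  # 256 pixels per block (assumed)
street_glow = np.zeros(block_size, bool)
street_glow[0, :10] = True             # 10/256 ~ 3.9% foreground
vehicle_part = np.zeros(block_size, bool)
vehicle_part[4:12, 4:12] = True        # 64/256 = 25% foreground
print(is_lone_highlight(street_glow))  # True  -> removed
print(is_lone_highlight(vehicle_part)) # False -> kept
```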

3. HSV color factor: Sometimes a vehicle is not fully detected because of less bright regions inside it or too few edge pixels. As the vehicle moves from frame to frame, its hue remains constant, and this property can be exploited to detect the whole vehicle, or as much of it as possible. Various constraints on the three HSV parameters were tried to find a suitable combination for better detection.
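The hue-constancy idea can be sketched with the standard library's `colorsys` module: pixels whose hue is close to the hue tracked for the vehicle in earlier frames are reclaimed as vehicle pixels. The tolerance `tol` is an assumed illustrative value.

```python
import colorsys

def hue_of(r, g, b):
    """Hue in [0, 1) for an 8-bit RGB pixel (stdlib colorsys)."""
    return colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[0]

def matches_vehicle_hue(pixel, vehicle_hue, tol=0.05):
    """Accept a pixel if its hue sits within `tol` of the hue tracked for
    the vehicle in earlier frames (hue is stable frame to frame)."""
    diff = abs(hue_of(*pixel) - vehicle_hue)
    return min(diff, 1 - diff) <= tol   # hue wraps around at 1.0

red_hue = hue_of(200, 30, 30)           # hue tracked for a red vehicle
print(matches_vehicle_hue((180, 40, 40), red_hue))   # True: dim red pixel
print(matches_vehicle_hue((40, 40, 180), red_hue))   # False: blue background
```

Note that value (brightness) is deliberately ignored here, which is what lets dimly lit parts of the vehicle match its brighter parts.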

Results

1. In the first set of results, let's see how the methodology works on night-time video footage.

Applying the edge sum threshold and optimum intensity factor: in the two images below, we can see that blocks with high edge sum values are detected based on the threshold we apply.

Edge sum threshold = 35 (Source: self project)

Edge sum threshold = 45 (Source: self project)

In the first image, the highlight falling on the road is detected along with the car, whereas in the second image almost the whole car is detected and the highlight on the road is reduced.

2. The second set of results shows the use of the HSV factor:

Before applying any hue constraint (Source: self project)

After applying the red hue constraint (Source: self project)

As can be seen, a large portion of the bus is detected in the second image after applying the red hue constraint.

Conclusion and future work

We have discussed methods for removing the highlights that interfere with vehicle detection, mainly at night time. More work can be done in this field, as several issues remain unresolved:

  1. Hole filling may be applied for efficient vehicle detection.
  2. Better detection of white vehicles (whose hue matches that of highlights) is needed.
  3. Weather conditions like rain and storm can further lead to errors in the above-mentioned methods. They need to be handled separately.

