The Decision Review System (DRS) has become ubiquitous in cricket. Teams now rely heavily on the DRS to overturn tight umpiring decisions, and that can quite often turn a match in their favor.
This ball tracking concept, part of the DRS, is now an all too familiar sight for cricket fans:
This got me thinking – could I build my own ball tracking system using my knowledge of deep learning and Python?
I’m a huge cricket fan and I’m constantly looking for different use cases where I can apply machine learning or deep learning algorithms. The idea of building a ball tracking system came to me when I was working on my previous project focused on generating insights from cricket commentary data.
Cricket teams and franchises also use ball tracking to understand the weak zones of opposition players. In which areas is a particular batsman vulnerable? Where does a bowler consistently pitch the ball in the death overs?
Ball tracking systems help teams analyze and answer these questions. Here is one such example from a recent cricket match:
In this article, we will walk through the various aspects of a ball tracking system and then build one in Python using the example of cricket. This promises to be quite a unique learning experience!
Note: If you’re completely new to the world of deep learning and computer vision, I suggest checking out the below resources:
Let’s quickly familiarize ourselves with two popular terms in Computer Vision prior to a discussion about the Ball Tracking System – Object Detection and Object Tracking.
Object Detection is one of the most fascinating concepts in computer vision. It has a far-reaching role in different domains such as defense, space, sports, and other fields. Here, I have listed a few interesting use cases of Object Detection in Defense and Space:
But what is object detection?
Image Classification + Localization = Object Detection
Object Detection is the task of identifying an object and its location in an image. Object detection is similar to an image classification problem but with one additional task – identifying the location of an object as well – a concept known as Localization.
As you can see here, the location of the object is represented by a rectangular box, popularly known as a bounding box. The bounding box represents the coordinates of the object in the image. But wait – how is Object Detection different from Object Tracking? Let’s answer this question now.
Object Tracking is a special case of Object Detection. It applies only to video data. In object tracking, the object and its location are identified in every frame of a video.
Object Detection applied on each frame of a video turns into an Object Tracking problem.
Remember that Object Detection is for a single image, whereas Object Tracking is for a sequence of fast-moving frames. Both problems involve the same task, but the terms are used interchangeably depending on the type of data you’re working with.
A ball tracking system is one of the most interesting use cases of object detection and tracking in sports. It is used to find the trajectory of the ball in a sports video. Hawk-Eye, the most advanced ball tracking system, is used in sports like cricket, tennis, and football to identify the trajectory of the ball from high-performance cameras.
We can develop a similar system using the concepts of computer vision by identifying the ball and its location from every frame of a video. Here is a demo of what we will be building in this article:
Awesome, right?
The Ball Tracking System, as I’m sure you’ve gathered by now, is a powerful concept that transcends industries. In this section, I will showcase a few popular use cases of ball-tracking in sports.
We’ve discussed this earlier, and I’m sure most of you will be familiar with Hawk-Eye in cricket.
The trajectory of the ball assists in making critical decisions during the match. For example, in cricket, during a Leg Before Wicket (LBW) review, the trajectory of the ball helps decide whether the ball pitched in line with the stumps or outside them, and whether it would have gone on to hit the stumps.
Similarly, in tennis, during serves or a rally, the ball tracking system assists in knowing whether the ball has pitched inside or outside the permissible lines on the court:
Every team has a set of match-winning players. Picking their wickets at the earliest opportunity is crucial for any team to win matches. With the help of ball-tracking technology, we can analyze the raw videos and generate heat maps.
From these heatmaps, we can easily identify the strong and weak zones of a batsman. This helps the team develop a strategy against every player ahead of a match:
Can you think of other use cases of a ball tracking system in sports? Let me know in the comments section below!
There are different tracking algorithms as well as pre-trained models for tracking objects in a video. But there are certain challenges with them when it comes to tracking a fast-moving cricket ball.
Here are a few challenges that we need to be aware of before tracking a fast-moving ball in a cricket video.
Hence, in this article, I will focus on 2 simple approaches to track a fast-moving ball in a sports video:
Let’s discuss them in detail now.
One of the simplest ways could be to break down the image into smaller patches, say 3 * 3 or 5 * 5 grids, and then classify every patch into one of 2 classes – whether a patch contains a ball or not. This approach is known as the sliding window approach as we are sliding the window of a patch across every part of an image.
Remember that the formation of grids can be overlapping as well. It all depends on the way you want to formulate the problem.
Here is an example that showcases the non-overlapping grids:
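To make the idea concrete, here is a minimal sketch of how non-overlapping patches could be generated from a frame. The 64-pixel patch size and the frame.jpg file name are just illustrative choices on my part:

```python
import cv2

def non_overlapping_patches(image, patch_size=64):
    """Split an image into non-overlapping square patches
    (a sliding window whose stride equals the patch size)."""
    patches = []
    height, width = image.shape[:2]
    for y in range(0, height - patch_size + 1, patch_size):
        for x in range(0, width - patch_size + 1, patch_size):
            patches.append(image[y:y + patch_size, x:x + patch_size])
    return patches

# each patch would then be passed to a classifier: ball or no ball
frame = cv2.imread('frame.jpg')       # hypothetical frame file
patches = non_overlapping_patches(frame)
```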
This method is really simple to implement. But it is a time-consuming and computationally expensive process, as it requires classifying every single patch of the image.
So next, I will discuss the alternative approach to the sliding window.
Instead of considering every patch, we can reduce the patches for classification based on the color of the ball. Since we know the color of the ball, we can easily differentiate the patches that have a similar color to that of the ball from the rest of the patches.
This results in fewer patches to classify. This process of grouping similar parts of an image based on color is known as segmentation by color.
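As a quick sketch of the idea (the full implementation follows later in the article), we could keep only the patches whose average color is close to white, since we are tracking a white ball. The 200 cutoff is purely illustrative, and the patches list is reused from the previous sketch:

```python
def looks_like_ball_color(patch, min_brightness=200):
    """Keep a patch only if its average pixel intensity is close to white."""
    return patch.mean() > min_brightness

# only the bright, white-ish patches go on to the classifier
candidate_patches = [p for p in patches if looks_like_ball_color(p)]
```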
Time to code! Let’s develop a simple ball tracking system that tracks the ball on the pitch using Python. Download the necessary data files from here.
First, let’s read a video file and save the frames to a folder:
Reading frames:
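Here is a minimal sketch of this step, assuming the downloaded video is named video.mp4 and the frames are written to a frames/ folder (both names are placeholders of my own, not from the dataset):

```python
import os
import cv2

# read the video and dump every frame to disk as a JPEG
cap = cv2.VideoCapture('video.mp4')    # assumed file name
os.makedirs('frames', exist_ok=True)   # assumed output folder

count = 0
while True:
    ret, frame = cap.read()            # ret becomes False once the video ends
    if not ret:
        break
    cv2.imwrite(f'frames/{count}.jpg', frame)
    count += 1

cap.release()
print(f'Saved {count} frames')
```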
As our objective is to track the ball on the pitch, we need to extract the frames that contain the pitch. Here, I am using the concept of scene detection to accomplish the task:
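One simple way to sketch scene detection is to measure how much each frame differs from the previous one and plot that value; frames within the same scene differ only slightly, while a cut produces a spike. The frames/ folder carries over from the previous snippet:

```python
import os
import cv2
import numpy as np
import matplotlib.pyplot as plt

# sort the saved frames numerically (0.jpg, 1.jpg, ...)
frame_files = sorted(os.listdir('frames'), key=lambda f: int(f.split('.')[0]))

# sum of absolute pixel differences between consecutive frames
diffs = []
prev = None
for name in frame_files:
    img = cv2.imread(os.path.join('frames', name), cv2.IMREAD_GRAYSCALE)
    if prev is not None:
        diffs.append(np.sum(cv2.absdiff(img, prev)))
    prev = img

plt.plot(diffs)
plt.xlabel('Frame number')
plt.ylabel('Difference from previous frame')
plt.show()
```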
Output:
The outlier in the plot indicates the frame number at which the scene changes. So, we fix a threshold to obtain the frames before the scene change:
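A sketch of this step, picking a cutoff by eye from the plot above (the value below is purely illustrative and will differ for your video):

```python
# illustrative threshold - choose it by inspecting the plot above
THRESHOLD = 150_000_000

# index of the last frame before the first scene change
scene_change = next(i for i, d in enumerate(diffs) if d > THRESHOLD)

# keep only the frames that appear before the scene change
pitch_frames = frame_files[:scene_change + 1]
print(f'{len(pitch_frames)} frames contain the pitch')
```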
Now, we have obtained the frames that contain a pitch. Next, we will implement a segmentation approach that we discussed earlier in the article. Let’s carry out all the steps of the approach for only a single frame now.
We will read the frame and apply a Gaussian blur to reduce noise in the image:
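A sketch of this step for the first pitch frame, using a 5 x 5 Gaussian kernel (the kernel size is my choice; pitch_frames carries over from the previous snippet):

```python
# read a single pitch frame in grayscale and smooth it
frame_path = os.path.join('frames', pitch_frames[0])
gray = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)

# 5 x 5 Gaussian kernel; sigma = 0 lets OpenCV derive it from the kernel size
blurred = cv2.GaussianBlur(gray, (5, 5), 0)

plt.imshow(blurred, cmap='gray')
plt.show()
```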
Output:
As the color of the ball is known, we can easily segment the white-colored objects in the image. Here, 200 acts as the threshold: any pixel value below 200 is set to 0, and any value above 200 is set to 255.
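A minimal sketch of the thresholding step, continuing from the blurred frame above:

```python
# binary thresholding: pixels above 200 become 255 (white), the rest become 0 (black)
_, mask = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)

plt.imshow(mask, cmap='gray')
plt.show()
```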
Output:
As you can see here, the white-colored objects are segmented. In the resulting mask, white indicates the white-colored objects and black indicates everything else. And that’s it! We have separated the white-colored objects from the rest.
Now, we will find the contours of segmented objects in an image:
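A sketch of this step, assuming OpenCV 4.x (where findContours returns the contours and their hierarchy):

```python
# find the outlines of the white blobs in the binary mask
contours, hierarchy = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
print(f'Found {len(contours)} contours')
```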
Draw the contours on the original image:
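A sketch of drawing the detected contours on a color copy of the frame:

```python
# draw every contour (index -1) in green on a color copy of the frame
frame_color = cv2.imread(frame_path)
cv2.drawContours(frame_color, contours, -1, (0, 255, 0), 2)

# OpenCV loads images as BGR, so convert to RGB for matplotlib
plt.imshow(cv2.cvtColor(frame_color, cv2.COLOR_BGR2RGB))
plt.show()
```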
Output:
Next, extract the patches from an image using the contours:
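A sketch of this step: each contour's bounding rectangle is cropped from a clean copy of the frame, so the green contour lines drawn above don't leak into the patches:

```python
# crop a patch around each contour's bounding rectangle
clean = cv2.imread(frame_path)
patches, boxes = [], []
for cnt in contours:
    x, y, w, h = cv2.boundingRect(cnt)
    patches.append(clean[y:y + h, x:x + w])
    boxes.append((x, y, w, h))
```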
It’s time to build an image classifier to identify the patch containing the ball.
Reading and preparing the dataset:
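I won't assume the exact folder structure of the downloaded patch dataset, so the sketch below uses two hypothetical folders, data/ball/ and data/no_ball/, and resizes every patch to 32 x 32:

```python
import glob
import numpy as np

def load_patches(folder, label):
    """Load every JPEG in a folder, resize it, and attach the given label."""
    images, labels = [], []
    for path in glob.glob(os.path.join(folder, '*.jpg')):
        img = cv2.resize(cv2.imread(path), (32, 32)) / 255.0  # normalize pixels to [0, 1]
        images.append(img)
        labels.append(label)
    return images, labels

ball_x, ball_y = load_patches('data/ball', 1)            # hypothetical folder names
no_ball_x, no_ball_y = load_patches('data/no_ball', 0)

X = np.array(ball_x + no_ball_x, dtype='float32')
y = np.array(ball_y + no_ball_y)
```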
Split the dataset into train and validation:
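A standard split, holding out 20% of the patches for validation (the ratio is my choice):

```python
from sklearn.model_selection import train_test_split

# stratify so both splits keep the same ball / no-ball ratio
X_train, X_val, y_train, y_val = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)
```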
Build a baseline model for identifying the patch containing the ball:
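The exact architecture isn't pinned down here, so this is one possible baseline: a small Keras CNN with a single convolutional block and a sigmoid output for the binary ball / no-ball decision:

```python
from tensorflow.keras import layers, models

model = models.Sequential([
    layers.Input(shape=(32, 32, 3)),
    layers.Conv2D(16, (3, 3), activation='relu'),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(32, activation='relu'),
    layers.Dense(1, activation='sigmoid'),   # probability that the patch contains the ball
])

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, epochs=10, batch_size=32,
          validation_data=(X_val, y_val))
```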
Evaluate the model on the validation data:
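Checking the accuracy on the held-out patches:

```python
# accuracy on the held-out validation patches
loss, accuracy = model.evaluate(X_val, y_val, verbose=0)
print(f'Validation accuracy: {accuracy:.3f}')
```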
Repeat similar steps for each frame in the video, followed by classification:
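Putting the pieces together, here is a sketch of the full loop: blur, threshold, and extract patches from every pitch frame, then keep the patch the classifier is most confident about (the 0.5 confidence cutoff is my own choice):

```python
detections = {}  # frame name -> (x, y, w, h) of the most ball-like patch

for name in pitch_frames:
    path = os.path.join('frames', name)
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    _, mask = cv2.threshold(blurred, 200, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    frame = cv2.imread(path)
    best_score, best_box = 0.5, None  # require at least 50% confidence
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        patch = cv2.resize(frame[y:y + h, x:x + w], (32, 32)) / 255.0
        score = model.predict(patch[np.newaxis], verbose=0)[0][0]
        if score > best_score:
            best_score, best_box = score, (x, y, w, h)

    if best_box is not None:
        detections[name] = best_box
```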
Have a glance at the frames containing the ball along with the location:
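For example, printing the detections collected above:

```python
# frame name -> bounding box (x, y, width, height) of the detected ball
for name, box in detections.items():
    print(name, box)
```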
Next, we will draw a bounding box around the ball in each frame that contains it and save the frames back to the folder:
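A sketch of this step, writing the annotated frames back into the frames/ folder so they are picked up when we rebuild the video:

```python
# draw a red box around the detected ball and overwrite the frame on disk
for name, (x, y, w, h) in detections.items():
    path = os.path.join('frames', name)
    frame = cv2.imread(path)
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 2)
    cv2.imwrite(path, frame)
```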
Let’s convert back the frames into a video now:
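Finally, a sketch of stitching all the frames back into a video. The output file name and the 25 fps frame rate are assumptions; ideally the frame rate should match the original video:

```python
# use the size of the first frame for the output video
first = cv2.imread(os.path.join('frames', frame_files[0]))
height, width = first.shape[:2]

fourcc = cv2.VideoWriter_fourcc(*'mp4v')
out = cv2.VideoWriter('ball_tracking.mp4', fourcc, 25, (width, height))  # assumed 25 fps

for name in frame_files:
    out.write(cv2.imread(os.path.join('frames', name)))

out.release()
```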
Output:
How cool is that? Congratulations on building your own ball tracking system for cricket!
That’s it for today! This brings us to the end of the tutorial on ball tracking for cricket. Please keep in mind that we built only a baseline model for the image classification task, so there is still a lot of room for improvement. Also, there are a few hyperparameters in this approach, such as the size of the Gaussian filter and the thresholding value, that must be tuned depending on the type of video.
What are your thoughts on the system we built? Share your ideas and feedback in the comments section below and let’s discuss.