Mediapipe Tasks API and its Implementation in Projects

Tarun R Jain | Last Updated: 30 Aug, 2023
9 min read

Introduction

Deep Learning has revolutionized the field of AI by enabling machines to learn and improve from large amounts of data. Mediapipe, a cross-platform and open-source framework for building multimodal ML pipelines, has introduced a new Tasks API that makes it easier than ever to incorporate Deep Learning models into your projects.

This article will explore three exciting projects built with the Mediapipe Tasks API, each focused on a separate domain: Audio, Image, and Text. Through these examples, you will learn how to apply Deep Learning to solve real-world problems and build cutting-edge applications.

Before we jump into building end-to-end projects, let’s first look at Mediapipe.

Learning Objectives:

In this article, we will:

  1. Understand the basics of the Mediapipe Tasks API.
  2. Install it on a local system.
  3. Build our own object detector.
  4. Implement object detection using OpenCV and Mediapipe.
  5. Implement audio classification using Mediapipe.
  6. Understand text sentiment analysis and implement it using Mediapipe.

This article was published as a part of the Data Science Blogathon.

What is Mediapipe Tasks API?

Mediapipe is an open-source and flexible framework for building multimodal ML pipelines that lets developers create complex processing graphs for Audio, Image, and other sensor data. It provides a set of pre-built components, called calculators, that are easily combined into graphs to create end-to-end ML pipelines.

With the recent release of the Mediapipe Tasks API, developers can now access pre-trained Deep Learning models for various tasks, including Audio, Image, and Text processing. These pre-trained models are trained on large datasets using state-of-the-art techniques and are made available in the “tflite” format, optimized for deployment on a broad range of edge devices such as IoT hardware and Android/iOS phones. The Tasks API provides a simple, consistent interface for using these models, making it easy for developers to integrate Deep Learning into their projects without needing a deep understanding of the underlying models.
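To make this concrete, here is a minimal sketch of the pattern every project below follows, using the object detector as an example and a placeholder model path (the real .tflite files are downloaded in each project):

from mediapipe.tasks import python
from mediapipe.tasks.python import vision

#1. point the task at a .tflite model (placeholder path, not a real download)
base_options = python.BaseOptions(model_asset_path="some_model.tflite")
#2. wrap it in task-specific options
options = vision.ObjectDetectorOptions(base_options=base_options, score_threshold=0.5)
#3. create the task object; detector.detect(...) then runs inference
detector = vision.ObjectDetector.create_from_options(options)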

Installation of Mediapipe

To install Mediapipe on your local system, use pip with a pinned version:

pip install mediapipe==0.9.1

You can also use Google Colab to run the following projects. Run the following commands in a Colab cell:

  • !pip install -q flatbuffers==2.0.0
  • !pip install -q mediapipe==0.9.1
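A quick way to confirm the installation is to import the package and print its version (a minimal check, assuming the standard __version__ attribute):

import mediapipe as mp
print(mp.__version__)  #should print 0.9.1 for the pinned install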

Let’s go ahead and build our first project.

Project 1: Build Your Own Object Detector

Object detection is a Computer Vision technique that involves identifying and locating objects within an image or video. It is a critical task in various applications such as Surveillance, Autonomous Vehicles, and Robotics. In simpler terms, object detection is like finding hidden treasures in a picture or video. Imagine playing a game to locate all the objects hidden in an image. Object detection is like a computer playing a game, but instead of finding the objects for fun, it does it to help us solve real-world problems.

Now in this project, you will:

  1. Understand how Mediapipe Tasks API can simplify the process of object detection by providing pre-trained models and machine learning algorithms.
  2. Understand the significance of the tflite format and how it helps developers deploy models on mobile devices.

Implement Object Detection using OpenCV and Mediapipe

First, we need to import the required libraries.

import cv2
import matplotlib.pyplot as plt

import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

Download the pre-trained model: EfficientDet Lite. You can use any input image in which you want to detect objects.

model = "efficientdet_lite2_uint8.tflite"
img = plt.imread(f"{input_image}")
plt.imshow(img)

"Mediapipe Tasks API
#use Mediapipe Tasks API
base_options = python.BaseOptions(model_asset_path=model)
options = vision.ObjectDetectorOptions(base_options=base_options,score_threshold=0.5)
detector = vision.ObjectDetector.create_from_options(options)

#load the input image with Mediapipe's Image class
image = mp.Image.create_from_file(input_image)
detect_image = detector.detect(image)
image = image.numpy_view().copy()  #copy so the array is writable for drawing

The Mediapipe Tasks API is built around three endpoints:

  • BaseOptions: This line initializes the BaseOptions class with the path to the object detection model in TFLite format, which is ‘efficientdet_lite2_uint8.tflite’ in this case.
  • ObjectDetectorOptions: This line initializes the ObjectDetectorOptions class with the BaseOptions object as a parameter. It also sets the minimum score threshold to 0.5, meaning a bounding box is drawn only when the confidence score exceeds 0.5.
  • ObjectDetector: This line creates an instance of the ObjectDetector class using the ObjectDetectorOptions object as a parameter. The create_from_options method initializes the ObjectDetector with the specified options.

Since we used Mediapipe to read the input image, it needs to be converted into a NumPy array so we can draw the bounding box, label, and confidence score on each detected object.

for detection in detect_image.detections:
    # Insert bounding_box
    bbox = detection.bounding_box
    # the bounding box contains four parameters: 
    #x, y, width and height
    start_point = bbox.origin_x, bbox.origin_y
    end_point = bbox.origin_x + bbox.width, bbox.origin_y + bbox.height
    cv2.rectangle(image, start_point, end_point, (0,255,0), 25)

    # confidence score and detected label
    target = detection.categories[0]
    category_name = target.category_name
    score = round(target.score, 2)
    label = f"{category_name}:{score}"
    loc = (bbox.origin_x+15,bbox.origin_y+25)
    cv2.putText(image, label, loc, cv2.FONT_HERSHEY_DUPLEX,14,(255,0,0),20)
plt.imshow(image)

The provided code detects objects in an image by drawing a bounding box around each detected object and displaying the object label with its confidence score on the image. The OpenCV library handles drawing the bounding boxes and text.
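Note that numpy_view() returns the pixels in RGB order, while OpenCV's file-writing functions expect BGR. If you also want to save the annotated result to disk rather than only display it with Matplotlib, a small sketch (the output filename is just an example):

annotated_bgr = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)  #OpenCV writes images in BGR order
cv2.imwrite("detections.jpg", annotated_bgr)            #example output filename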

"Mediapipe Tasks API

Putting It All Together: Object Detection Using Mediapipe Tasks API

import cv2
import matplotlib.pyplot as plt

import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

input_image = "surface-81OnSSXJo-I-unsplash.jpg"
model = "efficientdet_lite2_uint8.tflite"

#use Mediapipe Tasks API
base_options = python.BaseOptions(model_asset_path=model)
options = vision.ObjectDetectorOptions(base_options=base_options,score_threshold=0.5)
detector = vision.ObjectDetector.create_from_options(options)

#load the input image with Mediapipe's Image class
image = mp.Image.create_from_file(input_image)
detect_image = detector.detect(image)

image = image.numpy_view().copy()  #copy so the array is writable for drawing
for detection in detect_image.detections:
    # Insert bounding_box
    bbox = detection.bounding_box
    # the bounding box contains four parameters: 
    #x, y, width and height
    start_point = bbox.origin_x, bbox.origin_y
    end_point = bbox.origin_x + bbox.width, bbox.origin_y + bbox.height
    cv2.rectangle(image, start_point, end_point, (0,255,0), 25)

    # confidence score and detected label
    target = detection.categories[0]
    category_name = target.category_name
    score = round(target.score, 2)
    label = f"{category_name}:{score}"
    loc = (bbox.origin_x+15,bbox.origin_y+25)
    cv2.putText(image, label, loc, cv2.FONT_HERSHEY_DUPLEX,14,(255,0,0),20)

plt.imshow(image)
plt.axis("off")

Time for the second project.

Project 2: Audio Classification to Detect Speech or Silence

Audio classification involves categorizing audio signals into predefined classes based on their content. It is an important task, as it has numerous applications in Music, speech recognition, and sound monitoring.

Mediapipe tasks API provides a range of ML algorithms for audio classification applications. These algorithms are optimized for processing sequential data and are capable of learning complex patterns in audio signals. Popular algorithms include RNN and CNN, which are capable of processing spectrograms and other time-frequency representations of audio signals.

Implement Audio Classification using Mediapipe

First, import the required libraries. In our example, we will use .wav audio files, so we import wavfile from scipy.io to process the input audio file.

from mediapipe.tasks import python
from mediapipe.tasks.python.components import containers
from mediapipe.tasks.python import audio
from scipy.io import wavfile
import urllib
import numpy as np

YAMNet is the pre-trained model Mediapipe uses here to classify audio signals. We don’t have to worry about converting the time-domain signal to the frequency domain; YAMNet’s preprocessing takes care of that. Download the model: YAMNet TFLite model.

model = "yamnet_audio_classifier_with_metadata.tflite"

#download sample audio file 
audio_file_name = 'speech_16000_hz_mono.wav'
url = f'https://storage.googleapis.com/mediapipe-assets/{audio_file_name}'
urllib.request.urlretrieve(url, audio_file_name)

The sample audio can be played back using the following code:

from IPython.display import Audio, display

file_name = 'speech_16000_hz_mono.wav'
display(Audio(file_name, autoplay=False))

The audio clip is only about 4 seconds long and contains both speech and silent background segments. The goal of the audio classification here is to predict whether the audio contains speech or silence. The process is similar to the previous project.
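If you want to verify the clip length yourself, a quick check (assuming the file was already downloaded by the earlier cell):

from scipy.io import wavfile

sample_rate, wav_data = wavfile.read('speech_16000_hz_mono.wav')
print(f"{len(wav_data) / sample_rate:.2f} seconds at {sample_rate} Hz")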

For AudioClassifierOptions, we pass max_results instead of a score threshold. max_results specifies the maximum number of classification results the classifier should return for each timestamp. Here the value is set to 4, which means the classifier will return up to 4 predictions, ranked in order of their confidence level.

#mediapipe tasks API endpoints
base_options = python.BaseOptions(model_asset_path=model)
options = audio.AudioClassifierOptions(base_options=base_options, max_results=4)
classifier = audio.AudioClassifier.create_from_options(options)

To read the input audio file and build the AudioData object, we use SciPy and Mediapipe's container components.

sample_rate, wav_data = wavfile.read(audio_file_name)
audio_clip = containers.AudioData.create_from_array(wav_data.astype(float) / np.iinfo(np.int16).max, sample_rate)
result = classifier.classify(audio_clip)

Now we loop through a list of timestamps and their corresponding classification results, printing the top classification label and score for each timestamp.

for idx, timestamp in enumerate([0,750,1500,3000,4500]):
    target = result[idx]
    label = target.classifications[0].categories[0]
    print(f'Timestamp {timestamp}: {label.category_name} ({label.score})')
"code output

Putting It All Together: Audio Classification Using Mediapipe Tasks API

from mediapipe.tasks import python
from mediapipe.tasks.python.components import containers
from mediapipe.tasks.python import audio
from scipy.io import wavfile
import urllib
import numpy as np

model = "yamnet_audio_classifier_with_metadata.tflite"

#download a sample audio file from the Mediapipe assets storage bucket
audio_file_name = 'speech_16000_hz_mono.wav'
url = f'https://storage.googleapis.com/mediapipe-assets/{audio_file_name}'
urllib.request.urlretrieve(url, audio_file_name)

base_options = python.BaseOptions(model_asset_path=model)
options = audio.AudioClassifierOptions(base_options=base_options, max_results=4)
classifier = audio.AudioClassifier.create_from_options(options)

sample_rate, wav_data = wavfile.read(audio_file_name)
audio_clip = containers.AudioData.create_from_array(wav_data.astype(float) / np.iinfo(np.int16).max, sample_rate)
result = classifier.classify(audio_clip)

for idx, timestamp in enumerate([0,750,1500,3000,4500]):
    target = result[idx]
    label = target.classifications[0].categories[0]
    print(f'Timestamp {timestamp}: {label.category_name} ({label.score})')
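To run the same classifier on your own recording, a little extra preprocessing is usually needed: collapse stereo to mono and normalize the 16-bit samples, and note that this YAMNet model expects 16 kHz audio, so you may have to resample first. A sketch with a placeholder filename, reusing the classifier created above:

import numpy as np
from scipy.io import wavfile
from mediapipe.tasks.python.components import containers

sample_rate, wav_data = wavfile.read("my_recording.wav")  #placeholder filename
if wav_data.ndim > 1:
    wav_data = wav_data.mean(axis=1)                      #collapse stereo to mono
audio_clip = containers.AudioData.create_from_array(
    wav_data.astype(float) / np.iinfo(np.int16).max,      #assumes 16-bit PCM, as in the example above
    sample_rate)
result = classifier.classify(audio_clip)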

Lastly, let’s complete our final project.

Project 3: Text Sentiment Analysis

Sentiment analysis is a subfield of Natural Language Processing (NLP) that aims to extract information from text, such as opinions, emotions, and attitudes expressed by individuals. The goal of sentiment analysis is to automatically classify the polarity of a piece of text, whether it is positive, negative, or neutral.

In comics, sentiment analysis can extract the emotional tone of a character’s dialogue or a particular scene. With its help, we can automatically classify the sentiment of a piece of dialogue, which helps us understand how the character is feeling and how those emotions may impact the storyline. In this example, we will take two dialogues from DC/Marvel comics and apply sentiment analysis using Mediapipe’s pre-trained BERT model.

Implement Sentiment Analysis using Mediapipe

This program is pretty straightforward. Since this is the third project, you already have an idea of how to use the API endpoints in a Python program. It’s an NLP project, so our input will now be text.

from mediapipe.tasks import python
from mediapipe.tasks.python import text

sample_text1 = "We’ll do our part, dear sister, and let our maker do his!…It’ll work out"
sample_text2 = "Some people are in such utter darkness that they will burn you just to see a light"

Let us define the three Tasks API endpoints. Download the pre-trained transformer model: BERT Text Classifier.

model = "bert_text_classifier.tflite"
base_options = python.BaseOptions(model_asset_path=model)
options = text.TextClassifierOptions(base_options=base_options)
classifier = text.TextClassifier.create_from_options(options)

By this point, the code above should be self-explanatory: we use the same three API endpoints to classify the sentiment label of the text.

for input_text in [sample_text1,sample_text2]:
    sentiment = classifier.classify(input_text)
    label = sentiment.classifications[0].categories[0]
    print(f"{input_text} is:\n {label.category_name}. Score:{label.score}\n")
"Mediapipe Tasks API

Putting It All Together: Text Sentiment Analysis Using Mediapipe Tasks API

from mediapipe.tasks import python
from mediapipe.tasks.python import text

#example text to classify
sample_text1 = "We’ll do our part, dear sister, and let our maker do his!…It’ll work out"
sample_text2 = "Some people are in such utter darkness that they will burn you just to see a light"

#define mediapipe API endpoints
model = "bert_text_classifier.tflite"
base_options = python.BaseOptions(model_asset_path=model)
options = text.TextClassifierOptions(base_options=base_options)
classifier = text.TextClassifier.create_from_options(options)

#get the sentiment
for input_text in [sample_text1,sample_text2]:
    sentiment = classifier.classify(input_text)
    label = sentiment.classifications[0].categories[0]
    print(f"{input_text} is:\n {label.category_name}. Score:{label.score}\n")

Yes, we did it👍

Conclusion

In conclusion, the Mediapipe Tasks API has proven to be a powerful tool for implementing Deep Learning models in real-world projects. The key takeaways are:

  • The Mediapipe Tasks API is a versatile and easy-to-use tool for implementing Deep Learning models in real-world projects. Its pre-trained models are accurate and robust, and it provides a wide range of APIs and tools for data processing and model evaluation.
  • Through building three projects using the Tasks API, we have demonstrated the applicability of Deep Learning in solving problems across various domains. From object detection in images to audio classification and sentiment analysis, Deep Learning has been used for the automation of a wide range of tasks.
  • Pre-trained models are an essential component of Deep Learning projects, as they provide a starting point for training and can save time and resources compared to training a model from scratch.
  • Integrating Mediapipe with other tools and APIs is straightforward.

I hope these examples have inspired you to explore the potential of the Mediapipe Tasks API for your projects.

Frequently Asked Questions

Q1. What is MediaPipe used for?

A. MediaPipe is a framework used for building perception pipelines that process various media inputs, enabling tasks like hand tracking, facial recognition, and pose estimation.

Q2. What is the use of MediaPipe in Python?

A. MediaPipe in Python offers developers tools to create applications that utilize computer vision and machine learning for tasks like object tracking, gesture recognition, and augmented reality experiences.

Q3. What is MediaPipe and how does it work?

A. MediaPipe is a cross-platform framework developed by Google that facilitates the building of real-time media processing pipelines. It employs pre-built components to process input data and extract meaningful insights.

Q4. What is MediaPipe in OpenCV?

A. MediaPipe is not a part of OpenCV. It’s a separate framework developed by Google for real-time media processing tasks, while OpenCV is a widely-used library for computer vision and image processing.

The media shown in this article is not owned by Analytics Vidhya and is used at the Author’s discretion.
