Build your own Computer Vision Model with the Latest TensorFlow Object Detection API Update

Pranav Dar 16 Jul, 2018 • 3 min read


  • The latest version of the popular TensorFlow Object Detection API has been released
  • Updates include support for accelerating the training process via Google's Cloud TPUs, as well as several new pretrained models
  • There are also further improvements to the mobile deployment process making it easier to work with TensorFlow Lite



Computer vision (CV) is one of the hottest research topics in machine learning these days. From self-driving cars to Instagram and Facebook’s object detection technology, it has seen a rapid rise in recent times thanks to advances in hardware.

Object detection is easily one of the most common applications of computer vision. Thanks to ML's wonderful open-source community, object detection has seen a surge of interest as more and more data scientists and ML practitioners line up to break new ground. Keeping up with that trend, Google, one of the leaders in ML (perhaps THE leader in ML), has released the latest version of its popular TensorFlow Object Detection API framework.

Ever since its release last year, the TensorFlow Object Detection API has regularly received updates from the Google team. These updates have included pretrained models trained on datasets like Open Images, among other things. We have seen the community embrace this framework with open arms – detecting objects on a football field, counting pedestrians, finding cracks in streets, etc. There have been all sorts of amazing experiments performed using this API.

Now Google has released the latest, and quite a major, update. The changes were announced in a blog post, and we have summarized the highlights below:

  • Tuning hyperparameters and retraining your computer vision model can be a tedious task if you lack computational power. So this latest update has added support for accelerating the training process of object detection models via Google’s Cloud TPUs
  • Mobile deployment has received some love in this release. The entire process has been improved by making it easier to export a model to mobile using the TensorFlow Lite format
  • Quite a few model architecture definitions have been released, including RetinaNet, a MobileNet adaptation of RetinaNet, and the Pooling Pyramid Network
  • Several pretrained models, based on the COCO dataset, have also been released in this update
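To give a flavor of the improved mobile pathway, the commands below sketch how a trained SSD detection model can be exported to the TensorFlow Lite format using scripts that ship with the Object Detection API. This is a rough illustration, not the blog post's exact recipe: all paths, the config file name, and the checkpoint number are placeholders, and the flag values (input shape, array names) assume a standard SSD MobileNet setup.

```shell
# Hedged sketch of the TFLite export flow; paths and checkpoint
# numbers below are placeholders for your own trained model.

# 1. Freeze the trained detection model into a TFLite-compatible graph
#    (export_tflite_ssd_graph.py ships with the Object Detection API).
python object_detection/export_tflite_ssd_graph.py \
    --pipeline_config_path=path/to/ssd_mobilenet.config \
    --trained_checkpoint_prefix=path/to/model.ckpt-XXXX \
    --output_directory=path/to/exported_model \
    --add_postprocessing_op=true

# 2. Convert the frozen graph into a .tflite flatbuffer with the
#    TensorFlow Lite converter (assumes a 300x300 SSD input).
tflite_convert \
    --graph_def_file=path/to/exported_model/tflite_graph.pb \
    --output_file=path/to/exported_model/detect.tflite \
    --input_arrays=normalized_input_image_tensor \
    --output_arrays='TFLite_Detection_PostProcess' \
    --input_shapes=1,300,300,3 \
    --allow_custom_ops
```

The resulting detect.tflite file can then be bundled into an Android or iOS app and run through the TensorFlow Lite interpreter on-device.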


Our take on this

This is another quality example of Google’s efforts to expand the reach of ML and help newcomers and practitioners get the most out of their work. This will undoubtedly help in advancing the research efforts in computer vision and object detection. They have even released a short tutorial on how to train a model on their Cloud TPUs, which you can check out in their blog post.

On the other side of the story, Google of course benefits as more and more data scientists and ML folks are integrated into the TensorFlow community. It’s a win-win situation for all sides! If you need help getting started with object detection, check out the below guide to get you on your way:

Understanding and Building an Object Detection Model from Scratch in Python

You can also enroll in Analytics Vidhya's soon-to-be-launched 'Computer Vision using Deep Learning' course, which will cover a whole host of topics using real-world case studies. It's a beginner-friendly course and does not assume any familiarity with computer vision.


Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!



Senior Editor at Analytics Vidhya. Data visualization practitioner who loves reading and delving deeper into the data science and machine learning arts. Always looking for new ways to improve processes using ML and AI.

