I have conducted tons of interviews for data science positions in the last couple of years. One thing has stood out – aspiring machine learning professionals don’t focus enough on projects that will make them stand out.
And no, I don’t mean online competitions and hackathons (though that is always a plus point to showcase). I’m talking about off-the-cuff experiments you should do using libraries and frameworks that have just been released. This shows the interviewer two broad things:
And guess which platform has the latest machine learning developments and code? That’s right – GitHub!
So let’s look at the top seven machine learning GitHub projects that were released last month. These projects span the length and breadth of machine learning, including projects related to Natural Language Processing (NLP), Computer Vision, Big Data and more.
This is part of our monthly Machine Learning GitHub series we have been running since January 2018. Here are the links for this year so you can catch up quickly:
I’ll be honest – the power of Natural Language Processing (NLP) blows my mind. I started working in data science a few years back and the sheer scale at which NLP has grown and transformed the way we work with text – it almost defies description.
PyTorch-Transformers is the latest in a long line of state-of-the-art NLP libraries. It has beaten all previous benchmarks in various NLP tasks. What I really like about PyTorch-Transformers is that it contains PyTorch implementations, pretrained model weights and other important components to get you started quickly.
You might have been frustrated previously at the ridiculous amount of computation power required to run state-of-the-art models. I know I was (not everyone has Google’s resources!). PyTorch-Transformers removes that barrier to a large degree and enables folks like us to build state-of-the-art NLP models.
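Here’s a minimal sketch, along the lines of the library’s README, that pulls contextual embeddings out of a pre-trained BERT model (assuming you’ve already run pip install pytorch-transformers):

import torch
from pytorch_transformers import BertTokenizer, BertModel

# Load a pre-trained tokenizer and model (weights are downloaded on first use)
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
model = BertModel.from_pretrained('bert-base-uncased')
model.eval()

# Encode a sentence and extract its contextual embeddings
text = "NLP has transformed the way we work with text."
input_ids = torch.tensor([tokenizer.encode(text)])

with torch.no_grad():
    last_hidden_states = model(input_ids)[0]  # (batch, sequence_length, hidden_size)

print(last_hidden_states.shape)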
Here are a few in-depth articles to get you started with PyTorch-Transformers (and the concept of pre-trained models in NLP):
Multi-label classification on text data is quite a challenge in the real world. We typically work on single label tasks when we’re dealing with early stage NLP problems. The level goes up several notches on real-world data.
In a multi-label classification problem, an instance/record can have multiple labels and the number of labels per instance is not fixed.
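Here’s a tiny scikit-learn sketch (nothing to do with NeuralClassifier itself) just to make the multi-label setup concrete – note how each document carries a set of labels and the size of that set varies:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Toy corpus: every document has a *set* of labels, and the number of labels varies
docs = [
    "the stock market fell sharply today",
    "the team won the championship game",
    "new phone launch boosts tech stocks",
]
labels = [{"finance"}, {"sports"}, {"finance", "technology"}]

# Turn the label sets into a multi-hot matrix
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(labels)

# One binary classifier per label (one-vs-rest) on top of TF-IDF features
X = TfidfVectorizer().fit_transform(docs)
clf = OneVsRestClassifier(LogisticRegression()).fit(X, y)

print(mlb.classes_)        # ['finance' 'sports' 'technology']
print(clf.predict(X[:1]))  # multi-hot prediction for the first document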
NeuralClassifier enables us to quickly implement neural models for hierarchical multi-label classification tasks. What I personally like about NeuralClassifier is that it provides a wide variety of text encoders we are familiar with, such as FastText, RCNN, Transformer encoder and so on.
We can perform the below classification tasks using NeuralClassifier:
Here are two excellent articles to read up on what exactly multi-label classification is and how to perform it in Python:
This TDengine repository received the most stars of any new project on GitHub last month. Close to 10,000 stars in less than a month. Let that sink in for a second.
TDengine is an open-source Big Data platform designed for:
TDengine essentially covers the whole suite of tasks we associate with data engineering. And we get to do all this at super quick speed (10x faster query processing and 1/5th the computational usage).
There’s a caveat (for now) – TDengine only supports execution on Linux. This GitHub repository includes the full documentation and a starter’s guide with code.
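If you want to poke at it from Python, the workflow looks roughly like the sketch below. It uses TDengine’s Python connector (the taos module that ships with the client install) – treat the exact calls and connection parameters as assumptions and check the repository’s documentation:

import taos  # TDengine's Python connector

# Assumed connection details for a local TDengine server
conn = taos.connect(host="127.0.0.1", user="root", password="taosdata")
cursor = conn.cursor()

# Time-series tables in TDengine start with a TIMESTAMP column
cursor.execute("CREATE DATABASE IF NOT EXISTS demo")
cursor.execute("USE demo")
cursor.execute("CREATE TABLE IF NOT EXISTS sensor (ts TIMESTAMP, temperature FLOAT)")
cursor.execute("INSERT INTO sensor VALUES (NOW, 23.5)")

# Query it back like any SQL database
cursor.execute("SELECT * FROM sensor")
for row in cursor.fetchall():
    print(row)

conn.close()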
I suggest checking out our comprehensive resource guide for data engineers:
Have you worked with any image data yet? Computer Vision techniques for manipulating and dealing with images are quite advanced. Object detection for images is considered a basic step to becoming a computer vision expert.
What about videos, though? The difficulty level goes up several notches when we’re asked to simply draw bounding boxes around objects in videos. The dynamic aspect of objects makes the entire concept more complex.
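If you want a feel for the “draw a box and follow it” part of the workflow, here’s a rough OpenCV sketch (assuming opencv-contrib-python is installed; the file name and tracker choice are just placeholders). Tracking a box is the easy bit – actually removing the object is the hard part:

import cv2

# Open a video file (placeholder path) and grab the first frame
cap = cv2.VideoCapture("input.mp4")
ok, frame = cap.read()

# Let the user draw a bounding box around the object on the first frame
box = cv2.selectROI("Draw a box around the object", frame, showCrosshair=False)

# Initialise a simple tracker on that box (KCF ships with opencv-contrib)
tracker = cv2.TrackerKCF_create()
tracker.init(frame, box)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    found, box = tracker.update(frame)
    if found:
        x, y, w, h = [int(v) for v in box]
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("Tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:  # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()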
So, imagine my delight when I came across this GitHub repository. We just need to draw a bounding box around the object in the video to remove it. It really is that easy! Here are a couple of examples of how this project works:
If you’re new to the world of computer vision, here are a few resources to get you up and running:
You’ll love this machine learning GitHub project. As data scientists, our entire role revolves around experimenting with algorithms (well, most of us). This project is about how a simple LSTM model can autocomplete Python code.
The code highlighted in grey below is what the LSTM model filled in (and the results are at the bottom of the image):
As the developers put it:
We train and predict on after cleaning comments, strings and blank lines in the python code. The model is trained after tokenizing python code. It seems more efficient than character level prediction with byte-pair encoding.
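To make the token-level idea concrete, here’s a minimal, hypothetical PyTorch sketch of an LSTM next-token model – an illustration of the general approach, not the project’s actual code:

import torch
import torch.nn as nn

class NextTokenLSTM(nn.Module):
    """Tiny LSTM language model: embed tokens, run an LSTM, predict the next token."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids):
        out, _ = self.lstm(self.embed(token_ids))
        return self.head(out)  # next-token logits for every position

# Hypothetical setup: the token ids would come from tokenizing cleaned Python source
vocab_size = 10000
model = NextTokenLSTM(vocab_size)
batch = torch.randint(0, vocab_size, (8, 50))  # 8 sequences of 50 token ids
logits = model(batch[:, :-1])                  # predict token t+1 from tokens up to t
loss = nn.CrossEntropyLoss()(logits.reshape(-1, vocab_size), batch[:, 1:].reshape(-1))
loss.backward()
print(loss.item())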
If you’ve ever spent (wasted) time on writing out mundane Python lines, this might be exactly what you’re looking for. It’s still in the very early stages so be open to a few issues.
And if you’re wondering what in the world LSTM is, you should read this introductory article:
TensorFlow and PyTorch both have strong user communities. But the incredible adoption rate of PyTorch should see it leapfrog TensorFlow in the next year or two. Note: This isn’t a knock on TensorFlow, which is pretty solid.
So if you have written code in TensorFlow and separate code in PyTorch and want to combine the two to train a model – the tfpyth framework is for you. The best part about tfpyth is that we don’t need to rewrite the earlier code.
This GitHub repository includes a well structured example of how you can use tfpyth. It’s definitely a refreshing look at the TensorFlow vs. PyTorch debate, isn’t it?
Installing tfpyth is this easy:
pip install tfpyth
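And here’s roughly what usage looks like, adapted from the example in the repository’s README – it assumes TensorFlow 1.x sessions, so treat it as a sketch and check the repo for the current API:

import tensorflow as tf
import torch
import tfpyth

session = tf.Session()

def get_torch_function():
    # Define a small TensorFlow graph: c = 3a + 4b^2
    a = tf.placeholder(tf.float32, name='a')
    b = tf.placeholder(tf.float32, name='b')
    c = 3 * a + 4 * b * b

    # Wrap the graph as a differentiable function callable from PyTorch
    f = tfpyth.torch_from_tensorflow(session, [a, b], c).apply
    return f

f = get_torch_function()
a = torch.tensor(1, dtype=torch.float32, requires_grad=True)
b = torch.tensor(3, dtype=torch.float32, requires_grad=True)
x = f(a, b)

x.backward()
print(a.grad, b.grad)  # gradients flow back through the TensorFlow graph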
Here are a couple of in-depth articles to learn how TensorFlow and PyTorch work:
I associate transfer learning with NLP. That’s my fault – I was so absorbed with the new developments there that I didn’t imagine where else transfer learning could be applied. So I was thrilled when I came across this wonderful MedicalNet project.
This GitHub repository contains a PyTorch implementation of the ‘Med3D: Transfer Learning for 3D Medical Image Analysis’ paper. This machine learning project aggregates medical datasets with diverse modalities, target organs, and pathologies to build relatively large datasets.
And as we well know, our deep learning models (usually) require a large amount of training data. So MedicalNet, released by Tencent, is a brilliant open-source project I hope a lot of folks work on.
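If you want to see what the general transfer learning recipe looks like in code, here’s a minimal PyTorch sketch using torchvision’s ResNet-18 as a stand-in – this is not MedicalNet’s actual API, just the underlying idea of reusing pretrained weights and retraining a small task-specific head:

import torch.nn as nn
import torchvision.models as models

# Start from a backbone pretrained on a large dataset (ImageNet here)
backbone = models.resnet18(pretrained=True)

# Freeze the pretrained weights so only the new head gets trained
for param in backbone.parameters():
    param.requires_grad = False

# Replace the final classification layer for our own task (say, 2 classes)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# Only the new head's parameters will be updated by the optimiser
trainable = [p for p in backbone.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable), "trainable parameters")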
The developers behind MedicalNet have released four pretrained models based on 23 datasets. And here is an intuitive introduction to transfer learning if you needed one:
Quite a mix of machine learning projects we have here. I have provided tutorials, guides and resources after each GitHub project.
I have one ask – pick the project that interests you, go through the tutorial, and then apply that particular library to solve the problem. For example, you could take up the NeuralClassifier repository and use that to solve a multi-label classification problem.
This will help you broaden your understanding of the topic and expand your current skillset. A win-win scenario! Let me know your thoughts, your favorite project from the above list, and your feedback in the comments section below.
Excellent
Thanks, Kapil.
Hello Pranav. First, I am very delighted to have joined you on this fantastic platform. Actually, I am working on EEG and machine learning for psychological signal analysis, and I want to use it in a wearable for people who are mentally unstable or facing seizures. So can you please help me out and provide some EEG datasets and the best possible machine learning algorithm for the analysis? Please share it here or at my email [email protected]. Best regards
The best article I came across this week.
Thanks Naman, glad you liked it!
Thanks for sharing this Pranav!
You're welcome, Mayursinh.
Great Projects.
Sir, can you please provide NLP tutorials?
Hi Venu, I encourage you to check out the NLP category which has a whole host of tutorials: https://www.analyticsvidhya.com/blog/category/nlp/ You can also browse through the below two courses on NLP that will teach you the basics as well as the advanced topics: https://courses.analyticsvidhya.com/courses/Intro-to-NLP https://courses.analyticsvidhya.com/courses/natural-language-processing-nlp
Pranav, I am confused about how to start a data science course because there are lots of tools and topics, but I don't know the exact order. If you have already worked through this, please explain here.
Hi Shafak, I'm not sure at which stage of your data science journey you are on, but I would recommend checking out the free learning path we have for aspiring data scientists: https://courses.analyticsvidhya.com/courses/a-comprehensive-learning-path-to-become-a-data-scientist-in-2019 It is an ideal starting point and will guide you on how to approach what seems like a daunting task.
Hi Pranav, the title mentions 'Video Object Removal' but the content is related to the RCNN algorithm family and a YOLO implementation. Did I miss anything? I have not found any material/code related to object removal from video. Thanks.
Hi Srikanth, The GitHub repository links are in the heading names - so the 'Video Object Removal' project is here: https://github.com/zllrunning/video-object-removal.
Sir, it has been highly helpful for many people. 😊
Thanks, Punam - glad you found it useful. :)
Wonderful Pranav, Great efforts
Thanks, Kamal!
Hello sir, I want to do a PhD in computer science in the machine learning area... please give some suggestions and ideas... I plan to apply machine learning algorithms to biometrics... so please kindly give some information regarding this.
Hi Sridevi, Can you let me know what exactly you would want ideas on with regards to Ph.D in machine learning? If you could be specific that would help me understand the ask.
Great article, Pranav!! A concise and informative article touching on some interesting projects in the AI/ML space.
Thanks for reading it, Suvajit!
Hi, Sridevi, I am quite new to this and am trying to learn. I have coded your image recognition program listed below.

#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Tue Jul 16 11:27:17 2019

@author: Artebuz
"""
import numpy as np
import matplotlib.pyplot as plt
import torch
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0,5))])

trainset = torchvision.datasets.CIFAR10(root=' ./data', train=True, download=True, transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4, shuffle=True, num_workers=2)

testset = torchvision.datasets.CIFAR10(root='./data', train=False, download=True, transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4, shuffle=False, num_workers=2)

classes = ('plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck')

# Functions to show an image
def imshow(img):
    img = img / 2 + 0.5  # unnormalize image
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))
    plt.show()

# get some random training images
dataiter = iter(trainloader)
images, labels = dataiter.next()

# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(4)))

But, I am getting the following error:

RuntimeError: The size of tensor a (3) must match the size of tensor b (4) at non-singleton dimension 0

I have tried to understand where the error lies but am unable to find it. Can you help me understand what is wrong with the code?