Deep learning has been the most researched and talked-about topic in data science recently, and it deserves the attention it gets: many of the recent breakthroughs in data science have come from deep learning. It is predicted that many deep learning applications will affect your life in the near future. Actually, I think they are already making an impact.
However, if you have been looking at deep learning from the outside, it might look difficult and intimidating. Terms like TensorFlow, Keras, and GPU-based computing might scare you. But let me tell you a secret quietly – it is not that difficult! While cutting-edge deep learning takes time and effort to follow, applying it to simple day-to-day problems is very easy.
It is also fun. I kind of rediscovered the fun and curiosity of a child while applying deep learning. Through this article, I will showcase 6 such applications which might look difficult at the outset, but can be implemented with deep learning in less than an hour. This article is written to showcase these ground-breaking works and give you a taste of how they work.
Let’s start!
P.S. We assume that you know the basics of Python. If not, just follow this tutorial first and then come back here.
An API is nothing but software running on a remote machine on the other side of the internet, which you can access locally. Think of it like connecting Bluetooth speakers to your laptop: even though your machine has built-in speakers, you can use the external speaker while sitting at your laptop.
APIs work on a similar concept – someone has already done the hard work for you, and you can use it to solve the problem at hand quickly. For more details on APIs, read here.
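To make the idea concrete, here is a minimal sketch of what calling an API from Python typically looks like: you send a request to a remote service and get the result back, without running the heavy model yourself. The URL and parameters below are hypothetical and only illustrate the pattern.

import requests

# Hypothetical endpoint that colorizes an image hosted at a URL
response = requests.get(
    "https://api.example.com/colorize",
    params={"image_url": "http://example.com/photo.jpg"},
)
print(response.status_code)
print(response.json())  # the service's answer, typically returned as JSON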
I'll list out some advantages and disadvantages of building apps using APIs.
If you want to know more about what APIs are, check out this blog.
Let’s start with our applications!
Automated image colorization has been a topic of interest in the computer vision community. It seems surreal to get a colorful photo out of a black and white image. Imagine a 4-year-old picking up a crayon and getting engrossed in a coloring book! Could we teach an artificial agent to do the same and just “imagine” the colors?
Of course, this is a hard problem! We as humans get “trained” each and every day by seeing how things are colored in real life. We might not notice it, but our brains capture every moment of our lives and extract meaningful information from it, such as the sky being blue and grass being green. This is hard to model in an artificial agent.
A recent study shows that if we train a neural network on a large, specially prepared dataset, we can essentially get a model that “hallucinates” colors in a grayscale image. Here’s a demonstration of an image colorizer:
To practically implement this, we use an API developed by Algorithmia.
Requirements and Specifications:
Step 1: Register on Algorithmia and get your own API key. You can find your API key in your profile.
Step 2: Install the Algorithmia Python client by typing:
pip install algorithmia
Step 3: Select a photo you want to colorize and upload it to the data folder provided by Algorithmia.
Step 4: Create a file locally and name it trial1.py. Open it and write the code below. Notice that you have to insert the location of your image in the data folder and your own API key.
import Algorithmia

input = {
    "image": "data:// … "  # set the location of your own image
}
client = Algorithmia.client('…')  # insert your own API key
algo = client.algo('deeplearning/ColorfulImageColorization/1.1.5')
print(algo.pipe(input))
Step 5: Open a command prompt and run your code by typing “python trial1.py”. The resulting output will be automatically saved in your data folder. Here’s what I got:
That is it – you have just created a simple application which acts as a child and can fill in colors in images! Exciting stuff.
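If you want to take it one step further, here is a small, hedged extension of the same script that colorizes several images in one go by calling the algorithm once per file. The data:// paths and the API key below are placeholders; replace them with files you have actually uploaded to your own Algorithmia data folder.

import Algorithmia

client = Algorithmia.client('YOUR_API_KEY')  # insert your own API key
algo = client.algo('deeplearning/ColorfulImageColorization/1.1.5')

# Hypothetical paths to images in your own data folder
images = [
    "data://.my/colorization/photo1.jpg",
    "data://.my/colorization/photo2.jpg",
]
for image in images:
    result = algo.pipe({"image": image})
    print(result)  # each response points to the colorized output saved in your data folder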
Watson is a great example of what an artificial agent can achieve. You may have heard the story of Watson beating humans at a question-and-answer game. Although Watson uses an ensemble of many techniques, deep learning is still a core part of its learning process, especially in natural language processing. Here we will use one of the many applications of Watson to build a conversation service, a.k.a. a chatbot. A chatbot is an agent that responds to common questions as a human would. It can be an excellent point of contact for customers and respond to them in a timely manner.
Here’s a demonstration of the platform:
Requirements and Specifications:
Let’s see a step-by-step example of how to build a simple chatbot with Watson.
Step 1: Register on Bluemix and activate your conversation service to get your credentials.
Step 2: Open a terminal and run the commands below:
pip install requests responses
pip install --upgrade watson-developer-cloud
Step 3: Make a file trial.py and copy the following code in it. Remember to put your own credentials in it.
import json
from watson_developer_cloud import ConversationV1

conversation = ConversationV1(
    username='YOUR SERVICE USERNAME',
    password='YOUR SERVICE PASSWORD',
    version='2016-09-20')

# replace with your own workspace_id
workspace_id = 'YOUR WORKSPACE ID'

response = conversation.message(workspace_id=workspace_id,
                                message_input={'text': 'What\'s the weather like?'})
print(json.dumps(response, indent=2))
Step 4: Save your file and run it by typing “python trial.py” in the console. You will get an output in the console, which is Watson’s response to your input.
Input: Show me what’s nearby
Output: I understand you want me to locate an amenity. I can find restaurants, gas stations and restrooms nearby.
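To turn this single call into something that feels more like a chat, here is a hedged sketch of a small interactive loop. It assumes the client's message() method also accepts a context argument (as the watson-developer-cloud library documented at the time) so that Watson can carry the dialog state from one turn to the next; check the library's documentation if your version differs.

import json
from watson_developer_cloud import ConversationV1

conversation = ConversationV1(
    username='YOUR SERVICE USERNAME',
    password='YOUR SERVICE PASSWORD',
    version='2016-09-20')
workspace_id = 'YOUR WORKSPACE ID'

context = None
while True:
    text = input('You: ')  # type 'quit' to stop
    if text == 'quit':
        break
    response = conversation.message(
        workspace_id=workspace_id,
        message_input={'text': text},
        context=context)
    context = response.get('context')  # carry the dialog state into the next turn
    for line in response['output']['text']:
        print('Watson:', line)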
If you want to build a full-fledged conversation service project with an animated car dashboard (as shown in the gif above), view this GitHub repository.
A chatbot and an application to color images in under a few minutes – not bad 🙂
Sometimes we want to see only the good in the world. How cool would it be to filter out all the bad news when reading a newspaper and only see “good” news!
With advanced natural language processing techniques (one of which is deep learning), this is becoming increasingly possible. You can now filter out news by sentiment and present it to the readers.
We will see an application of this using Aylien’s News API. Below are the screenshots of the demo. You can build your own custom query and check out the results for yourself.
Let’s see an implementation of this in Python.
Requirements and specifications:
Step 1: Register for an account on Aylien website.
Step 2: Get the API key and App ID from your profile when you log in.
Step 3: Install the Aylien News API by opening your terminal and typing:
pip install aylien_news_api
Step 4: Create a file “trial.py” and copy the following code into it:
import aylien_news_api
from aylien_news_api.rest import ApiException

# Configure API key authorization: app_id
aylien_news_api.configuration.api_key['X-AYLIEN-NewsAPI-Application-ID'] = '3f3660e6'
# Configure API key authorization: app_key
aylien_news_api.configuration.api_key['X-AYLIEN-NewsAPI-Application-Key'] = 'ecd21528850dc3e75a47f53960c839b0'

# create an instance of the API class
api_instance = aylien_news_api.DefaultApi()

opts = {
    'title': 'trump',
    'sort_by': 'social_shares_count.facebook',
    'language': ['en'],
    'published_at_start': 'NOW-7DAYS',
    'published_at_end': 'NOW',
    'entities_body_links_dbpedia': [
        'http://dbpedia.org/resource/Donald_Trump',
        'http://dbpedia.org/resource/Hillary_Rodham_Clinton'
    ]
}

try:
    # List stories
    api_response = api_instance.list_stories(**opts)
    print(api_response)
except ApiException as e:
    print("Exception when calling DefaultApi->list_stories: %s\n" % e)
Step 5: Save the file and run it by typing “python trial.py”. The output will be a JSON dump as follows:
{'clusters': [], 'next_page_cursor': 'AoJbuB0uU3RvcnkgMzQwNzE5NTc=', 'stories': [{'author': {'avatar_url': None, 'id': 56374, 'name': ''}, 'body': 'President Donald Trump agreed to meet alliance leaders in Europe in May in a phone call on Sunday with NATO Secretary General Jens Stoltenberg that also touched on the separatist conflict in eastern Ukraine, the White House said.', 'categories': [{'confident': True, 'id': 'IAB20-13', 'level': 2, 'links': {'_self': 'https://api.aylien.com/api/v1/classify/taxonomy/iab-qag/IAB20-13', 'parent': 'https://api.aylien.com/api/v1/classify/taxonomy/iab-qag/IAB20'}, 'score': 0.3734071532595844, 'taxonomy': 'iab-qag'}, {'confident': False, 'id': 'IAB11-3', 'level': 2, 'links': {'_self': 'https://api.aylien.com/api/v1/classify/taxonomy/iab-qag/IAB11-3', 'parent': 'https://api.aylien.com/api/v1/classify/taxonomy/iab-qag/IAB11'}, 'score': 0.2898707860282879, 'taxonomy': 'iab-qag'}, {'confident': False, 'id': 'IAB10-5', 'level': 2, 'links': {'_self': 'https://api.aylien.com/api/v1/classify/taxonomy/iab-qag/IAB10-5', 'parent': 'https://api.aylien.com/api/v1/classify/taxonomy/iab-qag/IAB10'}, 'score': 0.24747867463774773, 'taxonomy': 'iab-qag'}, {'confident': False, 'id': 'IAB25-5', 'level': 2, 'links': {'_self': 'https://api.aylien.com/api/v1/classify/taxonomy/iab-qag/IAB25-5', 'parent': 'https://api.aylien.com/api/v1/classify/taxonomy/iab-qag/IAB25'}, 'score': 0.22760056625597547, 'taxonomy': 'iab-qag'}, {'confident': False, 'id': 'IAB20', 'level': 1, 'links': {'_self': 'https://api.aylien.com/api/v1/classify/taxonomy/iab-qag/IAB20', 'parent': None}, 'score': 0.07238470020202414, 'taxonomy': 'iab-qag'}, {'confident': False, 'id': 'IAB10', 'level': 1, 'links': {'_self': 'https://api.aylien.com/api/v1/classify/taxonomy/iab-qag/IAB10', 'parent': None}, 'score': 0.06574918306158796, 'taxonomy': 'iab-qag'}, {'confident': False, 'id': 'IAB25', ...
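Since the goal was to filter news by sentiment, here is a hedged follow-up sketch that asks the API for positive stories only and prints just their titles. It assumes the client exposes the sentiment filter as the sentiment_body_polarity keyword and returns stories under api_response.stories, in line with the News API documentation; check the official reference if these names differ in your client version.

import aylien_news_api
from aylien_news_api.rest import ApiException

# Configure your own Application ID and Key, exactly as in the script above
aylien_news_api.configuration.api_key['X-AYLIEN-NewsAPI-Application-ID'] = 'YOUR_APP_ID'
aylien_news_api.configuration.api_key['X-AYLIEN-NewsAPI-Application-Key'] = 'YOUR_APP_KEY'
api_instance = aylien_news_api.DefaultApi()

opts = {
    'title': 'trump',
    'language': ['en'],
    'published_at_start': 'NOW-7DAYS',
    'published_at_end': 'NOW',
    'sentiment_body_polarity': 'positive',  # keep only the "good" news
}

try:
    api_response = api_instance.list_stories(**opts)
    for story in api_response.stories:
        print(story.title)
except ApiException as e:
    print("Exception when calling DefaultApi->list_stories: %s\n" % e)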
Woah! I can already visualize a chatbot at your service and an Alexa-like assistant reading you news that matches your interests! I am sure you are as excited about deep learning by now as I am!
The best thing helping the research community right now is its open source mindset. Researchers readily share whatever they achieve so that deep learning research keeps growing, and as a result, it is growing by leaps and bounds! Here I mention some of the open source contributions, and their variants, which have been created from research papers.
Note: For the open source applications, I recommend you go through their official repositories. Some of them are still in their infancy and may break for unknown reasons.
Let’s look at some open source applications!
Systems nowadays can easily detect and correct spelling mistakes, but correcting a grammatical error is a bit harder. To improve on this, we can use deep learning to correct such sentences for us. This repository is an attempt at exactly that.
Here, a sequence-to-sequence neural network was trained on a corpus of grammatically incorrect sentences along with their corrected counterparts. The trained model shows promising results for sentence correction. Here’s an example:
Input: ‘Kvothe went to market’
Output: ‘Kvothe went to the market’
You can check out a demo on the site: http://atpaino.com/dtc.html
The model still fails to correct all the sentences, but with more training data and efficient deep learning algorithms, the results could be improved.
Requirements:
Step 1: Install TensorFlow from its official website. Also, download the repository from https://github.com/atpaino/deep-text-corrector and save it locally.
Step 2: Download the dataset (Cornell Movie-Dialogs Corpus) and extract it into your working directory.
Step 3: Create the training data by running the command
python preprocessors/preprocess_movie_dialogs.py --raw_data movie_lines.txt \
    --out_file preprocessed_movie_lines.txt
Then create train, validation, and test files from the preprocessed data and save them in the current working directory.
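Here is a hedged sketch of one way to create those splits, using a simple 80/10/10 split of the preprocessed file. The output file names are chosen to match the ones used by the training command in the next step.

import random

# Read the preprocessed dialog lines produced in the previous step
with open('preprocessed_movie_lines.txt') as f:
    lines = f.readlines()

random.seed(42)
random.shuffle(lines)

# Simple 80/10/10 split into train, validation, and test sets
n = len(lines)
splits = {
    'movie_dialog_train.txt': lines[:int(0.8 * n)],
    'movie_dialog_val.txt': lines[int(0.8 * n):int(0.9 * n)],
    'movie_dialog_test.txt': lines[int(0.9 * n):],
}
for filename, subset in splits.items():
    with open(filename, 'w') as f:
        f.writelines(subset)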
Step 4: Now train the deep learning model by:
python correct_text.py --train_path /movie_dialog_train.txt \
    --val_path /movie_dialog_val.txt \
    --config DefaultMovieDialogConfig \
    --data_reader_type MovieDialogReader \
    --model_path /movie_dialog_model
Step 5: The model requires some time to train. After training, you can test it by:
python correct_text.py --test_path /movie_dialog_test.txt \
    --config DefaultMovieDialogConfig \
    --data_reader_type MovieDialogReader \
    --model_path /movie_dialog_model \
    --decode
Before I say anything about the application, just observe the following results:
Here the first image is converted into the second by a deep learning model! This is really a fun application that shows what deep learning can do. At its core, the application uses a GAN (generative adversarial network), a type of deep learning model that is capable of generating new examples on its own.
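For intuition, here is a conceptual sketch of the GAN setup, not the deep-makeover code itself: a generator that turns random noise into images, and a discriminator that scores how real an image looks. During training the two are updated in alternation, so the generator gradually learns to produce samples the discriminator can no longer tell apart from real ones. The layer sizes below are arbitrary illustrative choices.

from tensorflow.keras import layers, models

# Generator: maps a random noise vector to a small image
generator = models.Sequential([
    layers.Input(shape=(100,)),
    layers.Dense(128, activation='relu'),
    layers.Dense(64 * 64 * 3, activation='sigmoid'),
    layers.Reshape((64, 64, 3)),
])

# Discriminator: outputs the probability that an input image is real
discriminator = models.Sequential([
    layers.Input(shape=(64, 64, 3)),
    layers.Flatten(),
    layers.Dense(128, activation='relu'),
    layers.Dense(1, activation='sigmoid'),
])
discriminator.compile(optimizer='adam', loss='binary_crossentropy')
# Training alternates between teaching the discriminator to separate real from
# generated images and teaching the generator to fool the discriminator.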
Requirements:
Just a warning before you implement this: training the model takes a very long time if you are not using a GPU. Even with a high-end GPU (an Nvidia GeForce GTX 1080), training takes about 2 hours for one image.
Step 1: Download the repository from https://github.com/david-gpu/deep-makeover and extract it locally.
Step 2: Download the “Align&Cropped Images” from the CelebA dataset. Create a folder named “dataset” and extract all the images into it.
Step 3: Train the model by:
python3 dm_main.py --run train
and then test it by passing the image you want to convert
python3 dm_main.py --run inference image.jpg
You may have played Flappy Bird sometime in the past. For those who don’t know, it was an extremely addictive Android game in which the aim was to keep the bird flying in the air by avoiding obstacles.
In this application, a Flappy Bird bot is created using deep reinforcement learning (specifically, a deep Q-network). Here’s a demo of a trained bot.
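As intuition for what such a bot learns, here is a conceptual sketch of the deep Q-learning idea (not the repository's code): the agent picks actions epsilon-greedily from a Q-network's value estimates, and those estimates are trained toward the one-step target r + gamma * max Q(s', a').

import random
import numpy as np

GAMMA = 0.99    # discount factor for future reward
EPSILON = 0.1   # exploration rate

def choose_action(q_values):
    """Epsilon-greedy selection over the network's Q-value estimates (flap / do nothing)."""
    if random.random() < EPSILON:
        return random.randrange(len(q_values))  # explore: take a random action
    return int(np.argmax(q_values))             # exploit: take the best-known action

def q_target(reward, next_q_values, terminal):
    """One-step bootstrapped target the Q-network is trained to predict for the chosen action."""
    if terminal:
        return reward  # no future reward once the bird crashes
    return reward + GAMMA * float(np.max(next_q_values))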
Requirements:
Implementing this is easy, as most of the nuts and bolts are included.
Step 1: Download the official repository.
Step 2: Make sure you have all the dependencies installed. Once you do, run the command below:
python deep_q_network.py
We have just scratched the surface of what a deep learning model is capable of. Many research papers are released every day, giving rise to many such applications. Now it’s a matter of who thinks of the idea first!
I’ll list out some of the links and resources which I found worth looking at:
I hope you had fun reading this article. I bet these applications would have blown your mind. Some of you might already be aware of these applications and some of you might not be. If you have worked on any of these applications, share your experience with us. The other readers and I would definitely want to know about it.
If you have come across these applications for the first time, then let me know which one excited you the most. Share your suggestions / feedback with us in the comments section below.
Stay ahead this year with complete learning path for Deep Learning
Great post! I will suggest another one "Deep Dream".
Thanks for the suggestion Pinaki! Actually, Deep Dream was one of the first applications that got me interested in deep learning.
Great article. Also, new-generation platforms are coming up which allow building deep learning models using drag-and-drop GUIs. One such example is deepcognition.ai. It could be very useful for beginners.
Looks interesting! Thanks for sharing
Nicely done, Faizan.
Thanks Sunil!
Congrats for sharing such useful information. God bless you.
Thank you so much!
I am not able to find the workspace ID. Where can I find it?
I am able to run the code, but the output is empty?
Thank you so much for this helpful information :) But why don't the developers just put the trained model in the app itself, so that the user can use the app without needing an internet connection?
The trained models are a bit compute heavy, and smaller systems like mobiles can't handle them efficiently. So APIs are preferable over local systems.
Hi Faizan, fantastic, intriguing piece. What exactly is Deep Dream? I would love to read more of your research and articles... Thanks
Hey Shabbir - you can read more of my articles on Analytics Vidhya itself
Can you post a similar article for R users?
Thanks for the suggestion
In the algorithmia project, I am getting this error -
Traceback (most recent call last):
  File "/home/subhasis/PycharmProjects/Algorithmia/Algorithmia.py", line 1, in <module>
    import Algorithmia
  File "/home/subhasis/PycharmProjects/Algorithmia/Algorithmia.py", line 5, in <module>
    client = Algorithmia.client('simkAgIKwJ+4+AtsVvKJdkAFafw1')
AttributeError: 'module' object has no attribute 'client'
Any idea why?
Hey - you should report this error in their official forum
Great article. Thank you for the valuable information. Deep learning applications also exist in the fields of automatic text generation, handwriting recognition, automatic machine translation, and more.
Nicely Explained