Google is Making Music with Machine Learning – and Has Released the Code on GitHub

Pranav Dar 14 Mar, 2018 • 2 min read

Overview

  • Magenta, Google’s AI research project, has developed a deep neural network to generate sound
  • To showcase the algorithm, it has also built an open source hardware instrument called NSynth Super
  • The instrument has been built using open source libraries like TensorFlow and openFrameworks
  • The entire code is on GitHub so you can make your own instrument from scratch
  • Check out the videos below for more details


Introduction

The field of audio processing has attracted plenty of interest with the rise of deep learning. But what if you work in the music industry and are stuck with musician’s block (along the lines of writer’s block)? You have a few initial ideas, but the music just isn’t flowing.

Google has an answer for that as well.

Magenta, Google’s research project that explores how AI can aid human creativity, has developed an instrument it calls NSynth Super. It is built on NSynth, an algorithm that uses a deep neural network to generate sound, which Google released a few months ago.

Rather than generating musical notes, NSynth models the sound of an instrument itself. What makes the algorithm unique is that it learns the core qualities that make up an individual sound and can combine different sounds to generate something completely new.
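To make that idea concrete, here is a minimal sketch of the encode–blend–synthesize workflow NSynth enables. The module paths, function signatures (utils.load_audio, fastgen.encode, fastgen.synthesize), checkpoint path and audio file names are assumptions based on Magenta’s NSynth code and may differ between releases:

```python
# A hypothetical sketch of the encode -> blend -> synthesize idea behind NSynth.
# Module paths, function signatures, the checkpoint path and the .wav file names
# are assumptions based on Magenta's NSynth code and may differ between releases.
from magenta.models.nsynth import utils
from magenta.models.nsynth.wavenet import fastgen

CKPT = 'wavenet-ckpt/model.ckpt-200000'  # pretrained NSynth WaveNet checkpoint (assumed path)
SAMPLE_LENGTH = 32000                    # roughly 2 seconds of audio at 16 kHz

# Load two source notes (placeholder file names).
flute = utils.load_audio('flute.wav', sample_length=SAMPLE_LENGTH, sr=16000)
sitar = utils.load_audio('sitar.wav', sample_length=SAMPLE_LENGTH, sr=16000)

# Encode each note into NSynth's learned temporal embedding.
enc_flute = fastgen.encode(flute, CKPT, SAMPLE_LENGTH)
enc_sitar = fastgen.encode(sitar, CKPT, SAMPLE_LENGTH)

# Blend the two embeddings 50/50 -- the same idea NSynth Super exposes on its touch pad.
blend = 0.5 * enc_flute + 0.5 * enc_sitar

# Decode the blended embedding back into a brand new sound.
fastgen.synthesize(blend, save_paths=['flute_sitar_blend.wav'],
                   checkpoint_path=CKPT, samples_per_save=SAMPLE_LENGTH)
```

Changing the 0.5/0.5 weights moves you around the space between the two source sounds, which is essentially what the instrument’s touch interface lets you do with a finger.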

NSynth Super is an open source experimental instrument. It gives musicians (and deep learning enthusiasts) the ability to explore completely new sounds generated by the NSynth machine learning algorithm. It has been built using open source libraries like TensorFlow and openFrameworks.

The instrument can be played via any MIDI source, like a keyboard or a sequencer.
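As an illustration, here is a short sketch of how you might trigger notes on a MIDI-capable instrument like NSynth Super from Python using the mido library. The port name is a hypothetical placeholder, so list your actual ports with mido.get_output_names() first:

```python
# A minimal sketch of playing a MIDI-capable instrument such as NSynth Super
# from Python with the mido library. The port name below is a hypothetical
# placeholder -- call mido.get_output_names() to see what is actually connected.
import time
import mido

PORT_NAME = 'NSynth Super MIDI 1'  # assumed port name, replace with your own

with mido.open_output(PORT_NAME) as port:
    for note in (60, 64, 67):  # a simple C major arpeggio
        port.send(mido.Message('note_on', note=note, velocity=100))
        time.sleep(0.5)
        port.send(mido.Message('note_off', note=note))
```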

You can put together your own NSynth Super by following the step-by-step code and instructions Google has provided in its GitHub repository.

For a deep dive into the algorithm and the dataset behind NSynth, head over to Google’s blog here.

Check out Google’s video of NSynth below:

Also check out how the NSynth Super instrument works below:


Our take on this

The directions Google provides on its GitHub page are detailed and will walk you through creating the instrument step by step. You (probably) won’t be able to make one as gorgeous as Google’s, but you will be able to generate crazy sound sequences to get you started.

This is a goldmine for deep learning enthusiasts interested in audio processing. Go ahead, check out the code, try it for yourself and build your own instrument!


Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!




Responses From Readers


ANUPAMA R . 15 Mar, 2018

Classic! On a lighter note, apart from being in DS, one should be a music enthusiast to crack this code! Domain expertise matters here!