Most neural network models are huge and computationally expensive, which means they consume a lot of energy and are impractical for handheld devices. That is why most smartphone applications that rely on neural networks (such as speech and facial recognition) simply upload their data to cloud servers, where it is processed before the result is sent back to the app.
MIT researchers have come up with a special chip that increases the speed of neural network computations by three to seven times over its predecessors. More impressively, it is claimed to reduce power consumption by 94-95 percent. This development will make it far easier and more practical to run neural networks directly in smartphone apps. The chip could even be embedded in household appliances like refrigerators and blenders.
The development of this chip was led by Avishek Biswas, an MIT graduate student in electrical engineering and computer science. In an interview with MIT, Biswas said:
“Since these machine-learning algorithms need so many computations, this transferring back and forth of data is the dominant portion of the energy consumption. But the computation these algorithms do can be simplified to one specific operation, called the dot product. Our approach was, can we implement this dot-product functionality inside the memory so that you don’t need to transfer this data back and forth?”
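To make the quote concrete, here is a minimal sketch of why the dot product is the workhorse operation Biswas describes: each neuron in a network layer multiplies its weights by the inputs and sums the results, so a layer's forward pass is just one dot product per neuron. The function name and values below are illustrative, not from the MIT design.

```python
def dot_product(weights, inputs):
    """Multiply-and-accumulate: the single operation the chip
    implements inside memory instead of shuttling data to a processor."""
    return sum(w * x for w, x in zip(weights, inputs))


# A tiny two-neuron "layer" (weights are made up for illustration):
# computing its output is nothing more than two dot products.
weights = [[0.2, -0.5, 0.1],
           [0.4, 0.3, -0.2]]
inputs = [1.0, 2.0, 3.0]

outputs = [dot_product(w, inputs) for w in weights]
print(outputs)
```

Performing this multiply-and-accumulate step in memory avoids the back-and-forth data transfer that, per the quote, dominates energy consumption.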
Our take on this
This is an important breakthrough because it means that smartphones and other portable gadgets could in the future perform deep learning tasks (like advanced speech and facial recognition) directly on the device, instead of relying on rudimentary algorithms or sending data to the cloud and waiting for the results. We would no longer have to worry about our data going to third parties or consuming bandwidth. Once this technology goes commercial, expect many companies to leverage it across their electronic devices.