MIT Researchers Built a Neural Network that can See Through Walls!

Pranav Dar 10 May, 2019 • 3 min read

Overview

  • Researchers from MIT have built a neural network that estimates the pose and movement of people who are behind a wall
  • The neural network creates a digital stick figure which shows where the person is and what pose he/she is in
  • It can also identify individuals: when the researchers tested it on 100 people, it correctly identified 83 of them


Introduction

X-ray vision has long been limited to science fiction movies and books. We are used to seeing characters peer through walls and buildings with just a gaze, but could you ever have imagined that becoming reality?

A group of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have made a breakthrough that brings this idea to life. Their project, called RF-Pose, uses deep learning to see through walls and estimate the posture and movement of people.

At the heart of the system is a neural network developed to sense and analyze the radio signals that reflect off a person’s body. From these signals, it creates a digital stick figure that shows where the person is and what pose he/she is in (standing, sitting, moving around).
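For intuition, such a stick figure is essentially a set of body keypoints joined by limb segments. Here is a minimal sketch of that representation; the joint names and coordinates below are made up for illustration and are not RF-Pose’s actual output format:

```python
import matplotlib.pyplot as plt

# Hypothetical 2D keypoints (x, y) for one person; the joint names and
# values are illustrative only, not RF-Pose's real output.
keypoints = {
    "head": (0.5, 0.9), "neck": (0.5, 0.8),
    "l_hand": (0.3, 0.55), "r_hand": (0.7, 0.55),
    "hip": (0.5, 0.45),
    "l_foot": (0.4, 0.05), "r_foot": (0.6, 0.05),
}

# Limb segments that connect the keypoints into a stick figure
limbs = [("head", "neck"), ("neck", "l_hand"), ("neck", "r_hand"),
         ("neck", "hip"), ("hip", "l_foot"), ("hip", "r_foot")]

for a, b in limbs:
    (x1, y1), (x2, y2) = keypoints[a], keypoints[b]
    plt.plot([x1, x2], [y1, y2], "o-", color="tab:blue")
plt.title("A stick figure is just keypoints plus limb connections")
plt.show()
```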

Most neural networks require tons of labeled data to be trained properly (supervised learning). This was a significant challenge here, since labeled training data for this task was hard to come by. So the researchers had to collect their own, which they did using a wireless device paired with a camera. They put together a dataset of thousands of examples of people walking, running, standing, sitting, and doing other everyday activities.
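In code, such a dataset boils down to time-synchronized pairs of a camera frame and the corresponding radio measurement. Here is a minimal sketch of that pairing; the field names and shapes are my own assumptions rather than anything from the paper:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class SyncedSample:
    """One time-synchronized training example (hypothetical layout)."""
    timestamp: float
    camera_frame: np.ndarray  # RGB image, e.g. shape (480, 640, 3)
    rf_frame: np.ndarray      # radio measurement, e.g. a 2D heatmap
    activity: str             # "walking", "running", "sitting", ...

# Example: pair a dummy camera frame with a dummy radio measurement
sample = SyncedSample(
    timestamp=0.0,
    camera_frame=np.zeros((480, 640, 3), dtype=np.uint8),
    rf_frame=np.zeros((128, 128), dtype=np.float32),
    activity="walking",
)
```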

The next step involved extracting stick figures from the camera images; these were shown to the neural network along with the radio signals that corresponded to them. After training, the network was able to estimate pose and movement from the radio signals alone, without any use of the camera.
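This “cross-modal supervision” setup (a vision-based pose estimator labels the camera frames, and the radio network learns to reproduce those labels from the RF signal alone) can be sketched roughly as follows. Everything here, including the tiny RFPoseStudent network, the stand-in teacher, the tensor shapes, and the MSE loss, is a simplified assumption for illustration, not the authors’ actual architecture:

```python
import torch
import torch.nn as nn

NUM_JOINTS = 14  # assumed number of body keypoints

class RFPoseStudent(nn.Module):
    """Toy stand-in for the radio network: RF heatmap in, per-joint heatmaps out."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, NUM_JOINTS, 3, padding=1),
        )

    def forward(self, rf):
        return self.net(rf)

# Stand-in "teacher": in reality this would be a pretrained vision pose
# estimator that turns a camera image into per-joint confidence heatmaps.
teacher = nn.Conv2d(3, NUM_JOINTS, 3, padding=1)

student = RFPoseStudent()
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
criterion = nn.MSELoss()

def train_step(rf_frame, camera_frame):
    with torch.no_grad():
        target = teacher(camera_frame)  # pseudo-labels from the camera
    pred = student(rf_frame)            # prediction from radio signal alone
    loss = criterion(pred, target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Synchronized dummy batch: radio "heatmaps" plus camera frames.
rf = torch.randn(4, 1, 64, 64)
img = torch.randn(4, 3, 64, 64)
print(train_step(rf, img))
# At test time the camera is gone: student(rf) alone predicts the pose,
# which is why the system keeps working where a camera cannot see.
```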

But the uses of this technology don’t end there. It can even identify people in a line-up with decent accuracy: when the researchers tested the system on 100 individuals, it correctly identified 83 of them. The team will present their research paper (link below) at the Conference on Computer Vision and Pattern Recognition in Utah later this month.

I have listed a couple of resources below that provide an in-depth understanding of this AI:

Watch the video below to get a sense of how the AI works in real time:


Our take on this

I can imagine this technology being extremely helpful in the healthcare industry. As the researchers mentioned, it could monitor diseases that affect motor function, and it could give elderly people a bit more freedom to live independently, since they could be monitored for falls and injuries. It could also be used in video games (what is it about deep learning and video games?) to estimate movement, and in search-and-rescue operations to locate specific people.

I have been covering computer vision a lot lately under AVBytes and continue to be amazed at the amount of progress being made in this field. It is ripe for data scientists, with so much research going on. I encourage you to read up on this technology and think about how you could implement something like it if given the chance.


Subscribe to AVBytes here to get regular data science, machine learning and AI updates in your inbox!


Pranav Dar

Senior Editor at Analytics Vidhya. Data visualization practitioner who loves reading and delving deeper into the data science and machine learning arts. Always looking for new ways to improve processes using ML and AI.
