These scientists have developed a deep neural network to decode our thoughts. The above image shows how the technology works.
The research was conducted over the course of 10 months. Three subjects were shown natural images, artificially generated shapes, and alphabetical letters, each for differing lengths of time.
Brain activity was measured in two phases:
- while the subject was looking at the image
- after the subject had been shown the image(s), he/she was asked to recall its details
The machine then decoded (or “reverse-engineered”) the scanned brain activity to visually represent the thoughts. The reconstructed images did somewhat resemble the originals, but most looked like blurred blobs.
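To get a feel for the decoding idea, here is a hypothetical toy sketch in Python. It learns a linear map from simulated voxel activity back to image pixels and uses it to "reconstruct" an image from new activity. This is not the researchers' actual pipeline (which maps activity to deep-network features, not raw pixels), and all the sizes and numbers are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_trials, n_voxels, n_pixels = 200, 50, 16  # e.g. a tiny 4x4 "image"

# Simulated ground truth: each trial's voxel pattern is a noisy linear
# projection of the image the subject saw (a crude stand-in for fMRI).
images = rng.random((n_trials, n_pixels))
mixing = rng.standard_normal((n_pixels, n_voxels))
voxels = images @ mixing + 0.01 * rng.standard_normal((n_trials, n_voxels))

# "Decoder": ridge regression mapping voxel activity back to pixels.
lam = 1e-3
W = np.linalg.solve(
    voxels.T @ voxels + lam * np.eye(n_voxels),
    voxels.T @ images,
)

# Reconstruct a held-out image from its voxel pattern alone.
test_image = rng.random(n_pixels)
test_voxels = test_image @ mixing
reconstruction = test_voxels @ W

# The reconstruction resembles the original rather than matching it
# exactly -- in spirit, the "blurred blob" effect described above.
error = float(np.abs(reconstruction - test_image).mean())
print(f"mean reconstruction error: {error:.3f}")
```

The real system is far more sophisticated (deep features, iterative image optimization), but the core intuition is the same: learn a mapping from brain activity to a visual representation, then invert new activity through it.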
The technology is still in its nascent stages, but the advancements are impressive. Binary pixels are no longer the “in” thing; the AI can now detect entire objects on its own.
If you’re interested in how the technology works, you can see the research paper here. The video below also gives a lowdown on how it works:
Our take on this
The accuracy of the technology will only improve. The day is not far off when machines can decipher exactly what we are thinking. Whether that is a good thing or an alarming intrusion of privacy remains to be seen. Industry leaders are already clamoring to put limits on how far this technology can be used. Ethics in AI is a genuinely pressing subject, and this breakthrough will only add to the list of topics in that regard.
The potential applications of this technology are mind-blowing. It could paint images based simply on your thoughts. For people with speech disorders or paralysis, it could be a dream come true. What if you could wake up and analyse the dreams you just had? The list is endless.