- A new algorithm has been developed that uses eye blinking to detect fake videos
- Most fake videos fail to reproduce natural blinking, a flaw the researchers have exploited
- The algorithm, trained on images of both open and closed eyes, has achieved 95% accuracy so far
Manipulating facial expressions and body movements in videos has become so advanced that most people struggle to tell real footage from fake. A fake video of Barack Obama that went viral last year shows the former President addressing the camera. With the sound turned off, you would not even realize it's a fake!
Neural networks have become the go-to technique for generating results, but their black box nature has created certain problems we don’t have a solution to yet. With elections soon to happen, how do we get a handle on this? The previous US elections were mired in fake news scandals, and it is only expected to get worse next time out.
Thankfully, a lot of work is being done to detect fake videos, thanks to machine learning. I have previously covered XceptionNet, an algorithm that detects face swaps. And now another unique technique has been pioneered that can detect a fake video by analyzing the subject’s eye blinking pattern in the video. The above image illustrates this point clearly – the top images are real, and the bottom ones are fake and have been generated using neural networks.
According to the researchers, adults blink once every 2-10 seconds. Surprisingly, most fake videos fail to take this into account. This flaw has now been identified and exploited in this latest algorithm.
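As a rough illustration of why that blink rate is useful, here is a minimal sketch (not the researchers' method) of a heuristic that flags a clip whose longest gap between blinks exceeds the expected range. The function names and 10-second threshold are assumptions for the example:

```python
# Hypothetical heuristic: flag a clip whose longest gap between blinks
# exceeds what we'd expect from a normal adult blink rate (2-10 s).

def longest_blink_gap(blink_times):
    """Longest interval (seconds) between consecutive blink timestamps."""
    if len(blink_times) < 2:
        return float("inf")  # too few blinks to measure a gap
    return max(b - a for a, b in zip(blink_times, blink_times[1:]))

def looks_suspicious(blink_times, max_expected_gap=10.0):
    """True if the subject goes longer than expected without blinking."""
    return longest_blink_gap(blink_times) > max_expected_gap

# Blinks roughly every 3-6 seconds -> within the normal range
print(looks_suspicious([1.0, 4.5, 8.0, 13.5]))  # False
# A single 27-second gap between blinks -> suspicious
print(looks_suspicious([2.0, 29.0]))            # True
```

A real detector would of course learn these patterns rather than hard-code a threshold, which is where the neural networks come in.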
One neural network scans every frame in a given video, detects all the faces in it, and zooms in on each person's eyes. A second neural network then classifies whether the eyes are open or closed, using the movement, appearance, and other features of human eyes. The system has been trained on gigabytes of images of both open and closed eyes and is so far performing with a very impressive 95% accuracy.
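The two-stage pipeline above can be sketched in a few lines. This is a toy illustration, not the researchers' code: the two neural networks are replaced by stand-in functions, and the frame format is invented for the example.

```python
# Sketch of the two-network pipeline: network 1 finds faces and crops
# the eyes, network 2 classifies each crop as open or closed. Both are
# mocked here; a real system would use trained models.

def detect_faces(frame):
    """Stand-in for network 1: return eye-region crops for each face."""
    return frame.get("eye_crops", [])

def eye_is_open(eye_crop):
    """Stand-in for network 2: classify an eye crop as open/closed."""
    return eye_crop == "open"

def count_blinks(frames):
    """Count open -> closed transitions across a sequence of frames."""
    blinks = 0
    was_open = True
    for frame in frames:
        crops = detect_faces(frame)
        if not crops:
            continue  # no face found in this frame
        is_open = all(eye_is_open(c) for c in crops)
        if was_open and not is_open:
            blinks += 1  # eyes just closed: count one blink
        was_open = is_open
    return blinks

# Toy "video": eyes open, then a blink, then open again
video = [{"eye_crops": ["open"]},
         {"eye_crops": ["closed"]},
         {"eye_crops": ["open"]}]
print(count_blinks(video))  # 1
```

A downstream check could then compare the counted blinks against the normal adult rate to score the video.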
Our take on this
The research paper made for quite an interesting read. Fake videos are a pervasive problem and will continue to be so for a long time. With each solution, the adversaries come up with an even better neural network that circumvents any previous detection algorithm.
For example, forgers can add realistic blinking simply by training their generators on images of both open and closed eyes. This is a game of cat and mouse, and new solutions will be needed at every turn. For now, this algorithm seems to be working, but for how long? That's a question we are afraid to find the answer to.