AI Can Be Used To Detect Deepfakes – For Now

Over the past several years, software has emerged that can create a lifelike digital model of just about anyone. Known as “deepfakes,” these fabricated videos can be used to deceive or entertain – such as Game of Thrones’ Jon Snow apologizing for the absolute disaster that was season eight.

Early deepfakes were easy to identify; while the AI-generated dupe looked real enough, there were plenty of tells – such as jerky mouth motion or unnatural eye movements. As time has passed, however, deepfakes have grown steadily harder to spot.

Here’s a far less convincing example:

[Embedded Instagram video shared by Bill Posters (@bill_posters_uk)]

Meanwhile, last week we reported that researchers at the Max Planck Institute for Informatics, Princeton University, and Adobe Research have developed software that uses machine learning and 3-D models of a target’s face to edit what people say in videos – in effect letting anyone put any words in anybody’s mouth.

AI to the rescue?

As deepfakes become harder and harder to identify, recent research from USC’s Information Sciences Institute concludes that artificial intelligence can be used to separate the real McCoy from the fakes, according to VICE.

To automate the process, the researchers first fed a neural network—the type of AI program at the root of deepfakes—tons of videos of a person so it could “learn” important features about how a human’s face moves while speaking. Then, the researchers fed stacked frames from faked videos to an AI model using these parameters to detect inconsistencies over time. According to the paper, this approach identified deepfakes with more than 90 percent accuracy.

Study co-author Wael Abd-Almageed says this model could be used by a social network to identify deepfakes at scale, since it doesn’t depend on “learning” the key features of a specific individual but rather the qualities of motion in general. –VICE
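To make the approach VICE describes concrete, here is a minimal sketch – not the USC team’s actual architecture, just the general shape of a temporal-consistency detector – written in PyTorch. The 3-D convolutions mix information across frames as well as pixels, which is one simple way to let a network pick up on motion inconsistencies over time; every layer size, name, and hyperparameter below is an illustrative assumption.

```python
import torch
import torch.nn as nn

class TemporalConsistencyDetector(nn.Module):
    """Toy real-vs-fake classifier over a stack of consecutive face frames.

    A generic sketch of temporal deepfake detection, not the model from
    the USC paper: 3-D convolutions span time as well as space, so
    frame-to-frame motion glitches can influence the score.
    """

    def __init__(self, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),        # -> (batch, 32, 1, 1, 1)
        )
        self.classifier = nn.Linear(32, 1)  # single logit: fake vs. real

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip shape: (batch, channels, frames, height, width)
        return self.classifier(self.features(clip).flatten(1))

if __name__ == "__main__":
    model = TemporalConsistencyDetector()
    # Hypothetical input: 2 clips of 16 stacked 64x64 RGB face crops.
    fake_prob = torch.sigmoid(model(torch.randn(2, 3, 16, 64, 64)))
    print(fake_prob)  # untrained, so these scores are meaningless
```

In practice such a network would be trained on labeled real and faked clips; the 90-percent figure above refers to the researchers’ own model and data, not a toy like this one.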

“Our model is general for any person since we are not focusing on the identity of the person, but rather the consistency of facial motion,” said Abd-Almageed.

“Social networks do not have to train new models since we will release our own model. What social networks could do is just include the detection software in their platforms to examine videos being uploaded to the platforms.”
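As a rough illustration of what “just include the detection software” might look like, here is a hypothetical upload-time hook built around a released, pretrained detector. The file name, threshold, and function name are all assumptions made for the sketch, not anything the researchers have published.

```python
import torch

# Hypothetical: a released, pretrained detector saved as a TorchScript file.
DETECTOR = torch.jit.load("deepfake_detector.pt")
DETECTOR.eval()
FAKE_THRESHOLD = 0.9  # flag-for-review cutoff; the exact value is platform policy

def screen_upload(clip: torch.Tensor) -> bool:
    """Return True if an uploaded clip should be flagged as a likely deepfake.

    clip: (1, channels, frames, height, width) stack of aligned face crops,
    preprocessed the same way the detector was trained.
    """
    with torch.no_grad():
        fake_prob = torch.sigmoid(DETECTOR(clip)).item()
    return fake_prob >= FAKE_THRESHOLD
```

A flagged video would presumably go to human review rather than be blocked outright, since no detector is perfect.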

It’s anyone’s guess what happens once AIs can no longer detect the work of other AIs, but we might want to protect John Connor at all costs, just in case it’s a slippery slope.

via ZeroHedge News http://bit.ly/2Y1clXz Tyler Durden
