By Kyle Qian, '21
In an eerily disturbing, yet ironic video, Barack Obama can be seen describing the dangers of deepfake technology. The video looks completely genuine; had I not read its title, I would have been entirely deceived. For the unaware, deepfakes are videos synthesized or manipulated by AI algorithms. Given the dramatic progress in technology these days, from self-driving cars to virtual assistants, this does not seem overly surprising. The rise of deepfake technology signals that we are headed toward the end of trust in media, a trend accelerated by the spread of fake news, falsified evidence, and blackmail.
Deepfakes are built from generative adversarial networks (GANs), in which two machine learning models compete to produce the perfect forgery. One model, the generator, trains on a dataset of real video footage and creates the fakes, while the other, the discriminator, tries to distinguish those fakes from real examples. This back-and-forth continues until the discriminator reaches its limit and can no longer accurately tell which is real. The larger the dataset, the easier it is to create believable fakes, which is why the most realistic ones so far have depicted presidents and celebrities: they have the most video footage available.
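To make the adversarial loop concrete, here is a toy, runnable sketch. This is not a real GAN: the "generator" is a single parameter `mu` (the mean of a Gaussian), the "discriminator" is just a threshold between sample means, and all names and update rules are illustrative assumptions. It only mirrors the dynamic described above: the generator drifts toward the real distribution until the discriminator's accuracy collapses to chance.

```python
import random

random.seed(0)
REAL_MEAN = 4.0  # stands in for the distribution of "real footage"

def sample(mean, n):
    """Draw n points from a unit-variance Gaussian at the given mean."""
    return [random.gauss(mean, 1.0) for _ in range(n)]

def discriminator_accuracy(real, fake, threshold):
    """Classify anything above the threshold as 'real'; return accuracy."""
    correct = sum(x > threshold for x in real) + sum(x <= threshold for x in fake)
    return correct / (len(real) + len(fake))

def train(steps=200, n=100):
    mu = 0.0  # the generator starts far from the real distribution
    for _ in range(steps):
        real = sample(REAL_MEAN, n)
        fake = sample(mu, n)
        # "Discriminator": the midpoint threshold between the two sample means.
        threshold = (sum(real) / n + sum(fake) / n) / 2
        # "Generator" update: nudge mu toward the side labeled real.
        mu += 0.2 * (threshold - mu)
    return mu

mu = train()
real, fake = sample(REAL_MEAN, 1000), sample(mu, 1000)
threshold = (sum(real) / 1000 + sum(fake) / 1000) / 2
acc = discriminator_accuracy(real, fake, threshold)
# Once the fakes match the real distribution, accuracy hovers near 0.5:
# the discriminator "can no longer accurately tell which one is real".
```

Real GANs replace the single parameter with a deep generator network and the threshold with a learned classifier, but the alternating-update structure is the same.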
This sort of technology will only become more accessible over time. There are already online communities dedicated to creating such videos, so it is imperative that we learn how to regulate them appropriately.
Twitter and Reddit have already taken steps to ban this content. DARPA (the Defense Advanced Research Projects Agency) has recently funded a program called Media Forensics to detect these videos on online platforms. The program's end goal is an automated system that can examine an image or video and generate an "integrity score" composed of three main subscores. The first searches for dirty digital fingerprints, such as compression artifacts or other evidence of manipulation within an image. The second checks for physical accuracy, for example, that lighting and shadows follow the laws of physics. The last operates at the "semantic level," comparing the media against contextual evidence such as the weather or local news. These three subscores combine into a detection model that DARPA hopes to test at scale.
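A sketch of how such subscores might roll up into one number. This is purely hypothetical: DARPA has not published an exact combination rule, so the weights, field names, and the weighted-average formula here are all assumptions for illustration.

```python
from dataclasses import dataclass

@dataclass
class Subscores:
    # Each subscore is assumed to lie in [0, 1], higher meaning more authentic.
    digital: float   # digital-fingerprint level: compression artifacts, edits
    physical: float  # physical level: lighting and shadows obeying physics
    semantic: float  # semantic level: agreement with contextual evidence

def integrity_score(s: Subscores, weights=(0.4, 0.3, 0.3)) -> float:
    """Hypothetical weighted average of the three subscores."""
    for v in (s.digital, s.physical, s.semantic):
        if not 0.0 <= v <= 1.0:
            raise ValueError("subscores must lie in [0, 1]")
    w1, w2, w3 = weights
    return w1 * s.digital + w2 * s.physical + w3 * s.semantic

# Example: strong digital fingerprints of manipulation drag the score down
# even when the physical and semantic checks look plausible.
score = integrity_score(Subscores(digital=0.1, physical=0.8, semantic=0.9))
```

A real system would likely learn the combination from labeled examples rather than fix weights by hand, but a simple weighted average shows how independent checks can feed one overall judgment.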
The most concerning element of deepfake technology is that creation will always hold an advantage over detection, by virtue of how deepfakes are made. Creators can, in theory, fold the detection algorithms into their own training, continuously hardening their fakes against exactly what the detectors look for.
The one silver lining of deepfake technology is that in building the tools to combat it, our knowledge of technology evolves as a whole. The internet is already full of lies; nothing stops someone from taking a real video and simply falsifying its context. So fake media is ultimately inevitable, and I think of it as a necessary evil, a reminder to prepare for the worst. This is a challenge we must solve in a world of increasing technological complexity.