DeepFake
Deepfake is a term derived from “deep learning” and “fake”; it refers to media in which a person’s image, audio, or video is synthetically altered to resemble someone else, enabling fraud and threats to privacy, among other harms.
Machine learning and artificial intelligence are used to create fake content. Although deep learning has been applied successfully in areas ranging from big data analytics to computer vision, it has also been employed to create software that threatens privacy, democracy, and national security. One such application area is the “deepfake.” Deepfakes use deep learning to produce manipulated images and videos, and they are used maliciously to generate controversial videos, bias elections, and spread fake news, often with the help of adversarial networks. Many celebrities and politicians have appeared in videos doing or saying things they never actually did. Deepfakes enable bullying in schools and workplaces, since they make it easy to place someone in a ridiculous or compromising scenario. Corporations are affected through increasingly serious scams. For governments, the greater threat from deepfakes is the danger they pose to democracy. People also use the technology simply for fun. It is therefore necessary to develop detection methods to reduce the incidence of deepfakes.
Deep learning is a machine learning technique that uses neural networks for tasks such as object detection, data generation, and user recommendation. Deepfake detection technologies are deep-learning-based solutions that automatically detect fake content. Deepfake technology itself relies on computer-graphics and machine-learning systems to synthesize images, audio, and video in an increasingly intelligent and fast way.
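As an illustration of such a detector, the sketch below builds a small binary classifier over face crops, assuming PyTorch; the architecture, tensor sizes, and placeholder data are illustrative assumptions, not a specific published detector.

```python
# Minimal sketch of a deep-learning deepfake detector (illustrative only).
# Assumes PyTorch; this is a generic binary classifier over face crops,
# not any particular published detection method.
import torch
import torch.nn as nn

class FakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 1)  # one logit: fake vs. real

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.classifier(h)

model = FakeDetector()
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on a random placeholder batch of
# 224x224 face crops; a real system would load labeled video frames.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8, 1)).float()  # 1 = fake, 0 = real
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```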
A deepfake video is created by training a neural network on real video footage of a person from many angles and under different lighting conditions. The trained network, combined with computer-graphics techniques, then maps that person’s likeness onto a different actor. Hundreds or thousands of pictures of both persons are collected; a deep-learning encoder compresses all of these pictures into a shared representation, and a decoder reconstructs each face from it. Together, the encoder and decoder form an autoencoder.
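To make the autoencoder idea concrete, the sketch below, assuming PyTorch, uses the shared-encoder/two-decoder arrangement commonly described for face swapping: a single encoder is trained with a separate decoder per person, and the swap decodes person A’s encoding with person B’s decoder. All layer sizes and names are illustrative.

```python
# Sketch of the shared-encoder / two-decoder autoencoder behind face swaps.
# Assumes PyTorch; sizes are illustrative. Training would minimize the
# reconstruction loss of each person's faces through that person's decoder.
import torch
import torch.nn as nn

def make_decoder():
    # Upsamples a 256-dim latent code back to a 64x64 RGB face.
    return nn.Sequential(
        nn.Linear(256, 16 * 16 * 64), nn.ReLU(),
        nn.Unflatten(1, (64, 16, 16)),
        nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
    )

encoder = nn.Sequential(  # compresses a 64x64 face to a 256-dim code
    nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # -> 32x32
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # -> 16x16
    nn.Flatten(),
    nn.Linear(16 * 16 * 64, 256),
)
decoder_a = make_decoder()  # learns to reconstruct person A's face
decoder_b = make_decoder()  # learns to reconstruct person B's face

# The swap: encode a frame of person A, decode with person B's decoder,
# yielding person B's face with person A's pose and expression.
face_a = torch.rand(1, 3, 64, 64)      # placeholder input frame
swapped = decoder_b(encoder(face_a))   # shape (1, 3, 64, 64)
```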
Many deepfake detection technologies have been described in the literature, most of them based on digital integrity, physical integrity, or semantic integrity. Digital-integrity methods look for statistical patterns that are invisible to the human eye but readily reveal that a video has been altered. Physical-integrity methods examine lighting, shadows, and other physical properties of an image, while semantic-integrity methods focus on the wider context of the content.
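As a simple illustration of a digital-integrity check, the sketch below applies error level analysis, a standard image-forensics technique chosen here as an example (it is not named in the text): resaving a JPEG and amplifying the difference against the original highlights regions whose compression history differs from the rest of the image, which can indicate tampering. It assumes the Pillow library and a placeholder filename.

```python
# Illustrative digital-integrity check: error level analysis (ELA).
# Regions pasted into a JPEG often recompress differently from the rest
# of the image; amplifying the resave difference makes them stand out.
# Assumes Pillow; "photo.jpg" is a placeholder filename.
from io import BytesIO
from PIL import Image, ImageChops

def error_level_analysis(path, quality=90, scale=15):
    original = Image.open(path).convert("RGB")
    buffer = BytesIO()
    original.save(buffer, "JPEG", quality=quality)  # recompress once
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    # Amplify the per-pixel differences so they are visible to the eye.
    return diff.point(lambda px: min(255, px * scale))

ela_map = error_level_analysis("photo.jpg")
ela_map.save("photo_ela.png")  # bright areas warrant closer inspection
```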
Several countries have put forward regulations, with the most significant initiatives coming from the United States, China, and South Korea. Many social platforms, such as Facebook, Twitter, and YouTube, have set up teams to build deepfake detectors.
Machine learning and AI technologies have also reached the courtroom, where images and videos are admitted as evidence. Digital media forensics results must be shown to be valid, reliable, and free of manipulation. Deepfake detection technology based on AI algorithms can therefore support determinations of the authenticity of digital media.
Several applications have been developed, such as DeepFaceLab for Windows; Zao, a Chinese app that creates a deepfake video in just a few seconds; and Reality Defender, a plug-in for web browsers that is also available as a mobile app. Most deepfake applications can be integrated with platforms such as Facebook, YouTube, and Twitter.
Although it is possible to detect and control deepfakes, deep learning has generated deepfakes faster than detection methodologies can keep up with their growing scale. New algorithms and methods are therefore needed to curb the effects of deepfakes.