Thanks to advances in machine learning, the latest AI algorithms, and high-performance computer graphics, recent digital technology has made it difficult to differentiate between real and fake media. While AI-generated content has its perks, it also raises serious privacy and security concerns. Deepfake technology allows the replacement of faces, voices, and expressions in images, audio, and video, making these manipulations difficult to detect.
Deepfake AI: What is it?
The term ‘deepfake’ combines ‘deep learning’ and ‘fake’. Deepfakes are hyper-realistic images, video and audio, digitally manipulated to depict people saying and doing things that never actually happened, writes Mika Westerlund, an innovation researcher, in his paper titled The Emergence of Deepfake Technology: A Review.
In simpler terms, a deepfake is synthetic media in which a person in an existing image or video is replaced with someone else's likeness. While faking videos and images is not new, deepfakes use powerful machine learning and artificial intelligence techniques to manipulate or generate visual and audio content. The results are extremely realistic and have a high potential to deceive viewers.
How Does Deepfake Work?
As complex as it might sound, deepfake generation rests on several layers of modern technology working together.
How are deepfakes made?
According to Mika Westerlund’s research paper titled The Emergence of Deepfake Technology: A Review, creating a deepfake typically involves Generative Adversarial Networks (GANs), an unsupervised machine learning (ML) approach. A GAN pits two networks, known as "the generator" and "the discriminator," against each other, trained on a specific collection of images, videos, or sounds. As the name suggests, the generator's job is to create new samples realistic enough to convince the discriminator, while the discriminator determines whether the media it sees is genuine or fake. This adversarial process pushes both networks to get better at their tasks.
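The adversarial loop described above can be sketched in miniature. The toy example below (an illustrative sketch, not how production deepfakes are built) uses a one-parameter-pair "generator" and a logistic "discriminator" on 1-D numbers instead of images: the generator learns to imitate a target Gaussian purely by trying to fool the discriminator. All names and hyperparameters here are assumptions chosen for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data: samples the generator must learn to imitate.
REAL_MEAN, REAL_STD = 4.0, 1.25

# Generator G(z) = a*z + b, starting far from the real distribution.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c), a logistic classifier.
w, c = 0.1, 0.0

lr, batch = 0.02, 64
for step in range(4000):
    # --- Train the discriminator: score real high, fake low ---
    x_real = rng.normal(REAL_MEAN, REAL_STD, batch)
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b

    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Gradients of -log D(real) - log(1 - D(fake)) w.r.t. w and c
    gw = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    gc = np.mean(-(1 - d_real) + d_fake)
    w -= lr * gw
    c -= lr * gc

    # --- Train the generator: fool the updated discriminator ---
    z = rng.normal(0.0, 1.0, batch)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    # Non-saturating generator loss -log D(fake), chained through x_fake
    dx = -(1 - d_fake) * w
    a -= lr * np.mean(dx * z)
    b -= lr * np.mean(dx)

print(f"learned generator: x = {a:.2f}*z + {b:.2f}  (target mean {REAL_MEAN})")
```

After training, the generator's offset `b` drifts towards the real mean of 4: the generator never sees the real data directly, only the discriminator's verdicts, which is exactly the dynamic GANs exploit at scale for images, video, and audio.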
What are the technologies required to develop deepfakes?
The major technologies required to develop deepfakes are:
- Machine Learning (ML) Frameworks
- Deep Learning Algorithms
- Natural Language Processing Algorithms
- Vast Data Collection
- Face Recognition and Detection Tech
- Image and Video Editing Software
- Audio Processing Tools
- High-Performance Hardware
The list given above is not exhaustive. However, these are the most basic technologies needed to generate deepfakes.
Deepfake AI: Limited to videos only?
Deepfake technology can create not only videos but also images and audio.
What are Deepfakes Used For: The Good and The Bad
Deepfakes pose significant threats to several aspects of society, including journalism, national security, public trust in information, and cybersecurity:
- Journalism: Deepfakes are increasingly difficult to detect, challenging the ability of journalists and media outlets to separate real news from fabricated content.
- National Security and Governance Threat: Deepfakes can disseminate political propaganda, disrupt election campaigns, and manipulate public opinion via viral media content, becoming a catalyst for domestic unrest, international conflicts, and scepticism towards authorities.
- Cybersecurity: Deepfakes can be used for fraud, including market manipulation and brand sabotage through impersonation. On an individual level, they can also be used for revenge porn and blackmail.
While the term deepfake has a negative connotation attached to it, it is not all bad. Deepfake technology offers a wide range of positive applications across various industries. Some of the ways to use deepfake positively are:
- Film and Entertainment: Deepfakes can be used for digital voice generation, updating posters and film footage, recreating classic movie scenes and creating new films with long-dead actors.
- Language and Communication: Deepfakes can break language barriers in brand campaigns by translating speech, altering body language with a more culturally suitable type and varying tones and expressions.
- Business and E-commerce: Brands can use deepfake technology to showcase fashion outfits and accessories on models with various attributes, generate hyper-personalised content for consumers, and create virtual fitting experiences for online shoppers.
These positive applications demonstrate the versatility and potential benefits of deepfake technology across different fields.
A recent report by VMware sheds light on emerging threats such as deepfakes, API attacks, and cybercriminals targeting incident responders. Notably, attackers have incorporated deepfake technology into their methods in order to bypass security controls. According to the report, two-thirds of respondents had witnessed malicious deepfakes used in attacks, a 13% increase from the previous year, with email as the primary delivery method. S Mohini Ratna, Editor at VARINDIA, notes that this shift towards using deepfakes to compromise organizations and gain access to their systems signals a concerning trend in the cybersecurity landscape.
How To Detect a Deepfake?
To detect deepfake attacks, there are several key indicators to watch for in both video and textual content.
Signs of possible deepfake content in a video include:
- unusual facial positioning or unnatural facial and body movement
- strange colour grading and an odd appearance when zoomed in
- inconsistent audio
- weird eye movement, such as individuals who never blink
In textual deepfakes, warning signs may include:
- misspellings and sentences with unnatural structure
- suspicious source email addresses
- phrasing or wording that does not match the supposed sender
- out-of-context messages unrelated to any discussion or event
One must note that as AI and deepfake technologies continually improve, spotting such irregularities with outdated tools becomes less likely. However, with constant vigilance, one can reduce the chances of being deceived online.
*Mika Westerlund, D.Sc. Econ, is an Associate Professor at the Sprott School of Business at Carleton University. He is an innovation researcher with a focus on emerging technologies, practices, and phenomena that could significantly impact various aspects of current and future societies, such as social, economic, and ecological dimensions.