Deepfake pornography

Deepfake pornography, or simply fake pornography, is a type of synthetic pornography created by altering existing pornographic material, applying deepfake technology to swap in the faces of other people. Deepfake porn has sparked controversy because it involves the making and sharing of realistic videos featuring non-consenting individuals, typically female celebrities, and is sometimes used for revenge porn. Efforts to combat these ethical concerns include legislation and technology-based solutions.

History
The term "deepfake" was coined in 2017 on a Reddit forum where users shared altered pornographic videos created using machine learning algorithms. It is a combination of the word "deep learning", which refers to the program used to create the videos, and "fake" meaning the videos are not real.

Deepfake porn was originally created on a small, individual scale using a combination of machine learning algorithms, computer vision techniques, and AI software. The process began by gathering a large amount of source material (both images and videos) of a person's face, then training a generative adversarial network (GAN) on that material to produce a fake video that convincingly swaps the face from the source material onto the body of a porn performer. The production process has evolved significantly since 2018, however, with the advent of several public apps that have largely automated it.

DeepNude
In June 2019, a downloadable Windows and Linux application called DeepNude was released which used a GAN to remove clothing from images of women. The app had both a free and a paid version, the latter costing $50. On June 27, the creators removed the application and refunded consumers, although various copies of the app, both free and paid, continue to circulate. An open-source version of the program called "open-deepnude" was later deleted from GitHub. The open-source version had the advantage that it could be trained on a larger dataset of nude images, increasing the realism of the resulting images.

Deepfake Telegram Bot
In July 2019, a deepfake bot service was launched on the messaging app Telegram that uses AI technology to create nude images of women. The service is free and has a user-friendly interface, enabling users to submit photos and receive manipulated nude images within minutes. The service is connected to seven Telegram channels, including the main channel that hosts the bot, a technical support channel, and image-sharing channels. While the total number of users is unknown, the main channel has over 45,000 members. As of July 2020, approximately 24,000 manipulated images are estimated to have been shared across the image-sharing channels.

Notable cases
Deepfake technology has been used to create non-consensual pornographic images and videos of famous women. One of the earliest examples occurred in 2017, when a deepfake pornographic video of Gal Gadot was created by a Reddit user and quickly spread online. Since then, there have been numerous instances of similar deepfake content targeting other female celebrities, such as Emma Watson, Natalie Portman, and Scarlett Johansson. Johansson spoke publicly on the issue in December 2018, condemning the practice but declining to pursue legal action because she views the harassment as inevitable.

Rana Ayyub
In 2018, Rana Ayyub, an Indian investigative journalist, was the target of an online hate campaign stemming from her condemnation of the Indian government, specifically her speaking out against the rape of an eight-year-old Kashmiri girl. Ayyub was bombarded with rape and death threats, and a doctored pornographic video of her was circulated online. In a Huffington Post article, Ayyub discussed the long-lasting psychological and social effects the experience has had on her. She explained that she continued to struggle with her mental health, and that the images and videos resurfaced whenever she took on a high-profile case.

Atrioc controversy
In 2023, Twitch streamer Atrioc stirred controversy when he accidentally revealed deepfake pornographic material featuring female Twitch streamers during a live stream. He has since admitted to paying for AI-generated pornography, and has apologized to the women and to his fans.

Taylor Swift
In January 2024, AI-generated sexually explicit images of American singer Taylor Swift were posted on X (formerly Twitter), and spread to other platforms such as Facebook, Reddit and Instagram. One tweet with the images was viewed over 45 million times before being removed. A report from 404 Media found that the images appeared to have originated from a Telegram group, whose members used tools such as Microsoft Designer to generate the images, using misspellings and keyword hacks to work around Designer's content filters. After the material was posted, Swift's fans posted concert footage and images to bury the deepfake images, and reported the accounts posting the deepfakes. Searches for Swift's name were temporarily disabled on X, returning an error message instead. Graphika, a disinformation research firm, traced the creation of the images back to a 4chan community.

A source close to Swift told the Daily Mail that she would be considering legal action, saying, "Whether or not legal action will be taken is being decided, but there is one thing that is clear: These fake AI-generated images are abusive, offensive, exploitative, and done without Taylor's consent and/or knowledge."

The controversy drew condemnation from White House Press Secretary Karine Jean-Pierre, Microsoft CEO Satya Nadella, the Rape, Abuse & Incest National Network, and SAG-AFTRA. Several US politicians called for federal legislation against deepfake pornography. Later in the month, US senators Dick Durbin, Lindsey Graham, Amy Klobuchar and Josh Hawley introduced a bipartisan bill that would allow victims to sue individuals who produced or possessed "digital forgeries" with intent to distribute, or those who received the material knowing it was made non-consensually.

Deepfake CSAM
Deepfake technology has made the creation of child sexual abuse material (CSAM), often also referred to as child pornography, faster and easier than it has ever been. Deepfakes can be used to produce new CSAM from already existing material, or to create CSAM of children who have not been subjected to sexual abuse. Even when no physical abuse has occurred, deepfake CSAM can have real and direct effects on the children depicted, including defamation, grooming, extortion, and bullying.

Consent
Most deepfake porn is made using the faces of people who did not consent to their image being used in a sexual way. In 2023, Sensity, an identity verification company, found that "96% of deepfakes are sexually explicit and feature women who didn’t consent to the creation of the content." Deepfake porn is often used to humiliate and harass, primarily targeting women, in ways similar to revenge porn.

Technical approach
Deepfake detection has become an increasingly important area of research in recent years as the spread of fake videos and images has become more prevalent. One promising approach to detecting deepfakes is the use of convolutional neural networks (CNNs), which have shown high accuracy in distinguishing between real and fake images. One CNN-based algorithm developed specifically for deepfake detection is DeepRhythm, which has demonstrated an accuracy score of 0.98 (i.e., it correctly classifies inputs as real or fake 98% of the time). The algorithm uses a pre-trained CNN to extract features from facial regions of interest, then applies an attention mechanism to identify inconsistencies introduced by manipulation. While the development of more sophisticated deepfake technology presents ongoing challenges to detection efforts, the high accuracy of algorithms like DeepRhythm offers a promising tool for identifying and mitigating the spread of harmful deepfakes.
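To illustrate the general CNN-based approach (this is a minimal sketch of the generic technique, not the DeepRhythm algorithm itself), the following Python code fine-tunes an ImageNet-pretrained backbone as a binary real/fake classifier. The choice of ResNet-18 and the fake_probability helper are illustrative assumptions, not a published detector:

    # Minimal sketch of a CNN-based deepfake image classifier.
    # Illustrative only: the backbone and helper are assumptions,
    # not the DeepRhythm algorithm or any specific published detector.
    import torch
    import torch.nn as nn
    from torchvision import models, transforms

    # Start from a CNN pre-trained on ImageNet and replace the final layer
    # with a single logit. The new head must still be fine-tuned on a
    # labeled dataset of real and fake face crops before the score is
    # meaningful.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 1)

    # Standard preprocessing for ImageNet-pretrained backbones.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def fake_probability(face_image) -> float:
        """Return the model's estimated probability that a face crop
        (a PIL image) is fake."""
        x = preprocess(face_image).unsqueeze(0)  # add batch dimension
        model.eval()
        with torch.no_grad():
            logit = model(x)
        return torch.sigmoid(logit).item()

In practice, detectors of this kind are trained on large datasets of paired real and manipulated faces, and the face crop is usually produced by a separate face-detection step.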

Aside from detection models, video-authenticating tools are also available to the public. In 2019, Deepware launched the first publicly available detection tool, which allowed users to easily scan and detect deepfake videos. Similarly, in 2020 Microsoft released a free and user-friendly video authenticator. Users upload a suspected video or input a link and receive a confidence score indicating the likelihood that the video has been manipulated.
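Such tools score a whole video rather than a single image. One plausible way to do this, sketched below under the assumption that the per-image fake_probability classifier from the previous example is available and using OpenCV for frame extraction (the sampling rate and simple averaging rule are illustrative assumptions, not the actual Deepware or Microsoft pipelines):

    # Illustrative sketch: aggregate per-frame scores into one confidence
    # value for a video. Hypothetical pipeline, not a real tool's method.
    import cv2                # OpenCV, for reading video frames
    from PIL import Image     # to hand frames to the image classifier

    def video_confidence(path: str, every_nth: int = 30) -> float:
        """Average the per-frame fake probability over sampled frames."""
        cap = cv2.VideoCapture(path)
        scores, index = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % every_nth == 0:
                # OpenCV returns BGR arrays; convert to RGB before scoring.
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                scores.append(fake_probability(Image.fromarray(rgb)))
            index += 1
        cap.release()
        # Mean score across sampled frames; 0.0 if no frames were read.
        return sum(scores) / len(scores) if scores else 0.0

Averaging over many sampled frames smooths out single-frame classification errors, which is one reason video-level tools can report a more stable confidence score than an image-level check.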

Legal approach
As of 2023, there is a lack of legislation that specifically addresses deepfake pornography. Instead, the harm caused by its creation and distribution is being addressed by the courts through existing criminal and civil laws.

Victims of deepfake pornography often bring claims under revenge porn statutes, tort law, and harassment law. The legal consequences for revenge porn vary from state to state and country to country. For instance, in Canada the penalty for publishing non-consensual intimate images is up to 5 years in prison, whereas in Malta it is a fine of up to €5,000.

The "Deepfake Accountability Act" was introduced to the United States Congress in 2019 but has died in 2020. It aimed to make the production and distribution of digitally altered visual media that was not disclosed to be such, a criminal offense. The title specifies that making any sexual, non-consensual altered media with the intent of humiliating or otherwise harming the participants, may be fined, imprisoned for up to 5 years or both. A newer version of bill was introduced in 2021 which would have required any "advanced technological false personation records" to contain a watermark and an audiovisual disclosure to identify and explain any altered audio and visual elements. The bill also includes that failure to disclose this information with intent to harass or humilitate a person with an "advanced technological false personation record" containing sexual content "shall be fined under this title, imprisoned for not more than 5 years, or both." However this bill has since died in 2023.

Controlling the distribution
While the legal landscape remains undeveloped, victims of deepfake pornography have several tools available to contain and remove content: securing removal through a court order, using intellectual property tools such as DMCA takedowns, reporting the content to the hosting platform for terms-of-service violations, and requesting its removal from search engine results.

Several major online platforms have taken steps to ban deepfake pornography. As of 2018, Gfycat, Reddit, Twitter, Discord, and Pornhub have all prohibited the uploading and sharing of deepfake pornographic content on their platforms. In September of that same year, Google also added "involuntary synthetic pornographic imagery" to its ban list, allowing individuals to request the removal of such content from search results.