User:DariganHissi/sandbox

= Legality of synthetic pornography =

The legality of synthetic pornography (commonly known as deepfake pornography) is an ongoing debate over the status of synthetically generated sexual media in the eyes of the law. Supporters of criminalization argue that deepfake pornography violates individuals’ sexual privacy and is often a form of revenge pornography, which is already illegal in many jurisdictions. It also often infringes on copyrighted material made by sex workers, whose content is used to create deepfakes of non-consenting individuals. Opponents of criminalization believe that criminalizing deepfake pornography would violate an individual’s right to free speech and free expression in jurisdictions that protect these rights. Some jurisdictions, including the European Union and India, have taken steps to criminalize non-consensual deepfake pornography, while others, such as the United States, are still debating the issue. High-profile targets of deepfake pornography include pop star Taylor Swift, Twitch streamer QTCinderella and Indian journalist Rana Ayyub.

== Background ==
Deepfake pornography is pornographic content created when a machine learning algorithm maps the face, expressions and voice patterns of one subject onto a second subject. This allows for the creation of photo, video and/or audio content of sexual and compromising events that never actually took place. The term “deepfake” has taken on a broader colloquial meaning, referring to any kind of realistic digital media that shows a person saying or doing something that has not taken place in reality, regardless of whether a machine learning algorithm was involved in its creation. While deepfake technology has been used for a variety of purposes, the vast majority of created content (up to 96%, according to a study by Sensity AI, then DeepTrace Labs) is still non-consensual pornographic content, usually targeting women and minors, often from marginalized communities. Non-consensual deepfake pornography can legally be seen as a threat to privacy and cybersecurity, a form of defamation and an infringement of an individual’s right to publicity. Despite this, victims in many jurisdictions lack both civil and criminal legal options because, by their very nature, deepfakes are not real and do not display a real person’s actions. Prosecution is made more difficult because creators and distributors of deepfake pornography are often anonymous and/or do not reside in the same country as their victims.

== Sexual privacy ==
Law professor Danielle Citron defines sexual privacy as “the behaviours, expectations, and choices that manage access to and information about the human body, sex, sexuality, gender, and intimate activities.” Citron maintains that while deepfake content does not depict a victim’s actual body, it still hijacks “people’s sexual and intimate identities” in a way that violates their right to privacy and takes away their power to keep their “intimate identities” to themselves. Non-consensual deepfake pornography harms victims by manipulating their images and influencing their societal reputation by depicting them in situations that they did not consent to be shared with the public, according to Citron.

Northwestern University’s Matthew B. Kugler and Carly Pace conducted a study on American attitudes surrounding deepfake pornography and concluded, based on their results, that it made sense to categorize “pornographic deepfakes as speech that implicates sexual privacy,” a category the government has historically protected despite free speech provisions.

== Free speech ==
Any attempt to criminalize deepfake pornography runs the risk of infringing upon the public’s rights to free speech and free expression in parts of the world where these rights are protected. Deepfakes, by definition, are not real. Opponents of criminalizing deepfake content worry that such laws would be too broad and could censor AI-generated political commentary alongside pornography and other types of deepfake content. Organizations like the American Civil Liberties Union and the Institute for Free Speech have suggested that deepfake pornography can be tackled using existing defamation, cyber harassment and/or child pornography laws in order to prevent overreach that would infringe on free speech protections.

Ethics researcher Dr. Alex Barber believes that regulations for deepfakes would not pose any new freedom of expression problems, and that any concerns that could be raised about deepfakes have already been decided in legal cases involving other media. Non-consensual deepfake pornography, in his view, can be tackled if it is included in harassment and defamation legislation without affecting any existing freedom of speech protections.

Legal scholar Mary Anne Franks believes that parody and satire laws make it clear that it is possible to regulate false information without violating people’s freedom of expression. In the United States, the creation, distribution and/or possession of child pornography is also not protected under the First Amendment. Current laws have been expanded to include computer-generated depictions of child pornography, but it is not yet clear if AI-generated child pornography deepfakes would also fall under this provision. Legal experts including family lawyer Claudia Ratner have suggested expanding 18 U.S. Code § 2256 to either include AI-generated deepfakes under the computer-generated category or create a new category within the code specifically for AI.

While the U.S. Supreme Court does protect some types of false speech, it “has never held that there is a First Amendment right to intentionally engage in false speech that causes actual harm,” according to Franks. The Court has also ruled that “obscene” speech is not protected under the First Amendment; if deepfake pornography is found to be obscene under the Miller test, that clears the way for the government to regulate it in the United States.

== Revenge pornography ==
While deepfakes themselves are not currently illegal in most jurisdictions, many have criminalized revenge pornography, the sharing of sexual media of an individual without their permission. Non-consensual deepfake pornography often has the same effect on victims as real revenge pornography, including humiliation and loss of opportunities. As such, revenge pornography laws can be and have been used to charge and prosecute individuals who create and distribute non-consensual deepfake pornography, and many jurisdictions consider non-consensual deepfake pornography a type of revenge porn. However, these laws are not applicable in jurisdictions where the legislation requires pornography to include the victim’s own body, and they often cover only the distribution of pornography, not its creation. While revenge pornography is generally considered a clear-cut privacy violation, deepfake pornography occupies a greyer area because it is often created from publicly available content, which can also make it harder to prosecute under revenge pornography legislation.

== Copyright ==
Many types of deepfake pornography rely on real pornography to be created. Some advocates have argued that sex workers can use copyright law against individuals who use their work to make non-consensual deepfake pornography. While sex workers whose content is used in this way have not had their privacy violated, their livelihoods are impacted when their work is stolen and used to target others. Public figures who hold copyright in photos of themselves used in the creation of deepfake content could similarly use intellectual property law to go after perpetrators. However, these protections would likely extend only to commercial deepfakes; many are made for non-commercial purposes and are likely to be found “transformative” enough to steer clear of copyright infringement under fair use provisions such as 17 U.S.C. § 107. Members of the adult film industry have also expressed doubts that such efforts would be fruitful, noting that even before the advent of deepfake pornography, they struggled to have pirated content removed from the internet. Copyright law would also not protect victims who are deepfaked unless they are celebrities or others likely to hold copyright in photographs of themselves.

== Labelling ==
As an alternative to criminalizing deepfakes entirely, one solution proposed by many, including Northeastern philosophy professor Don Fallis, is to place labelling requirements on all altered content so that users know they are consuming edited or generated material. For example, clearly satirical deepfake content would be protected, while synthetic media that obscures its artificiality would fall outside of free speech protections. Barber has suggested that giving people the option to flag suspected deepfakes would be one way of regulating them without banning them entirely. U.S. President Joe Biden suggested in a 2023 executive order that all AI-generated photos, videos and audio should be required to carry a watermark. However, critics say labelling deepfake pornography would not necessarily eliminate the emotional harm it causes to the people depicted. Furthermore, organizations like the Foundation for Individual Rights and Expression maintain that compelling companies to label all AI-generated material, except in very specific circumstances like election advertisements, would infringe on freedom of expression, so labelling requirements would not necessarily circumvent the free speech concerns raised by criminalizing deepfakes entirely.

== Australia ==
Australia’s Online Safety Act 2021 allows for civil penalties against anyone who non-consensually shares sexual material, including altered material. It also allows the country’s eSafety Commissioner to ask online platforms to remove non-consensually shared content within 24 hours or risk penalties. There are no federal criminal laws specifically for deepfake pornography, but several Australian states and territories, including the Australian Capital Territory, Queensland and New South Wales, have legislation that tackles the sharing of sexual abuse material and includes “altered” material within its purview. Australian laws tackle the distribution of deepfake pornography but do not punish the creation or possession of such content.

== Canada ==
Sensity AI has reported that 6% of websites containing deepfake pornography are hosted in Canada. The provinces of British Columbia, Prince Edward Island, Saskatchewan and New Brunswick have intimate-image laws that include altered images in their scope. B.C.’s law allows victims to go after people and companies distributing sexual images without consent, regardless of whether the content is real or digitally generated. Individuals and companies that refuse to take down non-consensual content can be fined up to $5,000. Quebec’s Civil Code does not specifically address deepfakes but states that it is an “invasion of privacy to use a person’s name, image, likeness, or voice for a purpose other than the legitimate information of the public,” which allows perpetrators to be taken to civil court. At the federal level, Justin Trudeau’s Liberal government has said it plans to address deepfake pornography in an upcoming bill about online harms.

== China ==
China lacks any specific deepfake legislation, but it does have laws protecting a person’s right to their portrait and reputation, and it restricts the creation and distribution of content that violates these rights through labelling regulations. However, the legislation is largely performative, as there are no punishments for violating it.

== European Union ==
The European Union’s Artificial Intelligence Act has been endorsed by all 27 EU nations and will require all artificially generated and/or manipulated content to be clearly labelled once it comes into force. While some forms of AI are prohibited entirely and high-risk forms of AI face stringent requirements, deepfakes fall into the limited-risk category and are thus subject to fewer transparency regulations. The General Data Protection Regulation may also be broad enough to cover the distribution of deepfake pornography, because creating deepfakes requires personal data and the information they convey is also personal and recognizable, albeit false.

== India ==
Sensity AI has reported that 3% of websites containing deepfake pornography are hosted in India. India’s Ministry of Electronics and Information Technology has issued advisories requiring social media companies to remove non-consensual deepfake content within 36 hours of it being reported. A November 2023 advisory requires them to make "reasonable efforts" to identify deepfakes and take action swiftly. Organizations that do not cooperate risk losing their protected status under Section 79(1) of the country's Information Technology Act, 2000, which shields companies from liability for third-party content hosted on their platforms.

== United Kingdom ==
Sensity AI has reported that 12% of websites containing deepfake pornography are hosted in the United Kingdom. The government made deepfake pornography illegal near the end of 2022 with an amendment to the Online Safety Bill. Dominic Raab said that the changes would “give police and prosecutors the powers they need to bring these cowards to justice and safeguard women and girls.” The subsequent Online Safety Act 2023 also criminalized the sharing of non-consensual sexual deepfakes and removed a requirement in pre-existing legislation that revenge porn perpetrators be proven to have intentionally shared the content to cause distress in order to be convicted. Where intent to cause distress is proven, perpetrators can face up to two years in prison and may be placed on the sex offender registry; where it is not, the maximum term is capped at six months.

== United States ==
Sensity AI has reported that 41% of websites containing deepfake pornography are hosted in the United States. Some U.S. states have specific laws regarding deepfake pornography, and others, including Missouri, Ohio and New Jersey, are currently debating measures. Georgia, Hawaii, Texas and Virginia criminalize deepfake pornography, while California and Illinois give victims the right to sue perpetrators for defamation. Minnesota and New York both criminalize the content and allow victims to sue. Florida, South Dakota and Washington have expanded their child pornography laws to include deepfake child pornography. Penalties across states include both fines and jail time.

According to Kugler and Pace’s study, American adults support criminalizing creators of non-consensual deepfake pornography and find that the content is harmful. Most study participants did not think labelling deepfakes did enough to mitigate their harm and in follow-up studies thought creators should be found liable in both civil and criminal contexts.

A proposed federal bill, the “Disrupt Explicit Forged Images and Non-Consensual Edits” (DEFIANCE) Act, would amend the Violence Against Women Act to allow victims to sue people who made deepfakes knowing they were non-consensual. Senator Josh Hawley, one of the legislators who introduced the bill, said: “Innocent people have a right to defend their reputations and hold perpetrators accountable in court.” If the DEFIANCE Act passes, it will be the first federal law to protect deepfake pornography victims. The No AI FRAUD Act was also introduced in 2024 by Florida congresswoman María Elvira Salazar and Pennsylvania congresswoman Madeleine Dean to protect Americans’ rights to their likeness and voice. Other proposed federal bills involving deepfake pornography include the Preventing Deepfakes of Intimate Images Act, the Deepfakes Accountability Act, the Malicious Deep Fake Prohibition Act, and the AI Labelling Act.

== Rana Ayyub ==
Rana Ayyub, an Indian journalist and outspoken critic of Prime Minister Narendra Modi, was the target of deepfake pornography in 2018, when an explicit video purporting to be her was circulated on social media. Ayyub lodged a complaint with Delhi Police and said that internet users sharing the video were threatening to rape her. “I have never felt so scared, fearful, degraded and humiliated before… my life limb and liberty has been jeopardized and my privacy and dignity is being violated,” the complaint stated. The police closed the case in 2020 and told Ayyub that “despite efforts the culprits could not be identified.”

== Twitch deepfake scandal ==
In January 2023, Twitch streamer Atrioc was involved in a deepfake pornography scandal when he accidentally shared one of his web browser screens while live-streaming, revealing that he was accessing a website with pay-to-view non-consensual deepfake pornography of other popular streamers, including QTCinderella, Maya Higa and Pokimane. In a live-stream, QTCinderella cried and said that she would be suing the creator of the website. She worked with lawyer Ryan Morrison to get the website taken down, but later expressed frustration at the lack of legal options for suing the perpetrator.

Atrioc did a livestream with his wife present in which he apologized and said that he had only accessed the site once out of morbid curiosity after seeing an ad while visiting a different porn website. In a longer written apology, he announced he was stepping down from content creation and from his company OFFBRAND to focus on fighting the harm of deepfake pornography. He also said that he had teamed up with Morrison and QTCinderella and was helping pay the cost of removing the deepfakes using $100,000 of his savings.

Two months later he returned to Twitch to share that he had been working with Ceartas DMCA on an AI bot to tackle deepfake pornography. The bot sends takedown requests to websites hosting the content, resulting in the content being delisted from Google even if it is not always completely removed. Atrioc said that the bot had been very successful at taking down hundreds of deepfakes of streamers, including Pokimane and QTCinderella, and that he had reconnected and collaborated with the women to test the bot. In subsequent updates he said that hundreds of thousands of websites had been delisted from Google using Digital Millennium Copyright Act (DMCA) takedowns and that the process was becoming faster and cheaper.

== Taylor Swift ==
In early 2024, graphic deepfake images of musician Taylor Swift were shared on X (previously known as Twitter) that reportedly emerged on the anonymous internet forum 4chan. The images were viewed millions of times and led to backlash due to Swift’s wide influence on pop culture. Many politicians, including Democratic congressman Joseph Morelle and Republican congressman Thomas Kean Jr., pledged to introduce legislation to protect victims from deepfakes. The Daily Mail reported that Swift was considering legal action.