User:MichaelGonzalez26/Artificial intelligence art

Legality
Due to recent advancements in generative artificial intelligence, synthetic content, such as AI-generated images and artwork, has become much more accessible throughout the early 2020s. As a result of the increased presence of AI-generated artwork and images, various legal issues have emerged concerning their use, particularly in fields such as copyright and defamation law.

In regard to the copyrightability of AI-generated artwork, both existing case law and the Copyright Office stipulate that AI-created artwork cannot be granted copyright protection due to having not been created by a human author. When examining whether an artist of a copyrighted work can successfully argue infringement against an AI-generated image, a key consideration is whether the image is a near-identical copy of the original, or merely incorporates its stylistic cues. Lastly, much of the literature argues that using copyrighted materials to train AI models would likely be protected by fair use, as the models are merely examining the copyrighted works to learn more about them, rather than using them for their expressive elements.

In areas such as defamation law, holding an individual liable for defamatory images generated by AI would prove difficult due to the protection given to online platforms by Section 230; in addition, protections provided by the First Amendment also limit liability for those who post defamatory content for matters outside of public concern.

Copyrightability
When determining whether a work is copyrightable, a key requirement is that the work was created by a human author; non-humans have been found unable to obtain a copyright in their works. For example, a monkey cannot register a copyright in photographs it captured with a camera, as the Copyright Act utilizes terms such as “children,” “widow,” “grandchildren,” and “widower,” terms which “all imply humanity and necessarily exclude animals.” This requirement has long been reflected in copyright law, as well as the Copyright Office's registration guidance, which frequently emphasizes that a work must owe its existence to a human author in order to be copyrightable. Thaler v. Perlmutter has become a landmark case for the copyrightability of AI-generated work: the court determined that works created by AI cannot receive copyright protection because the Copyright Act of 1976 requires that works be created by humans in order to receive protection.

In Thaler v. Perlmutter, the plaintiff owned a computer system, which he used to generate a piece of visual art. When the plaintiff attempted to register the work for copyright, the Copyright Office denied the application, stating that the work lacked human authorship, a requirement for receiving copyright protection. Despite the plaintiff's argument that copyright law should include works generated by AI due to the law's history of malleability, the court ultimately rejected it, stating that human creativity is an essential requirement of copyrightability, even when that creativity is channeled through tools or media. Holding that human authorship is a bedrock requirement of copyright, and placing emphasis on the Copyright Act's use of the word "author", the court ultimately denied copyright registration for a work created absent any human involvement.

Using this rationale, the Copyright Office has refused to provide copyright protection for works generated by AI on multiple occasions, noting that the works lacked a human author, a prerequisite to obtaining copyright protection in the United States. In one specific instance, an artist registered a comic book using images generated by AI; although the work originally received copyright protection from the Copyright Office, the Office proceeded to revoke protection for the work after reaching out to the author for details about how AI was used in the creation of the work. More specifically, the Office decided that while the elements of the comic created by the author could receive protection, the images generated by AI were not the product of human authorship and therefore could not be protected.

The Copyright Office first published its policy denying protection for AI-created works in 1973, reaffirming this decision as recently as 2021, emphasizing the "fruits of intellectual labor" from "creative powers of the mind" as a key factor in the creation process, and citing the "original intellectual conceptions of the author" (an author being limited to humans) as a key limit to copyright law. Again in 2023, the Copyright Office reaffirmed these principles, stating that "If a work's traditional elements of authorship are produced by a machine, the work lacks human authorship and the Office will not register it."

Despite these decisions, however, no case as of now has explicitly held that an AI-generated work is unprotectable, meaning that the legality of AI-generated works could be subject to change in the near future. Additionally, some contend that the Copyright Office, despite the position it has taken on AI-generated work, has still registered works created by AI, as shown by a search of the copyright registry. Because the Office does not frequently check whether a work was created by AI, it is also believed that individuals can circumvent the requirement by simply listing a person as the author of the work, with the Copyright Office unlikely to take action to remove the registration; similarly, one could circumvent the requirement by labelling the AI-created work as a type of work that does not require listing an author, such as a work made for hire. Whether a user receives intellectual property rights in AI-generated works also varies depending on the service, with such provisions stipulating that (1) the AI provider owns the copyright in the content generated by the AI, (2) users can purchase a license to use their generated content, or (3) the rights are transferred from the provider to the user.

Image Training and Outputs
AI-generated images have also faced scrutiny in areas such as copyright infringement and fair use. This is because these AI models must be "trained" on images, including copyrighted works, typically without the authorization of the owners of that content. Regarding the use of copyrighted images for the "training" of AI models, liability is contingent upon whether the image was downloaded or "stored" by the developer during the AI's training process, a requirement derived from the "fixation" element of the Copyright Act. Should it be found that the image was stored for a sufficient period of time, the training of AI models could be found infringing.

The analysis conducted for an AI-generated image itself can vary depending on whether the image resembles an image used for its training (referred to here as the "input"). When determining copyright infringement for a generated image in the style of another (but not resembling it), a key factor is whether the AI-produced image is substantially similar to the protected "input". For images that merely mimic the style of an "input", infringement would not be found, as stylistic choices are akin to an idea and therefore not protected by copyright law. In cases, however, in which the generated image is a near-identical copy of an "input", infringement would likely be found.

Recent cases that have been litigated in regard to copyright infringement could provide greater insight as to the nuances and issues that are faced by AI in copyright:

In Doe 1 v. GitHub, the plaintiffs argued that GitHub and OpenAI utilized training data that included publicly accessible repositories on GitHub, including materials limited by licenses. While the plaintiffs did not know whether their works were used by the algorithm in its training process, they argued that, because the program had been trained on all publicly available GitHub repositories, their work was ingested by the program and therefore returned to users through its output. They additionally claimed that the defendants knowingly failed to reach out to them to obtain authorization to use their licensed materials, thereby violating their rights to distribute copies of, and create derivative works based on, their licensed material. In short, the plaintiffs' arguments centered on the fact that the software powering the AI, Codex, does not identify the owner of the copyright in the output and was not trained to provide attribution to those works, despite open-source licenses requiring attribution to the author, among other requirements.

While the case has not yet concluded, as recently as January 3, 2024, Judge Tigar declined to dismiss the plaintiffs' claims on standing grounds and provided them with an opportunity to file an amended complaint. While this case primarily centers on the misappropriation of code, it is believed that a similar misappropriation could happen in the world of AI-generated art, with AI being used to generate images similar to already existing ones, and therefore violating copyright law.

In another recent case, Andersen v. Stability AI Ltd., the plaintiffs challenged the defendant's creation of its Stable Diffusion AI, arguing that the AI was "trained" on the plaintiffs' copyrighted work without obtaining the permission of the artists. Despite the AI's output images likely not being a close match to any of the images used in the training data, the plaintiffs argued that "[e]very output image from the system is derived exclusively from the latent images, which are copies of copyrighted images", and therefore violated their copyright rights. Among the various arguments raised by the plaintiffs, they most notably asserted that Stability's use of the images constituted direct copyright infringement due to Stability "download[ing] or otherwise acquir[ing] copies of billions of copyrighted images without permission" and using those images to train the AI. The court ultimately refused to dismiss this claim, finding that Stability had not successfully challenged the sufficiency of the plaintiffs' allegations of direct infringement.

While the case has also not yet been decided, the court did state that the plaintiffs adequately alleged direct infringement based on the facts they presented. Should the case continue, and be decided in the plaintiffs' favor, it could have a large impact on how copyrighted images are utilized in the training of AI models.

Fair Use
Regarding AI and fair use, much of the literature agrees that the use of copyrighted material to train AI models should be considered fair use, meaning an AI does not infringe a copyright owner's rights by using that content as part of its training process (a process referred to as "fair learning"). The underlying reasoning is that it is not the expressive elements of copyrighted works that are being copied; rather, the AI merely examines the works' components in order to learn how the works are composed. For example, an AI may use a photograph taken of an individual in order to learn more about the subject's facial geometry, rather than using the picture for its "creative elements" or aesthetics. Because copyright protection extends only to the expressive elements of a work, an AI that uses a work for its non-expressive elements would not be found to violate copyright law.

This reasoning is derived from the case Sega v. Accolade, Inc.:

In Sega v. Accolade, Sega (plaintiff) had copyrighted computer code that it licensed to other companies for use with Sega’s Genesis console. Accolade (defendant), not wanting to be bound by the conditions of Sega’s licensing agreement, decided to “reverse engineer” Sega’s program in order to make its games compatible with Sega systems. After reverse engineering the code, Accolade created a development manual containing functional descriptions of the interface requirements, but did not include any of Sega’s code within that manual. In regard to fair use, the Sega court on appeal concluded that, despite Accolade copying Sega’s code, the copying was protected by fair use, emphasizing how Accolade’s reverse engineering and copying of Sega’s code was necessary in order to understand the functionality requirements for Sega’s console. The court sided with Accolade for this reason, concluding that when disassembly is the only way to gain access to the ideas and functional elements of a computer program, and there is a legitimate reason for that disassembly, disassembling the copyrighted work is protected by fair use.

Others, however, assert that AI art is not protected by fair use, believing generative AI (such as that used to create pictures and art) is distinct from other types of AI (such as that used in self-driving cars) because it is trained to mimic the expression of the works it is trained on, unlike AI that merely performs functional tasks. As an example, a user can prompt an AI to generate an image in the style of Pablo Picasso, with the traits of Picasso's work being visible in the generated output. Such arguments also assert that such AI models cannot receive fair use protection, as their emulation of the copied works would not make them transformative, and would also serve as a substitute for the original artists and their work, failing factors one and four of the fair use test.

Libel & Fabrication Concerns
There is also concern that AI's generative capabilities can be used as a means to create libelous content, using artificial intelligence to create fake images of people in order to harm their character. For example, in March 2023, an AI-generated photograph of Pope Francis wearing a puffer jacket surfaced on the Internet, with many believing the image to be real. Some, such as the Federal Trade Commission, have emphasized the recent advancements in AI tools, as well as the increased ease of access and use of those tools, as factors which may make "synthetic media" or generated content more commonplace, increasing the potential for people to use AI to create libelous content or, in some cases, terroristic threats. Due to the broad protection that laws such as Section 230 provide to hosting services in the United States, it may prove difficult to hold AI companies liable for the creation of content deemed offensive. The posters of such content would likely only be liable under certain circumstances, such as posting content depicting public figures and officials with "actual malice" (meaning the defendant published the content while doubting its truthfulness or being aware of its falsity). Furthermore, in the event of litigation, AI companies could attempt to absolve themselves of liability by asserting that the individual offering the prompt, rather than the AI service itself, is liable for the AI-generated image. The First Amendment's limitations also pose an issue should one attempt to take legal action over an AI-generated image, with the process often being difficult for those who are injured outside of matters of public concern.

Apart from libel, AI-generated content has also been weaponized in the form of "deepfakes", fabricated media of people created using AI. Especially due to advances in AI over the past few years, deepfakes have become not only easy to create (often taking only a few hours, and appearing on platforms such as YouTube, Facebook, and Instagram) but also increasingly difficult to distinguish from reality. As a result of this technology's rise to prominence, many are concerned that it could be used for malicious and manipulative purposes, such as spreading political misinformation and harming the economy.

One example of such weaponization is the use of AI to generate revenge porn and nonconsensual pornography, using an image of the person that is then superimposed onto a pornographic photo or video. Absent any indication that the generated pornographic content was created as a parody or contained false images, anyone generating and publishing such content would very likely be found liable for defamation. As for website operators, Section 230's broad protection would again shield them from liability.