Artificial Intelligence Law
Artificial intelligence law deals primarily with the collective rights concerning the use of artificial intelligence and the AI program itself, as well as with assigning liability and responsibility in legal allegations. In legal practice, it is important to note that artificial intelligence is used and applied in many different varieties, whether in fields of research or in maintaining and automating industry. As a result, legal interpretations of AI tend to be vague and non-specific. AI law is still under development: as legal cases come before the judiciary and are decided, what is considered artificial intelligence "law" remains subject to change. The law can draw on regulatory practices of copyright, redefining AI as the property of industry and subject to those laws; however, in the event of a self-autonomous program, the law would have to be reinvented altogether. "Expert systems in law ought to be designed to replicate legal experts, the knowledge represented, therefore, necessarily being of a depth, richness and complexity normally possessed by such a human being." Artificial intelligence must also be considered in the physical sense, such as robotic machinery, and AI law must then reconsider how smart systems can cause harm: "harm—whether physical, emotional, or monetary—is caused by programmability and interactivity." Regulatory practices can only be achieved through understanding machine learning and defining the permissible employment of autonomous AI in consideration of how it is applied. Below is a list of legal cases regarding AI:

Cruz v. Talmadge, Calvary Coach, et al.
Passengers of a motor coach were injured in a crash when the driver attempted to clear a 10-foot bridge in a coach 12 feet in height. The plaintiffs asserted tort claims against "the manufacturer, owner and operator of the bus, a GPS provider and the Commonwealth of Massachusetts Department of Conservation and Recreation, the entity in charge of the relevant road signage." They further claimed that the GPS system, an "arguably semi-autonomous AI system," was responsible and that its manufacturers should be held accountable, arguing that the manufacturers were guilty of negligence and should have built "foreseeability" into their device systems. This falls under AI law concerning the liability of AI programs and their functions, and could establish a regulatory practice if the claims succeed in court. Nevertheless, the case raises concerns about the liability of artificial intelligence.

Nilsson v. General Motors, LLC
An autonomous vehicle crashed into a motorcycle and its driver, resulting in injury to the plaintiff, who claims that fault lies with the manufacturer of the autonomous vehicle and not the driver. The case resulted in a count of negligence against General Motors LLC and an inspection of the autonomous vehicle. This case could define legal precedent for allegations pertaining to human and AI interaction, specifically the "standard of care of a reasonable person".

Holbrook v. Prodomax Automation Ltd, et al.
An automotive plant employee was killed by a robot operating outside of its designated jurisdiction and sector; the plaintiff claims that the robot was faulty and that the manufacturer is guilty of negligence. This case could result in the development of insurance regulations for smart-robot and machine-learning industries, as well as laws that could reinvent the production process, promoting innovation and investment in the industry and in AI itself.

United States Legal Practices
The case of the DABUS patent application before the USPTO is evidence of strong regulatory practices that deny the individuality and personality of artificial intelligence. The case involved the autonomous software known as DABUS, created by Stephen Thaler et al.; the program's sole purpose is to invent new devices without instruction. One such device was submitted to the EPO for patentability and was rejected, the EPO ruling that "an AI system lacks a legal personality." Thaler then turned to the USPTO (United States Patent and Trademark Office), where his patent application was likewise rejected on the premise that "conception is a mental process that can only be performed by humans." The USPTO reached this decision through deliberation of Univ. of Utah v. Max-Planck-Gesellschaft zur Foerderung der Wissenschaften, which ruled that humans are the only beings capable of invention under U.S. patent law. Furthermore, the SCOTUS ruled that US copyright law "requires human involvement," eliminating the capability of artificial intelligence to hold authorship and individual creativity. These decisions would work toward denying autonomous technologies recognition as sources of individual, human creativity, should artificial intelligence ever evolve to such a degree.

The United States Congress introduced the bill "Fundamentally Understanding the Usability and Realistic Evolution of Artificial Intelligence Act of 2017", also known as the FUTURE of AI Act, housed under the Department of Commerce, which would establish a federal advisory committee to "inform policymakers on the implications and functions of AI." The committee would effectively promote US competitiveness and industry internationally (for AI), promote machine-learning developments for the US workforce, ensure that industries develop unbiased artificial intelligences, and maintain privacy rights in relation to AI-maintained informatics. The primary application of this bill is to develop understanding of artificial intelligence, machine learning, and smart robotics in legal terms, as well as to proliferate information relating to AI itself. The public's collective knowledge of AI is limited, and "poor comprehension of AI also extends to Congress," further limiting the US in global competition and investment in AI. This bill works to counteract this lack of public information by sharing and developing AI technologies in scientific development, ethics training, legal practice, and widespread opportunity for "rural" areas.