User:Miguelsnchz723/sandbox

Philosophy and ethics
Main articles: Philosophy of artificial intelligence and Ethics of artificial intelligence

Alan Turing wrote in 1950, "I propose to consider the question, 'Can machines think?'"[161] and began the discussion that has become the philosophy of artificial intelligence. Because "thinking" is difficult to define, philosophers have addressed two versions of the question. First, can a machine be intelligent? That is, can it solve all the problems that humans solve by using intelligence? And second, can a machine be built with a mind and the experience of subjective consciousness?[171]

The existence of an artificial intelligence that rivals or exceeds human intelligence raises difficult ethical issues, both on behalf of humans and on behalf of any possible sentient AI. The potential power of the technology inspires both hopes and fears for society.

2/22/15
So I'm going to be focusing my research on AI and on whether the human mind can be duplicated in a machine, which is a fundamental focus of AI research. I have found three articles on JSTOR that will provide the basis for what it takes for a machine to "think" like a human would, as well as the qualities a machine must exhibit in order for it to be considered AI. I also want to touch on the paradox that AI research exhibits: machine ethics is centrally concerned with how machines behave toward humans and other machines, but without a guarantee that human life won't be devalued, funding for research to advance AI into a strong AI (a machine that excels at reasoning, knowledge, planning, learning, communication, perception, and the ability to move and manipulate objects) faces a roadblock.
1)	http://www.jstor.org/stable/20024925
2)	http://www.jstor.org/stable/27759223
3)	http://www.jstor.org/stable/192417

2/24/15
1)	http://www.jstor.org/stable/20024925 Bolter, David J. "Artificial Intelligence." Daedalus 113.3 (1983): 1-18. JSTOR. JSTOR. Web. 22 Feb. 2015.

Abstract: Nature vs. Nurture. The two main questions behind AI are: can the machine think, and is it intelligent? In his article "Artificial Intelligence," David Bolter focuses on the philosophical idea of nature vs. nurture. Philosophy is defined as the study of the fundamental nature of knowledge, reality, and existence, and nature vs. nurture — the question of how much of our understanding is hereditary and how much is taken from our surroundings — fits that definition well. Bolter opens the article with a description of AI research and of how researchers hope to implement the human mind in a machine by mapping out its functions the same way software runs on a computer. He explains that the Department of Defense provides a great amount of support to develop these programs, for obvious reasons. Midway through the article he explains that the goals of artificial intelligence have grown into achieving the complete assimilation of man and machine, while opponents of AI, like Hubert Dreyfus and Joseph Weizenbaum, find this completely absurd, dangerous, or both. I have not gotten a chance to examine my other two articles, but I will do so later this week.

3/1/15
2)	http://www.jstor.org/stable/27759223 Henley, Tracy B. "Natural Problems And Artificial Intelligence." Behavior And Philosophy (1990): 43-56. JSTOR. JSTOR. Web. 22 Feb. 2015.
-	Abstract: What is AI? What is it used for, and what are the natural problems associated with it? Henley opens the article by referring to AI as a major innovation for business endeavors everywhere because of its promise of automated expertise. Henley then describes how militaries are looking into funding the development of expert systems, which are long-term investments and, if they prove to be truly intelligent, could serve unlimited possibilities for the military and certain industries. Intelligence in military or industrial settings means bottom-line performance: what gets the job done best relative to what the consumer is paying for research and development. Henley then goes on to show why our understanding of actual AI is misrepresented. The first problem is that there is no settled definition of what intelligence really is. The second is that the use of expert systems in military and industrial environments has established what some may consider inadequate guidelines for judging expert systems correctly.

3/4/15
3)	http://www.jstor.org/stable/192417 Thagard, Paul. "Artificial Intelligence, Psychology, and the Philosophy of Discovery." Proceedings of the Biennial Meeting of the Philosophy of Science Association (1982): 166-175. JSTOR. JSTOR. Web. 22 Feb. 2015.
-	Displays the connection between AI and cognitive psychology. Begins by stating two points relevant to discovery: first, that science is done by human thinkers, and second, that the mind can be understood as a computational device, a view known as computationalism. Thagard feels that it is in AI that mechanisms can be exactly specified and explored, so AI is used as a theoretical device to develop ideas in cognitive science. He also demonstrates connections among artificial intelligence, psychology, and the philosophy of discovery. The article mainly focuses on answering the question of whether artificial intelligence is relevant to the philosophy of science. It seems a bit biased in places, but overall it provides a good basis for why AI is relevant to cognitive science, which also revolves around the question of whether the human brain can be represented as a computer program in machines.

Article Draft
Computationalism

Computationalism is the idea that "the human mind or the human brain (or both) is an information processing system and that thinking is a form of computing". AI, or implementing machines with human intelligence, was founded on the claim that "a central property of humans, intelligence, can be so precisely described that a machine can be made to simulate it". A program could then be derived from the human "computer" and implemented in an artificial one to create efficient artificial intelligence. This program would produce a set of outputs determined by its inputs and the internal memory of the computer; that is, the machine can only act with what has been implemented in it to start with. A long-term goal for AI researchers is to give machines a deep understanding of the many abilities of a human being in order to replicate a general intelligence, or strong AI, defined as a machine matching or surpassing human ability at the skills implanted in it, a scary thought to many, who fear losing control of such a powerful machine. The main obstacle for researchers is time constraints: AI scientists can't easily establish a database of commonsense knowledge, because it must be ontologically crafted into the machine, which takes a tremendous amount of time. To combat this, AI research looks to make the machine able to understand enough concepts to add to its own ontology. But how can it do this when machine ethics is primarily concerned with the behavior of machines toward humans and other machines, limiting the extent to which AI can be developed?
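The input-output picture of computationalism can be sketched as a small toy program: a machine whose next action is fully determined by its current input and its internal memory. This is only an illustrative sketch; the function names, states, and transition table below are invented for this example and do not come from any of the cited articles.

```python
# Toy illustration of computationalism: behavior is a function of
# (input, internal memory), and nothing else.

def make_machine(transitions, start_state):
    """Build a machine from a table mapping (state, input)
    pairs to (next_state, output) pairs."""
    state = [start_state]  # internal memory

    def step(symbol):
        next_state, output = transitions[(state[0], symbol)]
        state[0] = next_state  # update internal memory
        return output          # the machine acts only on what was built into it

    return step

# A trivial "mind" that flips between two moods based on input words.
transitions = {
    ("calm",  "insult"):     ("upset", "frown"),
    ("calm",  "compliment"): ("calm",  "smile"),
    ("upset", "insult"):     ("upset", "frown"),
    ("upset", "compliment"): ("calm",  "smile"),
}

machine = make_machine(transitions, "calm")
print(machine("insult"))      # frown
print(machine("compliment"))  # smile
```

The point of the sketch is the limitation noted above: every response is already contained in the transition table, so the machine cannot answer an input it was never given a rule for.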
The wiki page section on knowledge representation states that in order to function like an ordinary human, AI must also display "the ability to solve subsymbolic commonsense knowledge tasks such as how artists can tell statues are fake or how chess masters don’t move certain spots to avoid exposure." But by developing machines that can do it all, AI research faces the difficulty of potentially putting many people out of work, even though, on the economic side, businesses would boom from the efficiency. This forces AI research into a bottleneck in trying to develop self-improving machines.

COMMENT
Miguel, in what way is nature vs nurture a philosophy? Work to be more precise in your claims. Jbdolphin (talk) 06:54, 13 April 2015 (UTC)