User talk:Miguelsnchz723

I plan to edit the section on computationalism on the Wikipedia page for Artificial Intelligence.

Research Log #1
03/05/15

Search Terms:

Nature, nurture, AI, Ethics

Using the library database's access to JSTOR, I looked up these terms and found my first source to use in my article, by David J. Bolter, explaining the philosophy behind nature vs. nurture and how it bears on the human development of intelligence. The link to the article is http://www.jstor.org/stable/20024925. I have added a more in-depth abstract of this source to my annotated bibliography.

Research Log #2
03/12/15

Search Terms:

AI, Nature, Behavior, Philosophy

Using the library database linked to JSTOR, I came across an article by philosopher Tracy B. Henley that gives a breakdown of what AI actually is, the business ventures leading to greater innovation, and the various lethal and non-lethal uses of AI in the military and the workforce. Here is the stable link to the article: http://www.jstor.org/stable/27759223

Research Log #3
03/17/15

Search Terms:

Ethics, AI, Cognition

My last scholarly source is an article by Paul Thagard in which he shows the connection between AI and cognitive psychology. He begins by stating two points relevant to AI: first, that science is done by human thinkers, and second, that the mind can be understood as a computational device, a view known as computationalism. The link to the article is http://www.jstor.org/stable/192417

Article Draft
Computationalism is the idea that “the human mind or the human brain (or both) is an information processing system and that thinking is a form of computing”. AI, the effort to give machines human intelligence, was founded on the claim that “a central property of humans, intelligence, can be so precisely described that a machine can be made to simulate it”. A program can then be derived from this description of the human mind and implemented in an artificial one to create artificial intelligence. This program produces a set of outputs from a set of inputs using the internal memory of the computer; that is, the machine can only act on what has been built into it to start with. A long-term goal for AI researchers is to give machines a deep enough understanding of the many abilities of a human being to replicate a general intelligence, or strong AI, defined as a machine surpassing human abilities at the skills built into it, a frightening thought to many who fear losing control of such a powerful machine. Obstacles for researchers are mainly time constraints: AI scientists cannot easily build a database of commonsense knowledge, because it must be crafted into the machine's ontology by hand, which takes a tremendous amount of time. To combat this, AI research aims to have the machine understand enough concepts to add to its own ontology, but machine ethics, which is primarily concerned with the behavior of machines toward humans or other machines, limits how far such development can go.
In order to function like an ordinary human, AI must also display "the ability to solve subsymbolic commonsense knowledge tasks such as how artists can tell statues are fake or how chess masters don’t move certain spots to avoid exposure." But by developing machines that can do it all, AI research faces the difficulty of potentially putting many people out of work, even as businesses would boom from the added efficiency, forcing AI research into a bottleneck as it tries to develop self-improving machines.

Article Draft 2
Computationalism

Computationalism is the idea that “the human mind or the human brain (or both) is an information processing system and that thinking is a form of computing”. AI, the effort to give machines human intelligence, was founded on the claim that “a central property of humans, intelligence, can be so precisely described that a machine can be made to simulate it”. A program can then be derived from this description of the human mind and implemented in an artificial one to create artificial intelligence. This program produces a set of outputs from a set of inputs using the internal memory of the computer; that is, the machine can only act on what has been built into it to start with. A long-term goal for AI researchers is to give machines a deep enough understanding of the many abilities of a human being to replicate a general intelligence, or strong AI, defined as a machine surpassing human abilities at the skills built into it, a frightening thought to many who fear losing control of such a powerful machine. Obstacles for researchers are mainly time constraints: AI scientists cannot easily build a database of commonsense knowledge, because it must be crafted into the machine's ontology by hand, which takes a tremendous amount of time. To combat this, AI research aims to have the machine understand enough concepts to add to its own ontology, but machine ethics, which is primarily concerned with the behavior of machines toward humans or other machines, limits how far such development can go.
The wiki page section on knowledge representation states that in order to function like an ordinary human, AI must also display "the ability to solve subsymbolic commonsense knowledge tasks such as how artists can tell statues are fake or how chess masters don’t move certain spots to avoid exposure." But by developing machines that can do it all, AI research faces the difficulty of potentially putting many people out of work, even as businesses would boom from the added efficiency, forcing AI research into a bottleneck as it tries to develop self-improving machines.
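The draft's point that a program can only map inputs to outputs using what was built into it to start with can be sketched as a tiny rule-based (symbolic) agent. This is just an illustration of the idea, not code from any of the sources; the rules and names are made up.

```python
# Minimal sketch of the symbolic picture described above: the machine's
# "knowledge" is a hand-crafted table of rules (its ontology), and its
# behavior is a lookup over that table -- it can only act on inputs that
# were anticipated when the rules were written.
# All facts and rule entries here are invented for illustration.

RULES = {
    ("sky", "grey"): "carry an umbrella",
    ("sky", "clear"): "leave the umbrella at home",
}

def act(subject, observation):
    """Map an input to an output using only the built-in rules."""
    return RULES.get((subject, observation), "no rule: cannot act")

print(act("sky", "grey"))      # a known input yields a built-in output
print(act("ocean", "stormy"))  # anything outside the ontology fails
```

This also shows why hand-crafting commonsense knowledge is such a bottleneck: every situation the machine should handle has to be entered into the rule table by a person, unless the machine can somehow extend its own ontology.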