Talk:Artificial intelligence/Archive 3

Length
Because of their length, the previous discussions on this page have been archived. If further archiving is needed, see How to archive a talk page.

Business part and the amount of robots
That 1995 information on the amount of robots world wide should be replaced. —Preceding unsigned comment added by 91.155.157.163 (talk) 18:21, 6 October 2007 (UTC)

Link to peer reviewed paper
Hi, I recently added some new information regarding the comparison of various classification techniques with a reference to a peer reviewed article. There seems to be some controversy on this subject; the link has been removed several times. I am currently doing my PhD on this topic and I know the information is very relevant.

Is the reference and the external link http://www.pattenrecognition.co.za suitable for this site? If not, what can I do so that this information is not repeatedly removed?

cvdwalt
 * There are lots of pages about AI on Wikipedia. I personally believe that the main AI page should be as general and introductory as possible; I wonder if your information should be on the Pattern Recognition page, or under its own subheading? Bugone 08:38, 9 March 2007 (UTC)


 * Some minor points: it would have been better to use the "ref" tag so there is a link between your information and the link to your site; look at the reference in the opening paragraph for an example. Perhaps the link to your site should have been added towards the bottom of the list of external links; try to keep the more important links at the top. Having said that, I'm sure your expertise could be very useful around here. I hope you don't get discouraged; plenty of work for everyone. Bugone 08:51, 9 March 2007 (UTC)


 * The new information claimed above contains no information about AI - simply a report that there is a paper on this topic (not in itself a remarkable fact). The link to the paper is spam IMHO. (signature added, apologies) Michael Fourman 11:37, 9 March 2007 (UTC)


 * I fixed the link so it's direct to the paper, so it's not obvious advertising, but I agree it doesn't add information. It's hard to justify its importance unless it mentions some conclusions or how the research is relevant. Actually, it has been added to the pattern recognition page now, which I suspect is where it belongs, so it really needs to be tied in better here or removed. cvdwalt, can you tie it in better? Bugone 03:48, 10 March 2007 (UTC)

Sept 2006
(originally placed in the middle of another message, Gwernol 07:10, 7 September 2006 (UTC)):

1956: Demonstration of the first running AI program, the Logic Theorist (LT), written by Allen Newell, J.C. Shaw and Herbert Simon at Carnegie Mellon University. 1979: The Stanford Cart, built by Hans Moravec, becomes the first computer-controlled, autonomous vehicle when it successfully traverses a chair-filled room and circumnavigates the Stanford AI Lab.
 * The following data, available at certain websites (confirmed by the availability of the same data in more than one site), are seen to disagree with the information at Wikipedia.

No references to cyc/AM yet?

"Hello, Professor Falken..."
There is also the AI of Joshua in the MGM film WarGames. Very good movie from the 80's, and Joshua is a type of AI that learns how to learn. --Seishirou Sakurazuka 23:38, 7 November 2006 (UTC)

Poor definition
1st sentence says AI is "the study of ...". It is not! It is intelligence which is synthetic / manufactured / man-made. The study of AI is not AI! "Life" is not the study of living things; that's "biology". "Intelligence" is not the study of clever and stupid things; it's an ability or a potential for creative thought. And "artificial intelligence" is not the study of anything! The article definition is wrong. Paul Beardsell 09:26, 18 December 2006 (UTC)

Killer example. When someone says that "artificial intelligence is impossible" (or imminent or whatever) it is not the study of something which is being referred to. Paul Beardsell 09:54, 18 December 2006 (UTC)


 * We get it! (but why did you write it then?) --moxon 12:55, 19 December 2006 (UTC)


 * Too much spare time? On a very similar naming issue at artificial life my argument was not accepted.  There it is held that artificial life is the academic discipline, not the thing itself.  If you agree so readily with me perhaps you could review artificial life?  Paul Beardsell 20:51, 19 December 2006 (UTC)


 * Perhaps it is the case that "AI" can refer both to 'intelligence which is synthetic / manufactured / man-made' and the study of this? It does not have to be either one or the other. S Sepp 16:23, 20 December 2006 (UTC)


 * I think it is also used that way but that such usage has arisen out of shorthand or laziness. Context sometimes allows that by AI what is meant is "the study of AI" or "research into AI". But, without context, it means the thing itself. Paul Beardsell 22:58, 20 December 2006 (UTC)

Darpa prize money
Darpa has since secured the prize money for the 2007 grand challenge. It's $2 million for first place, $1 million for second, and $500,000 for third. 24.107.153.58 06:06, 30 December 2006 (UTC)

The 1994 VaMP is probably the most influential driverless car ever. Is anybody out there in a legal position to upload an image of the VaMP? Science History 13:37, 15 June 2007 (UTC)

Challenge and Prize - Purpose?
Could somebody explain what the purpose of this section is? Why is it even mentioned? The DARPA Grand Challenge isn't the only challenge of this kind (I believe the UK MOD runs a similar prize with autonomous submarines). The title of the section is misleading and the section seems out of place when describing AI in general. DPMulligan 14:59, 1 January 2007 (UTC)

AI in business
Says 750,000 robots were in use worldwide, 500,000 from Japan. From Japan or in Japan? There's a big difference between the two. Peoplez1k 01:48, 18 January 2007 (UTC)

I think this section needs some work; it reads like a list. Bugone 03:04, 10 March 2007 (UTC)

General Quality
Given the amount of erudition evident on this (talk) page, the page we're talking about is an embarrassment, especially in the history section. The selection of systems discussed seems bizarre (and probably a result of self or friend promotion), and impoverished. For example, a surge in interest in game theory (if we agree that this is one of the major recent features of AI research) certainly is not called "Machine Learning" (corrected). And applications of Bayes theory still seem to be very important (viz the recent development of probabilistic logics) and have little or nothing to do with principal components (that bit I deleted wholesale).

Basically, this page seems to be under attack by vandals or people ignorant of its subject matter (or possibly Steven Colbert). Witbrock 19:03, 28 January 2007 (UTC)


 * ✅ This should be fixed now CharlesGillingham (talk) 01:32, 8 December 2007 (UTC)

Programming language
Computers need help to even get their inner workings right. With every new programming language the computer gets more independent: we have meaningful variables instead of memory locations, and now even garbage collection. But it is Sisyphus-work. With every more complex program, we (and not the computer) invent new IT techniques. Why can the computer not sort its own hard drive? At least it can do defragmentation on its own, but new file systems are really hard work. And math systems like Maple do an amazing job, but they do not reflect on themselves: nothing of the power of Maple or Mathematica is used to improve the program itself. As a typical application, I want the computer to optimize my code for speed and memory footprint. Optimization was done by hand for a long time, and now the optimization techniques are programmed by hand; wow, one step ahead, and the compiler price is as high as a skyscraper because of the many man-hours. How can we expect that a computer will be able to do something meaningful in the real world? Even solve our problems, care for something? Anyhow, I am still happy with its number-crunching abilities. Arnero 16:27, 9 February 2007 (UTC)

Opening paragraph
On 16 Jan something bad happened to the opening paragraph which has been left unchanged until 12 Feb when Bugone attempted to "cleanup the rambling". I will try and restore some of the original content. --moxon 14:40, 15 February 2007 (UTC)
 * Looks good. I've also been trying to clean up the history section. I believe the history section in this page should just be a summary of what is in the "History of Artificial Intelligence" page, key moments so to speak; for the finer details they can see the main history page. The events I put in this summary may be POV, so perhaps someone else could look over what were some of the key moments -- bugone

In the opening Paragraph it says "AI is studied in overlapping fields of computer science, psychology, philosophy, neuroscience, and engineering,..." I think linguistics is missing there. -- Simon (theother@web.de) —Preceding unsigned comment added by 85.179.207.159 (talk) 20:26, August 28, 2007 (UTC)

History section
I've put the timeline into table form, but the entries themselves need to be looked at. In particular, it strikes me as odd that nothing of note happened between 1974 and 1997. One would think there must have been some major developments between those years that would fit into this article. Robotman1974 11:42, 8 March 2007 (UTC)
 * Perhaps it would be appropriate to put a link to AI_Winter, which talks about the lull in research/funding at that time. It's not actually an event though, more a lack of events, so I'm not sure it belongs. Bugone 23:22, 8 March 2007 (UTC)

Good work with the history section. I suggest we move it to the top. Its closing paragraph can form a prelude to "Schools of thought". And if any of you can think of a way to prevent every second sci-fi fan (including me:) from adding a paragraph about his favourite character to the fiction section, it would help. --moxon 18:41, 9 March 2007 (UTC)

A.I
Is it not a misery! I think we should add something about the criticism of the article, because is it not disappointing that scientists make robots just to test their ability to coordinate, and make human-looking robots, instead of finding ways to clear mines or sending bots to Mars to explore areas on the planet? 82.114.81.148 21:27, 22 March 2007 (UTC)

AI languages and programming styles
IMO this section discredited AI with overgeneralized statements and oversimplified examples:
 * Bayesian work often uses Matlab or Lush
 * real-time systems are likely to use C++
 * prototyping rather than bulletproof software engineering
 * basic AI program is a single If-Then statement + All programs have If-Then logic
 * but actually it is an automated response
 * --moxon 10:59, 18 May 2007 (UTC)
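For contrast with the "single If-Then statement" claim criticized above, here is a minimal sketch of a forward-chaining production system. The toy knowledge base is invented for illustration; the point is only that rules fire repeatedly against a working memory of facts, re-triggering each other, rather than executing once like a lone if-then.

```python
# Minimal forward-chaining production system (illustrative sketch only).
# Rules fire against a working memory of facts until a fixed point is reached.

def forward_chain(facts, rules):
    """facts: iterable of strings; rules: list of (premises, conclusion)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)  # rule fires, extending working memory
                changed = True         # re-scan: new facts may enable more rules
    return facts

# Hypothetical toy knowledge base:
rules = [
    ({"has_wheels", "has_engine"}, "is_vehicle"),
    ({"is_vehicle", "drives_itself"}, "is_autonomous_car"),
]
derived = forward_chain({"has_wheels", "has_engine", "drives_itself"}, rules)
```

Note the second rule can only fire after the first has added "is_vehicle" to working memory; that chaining is what a single if-then statement cannot do.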


 * ✅ This is fixed in the current version CharlesGillingham (talk) 02:01, 8 December 2007 (UTC)

Sims as Landmark in AI History
Is The Sims really a landmark in AI history? I think it's not, and its reference in the AI timeline should be removed.
 * -- AFG 16:56, 18 May 2007 (UTC)

I agree. Most of the other things in the list are watershed events: major changes in the field of AI. While it's impressive that the Sims is the best-selling computer game of all time, I'm not aware that the AI in The Sims is significant from a standpoint of pushing the field of research forward. Colin M. 03:15, 22 May 2007 (UTC)


 * ✅ Sims are long gone. CharlesGillingham (talk) 02:01, 8 December 2007 (UTC)

General comments
The best quality of this article is that it is short. The 'Mechanisms' section needs a complete re-write, however.

The 'if-then' opening paragraph is oversimplifying things to an extreme. I like the idea of talking about inference in the beginning, though. If you can do inference, then you can make decisions. Then the question becomes: what type of inference, how do you choose decisions based on your inference, and how is the inference made? In some cases the inference is trivial, e.g. optimization problems (where it is easy to compute the cost for any given solution and the problem is restricted to searching); in other cases the decision making is trivial, e.g. classification problems (where there is a small number of finite classes with a known cost for misclassification and the difficulty is in inferring the correct model for the probability of a class given some observations).
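The classification case described in the comment above, where the decision step is trivial once inference is done, can be sketched as follows. This is a purely illustrative example with invented class posteriors and misclassification costs, not anything from the article: the decision is simply the class with minimum expected cost under the inferred posterior.

```python
# Sketch: given inferred class posteriors and a misclassification-cost
# matrix, the *decision* step is a one-liner. All names and numbers here
# are hypothetical, for illustration only.

def decide(posteriors, cost):
    """posteriors: {class: P(class | observation)}; cost[predicted][true]."""
    def expected_cost(predicted):
        # Expected cost of announcing `predicted`, averaged over true classes.
        return sum(cost[predicted][true] * p for true, p in posteriors.items())
    return min(posteriors, key=expected_cost)

posteriors = {"spam": 0.7, "ham": 0.3}
# Asymmetric costs: filing real mail as spam is 10x worse than the reverse.
cost = {"spam": {"spam": 0, "ham": 10}, "ham": {"spam": 1, "ham": 0}}
choice = decide(posteriors, cost)  # expected costs: spam=3.0, ham=0.7 -> "ham"
```

With a symmetric 0/1 cost matrix this reduces to picking the argmax posterior; the hard part, as the comment says, is inferring the posteriors in the first place.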

The split between two 'groups' of AI techniques is unfortunate at best. Probabilistic (i.e. Bayesian) techniques can be incremental and distributed. I don't think it is of any value to split models and paradigms into mutually exclusive categories. There are many orthogonal characteristics, e.g. ad-hoc approaches versus principled approaches, and biologically-inspired versus mathematically-inspired.

The other splits in those cases seem to be misinformed at best. There are 'neural networks', for example, which can be a type of Bayesian system (if you use Bayesian formalisms), yet Bayesian networks are listed in a different category. Also, expert systems and case-based reasoning systems are applications of AI, not AI models. You can use any model and formalism you want for implementing these systems.

A more sensible split in my opinion would be one that talks about the different research/application fields: modeling, inference and estimation, planning, control and decision-making and optimization. Of course each of these fields incorporates aspects of the other fields. This should also be made clear in this section. --Olethros 10:42, 22 May 2007 (UTC)


 * Agreed. It would be nice if the "split" between fields could be something defined in an authoritative source (perhaps the Russell & Norvig book) so that there's an in-print justification for choosing a specific set of categories. Colin M. 12:41, 23 May 2007 (UTC)


 * Agreed. I have added an Expert tag until this page can be fixed (see below). CharlesGillingham 23:54, 27 July 2007 (UTC)


 * ✅ All these issues have been resolved. CharlesGillingham (talk) 01:36, 8 December 2007 (UTC)

Layout of page
Some of this page's text appears to have disappeared underneath a picture of Kasparov playing chess against Deep Blue - would somebody who understands the formatting better than me be able to tidy it up? Alchemeleon 17:55, 30 May 2007 (UTC)
 * I moved the template and it seems to be fixed now. -- Schaefer (talk) 19:06, 30 May 2007 (UTC)

Fiction
I moved this ever growing section to its own sub-article where different plots can be described and compared in more detail with examples. A different article name might be more appropriate. --moxon 13:22, 13 June 2007 (UTC)

Prometheus Project
Was this a challenge? This paragraph might be more appropriately added to the List of notable artificial intelligence projects. Was any AI involved? I didn't see any references to AI. --moxon 16:52, 13 June 2007 (UTC)


 * ✅ I agree, and this is gone now. CharlesGillingham (talk) 02:02, 8 December 2007 (UTC)

Bayesian networks: traditional AI or soft computing?
This page lists Bayesian networks as "traditional AI" and the soft computing page claims them as under "soft computing". Which is correct, and which page should be corrected?

Bob Petersen


 * This one was wrong. This has been corrected. CharlesGillingham (talk) 01:41, 8 December 2007 (UTC)

Topic of "problems in creating AI"
Why is there no topic on the problems in creating AI? I talked with an expert in computers who told me AI can't yet reliably understand spoken language. Google search engine programmers say the same: that search engines can be very stupid. Where are the pros and cons on the success/failure of AI? --Mark v1.0 07:29, 1 July 2007 (UTC)


 * Absolutely. If you define the problems, then you can also discuss proposed solutions and how different schools of AI subscribe to very different solutions, for example Neats vs. scruffies, top-down and bottom-up, computational intelligence vs GOFAI. This goes a long way towards explaining how AI research defines itself. Some information about some of the major problems is at
 * History of artificial intelligence
 * CharlesGillingham 21:45, 27 July 2007 (UTC)

Expert Tag
I have tagged the main section of this article as needing the attention of an expert. The problems, as I see them, are:


 * 1) Emphasis: Some subjects that are not key aspects of artificial intelligence (like classifiers) are given too much emphasis. Key areas like natural language processing are not even mentioned outside the lead paragraph.
 * 2) Organization: The article could be reorganized by "schools of thought" or "approaches", or by applications (i.e. computer vision, natural language processing, expert systems, etc.), but it needs some kind of consistent organization.
 * 3) Misinformation: The article incorrectly states that AI programs are "generally" production systems. The description of "fuzzy logic" is inaccurate. I'm not sure the division of AI into two competing subfields has been done accurately, or if it's even true that there are exactly two subfields. For example, who would consider Rodney Brooks "conventional AI"?

This article needs some reference that could define the "key areas of AI research", commercially and academically. There needs to be some expert source that determines what should and should not be included in this article. CharlesGillingham 23:51, 27 July 2007 (UTC)


 * ✅ All these issues have been resolved. CharlesGillingham (talk) 01:41, 8 December 2007 (UTC)

VaMP (1995) vs 2005 DARPA Grand Challenge
We better delete most of the references to this heavily promoted DARPA race which actually did not achieve any AI milestone at all. The milestones in autonomous driving were set 10 years earlier by the much more impressive car robots of Mercedes-Benz and Ernst Dickmanns. Let us compare his VaMP robot car to the five cars that finished the course of the 2005 DARPA Grand Challenge: Stanley & Sandstorm & H1ghlander & TerraMax & Kat-5. My sources are mostly Wikipedia and the rest of the web. In 2005, the DARPA cars drove 212 km without human intervention. In 1995, the VaMP drove up to 158 km without human intervention. The DARPA cars drove on a dirt road flattened by a steamroller. The VaMP drove on the Autobahn. In both cases the road boundaries were easily identifiable by computer vision. Like many commercial cars, the DARPA cars used GPS navigation, essentially driving from one waypoint to the next (almost 3000 waypoints for the entire course, several waypoints per curve). Like humans, the VaMP drove by vision only. The DARPA cars reached speeds up to 40 km/h. The VaMP reached speeds up to 180 km/h. So the VaMP was more than four times faster although its computer processors apparently were 1000 times slower. The DARPA cars did not encounter any traffic but a few stationary obstacles. The VaMP drove in traffic around moving obstacles, passing other cars. Interestingly, the 2007 Urban Grand Challenge is trying to repeat something the VaMP was already able to do 12 years ago. Willingandable 18:07, 24 August 2007 (UTC)
 * I would like to see some technical comparison of the hard and soft computational abilities between these two approaches. --moxon 15:00, 27 August 2007 (UTC)

Topic of "problems in creating AI"
It seems to me that an encyclopedic article on AI must address the persistent challenges and setbacks. "AI Winter" is briefly mentioned, and is better described in the separate page on the "History of AI." Yet, here, the problem of unfulfilled expectations is glossed over. In fact, an example of overstatement is found in this very article, where it says: "AI logistics systems deployed in the first Gulf War save the US more money than spent on all AI research since 1950[citation needed]." Why place an unsubstantiated assertion in an article? I suggest that the article should stick to the verifiable facts. Timlevin 03:47, 27 August 2007 (UTC)

AI in Films
Any mention of AI in films? CoolRanch3 17:07, 17 September 2007 (UTC)
 * See artificial intelligence in fiction CharlesGillingham 20:30, 17 September 2007 (UTC)

AI in Fiction
Added many of the classic fictional examples of AI to the Artificial intelligence in fiction. Listed them thematically. Two concerns I have:

Thomas Kist 23:59, 23 September 2007 (UTC)
 * 1) Robot rights should probably come back to the main page
 * 2) I think a lot of people will miss the fiction page based on the current main page layout. Perhaps the fiction subsection on the main page should be rewritten to be less of a list and more of an intro. Any other thoughts?

Reorganization proposal
I would like to re-organize this article into a tight summary style and bring it closer to sources like Artificial Intelligence: A Modern Approach. If you'd like to help (or have some objection) please see Talk:Artificial intelligence/Structure proposal. CharlesGillingham 23:27, 2 October 2007 (UTC)

Here's the latest version of the proposal, as it is now, transcluded:


 * Perspectives on AI (is there a better title?)
 * AI in myth and fiction
 * More or less as written in artificial intelligence
 * The rise and fall of AI in public perception
 * More or less as written in artificial intelligence
 * The philosophy of AI
 * More or less as written in artificial intelligence
 * The future of AI
 * Very short description of futurists like Ray Kurzweil and the possibility of Strong AI
 * More or less as written in artificial intelligence


 * AI research
 * Approaches to AI research
 * Short bulleted list of neats vs. scruffies, GOFAI vs computational intelligence, GOFAI vs embodied/situated/Nouvelle AI, strong AI vs. "Applied AI"/intelligent agent paradigm, etc.
 * Problems of AI research
 * A bulleted list of all the major sub-problems of AI, with short paragraphs on deduction, natural language processing, computer vision, machine learning, automated planning and scheduling, commonsense knowledge/knowledge representation, and so on. as described in Russell and Norvig or other major AI textbooks.
 * Tools of AI research
 * A bulleted list, with short paragraphs on logic programming, search algorithms, optimization, constraint satisfaction, evolutionary algorithms, neural networks, fuzzy logic, production system, etc. as described in Russell and Norvig or other major AI textbooks.


 * Applications of artificial intelligence
 * A bulleted list with short paragraphs or sections on some of the applications expert systems, data mining, driverless cars, game artificial intelligence, etc.

As an alternative to "Perspectives on AI", how about "AI in popular discourse" or something along those lines? Because it seems to me the unifying characteristic of the heading's subtopics is that they are nontechnical and described mostly in writings intended for general lay audiences, rather than for other specialist academics. This applies least for the philosophy of AI, which does have a foundation of rigorous academic work on which to write an article, so perhaps it should be pulled out as a new top-level section between the popular stuff (SF, history, and predictions) and the gritty details of modern AI methods. -- Schaefer (talk) 16:57, 3 October 2007 (UTC)


 * I was thinking that history, philosophy, fiction and futurism are all subjects where people talk about AI but don't do AI. That way the next section can be about how AI is done. CharlesGillingham 19:05, 3 October 2007 (UTC)

I think descriptive paragraphs are essential for a good article - bulleted lists might be a bit too tight. They also tend to grow uncontrollably. You might have some resistance if subtopics like challenges and AI in business and toys are completely dropped; I guess they could form part of "Applied AI"? I am a bit concerned that major AI textbooks primarily focus on symbolic AI. These are not the sources used in electronic engineering (my POV). But do the restructuring and let's see what happens :-) --moxon 12:42, 5 October 2007 (UTC)


 * Absolutely. A paragraph each for each sub-sub-topic (like "natural language processing"). And I want to move the paragraphs about games, business, driverless cars, etc. into the last section "Applications of AI". Nothing would be lost except the middle section now called Artificial intelligence.


 * I'm very into including the perspectives from EECS.


 * Would you be interested in helping me write some paragraphs? I am very familiar with older topics, like knowledge representation or logic but I'm a little fuzzy on newer ones, like Bayesian networks. I'll put the current draft at Talk:Artificial intelligence/AI research (draft) CharlesGillingham 00:52, 7 October 2007 (UTC)

I'll help where I can and when I have the time (sorry about the long silence). I'd be more comfortable with a migration of the article rather than a replacement. It would keep more editors involved in the process. It wouldn't be too difficult to implement most of the suggested structural changes. --moxon 20:52, 10 November 2007 (UTC)

I agree with your proposal on the "Approaches to AI research" section. Currently that section is confusing, trying to fit everything into two categories, which is simply not the case. Classifying them from different perspectives would be a much better choice, as you proposed. Also some statements there seem problematic to me. For example: "Conventional AI mostly involves methods now classified as machine learning..." I think machine learning was never the mainstream in GOFAI. Instead, knowledge representation and reasoning is an important topic for GOFAI, but is not mentioned at all in that section. Took (talk) 10:42, 23 November 2007 (UTC)


 * I have carried out most of this plan. See notes below. CharlesGillingham (talk) 01:54, 29 November 2007 (UTC)

Section "Why Artificial Intelligence Needs Philosophy"
The section newly added by Pooya.babahajyani (talk) on 6 October 2007 seems to be copied from "Some philosophical problems from the standpoint of artificial intelligence" by John McCarthy and Patrick J. Hayes, see http://www-formal.stanford.edu/jmc/mcchay69/node3.html. Therefore I will remove it. --Ioverka 02:19, 7 October 2007 (UTC)

Loebner Prize In "Research Challenges" Section.
Hi, I think the Loebner Prize should be included in the "Research Challenges" section. Any comments about this? —Preceding unsigned comment added by 148.87.1.172 (talk) 16:54, 19 November 2007 (UTC)
 * ✅ CharlesGillingham (talk) 01:38, 8 December 2007 (UTC)

Major expansion
I have expanded this article to include most of the topics covered by major AI textbooks and online summaries of AI. (I've posted my research at Talk:Artificial intelligence/Textbook survey). I'm hoping that this brings the article within striking distance of the featured article criteria.

The only thing I deleted was the original "approaches to AI" section, which had too many problems to be saved. I expanded and added sources to some of the existing sections, but they are mostly intact. CharlesGillingham (talk) 03:02, 29 November 2007 (UTC)

Rewrite of AI in myth and fiction
I've been considering a re-write of the fiction section in the main article (still working on the 2nd and 4th paragraphs), but I thought I'd put up where I am after I saw the ToDo list.

Reasons:
 * Don't want the main AI page to have lists of fictional AI - I'd like to direct the reader to more specific lists on other pages
 * Portrayals of AI are broader than an "up and coming power"

Beings created by man have existed in mythology long before their currently imagined embodiment in electronics (and to a lesser extent biochemistry). Some notable examples include golems and Frankenstein. These, and our modern science fiction stories, enable us to imagine that the fundamental problems of perception, knowledge representation, common sense reasoning, and learning have been solved, and let us consider the technology's impact on society. With Artificial Intelligence's theorized potential equal to or greater than our own, the impact can range from service (R2D2), cooperation (Lt. Commander Data), and/or human enhancement (Ghost in the Shell) to our domination (With Folded Hands) or extermination (Terminator (series), The Matrix (series), Battlestar Galactica (re-imagining)). Given the negative consequences, ranging from fear of losing one's job to an AI, through the clouding of our self image, to the extreme of the AI Apocalypse, it is not surprising that the Frankenstein complex would be a common reaction. Subconsciously we demonstrate this same fear in the Uncanny Valley hypothesis. See AI and Society in fiction for more ...

With the capabilities of a human, an AI can play any of the roles normally ascribed to humans: protagonist, antagonist, comic relief. The creation of sentient machines, human-level intelligence, is the holy grail of AI. The following stories deal with the birth of artificial consciousness and the resulting consequences. (This section deals with the more personal struggles of the AIs and humans than the previous one.) See Sentient AI in fiction ...

While most portrayals of AI in science fiction deal with sentient AIs, many imagined futures incorporate AI subsystems in their vision: such as self-navigating cars and speech recognition systems. See non-sentient AI in fiction for more ...

The inevitability of the integration of AI into human society is also argued by some science/futurist writers such as Kevin Warwick and Hans Moravec, and the manga Ghost in the Shell.

Thomas Kist (talk) 17:00, 29 November 2007 (UTC)


 * I'm not sure what you're saying. This all looks good. You should add it. (The article artificial intelligence in fiction could also use more analysis and fewer lists). CharlesGillingham (talk) 21:50, 29 November 2007 (UTC)

New topic

 * Does not Wikipedia constitute a basis for a hybrid human-artificial intelligence? It already uses tactics that are used in CYC. It draws reflexively on its own references as well as others. It is competitive. I saw it bump a university off Google's listings for first place on the subject of artificial intelligence. It grows sometimes by big leaps, but more often by a disarmingly productive global growth in small increments, each cunningly designed. Etc. It is clearly on Earth's side, and fits into the evolutionary emergence of silicon alongside diatoms and Equisetum. What if Wikipedia is not within the destiny of human control? SyntheticET (talk) 18:03, 7 December 2007 (UTC)


 * Forgive me for moving this to a new topic. CharlesGillingham (talk) 01:37, 8 December 2007 (UTC)


 * A program devised by Evgeniy Gabrilovich and Shaul Markovitch of the Technion Faculty of Computer Science at the Technion - Israel Institute of Technology helps computers map single words and larger fragments of text to a database of concepts built from the online encyclopedia Wikipedia; this then helps in making broad-based connections between topics, to aid in filtering e-mail spam, performing searches and conducting electronic intelligence gathering at a more sophisticated level, according to this ScienceDaily release. Pawyilee (talk) 15:02, 9 December 2007 (UTC)
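The concept-mapping idea described in the release above can be sketched roughly as follows. This is only an illustrative sketch: the word-to-concept weights are invented placeholders, not the actual Gabrilovich/Markovitch data. Each text fragment becomes a weighted vector over Wikipedia concepts, and fragments are then compared by cosine similarity in concept space.

```python
# Sketch: map words to weighted Wikipedia concepts, sum the weights into a
# concept vector per text, and compare texts by cosine similarity.
import math

word_to_concepts = {  # hypothetical inverted index: word -> {concept: weight}
    "bank":  {"Bank_(finance)": 0.9, "River": 0.4},
    "loan":  {"Bank_(finance)": 0.8},
    "river": {"River": 0.9},
}

def concept_vector(words):
    """Sum per-word concept weights into one sparse concept vector."""
    vec = {}
    for w in words:
        for concept, weight in word_to_concepts.get(w, {}).items():
            vec[concept] = vec.get(concept, 0.0) + weight
    return vec

def cosine(u, v):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(u[k] * v.get(k, 0.0) for k in u)
    norm = (math.sqrt(sum(x * x for x in u.values()))
            * math.sqrt(sum(x * x for x in v.values())))
    return dot / norm if norm else 0.0

sim = cosine(concept_vector(["bank", "loan"]), concept_vector(["bank", "river"]))
```

Two fragments with no surface words in common can still score as similar here, as long as their words map to overlapping concepts, which is the broad-based connection the release describes.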

Samuel Butler
I'm not sure I agree that "Samuel Butler first raised the possibility of mechanical consciousness", since there are conscious machines described by Homer and other classical sources. Is there a source that claims this? CharlesGillingham (talk) 00:04, 22 December 2007 (UTC)


 * I wrote Samuel Butler first raised the possibility of "mechanical consciousness"..., NOT was the first to, because I have no way of knowing just who was the first. I'm even treading on thin ice there as I have not read the article he wrote for The Press and am only going by the heading it was given, Darwin Among the Machines. I frankly expected an objection to including Butler because he referred, in Erewhon, to emergence by means of Darwinian Evolution, specifically by Natural selection, so that would make his "mechanical consciousness" a form of natural, not artificial, intelligence. How did Homer handle it? Pawyilee (talk) 14:26, 22 December 2007 (UTC)


 * PS The entry reads OK if you delete the word "first". I have no objection to your doing that, especially if you pay me back with Homer! Pawyilee (talk) 14:26, 22 December 2007 (UTC)


 * Hephaestus' golden robots are mentioned in Homer's Iliad, and I think Homer may mention Galatea as well, although I can't remember. So, in answer to your question, classical sources imagine that artificial intelligence can be created by the craftsmanship of masters, like Hephaestus, Pygmalion or Daedalus. Samuel Butler's ideas are fascinating. My only question is about how his ideas fit into the history of AI in general—I just want to find the proper place to introduce him. CharlesGillingham (talk) 16:07, 23 December 2007 (UTC)


 * Butler's vision makes the difference between artificial and natural an artificial one. To get to the root of the matter, check out ar- and gen, then find art and nature. I'm happy you are fascinated: that was my purpose! Pawyilee (talk) 09:31, 24 December 2007 (UTC)
 * PPS How about moving the bulk of the entry on Butler to Consciousness, then referring to it in this and the other AI articles as a sort of honorable mention? Pawyilee (talk) 12:42, 28 December 2007 (UTC)


 * My inclination would be to mention him as a precursor to Kurzweil, Hans Moravec and other Transhumanists: he's really an early futurist. I think the article needs a short section about futures studies and AI (see my original outline above), mentioning Moore's Law, transhumanism, the ethics of artificial intelligence, and other (serious) speculations about what AI will mean to society in the future. We could even use an entire article on the future of AI.  CharlesGillingham (talk) 17:07, 30 December 2007 (UTC)
 * I can see that happening some time in the future. Pawyilee (talk) 16:32, 31 December 2007 (UTC)


 * I've tucked him into a new section that covers futurists, ethics and fiction. I'm sorry that I've cut him down to one sentence—I wanted to keep this section down to one page. There's just way too much to cover. Maybe you would consider adding more material to the Samuel Butler article? CharlesGillingham (talk) 02:44, 22 February 2008 (UTC)


 * Somebody copied what I'd written into a new article, Darwin Among the Machines. To it I added a NZ link to Butler's letter to the editor, plus a new paragraph on George B. Dyson's 1998 book of the same name. I then cut the reference here to a couple of clauses ...expressing an idea first proposed by Samuel Butler's Darwin Among the Machines (1863), and expanded upon by George Dyson (science historian) in his book of the same name (1998). OK? Pawyilee (talk) 15:20, 27 February 2008 (UTC)

stupidity, ignorance, and laziness - unverifiable
The following section has been removed from the article because it could not be verified. Pgr94 (talk) 23:36, 28 January 2008 (UTC)


 * "There are three general limitations in AI,"
 * You should write a theory, become notable, and then put a small link in this article, like: general limitations in AI, pointing to your theory.
 * Notable here means generally accepted by AI researchers. I don't agree with you, so all the more I don't think this is a good place for this to stay. Raffethefirst (talk) 11:20, 29 January 2008 (UTC)


 * What specific facts claimed in that section do you believe need verification? Sai Emrys   ¿?   ✍  01:17, 29 January 2008 (UTC)
 * That "stupidity, ignorance, and laziness" has something to do with AI. Pgr94 (talk) 02:15, 29 January 2008 (UTC)
 * Then I cite Professor Russell of UCB, whose AI class I have taken and who said exactly that. Presumably he's an authority (having written the main book about it). WP:V only demands verifi*ability*, not online access to the source. My paraphrase below is reasonably accurate. Sai Emrys   ¿?   ✍  16:47, 29 January 2008 (UTC)
 * I have the 1995 edition in front of me but can't see any mention of these limitations. Can you supply a page number - or better still quote the relevant sentence(s)? Pgr94 (talk) 17:54, 29 January 2008 (UTC)
 * I said that his *class* (specifically, CS 188, two years ago - though he still runs it TTBOMK) is the source, not the book (which AFAIK doesn't mention it). The book simply establishes him as an authoritative source. If you want to verify it, take his class or email him. You're welcome to reword it from "limitation" to "approach" or somesuch; I don't recall how he framed that part. But stupidity, ignorance, and laziness is a direct quote, and most of the examples I used are from ones brought up in class. A friend who took a class with Prof. Denzinger said that he talked about them also. Sai Emrys   ¿?   ✍  19:11, 6 February 2008 (UTC)
 * To be verifiable, we need to be able to check what you are claiming is true. According to WP:V the burden of proof lies with the contributor.  You should at least be able to find it in the course notes.  At any rate I doubt it is a widely held opinion and I'd have reservations about adding it to the article unless there is a peer-reviewed article that covers the topic. Pgr94 (talk) 23:46, 6 February 2008 (UTC)
 * You *can* check it easily: email him. The address is russell@cs.berkeley.edu. The burden on me is only to provide a source (which I have done), not to check it for you or to prove that the matter claimed is true - the criterion is reader-verifiability, not proven truth, as it says in the WP:V header. It is verifi*able*, therefore it meets the standard.
 * I didn't know how to add a "said in class multiple times" cite, so I left it as a comment. AFAIK it was not in the course notes or slides. Not being readily available online does not make that source any less valid.
 * As for "opinion", I don't think that's an appropriate framing; it's more that it is a way of describing problems than some sort of strong statement about "limits" in AI. As i said, I have no problem with my paraphrasing being changed to make that clearer. And you can remove "widely held" too; that's just explanatory prose to me. Sai Emrys   ¿?   ✍  02:29, 7 February 2008 (UTC)
 * From WP:V "readers should be able to check that material added to Wikipedia has already been published by a reliable source" (my emphasis). Perhaps I've misunderstood you, but at the moment it seems the only evidence for verifiability is unpublished.  Pgr94 (talk) 04:51, 7 February 2008 (UTC)
 * I'm not an AI person. However, if these are indeed "commonly stated" (as stated above), then it should be simple to find them stated in an AI textbook (or similar), and use that as the source.  Without such a source, there's no way of knowing that it isn't one specific lecturer's in-joke. Bluap (talk) 05:05, 7 February 2008 (UTC)


 * (Coming from the village pump) If it's not published then it's not verified. Yes, in theory one could go to the horse's mouth to ask if he said it (assuming that he is a reliable source). This is not a feasible burden to place on readers, who must be provided realistic access to the source to check its accuracy. And in the absence of actually asking this living source, there is a useful word to describe your reporting of what this other person said: hearsay.--Fuhghettaboutit (talk) 15:42, 7 February 2008 (UTC)
 * I checked back with my friend - he talked to Prof. Denzinger. I was confused; Denzinger is actually at Calgary, not Berkeley. He checked by email, and Denzinger responded that it was not originally his and he didn't know the source - but he used it himself as a description. Two profs in two unrelated universities in different countries using the same rubric (if that's the right word) seems rather more than just "one specific lecturer's in-joke". I would guess that because it is somewhat tongue-in-cheek, it's unlikely to be published in an AI textbook for cultural reasons. This doesn't prevent it from being worth including in a Wikipedia article. Additionally, I have verified its existence at least in these two instances, so I don't believe that it's such a burden to ask - or that only *published* material counts as verifiable. If that were the case, then WP:V would say so. It doesn't. Like I said, email him and you'll find out for yourself. It's rather easier IMO to verify than most books. Sai Emrys   ¿?   ✍  18:56, 7 February 2008 (UTC)
 * A couple sources:
 * CS188 notes from a TA - Laziness (p 10, uncertainty)
 * CS 188 lecture slides expectimax search 2006 and 2007 - Ignorance
 * Test for Intelligent Systems (2II40), TUE (sql format) Ch 2 Q3 Q: Does probability provide a way of summarizing the uncertainty that comes from our 'laziness' and 'ignorance'? A: Probability provides a way of summarizing the uncertainty that comes from our 'laziness' and 'ignorance'. It deals with certain 'degrees of belief' of the agent. Probability for an event changes when new evidence appears for an event. We can divide this into 'prior probability', which stands for the probability before having evidence, and 'posterior probability', which stands for the probability after having gained evidence. - note that this isn't in her lecture slides, which are ripped from CS188's. And again, different university, different lecturer. Nevertheless, it's on the test...
 * Which supports what I said - this isn't something you're likely to find published "officially", since it's a humorous way to explain what's normally called just "intelligent agents and uncertainty". Sai Emrys   ¿?   ✍  20:12, 7 February 2008 (UTC)
 * Google Scholar also turns up some links, e.g. these lecture notes (again, different university - tcd.ie) that include 'ignorance and laziness' in the same context. (Many of the hits are irrelevant, from social commentaries and the like talking about humans, but not all.) This chapter from some random book appears to refer to this concept as well from the google snippet, but I can't access the full text. Sai Emrys   ¿?   ✍  20:26, 7 February 2008 (UTC)
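The prior/posterior distinction quoted in the exam answer above can be sketched with a toy Bayes update. This is only an illustration of the general idea; the function name and the poker-style numbers are made up, not taken from any of the cited courses.

```python
# Toy Bayesian update: the agent's uncertainty ("laziness and ignorance")
# is summarized as a degree of belief, revised when evidence arrives.

def bayes_update(prior, likelihood_if_true, likelihood_if_false):
    """Return posterior P(H|E) from P(H), P(E|H) and P(E|not H)."""
    evidence = prior * likelihood_if_true + (1 - prior) * likelihood_if_false
    return prior * likelihood_if_true / evidence

# Prior belief that an opponent holds a strong hand: 20%.
# The opponent raises; assume a strong hand raises 90% of the time,
# a weak one 30% of the time (illustrative numbers only).
posterior = bayes_update(0.20, 0.9, 0.3)
print(round(posterior, 3))  # → 0.429, belief rises after seeing the raise
```

The prior (0.20) is the belief before the evidence; the posterior (about 0.43) is the belief after it, exactly the split the exam answer describes.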


 * 1. Stupidity: One does not always know how to compute a perfect solution.

The perfect solution is the one that gives the right answer in the smallest amount of time. Any method can be used here, and the one that gives the best answer in little time is definitely not stupidity. It can be trial and error, logical deduction, or something else. The approaches used to solve a problem can be very different according to the known data and the problem.
 * 2. Ignorance: One does not always have the necessary information to compute a perfect solution.

This is an approach to solving a problem. It is not a limitation of AI. There can be many other approaches to solving problems.
 * 3. Laziness: One does not always have the time to compute a perfect solution.

The perfect solution is the one that gives the right solution in a small amount of time. To get the perfect solution you can try to reduce the unnecessary checks. This is something that must be done in order to get efficient solutions. But there are many other things to do to solve problems efficiently.

So the things you said are no limitation on AI; they are not even a classification of the same kind of things. 1 and 2 are methods of solving problems, and 3 is a speed-up method applicable to other problem-solving methods. '''Also, even if your list is trying to be a list of problem-solving methods, things don't work this way. There is no such list... you can observe such methods in humans, but to apply those methods strictly in AI without understanding the background mechanism is wrong.''' So I kindly ask you to let your professor make a theory, let this theory become accepted, and then try to put it on Wikipedia. Raffethefirst (talk) 08:27, 8 February 2008 (UTC)


 * I don't think you understood what I wrote.
 * I've said already that "limitation" is poor wording; "complication" is perhaps a better one. Nobody is arguing that fast, correct, certain etc answers aren't better. Sure they are. But sometimes one can't have all of those things, because of practical limitations - such as in the examples I gave. E.g. chess is simply not within current computers' ability to fully process, even though the means of doing so is quite simple if one did have that much computing power. Thus, one has to use more sophisticated methods - heuristics, better search methods, etc. I said nothing about this being not true of humans as well.
 * In any case, this section is not primarily about algorithms or ways to *address* the problems (except for completeness of the examples) - it's about listing the *problem*.
 * P.S. Please don't leave notes on my talk page that are just a rehash of discussion here. I monitor this page and will respond here. Thanks. Sai Emrys   ¿?   ✍  02:19, 10 February 2008 (UTC)

Yes, I agree with you: I did not understand what you meant. I still don't understand. As you say, your professor phrased it somewhat differently. The idea would be more intelligible if you could get his citations. I was trying to talk on your user page to understand exactly what you mean, not to change the subject here. So please explain to me what you mean, wherever you want. Raffethefirst (talk) 17:02, 10 February 2008 (UTC)

Three general problems in AI can be stated as stupidity, ignorance, and laziness. Most real-world problems have one or more of these factors.


 * Stupidity: One does not always know how to compute a perfect solution.
 * E.g. there is no known method to directly factor the product of two large primes.
 * The solution to stupidity is generally to use an alternative method to approach the answer, or one that results in an answer that is "good enough". E.g. instead of factoring, there are various probabilistic tests to determine whether a large number is prime.
 * Ignorance: One does not always have the necessary information to compute a perfect solution.
 * E.g. in the game Stratego, the opponent's pieces are in known positions but start with unknown identities. In Texas hold 'em poker, the order of the deck, and thus the other players' cards as well as the flop cards, is unknown.
 * The solution to ignorance is generally the strategic discovery of new information or acceptance of unknowns - e.g. in Stratego one can bait or attack pieces to uncover their identity, or guess that the opponent's flag is in a well-protected location rather than in an easily reachable one. In poker, one can try to determine the other players' cards by their reactions during bidding, as well as knowing the simple probability of various flop cards and going with whatever is most likely to succeed overall.
 * Laziness: One does not always have the time to compute a perfect solution.
 * E.g. in chess, though the state is entirely known, as well as the rules of the game and the value of its outcomes, there is not enough computing power available to exhaustively go through all possible games. (Checkers, by contrast, was solved in 2007, though with search and endgame databases rather than full enumeration.)
 * The solution to laziness is generally a utility heuristic - e.g. in chess, one can take a guess at how likely a certain move is to result in a win or a loss even without having fully computed its outcomes, based on generalized ideas such as defensive positions, numeric piece values, etc.
 * I support User:Pgr94's interpretation of WP:VER. What user Sai Emrys  advocates would obligate each user to email Dr. Russell for verification of the claim. While that would be nice and all, the world is a big place, and I'm sure Mr. Russell has better things to do with his time.--Sparkygravity (talk) 14:51, 21 February 2008 (UTC)
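The "laziness" item in the list above (stop early and fall back on a utility heuristic) can be sketched as a depth-limited minimax search. The tiny game, the lambdas, and the evaluation function below are all invented for illustration; real chess engines use far more elaborate versions of the same idea.

```python
# Depth-limited minimax with a heuristic evaluation function: instead of
# searching the full game tree (no time for a perfect answer), we stop at
# a fixed depth and estimate the value of the position at the horizon.

def minimax(state, depth, maximizing, children, evaluate):
    """Search `depth` plies; use `evaluate` at the horizon or at leaves."""
    succ = children(state)
    if depth == 0 or not succ:
        return evaluate(state)
    values = [minimax(s, depth - 1, not maximizing, children, evaluate)
              for s in succ]
    return max(values) if maximizing else min(values)

# Toy game (made up): a state is a number, a move adds 1 or doubles it,
# play ends at 8 or more, and the heuristic simply prefers larger numbers.
children = lambda n: [n + 1, n * 2] if n < 8 else []
evaluate = lambda n: n
print(minimax(2, 3, True, children, evaluate))  # → 8
```

Raising the depth buys accuracy at the cost of time, which is exactly the trade-off the "laziness" point describes.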

Not necessary to call it a "definition" ?
Saying what AI is implies that we know indubitably what AI is. Saying it is the modern definition lets people perfectly understand that it is a definition that might not be perfect but is the best we have at the moment. And beginning with a 'definition', I think, gives it a more professional look. Raffethefirst (talk) 07:38, 22 February 2008 (UTC)
 * reverted CharlesGillingham (talk) 08:28, 22 February 2008 (UTC)

Philosophy of AI
Section 1.3 (Philosophy of AI) states this:


 * 1) Gödel's incompleteness theorem: There are statements that no physical symbol system can prove.

As far as I know, this is flat out wrong. Firstly, Gödel's incompleteness theorem has no direct connection to "physical" symbol systems (whatever that means). But more importantly, Gödel's incompleteness theorem does not show that there are statements that no "symbol system" (or formal system, to be clearer) can prove. Gödel's incompleteness theorem shows that for each (sufficiently expressive) formal system there are true statements that it can not prove - but this does not preclude other formal systems from proving those statements. This is evident by the simple fact that you can always take a formal system, and add to it one of its unprovable true statements as an axiom, and it is then proven within the new one-axiom-larger formal system (incidentally with its own new set of unprovable true statements). Can anyone with knowledge on the subject comment on whether I am correct in my understanding of this? Remy B (talk) 09:00, 27 February 2008 (UTC)
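The correction above can be written symbolically. This is a standard schematic statement of the first incompleteness theorem, not a quotation from the article; F ranges over consistent, effectively axiomatized systems containing enough arithmetic.

```latex
% For every such system F there is a sentence G_F that F neither
% proves nor refutes:
\exists\, G_F : \quad F \nvdash G_F \quad\text{and}\quad F \nvdash \neg G_F
% Yet the strictly larger system obtained by adding G_F as an axiom
% trivially proves it (and has its own new unprovable sentence):
F + G_F \;\vdash\; G_F
```

The second line is the "add the unprovable statement as an axiom" move from the comment above: unprovability is always relative to a particular system, never absolute across all systems.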


 * I don't have the patience to wait for confirmation - I am confident I am right - so I have changed the article accordingly. If anyone disagrees and I somehow have this whole topic back to front then we can revert. Remy B (talk) 09:23, 27 February 2008 (UTC)


 * Your revision is correct. The original was wrong. CharlesGillingham (talk) 05:34, 1 March 2008 (UTC)