Talk:Artificial intelligence/Archive 11

"Decision Stream" Editing Campaign
This article has been targeted by an (apparent) campaign to insert "Decision Stream" into various Wikipedia pages about Machine Learning. "Decision Stream" refers to a recently published paper that currently has zero academic citations. The number of articles that have been specifically edited to include "Decision Stream" within the last couple of months suggests conflict-of-interest editing by someone who wants to advertise this paper. They are monitoring these pages and quickly reverting any edits to remove this content.

Known articles targeted:
 * Artificial intelligence
 * Statistical classification
 * Deep learning
 * Random forest
 * Decision tree learning
 * Decision tree
 * Pruning (decision trees)
 * Predictive analytics
 * Chi-square automatic interaction detection
 * MNIST database  — Preceding unsigned comment added by ForgotMyPW (talk • contribs) 17:37, 2 September 2018 (UTC)

BustYourMyth (talk) 19:16, 26 July 2018 (UTC)
 * Thanks! May I suggest a bit more of an edit summary on your fixes to that. North8000 (talk) 00:58, 27 July 2018 (UTC)

Dear BustYourMyth,

Your activity is quite suspicious: registering an account just to delete the mention of one popular article. People from different countries with a positive history of Wikipedia improvement are taking part in reverting your edits, as well as in providing information about "Decision Stream".

Kind regards, Dave — Preceding unsigned comment added by 62.119.167.36 (talk) 13:36, 27 July 2018 (UTC)

I asked for partial protection at WP:ANI North8000 (talk) 17:08, 27 July 2018 (UTC)

Semi-protected edit request on 5 September 2018
1. First sentence: link "Natural Intelligence" to Intelligence.

2. Last sentence of second paragraph: change "Capabilities generally classified as AI as of 2017 include successfully understanding human speech,[5] competing at the highest level in strategic game systems (such as chess and Go),[6] autonomous cars, intelligent routing in content delivery network and military simulations." to "Modern machine capabilities generally classified as AI include: successful understanding of human speech,[5] competing at the highest level in strategic game systems (such as chess and Go),[6] machine learning [hyperlink], autonomous driving, intelligent routing in content delivery network, and military simulations."

3. First sentence, third paragraph: add Oxford Comma after "success".

4. Third paragraph: omit "that often fail to communicate with each other" from "For most of its history, AI research has been divided into subfields that often fail to communicate with each other"; there is insufficient explanation as to how AI goals, tools, and philosophical values are actually failing to communicate with each other.

5. Third sentence, third paragraph: change "'machine learning'" to "'pattern recognition'" or "'data analysis'"; machine learning, as later explained in Wiki article under "Learning", is a fundamental aspect of AI, not solely a subset or tool.

5. Third sentence, third paragraph: change "differences" to "values" or "qualities". It isn't possible for subfields to be based on differences.

6. Add [262 ] after "The main areas of competition include general machine intelligence, conversational behavior, data-mining, robotic cars, and robot soccer as well as conventional games." Efalstrup (talk) 18:56, 5 September 2018 (UTC)


 * I agree with revisions 2 and 6; the rest are not valid. I don't understand what you want changed with 1 (intelligence is already linked); an Oxford comma is not strictly required for 3, and I'm not sure it's appropriate based on the style guidelines this article adheres to; 4 is directly supported by a citation and makes the important distinction that it only applies to "most of its (AI's) history"; machine learning in 5 is said to be a goal, not a tool; and the second 5 is also already correct in the article. It states that subfields are divided by philosophical differences, not just "differences", which is possible and correctly written. Zortwort (talk) 19:23, 5 September 2018 (UTC)
 * Upon further inspection, I have to change my agreement with revision 6, actually, as your source doesn't appear to confidently support the claim in the article. Zortwort (talk) 19:29, 5 September 2018 (UTC)

Semi-protected edit request on 8 September 2018
1 – "Natural Intelligence" is not linked, it’s only bolded in this article. I suggested it have a link to Intelligence, regardless if its text is displayed as "Natural Intelligence" or just "Intelligence".

3 – The article does in fact predominantly use the Oxford comma: “…statistical methods, computational intelligence, and traditional symbolic AI.”, “…large amounts of data, and theoretical understanding”, “…new ties between AI and other fields (such as statistics, economics and mathematics), and a commitment by researchers to mathematical methods and scientific standards”, etc. Since the style guidelines of this article are inconsistent when it comes to commas, you are right that in this case it shouldn’t matter. Thus, it should not be of concern if a comma is added after “network” in this part of the sentence: “…intelligent routing in content delivery network and military simulations.” Though minor, such grammatical details inevitably matter for an article’s formality.

4. The source describes it as “the rough shattering of AI in subfields—vision, natural language, decision theory, genetic algorithms, robotics ... and these with own sub-subfield—that would hardly have anything to say to each other." If the Wiki article is going to concede to this source’s negative portrayal of the AI subfields expanding into new research areas, then it should at least explain why this is a bad thing. This article takes a slightly more neutral view:

https://www.sciencedirect.com/science/article/pii/S2095809917300772

It says “Since the 1970s, AI has expanded into research fields, including mechanical theorem proving, machine translation, expert systems, game theory, pattern recognition, machine learning, robotics, and intelligent control. The exploratory processes related to these fields have led to the development of many technologies and to the formation of various schools of symbolism, connectionism, and behaviorism.” Maybe this content could be oriented into this section of the Wiki article.

5a. You’re right: Since the Wiki article does state that the sub-fields are based upon these goals, it implies fundamentality.

5b. Agreed. At least change “…or deep philosophical differences.” to “…or deep philosophical differences (symbolic vs. sub-symbolic AI).” If the philosophical differences are cited as the cause of the sub-field divides, the Wiki article should at least mention those differences (as citation 16) states.

6. Agreed, however citations are still needed to back up all of the other examples named in this sentence. Efalstrup (talk) 23:04, 8 September 2018 (UTC)
 * 1. Intelligence is linked in the first sentence, it would be redundant to link it twice.
 * 3. To be honest that whole sentence reads poorly, for example how are "autonomous cars" a "machine capability"? I'll revise it to make more sense.
 * 4. I don't think the portrayal here is necessarily negative; it's just saying that it's been difficult for AI subfields which pursue different things to communicate with each other. Your source doesn't necessarily contradict McCorduck's claim, and I wouldn't say she's non-objective.
 * 5b. I don't fully know, but there may be other philosophical differences besides symbolic vs sub-symbolic? It seems fine to leave it general the way it is, especially considering that sentence is in the lede.
 * 6. It's pointless to add citations which don't support the claim just because of a need for citations. It already says citation needed, if you find one which appropriately supports the claims there then it can be added.
 * Finally, I noticed that you're editing this article as part of a school project. I don't mean to be rude or anything, but considering this is a semi-protected article and until you're a confirmed user you won't be able to edit it, why don't you consider asking to be assigned another page? Just might make things easier. Regards, Zortwort (talk) 01:47, 9 September 2018 (UTC)
 * Reading through that sentence from 3, I believe it means intelligent routing in CDNs and military simulations, which would make sense. Consequently there should definitely be no comma after networks. Zortwort (talk)

Semi-protected edit request on 26 September 2018
Ravirajbhat154 (talk) 17:20, 26 September 2018 (UTC)


 * There are a couple of problems here. First, no specific requested edit is given. But also, they are unsourced text / "takes" on AI written by a Wikipedia editor, albeit embedded within an image. North8000 (talk) 21:39, 26 September 2018 (UTC)

@North8000 (talk) I am very happy to share my collective knowledge. I have added a clear reference in this blog. This is my understanding of Artificial Intelligence; please feel free to reach out to me if you have any input/suggestions/different opinions. Ravirajbhat154 (talk) 04:59, 27 September 2018 (UTC)
 * ❌ Your understanding or what you wrote on your blog is not required here. Facts (even in diagrams/images) must be sourced from reliable sources. –Ammarpad (talk) 05:56, 27 September 2018 (UTC)

@Ammarpad Thank you for your comments. To create this collective image I used Wikipedia itself as a reference. Please let me know anything I have to modify in these images. Ravirajbhat154 (talk) 06:46, 27 September 2018 (UTC)


 * Still ❌. Wikipedia is not a reliable source, as it is user-generated content. Fish +Karate  10:16, 27 September 2018 (UTC)

Ravirajbhat154, thanks for your efforts but I don't think that you understand how Wikipedia works in this respect. To state it only a bit oversimplified, we don't write from our expertise, we write what is in published sources. Einstein would not be allowed to write about relativity here from his expertise, he could only put in material from (his) published works. North8000 (talk) 12:45, 27 September 2018 (UTC)

Algorithm not Optimal
I changed the note on the tic-tac-toe algorithm to say it was "not optimal" because it loses playing O as follows: 1) X corner 2) O center 3) X opposite corner 4) O corner 5) X corner, winning. UnvoicedConsonant (talk) 02:26, 5 January 2019 (UTC)
 * I've changed the parenthetical to specify that it's only optimal for player one.
 * ApLundell (talk) 02:42, 5 January 2019 (UTC)
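The losing line described above can be checked mechanically. Here is a minimal sketch (not part of the original discussion; the cell numbering and variable names are my own) that plays the quoted move sequence and counts X's immediate winning threats:

```python
# Board cells 0-8, row-major. Per the quoted line: X takes corner 0,
# O takes center 4, X takes opposite corner 8, O takes corner 2,
# X takes the remaining corner 6.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

board = [' '] * 9
for cell, mark in [(0, 'X'), (4, 'O'), (8, 'X'), (2, 'O'), (6, 'X')]:
    board[cell] = mark

# A line with two X marks and one empty cell is an immediate winning
# threat for X on the next move.
threats = sum(1 for line in LINES
              if [board[i] for i in line].count('X') == 2
              and any(board[i] == ' ' for i in line))
print(threats)
```

After move 5, X threatens to complete two different lines at once, so O can block only one of them; this corner fork is why the sequence forces a win for X.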

Proposed Article
Dear fellow editors,

Is it possible to include this article? Please comment.

Thank you.

LOBOSKYJOJO (talk) 01:46, 8 January 2019 (UTC)

Use of Artificial Intelligence in the USA

A new market report published by the EdWeek Market Brief on January 4, 2019 stated that artificial intelligence (AI) can cause a marked change in various aspects of society, with unclear repercussions for education. This is a claim made by some entrepreneurs and futurists. The report tries to move beyond projections and offer an accurate benchmark of AI’s current development and status, considering different metrics and standards across research and industry in the United States as well as globally. https://marketbrief.edweek.org/marketplace-k-12/artificial-intelligence-attracting-investors-inventors-academic-researchers-worldwide/

In the United States, Congress will conduct hearings on the Department of Defense’s progress in the field of AI. The agenda is the National Security Commission on Artificial Intelligence, which Congress authorized in the most recent National Defense Authorization Act. Senior members of the US Congress and heads of government agencies will formulate proposals to develop artificial intelligence for reinforcing national security. https://foreignpolicy.com/2018/12/10/congress-can-help-the-united-states-lead-in-artificial-intelligence/

According to a report from the Information Technology and Innovation Foundation, the United States remains behind nations in the Asian region when it comes to adoption of an AI plan. Korea is the leader, followed by Singapore, Thailand, China, and Taiwan. https://www.techrepublic.com/article/us-lags-behind-asia-in-ai-robotics-adoption-putting-businesses-at-risk/ Related to this development, China has released its Next Generation Artificial Intelligence Development Plan with the objective of becoming the global leader in AI by the year 2030. https://futureoflife.org/ai-policy-china/ For instance, China is investing in STEM education as well as research and development while engaging the best talent from Silicon Valley.

According to the head of robotics research at the JD Silicon Valley Research Center, JD is working on different areas, including artificial intelligence. https://phys.org/news/2018-04-china-jdcom-silicon-valley-center.html

The public and private sectors must work together to help the USA maintain a leadership role in AI worldwide. The Trump administration released a federal STEM education plan, which entails that the Commission must collaborate with academia (university administrators) to better understand what the American government and industries must do to support students’ acquisition of advanced science, technology, engineering, and mathematics degrees. https://www.whitehouse.gov/wp-content/uploads/2018/12/STEM-Education-Strategic-Plan-2018.pdf

The United States also requires federal data-privacy laws that will protect its citizens and enable private companies to secure data sets for building machine-learning systems and algorithms. The government and private sector must keep in mind that data is a principal driver of artificial intelligence and machine learning. https://www.cfr.org/report/reforming-us-approach-data-protection At present, the US lags behind the European Union (EU) in establishing policies and rules. EU officials have been more concerned about data privacy and regulation than the USA for some time now. https://www.vox.com/the-big-idea/2018/3/26/17164022/gdpr-europe-privacy-rules-facebook-data-protection-eu-cambridge

Basics of AI in philosophy of science
I belong to a faction of AI disciples who have longed for a basic connection of AI to classic philosophy of science, which could explain its historical waves of optimism and disappointments, unite or relate its disparate branches, and allow for an evaluation of their development. None of the AI-famous names mentioned in the HISTORY section of the article seems to be a name in the specific area of philosophy of science, if it is not wrongly equated with philosophy of logic and mathematics.

In order to clarify the issue we have reviewed the text of the AI article and its TALK and REVISION sections. Finally a colleague found there a contribution made on December 14, 2018 that we came to consider very relevant and valuable, as it appeared in the section on “Basics”. On the basis of the early “iconic” AI project DENDRAL, it pedagogically relates logic as the basis of programming to empirical and inductive methodology in the theory of science.

We discovered that the mentioned contribution was, in our opinion summarily, reverted about one hour later by “master editor” North8000 with the shorthand justification that it “Promotes authors and sources and says nothing informative about the topic.” A couple of us have now read the references thoroughly and concluded that the reversion could at least have been preceded by a TALK. In order to understand the informativeness of the contribution, it would have been necessary, before the reversion, to read its references. This is required to grasp, as we and certainly several readers do, the need for a philosophical base and core concepts that transcend the scope of a simple introduction to AI. We try to do this now in this TALK, and with the insertion in the article of an improved version of the contribution. In this we also follow the anonymous editing exhortations that are found inserted into the text of the Edit section, referring to “core concepts that are helpful for understanding AI; feel free to greatly expand” or, earlier, “Major intellectual precursors”, i.e. beyond sheer logic.

In doing so and contributing with a further improvement, we qualify our objection to the reverting editor’s claim that “the original contributor only promotes authors and sources and says nothing informative about the topic”: that is, we follow the Wikipedia principle of “Assume good faith”, including what we tried to do: to “demonstrate good faith”. And we did  “be helpful: explain”, improving the reverted contribution. And let us mention our application of “Please do not bite the newcomers”. 154.44.138.30 (talk) 11:23, 23 February 2019 (UTC)
 * The issue here is that you are promoting a novel synthesis that is at odds with mainstream thought. You need to get your ideas published in reliable independent secondary sources before we consider them, and those sources must not be predatory open access or other questionable journals. What sources were presented for this edit, were primary, not secondary. Guy (Help!) 15:08, 23 February 2019 (UTC)

Basics of AI in philosophy of science (PART II)
This TALK refers to today’s proposed contribution to the AI article’s section on Basics. On 14 December 2018 a user submitted an addition to the AI article’s section on “Basics”. It referred to AI-related work in philosophy of science. As I already explained in an earlier TALK, a colleague called my attention to the contribution and the fact that it had been reverted one hour later the same day by (“master editor”) User:North8000. A couple of us realized that the reverted contribution contained valuable information, but it had assumed too much in the way of the Latin proverb ‘’Intelligenti pauca’’. It could be improved at the cost of more space and words with data, and we submitted our carefully drafted improvement together with the aforementioned painstakingly formulated TALK on 23 February 2019.

Imagine our surprise, therefore, on seeing later that the new contribution and its references had been read, checked, reflected upon, deleted and completed with a few lines of TALK comments within the span of fifteen minutes. This time, however, it was deleted by another (“grandmaster editor first class”) user, User:JzG, with new and different motivations. This was done with an unclear mention of unknown “personal reflections” made by a third, obscurely involved user, User:Bruce1ee, and with an unclear relation, if not identity, of JzG to an enigmatic “Guy” who does not display a proper user page. The shift from North8000 to JzG (and Bruce1ee?) may well be fully legitimate ‘’canvassing’’. It is unfortunate, however, that when (ab)used by ordinary users it is derogatorily classified as meatpuppetry.

Be that as it may the first reversal’s motivation not expressing “assumption of good faith” was that the text ‘’Promotes authors and sources and says nothing informative about the topic’’. After our improvements and explanation of why it does not promote but rather references relevant authors comes this second reversal. It was now justified by deviation from unidentified ‘’mainstream thought’’, lack of secondary sources, and reliance on unspecified predatory sources or questionable journals.

We do not know of any predatory source or questionable journal used or referenced by us. It must be a misunderstanding of a research report on the web site of a university that was archived at the Internet Archive. For the rest we do not dare to undervalue the so-called predatory sources, because it could be like seeing the (relatively) open access Wikipedia as predatory on e.g. Encyclopedia Britannica. Wikipedia’s supervising editors would be perceived as anonymous computer nerds who are supposed to have polymath-qualifications while implementing a democratic bureaucracy, a formalistic game of enforcing policy-guidelines, eschewing academic or professional disciplinary peer-review.

The main author, West Churchman, mentioned in our contribution, was a logician and philosopher of science involved in AI research, an internationally known author and authority on operations analysis, systems science and management. The other names are established names in AI and/or professors of information sciences at known universities. A professor emeritus at an accredited governmental university who writes a review of primary sources can be regarded as a secondary source that is more peer-reviewed than many anonymous journalists in media that are often sensationalistic, controversial and occasionally hit by scandals. Incidentally, regarding “reliable independent secondary sources”, a cursory view of the whole approved AI article in mid-March 2019 reveals 7 references to anonymous journalists at ‘’BBC News’’, 7 to ‘’The Guardian’’ newspaper, others to the New York Times, NY Daily News, Los Angeles Times, and culminating with a reference to the Washington Post in the form of an author named “Post, Washington” (sic). Several references are to arXiv, a repository of electronic preprints approved for posting after unspecified moderation but not formal peer review, like research reported at accredited universities.

The reading and study of our text and its references clarifies in terms of established texts on philosophy of science (our secondary sources) the meaning of an absent ‘’mainstream thought’’. We did not claim to ‘’synthesize’’ but, rather, to offer available explanations of why it could not be synthesized. The meaning of synthesis itself needs to be explained to non-philosophical AI-disciples. In doing this we contribute to the clarification of those items that are already found in the text and its references, such as talk about ‘’superintelligence’’ and its reviews as Is artificial intelligence really an existential threat to humanity’. Or, then, the controversies and therefore the glaring lack of “mainstream thought”, around John McCarthy’s original philosophy of artificial intelligence.

In other words, we stand for and acknowledge that our text fulfills criteria for being included in the AI-article, the more so for its relevance in the aftermath of the recent case of the Boeing 737 Max ‘’maneuvering characteristic augmentation system” MCAS. The problem in philosophy of technology consists in the AI-context of the capricious attribution of accidents and incidents to the ‘’human factor’’ included in human-computer interaction. The ‘’teething problems’’ become increasingly philosophical, intellectually challenging, in view of the permanent “childhood” of continuous renewals of an evolving technology, further exemplified by the future of self-driving vehicles, drone warfare and complete computerization of society.

Because of all this we ask to be allowed to submit again our present further improved text for a reappraisal. Please feel free and welcome to propose or implement yourself specific improvements, changes, or additions to the text, not only simple, easy wholesale deletion. For the rest, when accepted, in the open access spirit of Wikipedia, more people besides supervising editors will be allowed to try to dispute and improve our contribution to the question that is too complex for only a couple of us, supposed nerds.

If the further editing of the AI article were to be constructive, following Wikipedia guidelines such as demonstrating good faith and helping newcomers, the supervising editors themselves could suggest including the few references in the sections on Other sources and/or Further reading. Or, better, one might boldly consider moving the whole improved contribution with its references (see below) to the introductory top of the AI article (cf. Kaplan & Haenlein), since it is a basis for classifying and placing AI in the greater context of intelligence and knowledge:


 * Churchman, C. West; Buchanan, Bruce G. (1969). "On the Design of Inductive Systems: Some Philosophical Problems". The British Journal for the Philosophy of Science. 20 (4): 311-323. Retrieved 22 February 2019.
 * Churchman, C. West (1971). The Design of Inquiring Systems. Basic Books - (ISBN 465-01608-1).
 * Ivanov, Kristo (2015). “Computers as embodied mathematics and logic”. Research review. Umeå University, Dept. of Informatics. Internet Archive. Retrieved 22 February 2019.

154.44.138.30 (talk) 10:58, 22 March 2019 (UTC)


 * Well, in the post I see a lot of creative casting of aspersions on people and inventing and assuming bad faith. And, instead of addressing the reasons given in the edit summary when the material was removed (which you never did), you invented and argued against straw-man reasons that were never given. This and related articles have been the target of persistent reference-spamming efforts. Those tend to look like cases where the added material didn't really improve the article and a reference was added whose main goal appears to be to promote the reference. So my advice in fields like this would be to make sure that the text adds to and is suitable for the article, and to use references that have recognition rather than more obscure ones that are trying to use Wikipedia to achieve recognition. North8000 (talk) 14:01, 22 March 2019 (UTC)


 * Citing Wiki guidelines does not validate the actual content. This addition feels more like a publicity campaign for Churchman, et al. than an attempt to improve the article. One could potentially see their work being used as reference in some areas, but not as the underpinnings of any broad discussion of AI. The field is too varied to give so much real estate to this particular bit of PR. Ipanemo (talk) 14:24, 22 March 2019 (UTC)
 * This article is a target for WP:REFSPAM and additions often tend to be examined with that in mind. This addition and discussion bother me for several reasons. First, any mention of books/papers from 1969/1971 should be only briefly mentioned, if at all. AI was still in the womb at that point. Heuristic methods and expert systems were around, but these don’t approach what we consider AI. Frankly, I see no reason to include John von Neumann and Alan Turing in an AI article – putting Kant, Hegel, Singer, Leibniz, et al. well off track. I don’t understand your mention of the Boeing MCAS, which is unrelated to AI. The closest that could get is cybernetics, and that’s tenuous. Basically, the addition looks like a PR campaign, as per Ipanemo. O3000 (talk) 15:57, 22 March 2019 (UTC)

Well, because of limits of time and energy I give up, after implementing my first conclusion from having been declared wrong in my last undertaking: I submit my final attempt to contribute, a statement that the Basics of AI are not to be sought in any obscure history of philosophy of science but only in references to reflections by experienced successful practitioners that already have recognition, (cf. Kaplan and Haenlein).

The second conclusion in the shadow of increasing TALK for reaching consensus is to note that the recommendation of Wikipedia civility suggests that the Latin proverb of Intelligenti pauca must be wrong. Nevertheless, unfortunately, I fear that it may still be considered by many to be right.

A final note for the record: my not having perceived any demonstrated good faith in me is not equivalent to accusation of bad faith, for which I myself have been repeatedly accused from the very beginning (my main goal being only to promote references authors and sources, an example of spamming effort, citing Wikipedia guidelines only for validating my content, casting aspersions on people, trying to achieve recognition, public campaign for the main references, no attempt to improve the article, give estate to own PR). 154.44.138.30 (talk) 17:42, 22 March 2019 (UTC)
 * 154.44.138.30, my suggestion is to skip the creative drama-fest and refocus all of that effort to discussing the topic at hand, the specifics and merits of the proposed additional content, and to (re)work proposed content into something that is good additions to the article. <b style="color: #0000cc;">North8000</b> (talk) 00:09, 24 March 2019 (UTC)

Thank you, all of you, for giving me the attention you feel I have deserved. I confirm that from now on I cannot follow up on whatever text may be added to this talk. Unexpectedly, I returned to it now one last time only in order to remark that, in a sudden impulse earlier today, 26 March 2019, I thought that all I had written for this talk and the main article would uncontroversially justify at least the modest inclusion of my main reference in the article’s section 12 on “Further reading”. I did so, and surprisingly it was immediately removed under the insinuation that it was citation spam, despite my mentioning this TALK. My guilt is then to claim that a reference is important because of specified reasons. This happens after all the effort of writing the improved contribution to the AI article and to its TALK. Despite all the admonitions to further improve the text, not a single detailed comment could be offered by the supervising editors on any part of what was written. 154.44.138.30 (talk) 20:10, 26 March 2019 (UTC)

In March 2016, AlphaGo played a man-versus-machine match against the world Go champion and professional 9-dan player Lee Sedol, and finally won, which sparked widespread enthusiasm for learning artificial intelligence. At first, everyone was dubious, including me, and felt it was just a passing fad. But with the increasing output of artificial intelligence, we have to re-examine this field. In 2018, the Internet industry began to enter a cold winter; even Apple recently said that it would start layoffs. However, enterprises will still recruit the talent they genuinely need, especially in artificial intelligence. As long as there are breakthroughs, the rewards will be high. More and more people are learning artificial intelligence now, especially since the fall semester of 2018. The number of graduates in artificial intelligence is expected to increase dramatically after two years, but everyone should believe one fact: as long as you learn it well and your technical skills are solid, getting a good salary is still not a problem. (talk) 22:30, 29 March 2019 (UTC) — Preceding unsigned comment added by Oscar hao (talk • contribs)

Who the heck is Kaplan and Haenlein?
This article is poorly written. Right from the very first paragraph, the thing starts citing two people named "Kaplan" and "Haenlein", without bothering to mention who they are, why we should care about their opinions, or even what the heck their first names are! One of them at least has a link, but the other does not. — Preceding unsigned comment added by 75.172.43.67 (talk) 06:42, 5 February 2019 (UTC)


 * I agree. This article begins with two definitions of AI, the first being simple semantics, the second being the technical definition given by the most widely used AI textbooks in the world. We don't need the other two definitions.


 * I would support cutting the Haenlein/Kaplan definition, as well as the following non-technical definition. They're not as good, not as widely accepted, and neither belong in the first paragraph of an encyclopedia article. CharlesGillingham (talk) 23:03, 13 February 2019 (UTC)


 * Removing this. I assume it was added by one of the very authors quoted (they are otherwise not too notable). ForgotMyPW (talk) 08:02, 5 April 2019 (UTC)

I think that the "Further Reading" section needs some thinning / revision
I think that the "Further Reading" section needs more thinning /revision. IMO to be in the top level AI article it should be a prominent source or publication. I'm going to boldly thin a little; please revert me if you don't agree. Sincerely, <b style="color: #0000cc;">North8000</b> (talk) 22:44, 26 March 2019 (UTC)

Yes, reference spamming is becoming a huge problem in AI-related pages. ForgotMyPW (talk) 08:04, 5 April 2019 (UTC)


 * I started on it, thinking that there were going to be a lot of "quick easy choice" removals, but didn't find many of those.  Will need more work. <b style="color: #0000cc;">North8000</b> (talk) 10:35, 5 April 2019 (UTC)

NLP, NLG, NLUI, CUI ?
Should section 4.5 Natural Language Processing be changed to one of the below? 1. Natural Language User Interface (NLUI) or CUI (Conversational User Interface)? 2. Retain it as NLP, but also mention Natural Language Generation (NLG) and CUI which talks about Voice Assistants and Chatbots? — Preceding unsigned comment added by Bhaskarns (talk • contribs) 15:37, 9 September 2019 (UTC)

Rename "8.2 Potential harm" to "8.2 Safety"
AI safety needs mentioning; there is active research, and its effects are crucial considering existing safety standards covering the functional safety of devices, etc. The following links confirm the relevance: https://intelligence.org/why-ai-safety/ or https://www.gov.uk/guidance/understanding-artificial-intelligence-ethics-and-safety or https://80000hours.org/podcast/episodes/pushmeet-kohli-deepmind-safety-research/ or https://vkrakovna.wordpress.com/ai-safety-resources/ --LS (talk) 13:03, 16 August 2019 (UTC)


 * I Agree — Preceding unsigned comment added by Bhaskarns (talk • contribs) 15:39, 9 September 2019 (UTC)

Move discussion in progress
There is a move discussion in progress on Wikipedia talk:Disambiguation which affects this page. Please participate on that page and not in this talk page section. Thank you. —RMCD bot 20:01, 9 October 2019 (UTC)

Problem Solving vs. Planning
Need some more specifics to differentiate Planning from Problem Solving. If we take a game of Chess or Go, don't these areas appear similar? Also, don't they both need Machine Learning to actually Plan or Solve? BhaskarNS (talk) 17:11, 9 September 2019 (UTC)


 * Planning and what this article calls "problem solving" are things that AI has been working on since the 1950s. "Problem solving" includes things like logical deduction, algebra problems, puzzles and so on -- step-by-step reasoning, where you consider various combinations and possible solutions. Planning is about goals and sub-goals; it is the study of planning in general.


 * There is an overlap -- one of the interesting discoveries of the 60s and 70s is that all of these can be solved in theory by tree-searching (e.g. what a language like Prolog does). If you consider the problem's solution as the "goal", then problem solving and planning are basically the same thing, although general tools are not always the best choice for a specific problem.


 * Modern machine learning wasn't really running full steam until the 21st century (some fifty years later). This kind of learning is used to "solve problems" without searching a tree -- just jumping to a "good" choice without actually considering every possibility. We accept a certain number of errors in modern machine learning systems. Traditional AI tried to find the unique best answer (which was more or less a failure, by the way).


 * CharlesGillingham (talk) 18:33, 7 November 2019 (UTC)
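The contrast drawn above -- exhaustive tree search versus a learned "jump" to a good answer -- can be sketched in a few lines of Python (the puzzle and the toy data here are invented purely for illustration):

```python
import math

# "Problem solving" in the classical sense: step-by-step search,
# considering combinations of choices. Here, backtracking to find
# a subset of numbers that sums to a target.
def subset_sum(numbers, target, partial=()):
    if sum(partial) == target:
        return partial                          # goal reached
    for i, n in enumerate(numbers):
        found = subset_sum(numbers[i + 1:], target, partial + (n,))
        if found is not None:
            return found
    return None                                 # dead end: backtrack

# "Learning" in the sense above: no search over possibilities, just a
# jump straight to the nearest remembered example -- fast, but fallible.
def nearest_neighbor(train, query):
    return min(train, key=lambda ex: math.dist(ex[0], query))[1]

train = [((0.0, 0.0), "cold"), ((10.0, 10.0), "hot")]

print(subset_sum((3, 9, 8, 4, 5), 12))      # -> (3, 9)
print(nearest_neighbor(train, (1.0, 2.0)))  # -> cold
```

The search is guaranteed to find a solution if one exists but may explore many branches; the nearest-neighbor "jump" is constant-effort but can be wrong for queries far from its stored examples, matching the "we accept a certain number of errors" point above.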

A Commons file used on this page has been nominated for speedy deletion
The following Wikimedia Commons file used on this page has been nominated for speedy deletion:
 * Signal triangulation.png
You can see the reason for deletion at the file description page linked above. —Community Tech bot (talk) 14:07, 18 November 2019 (UTC)

Artificial Intelligence Definition
A machine that can generate inferences logically and independently at very high speed is called artificial intelligence. It can be utilized everywhere and in every industry, in all subject matters like physics, imaging, voice, mathematics, healthcare drug development, machine design, etc., to generate accurate predictions of the final outcomes of your projects/production. — Preceding unsigned comment added by Kpshah234 (talk • contribs) 10:04, 11 October 2019 (UTC)


 * Nope, the speed of inference has very little to do with artificial intelligence. Jeblad (talk) 12:17, 22 January 2020 (UTC)

overlooked randomness
could someone possibly add some thoughts on how randomness is needed for ml/ai in the general sense people would expect? https://ai.stackexchange.com/questions/15590/is-randomness-necessary-for-ai?newreg=70448b7751cd4731b79234915d4a1248

i wish i could do it, but i lack the expertise or the time to bring this up in Wikipedia style, as is evident from this very post and the chain of links in it, if you care enough to dig.

cheers! 😁😘 16:12, 27 February 2020 (UTC)  — Preceding unsigned comment added by Cregox (talk • contribs)
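One concrete role of randomness, offered only as an illustrative sketch (the linked thread covers the broader question): with identical initial weights, the units in a neural-network layer compute identical outputs and would receive identical updates on every example, so random initialization is needed to break the symmetry.

```python
import random

# Two hidden units with identical weights compute identical outputs and
# (under any gradient rule) would receive identical updates on every
# example, so they stay clones forever; random initialization breaks
# this symmetry.
def hidden_outputs(w1, w2, x):
    return [sum(wi * xi for wi, xi in zip(w, x)) for w in (w1, w2)]

x = [0.5, -1.0, 2.0]

same = [0.1, 0.1, 0.1]
o_same = hidden_outputs(same, list(same), x)
print(o_same[0] == o_same[1])   # True: the units are indistinguishable

random.seed(0)                  # seeded so the sketch is reproducible
r1 = [random.uniform(-1, 1) for _ in x]
r2 = [random.uniform(-1, 1) for _ in x]
o_rand = hidden_outputs(r1, r2, x)
print(o_rand[0] == o_rand[1])   # False: symmetry broken
```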

Philosophy and Ethics section needs to be unbiased
This article needs unbiased viewpoints in the Philosophy and Ethics section from people who do not have a stake in this technology. A reference needed to understand the section's bias towards the technology is https://en.wikipedia.org/wiki/Philosophy_of_technology#Technology_and_neutrality. If possible, a sub-section on SCOT is needed to neutralize the viewpoint. — Preceding unsigned comment added by 2A02:908:4F5:AAE0:40A8:33D9:1160:CF3F (talk) 01:30, 12 March 2020 (UTC)
 * I have added an intro to the section based on https://en.wikipedia.org/wiki/Philosophy_of_technology#Technology_and_neutrality. I did not address SCOT but will try to elaborate on it shortly. Johncdraper (talk) 10:58, 13 March 2020 (UTC)
 * Added mention of SCOT. What has to happen next is a review of the whole section to check for bias re technophilia. I'll look at this after the pending edits are accepted. Johncdraper (talk) 11:11, 13 March 2020 (UTC)
 * I accepted the edit in the pending review process per the very low bar of the pending review criteria. This is not an endorsement of the edit nor acceptance as a fellow editor.  Others should review. <b style="color: #0000cc;">North8000</b> (talk) 13:12, 13 March 2020 (UTC)
 * I can't find any strong sources for "social construction of artificial intelligence" per se; I don't think this has strong enough sources for inclusion in this article. Rolf H Nelson (talk) 20:02, 15 March 2020 (UTC)
 * Rolf H Nelson What do you consider a "strong enough source"? How do you decide whether a source is "strong" or "weak"? I just want to know. Please do not take it otherwise :) — Preceding unsigned comment added by 2A02:908:4F5:AAE0:AC3F:CDDD:F27C:AE6F (talk) 00:32, 16 March 2020 (UTC)
 * Rolf H Nelson I have sympathies for SCOT having application here, but you are right; there are not that many explicit secondary sources. So, there is one strong (relatively early) source for SCOT as applied to AI. This is . Then, there is Collins, the final chapter in, looks at AI (expert systems and the science of knowledge). In terms of articles, and , as well as seem relevant. Then, I feel that many, such as Baum, e.g., in , without mentioning social construction, are taking a SCOT position to the AI community itself; he discusses social context and meaning and social norms. Cave et al. here are also adopting a SCOT perspective to AI, without mentioning it. Your opinion on whether any of this could support an introductory para (maybe rewritten) on SCOT as applied to AI is welcome. Johncdraper (talk) 09:50, 16 March 2020 (UTC)
 * WP:RS is the official guideline. Different editors have different thresholds, but for a summary article like this that's already around max reasonable length, the threshold for getting consensus on adding controversial new material is likely to be high. Rolf H Nelson (talk) 02:33, 19 March 2020 (UTC)

Relationship of AI to mathematics
--ManuelRodriguez (talk) 06:23, 1 April 2020 (UTC)
 * Artificial Intelligence has nothing to do with mathematical theories. Instead, mathematical theorems are misused to prove that AI is impossible.[1]
 * AI is different from Turing machines [2]
 * Artificial intelligence is located in cognitive science not in natural science
 * Literature
 * [1] Sloman, Aaron. The irrelevance of Turing machines to artificial intelligence. The MIT Press, Cambridge, Mass, 2002.
 * [2] Wang, Pei. "Three fundamental misconceptions of artificial intelligence." Journal of Experimental & Theoretical Artificial Intelligence 19.3 (2007): 249-268.
 * I reverted b/c your text seemed off-topic for this summary page: this summary page isn't claiming in the first place that AI is merely simple mathematics, and we already state in the article that AI has relationships to cognitive science. The sources you provided seem like they would be more suitable for Turing machine; in contrast, this summary page doesn't even talk about Turing machines much, let alone advocate for or against them as a pedagogical tool. But of course, we can talk about it here and you can ask for other opinions. Rolf H Nelson (talk) 04:22, 29 April 2020 (UTC)

Illustration of GPT-2
As far as I know, GPT-2 (well, GPT-3 now, but I don't have a handy instance of that to run) is pretty much the vanguard of neural network text generation... I think it would be sick to have an image of GPT-2 in action in this article. I think it demonstrates the principles in question being successfully applied to a practical, real-world task that's easy for the reader to see and evaluate (i.e. completion of coherent sentences from a prompt). It's spooky and cool, too; you really can't tell the difference between GPT-2 text and human writing. Anyway, I think that this is important to understanding the relevance and impact of the technology, so I am adding it back. { $\mathbb{JPG}$ } 21:23, 5 June 2020 (UTC)


 * To the lay person, it is just some text that's likely very difficult to read squeezed into a sidebar, that uses terms that aren't defined in the article they're reading, and that text really doesn't have much to do with what they're reading anyway. All that, and you used a caption that suggests that the program might be conscious in Wikipedia's voice. - MrOllie (talk) 21:27, 5 June 2020 (UTC)


 * Yeah, I'll admit that caption kind of sucks. I was trying to submit a better one when I got an edit conflict with you removing the embed altogether. Asking if the model is conscious does seem rather bold, but it's written to be concordant with the overall section's voice, which does the same thing a lot (i.e. in the first subsection it outright says Can a machine be intelligent? Can it "think"?). It might not seem very WP:MOS, but I think that describing open philosophical issues currently under study in the form of questions makes sense and is more succinct than any alternative (I suppose we could replace "Can a machine be intelligent?" with "The question of whether machines can be intelligent is the subject of active philosophical study" and then try to pull in sources for what's now a definitive statement). I mean, if you want to tangle with that for the whole section, I won't complain, but... { $\mathbb{JPG}$ } 21:38, 5 June 2020 (UTC)


 * It seems like putting the cart before the horse to quibble about the caption when there's not yet any consensus that the image should be on this page at all. Let's wait a bit and see if anyone else would care to weigh in. - MrOllie (talk) 22:00, 5 June 2020 (UTC)


 * Okay, I'm requesting a third opinion on the inclusion of the image in this article, as well as in the others under disagreement (Unsupervised learning, Deep learning, Machine learning and Transformer (machine learning model)‎). I don't know how appropriate it is to discuss all five article edits on this talk page (and a lot of this conversation has occurred on your talk page as well), but it seems a better place than anything else I could come up with. { $\mathbb{JPG}$ } 22:24, 5 June 2020 (UTC)


 * Does it need to be an image? Could we have a sidebar (formatted similar to an image) of that text, properly quoted? Or just a normal block quotation in-text? With different coloring for the pre-GPT part and the GPT part? Leijurv (talk) 00:30, 7 June 2020 (UTC)


 * Agree with MrOllie that the image doesn't fit with the current article, which makes the rest of the discussion a non-starter for now. The first step would be to try to add content on GANs to the article. Right now all we have is a paragraph on deepfakes that needs copyediting and/or replacement anyway. IMHO GANs could go under 'Artificial neural networks' for lack of a better place to put it. Rolf H Nelson (talk) 00:19, 8 June 2020 (UTC)
 * Edit: I'm an idiot, turns out GPT-2 doesn't use adversarial training. GPT-2 should probably just be discussed in Natural Language Processing then, without any WP:FRINGE stuff about it being conscious. Rolf H Nelson (talk) 05:09, 8 June 2020 (UTC)
 * I've copyedited the ethics section to remove the language that was posing similar questions in the voice of the encyclopedia. I think that without the final question the image could go in the NLP section and work fine? { $\mathbb{JPG}$ } 06:11, 8 June 2020 (UTC)

I'm opposed to the current image being used on this page for the reasons specified, including that many readers won't understand the example. I would be neutral if the caption were changed to something like "OpenAI's GPT-2 can often generate coherent text from a user-provided prompt. ", and if the prompt text were changed to something that can be sourced, such as "What happens when you stack kindling and logs in a fireplace and then drop some matches is that you typically start a" (the example used in the quantamagazine.org source). I'm slightly discontent that there doesn't seem to be a way to set a "random seed" so that users can WP:V the text posted, but it's not a showstopper for me personally. @Leijurv I'm personally fine with either a sidebar or a screen-capture (preferably with alt content for screen-reader accessibility), a sidebar is easier to read but a screen-capture documents what the popular talktotransformer.com web site experience looked like in 2019-20. Rolf H Nelson (talk) 06:20, 11 June 2020 (UTC)
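On the random-seed point above: a toy word-level Markov chain (in no way how GPT-2 works internally; just the simplest possible text generator, with a made-up corpus) shows how fixing a seed makes generated text exactly reproducible, which is the property that would let readers verify a posted example.

```python
import random
from collections import defaultdict

# Toy word-level Markov chain: record which word follows which, then
# sample a continuation of a prompt. With a fixed seed the output is
# exactly reproducible, which is what a verifiable caption would need.
corpus = ("the fire needs kindling and logs and the fire needs air "
          "and the logs burn slowly").split()

follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(prompt, n_words, seed):
    random.seed(seed)
    words = prompt.split()
    for _ in range(n_words):
        successors = follows.get(words[-1])
        if not successors:          # no known successor: stop early
            break
        words.append(random.choice(successors))
    return " ".join(words)

print(generate("the fire", 5, seed=42))
print(generate("the fire", 5, seed=42))   # identical: same seed
```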

New "Hardware" section
An editor has added a new section to the article called "Hardware". This would be fine, as the content was cited with an external link, but upon closer inspection, it was not verifiable: there was no apparent connection between the first part of the text and the source cited to support it, so I removed that section. The other section I left because I don't have access to the full source (an IEEE document).

Subsequently, RJJ4y7 undid my removal, another editor stepped in and removed the entire section, and RJJ4y7 undid that edit as well. I have absolutely no interest in an edit war (the only winning move is not to play) so I'm hoping we can discuss.

In my view, the GPU subsection (previously titled "von Neumann") fails our verifiability policy and should be removed until sources are found that support it. RJJ4y7 has come up with two new sources for the section, but one is plainly not reliable and neither explicitly supports the claims attributed to them. The Memristors subsection is sourced to this document, which is not publicly available for free, so I can't say with certainty whether it is verifiable, but it should be checked by someone who has access. Policy indicates that reliable sources do not need to be free, so that subsection should probably remain unless it can be shown to fail verification. — Rutebega ( talk ) 01:40, 11 June 2020 (UTC)

Rutebega

OK, I will state my original rationale for creating this section as well as the subsections.

First, a bit of background: as you may know, AI programs are different in many ways from conventional programs in the sense that they are networks. As such, they are massively parallel and have certain computational requirements at both the software and the hardware level in order to perform their intended function. Much research has been dedicated to finding suitable hardware for this purpose, and the amount of opinion on that research even more so.

As of the time of this writing (Thursday, June 11, 2020), there are three main approaches to hardware designed for AI research and application:

1) Conventional hardware with a von Neumann architecture, specifically graphics processing units. The Wikipedia article AI accelerator, under the section "History of AI acceleration", covers the topic quite well.
2) Memristors (and yes, I know the use of memristors in AI is controversial, but that shouldn't be a reason to leave it out?).
3) Neuromorphic hardware (see the Wikipedia article Neuromorphic engineering), like the ones BrainChip Inc. and Intel are currently manufacturing. It should be noted that neuromorphic computing is recognized as an essential technology for the future of AI, so much so that the Human Brain Project (https://en.wikipedia.org/wiki/Human_Brain_Project) has an entire research section dedicated to it. See the Wikipedia articles SpiNNaker and BrainScaleS.

My intention was to take this specific aspect of AI (hardware) and include it in the general article on AI. The subsections represent the aspects of the specific topic as explained above.

Update, 1:59 pm the same day: it seems that the hardware section I wrote has been deleted, and in its place is a sub-subsection titled "Hardware improvements" under the subsection "Evaluating progress". The paragraph written doesn't mention neuromorphic or other alternative computing architectures in any way. Also, why can't it just be a section on its own? It's noteworthy enough.

-RJJ4y7 — Preceding unsigned comment added by RJJ4y7 (talk • contribs) 14:51, 11 June 2020 (UTC)
 * This article is already borderline WP:TOOBIG (92K prose), so I personally tend to oppose additional content as WP:UNDUE unless it's extremely well-sourced (top-tier MSM, top peer-reviewed papers). Content that satisfies WP:RS but isn't top-tier should go in one of the numerous child articles that flesh out more details. I know it's a headache to talk through WP:CONSENSUS for inclusion of content given the large number of editors on this page, but it is what it is. Rolf H Nelson (talk) 03:00, 12 June 2020 (UTC)
 * IMHO we can add more about hardware, but it needs to be more strongly sourced; I couldn't find top-tier sources on memristors, but perhaps as the technology matures there will be more to say about it in the future. Rolf H Nelson (talk) 03:00, 12 June 2020 (UTC)
 * As far as my moving it to a subsection, I'd like to see what we can get consensus to expand the section to before we decide to give it its own section. Rolf H Nelson (talk) 03:00, 12 June 2020 (UTC)

"Challenge" portion of the article
I believe that the "Challenge" portion of the article could use more neutral wording. It seems as though an opinion is being expressed instead of exclusively facts. Djr102 (talk) 22:12, 30 June 2020 (UTC)djr102
 * Analyses are fine, where well-sourced and non-controversial. As always, per WP:BRD users should feel free to contribute changes. It would help to give specific examples of what you consider the worst offenders. Rolf H Nelson (talk) 04:51, 1 July 2020 (UTC)

According to UnDaoDu AI was predicted by Mayans. The 5th Age is the Age of AI" -- Book of Du prophesies
2020 UnDaoDu reveals the hidden prophesies and interpretation of what he calls the Mayan Zodiac Prophesy hidden in plain sight. A kind of Rosetta Stone roadmap for ushering the 5th Age. The 5th Age will be built on “Un” Bitcoin Un_ravelling Fiat and replace it as the global reserve currency; “Dao” Wall Street's and Silicon Valley’s will be Un_ravelled by DANOs (npo daos); “Du” qAi Un_ravels EVERYTHING.” - UnDaoDu Source
 * This is just repeating the mantra of followers of some person. If you have a specific article suggestion, please make it. <b style="color: #0000cc;">North8000</b> (talk) 10:52, 26 July 2020 (UTC)

“intelligence demonstrated by machines”
No, simply NO!

Artificial intelligence may be instantiated as machines, in the future, but it has never been demonstrated at all. The only thing that has been demonstrated is smartness in strongly confined areas; it is nothing close to general intelligence. It may be instantiated as machines, but it may also be instantiated as wet artificial life, and that would not imply a machine at all.

I tried to link to this article, started to read it, and then found out it is simply not good enough. It is so full of errors… Jeblad (talk) 15:26, 17 June 2020 (UTC)
 * The article's lead section doesn't claim that artificial general intelligence has been demonstrated, though it describes other forms of "weak" or "narrow" AI that already have been demonstrated. Where are the errors in this article? Jarble (talk) 15:51, 18 August 2020 (UTC)
 * First sentence in the article: <q cite="https://en.wikipedia.org/w/index.php?title=Artificial_intelligence&oldid=972818517">Artificial intelligence (AI), sometimes called machine intelligence, is intelligence demonstrated by machines, unlike the natural intelligence displayed by humans and animals.</q> The sentence is simply false; we are not even close to demonstrating intelligence by machines.
 * On weak AI (check Searle's definition): we are not even at the point where we can test hypotheses about the mind; we are testing hypotheses about functional machine learning, which is something very different. If we were to test hypotheses about the mind, then we would have to start grasping how the mind actually works. We are not there at all.
 * Quite a number of people have started to use the phrase “machine intelligence” for various smart machines, so as not to confuse it with real artificial intelligence. It is a kind of counter-movement to the massive marketing hype that everything with a bit of “smartness” is an AI product. No, your lawn mower is not some kind of AI gadget even if it manages to avoid the flowers! Jeblad (talk) 16:44, 18 August 2020 (UTC)
 * I think that the answer comes from a much more mundane level. This article is about artificial intelligence by the normal meanings of the term.  The arguments expressed in your post are for another meaning of the term and I think would be a good addition to the article as such if sourced.  Sincerely, <b style="color: #0000cc;">North8000</b> (talk) 18:37, 18 August 2020 (UTC)
 * The article tries to articulate the marketing hype about “artificial intelligence”, which is (sorry to say) plain bullshit. Marketing hype is not, and will never be, what defines the normal use of words. This is what a lot of people call “machine intelligence”, but it should probably just be called “smart machines”: machines that cut your lawn and avoid your flowers. They have no idea why they shouldn't cut your flowers, or any ability to reason about why it is so; they just obey a random constraint.
 * Too many users at Wikipedia are writing about things they have little knowledge about, but manage to drive away those that have real knowledge. This article, like many articles at Wikipedia, tries to describe a term or phrase, but ends up obfuscating the real meaning, and it is impossible to correct the mess. Jeblad (talk) 13:35, 19 August 2020 (UTC)
 * There is a whole range of definitions that exist between the one that you mentioned (which is so rigorous that it says that AI doesn't exist) and the other extreme which is overuse of the term especially in marketing and intellectual-incest-"journalism".  For example an "in between" one(from Google):  "the theory and development of computer systems able to perform tasks that normally require human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages."  <b style="color: #0000cc;">North8000</b> (talk) 14:25, 19 August 2020 (UTC)
 * There are a whole range of definitions that are simply false. Listing every technique that has ever touched on the subject will not make the article right; it will just be a very visible mark of yet another overly broad article of gobbledygook.
 * A few pretty good definitions
 * Merriam-Webster dictionary: AI is a branch of computer science dealing with the simulation of intelligent behavior in computers.
 * Russell and Norvig, among others: AI is a subfield of computer science aimed at specifying and making computer systems that mimic human intelligence or express rational behaviour, in the sense that the task would require intelligence if executed by a human.
 * The Cambridge Handbook of Artificial Intelligence: The attempt to make computers do the sort of things human and animal minds can do – either for technological purposes and/or to improve our theoretical understanding of psychological phenomena.
 * And a definition that is somewhat off…
 * English Oxford Living Dictionary, it appears in several of their publications: The theory and development of computer systems able to perform tasks normally requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages.
 * A comment on the last, from an assignment at the University of Oslo: <q cite="https://www.uio.no/studier/emner/matnat/ifi/IN5480/h18/deliverables/individual-assignments/second-iteration/individualassignment_kathino_2.pdf">I think this definition is a bit misleading in that it defines “visual perception” and “decision-making” as requiring human intelligence. These are tasks that some animals can perform, and while I might understand the intention of the definition is not to separate these abilities between human and animal intelligence, it makes it seem like these would require some “generally intelligent” intelligence, while artificial narrow intelligence can complete these tasks even better than some humans.</q> – Kathinka O. Aspvin
 * I found the reply to be quite on the point! Jeblad (talk) 19:48, 21 August 2020 (UTC)
 * This reinforces what I said: there are a lot of viable definitions. <b style="color: #0000cc;">North8000</b> (talk) 20:09, 21 August 2020 (UTC)

Announcement of reference removal
In the section “Basics”, an arXiv paper was referenced: "Goodfellow, Ian J.; Shlens, Jonathon; Szegedy, Christian (2014). Explaining and Harnessing Adversarial Examples.". The paper was presented at ICLR, a highly specialized conference with low public visibility. The number of people who have read the paper, or will in the future, is low. If no counterarguments are provided, I will delete the reference from the article in the near future. This will provide more space for the remaining references.--ManuelRodriguez (talk) 09:04, 17 September 2020 (UTC)
 * Sounds good. IMO references should be there to clearly support the cited statement and/or be really useful for typical readers. Hot technical articles like this tend to be full of reference spamming, which serves neither of those purposes. <b style="color: #0000cc;">North8000</b> (talk) 13:32, 17 September 2020 (UTC)

"Artificial intelligence tools for computer science"
This article was recently split from Artificial intelligence, but it doesn't seem to have a coherent topic: it describes tools and approaches from computer science that are used to solve problems in artificial intelligence, instead of tools from artificial intelligence that are being used to solve open problems in computer science. Should these two articles be merged again? Jarble (talk) 14:43, 28 September 2020 (UTC)
 * Comment. I split the Artificial intelligence tools for computer science article out solely for length reasons, as the Artificial intelligence article had gotten very long.  It's at 60k of readable prose right now, which is already at the top end of the recommendation of WP:SIZERULE, so merging another nearly 20k back in probably isn't advisable.  The tools article was originally a single section of the Artificial intelligence article and I didn't do any internal reorganization.  It's possible that parts of the tools article belong in different places rather than as a coherent unit; I don't have strong feelings about that.  (It's also possible I mixed the title up, and it just should be moved to Tools for artificial intelligence.) John P. Sadowski (NIOSH) (talk) 00:44, 29 September 2020 (UTC)

Announcement to remove a reference
In the article, there is a superfluous reference. In the section “Integrating the approaches”, the 12-page paper “#190 Laird, John (2008). Extending the Soar cognitive architecture. Frontiers in Artificial Intelligence and Applications.” was mentioned as additional information about the Soar cognitive architecture. The paper explains how Soar uses working memory to realize general intelligence. The problem is that the paper has few readers and was published in a highly specialized journal. If no counterarguments are provided, I will delete the reference after a waiting period of one week.--ManuelRodriguez (talk) 11:20, 1 October 2020 (UTC)