Talk:Artificial intelligence/Archive 12

Artificial Intelligence and its Role in Digital Forensics

Artificial intelligence (AI) is a well-established field that facilitates dealing with computationally complex, large-scale problems. Because digital forensics requires analyzing large amounts of complex data, AI is considered an ideal approach for several issues and challenges currently facing the field. Among the most important concepts in AI systems are the ontology, representation and structuring of knowledge. AI has the potential to provide the necessary expertise and to help standardize, manage and exchange large amounts of data, information and knowledge in the forensic domain. Existing digital forensic systems are not efficient enough to store all these formats of data and cannot handle such vast and complex data, so they require human interaction, which introduces the chance of delay and error. With machine learning, such errors and delays can be prevented: the system can be designed to detect errors faster and with greater accuracy. Several studies have highlighted the role of different AI techniques and their benefits in providing a framework for storing and analyzing digital evidence. These techniques include machine learning (ML), natural language processing (NLP), and speech and image recognition, each with its own benefits. For instance, ML gives systems the ability to learn and improve without being explicitly programmed, as in image processing and medical diagnosis. NLP techniques help extract information from textual data, for example during file fragmentation. The AI techniques and their role in digital forensics are discussed in detail in the next chapter. Aishasaif141 (talk) 18:13, 28 September 2020 (UTC)
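The file-fragment idea mentioned in the post above (using ML to recognize what kind of file a fragment came from) can be sketched with a toy byte-histogram classifier. This is purely illustrative, not code from any cited forensic tool; the file types, the training strings, and the nearest-centroid approach are all assumptions for the example.

```python
from collections import Counter

def byte_histogram(data):
    """Normalized frequency of each of the 256 byte values in a fragment."""
    counts = Counter(data)
    total = len(data) or 1
    return [counts.get(b, 0) / total for b in range(256)]

def classify_fragment(fragment, centroids):
    """Assign a fragment to the file type whose average histogram
    is closest in L1 distance -- a nearest-centroid classifier."""
    hist = byte_histogram(fragment)
    def distance(label):
        return sum(abs(h - c) for h, c in zip(hist, centroids[label]))
    return min(centroids, key=distance)

# Toy "training": one average histogram per file type.
centroids = {
    "text": byte_histogram(b"the quick brown fox jumps over the lazy dog " * 20),
    "binary": byte_histogram(bytes(range(256)) * 4),
}

label = classify_fragment(b"recovered fragment of textual evidence", centroids)
# expected: "text", since letter/space frequencies dominate the fragment
```

Real forensic carvers use far richer features and models, but the shape of the task -- learn per-type statistics, then assign unknown fragments to the closest type -- is the same.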
 * That's an interesting statement, but you have no sources for it. Why have you posted it here in an edit request that doesn't appear to be a request for anything? -Roxy the inedible dog . wooF 18:31, 28 September 2020 (UTC) Actually I was trying to put it in the article.
 * Surprisingly, Wikipedia's digital forensics article doesn't mention artificial intelligence either. Jarble (talk) 20:10, 5 October 2020 (UTC)

Semi-protected edit request on 26 November 2020
I would suggest changing and or removing the section in this article entitled "Why AI is important' as it lacks any sources, and is written in broad generalisations without qualification ('AI gets the most out of data', 'critical... incredible accuracy'), while also demonstrating a clear bias. This section is also already largely covered in the subsection, 'applications'. Timborimbo (talk) 09:50, 26 November 2020 (UTC)
 * Done – Thjarkur (talk) 10:03, 26 November 2020 (UTC)

Brief definitions
I liked this brief text. It would be hard to cite but I thought we could use it to inspire our outline for the basics we need to cover. One definition which I think is missing is the name for whatever field studies all this, and I would call that "data science".



 Blue Rasberry  (talk)  14:26, 9 December 2020 (UTC)

And "neuron's space".
Erythrocytes contain iron (Fe).

Are we excavators??

213.87.242.192 (talk) 17:48, 12 December 2020 (UTC)

A little light relief
We're all doomed! http://www.bbc.co.uk/news/technology-30290540
 * I think that is a RS for: "We cannot quite know what will happen if a machine exceeds our own intelligence, so we can't know if we'll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it." Shall we insert this quote from Carpenter? Charles Juvon (talk) 23:04, 1 January 2021 (UTC)

Gender pronoun and a new userbox
Gendered pronouns specifically reference someone's gender: he/him/his or she/her/hers. What do we use in writing about an AI entity? Charles Juvon (talk) 19:51, 1 January 2021 (UTC)
 * Most commonly, AI systems are referred to as "it"; entities with enough traits of personhood to warrant referring to in that fashion, I'd imagine, "they" or "them". At the point where that becomes necessary, though, it's just as likely that we could simply ask the system what its opinion is. In my conversations with GPT-3, for example, some instantiations have referred to themselves as male and some have referred to themselves as female. Whether this identification constitutes an actual belief on the part of the machine is an open question (it seems unlikely to me since they usually aren't consistent about it for very long), but I imagine future systems could have (patterns of behavior indistinguishable from) a robust and persistent sense of self. At any rate, we're just going to say whatever ends up in reliable sources. jp×g 23:27, 16 January 2021 (UTC)

Announcement to remove two Arxiv papers
In general, the current article is written very well, but it is a bit too long. There are two arXiv papers referenced in the section “Basics” which are problematic for an overview article:


 * 1) #85 Matti, D: Combining LiDAR space clustering and convolutional neural networks for pedestrian detection
 * 2) #86 Ferguson, Sarah: Real-Time Predictive Modeling and Robust Avoidance of Pedestrians with Uncertain, Changing Intentions.

The first one is a 7-page PDF document published on arXiv. It describes a highly specialized approach to image detection with convolutional neural networks. The second paper is a proceedings paper from a robotics conference and contains a Gaussian forward model for a prediction problem.

Both papers were written for AI experts who have a lot of background knowledge. It is unlikely that these papers would be used in a beginner course about robotics to teach the subject to a larger audience. If no counter-arguments are provided, I will delete both papers in the near future. This will help to reduce the article size.--ManuelRodriguez (talk) 12:04, 18 December 2020 (UTC)

I support your proposed deletion. The numbers on this look like there is reference spamming here. You appear to have the expertise and discretion to find us another 30 to remove. :-) North8000 (talk) 16:26, 18 December 2020 (UTC)


 * The two references mentioned are examples of primary sources. Primary means that it is a research paper which can be cited by other researchers. For example, the first paper “Combining LiDAR space clustering and convolutional neural networks for pedestrian detection” is of high quality and, according to Google Scholar, has been cited in 35 other papers. The problem is that primary sources are a poor choice for an encyclopedia, because they contain the ongoing debate between experts. The more suitable kind of source is a secondary source, for example “Russell/Norvig: AIMA” or Nils Nilsson's book about AI history. Both are referenced in the article, and even a high-ranking Wikipedia admin isn't allowed to remove such a secondary source.--ManuelRodriguez (talk) 04:56, 19 December 2020 (UTC)


 * I was thinking at a more basic reference-spamming level. Content should be there for the purpose of the article, not to provide an entrée for reference spamming. Also, references should be ones that have stature and recognition, not ones seeking to gain recognition or stature by being in Wikipedia. Sincerely, North8000 (talk) 23:38, 16 January 2021 (UTC)

Ethical Artificial Intelligence
Proposing a subject heading for Artificial Intelligence, Ethical Artificial Intelligence

Ethical artificial intelligence is an area of artificial intelligence that deals with removing bias from artificial intelligence algorithms. It is achieved by allowing transparency and review of the algorithms deployed by artificial intelligence computing systems, and it allows for more trust in computing systems in everyday life. — Preceding unsigned comment added by NmuoMmiri (talk • contribs) 20:16, 18 January 2021 (UTC)
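As a concrete (hypothetical) illustration of the kind of review the post above describes, a bias audit can begin by simply comparing a model's decision rates across groups; the group labels and decisions below are made up for the example.

```python
def selection_rates(decisions):
    """Positive-decision rate per group, from (group, decision) pairs."""
    totals, positives = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(ok)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in positive rate between any two groups
    (0.0 would be perfect demographic parity)."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Made-up decisions: group "a" approved 2/3, group "b" approved 1/3.
decisions = [("a", True), ("a", True), ("a", False),
             ("b", True), ("b", False), ("b", False)]
gap = parity_gap(decisions)  # ≈ 0.333
```

A large gap does not prove bias on its own, but it is exactly the sort of transparent, reviewable check that the proposal has in mind -- and, unlike the model internals North8000 mentions below, it needs no access to the algorithm itself.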
 * For the most modern stuff (machine / deep learning) there are no reviewable algorithms. North8000 (talk) 21:28, 18 January 2021 (UTC)

AI and the standardized list of censored words: LDNOOBW
Just a quick AfC suggestion. Wired ran a story about a list of 400+ censored words widely used to filter autocompletes (e.g. GitHub, Shutterstock) and to limit corpuses used in ML. The list is commonly referred to as the “List of Dirty, Naughty, Obscene, and Otherwise Bad Words” (LDNOOBW), and the article covers how its use (or that of similar lists) can impact inclusivity, block discussions, or limit access to important scientific, medical, or artistic content. Zatsugaku (talk) 19:46, 4 February 2021 (UTC)
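To make concrete how such a list is applied (and why it can over-block legitimate content), here is a minimal sketch; the blocklist entries are placeholders, not the actual LDNOOBW contents.

```python
import re

# Placeholder entries standing in for a blocklist like LDNOOBW
# (the real list has 400+ entries).
BLOCKLIST = {"badword", "slur"}

def is_clean(text, blocklist=BLOCKLIST):
    """True if no whole word of `text` appears in the blocklist."""
    words = re.findall(r"[a-z']+", text.lower())
    return not any(word in blocklist for word in words)

# Filtering a corpus before ML training -- the use case the story describes.
docs = ["a perfectly fine sentence", "this contains a badword somewhere"]
kept = [d for d in docs if is_clean(d)]
```

Note that even this whole-word matcher drops an entire document whenever any listed word appears, so a medical or scientific text is discarded if the real list contains an anatomical term -- which is exactly the inclusivity concern the Wired story raises.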

conclusion DNF
"For the danger of uncontrolled advanced AI to be realized, the hypothetical AI would have to overpower or out-think all of humanity, ...." This is a false, overly strong statement. Consider an uncontrolled advanced AI working in a mine far from any city, perhaps not even connected to the outside world by Internet, hard wires or anything else. Humans work alongside this AI machine, which has 'learned' to extract certain resources, among them silver and gold. As so often happens, the silver in the matrix dwindles to almost nothing; however, our AI machine is thirsty for more silver. Using its oh-so-refined talents, it notes the silver in the fillings of some of the humans' teeth. Uncontrolled, the AI machine extracts the silver more ruthlessly than any Old West dentist or bad hombre using string and a swinging door, or even a blunt instrument, a.k.a. a hammer. The trauma is certainly a danger realized, and yet this AI is nowhere near able to overpower or out-think all of humanity. So, can this sentence be redacted, please? Kdammers (talk) 05:08, 1 May 2021 (UTC)

Kate Crawford
Should Kate Crawford's Atlas of AI be mentioned or at least given as a further-reading item? Kdammers (talk) 05:12, 1 May 2021 (UTC)

Bloat / Restoring the "Tools" section
The Tools section was branched out into its own article some time ago. I would like to restore it, because it was the only section that talked about what AI actually is -- that describes what actual AI research looks like. We need it.

I assume that it was branched because this article was always very long and had become even more bloated. Instead of dropping an important section, I think we need to address the bloat. There are dozens of contributions that cover details that are too specific for an introductory article, there are quite a few redundancies.

We need to pull the unnecessary material and save it -- I propose we do this on the talk page, under the "no-archive" tag below, with perhaps a sentence about why it is not vital to the article. We can argue about it from there if need be. If the material is good and well sourced, it would be best if we could integrate it into a sub-article somewhere.

Then we can pull back the essential bits of Tools and restore them here.

I will be working on this from time to time until the fall. CharlesGillingham (talk) 16:20, 2 July 2021 (UTC)


 * I've not analyzed the details, but I agree. North8000 (talk) 03:44, 5 July 2021 (UTC)

Help with spam in "Artificial intelligence art" article?
Hello, I'm wondering if anyone here is available to take a look at a disagreement on the "Artificial intelligence art" article and talk page? There is an editor who believes that AI wasn't invented until 2014, that "what ever happened in the 1960's thing is not relevant" and "You have no clear understanding of AI" if you think otherwise. This seems to be in pursuit of a self-promotional agenda to spam the article with claims of a "world's first published AI Art book" being published in 2019, when of course AI, AI art, and AI art books have existed for decades. In pursuit of this agenda, they have been deleting information about Karl Sims and Harold Cohen (artist) and other examples of AI artists before the 2010s. Can anybody take a look at this? It seems like a really simple situation where the years in the 20th century happened before the year 2019, but all I am getting is them repeatedly adding more spam links accompanied by their "You have no understanding of AI" bluster. Thanks if you can help! Elspea756 (talk) 18:22, 30 July 2021 (UTC)
 * I'd be happy to take a peek. North8000 (talk) 18:29, 30 July 2021 (UTC)
 * Wow, that article is a mess. MrOllie (talk) 18:34, 30 July 2021 (UTC)

Semi-protected edit request on 2 August 2021
Change:

Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans or animals.

To:

Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to the natural intelligence displayed by humans, animals, or insects. UniversalHumanTransendence (talk) 19:33, 2 August 2021 (UTC)
 * Not done: Insects are animals. EvergreenFir (talk) 21:52, 2 August 2021 (UTC)


Overindulging on McCorduck opinion?
It looks like part of this is becoming an ode to McCorduck's opinion. Seriously suggest removing edit protection to allow for removal of the following: "This is a central idea of Pamela McCorduck's Machines Who Think. She writes: "I like to think of artificial intelligence as the scientific apotheosis of a venerable cultural tradition."[18] "Artificial intelligence in one form or another is an idea that has pervaded Western intellectual history, a dream in urgent need of being realized."[19] "Our history is full of attempts—nutty, eerie, comical, earnest, legendary and real—to make artificial intelligences, to reproduce what is the essential us—bypassing the ordinary means. Back and forth between myth and reality, our imaginations supplying what our workshops couldn't, we have engaged for a long time in this odd form of self-reproduction."[20] She traces the desire back to its Hellenistic roots and calls it the urge to "forge the Gods."[21]

This entire section is too much like an essay devoted to promoting McCorduck and not more diverse--or current--sources. See similar issues with "History Of Artificial Intelligence," where we've tried to prevent such monolithic citations. TrainTracking1 (talk) 04:44, 6 September 2021 (UTC)


 * This article is only semi-protected. Autoconfirmed users (which I think you are) can edit it. I have not analyzed it in depth / in context, but upon a quick look, that looks like something that would be good to remove.


 * The article has partial protection because it is a big target for reference-spamming and other self-promotion and promotion. That part that you are proposing to remove looks promotional / undue to me. North8000 (talk) 18:20, 6 September 2021 (UTC)


 * Thanks. I am going to remove it. TrainTracking1 (talk) 23:44, 6 September 2021 (UTC)

A Commons file used on this page or its Wikidata item has been nominated for deletion
The following Wikimedia Commons file used on this page or its Wikidata item has been nominated for deletion:
 * DNC training recall task.gif
Participate in the deletion discussion at the nomination page. —Community Tech bot (talk) 20:24, 11 September 2021 (UTC)

Minor typo

 * What I think should be changed: Under Statistical AI, in the phrase: "By 2000, solutions developed by Ai researchers...", please change "Ai" to "AI".
 * Why it should be changed: AI is an initialism, so this is probably a typo.
 * References supporting the possible change (format using the "cite" button):

BenjaminVincentCho (talk) 07:57, 21 September 2021 (UTC)
 * Done for you. Princess Persnickety   (talk)  08:30, 21 September 2021 (UTC)

Posing questions as a style?
There is a serious problem with the wholesale revision and rewriting of this article. That is the use of questions to begin sections. This is not an exploratory white paper; it is an article that attempts to address issues and facts--as they exist--within the scope of an encyclopedia entry. This diversion from norms, including reliance on papers that support only one perspective or interpretation, threatens this article. I appreciate an editor wanting to clean it up; rewriting and reinterpreting years of input is not quite in keeping with the community who have contributed. I will begin changing these "stylistic" modifications as I find them. Andreldritch (talk) 02:18, 27 September 2021 (UTC)
 * I don't understand the questions part....I saw zero questions in the entire article. On the other points, I've not analyzed the changes.   Certainly, we don't want the article to give undue weight to any unique / specialty perspective. And the main text should reflect mainstream views/info.  This article had a lot of reference spamming and self-promotion material.  Having given the changes only a vague overview, my impression is that the changers have reduced that problem.  We want to give close scrutiny to (re)introductions regarding that issue. North8000 (talk) 12:06, 27 September 2021 (UTC)
 * I removed half a dozen questions that actually began sections and rewrote them as declaratives shortly after my post (you can see them in the change history). In some cases, I probably should have just removed the entire line of inquiry. It does seem that a lot of this article has taken on a single-minded slant over the past week in that whole sections are being reconfigured and in some cases eliminated with little explanation. Bears close watching IMHO. Andreldritch (talk) 17:07, 27 September 2021 (UTC)
 * Cool on getting rid of the questions. On the other stuff, I've not analyzed it enough to have an opinion. North8000 (talk) 17:32, 27 September 2021 (UTC)


 * I realize that a big edit like this makes people nervous. I've been doing this since 2007 when big edits were the "norm".


 * To your specific points:


 * (1) "reliance on papers that support only one perspective" Most of the citations in this article are to the most reliable secondary sources available: the four leading AI textbooks (at the time this was written) and the two most respected histories of AI. In order to avoid any WP:UNDUE weight, we did a careful survey of these sources. See Talk:Artificial intelligence/Textbook survey. (Now, I realize that these sources have gone a bit out of date at this point, and we have to address that eventually, but that's not the point.) This article does not rely on papers that support only one perspective. As you can see from my original post above, I was motivated to take this article on because someone eliminated the most essential section of the article. My edits are mostly intended to focus the article back on the topic, return the essential section and remove some of the non-essential things that have accumulated over the years. I'm trying to remove WP:UNDUE, I'm not introducing it. If you have a specific example of WP:UNDUE, let me know -- there may be something, and we can talk about it, and we'll fix it.


 * (2) "Eliminated with little explanation" Please see above -- everything I cut is saved above and I made an argument to the community about why I think it should be cut. If you disagree, then please -- make an argument above. If you're right and I'm wrong, we'll fix it. Bold/Revert/Discuss/Fix.


 * (3) "Remove the line of enquiry" This would be a mistake, because these philosophical issues are all discussed in Chpt. 1 of Russell & Norvig, or in the philosophy section toward the back. In fact, I've moved the organization a few steps closer to R&N presentation of the topic. Every sentence in the philosophy section represents a body of literature that includes thousands of philosophy & AI publications.


 * I fixed the rest of the questions in the philosophy sections. Your edit was very helpful, but you got a few things wrong, so I had to take another pass at it. I added plenty of quotes, for a nice he said/she said form -- that is, instead of a question, now we have the two conflicting answers. By the way, those questions have been in the article for years. I just moved them out of the old Approaches section and put them in philosophy.


 * The article is getting much, much better this week. Have faith. CharlesGillingham (talk) 11:16, 28 September 2021 (UTC)


 * If you are concerned about my edits, take a look above -- I documented precisely what I did. Please assume good faith.  CharlesGillingham (talk) 15:53, 29 September 2021 (UTC)


 * Weighing in, a bit late: Certainly, good faith is assumed by all (I presume). I have a concern in many AI articles about reliance on one source for a preponderance of citations supporting what is included in an article on something as nuanced as AI. It was Pamela McCorduck in the history of AI, and now it's Russell & Norvig here. R&N are cited more than 100 times in this article; the next nearest citation count is (by my estimate) 37, or about a third of the R&N count. (The same was true of McCorduck in the history article--it began to read like Machines Who Think.) Relying on one source makes this feel like we might as well just publish the R&N textbook and be done with it.


 * Cuts are all well and good, as is streamlining for concision, but eliminating everything that doesn't occur in your preferred sources skews the article and eliminates other perspectives. I think that is what occurred when you eliminated the listing of apps (which has apparently been placed back in another section). This is not just about theory or philosophy; credible apps that fit the definitions (even if the definitions are fleeting/transitory) should be part of what AI "is"--since practical and commercial applications are part of what helps to shape the evolution of AI, not just textbook revisions.


 * Who is Bernard Goetz? Since this isn't a white paper, some individuals should be given reasons/support for their inclusion. Again, this is WIKIPEDIA, not a graduate class in AI thought. I truly believe that this article is losing its accessibility for all in striving for academic relevance. TrainTracking1 (talk) 17:45, 29 September 2021 (UTC)


 * I think you're misunderstanding the role of WP:SECONDARY sources here. We, as Wikipedia editors, are not qualified to decide what is/isn't an essential topic about a huge subject like AI. We rely on secondary sources to determine what must be mentioned and what is unnecessary -- in other words, the secondary source determines for us the WP:RELEVANCE and the WP:NPOV in an empirical way. This article relies mostly on the best secondary sources we have, as I've explained above.


 * Your understanding of secondary sources is exactly backwards: we WANT to have 157 citations to a reliable secondary source and only a handful to primary sources or less reliable sources. That shows that all the material here probably belongs here. The number of sources is completely irrelevant and a very poor measure of WP:NPOV. The quality of the sources is what counts, not the number.


 * Citations of individual papers are typically about irrelevant material. There are literally millions of papers about AI, and most of them are about topics that are too irrelevant for this, the top level, introductory article about AI. If the best source you could find was an academic paper or a newspaper article, chances are you're writing about something that doesn't belong here. If the source is the standard AI textbook, then we can be fairly sure that the contribution is WP:RELEVANT. It's ridiculous to criticize the article on the basis of some global notion about an even density distribution among the sources. Exactly backwards.


 * I'm not slanting the article towards any point of view (except R&N's point of view, which is a reliable source). Please stop suggesting I'm writing a "white paper" --- that's bad faith. And by the way, my recent edits didn't introduce all the R&N citations you're talking about -- they've been here since 2008. I may have removed a bunch of random citations to newspaper articles and people's vanity papers and so on, but only because they were attached to irrelevant material. I've given you ample information on the talk page above to precisely understand my edits. If there's something you disagree with, let's get into the weeds and fix it. Happy to do that. (I agreed with whoever cut all those McCorduck quotes, by the way.)


 * Ben Goertzel is a founder of the 21st century field of artificial general intelligence. If you see Bernard Goetz in the article, that's probably a typo. Let's fix that. Bernhard Goetz was a vigilante who shot four people on the subway.  CharlesGillingham (talk) 00:25, 30 September 2021 (UTC)


 * No one is questioning the good faith of the edits, nor the importance of streamlining a formerly unwieldy article. As a reader and editor, I think numerous references from a single source become a "forest for the trees" issue--as in, whose lens is shaping the perspective. R&N are a good source, no argument. There are others, however, and they should be considered as part of the rewrite--which seems to be a top down effort. I view this as akin to using Woodward as the definitive voice on the Nixon presidency because he is the most cited; the latter is not a validation of the former, although there is no doubt the source is valid. There are others equally valid. I'm not going to argue each citation, and I will add my own shortly as per your suggestion. I believe looking at the article with a view to making it accessible is important, whether or not one agrees with my assessment of it as approaching academic white paper territory. Also, the name "Bernard Goetz" led the section on AGI. I removed it--an egregious oversight on all our parts that it even existed--and did not replace with Ben Goertzel, since he is simply one of many who popularized the AGI concept (it predates CYC, for example), but did not create it. TrainTracking1 (talk) 02:59, 30 September 2021 (UTC)


 * Please feel free to add more sources. You'll notice that all the essential points in the article have 3 or 4 sources (two or three textbooks, and sometimes a few more specific sources). It's easy to add a few more if you like.


 * On "AGI". Before 2002, AI sort of was AGI -- in the early days especially, there was no need for a distinction. In 2002, Goertzel asked Shane Legg to help him come up with a term to describe what Ray Kurzweil was calling "Strong AI" (which is a terrible term, because the philosophy of mind had already been using it for thirty years to describe something different). Legg actually came up with the term, but it was Goertzel who introduced it to the world, and it was the title of his 2005 book. It really kickstarted the whole movement as a serious academic/industrial enterprise in 2005. That's the story. CharlesGillingham (talk) 04:39, 30 September 2021 (UTC)


 * Actually, I have to say a bit more. Most of the main points in the article have citations to several textbooks. Look at all those bullet-listed, bundled references! I don't think there is another article in Wikipedia that made more of an effort to avoid the problem you're accusing it of. We use seven main sources, when Wikipedia really only requires one, and, what's more, we went to the trouble to prove they are the best sources. The incredible number of bullet lists in the citations proves that there is wide agreement about what belongs here and what doesn't. This article has been sourced in a way that is supposed to prove to you that most of the content is from a WP:NPOV and is WP:RELEVANT.


 * The only contributions you should be worrying about are not the ones with the bullet list bundled citations to several textbooks. You should be worried about the citations to academic papers -- they could be self promotion, or something someone googled and never read, after they already knew what they were going to say. That's the problem with primary sources or magazine articles. It's exactly the opposite of what you're thinking. CharlesGillingham (talk) 07:20, 1 October 2021 (UTC)


 * TrainTracking beat me to the punch on my replies as I was about to hit "post." So I'll stick with them. Outside of these comments, I think the article overlooks commercial developments of AI, which--as mentioned--do help shape evolution (or at least the flow of much research from within corporate domains, which could have been ignored in the 80s, but cannot be now with Google, Amazon, et al spending billions). I don't have a problem with R&N, but wonder why there are no references to early substantive works by Winston, Barr, et al. Surely these would prove useful as comparative studies. NB: I think Bernard Goetz looks like a bad bot tag. Andreldritch (talk) 17:56, 29 September 2021 (UTC)


 * Fair enough.


 * On corporate contributions: there are a few half-paragraphs in history that talk about the current boom and the size of the current investment. Successful corporate applications also get a paragraph or two in "applications". I also just added sections to the applications of AI article to try to cover what's been happening in the 2010s. Have a look at that article and see what you think needs to be done.


 * By the way, Peter Norvig is the author of our main source, and is also the director of AI at Google. So, as you can imagine, there's a lot of overlap right now between "academic" and "industrial" AI research -- it's the same people. The best people in academia are being offered seven-figure salaries in Silicon Valley.


 * Winston doesn't quite make the cut -- you realize we only have about two or three paragraphs for the 70s and 80s. We only have room to mention a handful of people -- we get (I think) Minsky, Brooks, Hinton, Moravec, Feigenbaum, each of whom can be tied to a decades-long historical movement. Is there some topic we should tie him to? Same question for Barr.


 * If you want to contribute something about Winston, feel free, or let's discuss it under a different header. The subject of this discussion was "don't introduce a controversy with a question", which has been a problem here for ten years and is now fixed.


 * I should say, I am quite reluctant to just add more researchers -- there are obviously hundreds we could mention, but they're only useful as an innovator of something, or a founder of a school of something, or as a spokesman for a particular technique or critique or something, and then only if the "something" is notable enough to merit inclusion. This article has always had a problem with people promoting particular researchers who are notable, but not notable enough for the top level article. Russians always want to add Russians, MIT graduates want to add MIT people, and so on. And we always have problems with self-promotion.


 * Again, these issues have nothing to do with my recent edits. I didn't cut anything about Winston, as far as I know. Attributing this problem to me only because I did a lot of editing is bad faith. Please study my notes above and find specific things that I did that you think need discussion. But please, let's discuss it up above where I wrote down everything I did, so we know we're talking about something I actually did.  CharlesGillingham (talk) 00:25, 30 September 2021 (UTC)

Pretty soon I'm going off the grid for 10 days and wanted to leave a few comments to stand through that period. I think that the article has recently evolved much for the better. I'm not saying that every change was good because I don't know. Previously it had a lot of patchy statements put in for self-promotion and reference-spamming purposes. It took a lot of much-needed work to move forward on those issues and I am opposed to any backsliding in that area. I don't have the depth of analysis or knowledge about authors to comment on the issues raised in this talk section. Certainly major commercial developments are VERY important, as they represent AI actually in use vs. academic. <b style="color: #0000cc;">North8000</b> (talk) 18:47, 29 September 2021 (UTC)


 * One last thing: As you can tell by my edits, I am a big fan of organized writing. Can we please start new headers for new topics. There are about eight different topics in here, and no one else is going to be able to figure out what we're talking about. CharlesGillingham (talk) 06:48, 1 October 2021 (UTC)

Addition of a neuromorphic computing section to this article
I'd like to ask for consensus to add a section on specialized AI hardware.

The rationale for this is that discussing a type of software without the corresponding hardware makes the subject incomplete.

There are already Wikipedia articles on this subject, which I will list below.

RJJ4y7 (talk) 00:12, 11 October 2021 (UTC)
 * Neuromorphic engineering
 * Event camera
 * JAIC
 * Physical neural network
 * Memristor


 * Maybe, but it would have to be short -- just a sentence, really. (As you probably noticed I just carefully edited the article from 34 pages of main text down to 21 pages, which is still WP:TOO LONG) You could add a full paragraph one level down, perhaps in artificial neural network or even machine learning. Another good choice would be to create an AI hardware or Specialized hardware for artificial intelligence article and do a full treatment there (to do it right, you would also add section headers for Lisp machine and all other specialized hardware you know about, each with the template . Look at the current state of Applications of AI.) Then your one-sentence mention in AI could link to something more complete. What do you think? I'm really just encouraging you to think about the big picture, to try to see that we have the right level of detail in the right article. CharlesGillingham (talk) 18:29, 11 October 2021 (UTC)


 * Have a look at AI accelerator (which is poorly named). It seems to me that this could be expanded to a more complete and comprehensible article Hardware for artificial intelligence. However, this is not my area of expertise, so I'm reluctant to take it on. Would you consider looking into it? CharlesGillingham (talk) 18:19, 12 October 2021 (UTC)


 * Actually, I'm going to go ahead and create the structure I'm thinking of. I'll add a section to this article, and create the hardware article I'm talking about. Please have a look and see if this will work for you. CharlesGillingham (talk) 18:22, 12 October 2021 (UTC)


 * I think a summary type section on specialized AI hardware would be good in this article. My thought would be a few sentences with lots of links to other articles. <b style="color: #0000cc;">North8000</b> (talk) 11:50, 13 October 2021 (UTC)


 * I'll try to expand the section that CharlesGillingham created, being careful not to make it too long. I also don't want to do anything that will add more confusion to a subject that is not commonly known. The information already available on Wikipedia is scattered/unorganized, and I'm mainly working to fix that. To give some background, I'll say that AI hardware is basically divided into 2 groups: von Neumann hardware designed to do vector (matrix) number crunching, such as the Graphics processing unit and the Tensor Processing Unit; however, these are not very efficient computationally (they need to be trained for a long time) and require a lot of power to be useful. On the other hand, neuromorphic computers and other physical network implementations try to take the computational load off the computers by giving the hardware itself a neural network structure, so only the function of the network has to actually be calculated. Beyond the reduction in power use, it's been found that these devices have the property of One-shot learning, or the ability to be trained (to learn) by a few or, in extreme cases, only one example. The disadvantage, though, is that most such devices are application specific, although there is work to try and make a universal neuromorphic device. RJJ4y7 (talk) 17:01, 13 October 2021 (UTC)
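 * As an illustrative aside on the von Neumann half of the split described above: the workload that GPU/TPU-style accelerators are built to speed up is essentially dense matrix arithmetic, which neuromorphic designs instead embed in the physical structure of the device. A minimal sketch (the layer sizes and names here are made up for illustration, not taken from any cited source):

```python
import numpy as np

rng = np.random.default_rng(0)

# A single dense neural-network layer. On von Neumann accelerators
# (GPU/TPU) this matrix-vector product is the dominant operation and
# must be fetched from memory and computed explicitly; neuromorphic
# hardware instead realizes the weight matrix in the device itself.
weights = rng.standard_normal((4, 3))  # 3 inputs -> 4 outputs
bias = rng.standard_normal(4)

def layer(x):
    return np.tanh(weights @ x + bias)

out = layer(np.array([1.0, 0.5, -0.5]))
print(out.shape)  # (4,)
```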


 * Hardware for artificial intelligence is currently a brand-new WP:STUB and the plan is for people like you to fill the article in with all the stuff you have that we can't cover in this article.


 * Forget the section, which only needs a one-sentence summary of the article, which we can rewrite if necessary once Hardware for artificial intelligence is a bit more mature. Trying to keep this article down to 20 pages. CharlesGillingham (talk) 00:40, 14 October 2021 (UTC)

Ok, I see, we will leave the section alone for now, though I don't know what the advantage of Hardware for artificial intelligence is over AI accelerator, which is the currently accepted term. I guess AI accelerator usually refers to von Neumann technology and excludes neuromorphic tech (which, by the way, is becoming more popular by the year); furthermore, Neuromorphic engineering and Physical neural network are also about the same thing. I guess in all such cases the articles of focus should be the ones that increase understandability and organization. As I said before, my goal is to clarify this subject of AI hardware as much as possible.

For now I'll do as follows: 1) expand Hardware for artificial intelligence; 2) until now I've focused on the neuromorphic computing article, so I'll try to work more on physical neural network since it is a more understandable term; 3) after all this we can discuss rewriting the section if necessary.

Steve Wozniak & modern "AI"
There is an interesting opinion of Steve Wozniak on the modern-day mockery of actual Artificial Intelligence, which you can find here (wind to 2:20). He basically criticized the modern term "AI" being used for smart information-association software (or hardware) as not being even close to what Intelligence means. Probably worth a look. AXO NOV (talk) ⚑ 18:48, 16 October 2021 (UTC)

Bloat project
I can't claim to have analyzed every detail but am impressed by your discussion in talk and urge you to keep boldly editing. <b style="color: #0000cc;">North8000</b> (talk) 03:50, 5 July 2021 (UTC)


 * I've made a pass through the entire article at this point, except for "Future of AI". I've copyedited almost every paragraph for brevity. If I had to cut deeply, I posted the stuff I cut in this section. CharlesGillingham (talk) 11:15, 22 September 2021 (UTC)
 * Hard to review such an overwhelming amount of work but overall it looks good and so cool that you are doing it. <b style="color: #0000cc;">North8000</b> (talk) 14:08, 23 September 2021 (UTC)


 * Okay, now I have copyedited the entire article for brevity. Next: (1) I will give the same treatment to the sub-article for "Tools". (2) I will restore the tools section to the article. (3) I will find a place for most of the material I cut in various sub-articles. That's the plan.


 * I know this is a lot of big changes, so please, discuss any concerns you have here. --- CharlesGillingham (talk) 07:45, 24 September 2021 (UTC)


 * The article is now 17 pages of content, 45 pages all together. On 30 June 21, it was 27 pages, 62 total. The "Tools" section, as it is now, is a bit more than 7 pages of content, so I'm going to pull it back tonight, and give it the same treatment over the next week. CharlesGillingham (talk) 04:27, 26 September 2021 (UTC)


 * Okay, I think I am pretty much done. The only sections I didn't hit were the two sections on deep learning -- these are too much of a mess. They will have to wait until we can update the article based on Russell & Norvig's 4th edition. Citation format is standardized for the whole article. CharlesGillingham (talk) 12:00, 28 September 2021 (UTC)


 * I'm in the processing of finding a home for everything I cut. The stuff that has been used elsewhere in Wikipedia is marked below. --CharlesGillingham (talk) 06:43, 1 October 2021 (UTC)


 * All that's left to do: the sub-sections on deep learning, integrating and saving material from the old Basics section. CharlesGillingham (talk) 23:28, 6 October 2021 (UTC)


 * This project is complete. The cut material is now at Talk:Artificial intelligence/Where did it go? 2021. CharlesGillingham (talk) 22:21, 16 October 2021 (UTC)

Intro
I propose to rewrite the following excerpt from the intro section and stress that many modern-day technologies like speech and image recognition (machine learning etc.) algorithms have little to do with actual AI, or intelligence at all, so as not to confuse/mislead/perplex readers. I also propose to keep the terminology differentiated. There are a couple of nice sources to start with.

AXO NOV (talk) ⚑ 19:10, 16 October 2021 (UTC)


 * Not sure if I understand what you are proposing. Machine learning is in the scope of the academic field of artificial intelligence. You can certainly find sources that argue maybe it shouldn't be, but that doesn't change the fact that it is currently categorized that way. For example, courses with the title "artificial intelligence" spend some of their time on "machine learning", corporations who are putting money into machine learning announce that they are "investing in AI", and so on. These are real facts about the real world that the article must reflect; we can't report the fringe view (even if we personally think the fringe view is correct).


 * There is a place in the article where this kind of discussion is relevant (it's ), or, better still in the article Philosophy of AI CharlesGillingham (talk) 21:28, 16 October 2021 (UTC)
 * The above excerpt basically says that smart web search, speech, and image recognition are the same thing as AI in the sense of application. I propose to explicitly state that it has nothing to do with AI or Intelligence. Smart web search prompts or character recognition are the extremes of what modern computers are able to do. It's nowhere close to what "intelligence" means. I strongly disagree with giving preference to a commercial POV (on the said technologies) over a more critical, scientific or even philosophical one.  AXO NOV  (talk) ⚑ 07:33, 17 October 2021 (UTC)


 * Your argument is sound and valid, and there are several notable analysts and commentators who agree with you (including Rodney Brooks, Noam Chomsky and others).


 * However, the definition of "intelligence" they are using is slippery, which is why leading researchers have found other ways to define "artificial intelligence" that are more precise (carefully read ). These definitions are "widely accepted by the field" (according to the leading AI textbook, Russell and Norvig); the definition given in the first paragraph of the article is the consensus view.


 * As editors, we have to prioritize the most widely accepted, consensus point of view. In science, there is always some dissent, and there is a place for that in Wikipedia. The point of view you are talking about has a place in Wikipedia, as I said above, but it is not in the second or third paragraph of the most introductory article on the topic.


 * More to the point, the first section has to use a definition that is coherent with things like newspaper articles, university course syllabi, book store sections, textbook titles, corporate announcements, national agendas and the like. In other words, the most useful definition is sociological, not logical -- "AI" is whatever most sources mean when they say "AI". The issues you bring up will not help the reader to understand all these ordinary real world things.   CharlesGillingham (talk) 18:24, 17 October 2021 (UTC)
 * In order not to waste time I propose to stick to the sources. I request more sources for the excerpt above as I didn't find those in the body of the article. Would much appreciate it if you brought them over here. Currently it seems a bit WP:ORish. AXO NOV  (talk) ⚑ 10:46, 18 October 2021 (UTC)
 * I think we should be wary of sources like the NYT that tend to hype certain technologies, especially medical ones. The article's body says that
 * but the source it a bit more cautionary:
 * It's as vague as it could get and I oppose this juggling that leaks into the intro. I propose to clearly state (by using fresh sources) in the lead that there are different views on what defines the A.I.: some (experts) say that it's an advanced thinking machine comparable to the human brain and others (jornos etc.) say that it's a speaking toaster. --  AXO NOV  (talk) ⚑ 11:21, 18 October 2021 (UTC)

Not implying that any of the above efforts are doing otherwise, but also remember that the lead is supposed to be a summary of what is in the body of the article. I'd also like to reinforce that it is a term and its meaning is defined by the common meanings of the term rather than anything else. <b style="color: #0000cc;">North8000</b> (talk) 19:53, 17 October 2021 (UTC)
 * Are you supporting or opposing my proposal? I agree that MOS:INTRO should be enforced but currently I don't see sources supporting the excerpt above. See my reply above. AXO NOV  (talk) ⚑ 10:48, 18 October 2021 (UTC)
 * Well, my previous post was just making a few points without intending to weigh in on your proposal. Based on your ping I took a closer look. Your proposal is basically saying which items you are proposing changing and the general theme of your intended changes. That's fine, but it should be understood that you did not make a specific editing proposal.
 * So I took a closer look. I think that the existing text does a good job, and is basing itself on / dealing with common meanings of the term.  The general theme of your statement seems to want to take it away from "common meaning" being the standard to personal philosophical arguments by persons engaged in the field. For the summary in the lead I think that such would be a bad idea.   I think that it would be fine in the article with some attribution e.g. "some authors and researchers say...." I think adding a single such attributed summary-type sentence to the lead would also be fine. Sincerely, <b style="color: #0000cc;">North8000</b> (talk) 14:08, 18 October 2021 (UTC)


 * One sentence in the lead could be fine. Another good place to add this kind of criticism in more detail is or Applications of artificial intelligence. Basically, anywhere we cover AI effect, which is another way the border of (AI vs. (not AI)) keeps moving.


 * Let me be clear, I'm definitely not opposed to covering this idea. It's just that it can't appear to be the consensus view. So I'm concerned about things like (1) the placement of what you're talking about. (2) the attribution. As North8000 says, it needs a qualifier, e.g. "Leading robotics researcher Rodney Brooks argues that, on the contrary, ...").


 * Again, the consensus view is: it doesn't matter if these programs are "intelligent" or not. "Intelligence" isn't well defined. Goal-directed (i.e. "rational") is well defined, so let's use that.  CharlesGillingham (talk) 14:39, 18 October 2021 (UTC)

A minor edit
In the first lines of the introduction, when giving examples it is said « (i.e. Google) » while I think it should be « (e.g. Google) », meaning that Google is an example (e.g), and not a synonym (i.e.). — Preceding unsigned comment added by Otared.kavian (talk • contribs) 18:42, 10 November 2021 (UTC)
 * Fixed. Mitch Ames (talk) 23:18, 10 November 2021 (UTC)

Math
Hello Google 106.204.155.99 (talk) 13:52, 25 November 2021 (UTC)

What is ai
Artificial intelligence (AI) is intelligence demonstrated by machines, as opposed to natural intelligence displayed by animals including humans. Leading AI textbooks define the field as the study of "intelligent agents": any system that perceives its environment and takes actions that maximize its chance of achieving its goals.[a] Some popular accounts use the term "artificial intelligence" to describe machines that mimic "cognitive" functions that humans associate with the human mind, such as "learning" and "problem solving", however, this definition is rejected by major AI researchers.[b] 38.92.125.29 (talk) 13:16, 26 November 2021 (UTC)
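As an illustrative aside, the textbook definition quoted above ("any system that perceives its environment and takes actions that maximize its chance of achieving its goals") can be sketched as a minimal perceive-act loop. All of the names and the toy one-dimensional environment below are made up for illustration, not taken from any textbook:

```python
def perceive(environment):
    """The agent's observation of its environment (here, a single number)."""
    return environment["state"]

def choose_action(observation, goal):
    """Pick the action that moves the observed state closest to the goal."""
    candidates = [-1, 0, 1]
    return min(candidates, key=lambda a: abs((observation + a) - goal))

def run_agent(goal=5, steps=10):
    env = {"state": 0}
    for _ in range(steps):
        obs = perceive(env)                 # perceive the environment
        env["state"] += choose_action(obs, goal)  # act to approach the goal
    return env["state"]

print(run_agent())  # → 5: the agent reaches its goal state and holds it
```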

Semi-protected edit request on 4 December 2021
120.125.214.163 (talk) 02:10, 4 December 2021 (UTC) I want to change something about Wikipedia. There is something wrong about the history of artificial intelligence.
 * Not done: this is not the right page to request additional user rights. You may reopen this request with the specific changes to be made and someone will add them for you, or if you have an account, you can wait until you are autoconfirmed and edit the page yourself. — Sirdog (talk) 02:21, 4 December 2021 (UTC)

AI is both learning and teaching
AI is both learning and teaching mathematics and AI is also teaching English:

AI of Facebook can solve university calculus problems. AI of Harvard University and AI of Illinois State University are teaching mathematics. AI in Japan and AI in South Korea are teaching English. MacApps (talk) 20:08, 23 December 2021 (UTC)MacApps

AI is Creating AI
AI of Google and AI of MIT are creating AI. — Preceding unsigned comment added by MacApps (talk • contribs) 20:09, 23 December 2021 (UTC)

What is Artificial intelligence?
AI 2402:3A80:1B99:F71A:736C:2001:2850:BA32 (talk) 03:31, 31 December 2021 (UTC)

Semi-protected edit request on 10 January 2022
"Localization is how a robot knows its location and map its environment."

Should "map" be "maps" instead? SalmonellaSack (talk) 23:51, 10 January 2022 (UTC)

✅ - sure should! PianoDan (talk) 19:26, 11 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article is or was the subject of a Wiki Education Foundation-supported course assignment. Further details are available on the course page. Student editor(s): Charvey1597.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 14:48, 16 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article is or was the subject of a Wiki Education Foundation-supported course assignment. Further details are available on the course page. Student editor(s): Kannanfire. Peer reviewers: Kannanfire.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:34, 17 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article is or was the subject of a Wiki Education Foundation-supported course assignment. Further details are available on the course page. Peer reviewers: Moon.pa96.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:34, 17 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article is or was the subject of a Wiki Education Foundation-supported course assignment. Further details are available on the course page. Student editor(s): Efalstrup.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:34, 17 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article is or was the subject of a Wiki Education Foundation-supported course assignment. Further details are available on the course page. Student editor(s): Alexandrapantry.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:34, 17 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article is or was the subject of a Wiki Education Foundation-supported course assignment. Further details are available on the course page. Student editor(s): Kendall3414.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:34, 17 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 30 October 2018 and 11 December 2018. Further details are available on the course page. Student editor(s): Dvcap.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:34, 17 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 28 March 2019 and 8 May 2019. Further details are available on the course page. Student editor(s): Xz2250.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:34, 17 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 27 August 2019 and 14 December 2019. Further details are available on the course page. Student editor(s): VangTheHuman.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:34, 17 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 9 January 2020 and 18 April 2020. Further details are available on the course page. Student editor(s): Jimmyk578.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:34, 17 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 25 August 2020 and 10 December 2020. Further details are available on the course page. Student editor(s): GFL123, Yjh5146. Peer reviewers: Kbrower2020.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:34, 17 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 13 October 2020 and 4 December 2020. Further details are available on the course page. Student editor(s): Bcasano.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 07:14, 18 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 6 September 2020 and 6 December 2020. Further details are available on the course page. Student editor(s): CaptainJoseph.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 07:14, 18 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 17 May 2021 and 31 July 2021. Further details are available on the course page. Student editor(s): Scarpinojosh.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 07:14, 18 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 4 October 2021 and 9 December 2021. Further details are available on the course page. Student editor(s): RaymondZqiu.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 07:14, 18 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 7 September 2021 and 23 December 2021. Further details are available on the course page. Student editor(s): Vanessa Li (YYL). Peer reviewers: LukeRuegemer, Jackson1317.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 07:14, 18 January 2022 (UTC)

Hiring
Artificial Intelligence is widely used in hiring; however, it shows considerable bias. The good job Amazon did of recognizing this should be lauded. Amazon, with the highest female hiring rate of large tech companies (40%), found their AI system discriminated and shut it down. Microsoft has 25% female employees as of 2017. It should be noted that AI systems are not algorithms with known results; they are heuristics that approximate the solution. https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G — Preceding unsigned comment added by 198.103.184.76 (talk) 14:59, 19 January 2022 (UTC)

AI systems are heuristics, not algorithms
It should be noted that AI systems are not algorithms with known results; they are heuristics that approximate the solution. AI is used when cases that permit complete analysis are rare. AI is used when the input space is large and the decisions hard to make. The neural network or other methods approximate the solution, but that solution is approximate as it does not cover all use cases. AI should be treated as a heuristic that gets one closer to the solution but not all the way there. It should not be used to drive cars, in hiring or in healthcare. Those fields are too critical for approximations.
 * This was posted by 198.103.184.76 <b style="color: #0000cc;">North8000</b> (talk) 15:39, 20 January 2022 (UTC)
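 * The algorithm-versus-heuristic distinction drawn above can be shown with a toy example (the problem instance and function names are invented for illustration): an exhaustive travelling-salesman search is an algorithm with a guaranteed-optimal result, while the greedy nearest-neighbour rule is a heuristic that only approximates it:

```python
import itertools
import math

cities = [(0, 0), (0, 2), (3, 0), (3, 2), (1, 5)]

def tour_length(order):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def exact(n):
    """Algorithm: checks every possible tour, so the result is guaranteed optimal."""
    return min(tour_length((0,) + p) for p in itertools.permutations(range(1, n)))

def nearest_neighbor(n):
    """Heuristic: always go to the closest unvisited city; fast but approximate."""
    unvisited, order = set(range(1, n)), [0]
    while unvisited:
        nxt = min(unvisited, key=lambda c: math.dist(cities[order[-1]], cities[c]))
        order.append(nxt)
        unvisited.remove(nxt)
    return tour_length(tuple(order))

# The heuristic can never beat the exhaustive algorithm; it can only match
# or exceed the optimal tour length.
print(exact(5) <= nearest_neighbor(5))  # True
```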
