Talk:Chatbot/Archive 1

Source code
The source code, even in QBASIC, is quite obscure — it looks pretty obscure to me, and I used to do the odd bit of programming in the language a few years ago! It's not going to be very helpful for 99% of our readers. I think we should, if not remove it outright, recast this as pseudocode, and, if it would be helpful, provide an external link to source code. — Matt 18:43, 20 Sep 2004 (UTC)

We could certainly recast the program as pseudocode as an aid to comprehension. However one of the reasons for writing it in QBASIC was to give an example chatterbot which would actually run "as is". A pseudocode version might be more useful to experienced programmers but perhaps less so to neophytes or non-programmers since it would be impossible for them to run it. By comparison an interested neophyte or non-programmer can get the current code running by following the simple instructions included in the current article.

An alternative to pseudocode would be to rewrite the program to make it clearer. For all the unusual layout of the program it is actually fairly simply structured, so that would not be difficult to do. -- Derek Ross | Talk 05:15, 2004 Sep 21 (UTC)


 * One problem is that not every reader even has a QBASIC interpreter, (remember they stopped shipping it with late versions of Windows 98, not to mention non-Windows systems) nor even the knowledge of how to enter and execute such a program. Moreover, it's probably asking too much to expect a general reader to understand BASIC syntax, even if it was laid out correctly. A larger number of readers would have half a chance of reading pseudocode and getting the gist of it. If an adventurous reader wants to execute some code, I think an external link would do the trick. — Matt 09:45, 21 Sep 2004 (UTC)

Those are fair points but I still don't feel that pseudocode is enough. I contributed the code in answer to the request at the top of this page, so the source code is a response to demand. However if QBASIC is no good perhaps you can suggest a better language. Something like awk or perl perhaps ? -- Derek Ross | Talk 03:23, 2004 Sep 23 (UTC)


 * For people who want to try out or implement a chatterbot, I think the set of external links to software for various platforms is sufficient. I'd point out that the request at the top of the page asked for "links to open source bots", not for actual source code to be placed within the article. I think including a simple chatterbot in pseudocode (and the resulting conversation) could be an excellent piece of illustration, but I don't think we should use Wikipedia as a repository for sample source code. — Matt 12:03, 30 Sep 2004 (UTC)


 * I'm not sure if this conversation is dead or not, but I figured I'd add my two cents anyway. After running the code, I tried to rewrite it in VBScript (the native scripting language of my IRC client) so I could run it in an IRC channel. Now, I'm no newbie to programming, but I was pretty confused by most of the code -- I ended up mostly converting the syntax and trusting that it would work. It didn't; I figure I must have messed something up somewhere. My point is, it would be nice if the code was commented, or if the variable names were longer than one character... that way, I can learn and understand the code instead of simply being able to run it. Thanks for listening! AquaDoctorBob 14:30, 9 Jan 2005 (UTC)


 * Okay, I do have a version which is longer but more conventional in appearance. It should be easier to translate into other languages or dialects.  I'll upload it instead. -- Derek Ross |  Talk 23:12, 2005 Jan 9 (UTC)


 * Adding to the above conversation, I suppose I'm an "ordinary" person with no knowledge of programming code, and the stuff in this article baffles me. It's really not helpful at all to me, and I'd hazard a guess that more readers aren't programmers than are. As mentioned above, later WIN98 releases don't have a QBASIC interpreter, and it seems I fall into that category. I came here expecting an article about chatterbots. Perhaps the code would be better suited to a Wikibook on QBASIC programming? - Vague | Rant 07:06, Jan 18, 2005 (UTC)


 * Fair enough. I think that part of the problem for non-programmers is that the article is really just a stub followed by an example of interest to programmers. In order to improve the article we need more informative material to counterbalance the example.  I'll look at converting the WikiChat program into something that can be run more easily on modern systems too.  That will take a few days to sort out. -- Derek Ross |  Talk 15:37, 2005 Jan 18 (UTC)


 * I have no knowledge of programming code, and having the full .BAS there did it for me. I had assumed such programs would be huge and complex. To see the code (it's not like I've read it; I've merely noticed how little text it is) and then to see what it can do taught me a lesson. If you ever decide to link it away, be sure to mention in the link text that it consists of only 90 lines. 22:43, 14 October 2005 (UTC)

Well, thanks, Anonymous User. I'm glad that at least one person has found the code informative. -- Derek Ross | Talk 05:43, 15 October 2005 (UTC)

WikiChat learning sessions
The following sessions show how to teach WikiChat some simple conversation. Note that sessions are started by running the WikiChat program and ended by pressing the return key without giving a response to the computer. They show WikiChat being taught a basic conversational pattern in English.

(Session 1)
Human: hello there
Computer:
Human: hello
Computer:
Human: how are you
Computer:
Human: fine thanks how are you
Computer: fine thanks how are you
Human: fine thanks
Computer:
Human:

(Session 2)
Human: hello
Computer: how are you
Human: fine thanks how are you
Computer: fine thanks
Human: glad to hear it
Computer:
Human:

(Session 3)
Human: hello
Computer: how are you
Human: fine thanks
Computer: glad to hear it
Human:

The responses are stored for the next session, so there is no need to reteach them. Note, however, that while this example chatterbot will create "original" sentences after a little teaching, it has a very limited capacity and works best with short sentences.
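The learning behaviour shown in the sessions above can be sketched in a few lines of Python. This is a hypothetical re-implementation of the idea, not the original QBASIC program: the bot simply remembers which human sentence followed which, and replies to a known sentence with the sentence that followed it last time. (The real WikiChat also did partial matching and saved its memory to disk between sessions; those details are omitted here.)

```python
# Hypothetical Python sketch of the WikiChat learning idea (the article's
# original code was QBASIC).  The bot records, for each human sentence,
# which sentence the human spoke next, and uses that as its reply.

class LearningBot:
    def __init__(self):
        self.successor = {}   # sentence -> sentence that followed it
        self.previous = None  # the human's most recent utterance

    def respond(self, text):
        text = text.strip().lower()
        # Learn: the current input is what followed the previous input.
        if self.previous is not None:
            self.successor[self.previous] = text
        self.previous = text
        # Reply with whatever followed this sentence before, else silence.
        return self.successor.get(text, "")

bot = LearningBot()
# "Session 1": the bot is silent while it absorbs the pattern.
for line in ["hello", "how are you", "fine thanks how are you"]:
    bot.respond(line)

# "Session 2": the bot now answers "hello" as it was taught.
bot2_reply = bot.respond("hello")
print(bot2_reply)  # -> how are you
```

The exact-match lookup explains why short, repeated sentences work best: any variation in wording produces a key the bot has never seen, and it falls back to silence.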

The sections between the lines above used to form part of the article. They all belong together. The example session makes no sense without the code it refers to. -- Derek Ross | Talk 16:12, 20 November 2005 (UTC)

Attention
In an effort to clean up Artificial intelligence, instead of completely removing a paragraph mostly concerning chatterbots, I copied it under "Chatterbots in modern AI". I noticed that it repeats some information already in this article, but there might also be some additions. Unfortunately I cannot spend time on a smooth merger right now. Sorry for the inconvenience. --moxon 09:20, 20 October 2005 (UTC)

Basic source code removed from prog (not essential to understanding of subject)
An aid to understanding can be useful even when it is not essential. The BASIC source code below demonstrates that these programs can be quite short and simple, even to people who don't understand computer programming. I am surprised that anyone should think that it was intended as a tutorial on programming. It might have some tutorial value as an example of a chatterbot (the topic of the article) but hardly as an example of programming. -- Derek Ross | Talk 16:18, 20 November 2005 (UTC)

Malicious Chatterbots section of the page
I don't have the knowledge to contribute to this section (wish I did), but it doesn't seem to have much authority to it: no cites of statistics or links to articles, so it comes across as too anecdotal to be of any use. Especially the part that says "as well as on Gay.com chatrooms". Why is that reference somehow more notable than 'bots that appear on any of a thousand other forums? dawno 05:21, 18 June 2007 (UTC)

Chatterbot vs. chatbot
Shouldn't this be at Chatbot, since that is the most common name? -- Visviva 11:09, 18 November 2006 (UTC)
 * I'm on the Robitron e-mail discussion list where Loebner Prize Contest entrants and Loebner himself talk about these things. There, both "chatbot" and "chatterbot" are used, so I don't see one term as being clearly dominant among people who make and use them. (Where do you see "chatbot" as being most common?) I have no strong preference myself, either. "Chat" implies conversation, while "chatter" is both humorous and slightly negative because it implies meaningless talk. (In my opinion most such programs really are meaningless in what they say, so it's a valid criticism.) So, either one works. Even if the lead title changes, both names should be preserved so that they redirect to the same article; how did you do that? --Kris Schnee 19:12, 18 November 2006 (UTC)
 * I also vote for chatbot, as it is the first and most commonly used of the names. Comparative use of the phrases on search engines seems to bear this out (for example: http://writerresponsetheory.org/wordpress/2006/01/15/what-is-a-chatbot-er-chatterbot/ ). Also, the phrase chatterbot and its promotion seems to have some underlying connection to Mauldin’s commercial chatbot (er, chatterbot) ventures. 66.82.9.110 00:09, 1 August 2007 (UTC)
 * I disagree, Michael Mauldin (founder of Lycos) invented the word "Chatterbot" to describe natural language programs. Chatbot doesn't seem to have a specific origin nor can I find (and this is a very quick Usenet archive search) a mention of the word 'Chatbot' before the use of the word 'Chatterbot'.  Perhaps we need a line in the top part of the page like "all too often shortened to Chatbot" 193.128.2.2 09:52, 1 August 2007 (UTC)
 * That's exactly my problem with it... the phrase chatterbot is associated with Mauldin's commercial ventures and there seems to be a consistent push to market the term chatterbot that isn't backed up by its usage. In fact, as I pointed out above, by far, most people use the term chatbot. Also, the constant inserting of Mauldin's name in Wikipedia (for example, in his many-times-recreated and then deleted-for-irrelevance Wikipedia biography, a version of which you just linked to again, and other now editor-deleted-for-self-promotion Wiki biographies of Mauldin's company and company employees) consistently followed by some variation on the terms "Founder of Lycos" or "Creator of the Verbot" on Wikipedia, is pretty embarrassing. I hope he's not involved with it. As for inserting the phrase "all too often shortened to Chatbot," I'd just like to point out that on Wikipedia it's considered bad form to edit articles that are about yourself or people you have a close personal association with, or involve a company you work or worked for or are/were associated with, even if they are the "Founder of Lycos" and "Creator of the first Verbot." By the way, is the rumour true that you get a dollar every time you say one of those phrases or get it inserted on the web? Because that would explain a lot. 66.82.9.77 10:57, 1 August 2007 (UTC)
 * Firstly, I am not Michael Mauldin (which I suspect you think I am). One of the things that worries me is that this part of the discussion appears to be becoming about the use of Wikipedia for self-promotion - something that I suspect you and I agree 100% on.  I only fixed the wikilink following your message as I clicked through it and realised it was linking to the wrong person with the same name.  I have never added any substantial content to the page (as you can see from my static IP) for exactly the Wikipedia form reasons you have stated.  I just prefer the word Chatterbot, as that was what I called it when I released my first one as DOS Freeware eleven years ago.  193.128.2.2 11:49, 1 August 2007 (UTC)

Relevance of paragraph on the philosophy of AI within article & notable names in the field
I think the paragraph about Malish should be removed. It doesn't apply specifically to chatterbots. The following paragraph (discussing Blockhead and the Chinese Room) belongs in Philosophy of artificial intelligence.--CharlesGillingham 10:14, 26 June 2007 (UTC)


 * I absolutely agree with you, it smells of self promotion. I've removed the Malish bit but will leave the other move up to someone with expertise in that field. 66.82.9.80 21:26, 31 July 2007 (UTC)


 * I've now corrected some of the language and accuracy within these paragraphs in light of the recent edits, and also removed some misconceptions. I tend to agree that the sections discussing the philosophical arguments within AI really belong in Philosophy of artificial intelligence and not here as such, as they are not merely limited to chatterbots. Also, Malish's work in the field seems to be more centred around human decision making, rather than AI specifically (referenced here in a paper by the UK MoD presented before a US Department of Defense conference). Although, I highly doubt that the notable names added here by various anonymous users (Turing, Searle, Malish, Block) would seek, or even require, any "self promotion". It seems to me, to be much more a case of over-zealous editing by enthusiastic followers of their respective works.


 * Finally, I removed the quote claiming that Jabberwacky is "capable of producing new and unique responses". Jabberwacky in fact, can only repeat sentences that have been previously input by other users. This was probably an earlier reference to "Kyle", which is actually one of very few programs that can actually achieve this (which was probably the rationale for its original inclusion here). 79.74.1.97 16:51, 1 August 2007 (UTC)


 * Just comparing the two versions of the article: good rewording. Many thanks. 193.128.2.2 08:57, 2 August 2007 (UTC)

I've just come into the Wiki business (wockham, for William of Ockham), and apologise if I've got anything wrong in respect of how to use the Wiki.

I changed the section on AI research to try to make it reflect more faithfully how things really stand in the research world, for example:

1. AIML is not a programming language, but a markup language, specifying patterns and responses, not algorithms. And ALICE can't really be considered an AI system, because (as both the other content of this section and the initial section point out), it works purely by very simple pattern-matching, with nothing that can be called "reasoning" and hardly any use even of dynamic memory.

2. Jabberwacky can't properly be described as "closer to strong AI" or even really as "tackling the problem of natural language", because it doesn't actually make any attempt to understand what's being said. It is designed to score well as an imposter - as something that can pass as intelligent - rather than even attempting any genuine grasp or processing of the information conveyed in the conversation. It can give the impression of more "intelligence" than other chatbots, sure, because it does do a rudimentary kind of learning, but again, it seems very misleading to suggest that this really has anything significant to do with natural language research.

3. The previous version suggested that it's the failure of chatbots as language simulators that has "led some software developers to focus more on ... information retrieval". But this seems odd, as though such developers were desperate to find a use for chatbots, rather than (more plausibly) trying to find a way to solve an information retrieval problem. My version maintained the point that chatbots have proved of use in information retrieval (as also in help systems), but deliberately avoided any speculation about how those researchers might have come to have such interests.

4. I made substantial changes to the paragraph that said: "A common rebuttal often used within the AI community against criticism of such approaches asks, 'How do we know that humans don't also just follow some cleverly devised rules?' (in the way that Chatterbots do). Two famous examples of this line of argument against the rationale for the basis of the Turing test are John Searle's Chinese room argument and Ned Block's Blockhead argument." Here are my reasons:

(a) The argument that chatbots are moderately convincing, and therefore perhaps humans converse in the same way, is unlikely to be put forward by anyone "within the AI community". AI researchers are aiming to achieve some sort of genuinely intelligent information processing, and they are well aware of the serious difficulty of doing so. Only chatterbot enthusiasts are likely to come up with this argument, and most of them are engaged on a quite different task (see 2 above).

(b) The argument is anyway very weak, and I don't think it's fair to attack my rebuttal of it as just expressing a personal point of view. Maybe it could be put better, but the point I was making is that even an everyday conversation - for example, about what to wear or about football - requires some logical connection between the various sentences (e.g. what shirt will go with what skirt or trousers, or how the placement of one player in the team will have implications for other positions - e.g. that the same player can't be in more than one position). Now it is just obvious that this sort of thing is typical of human conversation, and equally obvious that chatbots (at any rate in their currently usual form) cannot handle such logical connections. So if it's worth putting the argument in the article, then it's also worth putting this obvious rebuttal of it (though again, it could no doubt be reworded).

(c) The stuff about Block and Searle was inaccurate. It suggested, for example, that John Searle's Chinese Room argument was "an example of this line of argument" which it isn't at all. Searle isn't arguing that human conversation is like chatterbots; on the contrary. But nor is he suggesting that AI systems are as crude as chatterbots: if he were, then nobody would take his argument seriously. What he's saying is that even if a computer system could achieve a logically coherent conversation (i.e. even if ambitious AI researchers could succeed), that still wouldn't give genuine semantic content to what the system says. All this really belongs in the section on Philosophy of AI. The most that could be said here (and it could be added) is that chatbots (arguably) provide some evidence against the usefulness of the Turing Test. If even a pattern-match-response chatbot can fool a human into thinking that it's intelligent, then obviously the ability to fool a human isn't any good as a criterion of intelligence.

Wockham 21:30, 31 August 2007 (UTC)


 * I know you mean for the best, but you can't jump into an established article on Wikipedia and completely rewrite a large section of it without any consensus from the other long time editors. You don't have any sourcing for a lot of your claims and a lot of it is pure POV (examples: "despite the "hype" that they generate in some media" and "But the answer is clear"...). In fact, you've undercut most of the main parts of the article with sentences beginning with "But..." I know you think the article is inaccurate, or wrong in relation to "how things really stand in the research world", but academia isn't the only user of Wikipedia, or chatterbots, for that matter. There are other views on the subject that the article is trying to balance, and every opposing side is convinced the other is wrong. Also, the link to the chatterbot Elizabeth and its accompanying long, promotional-sounding paragraph is a particularly egregious act; a quick glance at the edit history would show that many much more famous and influential bots have been ruthlessly removed from the article to prevent it from bloating uncontrollably. Again, I know you didn't mean it in bad faith, but such links generally get editors reported for SPAM and blocked from further editing. We absolutely don't link to such bots here, there is a separate article for that. 72.82.48.16 22:15, 31 August 2007 (UTC)

OK, thanks very much for this. I've got rid of the "despite the hype" and "But the answer is clear" stuff, and also the link to Elizabeth (before reading your note, in fact). I would hope, however, that the references to "help systems" and the potential of chatbots in education would be worthy of consensus (even if references to examples violate protocol).

In a section called "Chatterbots in Modern AI", and starting "Most modern AI research", I should have thought it important to reflect what is actually happening in the research world; that was why I confined my edits to that.

Regarding claims that are "unsourced", I honestly can't see that what I put is any worse than what was there before. What claims do you think need sourcing, that currently aren't?

Wockham 22:31, 31 August 2007 (UTC)



Merge Artificial conversational entity
Artificial conversational entity seems to be too short and stubby to warrant its own article, and it is basically the same as a Chatterbot. Chatterbot is the more common name for this category of software.--Cerejota (talk) 18:40, 19 January 2009 (UTC)

Oppose
I would like to propose that the term 'chatbot' should be leading. Actually, for just one reason: this term is by far the most often used:

Google Trends shows us when we compare chatbot, chatterbot, embodied conversation agent and Artificial conversational entity: http://www.google.com/trends?q=chatbot%2C+chatterbot%2C+embodied+conversation+agent%2C+Artificial+conversational+entity

That is:
1. chatbot (by far on top)
2. chatterbot (about 30% usage in the US; 70% of US users use chatbot; chatterbot is often used in Poland)
3. embodied conversation agent (rarely entered in Google)
4. Artificial conversational entity (rarely entered in Google)

So from a user point of view (not for academics or professionals), and Wikipedia is created for users, the term 'chatbot' should be used.

Furthermore, the term is much shorter, it sounds better, and it is much more likely to be adopted by an even larger group of users.

Therefore I believe that the various articles should be merged in a new article named chatbot.

Discussion
Please discuss here.

Conclusion
It seems there is no objection to merge Ace into Chatterbot. It also seems there are good reasons to reverse the roles of Chatbot and Chatterbot, so Chatbot is the leading term. If no objections are posted here in the next couple of days, I shall merge the contents of Ace with the contents of Chatterbot into the leading title Chatbot, and redirect from both Chatbot and Ace.UdRuhm (talk) 14:20, 2 November 2009 (UTC)

Adding detail on chatbot "Methods"
I would like to suggest that the "Method of operation" section could usefully be revised so as to reflect its title, which perhaps should be plural: "Methods" rather than "Method" (because chatbots vary, and even a single chatbot might use a number of different techniques). In particular, it would be good to give a few examples of the sorts of methods used by the original ELIZA (e.g. replacement of near-synonyms, sets of patterns and corresponding responses, replacement of "me" with "you" etc.), so that readers who don't know much about the subject can actually get a good feel for how basic chatbots work. I'm happy to have a go at this, focusing on examples from the famous ELIZA scripts (which will, I hope, avoid controversial choices among more recent systems). But before doing so, I'd like to know how other more long-term editors feel about it.
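The ELIZA techniques listed above can be illustrated with a short Python sketch. Note the patterns below are invented examples for illustration, not taken from Weizenbaum's published DOCTOR script, and this is a hypothetical rendering rather than anyone's actual implementation: it shows keyword patterns with response templates, plus the first-/second-person swap ("me" becomes "you", and so on) applied to the captured fragment before it is echoed back.

```python
import re

# Two basic ELIZA-style techniques: pronoun "reflection" and keyword
# patterns with canned response templates.  Patterns and word lists are
# invented examples, not Weizenbaum's original script.

REFLECTIONS = {"i": "you", "me": "you", "my": "your",
               "you": "i", "your": "my", "am": "are"}

PATTERNS = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I),   "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I),        "Please tell me more."),
]

def reflect(fragment):
    """Swap first- and second-person words in a captured fragment."""
    words = fragment.lower().split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

def respond(sentence):
    # Try each pattern in order; the final catch-all always matches.
    for pattern, template in PATTERNS:
        match = pattern.match(sentence.strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I need my coffee"))  # -> Why do you need your coffee?
```

Even this toy version shows why such bots feel responsive despite understanding nothing: the reflected fragment makes the reply look tailored to the input, while the fallback pattern papers over every sentence the script does not recognise.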

I don't know anything about Jabberwacky's methods, and the Wikipedia page on Jabberwacky says very little on this too. It would be good if whoever does know about it could add something on them (i.e. on the methods that it uses, not just the purpose of the methods - which is apparently to model how humans learn language). Shouldn't a "methods" section focus on how things are done? And I presume that the justification for giving this special space to Jabberwacky is that it apparently works differently from other chatbots. It would be nice to be told how!

For the same reason, I suggest the section would be much better to focus just on chatbots' methods of operation, and avoid the philosophical stuff on "understanding", Searle, Block etc. This anyway seems inaccurate (e.g. Searle is American, not British), almost completely unsourced (e.g. the "Much debate" passage), and too sketchy to be of much use. Wouldn't it be better just to refer readers to the section on the Turing test for all that sort of thing? People will be looking at this section to try to find out about chatbot methods of operation, not philosophical discussion about whether chatbots are of philosophical significance. WikkPhil (talk) 17:13, 10 January 2010 (UTC)

Copy edit
I have removed a fair amount of quite loose description, some repetition and some overlinking. I don't believe I have taken out any hard facts. Charles Matthews (talk) 14:34, 13 January 2010 (UTC)

You've made a big improvement, in my view. But I still think it would be nice to have a section that's really on the methods used, and I don't see that the stuff on Searle and Block belongs in the "background" of an entry on chatbots. Weizenbaum wrote ELIZA long before their work, and the only philosophical significance of chatbots seems to be to show how easily humans can be fooled by something that cannot - by any stretch of the imagination - be called genuinely intelligent (i.e. they cast doubt on the value of the Turing Test). Weizenbaum's paper said more or less that on p. 42: "This is a striking form of Turing's test. ... ELIZA shows, if nothing else, how easy it is to create and maintain the illusion of understanding". (That is the only mention of Turing or any other philosophy publication in his entire paper, so its overall effect is to downplay any philosophical significance at all, which I reckon is dead right.) Besides, Searle's "Chinese Room" doesn't hypothesise a chatbot, but a full-blown NLP algorithm that can generate a fully appropriate answer for any question put to it. I'm not sure that Searle thinks such a thing is possible, either: he just insists that even if it were possible, it would lack genuine intentionality. That really doesn't have much to do with chatbots at all, does it? WikkPhil (talk) 22:36, 13 January 2010 (UTC)


 * Go ahead. I just removed some duplication and other text which I thought didn't add much. Charles Matthews (talk) 23:26, 13 January 2010 (UTC)

Thanks, Charles. I've now redone the "Background" section accordingly, in what I hope will be found a useful way, with relevant historical stuff about ELIZA and Weizenbaum, and leading up to more recent uses. I don't think there's anything controversial here, and I trust people will agree that ELIZA deserves a particular focus (PARRY was also influential, but much more complex and difficult to imitate, and I don't think its "script" was ever published openly in the way that ELIZA's was).

I plan next - when I get the time - to add back a section on "Methods of Operation" as discussed above, again focusing on the techniques that ELIZA pioneered. If desired, I could say something about ALICE and AIML too here, because that seems to be the most used recent system. It would be good to know what other editors think of this idea (because I appreciate that it can be controversial mentioning some systems rather than others). Thanks again, Phil WikkPhil (talk) 15:29, 16 January 2010 (UTC)

Merge Dialog system to Chatterbot
The scopes of these articles appear to be the same. If merging, it seems Chatterbot is the preferred target, as it has much more "what-links-here"s and more than 10x more traffic (according to http://stats.grok.se/). Mikael Häggström (talk) 07:17, 12 March 2011 (UTC)
 * I think I've found a clear distinction, and will point this out in each article. Mikael Häggström (talk) 05:59, 15 March 2011 (UTC)

Beneficial Chat robots
Are there beneficial chat robots? I recently replied to what might be a Yahoo Answers suicide bait question. Then I realized that a chat robot that just sifted YA as well as blogs to find what appeared to be suicide preferences could say things like "wow, that totally makes me think of those wacky suicide prevention things like 1800chillout" or "ha! I detect romance gone awry. I just placed a personals ad for you; if you live until monday you can see who responded" or "You seem pretty dedicated to things making sense. Have you considered going to irs.gov to file early so your relations can get all that refundy goodness they'd otherwise miss out on?" These might be phrased more kindly as well as effectively to prevent suicide. As lifesaving software goes, you might prevent an actual death for every few hundred or thousand autoposts, and you might prevent many hundreds or thousands of emotively crummy suicide attempts as well. —Preceding unsigned comment added by 163.41.136.51 (talk) 18:38, 23 May 2011 (UTC)
 * And you might not. When writing beneficial chatbots, I think that I would try a chatbot aimed at helping people with common computer problems before aiming for the heady heights of suicide prevention. It's so easy for people to get it wrong when dealing with potential suicides. And I think that we can be pretty sure that a bot at the current levels of sophistication would be more likely to get it wrong than right. -- Derek Ross | Talk 19:15, 23 May 2011 (UTC)

Requested move 5 January 2017

 * The following is a closed discussion of a requested move. Please do not modify it. Subsequent comments should be made in a new section on the talk page. Editors desiring to contest the closing decision should consider a move review. No further edits should be made to this section. 

The result of the move request was: moved. (non-admin closure) JudgeRM   (talk to me)  15:36, 14 January 2017 (UTC)

Chatterbot → Chatbot – chatterbots are overwhelmingly referred to as chatbots nowadays. A simple Google search will reveal this: 2.8m hits for chatbot (meaning this), 0.3m hits for chatterbot. Keizers (talk) 17:59, 5 January 2017 (UTC)


 * Support per nom. SnowFire (talk) 00:30, 7 January 2017 (UTC)
 * Support per nom; move current Chatbot page to Chatbot (disambiguation). bd2412  T 19:18, 11 January 2017 (UTC)


 * The above discussion is preserved as an archive of a requested move. Please do not modify it. Subsequent comments should be made in a new section on this talk page or in a move review. No further edits should be made to this section.

Proposed merge with IM bot
These are not really different concepts, but just different names for the same thing. Besides, both references for the "IM bot" article are about "chatbots". Amir E. Aharoni (talk) 17:12, 26 June 2017 (UTC)
 * Merged --Amir E. Aharoni (talk) 16:56, 4 July 2017 (UTC)

External links modified
Hello fellow Wikipedians,

I have just modified one external link on Chatbot. Please take a moment to review my edit. If you have any questions, or need the bot to ignore the links, or the page altogether, please visit this simple FaQ for additional information. I made the following changes:
 * Added archive https://web.archive.org/web/20080221093646/http://www.computerhistory.org/internet_history/internet_history_70s.shtml to http://www.computerhistory.org/internet_history/internet_history_70s.shtml

When you have finished reviewing my changes, you may follow the instructions on the template below to fix any issues with the URLs.

Cheers.— InternetArchiveBot  (Report bug) 01:24, 7 October 2017 (UTC)

Names
A chatbot (also known as a talkbots, chatterbot, Bot, IM bot, interactive agent, or Artificial Conversational Entity) Can we get some sources for the names or remove some of them? It's already a handful and I'm not sure they're all equally notable. I especially think "Bot" is far too universal to restrict to chatbots. Prinsgezinde (talk) 10:30, 11 August 2018 (UTC)

"AIMBot" listed at Redirects for discussion
A discussion is taking place to address the redirect AIMBot. The discussion will occur at Redirects for discussion/Log/2021 February 23 until a consensus is reached, and readers of this page are welcome to contribute to the discussion. 𝟙𝟤𝟯𝟺𝐪𝑤𝒆𝓇𝟷𝟮𝟥𝟜𝓺𝔴𝕖𝖗𝟰 (𝗍𝗮𝘭𝙠) 12:50, 23 February 2021 (UTC)

Proposed merge of Draft:Conversational AI into Chatbot
Draft contains additional information that appears to be applicable. Robert McClenon (talk) 21:35, 29 April 2021 (UTC)

Wiki Education Foundation-supported course assignment
This article is or was the subject of a Wiki Education Foundation-supported course assignment. Further details are available on the course page. Student editor(s): Ayspiff. Peer reviewers: Ayspiff.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 17:16, 16 January 2022 (UTC)

Wiki Education Foundation-supported course assignment
This article was the subject of a Wiki Education Foundation-supported course assignment, between 26 August 2019 and 11 December 2019. Further details are available on the course page. Student editor(s): Blakelevinson.

Above undated message substituted from Template:Dashboard.wikiedu.org assignment by PrimeBOT (talk) 18:52, 17 January 2022 (UTC)

Social implications of chatbots
Chatbots' reliance on a limited number of key words and stock responses, together with the increasing use of chatbots by business to replace live help, poses increasing problems for customers consigned to dealing with chatbots. The ultimate success of chatbot use by business will depend on the ability of business to program customers to communicate with chatbots and to limit their inquiries to topics, content and language that chatbots can "understand." 134.56.232.206 (talk) 17:29, 4 February 2023 (UTC)

Looking for reliable ELIZAs
No objections have been expressed to the suggestions above, so I hope to go ahead as planned within the next week. Does anyone know of reliable ELIZA implementations in a form that readers can inspect for themselves? Nearly all of the implementations listed on the ELIZA page are very different from Weizenbaum's original, and the one that genuinely follows his algorithm is a Java program which might make it hard to use for many people. Has anyone tried implementing a genuine ELIZA in AIML, for example, or any other simple scripting language? (I'm not sure whether AIML can handle all the processes that Weizenbaum uses, but presume it can.) Thanks, Phil WikkPhil (talk) 14:14, 30 January 2010 (UTC)

Sorry, "the next week" was very optimistic! I've still not found any reliable ELIZAs in the form of an easily-comprehensible script (e.g. AIML) that will enable examples to be given in a usable and testable format. Any suggestions for how to move forward on this? If no luck, I'll just have to explain things informally. WikkPhil (talk) 18:05, 20 August 2010 (UTC)


 * Okay, nearly thirteen years later, sorry! But better late than never. I can now report that Weizenbaum's original Eliza code has been found.
 * See Shrager, Jeff; Berry, David M.; Hay, Anthony; Millican, Peter (2022). "Finding ELIZA - Rediscovering Weizenbaum's Source Code, Comments and Faksimiles". In Baranovska, Marianna; Höltgen, Stefan (eds.). Hello, I'm Eliza: Fünfzig Jahre Gespräche mit Computern (2nd ed.). Berlin: Projekt Verlag. pp. 247–248.
 * The ELIZA article has more details. -- Derek Ross | Talk 02:05, 18 February 2023 (UTC)
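For readers following this thread, the pattern-matching technique the ELIZA discussion refers to can be illustrated with a short sketch. This is not Weizenbaum's original code or script, and the rules and names below (RULES, REFLECTIONS) are invented for illustration; it only shows the general keyword-decomposition and reassembly idea:

```python
import re

# Pronoun swaps so a captured fragment like "my dog" is echoed back
# as "your dog". Illustrative subset only, not Weizenbaum's table.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# Hypothetical script: each rule pairs a decomposition pattern with a
# reassembly template that re-uses the captured fragment.
RULES = [
    (re.compile(r".*\bi need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r".*\bmy (.*)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment):
    """Swap first- and second-person words in the captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    """Return the reply from the first matching rule, else a default."""
    for pattern, template in RULES:
        m = pattern.match(sentence)
        if m:
            return template.format(reflect(m.group(1)))
    return "Please go on."  # stock response when no keyword matches
```

For example, respond("I need my mother") matches the first rule, reflects the fragment, and yields "Why do you need your mother?". A genuine ELIZA additionally ranks keywords and keeps a memory stack, which this sketch omits.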

Semi-protected edit request on 9 June 2023 - Update and improve information on chatbots and their impact in the e-commerce industry
Change:

A 2017 study showed that only 4% of companies were using chatbots. However, according to a 2016 study, around 80% of businesses expressed their intention to implement chatbots by the year 2020.

To:

On May 19, 2022, Meta announced the global public availability of its cloud-based platform, enabling businesses of all sizes worldwide to connect directly with WhatsApp through the WhatsApp Cloud API. Chatbots on WhatsApp have emerged as a prominent reference channel for consumers, establishing themselves as an integral part of the omnichannel e-commerce experience.

Cesarcas1994 (talk) 04:41, 9 June 2023 (UTC)
 * Not done: Why? Specifically, why remove the perfectly good content that's already there? Actualcpscm (talk) 11:01, 9 June 2023 (UTC)
 * Of course, I'll share my reasons for doing it:
 * 1- The content proposed for replacement cites a 2017 study and a 2016 report projecting to 2020. The aim is to update the content, since we are already in the year 2023.
 * 2- In general, the entire paragraph under the heading "Messaging apps" is built on references from 2016–2017. The market has changed a lot since then, and chatbots are a subject of continuous change: the number of chatbots on Facebook Messenger has stagnated since 2019, and chatbots on other platforms have become much more prominent.
 * 3- The May 2022 announcement of the WhatsApp Cloud API is a turning point, since it allowed any company of any size to make use of the WhatsApp Cloud API, the leading platform in instant messaging. It marks the tipping point that allows businesses around the world to access WhatsApp in a scalable way and establish chatbots. Prior to this date only a small number of companies, such as Twilio, Zendesk and Infobip, had the exclusive privilege of connecting to the WhatsApp API.
 * Thinking constructively, you could keep the content from 2016 and 2017 while also including more up-to-date content. Cesarcas1994 (talk) 22:49, 9 June 2023 (UTC)

Study Forrester
I suppose we can remove this; it is probably not an important study, and outdated as well: "A study by Forrester (June 2017) predicted that 25% of all jobs would be impacted by AI technologies by 2019." Karlaz1 (talk) 16:04, 14 June 2023 (UTC)

Semi-protected edit request on 4 July 2023
Later in 2022, a chatbot by the name of ChatGPT launched: https://en.wikipedia.org/wiki/ChatGPT Totallynotmwa (talk) 20:29, 4 July 2023 (UTC)
 * Not done: it's not clear what changes you want to be made. Please mention the specific changes in a "change X to Y" format and provide a reliable source if appropriate. PlanetJuice (talk • contribs) 20:33, 4 July 2023 (UTC)

Semi-protected edit request on 14 August 2023
Suggestion for "Further reading": König, Wolfgang (2023): Why a Chatbot Didactics Model? The Grey-Box-Model of Chatbot-Didactics http://dx.doi.org/10.13140/RG.2.2.32217.29280 2003:D1:6737:4200:D406:D001:9A9D:A334 (talk) 06:14, 14 August 2023 (UTC)

Not done: Self-published monograph. See also: Further reading. Xan747 ✈️ 🧑‍✈️

Semi-protected edit request on 22 August 2023
I have found one dead link in this article; I want to fix the link by adding well-written content on the same issue. The dead link is in reference no. 18. Mindscrafter (talk) 14:15, 22 August 2023 (UTC)
 * Not done: it's not clear what changes you want to be made. Please mention the specific changes in a "change X to Y" format and provide a reliable source if appropriate.  — Paper9oll  (🔔 • 📝)  14:35, 22 August 2023 (UTC)