Wikipedia:Articles for deletion/SapientX


 * The following discussion is an archived debate of the proposed deletion of the article below. Please do not modify it. Subsequent comments should be made on the appropriate discussion page (such as the article's talk page or in a deletion review).  No further edits should be made to this page.

The result was delete. ✗ plicit  00:56, 17 February 2024 (UTC)

SapientX


Refbombed article about an AI startup, packed with references that don't mention the subject at all, mention it only in passing, or are PR. A few references discuss the subject in detail, so it might be possible to stubify and keep this, but it seems marginal, so I'm bringing it here for consensus. Mccapra (talk) 03:13, 19 January 2024 (UTC)
 * Note: This discussion has been included in the deletion sorting lists for the following topics: Companies, Computing, Internet, and California. Mccapra (talk) 03:13, 19 January 2024 (UTC)
 *  Relisted to generate a more thorough discussion and clearer consensus. Please add new comments below this notice. Thanks, ✗ plicit  09:02, 26 January 2024 (UTC)


 * Weak keep. Notability is marginal, and the article's promotional style is annoying, but its product Sage does seem to have received significant coverage, as sourced here. Llajwa (talk) 20:07, 28 January 2024 (UTC)
 * "Coverage" is not a criterion for establishing notability.  HighKing++ 20:49, 5 February 2024 (UTC)

Please add new comments below this notice. Thanks,  Sandstein   20:25, 3 February 2024 (UTC)
 * Keep - There is enough coverage on this one, such as VentureBeat, Santa Cruz Works, InvestorPlace, and bnext.com. Royal88888 (talk) 00:18, 31 January 2024 (UTC)
 * "Coverage" is not a criterion for establishing notability.  HighKing++ 20:49, 5 February 2024 (UTC)
 * I'm a co-founder of SapientX. To date, we have had 31 press articles and TV interviews, published 5 white papers, and issued 14 press releases, and we will be featured in Dominique Wu's soon-to-be-released book on XR. I realize that most of the above is not highly valued by Wikipedia standards.
 * I would also like to share with you important historical milestones that are not well supported by press:
 * 1. SapientX's conversational AI work began in 2003 under ARDA's NIMD research program. This work was done by parent company Planet 9 Studios and the IP was transferred to SapientX in 2016. Under this funding, we developed Sage, the first commercial conversational 3D character. (IBM Watson also began in the NIMD program.) This can all be documented with valid footnotes.
 * 2. Bruce Wilcox joined our team in 2008 and developed an upgraded AI system later to be called ChatScript and released into open source. ChatScript is the first generative AI conversational system that I am aware of. The press falsely portrays ChatGPT as the first generative AI system. ChatScript was used in our RayGun navigation platform. Customers included BMW, Clarion, Intel, Nvidia and Magellan GPS.
 * 3. In 2016, we developed Mitsubishi Mia, the first conversational 3D character for automotive use.
 * 4. In 2021, we publicly demonstrated the first life-size conversational 3D character in a prototype for Lowe's.
 * 5. In 2022, we delivered Chief, a life-size museum docent, to the Liberty Station retail complex in San Diego.
 * The point that I would like to make is that we have consistently been leaders in conversational AI, and these achievements should be captured in Wikipedia. I can provide documentation of each fact asserted above. I acknowledge that these same facts are not fully supported in the commercial press. So I ask: are press citations more valuable than actual historical achievement? I will be happy to add these facts, along with citations, to the SapientX article. DavidColleen (talk) 10:46, 9 February 2024 (UTC)
 * Hi David, we require (a) in-depth coverage containing (b) analysis/opinion/investigation/fact checking that is (c) clearly attributable to a source unaffiliated with the topic company. So, on the basis that the TV interviews are essentially somebody from the company speaking, the white papers are published by the company, the press releases are published by the company, and the book isn't published yet so we've no idea of its content, that leaves us with the 31 press articles. An analysis of those articles to date shows that they regurgitate the information provided by the company. They fail (a), (b) and (c) of the test above.  HighKing++ 15:09, 9 February 2024 (UTC)
 * Thanks HighKing. Yes, I already stated that I understood your evaluation of the present footnotes. I can introduce the above facts to the article supported by new source documents and references. I'm not versed in your rules. Shall I directly add the above facts? DavidColleen (talk) 17:38, 9 February 2024 (UTC)
 *  Relisted to generate a more thorough discussion and clearer consensus.

Relisting comment: Final relist to establish consensus. Please add new comments below this notice. Thanks, The Herald (Benison) (talk) 00:53, 11 February 2024 (UTC)
 * Comment: While it's certainly noteworthy (in the colloquial sense) that someone is developing NLP applications using symbolic AI in the year 2024, I am not convinced it's notable in the Wikipedian sense. Most of the sourcing is passing mentions and I don't see a whole lot of significant in-depth coverage. jp×g🗯️ 07:45, 4 February 2024 (UTC)
 * Of course symbolic AI has its place in 2024. Companies approaching unicorn status, like Kore.AI and JustAnswer, use the same underlying symbolic NLP (ChatScript). It's as effective as machine learning for intent detection. Earl Sacerdoti reviewed SapientX's NL technology for a fundraising site and said: "the symbolic-processing approach uses programs rather than statistics to interpret inputs. This makes the systems less robust than the statistically-based ones, but completely reliable. This is important for tasks like controlling automotive subsystems, where a language-based control system performing the incorrect task is distracting if not dangerous." And we all know the unreliability of LLMs. SapientX blends NLP approaches as appropriate for the task. (Bruce Wilcox, SapientX). 90.214.57.60 (talk) 10:00, 5 February 2024 (UTC)
 * This is my first Wikipedia post. I'm a co-founder of SapientX. While it's currently fashionable to use machine learning, and more recently large language models, for machine conversation, both fail to offer the accuracy and reliability needed for serious commercial applications. For instance, OpenAI, in their recent white paper, claims only 78% conversational accuracy for GPT-4 when asking the same question 5 times. The core of SapientX's conversation system is ChatScript (symbolic reasoning), which yields 99% accuracy in our internal testing. ChatScript was developed by my co-founder Bruce Wilcox. Unfortunately, there is no standard for testing or third-party test results. BTW, we also offer a version of our software that combines ChatScript (for accuracy) and GPT-4 (for its ability to riff).
 * JPxG suggests that press coverage is the measuring stick for noteworthiness. I disagree. I will relay to you that TomTom conducted testing of what they felt to be the three strongest conversational AI systems in the market: Cerence (formerly Nuance), SoundHound, and SapientX. They reported to me that SapientX outperformed the others. Additionally, Gartner recently ranked Kore AI as the top conversational AI system. Kore uses ChatScript. Gartner did not include SapientX in the evaluation as we did not meet their revenue level. DavidColleen (talk) 10:03, 9 February 2024 (UTC)
 * Delete: I can't find any non-trade publication sources and I'm not seeing significant coverage of the company beyond the Trump chatbot review. voorts (talk/contributions) 00:30, 5 February 2024 (UTC)
 * Does raising $2,155,753.95 by crowdfunding 2,798 people make it more notable? https://www.startengine.com/offering/SAPIENTX 90.214.57.60 (talk) 13:38, 5 February 2024 (UTC)
 * No. voorts (talk/contributions) 14:23, 5 February 2024 (UTC)
 * Delete This is a company, therefore GNG/WP:NCORP requires at least two deep or significant sources, with each source containing "Independent Content" showing in-depth information *on the company*. "Independent content", in order to count towards establishing notability, must include original and independent opinion, analysis, investigation, and fact checking that are clearly attributable to a source unaffiliated with the subject. In plain English, this means that references cannot rely *only* on information provided by the company - such as articles that rely entirely on quotations, press releases, announcements, interviews, website information, etc. - even when slightly modified. If it isn't *clearly* showing independent content, then it fails ORGIND. Here, the references are simply regurgitating company announcements and have no "Independent Content" in the form of independent analysis/fact checking/opinion/etc. As noted above by a co-founder, there are very few sources, and this may be WP:TOOSOON. <b style="font-family: Courier; color: darkgreen;"> HighKing</b>++ 15:10, 9 February 2024 (UTC)
 * Thanks HighKing. Your comments help me to understand your evaluation criteria. Using press articles to validate facts works well for topics such as baseball, but for deeply technical topics, such as conversational AI, I don't know a single person in the press versed enough in the topic to write a solid article without the input of someone like myself or Bruce Wilcox. Instead, they write about what is fashionable, such as LLMs this week. There is even large institutional bias, which I have encountered, at the university level. One head of an AI department at a Finnish university said to me that "if it's not machine learning, it's not AI". This, of course, is silly.
 * Nonetheless, I believe that I can support most of the new facts, listed above, with multiple documents. Is it okay to proceed with this? DavidColleen (talk) 17:53, 9 February 2024 (UTC)
 * Hi, based on what you've said above, you may wish to consider the following. There are different standards required for supporting "facts" within the article to those we use to establish whether a topic is notable. So, for example, "facts" can be supported by *any* source which meets our criteria as a *reliable* source as per WP:RS. Sources that may be used to establish notability need to meet a different standard. Also, different topic categories may have their own guidelines which provide better explanations on what sources will meet the criteria. For companies, we use GNG/WP:NCORP and I've summarised the standards for sources which may be used to establish notability above. Be aware, this current process of AfD is only concerned with notability, not with the facts. Adding more sources to support some of the factual content may not lead to assisting in establishing notability. As you've acknowledged above, establishing notability for specialised companies is difficult because articles in newspapers are often written by journalists who may not have sufficient knowledge of the topic company. Similarly, your comment about the head of the AI department appears (to me) to be directed at the technical area of "machine learning vs AI", not at this specific company. Many years ago somebody summarised our requirements as "If the company is notable, somebody unconnected will have written something decent about it" and that still holds true albeit we've had to clarify what is meant by "somebody unconnected" and "written something decent". <b style="font-family: Courier; color: darkgreen;"> HighKing</b>++ 14:20, 13 February 2024 (UTC)
 *  Relisted to generate a more thorough discussion and clearer consensus.


 * Delete. Wikipedia is not for promotion. BottleOfChocolateMilk (talk) 01:42, 11 February 2024 (UTC)
 * Delete - No evidence of notability — MaxnaCarta  ( 💬 • 📝 ) 00:08, 14 February 2024 (UTC)


 * The above discussion is preserved as an archive of the debate. <b style="color:red">Please do not modify it.</b> Subsequent comments should be made on the appropriate discussion page (such as the article's talk page or in a deletion review). No further edits should be made to this page.