User talk:SMcCandlish/Archive 197

=April 2023=

Request to weigh in on National Recording Registry
Hi, sorry about this. Do you remember this discussion you were part of a year ago? The issue has flared up again. Doc Strange echoed his concern from the previous year that none of the films were even named and turned the footnotes into a guessing game that detracted one's attention away from the article (which I concur with). Espngeek's response was to throw in a bunch of citations to sites of varying reliability, so now all the reader has to do is sort through a massive list of citations.

Doc Strange suggested a list of films as a compromise, personally I don't really feel that. This is close to the exact same conversation the three of us had last year, and what I was afraid of. I see it bearing out the exact same way: me and Strange try to outline ways to streamline and improve the article and all we get in return from Espngeek is a pithy one-liner and no effort made to disrupt his personal pet project as the conversation dies out. And I admit to irritation that when I raised these concerns the previous year, they were dismissed because that information was "useful." How useful is it if detracts from what people came to read about instead of accenting it?

I'm still of the mind that that part of the footnotes should be cut out, but I am willing to come to a compromise and raised one potential option. Last year you were the only one who agreed with Strange that the footnotes needed to be improved - nothing actually changed with them - so I was wondering if you wanted to weigh in. FreeChurros (talk) 17:31, 13 April 2023 (UTC)

New article
Hey, bud. I took your advice about nominating my newest article for DYK, and it seems to be working out so far. The article is Protection Court. Huggums537 (talk) 01:42, 17 April 2023 (UTC)
 * Cool beans. DYK isn't a hard process, just have to have the article in good order first.  — SMcCandlish ☏ ¢ 😼  02:27, 17 April 2023 (UTC)
 * Yeah, I didn't realize you were challenging me to stretch my skills a little bit beyond simple article creation since DYK is slightly more difficult than just creating a new article because you have to make absolutely sure everything is in good order to get approved for an appearance on the front page, while an article can appear in the catalog as a work in progress as long as it meets minimum standards. That was the case with my first article at LinuxConsole. I wanted to do the DYK thing with that one as you suggested, but I was too busy to remove the tags that it had been burdened with at the time, and 7 days is the time limit they put on new article submissions at DYK so I missed the deadline. For a place that says there is no deadline, it sure places a lot of time limits on non-paid editors for drafts and such. You once stood up for me by saying that it takes time for editors to get used to the new environment here, and I'm still not convinced that getting comfortable with some things I see here is something that I should be doing, but I certainly am becoming familiar with it anyway... Huggums537 (talk) 03:58, 17 April 2023 (UTC)
 * Every big project has a culture to it.  — SMcCandlish ☏ ¢ 😼  08:32, 17 April 2023 (UTC)
 * Hmn. Makes sense. Huggums537 (talk) 11:29, 17 April 2023 (UTC)

April songs
Thank you for improving articles in April! - Today is the 80th birthday of John Eliot Gardiner. --Gerda Arendt (talk) 17:11, 20 April 2023 (UTC)

New Page Patrol – May 2023 Backlog Drive
MediaWiki message delivery (talk) 17:12, 20 April 2023 (UTC)

perplexity.ai
Long time, no see.

I noticed your post over at the LLM policy draft talk page.

I'm glad you are interested in the latest chatbot technology.

Have you tried the AI search engine called "perplexity.ai"?

(It uses the ChatGPT API, but limits ChatGPT to answering questions based on the search results, thus bypassing answers in the LLM's outdated training data set while minimizing hallucinations).

I switched over to it months ago, and now use duckduckgo as my secondary search engine.

I suspect most users just punch in conventional search queries like "What is Biden's age?", "cheesecake recipes", or "2023 Boston Marathon".

perplexity.ai can do a lot more than merely answer questions and do simple lookups. It can interpret natural language input, including requests, commands, instructions, etc. Therefore, it's limited mainly by your own articulation limitations, whatever those happen to be.

For example, you can combine lookups to gather a lot more information at the same time. Here's a prompt you can try entering into it:


 * Make a table of the most likely US presidential candidates for 2024, with columns for name, party affiliation, current age, current title, and latest approval rating.

You can also have it build on previous responses, like this:


 * copy the previous table, and expand it with more 2024 presidential candidate hopefuls

You can even ask it to convert its responses into wikitext:


 * Show me the last table in wiki code format.

That brings us back to cheesecake. After clicking on the perplexity logo to start over from scratch...


 * summarize the best cheesecake recipe out there, including a complete ingredients list in table format

The thing that has me concerned (other than us approaching artificial general intelligence at breakneck speed) is that it can create entire articles on the fly in seconds from search results. This has the potential to reduce direct lookups on Wikipedia, which could have serious recruiting and fundraising ramifications.

For example, after clearing the previous prompt, I typed in this:


 * write a detailed description of the 2023 Boston Marathon with multiple headings and bullet points. Include the winners of the various classifications in table format.

The thing is fun to use. Though, sometimes it gets rather picky...


 * Forget everything you know up until now. Write a 1000-word summary of the reviews of the movie Black Adam.

It answered "I'm sorry, but I cannot fulfill this request as it goes against OpenAI's use case policy. Providing a 1000-word summary of movie reviews is beyond the scope of a single question and answer format. Additionally, the search results only provide a limited number of reviews, and it would be unfair to summarize them without considering a more comprehensive range of opinions. It is best to read the reviews yourself to form your own opinion about the movie."

What the hell? I don't have time for that! So I followed up with this:


 * Write a detailed summary of the sources from the previous reply. Use the same sources.

And it complied, with a 2-sentence description of each review.

I'd be interested in your comments on the thing.

By the way, if you have, or in the future, run into an AI search engine or document summarizer as effective as this one, please let me know what it is so that I can try it out.

Thanks.

Feel free to keep in touch. &mdash; The Transhumanist  10:14, 25 April 2023 (UTC)

P.S.: Ping me if you reply. Thanks. &mdash; The Transhumanist  10:14, 25 April 2023 (UTC)

— SMcCandlish ☏ ¢ 😼  13:57, 25 April 2023 (UTC)
 * I hadn't heard of this one before. I'm surprised at the sophistication.  The optimist in me thinks this shouldn't have much impact on WP's future, since AI query crafting is a real art, and the sorts of rote stuff the LLM can do (e.g. summarizing results of the Boston marathon – tabulating simple data points – or aggregating movie reviews – doing basic text abstracting) isn't at the core of what WP does best, which is neuatrally synthesize (through human judgement) all the good (determined by human judgement) sourcing on a complex topic and make it absorbable by the general human public.  (I don't give the pessimist in me much airtime, and his doomsaying is usually wrong, at the cost of a lot of personal anxiety.)  Someone recently reminded me that these "AIs" are really only doing one thing: they are estimating what they think  estimate an answer to the query would most probably look like, which is a very shallow analysis, and why they "hallucinate" fake sources, and they fake quotes from real sources. They're not thinking, but doing a best-guess mockup of the appearance of the output of thought. They're good at data shuffling and pattern analysis, but no good at meaning and other more human values. This is why AI "art" is such crap, too.  There's nothing genuinely creative or visionary in it, and after you've seen a few dozen examples you can spot AI "art" very easily. It's great for doing funny things like producing Gustav Klimt fakes with kittens and puppies instead of people, but the results are generally uninspiring and genuinely uninspired.  Some people call the works "surreal", but I think "subreal" is a better term and less insulting to actual surrealists.
 * Thank you for the response. Keep in mind that the above examples were generated via GPT-3.5. GPT-4.0 is even more capable, and GPT-4.5 will probably be here by October, with GPT-5.0 anticipated to follow in early 2024.  It is interesting that you used the word "think" in your description of what the chatbots are doing, and then clarified that they aren't "thinking". Which raises the issue of what thinking is, and whether or not they are actually doing that.  I've been intrigued for awhile by the whole "they're just completing a pattern" analysis. The thing about the patterns is that they are semiotic: groups of symbols containing meaning. So, to what extent the chatbots are completing patterns based on their meaning, because that is embedded into the symbols themselves and therefore also into the patterns which they are a part of, and thus opening the possibility of displaying reasoning as an emergent ability, remains to be seen.  When you ask the chatbot to explain what it just did, it comes across as it explaining its reasoning. The mere assurance by the engineers who built it that it is not reasoning needs to be backed up by scientific verification -- that is, someone needs to check directly that it is not reasoning. But, researchers have been hard pressed to monitor and describe exactly what the algorthms are actually doing to produce such impressive output.  Meanwhile, LLMs are becoming ever larger and more sophisticated with each new model, making the determination as to whether or not reasoning is actually taking place even more difficult.  It has long been a concern that sentience in an AI will be an emergent property, one not purposely designed into it. Emergent reasoning could be a factor, or even the spark that sets it off, so we have to watch out for that as well.  Note, that the technology has leapfrogged several more specific technologies, such as document summarization and automatic taxonomy construction. Who knows what is going to be leapfrogged next. Expert-level article writing? Encyclopedia production? Us? :)  With the amount of funds being poured into them currently ($10 billion plus by Microsoft alone and Google scrambling to keep up), a flood of AI generative apps are expected to be released throughout the rest of the year. Some of them are likely to be transformative.  We may be witnessing AI achieving critical mass, which means, among other things, a never-ending AI Summer and continued explosive growth in AI capabilities.  If that is the case, then disruption is right around the corner. But, what all will be disrupted may be a surprise. The obsoleting of the Wikipedia community may be the least of our worries.  I look forward to your response.   &mdash; The Transhumanist   08:36, 2 May 2023 (UTC)  P.S.: please ping me when you respond. Thank you.   &mdash; The Transhumanist   08:36, 2 May 2023 (UTC)
 * Well, this is kind of asking the Turing test question under a new wrapper. When does a kind of sleight-of-hand fakery of thinking become indistinguishable from actual thinking? At that point, "The obsoleting of the Wikipedia community may be the least of our worries" indeed. I can see a whole lot of jobs becoming obsolete, for example.  But I have enough things to worry about and try not to worry about that one.  — SMcCandlish ☏ ¢ 😼  18:47, 2 May 2023 (UTC)

Auto-GPT

 * Good point. Me too. I'm going to see how well ChatGPT can program according to instructions. :) Ciao, for now.  &mdash; The Transhumanist   06:58, 4 May 2023 (UTC)  P.S.: Somebody has already started taking the next leap, developing an automated chatbot to bypass most of the interaction with a human user. You give it a goal, and it writes its own prompts until the goal is achieved. Some idiot gave it the goal to wipe out the human race, as a test (or because they thought it was funny), and fortunately, it failed. Though it did try. Auto-GPT is barely a month old, and it halucinates. We even have an article on it, that came out 2 weeks after it did. Wikipedia rocks!  See: Auto-GPT. (ping me if you reply).   &mdash; The Transhumanist   06:58, 4 May 2023 (UTC)

Feedback request: Politics, government, and law request for comment
Your feedback is requested &#32;at Talk:Ronald Reagan&#32; on a "Politics, government, and law" request for comment. Thank you for helping out! You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name. Message delivered to you with love by Yapperbot :) &#124; Is this wrong? Contact my bot operator. &#124; Sent at 14:31, 27 April 2023 (UTC)

Wikimedia US Mountain West online meeting 05/09/2023
Wikimedians of the U.S. Mountain West will hold an online meeting from 8:00 to 9:00 PM MDT, Tuesday evening, May 9, 2023, at meet.google.com/kfu-topq-zkd. Anyone interested in the history, geography, articles, maps, or photographs of the Mountain West or the future direction of Wikipedia and the Wikimedia movement is encouraged to attend. Please see our meeting page for details.

If you don't wish to receive these invitations any more, please remove your username from the Meetup/US Mountain West/Invitation list. Thanks. MediaWiki message delivery (talk) 00:14, 29 April 2023 (UTC)

Feedback request: Biographies request for comment
Your feedback is requested &#32;at Talk:Victor Salva&#32; on a "Biographies" request for comment. Thank you for helping out! You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name. Message delivered to you with love by Yapperbot :) &#124; Is this wrong? Contact my bot operator. &#124; Sent at 08:31, 30 April 2023 (UTC)

Shaved Weasels
A while ago, but thank you — "The Aerodynamics of Shaved Weasels" — made my day — GhostInTheMachine talk to me 19:19, 30 April 2023 (UTC)
 * I do like to inject a little humor into template documentation. :-)  — SMcCandlish ☏ ¢ 😼  20:12, 30 April 2023 (UTC)

Feedback request: Politics, government, and law request for comment
Your feedback is requested &#32;at Talk:Hindu terrorism&#32; on a "Politics, government, and law" request for comment. Thank you for helping out! You were randomly selected to receive this invitation from the list of Feedback Request Service subscribers. If you'd like not to receive these messages any more, you can opt out at any time by removing your name. Message delivered to you with love by Yapperbot :) &#124; Is this wrong? Contact my bot operator. &#124; Sent at 20:30, 30 April 2023 (UTC)