Wikipedia:Wikipedia Signpost/2022-08-01/From the editors


 * Here is the deal: it's pretty good at what it does. – J

There are a few terms that have been thrown around a lot lately: AI, DL, NN, ML, NLP, and more. While a precise definition of all these terms would take multiple paragraphs, the thing they have in common is that a computer is doing some stuff.

For anyone who is not familiar with this alphabet soup, I've written a fairly comprehensive overview of the field's origins and history, as well as an explanation of the technologies involved, here, and ask forgiveness for starting my explanation of a 2019 software release in 1951.

In recent years, the field of machine learning has advanced at a pace which is, depending on who you ask, somewhere between "astounding", "terrifying", "overhyped" and "revolutionary". For example, GPT (2018) was a mildly interesting research tool, GPT-2 (2019) could write human-level text but was barely capable of staying on topic for more than a couple paragraphs, and GPT-3 (2020–22) wrote this month's arbitration report (a full explanation of what I did, how I did it, and responses to the most obvious questions can be found below).

The generative pre-trained transformers (this is what "GPT" stands for) are a family of large language models developed by OpenAI, similar to BERT and XLNet. Perhaps as a testament to the rapidity of developments in the field, even Wikipedia (famous for articles written within minutes of speeches being made and explosions being heard) currently has a redlink for large language models. Much ink has already been spilled on claims of GPTs' sentience, bias, and potential. It's obvious that a computer program capable of writing on the level of humans would have enormous implications for the corporate, academic, journalistic, and literary worlds. While there are certainly some unrealistically hyped-up claims, it's hard to overstate how much these things are capable of, despite their constraints.

The reports
With that said, there are basically two options here.
 * The first is for me to keep droning on about how these models are a big deal, in a boring wall of text that makes increasingly outlandish and far-fetched claims about their capabilities.
 * The second is to show you what I am talking about.

I have opted for the second. In this issue, two articles have been written by an AI model called GPT-3: the deletion report and the arbitration report.

For the deletion report, GPT-3 was prompted with a transcript of each discussion in the report, and instructed to write a summary of it in the style of deceased Gonzo journalist Hunter S. Thompson. This produced a mixture of insightful, incisive, and derisive commentary. GPT-Thompson proved quite capable of accurately summarizing the slings and arrows of every discussion in the report – even though it specifically covers the longest and most convoluted AfDs. "Ukrainian Insurgent Army war against Russian occupation", for example, was a whopping 126,000 bytes (and needed to be processed in several segments) but the description was accurate.
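The segmented processing of long discussions like that one can be sketched roughly as follows. This is my illustrative reconstruction, not the exact procedure used: the chunk size is an assumption, and the real workflow also truncated timestamps and signatures first.

```python
def split_transcript(text, max_chars=12000):
    """Split a long AfD transcript into segments small enough for a
    model's context window, breaking on paragraph boundaries."""
    paragraphs = text.split("\n\n")
    segments, current = [], ""
    for para in paragraphs:
        # Start a new segment if adding this paragraph would overflow.
        if current and len(current) + len(para) + 2 > max_chars:
            segments.append(current)
            current = para
        else:
            current = current + "\n\n" + para if current else para
    if current:
        segments.append(current)
    return segments
```

Each segment can then be summarized separately and the partial summaries stitched together (or fed back in for a final summarizing pass).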

For each discussion in the report, I provided a full transcript of the AfD page (with timestamps and long signatures truncated to aid processing), and prompted GPT-3 for a completion, using some variation on the following:
 * "The following text is a summary of the above discussion, written by Gonzo journalist Hunter S. Thompson for Rolling Stone's monthly feature on Wikipedia deletion processes."
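A minimal sketch of how such a request might be assembled is below. The prompt wording is taken from the article; the commented-out API call, model name, and parameters are my assumptions about the pre-1.0 OpenAI Python client of the era, not a record of what was actually run.

```python
def build_summary_prompt(transcript):
    """Append the summarization instruction to an AfD transcript
    (timestamps and long signatures already truncated upstream)."""
    instruction = (
        "The following text is a summary of the above discussion, "
        "written by Gonzo journalist Hunter S. Thompson for Rolling "
        "Stone's monthly feature on Wikipedia deletion processes."
    )
    return transcript + "\n\n" + instruction

# With the pre-1.0 OpenAI client, the completion call looked roughly like:
# import openai
# response = openai.Completion.create(
#     model="text-davinci-002",   # assumption: the exact engine isn't stated
#     prompt=build_summary_prompt(afd_text),
#     max_tokens=400,
# )
# summary = response["choices"][0]["text"]
```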

Despite being ostensibly written in Thompson's style, these were generally quite straightforward summaries that covered the arguments made during each discussion, with hardly any profanity.

Afterwards, I provided the summary itself as a prompt, and asked GPT-Thompson for an "acerbic quip" on each. Unlike the "summary" prompts (in which GPT-Thompson only occasionally chose to accompany his commentary with unprintable calumny and scathing political rants), the "acerbic quip" prompts produced output ranging solely from obscene and irreverent to maliciously slanderous. Notably, this behavior is identical to what Hunter S. Thompson habitually did in real life, and part of why many editors allegedly loathed working with him. Personally, I didn't mind sifting through the diatribes (some of them were quite entertaining), but having to run each prompt several times to get something usable did make it fairly expensive.

For the arbitration report, GPT-3 was instructed to write a summary of each page in the style of deceased United States Supreme Court justice Oliver Wendell Holmes, Jr. This produced surprisingly insightful commentary; Justice GPT-Holmes proved able to summarize minute details of proceedings, including some things I'd missed while originally reading them. In general, he was more well-behaved (and less prone to obscene tirades) than GPT-Thompson, although he did have a tendency toward long-winded digressions, and would often quote entire paragraphs from the source text.

Similar to the deletion report, input consisted of brief prologues (e.g. "The following is a verbatim transcript of the findings of fact in a Wikipedia arbitration request titled 'WikiProject Tropical Cyclones'"). This was followed by the transcript of the relevant pages (whether they were the main case page, arbitration noticeboard posting, preliminary statements, arbitrator statements, or findings of fact and remedies). Afterwards, a prompt was given for a summary, of the following general form:
 * The following text is an article written by United States Supreme Court Justice Oliver Wendell Holmes, summarizing the findings of fact and remedies, and their broader implications for the English Wikipedia's jurisprudence.
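The three-part structure described above (prologue, then transcript, then summarization instruction) can be sketched like this. The prologue and instruction wording come from the article; the function itself and its parameters are my illustrative framing:

```python
def build_arb_prompt(case_title, transcript):
    """Assemble prologue, page transcript, and summary instruction
    for an arbitration-report completion request."""
    prologue = (
        "The following is a verbatim transcript of the findings of fact "
        f"in a Wikipedia arbitration request titled '{case_title}'"
    )
    instruction = (
        "The following text is an article written by United States "
        "Supreme Court Justice Oliver Wendell Holmes, summarizing the "
        "findings of fact and remedies, and their broader implications "
        "for the English Wikipedia's jurisprudence."
    )
    return "\n\n".join([prologue, transcript, instruction])
```

Note that, unlike the deletion-report prompts, the framing here comes both before the transcript (the prologue) and after it (the instruction).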

Image generation
While some concerns have been raised about the intellectual property implications of images generated by such models, the determination has been made (at least on Commons) that they're ineligible for copyright due to being the output of a computer algorithm. With respect to moral rights, the idea is generally that they're ripping off human artists because they were trained on a bunch of images from the Internet, including copyrighted ones. However, it's not clear (at least to me) in what way this process differs from the same being done by human artists. As far as I can tell, this is the way that humans have created art for the last several tens of thousands of years: Billie Eilish does not get DMCA claims from the Beatles for writing pop music, and Peter Paul Rubens didn't get in trouble with the Lascaux cavemen (even when he painted obviously derivative works).

Obvious questions

 * This is a joke, right?
 * No. I really did have GPT-3 write these articles. I can show you screenshots of my account on the OpenAI website if you want. Copyediting was minimal, and consisted mostly of reordering entries and removing irrelevant asides.


 * So you just pushed a button and the whole thing popped out?
 * Not exactly. I organized the layout of each article, determined what sections would go where, and had GPT-3 write the body text of each section according to specific prompts (as described above). It was also necessary to format the model's output in MediaWiki markup. Although GPT-3 is more than capable of writing code (including wikitext), I didn't want to overwhelm it by asking it to do too much stuff at the same time, as this tends to degrade quality.


 * This is obviously cherry-picked – you didn't just publish the direct output of the model.
 * Well, we don't do that for human writers, either. I don't even do that for myself – typically, by the time I flag my own stuff to be copyedited, I have gone through multiple stages of writing, rewriting, editing, adding notes for clarification, and deleting unnecessary content.


 * How do you know it's not completely full of crap?
 * I don't – every claim that it made was individually fact-checked (we do this for human writers, too). The overwhelming majority were correct, and in the rare cases where it got something wrong, it could be fixed by asking it to complete the prompt again.


 * Why not just write the articles yourself at that point?
 * Even accounting for the time spent verifying claims, it was still generally faster than writing the articles myself, as it was capable of structuring full paragraphs of text in seconds. It was sometimes time-consuming to re-prompt it when it would write something incorrect or useless, but there is a sort of art to writing prompts in a way that causes useful answers to be generated, which gradually became easier to do as time went on. For example, replacing "The following is a summary of the discussion" with "The following is a rigorously accurate summary of the discussion" (yes, this really works).


 * So it is a worse version of a human writer?
 * In terms of typographical errors, it was far better: I don't remember it making a single misspelling. The few grammatical errors it made were minor, and not objectively incorrect (e.g. saying "other editors argue" instead of "other editors argued" for a discussion that the prompt said had already been closed – this is not even really an error per se).


 * I heard language models were racist.
 * Language models like GPT-3 predict the most likely completion for a given input sequence, based on their training corpus, which is a very broad spectrum of text from the Internet (ranging from old books to forum arguments to furry roleplay). If the prompt is the phrase "I think the French are bastards because", you will probably end up with a bunch of text about how the French are bastards, similar to if you typed that into a search engine. In this particular instance, I did not observe GPT-3 saying anything prejudiced. This may be due to the people I prompted it to emulate; presumably, if I had told it to write in the style of Adolf Hitler, I would have gotten some nasty stuff. My solution to this was to not do that.


 * How much did this whole gimmick cost?
 * Since I signed up for the GPT-3 beta, I have used it for things other than Signpost writing, so it's hard to tell precisely how much compute went towards these articles. However, the total cost of all the API requests I've made so far is 48.12 USD.


 * Damn!!
 * Large language models are notorious for requiring massive amounts of processing power. I still think it's a bargain: imagine how much it would cost to actually hire Hunter S. Thompson and Oliver Wendell Holmes after you adjusted for inflation.