User:Tzusheng/sandbox/Wikipedia:Wikibench/Campaign:Editquality

This Wikibench campaign page documents the goal, progress, and results of the edit quality campaign, which curates edits and associated communal labels to evaluate the performance of artificial intelligence models, such as ORES or Liftwing, that predict edit quality on the English Wikipedia.

Background
The English Wikipedia relies heavily on AI models to identify inappropriate edits. For example, recent changes patrollers use AI-assisted quality and intent filters to scan recent changes to articles for edits that are likely damaging or made in bad faith. Other third-party vandalism detection tools, such as Huggle and SWViewer, also use AI models to identify edits that are likely vandalism. While Wikipedians rely on these AI models to carry out critical work on the English Wikipedia, the effectiveness of these models is unclear. The core belief underlying this project is that Wikipedians' voices are essential in evaluating AI models from their individual and collective perspectives. The fundamental obstacle to evaluating these models is the lack of available evaluation data.

Campaign goal
The primary goal of this campaign is to curate a dataset that can be used to evaluate AI models that predict edit quality on the English Wikipedia. Each datapoint in the dataset is an edit along with its label, which captures Wikipedians' judgment of that edit. By comparing AI models' predictions against human labels on a sufficient number of edits, it is possible to better understand the models' performance. This understanding is critical for the Wikipedian community to make more informed decisions when choosing or using these models.

In addition to the primary goal, this campaign also provides an opportunity for Wikipedians to compare and contrast their judgments with others. This enables the Wikipedian community to reflect, deliberate, and build a shared understanding of whether an edit is damaging or saved with bad faith.

How to participate
To participate in the campaign, Wikipedians may follow the step-by-step guide in this section to import Wikibench, the underlying technical infrastructure that supports the campaign. A registered account on the English Wikipedia is required.

Note that after a three-week pilot study, Wikibench's features associated with label submissions, including the plug-in on diff pages and the edit buttons on the entity pages, have been temporarily turned off for analysis and the next development cycle. The rest of the features, including the display of the campaign page, entity pages, and evaluation page, are still available during this period. Please visit Wikibench's research page for the full project timeline and sign up if you are interested in future updates.

Import Wikibench
Importing Wikibench can be done with a single line of code. Simply copy and paste the following JavaScript snippet into your common.js page:
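A typical import uses MediaWiki's standard mw.loader.load mechanism; the script page title below is an assumption for illustration and should be replaced with Wikibench's actual script location:

```javascript
// Load the Wikibench user script from a wiki page.
// NOTE: the page title below is a hypothetical example, not the
// confirmed location of the Wikibench script.
mw.loader.load('/w/index.php?title=User:Tzusheng/Wikibench.js&action=raw&ctype=text/javascript');
```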

After saving, you have to bypass your browser's cache to see the changes.

If you are unfamiliar with the common.js page, you may learn more about it here. If you have difficulties with the import, feel free to contact the developer of Wikibench for help.

Importing Wikibench doesn't affect any of your account settings. It only changes how a few wiki pages are rendered, as described in the next section.

Label data
You may start labeling edits after importing Wikibench. Specifically, Wikibench renders a plug-in on diff pages, which patrollers typically access through the recent changes page to decide whether an edit should be reverted.

Below is a screenshot of the plug-in with the following information and user interface widgets:


 * Diff IDs: The revision IDs before and after an edit. You do not need to enter them manually as Wikibench automatically extracts the information from diff pages.
 * Edit damage: You may click either button to label an edit as damaging or not damaging based on your assessment. If you are not fully confident, the optional checkbox allows you to mark your label as lower confidence.
 * User intent: You may click either button to label an edit as being saved with good or bad faith based on your assessment. If you are not fully confident, the optional checkbox allows you to mark your label as lower confidence.
 * Note for edit damage: Optionally, you may add a note to your label for edit damage.
 * Note for user intent: Optionally, you may add a note to your label for user intent.
 * Submit: Clicking the submit button will store your label on a Wikibench entity page, as described in the next section.



To learn why edit damage and user intent are distinguished when assessing edit quality, see the questions and answers section.

Label definitions

 * User intent: An edit is considered "good faith" when it is reasonably plausible that the editor's intention was to improve the article; per WP:AGF, good faith is the default until there is a concrete reason to suspect bad faith. Not all damaging edits are made in bad faith.
 * Edit damage: An edit is considered constructive when the post-edit revision is better than the pre-edit revision. For the purposes of evaluating edit damage, edits are not evaluated against what else could have been done to improve the article. "Not damaging" is a soft default; if an edit makes the article neither better nor worse at all, it is not damaging. Label data should still be provided in cases where edit damage depends on external factors. For example, an edit which introduces verifiably false information with appropriate style and formatting should be labelled as damaging.

Review data and discuss
The labels of each edit are documented on an entity page, which is a typical wiki page on the English Wikipedia. The title of each entity page follows Wikipedia's naming convention of internal links to diff pages.

The entity page of an edit is accessible from the plug-in after a label submission. The table below on this campaign page also links to all entity pages, provided you have Wikibench imported; importing Wikibench is required to fetch and render data in real time.

Without importing the Wikibench script, one sees the raw source of entity pages, which includes a message box that directs viewers to this campaign page and the labels stored in JSON syntax. Because JSON is difficult for people to read, the Wikibench script parses the content of the entity page and re-renders it in a more reader-friendly format, as shown in the screenshot below.



The re-rendered entity page supports the following user interactions:


 * Review the edit: The diff at the top of an entity page allows people to review the edit associated with the entity page.
 * Review or update the primary label: The primary label is used to compare against AI models' predictions for evaluation. When a person submits the first label of an edit via the Wikibench plug-in, their label is also considered the primary label. Subsequent label submissions will not alter the primary label. To update the primary label, visit the entity page and follow the BOLD, revert, and discuss cycle for consensus building. Because each edit only has a single version of the primary label, just like Wikipedia articles, the community should aim to make collective decisions if possible.
 * Review or update your label: People may review or update their own labels as they wish without affecting the primary label decided by the community.
 * Review individual labels: People may also see the labels of all Wikipedians who submit their labels. The label distribution and notes can be helpful to the community in deciding the primary label. Note that polling is not a substitute for discussion on Wikipedia.
 * Discuss: When disagreements arise, use the talk page associated with each entity page for discussion and consensus building.

Data curation progress
This section documents the data curation progress of the campaign. To fetch data and show the curation progress in real time, importing Wikibench is required (by following the instructions above). With Wikibench imported, two bar charts showing the distribution of all curated data will appear in this section.

The two bar charts that show the data curation progress:

Alternatively, one may use Wikipedia's built-in prefix search (prefix = User:Tzusheng/sandbox/Wikipedia:Wikibench/Entity:) to find all Wikibench entity pages on the English Wikipedia.

All labeled data
All the labeled data will be available in the following table after importing Wikibench, which is required for fetching and rendering data in real time.

Data statement
There is a local consensus that on-wiki use of the data acquired through this campaign should be limited to the immediate scope of Wikibench. A strong consensus should be established prior to any on-wiki use outside this research project.

The data acquired as part of this campaign is not intended for other uses and may be inappropriate or unsuitable for many purposes. In particular, it is not a representative sample of edits made to the English Wikipedia, nor is it intended as such.

Table columns
Here is the description of each column:


 * Diff IDs: The revision IDs before and after an edit. The hyperlink in this column links to the entity page associated with each edit.
 * Primary label: The primary label collectively decided by Wikipedians for each edit.
 * Your label: Your submitted label. Blank otherwise. An asterisk denotes that you submitted the label with low confidence.
 * Disagreement: A number between 0 and 1 denoting the level of disagreement among all submitted labels; a higher number indicates greater disagreement. The exact formula is available in the questions and answers section. The number is not affected by the primary label.
 * Label count: The number of Wikipedians who have submitted their labels of an edit.

Table buttons
Once you import Wikibench, four buttons will appear above the table with the following names. These buttons help sort the table for different purposes with a single click:


 * Provide more labels: Click this button when you'd like to provide labels for diffs that have fewer labels. The button first sorts by your labels in ascending order, then by label count in ascending order. The resulting table shows diffs that need more labels at the top.
 * Compare my labels: Click this button when you'd like to compare your labels with the primary labels. The button first sorts by the primary labels in descending order, then by your labels in descending order. The resulting table shows diffs with your labels at the top.
 * Build consensus (edit damage): Click this button when you'd like to help build consensus for diffs with higher levels of disagreement among labelers. The button first sorts by label count in descending order, then by the disagreement of edit damage in descending order. The resulting table shows diffs with higher disagreement of edit damage at the top. Providing labels or engaging in discussions may help build consensus for these diffs.
 * Build consensus (user intent): This button is identical to the previous one but for user intent.
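The "first sorts by X, then by Y" behavior described above can be implemented with successive stable sorts, where the last sort applied becomes the primary key. The sketch below illustrates this for the "Provide more labels" button; the row field names are assumptions for illustration, not Wikibench's actual data model.

```javascript
// Sketch of the "Provide more labels" sorting. Array.prototype.sort is
// guaranteed stable in modern JavaScript, so sorting by the secondary
// key first and the primary key last yields a multi-key sort.
// Field names (yourLabel, labelCount) are illustrative assumptions.
function sortForMoreLabels(rows) {
  const copy = rows.slice();
  // Secondary key: your label, ascending (blank labels sort first).
  copy.sort((a, b) => (a.yourLabel || '').localeCompare(b.yourLabel || ''));
  // Primary key: label count, ascending, so diffs that need more
  // labels rise to the top of the table.
  copy.sort((a, b) => a.labelCount - b.labelCount);
  return copy;
}
```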

Entity pages on revdel'ed edits
Reviewers may have submitted labels on revisions that are subsequently revision-deleted. For recordkeeping purposes, these entity pages should be archived, with redirect creation suppressed during the move.

If you do not have the right to suppress redirects (i.e., you are not a page mover or administrator) and encounter such entity pages, please append them to the list below in the following format, and a page mover will carry out the necessary move:

List of entity pages of revdel'ed edits

 * User:Tzusheng/sandbox/Wikipedia:Wikibench/Entity:Diff/1162855148/1162855378 → User:Tzusheng/sandbox/Wikipedia:Wikibench/Archive/Entity:Diff/1162855148/1162855378
 * User:Tzusheng/sandbox/Wikipedia:Wikibench/Entity:Diff/1162855588/1162855651 → User:Tzusheng/sandbox/Wikipedia:Wikibench/Archive/Entity:Diff/1162855588/1162855651
 * User:Tzusheng/sandbox/Wikipedia:Wikibench/Entity:Diff/1162855553/1162855588 → User:Tzusheng/sandbox/Wikipedia:Wikibench/Archive/Entity:Diff/1162855553/1162855588
 * User:Tzusheng/sandbox/Wikipedia:Wikibench/Entity:Diff/1162590593/1162590952 → User:Tzusheng/sandbox/Wikipedia:Wikibench/Archive/Entity:Diff/1162590593/1162590952
 * User:Tzusheng/sandbox/Wikipedia:Wikibench/Entity:Diff/1162446394/1162883043 → User:Tzusheng/sandbox/Wikipedia:Wikibench/Archive/Entity:Diff/1162446394/1162883043

Model evaluation
The curated data can be used to evaluate existing AI models or bots used in Wikipedia, such as ORES or ClueBot NG, or any models to be developed in the future, such as Liftwing. The purpose of an evaluation is to provide a better understanding of the performance of these AI-assisted tools so that the Wikipedian community can collectively make more informed decisions about using and deploying these tools.

Please visit the evaluation page of Wikibench to get a glimpse of how the curated dataset of Wikibench may serve as a benchmark for evaluating AI systems on Wikipedia.

Disclosure of paid participation
The WMF Terms of Use restrictions apply if you expect to receive compensation for work on Wikibench, since they do not distinguish between different types of paid editing. Since disclosure in edit summaries is not possible for technical reasons, Wikibench contributors who receive compensation for their contributions to the research project are technically obliged to disclose this either on their user page or on the talk pages of the pages they edit in the course of this research project per the Terms of Use.

However, this discussion indicates a rough consensus that the spirit of the law was not intended for paid editing in Wikipedia-related research projects, and paid editing as part of Wikibench research would likely not face the same scrutiny as more conventional paid editing. The WMF Terms of Use do not currently reflect this nuance.

You may consider adding one of the user boxes below for disclosure.

Questions and answers
If you have any questions or discussion topics about this campaign but can't find them in this section, please share your feedback on the talk page. If you have questions about Wikibench in general, please visit Wikibench's project page on the meta wiki.

What's the reason for distinguishing edit damage and user intent?

On Wikipedia, vandalism has a precise definition: damaging edits saved with good faith are not vandalism. Because it is critical to handle these different circumstances differently, separate AI models are responsible for identifying edit damage and user intent. Hence, separate labels are needed to evaluate these different models.

How does Wikibench calculate the level of disagreement?

The number in the disagreement column is the population standard deviation of the submitted individual labels. Consider edit damage as an example: Wikibench encodes damaging, damaging with low confidence, not damaging with low confidence, and not damaging as -1, -0.5, 0.5, and 1, respectively, then calculates the standard deviation of these encoded values. The more divided the labels, the larger the resulting standard deviation. The maximum value is one because of the range of the encoded values (from -1 to 1).
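The calculation can be sketched as follows; the label strings used as keys are illustrative, and the population (rather than sample) standard deviation is inferred from the stated maximum of one:

```javascript
// Encoding of edit-damage labels, following the campaign page:
// damaging → -1, damaging (low confidence) → -0.5,
// not damaging (low confidence) → 0.5, not damaging → 1.
// The key strings are illustrative assumptions.
const ENCODING = {
  'damaging': -1,
  'damaging (low confidence)': -0.5,
  'not damaging (low confidence)': 0.5,
  'not damaging': 1,
};

// Disagreement = population standard deviation of the encoded labels.
// With values in [-1, 1], the maximum possible result is 1 (an even
// split between -1 and 1); a unanimous set of labels yields 0.
function disagreement(labels) {
  const values = labels.map((label) => ENCODING[label]);
  const mean = values.reduce((sum, v) => sum + v, 0) / values.length;
  const variance =
    values.reduce((sum, v) => sum + (v - mean) ** 2, 0) / values.length;
  return Math.sqrt(variance);
}
```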

Why are the campaign and entity pages under a user subpage?

Currently, all the campaign and entity pages are stored under the subpages of Tzusheng (the creator of Wikibench) because Wikibench is still a research prototype. If Wikibench is well received by Wikipedians in the pilot testing, we will make efforts to move it into the Wikipedia: namespace if appropriate.