Wikipedia:Universal Code of Conduct/2021 consultation

__NEWSECTIONLINK__

=Request for comment: Universal Code of Conduct application=

The Wikimedia Foundation is seeking input about the application of the Universal Code of Conduct.

The goal of this consultation is to help outline clear enforcement pathways that a drafting committee can use to design proposals for a comprehensive community review later this year. The proposals may build on existing processes or on additional pathways suggested during the consultation. For more information about the UCoC project, see the Universal Code of Conduct overview.

Discussions are happening on many projects and are listed at the 2021 consultations page.

Please discuss in the subsections below and let me know if you have any questions. Xeno (WMF) (talk) 22:32, 5 April 2021 (UTC)

==Consultation structure==
There are five topics with several questions to help start conversations. Feedback provided will have a significant impact on the draft for enforcement guidelines that will be prepared following the comment period.


 * Please do not feel obligated to answer every question; focus on what you find important or impactful. We understand that giving opinions on this topic can be difficult.
 * While it will be necessary to describe experiences in a general way, these discussions are not the place to revisit previously decided matters, which should be handled at a more appropriate venue.
 * Each topic has several questions to help understand how the Universal Code of Conduct might interface with different communities.
 * For answers to some frequently asked questions, please see this page.


 * Please note: If you wish to report a specific incident, please use existing pathways. If that is not an acceptable pathway, outlining in more general terms why the existing process does not work will be useful. Please avoid sending incident reports or appeals to facilitators or organizers. The people organizing discussions are not the staff that handle specific abuse reports or appeals and are not able to respond in that capacity.


 * '''Community support'''
 * 1) How can the effectiveness of anti-harassment efforts be measured?
 * 2) What actions can be taken and what structures should be available to support those being targeted by harassment and those that respond to conduct issues?
 * 3) What formal or informal support networks are available for contributors? What is necessary for these groups to function well, and what challenges are there to overcome?
 * 4) What additional opportunities are there to deliver effective support for contributors? What would be useful in supporting communities, contributors, targets of harassment, and responders?


 * '''Reporting pathways'''
 * 1) How can reporting pathways be improved for targets of harassment? What types of changes would make it more or less challenging for users to submit appropriate reports?
 * 2) What is the best way to ensure safe and proper handling of incidents involving i) vulnerable people; ii) serious harassment; or iii) threats of violence?
 * 3) In your experience, what are effective ways to respond to those who engage in behaviours that may be considered harassment?
 * 4) In what ways should reporting pathways provide for mediation, reform, or guidance about acceptable behaviours?


 * '''Managing reports'''
 * 1) Making reporting easier will likely increase the number of reports: in what ways can the management of reports be improved?
 * 2) What type of additional resources would be helpful in identifying and addressing harassment and other conduct issues?
 * 3) Are there human, technical, training, or knowledge-based resources the Foundation could provide to assist volunteers in this area?
 * 4) How should incidents be dealt with that take place beyond the Wikimedia projects but are related to them, such as in-person or online meetings?


 * '''Handling reports'''
 * 1) In what ways should reports be handled to increase confidence from affected users that issues will be addressed responsibly and appropriately?
 * 2) What appeal process should be in place if participants want to request a review of actions taken or not taken?
 * 3) When private reporting options are used, how should the duty to protect the privacy and sense of safety of reporters be balanced with the need for transparency and accountability?
 * 4) What privacy controls should be in place for data tracking in a reporting system used to log cross-wiki or persistent abusers?


 * '''Global questions'''
 * 1) How should issues be handled where there is no adequate local body to respond, for example, allegations against administrators or functionary groups?
 * 2) In the past, the global movement has identified projects that are struggling with neutrality and conduct issues: in what ways could a safe and fair review be conducted in these situations?
 * 3) How would a global dispute resolution body work with your community?

=Discussion=

==How can the effectiveness of anti-harassment efforts be measured?==

 * I am not sure effectiveness of effort is something that can be measured, other than in the negative - sure, if reports of harassment go up, we know our efforts are NOT effective. But a decrease in reports does not necessarily mean our efforts were effective... it could just mean fewer people had confidence that if they report harassment it will stop. Blueboar (talk) 01:46, 6 April 2021 (UTC)
 * I agree that this is not measurable. Reports will likely go up because structures are put in place; that doesn't mean more harassment happens. In fact, I think that every study I've participated in with respect to harassment on Wikimedia projects has been tainted by the fact that there was no way to say things like "it happened once 8 years ago, but never since" or "it happened and was well addressed by the community". (In other words, those studies never measured the frequency of harassment or the effectiveness of existing solutions.) Frankly, if the only purpose of the UCoC is as an anti-harassment tool, then we're doing it wrong. Risker (talk) 02:16, 6 April 2021 (UTC)
 * I think we probably could figure out an indirect measurement by measuring perceived levels of harassment, if we gave it some good thought. A survey, ongoing or intermittent, that invites randomly-chosen accounts to participate. Should be super simple, something along the lines of "1. Have you personally experienced or witnessed harassment within the past 30 days?" (Terrible question, but just for a general idea.) Numbers go up or down over some TBD time period (I agree that we shouldn't assume an initial period of upticking is evidence to the contrary, as it could simply be an increase in awareness of what constitutes harassment), but eventually we have some indirect measure. OR: allow editors to flag posts by others that either include a ping to them or are on their own talk page. This last is IMO something we should have been doing for the past ten years, but of course it requires a developer and some assessment/response mechanism. Again, this would be only an indirect measure; both methodologies are actually measuring levels of perceived harassment. —valereee (talk) 14:15, 6 April 2021 (UTC)
 * Create a facility to research the quality of talk page interactions. It would be very interesting to grab a large sample of talk page interactions and ask volunteers to rate them as constructive or problematic. This could serve several purposes. First, it could help us create a more real set of examples of what the community considers ok and not ok. Second, it could maybe help us train a bot to notice possibly problematic exchanges. Third, it could provide data over time as to whether things are getting better or worse. Chris vLS (talk) 15:14, 9 April 2021 (UTC)
 * You'd need a survey hitting a random pool of users across communities every month, tracking "I have been harassed" (btw, I still think that choice of words is unwise), "I have done something about it", "the effort was successful/pending/unsuccessful", etc. You'd also need multiple months of data before it has any validity. A question should also be included to track "I was falsely accused", etc. Nosebagbear (talk) 12:34, 17 April 2021 (UTC)
 * Realistically, it probably can't. Kaldari (talk) 01:02, 8 June 2021 (UTC)
 * The Movement Strategy's "Provide for Safety and Inclusion" section claims that "Among the common causes for the gender gap (...) is the lack of a safe and inclusive environment." -- this assumption (which so far I believe to be flawed), if true, would mean that having the UCC in place plus its enforcement mechanism should lead to significantly narrowing the gender gap. If the gender gap does not shrink significantly, it would be an indication that the underlying assumptions were mistaken, and that having a UCC was not a meaningful solution. Al83tito (talk) 05:38, 2 February 2024 (UTC)
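Several comments above propose an intermittent survey as an indirect measure of perceived harassment. As a minimal sketch of how monthly yes/no responses could be tabulated into a trend line (the question wording and the sample data are hypothetical, drawn from the suggestions in this thread, not from any real survey):

```python
from collections import Counter

def monthly_rate(responses):
    """Share of respondents per month answering 'yes' to a question like
    'Have you personally experienced or witnessed harassment within the
    past 30 days?' (hypothetical wording from the discussion above)."""
    totals, yeses = Counter(), Counter()
    for month, answered_yes in responses:
        totals[month] += 1
        if answered_yes:
            yeses[month] += 1
    # Sorted by month so the result reads as a time series
    return {m: yeses[m] / totals[m] for m in sorted(totals)}

# Hypothetical sample: (month, answered_yes)
sample = [("2021-04", True), ("2021-04", False), ("2021-04", False),
          ("2021-05", True), ("2021-05", True), ("2021-05", False)]
print(monthly_rate(sample))  # roughly one third for April, two thirds for May
```

As noted in the thread, a single month's number means little; only the movement of this rate over many months (and alongside a "falsely accused" counterpart question) would carry any signal.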

==What actions can be taken and what structures should be available to support those being targeted by harassment and those that respond to conduct issues?==

 * Those experiencing harassment can report it to administrators and ultimately ArbCom. This usually works. If we create a formal system beyond this, things become too bureaucratic.  Harassment issues often need a more flexible non-bureaucratic approach. Blueboar (talk) 01:52, 6 April 2021 (UTC)
 * There are two types of harassment. One is obvious cases like off-wiki harassment or situations where on-wiki harassment can be proven with one or two diffs. These probably can be solved by existing structures (though I have doubts about support; some psychological help service would be good, but I am not sure we can afford it). The other type is when things happen in small steps, and one needs hundreds of diffs to see anything and hundreds more to see whether this is really one-sided harassment and not a situation where one side of a dispute wants to gain advantage by calling the actions of the other side harassment. So far nobody on the English Wikipedia, including the ArbCom, has been willing to launch an investigation, find the diffs, understand the situation, and work out a solution. The only structure willing to do it was T&S, but it is not scalable. I am not sure what scalable structure we could have here, but we can think about one.--Ymblanter (talk) 05:53, 6 April 2021 (UTC)
 * I'd like to see a button that can be used to flag posts as harassment. I don't know how this works to keep it from becoming its own form of harassment -- maybe you only get one such flag a month? Maybe it's a right that can be removed for abusing it? —valereee (talk) 14:19, 6 April 2021 (UTC)
 * Moved discussion of this idea to 
 * I think meaningful resources should be available to those who feel harassed for any reason. The WMF benefits from a vast amount of volunteer labour and these days has access to considerable financial resources. It would be good if some of that was directed to providing help for the volunteer community at the heart of the project. WJBscribe (talk) 10:15, 7 April 2021 (UTC)
 * Let's not copy the Silicon Valley model too closely: Community values -- upheld by the community -- work more powerfully than other measures. Most social networks don't have a purpose (other than ads), so there is no agreement within the community about what is or isn't ok. So they have to hire moderators to enforce a policy. Wikipedia is the opposite. We have a purpose. Our community has a strong consensus about what is constructive. We enforce a vast array of policies millions of times a day. Having an openly edited encyclopedia shouldn't be scalable, but the community makes it so. The same is true of harassment. If it is instilled like other community values, regular editors will call it out. The most powerful tools would be those empowering regular editors to do this, maybe something like an MOS for talk page interactions, so it is as easy to tell someone they've crossed into harassment as it is to tell them they bolded too many words in the lead sentence. Finally, to answer the question, we should consider a separate noticeboard. The "button" question depends on many details and would need to consider specific designs. Chris vLS (talk) 15:28, 9 April 2021 (UTC)
 * One issue with a MOS for talk page behaviour is that our existing literature on the topic (e.g. WP:NPA – "Comment on content, not contributor") already very clearly prohibits most forms of unconstructive behaviour that people engage in. However, these very policies/guidelines/essays are used as ammunition to wikilawyer with. If they're not ignored outright, it becomes a discussion about "well actually you've only pointed to an essay" or "if you had a proper argument you would be able to employ it here rather than commenting on my tone" or "NPA doesn't negate the fact that competence is required" or the other ridiculous things people know they can get away with saying, despite the statements being egregiously rude and obviously written out of spite. And people are extremely creative with their rudenesses, so I don't think a full rulebook "all the things not to do" is achievable. The problem is a culture and community with a consistently bad tone, such that there's no way to enforce any of the rules we have without banning 90%+ of our contributors in some topic areas. On the other hand, this is exactly what you're talking about with the need for the community to have good behaviour as ingrained, intrinsic values, rather than approaching this like social media moderation, to which I agree. — Bilorv ( talk ) 22:07, 9 April 2021 (UTC)
 * In a mature community like en-wiki, I would say that the current structures are sufficient - a standard one and an additional one to handle cases that demand private evidence. I am concerned about "flag buttons" - they will generate a tidal wave of reporting without the capacity to investigate them. A majority will be used incorrectly and a substantial portion deliberately incorrectly (as we see even at ANI, with a greater barrier to meet). Flag-removal would also take effort, in excess of just creating a new account etc. Nosebagbear (talk) 12:39, 17 April 2021 (UTC)
 * An idea might be to strictly separate powers between editors and admins. Editors should edit based on sources and community consensus. Admins should deal with other issues and be trained to deal with them fairly. If someone is good at writing content they are not necessarily good at dealing with behavioural issues (and vice versa). Also: being able to edit and administrate can introduce bias and POV issues. Admins dealing with such issues should be as impartial and detached from the content as possible and might even be paid employees as they should have no say and no impact on the content of the encyclopaedia whatsoever. They should only deal with behavioural issues. Maybe a new role is needed to handle behavioural issues? -- {{u|Gtoffoletto}} talk 11:31, 4 May 2021 (UTC)
 * There needs to be a clear and non-public means of reporting harassment, just like on every other major web platform. To support admins that respond to harassment reports, the wheel war loophole needs to be fixed. Right now, any block can be undone by any other admin with no justification. Reinstating a block, however, is wheel warring, which is strictly forbidden. This is why we have "unblockable" harassers. Kaldari (talk) 01:11, 8 June 2021 (UTC)

==What formal or informal support networks are available for contributors? What is necessary for these groups to function well, and what challenges are there to overcome?==

 * No formal networks as far as I am aware, unless we count T&S as a network. Informally, it pretty much depends on one's connections. I am (almost) not on social media, I am not a member of any Wikiprojects, I have been to a few meetups but I am certainly not a Wikimeetup regular - which means if I am in trouble I am basically on my own. If I manage to formulate the issues better than my opponent can, I probably would not lose much beyond my time spent, but if my opponent has a lot of time, determination, a number of friends, and does not screw up really badly, my case is hopeless whatever I do.--Ymblanter (talk) 18:34, 6 April 2021 (UTC)
 * The lack of current structured support leads inherently to people feeling isolated or forming small groups for informal support. Such groups are likely to share similar world views and may not view issues objectively as a result given the diverse backgrounds of contributors. As noted above, the WMF has access to financial resources to provide meaningful support to the volunteer community. It should be looking at the kind of support for stressed/harassed employees which are commonly provided by other corporations. WJBscribe (talk) 10:17, 7 April 2021 (UTC)
 * I'm not aware of any formal networks. There are various informal networks - if nothing else, things like the Wikipedia Discord allow contributors to natter: "I'm thinking x in circumstance y", but it's not designed (or used) for "this is making me stressed/sad". To function well, the group needs a good understanding of what they're doing: to give an example, a stress counsellor won't know anything about the issues I'm facing and I prefer to seek concrete advice. In terms of this consultation, there is a challenge that different communities have very different circumstances but also different zeitgeists. I fear that the WMF trying to create a support network (not inherently a bad idea) may struggle with making one applicable to lots of different groups. Additionally, any network needs to take care to make sure that it doesn't provoke concerns of transferring decision-making off-site. As something that is likely by design not to be transparent, not pushing decisions becomes even more key. Nosebagbear (talk) 12:51, 17 April 2021 (UTC)
 * There is no clear point of support for users feeling harassed etc. It is crucial that some support mechanism is implemented. -- {{u|Gtoffoletto}} talk 11:34, 4 May 2021 (UTC)
 * Basically none other than random chat groups and discussion boards, which I wouldn't really characterize as "support networks". And even those are difficult to discover and join. Kaldari (talk) 01:14, 8 June 2021 (UTC)

==What additional opportunities are there to deliver effective support for contributors? What would be useful in supporting communities, contributors, targets of harassment, and responders?==

 * Fix our admin tools so that they are fit for purpose. MER-C 17:56, 6 April 2021 (UTC)
 * Could you elaborate on this, so I can provide effective feedback? Xeno (WMF) (talk) 23:26, 25 April 2021 (UTC)
 * The anti-harassment effort requires the use of all admin tools. The root cause is a cultural and management problem at the WMF that results in chronic neglect. Nearly everything is a mess. Any way it could go wrong apart from crashing, it probably does. The following are illustrative examples only:
 * 0: Critical extensions that receive nothing but maintenance commits e.g. spam blacklist, title blacklist, nuke.
 * 1: General brokenness, e.g. T20493, CAPTCHAs
 * 2: Technical debt. Example: T27471 should be a one-hour task. Instead, it requires days of refactoring.
 * 3: The UX is, on average, shit. See e.g. T27471, which is also a UX problem because this is a VERY common task. The worst instance is Special:Undelete: with ?fuzzy=0 you get prefix search functionality, and with ?fuzzy=1 (now the default) you get title search functionality. You need to know the magic incantation in order to access this. Also, I suspect there are many admins that don't even know you could search the archive by title...
 * 4: Obvious missing features as seen by e.g. comparing Special:Undelete/X to a live page history, comparing live revisions to deleted ones, comparing Special:Undelete against the normal search functionality.
 * 5: Public information that shouldn't be public e.g. T217613 (low priority? really?)
 * 6: Building upon the existing tools to make them more powerful e.g. blocks that are harder to evade, more powerful and performant abuse filters.
 * 7: Catching up on all the anti-abuse innovations at other sites that passed the WMF by e.g. sockpuppet detection.
 * Look at Twinkle for some inspiration regarding 3 and 4. MER-C 12:53, 26 April 2021 (UTC)
 * See my comments above. Treat volunteer editors, admins, etc. at least as well as you would paid employees doing the same job and invest the same resources. What resources are available to WMF paid staff who report feeling stressed/harassed? The same should be given to volunteer contributors. WJBscribe (talk) 10:19, 7 April 2021 (UTC)
 * I believe it was in WP:AHRFC where I first saw the idea: WMF-funded anti-harassment workshops and bystander intervention training for community members. Many of us simply do not have the skills or experience to support contributors who come to us with fears of harassment or other long-term conduct issues. These are largely social and interpersonal skills, not technical tools, so they need to be developed beyond the various admin toolkits. Using Foundation funds to create a sizeable group of contributors who can intervene in situations, de-escalate, and properly respond to allegations of abuse will make the community more resilient in the long term not only because we will have people able to handle issues properly, but this group can further train other volunteers to create a self-sustaining cultural institution like the DRN or ArbCom. — Wug·a·po·des​ 00:36, 9 April 2021 (UTC)
 * Community Development recently ran a pilot course entitled Identifying and Addressing Harassment Online which involved seven 2-hour sessions and some light coursework each week. Interested parties may wish to signal their interest on the talk page there. It's my understanding they'll be posting the course material for self-study as well. Xeno (WMF) (talk) 23:26, 25 April 2021 (UTC)
 * Totally agree. Volunteer admins have no time for complex interpersonal issues. The WMF should have paid employees dealing fairly and consistently with those issues (like an independent judicial branch). They would deal only with behavioural issues, NOT content, obviously. A strict separation is needed. -- {{u|Gtoffoletto}} talk 11:36, 4 May 2021 (UTC)
 * A tool that made it easier for non-experienced editors to create a proper complaint with evidence. That's not a "report by pressing here" button, but something that genuinely walks them through with buttons: "What's the issue?", "Who is the problematic party/parties?", something to convert URLs into diffs, something that talks them through how to find the edit and get that URL, say what's wrong with it, etc. That would have no negatives, and while by no means a perfect solution, would fix a significant number of issues and smooth the complexity. Nosebagbear (talk) 14:21, 17 April 2021 (UTC)
 * I agree with MER-C that the admin tools need to be updated/fixed. The Anti-Harassment Tools team has made some improvements in this regard as has DannyS712, but it's just the tip of the iceberg. We also need to provide some kind of standardized training materials or workshops for responders if we are going to expect them to be effective rather than just making problems worse. Kaldari (talk) 01:22, 8 June 2021 (UTC)
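One concrete piece of the complaint-assistant idea above is converting pasted URLs into diff links. A rough sketch, assuming the standard MediaWiki `?diff=NNN` query parameter and the `Special:Diff` shortcut; real URLs have more variants (permalinks, `diff=prev`, mobile domains) than this handles:

```python
import re
from urllib.parse import urlparse, parse_qs

def url_to_diff_link(url):
    """Convert a pasted MediaWiki diff URL into a [[Special:Diff/...]] wikilink.
    Sketch only: covers the common /w/index.php?diff=NNN form and URLs that
    already contain Special:Diff; returns None when no diff ID is found."""
    query = parse_qs(urlparse(url).query)
    diff = query.get("diff", [None])[0]
    if diff and diff.isdigit():  # guards against diff=prev / diff=next
        return f"[[Special:Diff/{diff}]]"
    # Also accept URLs already in the canonical Special:Diff form
    m = re.search(r"Special:Diff/(\d+)", url)
    return f"[[Special:Diff/{m.group(1)}]]" if m else None

print(url_to_diff_link(
    "https://en.wikipedia.org/w/index.php?title=Example&diff=1016000000&oldid=1015999999"))
# [[Special:Diff/1016000000]]
```

A reporting wizard could run something like this on each pasted link, so a newcomer never needs to know what a "diff" is to produce a well-formed report.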

==How can reporting pathways be improved for targets of harassment? What types of changes would make it more or less challenging for users to submit appropriate reports?==

 * At the very least, there should be a page with a flowchart/wizard/algorithm that tells the user where is the best place to report it and what evidence they need to provide. MER-C 17:19, 6 April 2021 (UTC)
 * Each project should be looked at individually in this regard, and given help in strengthening its processes. I do not believe this should be centralised, save in respect to the most serious incidents (i.e. those that may need to involve law enforcement), or for very small projects that lack dispute resolution mechanisms. WJBscribe (talk) 10:21, 7 April 2021 (UTC)
 * If we are talking about long-term low-level harassment, it is notoriously difficult to report because (i) one should be able to make a convincing case, using many diffs, which most users are not capable of doing; (ii) there should be someone willing to look at this report, wading sometimes through hundreds or thousands of diffs, and even this might not be enough, because the behavior of the reporter or even of third parties might not be ideal either, and the whole episodes need to be taken into account. To be honest, I do not see this happening, but at least there should be clear instructions and maybe even training on how good and clear reports can be written.--Ymblanter (talk) 15:58, 9 April 2021 (UTC)
 * I'd like to see a button that can be used to flag posts as harassment. I don't know how this works to keep it from becoming its own form of harassment -- maybe you only get one such flag a month? Maybe it's a right that can be removed for abusing it? —valereee (talk) 14:19, 6 April 2021 (UTC)
 * Interesting idea. I also agree that a prominent "Report" button/link will be helpful. --Titodutta (talk) 03:52, 8 April 2021 (UTC)
 * I too think a red-flag / report button could be useful. I was going to create a section under Discussion, but then I saw your mention of it here, valereee. The way I pictured it, we wouldn't investigate every report, but only users or diffs that crossed some threshold of report counts. It would be an option you activate on a specific diff, like you do for Thanks. How to avoid organised POV groups from abusing the system? How to allow people to indicate "no, I don't think this edit was unacceptable"? Who's going to volunteer to review the red-flag lists? Perhaps after a diff crosses its threshold, and gets listed, the Report button could change to a set of Voting buttons (to avoid people piling on the Report button after a listing). There would be a lot of detail to work out to get the process functioning, but it'd be worth a try. [Should this thread be moved to "How can reporting pathways be improved..."?] Pelagic ( messages ) – (02:05 Fri 09, AEDT) 15:05, 8 April 2021 (UTC)
 * I like the idea, and I'm going to focus on possible flaws because I do think it is worth pursuing further. If flagged edits went to the community to discuss then people would turn up to this ANI-on-steroids with popcorn looking for drama. But I also wouldn't trust admins to monitor the system as many of the tone issues we have come from admins setting bad examples and dismissing concerns too freely (though some because they somehow ended up in almost sole charge of an extremely large and fast-paced area and already need 10 times more hours in the day to do the job justice). I'd only trust groups like checkusers, OTRS volunteers or arbs (... and definitely not the WMF), but this is out of scope for them and would be a very large additional burden which I'm not sure those small groups would want to or be able to manage. So I guess we have to get creative about how such a report system could be monitored. — Bilorv ( talk ) 22:07, 9 April 2021 (UTC)
 * (Moved discussion of valereee's comment, as suggested)


 * Rather than flagging posts as harassment, we should be removing them. Removal of harassment is an ethical obligation for all editors, not just a small group of enforcers. Vexations (talk) 01:02, 12 April 2021 (UTC)
 * I respectfully disagree with the suggestion of removing comments, unless they are extremely egregious. Reasons: (1) Incivility and personal attacks should be called out, but not erased, to serve as examples of what isn’t acceptable and why. (2) One person's harassment is another's spirited discussion. Keeping comments in place allows community members to discuss whether they are acceptable, and develop a picture of the wide range of community attitudes. Whitewashing talk pages stifles discussion. (3) It could drive criticism away from talk pages and into edit summaries. (4) We already see activists removing views that they oppose politically because "hate speech". Encouraging talk page redaction gives motivated groups another avenue to control and manipulate the discussion, on the basis of "I don’t like what you’re saying therefore it must be harassing me".  Pelagic ( messages ) – (07:47 Tue 13, AEDT) 20:47, 12 April 2021 (UTC)
 * You do not find harassment agreeable, but you let it exist. That's tolerating harassment. Vexations (talk) 10:41, 13 April 2021 (UTC)
 * Harassment is a pattern of targeted behaviour, Vexations. How would we remove, say, civil hounding?
 * As for the spectrum along single instances of rudeness – incivility – swearing – personal attacks – verbal abuse – outing – threats (it’s not a straight line but you get the idea), people will differ as to where they draw the line and how ... intolerant their reaction is. Should individual editors with a low tolerance for that sort of thing go forth on a crusade to bowdlerise discussions that others deem inoffensive?
 * Also, I would argue that once the target has seen the content, then the abuse / harassment / negative effect already exists, and sweeping it under the rug isn't always helpful. It might be, but that depends on context.
 * For example, if someone was to blow their cool and call you a rude name, would you feel better having that silently deleted, or having them come back, strike the comment, and apologise for their intemperance?
 * — Pelagic ( messages ) – (23:08 Tue 13, AEDT) 12:08, 13 April 2021 (UTC)
 * If someone made a personal attack against me, I would want the comment removed, by anyone, as soon as possible. That removal signals that we do not tolerate personal attacks. If all we did was tag the personal attack as a violation of our behavioral standards, but let it remain, then we accept that personal attacks may remain. The fact that I have seen it already does not at all mean that the harassment then stops. The personal attack continues to be visible to other editors. Our No personal attacks policy already is quite clear about that: "Derogatory comments about other editors may be removed by any editor." I would like to see that expanded to cover more forms of harassment than personal attacks. Removing awful content is not "whitewashing" how truly awful some of the discussions on Wikipedia are, it's making them less awful. Vexations (talk) 12:32, 13 April 2021 (UTC)
 * Thinking more about this, there's removal and there’s removal. For example:
 * Option 1: [nothing, thread deleted]
 * Option 2: "PersonA, you're a -$&$&3=# and you should go &$#@*%" (PersonB), followed by "PersonB, that was out of line, you should apologise." (PersonC)
 * Option 3: "PersonA, you [redacted by PersonC]" (PersonB), followed by "PersonB, that was out of line, you should apologise." (PersonC)
 * — Pelagic ( messages ) – (23:21 Tue 13, AEDT) 12:21, 13 April 2021 (UTC)
 * I wish more people felt empowered to speak up against abuse. Option 3 sounds good to me. We shouldn't need to rely on a group of enforcers to apply a set of flawed, arbitrary rules, we should act in accordance with our (collective) ethical norms and values. I have too often wanted to say to a respected elder that I felt their comments were out of line, but refrained from doing so because of the bystander effect or because I didn't want to get into trouble myself. Vexations (talk) 12:42, 13 April 2021 (UTC)
 * Pelagic, you say: would you feel better having that silently deleted, or having them come back, strike the comment, and apologise for their intemperance? But I see a lot of personal attacks (only a tiny minority directed at me) and very few that people strike and exceedingly few that people apologise for, including when someone correctly points out "this is a personal attack". So I think the question to ask is "would you feel better having that deleted or having it remain?" I'd rather it be replaced with [redacted] so that people can see something was written but don't read what unless they're going looking for it. It's interesting because the comments you make all apply exactly as written to WP:BLP enforcement, and this we do see as non-negotiable (at least, when it comes to article subjects—sometimes BLP overlaps with anti-harassment in interesting ways). So clearly lines can be established and these patterns of striking comments could become accepted by the community. It's just about whether it helps de-escalate conflict, or will only provide more fuel. A few months ago a user correctly struck a comment I made that fell afoul of BLP, in an interaction I behaved improperly in throughout. I'm glad they did because the result is that I had to take the situation seriously and learned something about BLP that I might not have if they had just commented "I think you shouldn't have said that". Even though I was very hostile at the time. Though it doesn't excuse my actions, hopefully I can redeem myself by behaving better in future. — Bilorv ( talk ) 00:28, 14 April 2021 (UTC)
 * Wikipedia needs a clear reporting pathway for off-wiki harassment. It can be difficult to report off-wiki harassment since the policy on the posting of personal information (WP:DOX) limits the amount of evidence that can be presented on-wiki, and editors may not necessarily be familiar with the Arbitration Committee, Trust & Safety, or individual administrators who are able to help. As a result, editors are less likely to report off-wiki harassment when they see it, especially when it targets other editors. A dedicated form for reporting off-wiki harassment (as well as other issues that need to be examined in private) would be more approachable than having to find the correct department to email. —  Newslinger  talk   15:10, 14 April 2021 (UTC)
 * An interactive flowchart, with buttons, would work well - "what has happened: options A, B, C, D", is it purely on-wiki "A, B" etc, leading to the right option. I would also suggest building in something that made it easier for non-experienced editors to add diffs - that will probably require a gadget created by the WMF, though there are certainly user scripts that would act as good examples. I do have concerns about the harassment flag. A few flaws: first, the ease of false positives would necessitate requiring a certain number of repeated triggers before action. The problem with that is that many new editors would expect the case to be handled once flagged, so severe cases wouldn't be seen; even in non-severe cases, many would show up and complain that no action had been taken. Nosebagbear (talk) 13:22, 17 April 2021 (UTC)
 * [One should not] presume that the only type of harassment is that from socks/LTAs. That may be the most voluminous, and it's certainly the type the community's established processes deal best with (block, ban, disable email and TPA, range blocks, edit filters, and protection where we can), but I'm not sure it's the most severe or difficult to deal with. It'd be nice if UCOC enforcement dealt with the problem of unblockables, and also with the problem of new editors subject to problems (esp offwiki) who are not familiar with norms and reporting mechanisms available to them (indeed, Contact us has no mention of mechanisms existing, such as ArbCom's contact info). ProcrastinatingReader (talk) 18:10, 6 April 2021 (UTC) (copied from )

What is the best way to ensure safe and proper handling of incidents involving i) vulnerable people; ii) serious harassment; or iii) threats of violence?

 * See above. Certain things shouldn't be posted publicly. MER-C 17:19, 6 April 2021 (UTC)
 * Serious harassment and threats of violence (as issues potentially requiring the involvement of law enforcement) can be dealt with by a central mechanism. Projects should be better assisted to deal with specific requirements of "vulnerable people". WJBscribe (talk) 10:23, 7 April 2021 (UTC)
 * I've never (to my memory) seen an explicit threat of real-life violence, so I think they are thankfully rare, but without talking to someone who has I wouldn't be able to tell you whether they're dealt with well. On the vulnerability topic, it's just too broad a question. Are we talking about children? People with autism? People with poor mental health? Each of these groups has different vulnerabilities that are not remotely alike (and very diverse even within the group). — Bilorv ( talk ) 23:22, 9 April 2021 (UTC)

In your experience, what are effective ways to respond to those who engage in behaviours that may be considered harassment?

 * I am not sure if I've correctly understood the phrase "may be considered" - does this mean that the behaviour is not objectively harassment, but may be perceived as such? That is a very tricky area, given the diverse backgrounds of our contributors, which can easily lead to good-faith mistaken perceptions of others. However, inappropriate allegations of harassment are, in and of themselves, a form of harassment. If the issue is that behaviour that is not actually harassment is being perceived as such, both parties may be in need of assistance - there ought to be no "first mover" advantage in who first makes a report of harassment. For example, an editor may feel harassed by an administrator who is legitimately taking issue with their contributions to the project. Say the administrator noticed that they had uploaded a non-free image with incorrect tagging. The administrator, following an exchange about this with the contributor, realises that the contributor does not understand the licensing requirements, reviews the contributor's other uploads and tags a number of further images for deletion. The contributor perceives this as unwarranted personal attention and/or "stalking" of their contributions. It is important in this sort of context that the labels applied by the contributor are not unquestioningly accepted - that time is taken to explain the situation carefully to the contributor, and support is given to the administrator who is likely to feel stressed as a result of the allegations being made about them. WJBscribe (talk) 10:31, 7 April 2021 (UTC)
 * Reverting an experienced editor twice and making them start the talk page discussion, curt edit summaries, following someone around because of "low quality" contributions—the difficulty is that these actions are often simply necessities to maintain the quality of the encyclopedia, as WJBscribe raises, but they are also a staple of all genuine harassment. But the community will almost always treat it as the former, even in cases where it's obviously not. Someone once reverted over one hundred edits I made (edits prompted by discussion with lots of time for people to weigh in and made slowly and carefully with pauses to listen to feedback as I went) over the course of an hour while I was desperately trying to engage them in discussion in four different ways, and saying that I had real-life commitments and would be happy to discuss it with them if they just paused the reverting until we had established what the situation was. That was serious harassment to me, but to everybody else who observed the situation it was a "content dispute". Not so. I've spent many hours trying to see it from that person's point of view, as the experience deeply affected me, but if they'd spent five minutes trying to see it from mine they would not have done what they did. Especially if they'd known the way in which I was, to use the language of the question above, in the category of "vulnerable people" at the time. It doesn't matter how many of the pings I get now are friendly (currently at maybe 90%)—I always feel a sense of dread when I see the number below the bell. All of these things count five times over in sensitive subject areas. If you think about it, it is expected and even desirable that some of the editors most passionate about sharing and collating free information about rape will be rape survivors or have related experiences—but all the content disputes I've seen in this topic area have someone being at best flagrantly irresponsible in their tone and choice of wording. 
We have a dynamic similar to the #NotAllMen/#YesAllWomen talking points of the #MeToo movement, where it just takes one person out of twenty being highly aggressive to create an environment where everyone with trauma, related grief or mental health difficulties will not be able to stay in that topic area for health reasons, and so we lose anyone acting in good faith or who is here first and foremost to share free information, rather than to argue and POV push. If the content area is the Rwandan genocide, ask "would I be happy repeating this comment out loud to someone who lost a parent in the genocide? Am I proud to be making this comment if the other editor is in such a position?" Apply this to whatever heated topics you're looking at. The effective ways to combat these bad behaviours are by communally setting good examples and making someone the odd one out if they are starting drama or being hostile; by going out of your way to tell someone "thank you for making this comment" or expressing gratitude to someone acting in good faith; and by disengaging as rapidly as possible with someone who's in it for the argument. Warnings do not work when someone has already established "I can get away with acting like this". Admins having zero tolerance does not work in an environment like that (they don't have the power to deal with the fallout of such actions). WMF intervention does not have a chance of working. — Bilorv ( talk ) 23:22, 9 April 2021 (UTC)
 * WJB's points are excellent in this area, and I echo them heavily. If it's instead meaning "activity that might be harassment, but may not be", with both viewpoints potentially being fair, then having someone talk to both parties (individually) can be worthwhile. Something that has come up in discussions with WMF personnel (not just T&S), as well as with some editors, is disagreement on where the civility line gets drawn, which underpins where the harassment line gets drawn. For example, in phase 1, some staff I talked to wanted to draw the line much more akin to an office setting. I would firmly disagree with that. People will draw the line differently, but the only line that matters is where WP:CIVIL draws it (at least in terms of an actual rules breach). Harassment is made up of two factors: the nature of individual actions and the pattern of those actions. This attempts to look at the first. The editor verging on the edge can be helped by having someone they respect point out that not everyone has the same thick skin. The person on the other side should also have it raised that what they may view as harassment, others view as frank discussion. Nosebagbear (talk) 13:37, 17 April 2021 (UTC)

At the moment, the community deals with vandals by RBI. The draft text of this universal code of conduct, at section 3.3, requires us to engage with them: it clearly and specifically rules out our current process of reverting vandals' edits and denying them the oxygen of attention. Where is the correct place to discuss fixes to the draft UCoC text?—S Marshall T/C 23:39, 6 April 2021 (UTC)
 * @User:S Marshall: The Code has been ratified by the Board and is no longer a draft. "The Foundation’s Legal Department will host a review of the UCoC one year after the completed version of it is accepted by the Board." (FAQ#Periodic reviews) "If you see more cultural gaps in the draft, kindly bring that to our attention on the main talk page of the Universal Code of Conduct, and these issues may be included in the first or subsequent annual reviews." [emphasis added] (FAQ#Conflict with local policies)
 * Then the Board must re-think. The first bullet point of section 3.3 rules out "The repeated arbitrary or unmotivated removal of any content without appropriate discussion or providing explanation".  On review in context, the wording probably does allow us to deal with obvious vandals via RBI, but it denies us RBI with LTA cases, POV warriors, and most areas that are of interest to Arbcom.—S Marshall T/C 14:40, 8 April 2021 (UTC)
 * Thank you for your comments, I've copied them here. Keep in mind the code is meant to remain subject to appropriate context. Also, in those cases there usually is a strong motivation for the removal. (I understand the word 'unmotivated' has raised some confusion) Xeno (WMF) (talk) 02:35, 3 May 2021 (UTC)

In what ways should reporting pathways provide for mediation, reform, or guidance about acceptable behaviours?

 * In the local context, consider the marking of WP:MEDCOM as historical (discussed here) (2003 - 2018~) and WP:MEDCAB (2005 - 2012) (which didn't in fact exist; but certainly not after this discussion). Apart from the spiritual successor, WP:DRN (2011 - ), I noticed Village pump (idea lab)/Archive 29 initiated by [with ]. What types of approaches and structures can work here? What would help volunteers to be successful in performing the mediation work crucial to the collaborative process? Xeno (WMF) (talk) 16:35, 25 April 2021 (UTC)
 * I'm not quite sure how relevant this is. Mediation as a means of settling thorny content disputes is something I'd be interested in having, but I am not sure how best to balance a formal mediation process (given we have lots of informal ones already) against avoiding all the rules that killed off the old form. However, in the context of the UCOC, it shouldn't be trying to push communities in how they consider content disputes. Mediation in the context of conduct disputes is a very different kettle of fish. It's not a terrible idea, but needs to be considered as a distinct area. Its nature would be different to content mediation. Nosebagbear (talk) 17:07, 25 April 2021 (UTC)
 * "Guidance about acceptable behaviours" should be provided in any sanction (that is, a warning upwards), and is also a reason why editors need to know all the information in cases where they're accused of misconduct. I've had a look at even the improved T&S conduct warnings. They are abysmal. Literally, I could have received any of them and I'd struggle to figure out what field of activity triggered it, let alone a) where the line was being drawn b) what exact behaviour needed to improve in what specific ways to avoid escalation. We cannot allow rules that would not allow very high-quality, detailed, guidance to be provided, with full ability for clarity to be requested by the recipient. Nosebagbear (talk) 17:07, 25 April 2021 (UTC)
 * As User:Nosebagbear notes, DRN mediates content disputes, as did MEDCOM and MEDCAB. There have been suggestions for procedures to mediate content disputes, but they have fizzled out.  User:Xeno (WMF) - Is this question about conduct, or about content?  It appears to be about conduct, in which case I don't think that ENWP has had any real reporting pathways, but I am not sure.  Robert McClenon (talk) 22:03, 25 April 2021 (UTC)
 * Here we are mostly querying how and whether mediation can play a role in settling conduct issues between users; some of the conduct reporting pathways are discussed at File:Enwiki Reporting system summary.pdf. Xeno (WMF) (talk) 23:05, 25 April 2021 (UTC)
 * User:Xeno (WMF) - I see that I and others were pinged because we took part in a discussion two years ago on mediation of complex disputes. I am not entirely sure why we are using that discussion as a basis for any follow-up, because it was inconclusive.  Basically User:Steve Crossin suggested that we discuss mediation of complex disputes.  Some of us discussed mediation of complex disputes, and the result was that we discussed it and didn't decide anything.  The discussion was slightly useful in that it sort of determined that there were no obvious answers that we had been missing.
 * Both that discussion and the flowchart divide disputes into two groups along two different axes, resulting in four possible combinations of dispute type and forum type: content or conduct, public or private. That can be shrunk to three combinations, because there is no reason for content disputes to be private. Actually, conduct proceedings, on closer view, have three varieties: purely public (ANI, AE), purely private (behind the scenes), and public with in camera aspects (ArbCom). The area that is controversial in ENWP is of course private handling of conduct disputes, where many editors have long bitter memories about the Fram case, and some of us would rather not trust the handling of harassment cases to folks whose handling of the Fram case amounted to harassment by the Court of Star Chamber.
 * I wish I could offer a neutral conclusion, but my pessimistic conclusion is that any procedure under a UCOC will be distrusted in the wake of Fram. Robert McClenon (talk) 02:46, 26 April 2021 (UTC)
 * To summarize, I see that point 3.2 of the Universal Code of Conduct states that abuse of power will not be tolerated. That is an excellent principle, except that in trying to enforce the rule against harassment, the WMF committed an abuse of power.  Robert McClenon (talk) 02:52, 26 April 2021 (UTC)
 * Reading those discussions linked above, my feeling is that the idea of MedCom was as some 'venue of last resort' (the discussions mention attempts at DRN / exhausting DR). This doesn't seem to make sense, given MedCom was also non-binding, which gives it little differentiation from DRN. If another non-binding process is going to exist, one way to differentiate it from other non-binding processes may be to have it staffed by professional mediators; then a prior DR requirement and picking-and-choosing cases would be more reasonable. DR seems to work best when parties agree on an end goal but disagree on some of the details of getting there, and are looking to reach that goal and so are willing to compromise, discuss and rethink. These conditions don't apply to some content disputes, especially if one party is inflexible and the current revision is their preferred version (stonewalling can be an effective tactic on here, especially on niche disputes, thus some employ it). In such cases I've seen (eventually) a party either get annoyed and say something they shouldn't and thus get sanctioned, or get exhausted and withdraw. I'm not sure resolution via exhaustion/frustration is a fair outcome, or good for content. Wikipedia needs some way of resolving such content disputes without waiting for them to turn into conduct ones. Binding mediation could be a solution, but most critically the pitfall seems to be how that'd interact with Consensus (and things like WP:CCC); the two don't appear to be compatible. ProcrastinatingReader (talk) 17:41, 26 April 2021 (UTC)
 * One occasionally-used method that balances having a semi-binding decision and the concept that consensus can change is a designated respite period. I wrote up my own version of a revisit respite on my content dispute resolution toolbox page: the basic idea is to have a respite period from revisiting the decision, unless the discussion closer agrees that a significant new consideration has been introduced. I feel that having a semi-binding conclusion provides incentive for parties to work towards a compromise solution, as well as alleviating discussion fatigue. isaacl (talk) 05:48, 5 May 2021 (UTC)
 * Noticed the ping, which has awakened me from my semi-retirement/idleness/lack of time in life to be here very much. That post I did back some time ago was more to discuss the idea of a way to handle content dispute resolution, where other DR processes failed/were lacking. I'm not a huge fan of RFC personally, and MedCab was closed a while ago because it was considered superfluous to both DRN and MedCom, but then MedCom was also closed due to its lack of use/bureaucracy. Even so many years on, dispute resolution is probably the only thing I ever felt good at here, but the processes we have now are a little lacking for my liking. I pushed to get WQA closed back in 2011-12 as it was, in my opinion, more harm than good, so I don't really have an answer for what else could be an effective replacement. I'm honestly in favor of something else in addition to DRN, perhaps something like MedCom with a little bit of stick that's overseen by the community rather than a private committee, but I won't pretend I have all the answers here in any regard. Steven   Crossin  Help resolve disputes! 05:45, 3 May 2021 (UTC)

Making reporting easier will likely increase the number of reports: in what ways can the management of reports be improved?

 * Reports should be visible to all in an anonymized form. It should be clear to all that anonymizedaccountX filed 20 reports on fifteen different editors and anonymizedaccountY has had five reports filed about them, but the diffs and the usernames should be visible to only those with certain rights. If a report is deemed to warrant an investigation, that should be visible to all, as well as some general statement of any outcome. —valereee (talk) 14:46, 6 April 2021 (UTC)
 * A committee, similar to the Ombuds Commission, tasked with receiving and triaging reports of abuse. These can be forwarded to groups on the local wiki or to T&S depending on the severity of the particular incident and ability for the local project to effectively handle it. — Wug·a·po·des​ 00:41, 9 April 2021 (UTC)
 * I have to disagree with the proposals for an OTRS/UTRS-like group that will handle these. It is a major workload, and will also lead individuals to think that any case that makes it to public consideration is a "no smoke without fire" incident. Would the process also be allowed to issue boomerang blocks? Nosebagbear (talk) 13:41, 17 April 2021 (UTC)

What type of additional resources would be helpful in identifying and addressing harassment and other conduct issues?
I sometimes wonder if what I'm about to say is out of scope for what the WMF is thinking, but I think it's relevant so I'll say it anyway. It addresses several of the questions posed, and none at the same time. I think it might be the elephant in the room.

I deal with an enormous amount of harassment - to me, other users, article subjects, as well as others - death threats, graphic threats of violence, threats to family members, persistent libel, doxxing, pestering, racial, sexual, you name it. The next steps are usually relatively straightforward and swiftly done in my experience - block, ban, disable email and TPA, range blocks, edit filters, and protection where we can (other lesser methods are available). In some cases we'll see a WMF ban get put in place. It just continues however, and it's usually from a relatively small group of the same people. The way I see it, a WMF global ban is not even an end goal, but usually just the start. We don't need guidelines of unacceptable behaviour to stop harassment, that is easy; we need the WMF to act in the real world, to work with ISPs, legal, PR, tech, the ordinary admins who witness it, and really anyone else they need to, in order to get the crazies effectively legally and technically kicked off the site. -- zzuuzz (talk) 05:19, 6 April 2021 (UTC)
 * Excellent comment that definitely reveals the elephant in the room. The UCoC might be a redundant feel-good exercise when what is needed is real-world action regarding LTAs. Johnuniq (talk) 05:30, 6 April 2021 (UTC)
 * Agreed. What has the WMF done to escalate matters when WMF bans don't work? If nothing, the UCOC is at best social washing. MER-C 17:44, 6 April 2021 (UTC)
 * Zzuuzz put it much better than I ever could. The only thing this will do is bother and constrain the editors who are following the rules or who are minor nuisances. For the biggest problem editors, real-world action needs to be taken, and since WP:Abuse response - our previous effort at trying to handle this matter locally - was completely ineffectual without WMF Legal teeth, this absolutely must be handled by the WMF in a more offensive-oriented manner. Playing defence doesn't work when the enemy can just assault the fortress without any meaningful repercussions. —A little blue Bori  v^_^v  Takes a strong man to deny... 18:03, 6 April 2021 (UTC)
 * (copied from )

I agree entirely with zzuuzz's comments above - in my opinion the English Wikipedia handles most cases of harassment as well as it can, by blocking offenders and the tools they use (e.g. open proxies, VPN endpoints, etc.), and requesting global locks if required in cases of cross-wiki abuse. However, this is ultimately a game of whack-a-mole. We have multiple LTAs that get hold of new proxies of various types incredibly easily and start up their lunacy once again. We need concerted action from the WMF in the following areas: (a) a system to proactively globally block open proxies & VPN endpoints, (b) a framework to request "Office contact" with ISPs whose subscribers commit serious, on-going, intractable abuse on Wikimedia projects, and most importantly (c) a formal way for admins, stewards, and functionaries on the various projects to work with the WMF to address the issues of long-term, serious abuse. Without these, the UCoC is going to achieve very, very little I fear. ƒirefly ( t · c ) 14:32, 6 April 2021 (UTC)
 * I feel it worth clarifying that I don't oppose the UCoC at all, I'm just skeptical it will actually achieve very much. ƒirefly  ( t · c ) 14:47, 6 April 2021 (UTC)
 * (copied from )

Are there human, technical, training, or knowledge-based resources the Foundation could provide to assist volunteers in this area?

 * I was pointed to a community discussion, Special:PermanentLink/1016202536 wherein a strong desire was expressed for resources to be deployed to improve the ability to reach mobile editors. There are a number of phabricator reports at the linked thread. is tracking these issues. Xeno (WMF) (talk) 01:20, 6 April 2021 (UTC)
 * This critical bug in particular must have cost thousands of hours of volunteer time, and the WMF's failure to fix it after well over a year is indefensible. Lackadaisical attitudes like this deprive us of sources for the next generation of Wikipedia editors, particularly as there are many countries in which a smartphone is the only way the majority of the population can access the internet. It needs to be fixed, today. — Bilorv ( talk ) 23:37, 9 April 2021 (UTC)

How should incidents be dealt with that take place beyond the Wikimedia projects but are related to them, such as in-person or online meetings?
A general concern – it isn't made clear whether it applies to non-Wikimedia actions. For example, suppose someone has a Twitter or personal blog or website, and they make a post which has nothing to do with any Wikimedia project. Could such a post be punished under this code of conduct? Or should actions/statements/etc which occur outside of any Wikimedia project or event, and which aren't making any reference to any Wikimedia event, be excluded? I think statements and actions which occur outside of the context of any Wikimedia project or event, and which don't make any reference to any Wikimedia project or event, should be out of scope for any "Code of Conduct". Mr248 (talk) 00:31, 6 April 2021 (UTC) portion copied from 
 * I feel going "outside the scope" of Wikipedia is digging a dry well. It could be a very nice well but if there is no water it is a waste of time. Editors, Admins, nor the WMF staff are world police and Wikipedia has enough to be concerned with. Otr500 (talk) 21:17, 10 April 2021 (UTC)
 * While I don't think the UCOC is particularly interested in cases outside of Wikimedia where no mention is made of on-wiki stuff, that's a fairly niche area. I am a tad concerned by the idea of the WMF taking an expansive position on authority to try and enact judgements based on disputes in wiki-adjacent but not wiki-owned areas. For example, Discord. Unlike IRC, which has some enforcement links, we specifically are not under any form of Wikipedia control. Were we to refuse to co-operate with, say, T&S, in our Discord role, would that be viewed as behaviour they can take on-wiki action over? Nosebagbear (talk) 16:59, 25 April 2021 (UTC)

In what ways should reports be handled to increase confidence from affected users that issues will be addressed responsibly and appropriately?

 * Reports should be visible to all in an anonymized form. It should be clear to all that anonymizedaccountX filed 20 reports on fifteen different editors and anonymizedaccountY has had five reports filed about them, but the diffs and the usernames should be visible to only those with certain rights. If a report is deemed to warrant an investigation, that should be visible to all, as well as some general statement of any outcome. —valereee (talk) 14:46, 6 April 2021 (UTC)
 * +1 — Wug·a·po·des​ 00:44, 9 April 2021 (UTC)
 * Ensure private and truly independent evaluation by a professional body with clear rules and processes. Also allow for an appeal to ensure fairness and uniformity of judgement. -- {{u|Gtoffoletto}} talk 12:16, 4 May 2021 (UTC)

What appeal process should be in place if participants want to request a review of actions taken or not taken?

 * The appeal process should be limited to those directly affected by an enforcement action. Such appeals should not be entitled to review all evidence against them in cases where there is a privacy issue. Importantly, unblock actions should require justification and a consensus of admins. Currently, unblocks require no justification, but reblocks are forbidden as wheel warring. The effect of this particular ruleset is that we end up with unblockable harassers. Kaldari (talk) 01:35, 8 June 2021 (UTC)

When private reporting options are used, how should the duty to protect the privacy and sense of safety of reporters be balanced with the need for transparency and accountability?

 * You can't. If incidents such as i) vulnerable people; ii) serious harassment; or iii) threats of violence are taking place, absolute privacy and safety should be guaranteed to reporters. IMO all you can do is make sure those investigating these incidents are competent, diligent and empathetic people and ideally put in place some sort of clear review mechanism - perhaps some kind of committee composed of community members and trained WMF staffers - so there's a sense of accountability. Making some kind of global WMF committee seems difficult though (just look at the Ombuds), and this mechanism only works if you can recruit competent and active community members to volunteer their time to do smoke-filled work. ProcrastinatingReader (talk) 14:24, 6 April 2021 (UTC)
 * +1 — Wug·a·po·des​ 00:44, 9 April 2021 (UTC)
 * +1 reports should be private. An independent committee composed of paid WMF staffers with some community oversight (certain members chosen from admins?) might be the best way to guarantee privacy with accountability and community control. -- {{u|Gtoffoletto}} talk 11:47, 4 May 2021 (UTC)
 * Transparency should focus on the conduct being sanctioned rather than the reporter. If a case has merit, it matters little who reported the incident. What is important with transparency is that the person being sanctioned, and the community, can understand what was the act (harassment, etc.) that led to the sanction. Disclosure of the act involved does not require disclosure of the reporter. feminist (talk) 02:07, 9 April 2021 (UTC)
 * Agree with  on protecting those who submit reports when privacy is requested. If there are no safeguards in that respect, the whole plan will crash. Also agree with  There needs to be transparency. Private courts are not conducive to any remedy where there is some form of justice. Also, it should be remembered that there are usually two sides to every coin. If there are "egregious" actions and privacy concerns then that should be dealt with accordingly. "Assuming" there are privacy issues where that is not the case and/or holding closed-door courts sets the stage for a possible kangaroo court scenario. Otr500 (talk) 21:44, 10 April 2021 (UTC)


 * If we don't provide privacy to reporters, we won't get reports. It's that simple. Look at what happened the last time an admin was desysopped (or the time before that, or the one before that, etc.). The backlash is insane. The only people who would put up with the backlash necessary to report wrongdoing by power users are people pushed so far that they no longer care what happens to their wiki reputations as a result of the report. "No longer give a fuck about my wiki career" is not the threshold we should set for reporting problems. But that's the threshold we have so long as we (1) cling to the cult of "transparency" requiring all accusers to be public and (2) allow open retaliation against those making reports. This is fundamentally about control: some Wikipedians want to maintain control over Wikipedia by ensuring all problems are handled transparently on wiki, and they want this control because they want to be able to protect themselves and their friends from what they perceive as "unfairness" (but what I would call "criticism"). We need a confidential tip line, we need to staff that tip line with people whose competence and integrity we trust. Best of all would be if the response team (whoever they are) didn't wait for a report. On-wiki harassment is public: we all see it, we don't need the target to report it; anyone could report it or just take action on it (any editor can issue a warning, for example). So one way to get around the reporting problem is for people to proactively take action when they see harassment. Also, a "flag post" function like val suggests would go a long way to providing an easy way to report harassment and other problems without the downsides of a public on wiki ANI thread or arbcom case request. Levivich harass/hound 15:12, 15 April 2021 (UTC)
 * I agree. The same private reporting methods with independent judgement should apply to all users (admins too). -- {{u|Gtoffoletto}} talk 11:47, 4 May 2021 (UTC)
 * Transparency is critical; it is not a "cult". Demanding transparency does not imply resistance to criticism, and any step that risks unfairness should be fought against firmly. The accused needs to know the evidence and, quite possibly, the identity of the accuser if it is not inherently clear, in order to ensure that adequate defences can be made - otherwise how are cases to be handled when information is not readily accessible in on-wiki interactions? Insufficient transparency makes it hard to know whether that is the case - yes, what is key is the case itself, but not knowing the accuser can interact with that. There are exceptions, of course, but these should be limited to when there would appear to be an appreciable threat of off-wiki follow-up occurring. I'm not sure how the "vulnerable person" category really applies here - we very rarely have confirmed knowledge of someone belonging to a group we'd accept as vulnerable. We also generally disagree with certain groups on Wikipedia having fewer safeguards than others, and this feels like it would wander into that. Private courts, as well as being inherently problematic, would also face the same issue that admins who work heavily in AE generally become more severe over time than the general community. Nosebagbear (talk) 13:56, 17 April 2021 (UTC)
 * Privacy and safety trump transparency; otherwise the system will never be effective. We can, however, be transparent about certain aspects of the system, like who is responsible for managing complaints and taking enforcement actions and how those people are elected/appointed/trained/whatever, as well as the rules they are operating under. In the end, however, we have to have some trust in those people and give them enough breathing room to do their job effectively. Kaldari (talk) 01:43, 8 June 2021 (UTC)

What privacy controls should be in place for data tracking in a reporting system used to log cross-wiki or persistent abusers?

 * Other questions: how can it integrate into the CUwiki which already does a lot of this - but not reveal stuff like IP addresses to non-CUs? How can we stay away from WP:BEANS? --Rschen7754 00:15, 1 May 2021 (UTC)

How should issues be handled where there is no adequate local body to respond, for example, allegations against administrators or functionary groups?

 * So "functionary groups" only applies to: oversighters, checkusers, 'crats, arbs, in general usage. Obviously the last one wouldn't apply, and there's limited scope for rogue 'crats. So it's OS and CU that's most relevant. We already have a global body that can handle cases like those, it's just traditionally backlogged and very slow to react - they have a good Q1 burst and then fade off in the rest of the year. If the Ombuds can keep their house in order, we don't need any change to handle them. Nosebagbear (talk) 14:04, 17 April 2021 (UTC)
 * An arbcom is only needed to handle allegations against admins if the local Community as a whole is not capable of taking action against their admins. If en-wiki didn't have an ARBCOM, then we'd obviously require (not just seek) a means of community desysops. If it isn't already required, any community without a local body must have a formal means of desysopping. The bigger issue is local communities where admins are basically unblockables. That is not going to be the same as "all local communities without a local arbcom". So we can't just say "create a body and be done with it". Step 1 is going to have to be "how does the meta-Community identify when there isn't, in fact, an adequate local body (when the local body disagrees)". Step 2 will probably need to be the creation of a meta-ARBCOM for communities without a local one. Cases would only be permissible for it if a) the local community sent it on (or just generally accepted its authority) or b) the meta-community had done a step 1 review and classified it as such. Such a consensus would need to be ultra-clear. I think most small communities with a couple of admins would buy in, so long as there was confidence in where the arbs were coming from. Nosebagbear (talk) 14:04, 17 April 2021 (UTC)
 * If the WMF can help resolve the situation through what already exists and is generally accepted by the community (examples: global ban, privacy policy violations) then it should feel free to act. For example, if it had determined that Vodomar was abusing CU earlier on, I think we could have cut off a few years of the hrwiki saga. But I share the hesitations about a global ArbCom which have already been explained below. --Rschen7754 18:26, 26 April 2021 (UTC)

In the past, the global movement has identified projects that are struggling with neutrality and conduct issues: in what ways could a safe and fair review be conducted in these situations?

 * m:User:Rschen7754/Help, my wiki went rogue! summarizes these situations fairly well. The problem is that stewards have been very reluctant to take action without a very strong consensus on Meta. On one hand, I can sympathize since they are not a global ArbCom. On the other, then nobody is tasked with the problem and it is left to continue further. As far as Croatian Wikipedia, the situation was left to deteriorate from 2013-2021, when it was discovered that a local CU had violated the privacy of editors. I don't know what the solution is - but we need to do better. --Rschen7754 18:51, 11 April 2021 (UTC)
 * What does “the global movement” mean? If WMF are equating the projects with some kind of social movement (beyond sharing free encyclopaedic content), that potentially raises neutrality issues of its own. Or is it just a synonym for WMF, in which case I would like clarification as to whether this question is referring to smaller projects with a lack of diverse membership or projects such as this one? Does WMF regard enwiki as a project that is “struggling with neutrality and conduct”, for example? WJBscribe (talk) 18:08, 12 April 2021 (UTC)
 * I've used it here as a term of art reflecting that Wikimedia projects are complex multi-threaded systems with actors of varying degrees of involvement and responsibility. Consider scenarios like those in Rschen7754's essay: complaints like this go to many places, and pathways to resolution can take many forms. Using the term "movement" is meant to allow prospective answers to include what volunteers can do, what the Foundation can do, whether any regional or global bodies should have a role, what processes should be in place to allow a review to move forward. What would work best? What clearly hasn't worked? Xeno (WMF) (talk) 20:14, 12 April 2021 (UTC)
 * The issue here is more the execution than the review side of things. Something as severe as (de facto) censuring an entire local project should not be done by any delegated conduct group. It must remain a meta-discussion. What does need improvement is some policy covering: a standard form, who must do closes and under what timeline, what actions could be permissible, and who is going to execute them. Stewards are, not unreasonably, against closing without clearcut policy, though why they failed to communicate as much on the hrwiki issues remains unclear to me. Anyway, that's my broad proposal - keep the "is there an issue with a whole community" question on the meta-community level, but work on ways to formalise that to ensure actual action can be taken. Nosebagbear (talk) 14:11, 17 April 2021 (UTC)

How would a global dispute resolution body work with your community?
Any "global dispute resolution body" will likely do more harm than good if it tries to interact with the English Wikipedia. Enwiki internal governance isn't perfect, but the memory of WP:FRAM is still fresh in the minds of too many editors, and WMF's interaction with the enwiki community in that fiasco was, put simply, atrocious. feminist (talk) 16:45, 8 April 2021 (UTC)

You already have an answer to this question, and it can be summarised as "Framgate". OFFICE-invoked one-year ban from en-wiki only for harassment, OFFICE not taking any action against Fram on other WMF wikis when he gave his side of events (thereby royally damaging the WMF's arguments), stonewalling from the WMF even on matters that could (and should) have been disclosed without revealing the identity of anyone who was harassed, evidence that (once the Arbitration Committee finally got to see an expurgated version of it) was deemed too flimsy to justify the action taken, and an RfC on partial blocks that turned instead into a referendum on WMF's interference with a community's self governance. Those who will not learn from the past are condemned to repeat it. —A little blue Bori  v^_^v  Jéské Couriano 23:33, 10 April 2021 (UTC)

I endorse the above comments completely. The only way I could see this working would be if the improper actions in scope were across several wikis and it was a global issue. --Rschen7754 18:48, 11 April 2021 (UTC)

It is hard to see how a global dispute resolution body can work with established larger projects such as enwiki, dewiki etc. Framgate and superprotect are cautionary lessons as to the fact that volunteer communities are not looking to be ruled from above. Such a body should limit itself to handling: (a) global permanent bans arising from the most severe misconduct and (b) potentially working with stewards and global sysops on smaller projects without established processes. WJBscribe (talk) 18:11, 12 April 2021 (UTC)


 * Indeed. There's already the global ban mechanism. Interaction with en-wiki/de-wiki etc by such a body will not end well. The only area I can think might fit in between the global ban method and the local ARBCOM method is a hyper-complicated cross-wiki case where no local project has yet acted. I suppose ARBPOL could have an amendment saying that in exceptional cross-wiki circumstances ARBCOM may accept a case request and then second it to meta-arbcom. Nosebagbear (talk) 14:07, 17 April 2021 (UTC)


 * Further to the idea about working with stewards and global sysops on smaller projects without established processes and hyper-complicated cross-wiki cases: what about a global volunteer-led body - a multi-project dispute resolution body allowing smaller communities to opt in to its scope, perhaps organized around major language groups? It could be formed through a combination of rotating secondment from projects with existing arbitration committees and general elections among the affected communities. Anyone have any thoughts on whether something like that would work (primarily asking writers to draw from their experience at smaller projects here)? Xeno (WMF) (talk) 00:19, 20 April 2021 (UTC) (Just a thought: if confidence in the fairness and efficacy of the Inter-ArbCom were strong, a local committee lacking quorum from recusals could refer to a stand-up committee including non-recused members of the local committee and such an IAC.)
 * As someone not possessing that small-project experience, I think that sounds like it would work nicely if all the facets you mentioned were present, but I'll watch with interest to see what those with the knowledge to make better-grounded comments think of it. Nosebagbear (talk) 00:41, 20 April 2021 (UTC)
 * I don't know. The concept of a global ArbCom or even a local ArbCom has always been controversial as not being consensus-based/community-driven; quite a few medium to large wikis have gotten rid of their own - some through dysfunction/fighting. Opting in as the above proposal has would be another challenge, with over 900 wikis existing - a wiki that really needs it like hrwiki might not opt in, and it would be hard to opt in a wiki in good faith when there are 0 admins and only spambots edit it. We struggle with getting the global bot policy implemented (see m:Bot policy/Implementation), and that's a lot less complicated than a global ArbCom would be. That all being said - a global ArbCom may be more palatable than the status quo of filing a Meta RFC and waiting half a decade only to have it closed for inactivity. --Rschen7754 04:47, 20 April 2021 (UTC)
 * Some more thoughts: I really hope that such an ArbCom would be composed of community-elected members rather than those chosen by WMF. For example, there have been numerous problems with the m:OC and I believe it has to do with the WMF choosing unqualified individuals (some of whom were not even admins on any wiki) - though it has gotten better in recent years. --Rschen7754 18:15, 4 May 2021 (UTC)


 * In my opinion some independent oversight is needed to ensure uniform and fair dispute resolution within the communities. Leaving behavioural issues entirely in the hands of the community of volunteer editors (with inherent conflicts of interest and little time, patience, and training) is not working very well and leads to tribalism, abuses of power, and time loss. Removing behavioural dispute resolution (not content) from the hands of the community should be regarded as a positive step towards fairer processes as long as some community oversight is guaranteed. Community control and participation should be maintained, for example by ensuring that the overseeing body is composed of both professional WMF staffers and elected community members. The actions of this body should also be reviewable and potentially overturnable by the community in case of appeal. A more sophisticated "judicial" branch should be established instead of the current barbaric court of public opinion. Fair and independent dispute resolution is a crucial value for any democratic community and, given the scope and ambition of Wikipedia, needs to be implemented urgently. This would not be seen as rule from above, as the foundation would handle the "dirty work" with oversight by the community. -- {{u|Gtoffoletto}} talk 12:06, 4 May 2021 (UTC)
 * Based on everything we've seen so far, it would absolutely be viewed as rule from above, and it would need universal visibility if people were actually to feel it could be viable; even then there'd be disagreements in how the WMF handled warnings and severity (as an example where that might differ, the WMF felt a one-year siteban appropriate for FRAM; ARBCOM, "only" a desysopping). There are many hundreds of conduct issues a day, a couple of dozen of which are complicated. Now scale that across all projects, including 290 languages the WMF don't speak. I think WMF presence at all would be a disaster, but I also can't see how it would be a feasible concept. Nosebagbear (talk) 23:29, 5 May 2021 (UTC)
 * This system should probably come into play only for those complicated cases. And if you create a hybrid system with community volunteers and WMF moderators, for example, it might work to ensure uniformity across projects and lower the burden on the community. This is a foundational issue for Wikipedia, I think, and the WMF has hundreds of millions in budget every year. Ensuring the editor community is fair and healthy should be a primary goal of the WMF. Even deploying an average of 1 community moderator per project to assist volunteers immediately (some projects probably don't even need 1 full-time, given their tiny size), at $50k each, would mean around 15 million a year in extra cost. Doable, but certainly a big, radical project. -- {{u|Gtoffoletto}} talk 11:19, 6 May 2021 (UTC)

= Additional discussion =

Questions
= General comments =
The following links may be useful for background: (copied from Universal Code of Conduct/Discussions) (Note by Jonesey95:) The links below are copied here for convenience so that applicable excerpts of those discussions can be inserted here without having to rehash those discussions. – Jonesey95 (talk) 00:23, 6 April 2021 (UTC)
 * English Wikipedia
 * June 2019 discussion, strong opposition to WMF-imposed UCoC
 * 2020 English Wikipedia Arbitration Committee Elections: Each candidate provided feedback on the UCoC in their answers to questions.
 * May 2020 English Wikipedia discussion held after the Board of Trustees community health statement
 * 2020 English Wikipedia Anti-harassment request for comment
 * on the relationship with T&S:
 * Editors strongly feel that en-wiki issues should be handled "in-house", and only matters that affect the real world (Q2, Q3) should be passed to T&S. A better/improved dialogue between ArbCom and the WMF is also desired, with the Foundation and T&S passing along en-wiki-specific information to ArbCom to handle.
 * There was a desire from some editors, expressed in this section as well in previous sections, for the WMF to hire/find/create resources and training for mediation and dispute resolution, which would hopefully mitigate some of the most prevalent civility/harassment issues present on Wikipedia.
 * September 2020 English Wikipedia discussion

Mr248's feedback
Sorry if I have put this in the wrong place I am confused about where it goes. If I have put it in the wrong place please move it. I don't have a problem with a "Code of Conduct" per se but I have some concerns about the text of this specific code of conduct:

People who identify with a certain sexual orientation or gender identity using distinct names or pronouns I have trouble remembering what pronoun to use for people and so often try to avoid using pronouns. I'm concerned that a policy might be interpreted as saying you have to use for people the pronouns they prefer, as opposed to choosing to avoid using pronouns entirely, and hence my action of avoiding using pronouns might violate the policy. Sometimes I also call people "they", by which I mean "I don't remember what pronoun to use for you so I am just using 'they' as a default". (I think it is quite standard English to use "they" as a default pronoun when you aren't sure what pronoun to use.) I am concerned some people might make a big issue of that ("they is not my pronoun!") which would be a distraction, and honestly would make me feel unwelcome.

Note: The Wikimedia movement does not endorse "race" and "ethnicity" as meaningful distinctions among people. Their inclusion here is to mark that they are prohibited in use against others as the basis for personal attacks I think that is problematic because some people identify with their race or ethnicity, and this could be read as saying officially that their choice of personal identification is invalid. For example, if a person of Italian descent identifies their ethnicity as "Italian" (or "Italian-American" or whatever), this seems to be saying their choice to consider that an important part of their own identity is invalid. Or similarly, if an African-American person identifies as "Black", this could be read as saying that their Black identity is not "meaningful", which they may well find offensive.

Hate speech in any form I am concerned that is too vague. Some people understand "hate speech" as meaning stuff like using slurs, negative stereotypes/generalisations, etc, and I don't have a problem with prohibiting that. But other people interpret it much more expansively–for example, if a person has conservative religious views on sexual morality, some people would interpret the mere expression of those views as "hate speech"–and I'm concerned about those more expansive definitions. Of course, if a person has such views, they shouldn't be using Wikipedia as a soapbox for expressing them, but they may nonetheless be revealed somehow.

A general concern – it doesn't make clear about whether it applies to non-Wikimedia actions. For example, suppose someone has a Twitter or personal blog or website, and they make a post which has nothing to do with any Wikimedia project. Could such a post be punished under this code of conduct? Or should actions/statements/etc which occur outside of any Wikimedia project or event, and which aren't making any reference to any Wikimedia event, be excluded? I think, statements and actions which occur outside of the context of any Wikimedia project or event, and which don't make any reference to any Wikimedia project or event, should be out of scope for any "Code of Conduct". Mr248 (talk) 00:31, 6 April 2021 (UTC)
 * Thank you for your comment. This is a fine place to leave it; would it be okay if I copied some relevant portions to the question buckets above? Your last paragraph, for example, would fit into
 * There are ongoing discussions about the actual policy text itself at meta:talk:Universal Code of Conduct and meta:talk:Universal Code of Conduct/Policy text. Xeno (WMF) (talk) 00:38, 6 April 2021 (UTC)
 * Thanks sure you can copy my comment (or parts thereof) wherever you wish. Mr248 (talk) 00:41, 6 April 2021 (UTC)

Comments by Johnuniq
From UCoC 3.1 – Harassment: "Harassment ... may include contacting workplaces or friends and family members in an effort to intimidate or embarrass." The "in an effort" clause makes the sentence pointless because a perpetrator can say their contacting an editor's workplace was in an effort to reach out and help the person develop (in fact, any such unsolicited contact should be forbidden). Harassment is defined as several items almost all of which would earn the perpetrator an immediate and permanent block at enwiki—no UCoC is needed. Does anyone in the WMF imagine that sexual harassment and threats etc. are tolerated? The problematic items are insults (how do I tell someone that their English is not adequate or that their edits show they don't understand the topic or Wikipedia's role?) and hounding (it's hard to know whether use of Special:Contributions is done to protect the encyclopedia or merely to upset/discourage a contributor—in fact, good editors have to upset and discourage ungood editors every day). Johnuniq (talk) 01:51, 6 April 2021 (UTC)

Comments from zzuuzz
I sometimes wonder if what I'm about to say is out of scope for what the WMF is thinking, but I think it's relevant so I'll say it anyway. It addresses several of the questions posed, and none at the same time. I think it might be the elephant in the room.

I deal with an enormous amount of harassment - directed at me, other users, article subjects, and others - death threats, graphic threats of violence, threats to family members, persistent libel, doxxing, pestering, racial, sexual, you name it. The next steps are usually relatively straightforward and swiftly done in my experience - block, ban, disable email and TPA, range blocks, edit filters, and protection where we can (other lesser methods are available). In some cases we'll see a WMF ban get put in place. It just continues however, and it's usually from a relatively small group of the same people. The way I see it, a WMF global ban is not even an end goal, but usually just the start. We don't need guidelines of unacceptable behaviour to stop harassment, that is easy; we need the WMF to act in the real world, to work with ISPs, legal, PR, tech, the ordinary admins who witness it, and really anyone else they need to, in order to get the crazies effectively legally and technically kicked off the site. -- zzuuzz (talk) 05:19, 6 April 2021 (UTC)
 * Excellent comment that definitely reveals the elephant in the room. The UCoC might be a redundant feel-good exercise when what is needed is real-world action regarding LTAs. Johnuniq (talk) 05:30, 6 April 2021 (UTC)
 * Agreed. What has the WMF done to escalate matters when WMF bans don't work? If nothing, the UCOC is at best social washing. MER-C 17:44, 6 April 2021 (UTC)
 * Zzuuzz put it much better than I ever could. The only thing this will do is bother and constrain the editors who are following the rules or who are minor nuisances. For the biggest problem editors, real-world action needs to be taken, and since WP:Abuse response - our previous effort at trying to handle this matter locally - was completely ineffectual without WMF Legal teeth, this absolutely must be handled by the WMF in a more offensive-oriented manner. Playing defence doesn't work when the enemy can just assault the fortress without any meaningful repercussions. —<i style="color: #1E90FF;">A little blue Bori</i>  v^_^v  Takes a strong man to deny... 18:03, 6 April 2021 (UTC)
 * (copied above to )


 * This, and the section above/below, seems to presume that the only type of harassment is that from socks/LTAs. That may be the most voluminous, and it's certainly the type the community's established processes deal best with (block, ban, disable email and TPA, range blocks, edit filters, and protection where we can), but I'm not sure it's the most severe or difficult to deal with. It'd be nice if UCOC enforcement dealt with the problem of unblockables, and also with the problem of new editors subject to problems (esp offwiki) who are not familiar with norms and reporting mechanisms available to them (indeed, Contact us has no mention of mechanisms existing, such as ArbCom's contact info). ProcrastinatingReader (talk) 18:10, 6 April 2021 (UTC)
 * (copied above comment to )

Comments by Firefly
I agree entirely with zzuuzz's comments above - in my opinion the English Wikipedia handles most cases of harassment as well as it can, by blocking offenders and the tools they use (e.g. open proxies, VPN endpoints, etc.), and requesting global locks if required in cases of cross-wiki abuse. However, this is ultimately a game of whack-a-mole. We have multiple LTAs that get hold of new proxies of various types incredibly easily and start up their lunacy once again. We need concerted action from the WMF in the following areas: (a) a system to proactively globally block open proxies & VPN endpoints, (b) a framework to request "Office contact" with ISPs whose subscribers commit serious, on-going, intractable abuse on Wikimedia projects, and most importantly (c) a formal way for admins, stewards, and functionaries on the various projects to work with the WMF to address the issues of long-term, serious abuse. Without these, the UCoC is going to achieve very, very little I fear. ƒirefly ( t · c ) 14:32, 6 April 2021 (UTC)
 * I feel it worth clarifying that I don't oppose the UCoC at all, I'm just skeptical it will actually achieve very much. ƒirefly  ( t · c ) 14:47, 6 April 2021 (UTC)
 * (copied to )

S Marshall
At the moment, the community deals with vandals by RBI. The draft text of this universal code of conduct, at section 3.3, requires us to engage with them: it clearly and specifically rules out our current process of reverting vandals' edits and denying them the oxygen of attention. Where is the correct place to discuss fixes to the draft UCoC text?—<b style="font-family: Verdana; color: Maroon;">S Marshall</b> T/C 23:39, 6 April 2021 (UTC)


 * @User:S Marshall: The Code has been ratified by the Board and is no longer a draft. "The Foundation’s Legal Department will host a review of the UCoC one year after the completed version of it is accepted by the Board." (FAQ#Periodic reviews) "If you see more cultural gaps in the draft, kindly bring that to our attention on the main talk page of the Universal Code of Conduct, and these issues may be included in the first or subsequent annual reviews." [emphasis added] (FAQ#Conflict with local policies)
 * General Question: is that first review to be one year after the Phase 1 Code ratification or after Phase 2 Enforcement Policy ratification? Pelagic ( messages ) – (01:27 Fri 09, AEDT) 14:27, 8 April 2021 (UTC)


 * Then the Board must re-think. The first bullet point of section 3.3 rules out "The repeated arbitrary or unmotivated removal of any content without appropriate discussion or providing explanation".  On review in context, the wording probably does allow us to deal with obvious vandals via RBI, but it denies us RBI with LTA cases, POV warriors, and most areas that are of interest to Arbcom.—<b style="font-family: Verdana; color: Maroon;">S Marshall</b> T/C 14:40, 8 April 2021 (UTC)


 * I will seek clarity on that question, thank you and for your comments. The Policy text is discussed at meta:talk:Universal Code of Conduct/Policy text. I also copied your comments to . Xeno (WMF) (talk) 02:43, 3 May 2021 (UTC)

WJBscribe
I am working my way through the questions above. In the meantime I wanted to raise a concern about the language of the UCoC as drafted. It includes the following:
 * "Insults: This includes name calling, using slurs or stereotypes, and any attacks based on personal characteristics. Insults may refer to perceived characteristics like intelligence, appearance, ethnicity, race, religion (or lack thereof), culture, caste, sexual orientation, gender, sex, disability, age, nationality, political affiliation, or other characteristics. In some cases, repeated mockery, sarcasm, or aggression constitute insults collectively, even if individual statements would not. (Note: The Wikimedia movement does not endorse "race" and "ethnicity" as meaningful distinctions among people. Their inclusion here is to mark that they are prohibited in use against others as the basis for personal attacks.)"

The note is problematic for a number of reasons, listed below; it requires urgent attention. I am seriously troubled that the Board appears to have endorsed this language. <strong style="font-variant:small-caps">WJBscribe</strong> (talk) 10:42, 7 April 2021 (UTC)
 * 1) What is "the Wikimedia movement"? Is this a synonym for WMF, the Board, or is it an attempt to speak for all contributors on all projects?
 * 2) A contributor to this project may feel that their "race" and "ethnicity" are an extremely meaningful part of their self-identity. Ironically, they may feel harassed if these characteristics are dismissed out of hand. Saying that these are not endorsed as "meaningful distinctions" is a potentially divisive political statement. It has no place in the UCoC.
 * 3) Why are only "race" and "ethnicity" singled out? Does that mean that, by implication, the WMF (or worse, all of us, if that is what "Wikimedia movement" means) does endorse the other characteristics listed as meaningful distinctions among people (e.g. caste, disability)?!
 * , thanks for bringing this up. I tend to agree that the note here raises many more questions than it answers, and in an unhelpful way. The commonly understood definitions of the listed terms are perfectly suitable. The whole note should be removed. Ganesha811 (talk) 01:33, 5 May 2021 (UTC)

Comment by Stifle
I concur with S Marshall, Firefly, and zzuuzz. This appears to be a great deal of effort being discharged in dealing with the wrong problem. Vandals (interpreted widely) don't care about rules and codes of conduct. Making more rules won't deter them. Stifle (talk) 11:03, 7 April 2021 (UTC)

Feminist
One must keep in mind that there are local differences in the prevailing standards of human rights. Despite their universal nature, international human rights treaties are always implemented on a contextual basis, taking into account local differences as to economic development, culture, social norms and politics. This applies equally to the WMF Universal Code of Conduct as well. Application of the UCoC to local wikis must – and I repeat, must – take into account the prevailing cultural and economic background of the average editor of that wiki. For example, depending on the context, some may consider use of the term Latinx to be necessary for gender neutrality, while others may consider use of the term to be culturally imperialist. How would the WMF handle local differences in enforcing the UCoC? Will the WMF potentially add fire to the conflict via enforcement, or will it seek to encourage mutual acceptance of different approaches?

I also concur fully with WJBscribe. These are material concerns with the way the UCoC is drafted. The UCoC should be amended to address these concerns before it is enforced.

Finally, the justifications for the UCoC (under the Why we have a Universal Code of Conduct section) are not terribly convincing. A set of justifications focusing on ensuring Wikimedia covers content from diverse perspectives and maximizing social benefit for editors and readers would be much more convincing than the current text which simply involves the WMF Board of Trustees professing blind faith towards a set of ideals. feminist (talk) 05:37, 8 April 2021 (UTC)
 * With regards to your last paragraph, I gave similar feedback in October at m:Talk:Universal Code of Conduct/Policy text/Archives/2020 which was not taken into account in later drafts. — Wug·a·po·des​ 00:52, 9 April 2021 (UTC)
 * Good to hear. If the WMF is unwilling to listen to the community even at the drafting stage, how can we trust them to apply the UCoC with full regard to the contexts and needs of local projects? feminist (talk) 02:02, 9 April 2021 (UTC)

Comment by otr500

 * Wikipedia would be a good first stop on the "additional resources" trip. Some clarity on the word harassment would be helpful. I have mentioned this before. Misconduct is conduct not "generally" regarded as appropriate. A simple definition of harassment would be: the act of systematic and/or continued unwanted and annoying actions by one party or a group. The policy on Harassment gives a definition: "Harassment is a pattern of repeated offensive behavior". When one editor "attacks" another editor, it violates more than one policy in the first instance. This should be reflected as attacks and harassment.
 * The main caption of WP:5P4 states: Wikipedia's editors should treat each other with respect and civility. A key word to be noted is "should". Any form of personal attack "should" be a red flag to be dealt with, yet that page includes: "This page documents an English Wikipedia policy". It describes a widely accepted standard that all editors should normally follow. The link is to Use common sense with the question and answer: Why isn't "use common sense" an official policy? It doesn't need to be; as a fundamental principle, it is above any policy. The policy on "No personal attacks" includes harassment under the subsection Recurring attacks: Recurring, non-disruptive personal attacks that do not stop after reasoned requests to cease can be resolved through dispute resolution. In most circumstances, problems with personal attacks can be resolved if editors work together and focus on content, and immediate administrator action is not required. If I am the only one who sees a problem with this entire paragraph, all of this is in vain.
 * If comments are derogatory, and serious enough to create a hostile environment, they are disruptive. "If" a personal attack (direct or ad hominem) is serious, it should not have to be allowed to occur several times, or be considered egregious, before it is deemed unacceptable.
 * Wikipedia already has separate classes for the seriousness of attacks or harassment. Those considered "Never acceptable" are classified as severe or egregious. When an editor personalizes comments it is usually in the form of an attack. It is still serious even if not to the level of egregious, and should not be ignored. I just saw where an Admin blocked two editors for violating DOX and then requested oversight, so we have active Admins willing to protect Wikipedia as well as editors.
 * Insulting or disparaging an editor is a personal attack regardless of the manner in which it is done.
 * WikiBullying should be presented at WP:PROPOSAL so it can be thoroughly vetted by the community. After all: (This page in a nutshell:) Bullying is not permitted on Wikipedia, and any violators will be blocked from editing. WP:BOOMERANG should not be a consideration if a legitimate report is given. No one should ever fear coming forward to make the community aware of a bullying concern.
 * Wikipedia touts being a civil community, and harassment (and disruption) is contrary to this spirit and damaging to the work of building an encyclopedia. Maybe we should push that Wikipedia is a place where anyone can edit in a civil manner. The way to address harassment is to not be so lenient on any editor that "attacks" another. Maybe it is time to elevate "No personal attacks" and "No harassment" to a fundamental principle.
 * Most of the solutions for addressing "attacks and harassment" are already on Wikipedia. Sometimes they are not as clear as they need to be (watered down with words like "should") because we assume that any "rule" could be subject to WP:IAR, and that should not be the case in these instances. Otr500 (talk) 03:07, 11 April 2021 (UTC)


 * Thank you for your comments. Just so I understand, is the problem with the paragraph that recurring personal attacks are rarely non-disruptive, and it seems contributors are expected to simply bear them? I noticed that text was inserted here (when Wikipedia was not even 100 million edits old! =), that large update was announced here, and has seen some trimming over the years. Xeno (WMF) (talk) 03:28, 3 May 2021 (UTC)
 * In answer to Xeno (WMF), comments, and thoughts: Absolutely! There is a problem with wording, and a mentality, that makes it appear an editor should ever have to accept being subjected to personal attacks, harassment, or bullying. The first is bad enough, but the last two should never happen without consequences. Recurring personal attacks are, by definition, harassment. All attacks (especially personal attacks) are disruptive to one degree or another and may continue even if ignored. Egregious personal attacks will likely be fast-tracked, but creating a hostile editing environment should not be allowed either. The concept that the success and reliability of this encyclopedia depend on the collaboration of the many is often overlooked.
 * There are few "mandates" on Wikipedia, but "Do not make personal attacks anywhere on Wikipedia" seems pretty clear. We have behavioral guidelines (a code of conduct) that editors should attempt to follow. There is our Civility policy (part of our code of conduct), which is also in our Five pillars. I do not see a Universal Code of Conduct as a bad thing, but on this platform we just need more community involvement and less tolerance towards those who foster a bad environment for others. An RFC involving Admins and community input on upgrading sanctions for repeat violators might be fruitful.
 * I have not looked into any of this in detail because I "thought" I could be considered a trusted editor. I have never been blocked, don't edit war, contribute to areas I have an interest in that are not subject to ARBCOM sanctions, create articles, am involved in new article reviews, AFD, and article maintenance, and try to assist in de-escalating disagreements when I can, but I hit a wall. Some years ago I was hit with an IP block, but that was resolved fairly fast. I was then hit with a global IP block that included my talk page. Navigating through this was very stressful. I still have no idea what triggered the block on an editor registered for better than 12 years, and seeking a block exemption, when not blocked, is another minefield. This has caused me to withdraw from Wikipedia. Yes, I have asked for help. The Admin that cleared my first block told me if it happened again I could get an exemption. I just pretty much gave up on it. That way if I wake up to another block I can just find something else to do. The only thing I can imagine is I might have tried to mobile edit while not logged in, but I am registered so that seems unlikely. It seems that on this platform a Steward cannot perform a requested IP check unless there is disruption.
 * The point in relaying this is not that I was blocked, although that was a severe issue. If I have such severe issues resolving what would seem to be an easy problem, where does that leave editors with less time or knowledge on Wikipedia, especially when up against a seasoned editor, experienced in navigating around sanctions, who decides to unload on another editor? It would take only minimal intelligence to learn how not to exhibit a "pattern", and other tricks such as feigning remorse at using certain words in the "heat of an argument". Until a breaking point, they are only going to get slapped on the wrist (maybe just 24 hours), advised to avoid the other editor, or given other inconsequential slaps. If the offender has been around a while, they know how to WP:GOAD another editor into transgressing, so that person at the very least shares culpability and faces sanctions as well. There is a need for more civility-minded advocates.
 * I might look more into this, but I don't want to get too deep into something where I might end up having to defend being indiscriminately blocked, for no apparent reason, again, and face the battle that goes with clearing it up. Maybe if I figure out how to apply for (and receive) a block exemption I might go back to being more prolific. Sensible reasoning as to why I might not be considered trustworthy enough would be interesting to learn. I might have just fallen through the cracks, but I was ignored by at least two admins, so maybe I am a thorn. Since every aspect of the web includes "joining" or registering, I am not clear why this is a stumbling block here, but it would stop a lot of issues. This is an encyclopedia that touts that anyone can edit, but that simply is not true. Be in an area where there is a blanket IP block and see how that works for a potential new editor. That is another story though.
 * I almost quit because of the Framgate incident, so I am not sure how the community would react to what could be considered another layer of control, especially if there was an insane price tag of, say, $15 million. I personally saw a WMF member making VERY inappropriate comments, so I think one should make sure their own house is clean before bringing up issues about the condition of someone else's house. I am sure the foundation has some pretty smart people, so there needs to be a way to figure out how to form a partnership with this community. Just an initial thought, but every time I see the word Steward mentioned there is a reply that "they" would have to accept any new role. My understanding is that they are the liaison between the community and the foundation. If they are too busy, maybe it is time to consider some alternative solution.
 * However, I am glad the issue is staying in the limelight so some improvements can result. If this encyclopedia doesn't continue to thrive it could eventually become excess baggage and sold to the highest bidder. I am sure there are entities that would offer a mind-blowing sum of money to exploit the potential advertising revenue. We need to continually strive to accept new editors (a welcoming place to spend time) and keep the active ones we have. Have a great day and good luck in this endeavor.  Otr500 (talk) 18:00, 11 May 2021 (UTC)

Comments by Dennis Brown
There are several problems / pitfalls with a "Universal Code of Conduct". The primary concern is the systemic bias in a bunch of white people from America drawing up a document that regulates the behavior of people all over the globe. I say this as a white person from America, btw. There has seemingly been a concerted effort by the Foundation to force a set of civility rules on enwp that are too harsh. No one wants harassment or abuse of any editor, and most of the time, admins are able to deal with this. When we can't (privacy, off-wiki involvement, etc.), the Foundation has already taken control and forced solutions upon us, usually justified but often in a hamfisted way, overshooting the mark. Any code of conduct that tries to overregulate "civility" is going to be more problem than solution and will end up pushing people away. Wikipedia is a collaborative environment; this means sometimes you have heated arguments and ruffle a few feathers. If we start blocking people for this, then what we will have left is not the best talent, only the most hypersensitive. THAT is my biggest concern, and why I'm not really in favor of the Foundation's attempt to take control of enwp, with their stealth attack of using a Universal Code of Conduct, and very hesitant about us creating one, UNLESS... the Code derives its authority from existing policy, and the Code itself is NOT a new policy that simply muddies the waters and creates a conflict over which to enforce: the Code, or existing policy. If the Code overrides existing policy, then it is likely that I will loudly refuse to enforce it. Dennis Brown - 2¢ 10:32, 28 April 2021 (UTC)


 * @Dennis Brown, I share a lot of your concerns about an enforcement mechanism being thrust upon enwiki in a way that will not work. As you may or may not be aware, I am a member of the committee that will be drafting the enforcement mechanisms. I want something that will work for the entire movement, including projects like ours, dewiki, etc., and will obviously be bringing my experience as an enwiki editor to that process. A system that noticeably intrudes on our community's self-governing mechanisms is going to cause real damage, and it's my goal to avoid that happening. That said, there was only one white native English speaker (an American woman) on the committee who came up with the text of the UCoC, and a second who was a WMF staff member (an American man). So I don't think it's quite fair to say that the language was handed down by Americans (or even native English speakers) and thus subject to the systemic bias that would go along with that. Best, Barkeep49 (talk) 15:37, 5 May 2021 (UTC)
 * We will see, friend. I'll be happy to see a final product that is free from systemic bias and culturally neutral. I don't think it can be done; it's the proverbial "can God make a rock so heavy he can't lift it" scenario. There are too many cultures at play here. I see this still as an end run by the WMF to gain more control, by regulating "civility". After sparring with our tone-deaf Grand Leader on Twitter (re: Framgate), I lost what little hope I had about the WMF. Btw, I was talking about the culture at Wikipedia, not just that one panel. But we will see if this panel is as clever as they believe they are. Perhaps I'm wrong; perhaps it isn't a cancer to slowly eat away at the autonomy of enwp. Perhaps it isn't a way to grab more power and control. Perhaps they won't reinstate superprotect someday. Perhaps I could list dozens more. Perhaps, but don't bet the farm on it. We've ceded more than enough control, I believe. In short, I do not trust the WMF. Period. And obviously, I'm not shy or afraid to say so. They have done nothing to earn my trust. Dennis Brown - 2¢ 19:15, 5 May 2021 (UTC)

Erroneous Reports
If a mechanism is provided for editors to report violations of the Universal Code of Conduct, the staff (or volunteers) reading and acting on the reports should be aware that the large majority of reports of hounding or other harassment from the English Wikipedia will be mistaken, in mostly good faith. At present, misguided reports of hounding and other harassment are not uncommon, made by editors who are not familiar with electronic workplaces, and who view criticism, correction of their edits, and reversion of their edits as harassment. These reports are neither valid nor malicious. They are not real reports of harassment, but they are not deliberate false reports. They are made by new editors who are not familiar with electronic interaction. It will be necessary for many of these reports to be ignored. Robert McClenon (talk) 15:42, 5 May 2021 (UTC)

On Russian Wikipedia and the like
As we know, there are still countries living under authoritarian regimes. Russia is one of them, and as a Russian I will dwell on ruwiki.

It is widely known that the Kremlin strives to control all Russian-language media, and this includes Wikipedia. There are lots of gimmicks to do this, but the most effective is the simplest – bans on editors who step outside the boundaries of Kremlin discourse. This pressure is especially strong at times of political turmoil. For example, at the time of the Russian-Ukrainian war in Donbas (in which Russia officially didn't take part), all articles on the topic were governed by a special Cheka-like group of admins (I do not recall its actual name). This group deleted all information which breached the limits of official Russian discourse, notwithstanding sources and other basic rules such as NPOV. All editors who kept protesting were banned forever. That is not the only example.

I consider the UCoC discussion a very good and timely pretext to pay attention to the situation in Russian Wikipedia and other projects with the same problems. IMHO it would be appropriate to establish a permanent group for discussing cases of breaching basic rules in individual projects.

Best regards. 213.24.132.53 (talk) 14:58, 5 May 2021 (UTC)

Related: Maggie Dennis (Community Resilience & Sustainability) office hour April 17 15:00 UTC
Hi all! The Community Resilience & Sustainability team at the Wikimedia Foundation is hosting an office hour led by its Vice President Maggie Dennis. Topics within scope for this call include Movement Strategy coordination (recently transferred to CR&S), Trust and Safety (and the Universal Code of Conduct), Community Development, and Human Rights. Come with your questions or feedback, and let’s talk! You can also send us your questions in advance.

The meeting will be on April 17 at 15:00 UTC (check your local time).

You can check all the details on Meta. Hope to see you there!

Best, JKoerner (WMF) (talk) 20:37, 8 April 2021 (UTC)


 * Cross-posted on April 16
 * I mean, there was reasonable notice on other channels - meta front page at start of month, VPM (for some reason) on the 8th, I cross-posted to WP:VPW on the 14th. Nosebagbear (talk) 16:21, 17 April 2021 (UTC)


 * WP:VPM is the current delivery target from m:Distribution list/Global message delivery for general messages. Xeno (WMF) (talk) 17:01, 17 April 2021 (UTC)


 * Mea culpa for not cross-posting earlier. The recording is available on YouTube; there were several UCOC-related questions. Xeno (WMF) (talk) 17:01, 17 April 2021 (UTC)


 * The office hour questions & answers are now available at IRC office hours/Office hours 2021-04-17. Xeno (WMF) (talk) 23:39, 19 April 2021 (UTC)

Early 2021 local consultations summary report and individual summaries
Users may be interested in reviewing the summary report and 15 individual summaries from the early 2021 local consultations, available at m:Special:MyLanguage/Universal Code of Conduct/2021 consultations/Enforcement. Xeno (WMF) (talk) 12:46, 26 April 2021 (UTC)

Drafting committee selected, and additional call for comments

 * Thank you for signing the Open Letter from ArbComs to the Board of Trustees. Please note the drafting committee has been selected.

Additional responses provided to the questions above will be helpful to the drafting committee's work. Please encourage interested parties to provide input as soon as possible. Thank you, Xeno (WMF) (talk) 15:57, 1 May 2021 (UTC)


 * Thanks for curating this page. Will anyone from WMF be engaging with the discussion here? I think it would help enwiki/WMF relations if there was more active participation in these discussions. In particular, some of the comments above about problematic language in the UCoC as drafted ought to be addressed, so as not to give the impression they are being ignored. I am sure you appreciate that the WMF has a bit of a PR problem with large sections of the communities of its biggest projects (the thread on WP:VPW about mobile editors makes for depressing reading). UCoC has the potential to be an even greater fiasco than Superprotect and Framgate if handled poorly. That said, I speak as someone for whom WMF is 90% of the reason why I don't contribute anymore. WJBscribe (talk) 09:52, 2 May 2021 (UTC)


 * I've been consolidating all the comments made here about the Policy text as written, and will add a section at meta:Talk:Universal Code of Conduct/Policy text (indeed some of the concerns have already been raised there). The drafting committee will be looking at outlining clear enforcement pathways, and will be able to recommend improvements to the current text. The code is also meant to be subject to review. The comments will be helpful to those efforts. Xeno (WMF) (talk) 11:40, 2 May 2021 (UTC)

Join in the Community Call on Universal Code of Conduct Enforcement
The Universal Code of Conduct project facilitation team will be hosting round-table discussions for Wikimedians to talk together about how to enforce the Universal Code of Conduct on 15 and 29 May 2021 at 15:00 UTC.

The calls will last between 60 and 90 minutes, and will include a 5-10 minute introduction about the purpose of the call, followed by structured discussions using the key enforcement questions. The ideas shared during the calls will be shared with the committee working to draft an enforcement policy. Please sign up ahead of time to join. In addition to these calls, input can still be provided on the key questions at local discussions or on Meta in any language.

Thanks to everyone who has contributed to the Universal Code of Conduct 2021 consultations so far. Xeno (WMF) (talk) 19:13, 4 May 2021 (UTC)

Chess' comments
Just going to dump in some random notes I've made when reading this policy:
 * The Doxxing rule doesn't specifically exempt disclosure of personal information for legitimate reasons. As written it would ban, for example, disclosing the place of employment of a Wikipedia editor engaging in undisclosed paid editing to a functionary when it is relevant to their editing activities (although I would hope it isn't enforced this way). There needs to be an exemption for disclosure to people who have signed the Foundation's access to non-public information agreement. Chess (talk) (please ping on reply) 11:45, 6 May 2021 (UTC)
 * The current policy considers as doxxing "sharing information concerning [other contributors'] Wikimedia activity outside the projects." What exactly does this mean? What information is prohibited from disclosure? What does "Wikimedia activity outside the projects" mean? Is it referring to going offsite onto Reddit and saying something like "User:Chess likes to patrol draftspace and copyedit", thereby sharing info about someone's onwiki activity outside of the WMF projects? Or is it more like sharing info about "Wikimedia activities" that I have done outside of the WMF projects, e.g. someone being banned for posting onwiki about how I went to some informal edit-a-thon? If it's the first, then a lot of people who like posting on Wikipediocracy are getting banned under the new UCOC. Chess (talk) (please ping on reply) 11:45, 6 May 2021 (UTC)
 * The UCOC currently bans insults based on a wide list of characteristics as well as "other characteristics". It also bans slurs. Are the banned slurs more or less the same as the groups listed? While obviously the N-word should be banned (with obvious exceptions for use in the context of the project; e.g. an encyclopedia article or dictionary entry on said word would often require using the word when discussing it), what about uses of the words "dumb", "stupid" or "idiotic" (as well as other words based on low intelligence)? I have seen other places that ban these words as slurs, and I was wondering if this UCOC would entail banning me from, for example, calling the UCOC moronic. Even if insults only apply to attacking another editor, the "hate speech" section could still be used to justify sanctions for calling something "stupid". Chess (talk) (please ping on reply) 13:25, 6 May 2021 (UTC)
 * For the part where we're supposed to respect people's pronouns and terms they use to describe themselves: does this apply to articles? If we're writing BLPs on enwiki we generally try to respect the subject's preferred pronouns (within reason; e.g. if someone posts the attack helicopter copypasta we don't change the article to add heliself as a pronoun, but we will respect someone choosing "they"), but this new policy means that we're obligated to use the subject's pronouns "when linguistically or technically feasible". Are we going to have to start using xir or E or fae or whatever pronouns the subject of a BLP decides to come up with? If so, how are we going to track these pronouns, especially across multiple articles? Should we treat it as structured data and make even more convoluted templates to deal with this? If we're not using these neologistic pronouns, then would it be possible to write a policy on accepted pronouns (he, she, they) and have it reviewed by the WMF for compliance with the UCOC? Chess (talk) (please ping on reply) 17:16, 6 May 2021 (UTC)
 * On the same note as above, some groups of people have preferred terms for themselves that we might not want to use in articles. MOS:EUPHEMISM recommends against person-first language, in the sense of using the phrasing "person with a disability" instead of "disabled person". Are we obligated to change this to conform to the new UCOC? Additionally, what if a preferred name conflicts with NPOV issues? North Macedonia comes to mind as a recent example. Should we have been obligated to respect their choice and only refer to the ethnic group as "Macedonians", with no disambiguation where necessary? What about chronic Lyme disease? This isn't recognized as a disability by the medical establishment, but people claiming to suffer from it would identify with that term. If we're writing a BLP and have to give primacy to self-designation, we'd have to call them "a person with chronic Lyme disease". There are edge cases like Asperger syndrome too. This was merged into autism spectrum disorder in the DSM-5, but some people still identify with the former term. Should we still use the former term when people self-designate with it, even though it may no longer be accurate? I'd like to see these questions addressed because they likely will come up. Chess (talk) (please ping on reply) 17:16, 6 May 2021 (UTC)

thoughts from jc37
I hope that input on this is still welcome?

A couple of thoughts about "Expected behavior":

So my first thought was to ask: Is this a list of best practices, or a list of blockable offences? If this isn't made very clear, it is really open to abuse in all-too-many ways. Do we require empathy, or require that someone assume good faith? And if we require it, who decides? Application of this can be (is) very subjective. And I really doubt we want any admin blocking someone because they weren't showing "enough" empathy in a discussion. Well-meaning, but it makes me think of w:Thought Police and other such things. And probably not the intended tone of "inclusive environment" which this proposal seems to wish to engender.

So I think that section should be re-factored a bit. For one thing, splitting out "best practices" from "blockable offences". We "can" block for flagrant incivility, for example (though we tend to give editors a chance to mend their ways), but there's a difference between reminding someone to AGF, as it is an ingredient of collegiality, and blocking them for perceived violations of AGF. I don't think anyone would want this well-intended CoC to turn into a fear-inspiring "board of education" (see link).

Also, those things that are blockable (inappropriate behaviour) should have reciprocal counterparts in an "examples of appropriate behaviour" section. I understand that the obvious concepts should be obvious, but we really should lean towards clarity (even while keeping things adaptive enough for action over inaction). If it helps, view this in terms of something like w:Goofus and Gallant.

So with all that in mind, I think rather than splitting the subsections as they are, maybe something more like: "Appropriate behaviour" / "Inappropriate behaviour" / "Unacceptable behaviour". So as to make clear the difference between behaviours/attitudes which we prefer or frown upon ("Here, let me help educate/inform you on our view of what collegiality is") and that which is just simply not acceptable on the project. ("Just don't do this, or you will be sanctioned.")

If someone would like help with a refactor, please let me know, I'd be happy to help. This being a foundation thing, I'm hesitant about Being Bold : ) - jc37 01:07, 10 May 2021 (UTC)

Thank you for your participation and input, and next steps
I want to thank everyone for taking the time to provide such valuable and constructive input in this consultation.

The process of organizing the comments for the drafting committee to consider has been ongoing, and they are commencing the collaborative process.

A comprehensive community review of the draft enforcement guidelines is expected from July to September 2021.

We continue to collect input and will be holding round-table discussions this Saturday 15 May at 15:00 UTC and all are welcome to join. Another session is scheduled for 29 May at 15:00 UTC.

Additional comments are still being invited to Meta:Talk:UCOC2021 and this page will be monitored as well.

If desired, an impartial summary of the community thoughts and positions may be posted here by a trusted local user, and cross-posted to Meta:Talk:UCOC2021.

The upcoming report about this round of enforcement consultations will also draw heavily on the input provided here. I will advise once available.

Discussion about the Universal Code of Conduct continues at Meta:Talk:Universal Code of Conduct and Meta:Talk:Universal Code of Conduct/Policy text.

I enjoyed facilitating this discussion, and welcome any input on the process and how to foster local engagement.

Please feel free to let me know if you have any questions or additional thoughts. Xeno (WMF) (talk) 01:22, 11 May 2021 (UTC)

Universal Code of Conduct 2021 consultations summary report available, upcoming round-table on 12 June
The summary report for the 2021 consultations is available here. The input from this community was quite valuable in highlighting several major themes of inquiry that were shared among projects.

The project team continues to seek thoughts and ideas from the communities in the context of open round-table discussions and other ongoing outreach. The next round-table is scheduled for 12 June 2021 at 05:00 UTC.

We're quite thankful for all the valuable time users have contributed to these discussions. Feel free to leave any comments or questions here, or on the talk page of the report on Meta. Xeno (WMF) (talk) 02:05, 2 June 2021 (UTC)

Etiquette
What's wrong with the Etiquette? Will it be replaced by the Universal Code of Conduct, or will both be enforced at the same time? Wouldn't it be better to discuss and vote separately on adding, removing, or modifying particular Etiquette principles? Grillofrances (talk) 16:46, 19 March 2022 (UTC)


 * Sorry for discussing in 2022. Discussion moved here. Btw, in 2021 I had no idea about any Universal Code of Conduct consultation – I was informed only about voting for the Wikimedia Foundation Board of Trustees and for ArbCom. Grillofrances (talk) 17:09, 19 March 2022 (UTC)