Wikipedia talk:Deferred revisions/Archive 1

Devil in the details

 * 1) Who exactly becomes an "established user"?
 * 2) Doesn't "[deferred by the abuse filter]" immediately imply abuse? Would this be a problem?
 * 3) Wouldn't an option that restricts review of suspect edits to admins (or any small group) be similar to immediately protecting the page when a suspect edit is received? Can we set up this filter so that compatible, non-filter-triggering changes by other users after a deferred edit are added in while the deferred edit is left deferred*? (*that is, like using the "undo" function on a buried edit, where it will work some of the time depending on the nature of the edit and the edits since)
 * 4) Would notice of deferred edits be available on the main article page?
 * This looks like a good idea in general, but I'm wary of its subtleties. {{Nihiltres|talk|log}} 18:57, 29 January 2009 (UTC)
 * Yes, there are details to work out, especially technical ones. Established users are reviewers in the case of a FlaggedRevs implementation. This is essentially aimed at vandalism (maybe also test edits and spam). When the revision no longer matches, it isn't deferred any more. It would work the same way FlaggedRevs does: a draft. I'll reply more fully when I have more time. Cenarium  (Talk)  19:02, 29 January 2009 (UTC)
 * I think the best option is to use the usergroup of reviewers (though it would be nice to be able to defer reviewers' edits for some filters, by disallowing a reviewer from reviewing when they matched a filter, or by authorizing only sysops in certain cases).
 * I see, maybe instead of using [deferred by the abuse filter], one could use [automatically deferred for review].
 * That's the main problem: instead of analyzing the diff between the new revision and the previous one, the filter should analyze the diff to the latest stable version. This way, when it no longer matches, the new revision won't be deferred anymore.
 * Normally, it should work like normal flagged revisions, a note at the top right of the page and the draft page linked. Cenarium  (Talk)  17:41, 31 January 2009 (UTC)

Comments
I think this is generally a powerful idea, and because it 'bolts on' to the bottom of the FlaggedRevs scale, it genuinely doesn't have any impact on other possible implementations of FlaggedRevs. I have rewritten the description somewhat; I hope it is clearer about what the proposal suggests. However, I have commented out this section: If a user edits a page and the abuse filter identifies the new revision as suspect, it will defer all edits by the user (much the same way rollback works), so that the version displayed to IPs by default is the last revision by the user who previously edited the article. Subsequent revisions are deferred until it doesn't match the filter anymore or it is reviewed, either back to unreviewed state or to higher levels if possible. This can be enabled for all non-talk pages, not only articles. This is not how the AbuseFilter works, and it cannot work in this fashion. FlaggedRevs works on revisions, the end product that is the result of one or more edits. The AbuseFilter works on the edits themselves: each edit is a completely separate entity that has no connection to others on the same page or by the same user. It's not possible for the AbuseFilter to go through the user's or page's history to retrospectively change things as a result of a filter hit, nor is it sensible to modify the extension to do so. What will happen is that once an edit has tripped the filter, the page will enter a 'draft' stage until a user with the review permission comes along to 'reset' it. We can also give the filter the ability to revoke the user's autoreview rights as well as their autoconfirmed status, although this may be difficult for IP editors (maybe not possible with the current code, treating IPs as individuals is difficult when they all have the same user id!). 
It may also be possible (with the same limitations) to modify FlaggedRevs to allow a revocation of a user's autoreview permission to be applied retrospectively, so that any existing autoreview chains that they are part of that are not anchored by explicit reviews, are broken and revert to the previous version, but that would certainly be a big change to the code.

So although I think this is a good idea, and I recognise that it is intended to be rather ahead of the code, I think we need to be careful to distinguish what these extensions can reasonably be expected to do, and what is beyond the bounds of sensibility. Happy‑melon 19:28, 29 January 2009 (UTC)
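The edit-level model described above can be sketched abstractly. Everything here (the class, method names, and the filter callback) is invented for illustration and is not the actual MediaWiki or AbuseFilter API:

```python
# Toy model of the edit-level behaviour described above: the filter sees
# each edit in isolation, and a single hit puts the page into a "draft"
# state that only an explicit review clears. All names are invented.

class Page:
    def __init__(self):
        self.deferred = False  # set by a filter hit, cleared only by review

    def submit_edit(self, edit, filter_hit):
        # The filter judges this edit alone: it cannot inspect earlier
        # edits by the same user or walk the page history.
        if filter_hit(edit):
            self.deferred = True  # page enters the draft state

    def review(self):
        # Only a user with the review permission can reset the page.
        self.deferred = False


page = Page()
page.submit_edit("replace lead image", lambda e: "image" in e)
assert page.deferred       # filter hit: readers keep seeing the stable version
page.submit_edit("fix a typo", lambda e: False)
assert page.deferred       # later clean edits do not clear the draft by themselves
page.review()
assert not page.deferred   # an explicit review resets the page
```

Note how, in this model, the page stays in the draft state across later edits until someone reviews it, which is exactly the asymmetry discussed below.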


 * I think the real catch here is that even if it were technically possible (I have no doubt that it could be made so), it would still only filter out a very small proportion of vandalism. Edits like those made on the Ted Kennedy article would still get through. Either that or I've misunderstood your proposal. The problem is that most vandalism can only be recognised as such if you know something about the subject. We need human revision. — Blue-Haired Lawyer 20:17, 29 January 2009 (UTC)
 * Not a very small portion: a good part of vandalism is obvious and can be recognized by filters (mass additions, mass removals, blanking, redirects to non-existent pages, bad additions, etc). Of course it wouldn't have caught the vandalism on Ted Kennedy, but for that, flagged revisions should be enabled. It's not intended to be used instead of FlaggedRevs, but in addition to it, as we don't have the manpower to enable flagged revisions on all articles (and a fortiori on all non-talk pages).
 * I rewrote the page to try to avoid the technicalities behind the system. Problems arise in the case of repeated edits: if the abuse filter can only work on edits, it won't be able to handle this. What could make it work is, instead of analyzing the edit (essentially the diff, I suppose), analyzing the diff to the latest stable version. Let's consider this situation:
 * An edit to a page without any flag protection
 * The rule then is that whenever the diff to the stable version (that is, the latest revision that is not deferred) matches the filter, the new revision should be deferred. We can add an exception for reviewers when the latest revision is not deferred.
 * There is also this situation:
 * An edit by a non-reviewer user to a semi flag protected page with reviewed latest revision.
 * In this case, the abuse filter can work normally: it just has to analyze the edit and defer the new revision if it matches, but it should override the automatic review if the user is autoconfirmed.
 * It couldn't filter edits by reviewers, and some vandals may manage to get the rights, so this may have to be considered.
 * Now the problem of multiple edits by the same user: it'll reduce the amount of identifiable vandalism if the filter can't compare with the revision before the user edited, but in cases where one of the edits matches, it would still be useful for the version before the user edited to be displayed by default. If the abuse filter cannot do that, maybe the defer flag can be adapted to be "rollback-like", directly in the FlaggedRevs extension: when a new revision is deferred (and the previous revision was not), all latest edits by the user on this page are also deferred, in the sense that the version before the user edited is set as the stable page. Cenarium  (Talk)  00:00, 31 January 2009 (UTC)
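The proposed rule of matching against the diff to the latest stable version, rather than to the previous revision, can be sketched as follows. The function names and the toy "mass removal" filter are hypothetical, not anything from the real extensions:

```python
# Hedged sketch of the proposed rule: a new revision is deferred whenever
# the diff from the latest *stable* (i.e. non-deferred) revision matches
# the filter, so a revision stops being deferred as soon as the diff to
# stable no longer matches. All names are invented for illustration.

def latest_stable(history):
    """Latest revision not marked deferred (the version shown by default)."""
    for rev in reversed(history):
        if not rev["deferred"]:
            return rev

def apply_edit(history, new_text, matches):
    stable = latest_stable(history)
    diff = (stable["text"], new_text)  # crude stand-in for a real diff
    history.append({"text": new_text, "deferred": matches(diff)})

# A toy filter flagging mass removal relative to the stable version.
mass_removal = lambda old_new: len(old_new[1]) < len(old_new[0]) // 2

history = [{"text": "a long established paragraph of article text",
            "deferred": False}]
apply_edit(history, "lol", mass_removal)       # blanking: deferred
apply_edit(history, "a long established paragraph of article text, fixed",
           mass_removal)                       # diff to stable is small again
assert history[1]["deferred"]
assert not history[2]["deferred"]  # no longer matches, so no longer deferred
```

The key design point is that the second, constructive edit clears the deferral without any reviewer action, because it is judged against the stable text rather than against the vandalized revision.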


 * I know that this would all be nice, but it's fundamentally at odds with the way the AbuseFilter is, AFAIK, designed. It's hooked into completely the wrong places to look at anything other than the individual edit; it wouldn't be significantly faster than a bot at studying the bigger picture (it would save all the communication time, but it would still have to look up exactly the same things, which would be a performance nightmare).  The way the AbuseFilter works is like a nightclub bouncer: every time someone submits an edit, the AbuseFilter gets the details of the edit that they want to make, a few facts about the page itself, and a few facts about the user who's trying to make it. There is no sense of context: just as a bouncer has no idea how many other clubs the person might have been thrown out of already that night, the AbuseFilter only has access to the summary stats; by that I mean things like total editcount, block log length, registration date, etc, for users, and protection status etc for pages.  There is absolutely no depth to that information, and there's no way around that: the performance impact of doing intersections on page and user histories for each and every edit made to en.wiki is simply prohibitive AFAIK.  There are two systems here that work completely orthogonally: AbuseFilter works with edits, while FlaggedRevisions works with versions.  The best result will come when they work together, not when one tries to imitate the other. I know that's not what you're trying to do, but that's the result, and it's just not going to happen. Happy‑melon 00:22, 31 January 2009 (UTC)
 * Note that the AbuseFilter does have access to editcount, account age, username, user groups (including implicit), and time since user's email was confirmed. I can't imagine protection status would cause any performance impact at all; except for cascading protection perhaps, that has to be loaded anyway when loading a page for various interface bits. Mr.Z-man 17:45, 31 January 2009 (UTC)
 * When the abuse filter analyzes an edit, I think it analyzes the diff between the current version and the new revision? Then it would instead have to analyze the diff between the stable version and the new revision. I suppose it would be possible if the stable version were accessible to an extension, like the current version; is this the case? If this is possible and the defer flag can be made "rollback-like", it would solve most problems. Cenarium  (Talk)  18:50, 31 January 2009 (UTC)
 * I see where you're coming from, but I don't think it will work. In order to have a silent FlaggedRevs deployment, everyone needs to have the ability to autoreview edits, so the "stable version" will be the previous version.  If that's not the case, then you've lost the avoid-deploying-flaggedrevs-everywhere concept that was a key benefit.  What would be possible if the AbuseFilter compared to the stable version would be for it to automatically reset the article (by reviewing the top version) when vandalism was reverted.  That is, a vandal comes along and adds a penis image to an article.  That's probably a bad edit, but not unequivocally so, so it's not something we could write a filter to block. What we could do is have the AbuseFilter defer that edit (by flagging it with a negative tag); that means that no further edits to that article by anyone will be visible until someone with an explicit review permission comes along and 'resets' it, confirming that the edit was legitimate.  If the AbuseFilter was constantly comparing to the stable version, which would be the version immediately before the penis image, then if someone came and changed the penis image to a picture of flowers, the diff would no longer trip the filter and the edits would no longer be deferred. This means that not only reviewers can reset the page; anyone can, by reverting to the stable version (or otherwise modifying the page so the diff doesn't trip the filter).  Making the defer flag function like rollback is significantly more difficult, not least because it breaks the symmetry of the situation: if someone were then to make exactly the same edit again immediately afterwards, the AbuseFilter would interpret the situation differently, because the stable version is now not the previous version.
If for some reason the diff of all the user's edits wouldn't trip the filter but the last edit did, it could tie itself up in a loop (or at least would flip-flop between deferring and resetting the page when people made dummy edits, which would be a Bad Thing).  Happy‑melon 19:55, 31 January 2009 (UTC)
 * Let's only consider the case of a page without flag protection. There the stable version is not the latest version, but the latest version that has not been deferred. This way, I'd like to avoid an article becoming locked to a version that only a reviewer can unlock: when it doesn't match anymore, it's no longer deferred. And pages like that will be put in Special:OldDeferredPages, so that a reviewer can still check that everything's all right. This way, for example, an IP can fix the vandalism by another IP and then edit the page without being delayed (as in Deferred revisions), but cases like this should still be logged in Special:OldDeferredPages, to track things like when a part of the vandalism remains. As for rollback, I see what you mean: for example, when a user adds text over a few edits and then removes a lot of text (or self-reverts), the last edit may trigger a "mass removal" filter. So it comes down to it again: to work properly, it would require comparing the new revision with the revision just before the user edited (for when the latest edit is not deferred), which is not possible yet, but maybe it can be arranged? It would also help to identify vandalism fragmented over several edits.  Cenarium  (Talk)  21:40, 31 January 2009 (UTC)
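A minimal sketch of the behaviour described in this last comment, with invented names throughout: the stable version is the latest non-deferred revision, a deferral clears itself once the diff to stable no longer matches, and pages whose deferral cleared without an explicit review are logged for later checking, in the spirit of the proposed Special:OldDeferredPages:

```python
# Hypothetical sketch: deferral that clears itself when the diff to the
# stable version no longer matches, with self-cleared pages logged so a
# reviewer can still look them over afterwards. Names are invented.

needs_check = []  # pages a reviewer should still double-check

def apply_edit(page, new_text, matches):
    if matches((page["stable"], new_text)):
        page["deferred"] = True               # keep showing the stable version
    else:
        if page["deferred"]:
            needs_check.append(page["title"])  # cleared without review: log it
        page["deferred"] = False
        page["stable"] = new_text             # new revision becomes stable

# Toy filter: flag edits that remove most of the stable text.
blanking = lambda old_new: len(old_new[1]) < len(old_new[0]) // 2

page = {"title": "Example", "stable": "a long paragraph of article text",
        "deferred": False}
apply_edit(page, "lol", blanking)                # an IP blanks the page: deferred
apply_edit(page, "a long paragraph of article text",
           blanking)                             # another IP reverts the blanking
assert not page["deferred"]          # the revert cleared the deferral by itself
assert needs_check == ["Example"]    # but the episode is logged for review
```

This captures the trade-off in the thread: anyone can effectively "reset" the page by restoring it, while the log preserves a review trail for the cases where some vandalism slips through.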

Spamblacklist
Could the same idea be applied to the spamblacklist? I've had several lengths of text disappear on me because I accidentally triggered the spam filter. It would be better if the edits went through and were deferred to another editor for review (obviously not the same one adding the link, as they're not impartial). - Mgm|(talk) 12:34, 30 January 2009 (UTC)
 * We could use it against spam, but rather in addition to the blacklist: for sites that are not disruptive enough to warrant blacklisting, but still need review. Reviewers (so admins) would still be automatically cleared, unless we can prevent the reviewer from reviewing the revision. We'll have to see if it's technically possible... But I don't think we could do that with the blacklist itself, and it would lose much of its efficiency. Cenarium  (Talk)  00:23, 31 January 2009 (UTC)
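The two-tier idea above, a hard blacklist that rejects edits as today plus a softer list whose additions are merely deferred for review, might look like this. The list contents, hostnames, and function name are all hypothetical:

```python
# Hypothetical two-tier link check: additions of hard-blacklisted sites
# are disallowed outright, while additions of merely dubious sites go
# through but are deferred for review. All names are invented.

BLACKLIST = {"spam.example"}      # edits adding these are disallowed
DEFER_LIST = {"dubious.example"}  # edits adding these are deferred for review

def classify(added_link_hosts):
    if any(host in BLACKLIST for host in added_link_hosts):
        return "disallow"
    if any(host in DEFER_LIST for host in added_link_hosts):
        return "defer"
    return "accept"

assert classify(["spam.example"]) == "disallow"
assert classify(["dubious.example"]) == "defer"
assert classify(["wikipedia.org"]) == "accept"
```

In this scheme the hard blacklist keeps its efficiency, as noted above, while the softer list avoids silently losing a good-faith editor's text.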

Nice in theory, but tech limits unclear
I'd like to see something that achieves these goals, but this particular proposal may not be technically viable as pointed out above.

I'd recommend replacing this entire discussion with an RFC along the lines of: Technology should be developed to allow the abuse filter and spam blacklist to influence the flagged-revisions state of an edit, either on an automated basis or after input from authorized editors.

Developers will be able to tell us early on what is and is not practical to code.

If this RFC gets support and it's coded, then each Wikipedia can decide if or how to use this technology.

As the German Wikipedia is already using flagged revisions, they may take the lead in this. davidwr/ (talk)/(contribs)/(e-mail)  16:39, 30 January 2009 (UTC)
 * Agree, we could make an RFC roughly asking "do you agree to develop a system to defer, for review, edits that have been automatically identified as suspect", and then ask the devs to look into it if there is enough support. Cenarium  (Talk)  17:49, 4 February 2009 (UTC)
 * And then we wait. And wait. The AbuseFilter discussion started last June and we still don't have it. FlaggedRevs took how long after the initial discussions about "stable versions"? I don't think that this is a matter of just making a couple of tweaks to the AbuseFilter; it would require either duplicating a lot of FlaggedRevs functionality in AbuseFilter or tying the two extensions together somehow. I'm not saying we shouldn't request software features, but this is asking for what's likely a significant software change (I haven't actually looked) as an alternative to something that we can basically implement right now. Until someone says "Yes, I can and am willing to code this", it's basically vaporware. Mr.Z-man 23:10, 10 February 2009 (UTC)

"If it ain't broke, don't fix it"
Sorry if this ends up sounding like a rant, but come on, we already have ways of filtering out this stuff within seconds: the human brain, aided by tools! We don't need a new software feature to help remove this vandalism, which we are getting better at removing all the time. Tools like Huggle already make a queue of edits for us to look over; do we really need a feature to keep suspected vandalism edits from showing up until they are approved, if they are only going to be visible for about a minute tops anyway? I think not.--Res2216firestar 17:32, 30 January 2009 (UTC)
 * Many vandalism edits are indeed reverted within minutes, but not all. For example, look at the history of forum spam: this remained for 16 hours. And on high-visibility pages, even a few minutes of vandalism are seen by dozens or hundreds of people (examples:, , , , , and those are semi'd). But lesser-watched articles, missed by bots and Huggle, often remain in a vandalized state for hours or days. It certainly won't create much backlog, since edits will be reverted by Huggle, bots and manually, relatively quickly, but it would filter out a great part of vandalism. Cenarium  (Talk)  19:23, 30 January 2009 (UTC)
 * Missed by bots means we should invest more in bots: the Germans made 1,578,272 edits for sighting and they didn't reach the target; if we invest that amount of time and energy in the development of the bots.... Mion (talk) 23:27, 30 January 2009 (UTC)
 * Bots are not a panacea - you still need human beings to invest their time, since computer programs are fundamentally unintelligent. Fritzpoll (talk) 11:05, 2 February 2009 (UTC)
 * I agree, but as someone stated, many vandals are not original, pointing new bots to specific article groups (BLP) helps a little Mion (talk) 11:12, 2 February 2009 (UTC)
 * We won't have consensus to fully enable flagged revisions on all articles, and even less on all non-talk pages, any time soon (my personal opinion is also against massive implementations but it doesn't really matter). But we'll probably have enough support to implement a system deferring edits automatically identified as suspect. And this is already a huge improvement. Cenarium  (Talk)  17:47, 4 February 2009 (UTC)
 * Similarly, this project has no aim to be finished today; work on the bots is in progress to make them more intelligent (en:User:Crispy1989: "The engine's core will be made up of a feed-forward artificial neural network with back-propagation"). Since a good thing about this project is that we have highly skilled members, it would make sense to help out and train such a bot specifically for BLPs; results from such a bot could be ported to a special BLP recent changes patrol group. Mion (talk) 01:53, 5 February 2009 (UTC)