Wikipedia:Reference desk/Archives/Computing/2020 May 2

= May 2 =

What are the current machine learning processes in place to allow anti-vandalism bots to learn from instances of vandalism that fall through the cracks and go undetected?
Lately there seems to be a decline in human-reverted vandalism and an increasing reliance on robots to revert obvious cases. I've sometimes caught vandalism that was left on pages unreverted for days, even weeks, whereas this would have been unheard of years ago. I wonder what steps are being taken to allow the current robots on duty to learn via machine learning (speaking as someone who uses machine learning for scientific and medical research), and what are the current processes in place to "feed" undetected positives (false negatives) and false positives back to the robots so they can better learn from these mistakes? Yanping Nora Soong (talk) 20:15, 2 May 2020 (UTC)
 * Check out the FAQ at User:ClueBot_NG. It has a section on the algorithms it uses for learning.  There is a way to report false positives, so there may also be a way to provide diffs of vandalism the bot did not catch.  You can ask questions on the bot's talk page.  RudolfRed (talk) 20:53, 2 May 2020 (UTC)
 * I see from the bot's talk page that reports of false negatives are not accepted, so I was mistaken in that. RudolfRed (talk) 20:59, 2 May 2020 (UTC)
 * The original ClueBot was a work of genius. It used a few hand-coded heuristics and no buzzword-engineering machine learning at all.  I don't know if NG is different.  I've been meaning to get around to looking at it.  2602:24A:DE47:B270:DDD2:63E0:FE3B:596C (talk) 20:05, 3 May 2020 (UTC)
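As background to the false-positive discussion above: one common pattern for bots of this kind (a hedged illustration, not ClueBot NG's actual code or data) is to have a classifier emit a vandalism score per edit, then pick the revert threshold on a labeled validation set so the false-positive rate stays under a target. Reported false positives become new labeled examples for the next calibration round. The function and data below are hypothetical.

```python
def calibrate_threshold(scores, labels, max_fp_rate=0.001):
    """Pick the lowest revert threshold whose false-positive rate
    on a labeled validation set does not exceed max_fp_rate.

    scores -- predicted vandalism scores in [0, 1]
    labels -- True if the edit really was vandalism
    """
    # Scores of edits that are NOT vandalism; reverting any of these
    # would be a false positive.
    negatives = [s for s, v in zip(scores, labels) if not v]
    if not negatives:
        return 0.5  # no negatives to calibrate against; fall back

    best = 1.0
    # Try candidate thresholds from the observed scores, highest first;
    # lowering the threshold catches more vandalism but risks more FPs.
    for t in sorted(set(scores), reverse=True):
        fp = sum(1 for s in negatives if s >= t)
        if fp / len(negatives) <= max_fp_rate:
            best = t  # still within the FP budget; keep lowering
        else:
            break
    return best


# Toy validation set (made-up numbers): a 25% FP budget allows one
# of the four good edits to be misflagged, so the threshold can drop
# to 0.40 and catch the vandalism scored there.
val_scores = [0.95, 0.90, 0.85, 0.40, 0.30, 0.10, 0.05]
val_labels = [True, True, False, True, False, False, False]
threshold = calibrate_threshold(val_scores, val_labels, max_fp_rate=0.25)
```

In this scheme, reported false negatives (missed vandalism) would only help if they were added to the training set for the classifier itself; they do not affect the threshold calibration, which is driven entirely by the false-positive budget.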