User:Headbomb/unreliable

How to install
Once installed, you can go to User:Headbomb/unreliable/testcases to see if it works.
 * Method 1 – Automatic
 * 1) Go to the 'Gadgets' tab of your preferences and, at the bottom of the 'Advanced' section, select the 'Install scripts without having to manually edit JavaScript files' option. Refresh this page after enabling it.
 * 2) Click on the 'Install' button in the infobox on the right, or at the top of the source page.
 * Method 2 – Manual
 * 1) Go to Special:MyPage/common.js. (Alternatively, you can go to Special:MyPage/skin.js to make the script apply only to your current skin.)
 * 2) Add  to the page (you may need to create it), like this.
 * 3) Save the page and bypass your cache to make sure the changes take effect.
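For the manual method, the line to add is typically a single importScript call. A sketch, assuming the script page is User:Headbomb/unreliable.js (inferred from this page's title, not stated in the text above):

```javascript
// In [[Special:MyPage/common.js]] (or skin.js). The exact page name is an
// assumption inferred from this documentation page's title.
importScript('User:Headbomb/unreliable.js');
```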

What it does
The script highlights external links (including DOIs) to various sources, broken down into different 'severities' of unreliability. In general, the script is kept in sync with:
 * WP:CITEWATCH
 * WP:DEPRECATE
 * WP:NPPSG
 * WP:RSN discussions
 * WP:RSPSOURCES
 * WP:SPSLIST (not fully synced)
 * WP:VSAFE/PSOURCES
 * the predatory open access source list
as well as common-sense "duh" cases I come across (like parody websites), with some minor differences.

If you see a source that should be highlighted but isn't (or shouldn't be highlighted but is), first let me know on the talk page, along with the relevant website or DOI. But since I do not want my opinion to be king, I maintain a general policy that everything is appealable at WP:RSN, in case of mistakes, accidental misclassifications, etc.

Common cleanup and non-problematic cases

 * WP:RSCONTEXT
 * The first question to ask about any source's reliability is: reliable for what? For example, Gawker is considered generally unreliable, but that does not mean specific Gawker articles can never be cited. If Gawker had a scoop that was subsequently picked up by other reliable sources, it could be entirely acceptable to write that Gawker first reported the story on 12 September 2019, followed by The New York Times on 15 September and several other outlets afterwards. However, such a citation could also violate WP:DUE, WP:PRIMARY, WP:BLPSOURCE, and many other policies and guidelines. Compare these two situations, using the same hypothetical source


 * The script cannot distinguish between these nuances, so use it as a scalpel, not a hammer. If you are worried about the drive-by removal of a source that is used acceptably, you can always put a comment next to the source, e.g.


 * Book sellers (e.g. Amazon) and copyvio farms (e.g. Scribd)
 * Often, the 'problem' highlighted by the script is a citation in need of cleanup rather than an actual sourcing problem. For instance, Amazon.com is not considered a reliable source, but people will often link to Amazon.com as a way to refer to a book sold there. In those cases, the Amazon citation should simply be converted to a proper cite book
 * Amazon.com →
 * Likewise, if an ASIN is present, the solution is simply to replace the ASIN with an ISBN and/or OCLC number, if available
 * → ISBN 978-0060555665 /
 * If there's an ASIN as well as an ISBN/OCLC, the ASIN should be removed; otherwise, leave it there as an identifier of 'last resort'. Likewise, if you find a link to an otherwise good document hosted in violation of copyright, simply update the citation to refer to the proper document and remove the copyright-violating link.


 * External links
 * Many sources are not acceptable as references, but are acceptable as external links, e.g. IMDb, Discogs, etc. For more information on which external links are acceptable, see WP:ELYES, WP:ELMAYBE, and WP:ELNO.


 * General repositories
 * Many citations will look like
 * ✅ Lewoniewski, W.; Węcel, K.; Abramowicz, W. (2017). "Analysis of References Across Wikipedia Languages". In Damaševičius, R.; Mikašytė, V. (eds.). Information and Software Technologies. Communications in Computer and Information Science. 756. Cham: Springer. pp. 561–573. . ISBN 978-3-319-67641-8.
 * When you have a yellow "article" link, but a plain DOI link, that usually means the article links to a general repository like Academia.edu, HAL, ResearchGate, Semantic Scholar, Zenodo, and many others. This is generally not problematic, though it could also be that the registrant hasn't been evaluated yet (especially if the DOI prefix is over 10.20000). Note that if there is no freely-available PDF at those repositories, but other identifiers (like the DOI or ISBN) are present, you can remove the repository link as pointless.
 * When no plain DOI link is present, you may wish to verify that the document being cited is from a proper journal, as general repositories usually do not prevent preprints or papers from predatory journals from being uploaded. For example, this publication, like the one above, is also hosted on ResearchGate
 * ❌ Feng, Youjun; Wu, Zuowei (2011). "Streptococcus suis in Omics-Era: Where Do We Stand?". Journal of Bacteriology & Parasitology. S2: 001.
 * Inspection of this source reveals that the Journal of Bacteriology & Parasitology is published by the OMICS Publishing Group, one of the most infamous predatory publishers out there. Thus this second source is problematic, while the first one (published by Springer Science+Business Media) is not.


 * Google Books / OCLC
 * Google Books and OCLC host a variety of content, including self-published books and books from blacklisted publishers. As such, both Google Books and OCLC links will be highlighted in yellow. These links will often not be problematic, but you may wish to verify that the book being cited is from a reputable publisher.
 * ✅ Wong, S. S. M. (1998). Introductory Nuclear Physics (2nd ed.). New York: Wiley Interscience. ISBN 978-0-471-23973-4. OCLC 1023294425.
 * ❌ Pratt, J. (2011). Stewardship Economy: Private Property Without Private Ownership. San Francisco: Lulu.com. ISBN 978-1-4467-0151-5. OCLC 813296703.
 * Note that if you have a 'main' OCLC link, it's usually a good idea to convert it into an OCLC identifier (or remove the link if the OCLC identifier is already there), as a main link makes it look like a freely available version of the book exists.


 * Preprints
 * The links to arXiv/bioRxiv/CiteSeerX/SSRN preprints and documents generated via the arxiv/biorxiv/citeseerx/ssrn parameters of the various templates are obviously not problematic for reliability, so long as the citation itself isn't problematic. Citations to preprints will often be acceptable for routine claims as self-published expert sources, but they will invariably fail WP:MEDRS and will not meet higher standards of sourcing, as preprints are not peer-reviewed (or reflect a state prior to peer review). Keep in mind that several papers hosted on preprint repositories will have been published in peer-reviewed venues (and some of those are even technically postprints), so you should always investigate rather than assume that something is unreliable simply because it's on a preprint server. You may simply need to update the citation to a proper cite journal, rather than a cite arxiv or cite biorxiv.


 * Social media and video-sharing platforms
 * These sources are as reliable as the account owner. For example, a tweet by NASA, or a YouTube video from BBC News, is as reliable as those organizations are. Videos of random people giving their opinions, not so much. Beware of WP:COPYVIOEL, where Randy in Boise has uploaded a video from BBC News or some other valid source.
 * ✅ Valentine, Andre. (15 July 2023). "Celebrating the Webb Space Telescope's First Year of Science on This Week @NASA – July 14, 2023". NASA's YouTube Channel – via YouTube.com.
 * ❌ CoasterFan2105. (6 May 2016). "California Trains! 1 Hour, 150+ Trains!". CoasterFan2105's YouTube Channel – via YouTube.com.

What the script looks for
The script only operates on:
 * URL links, including those generated through templates like citation, cite journal, cite web, doi, and others.
 * If no URL link is present, the text inside  tags
 * If no URL link is present, the text inside list items

That is, it will detect links to Deprecated.com, as well as references and list items that mention Deprecated.com, but it won't recognize other mentions of Deprecated.com in the text. In practice, this means that all URLs are checked (regardless of where they are), as well as all lists of publications/bibliographies/references that follow a regular format (including those in the Further reading/External links sections).
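As a rough sketch of the matching step described above (an assumed shape, not the script's actual code), each rule's regex is tested against the URL when one is present, and otherwise against the plain text of the reference or list item:

```javascript
// Minimal sketch of the matching logic (an assumption, not the script's
// real implementation). A rule's regex is tested against the URL if one
// is present, otherwise against the reference/list-item text.
const rules = [
  { comment: 'Deprecated.com', regex: /\bdeprecated\.com\b/i },
];

function matchRule(url, text) {
  const target = url || text; // prefer the URL when there is one
  return rules.find(r => r.regex.test(target)) || null;
}

// A link to the site matches...
console.log(matchRule('https://deprecated.com/article', null).comment);
// ...and so does a bare mention in a reference or list item...
console.log(matchRule(null, 'John Smith (2014). "Article". Deprecated.com.').comment);
// ...but prose elsewhere in the article is never scanned by the script.
```

This mirrors the behaviour described above: URLs are always checked, while bare mentions are only checked inside references and list items.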


 * → John Smith (2014). " Article of things ". Deprecated.com. Accessed 2020-02-14.
 * → John Smith (2014). " Article of things ". Deprecated.com. Accessed 2020-02-14.


 * → John Smith (2014). "Article of things". Available on Deprecated.com. Accessed 2020-02-14.
 * → John Smith (2014). "Article of things". Available on Deprecated.com. Accessed 2020-02-14.

The script can easily classify DOIs by their DOI prefixes, which correspond to various registrants (for instance  belongs to the OMICS Publishing Group). Most registrants are publishers, some are individual journals.
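Prefix classification can be sketched as a simple lookup. Everything below ('10.99999', the registrant name) is made up for illustration and is not the script's real data:

```javascript
// Hypothetical sketch of classifying a DOI by its registrant prefix.
// The prefix '10.99999' and the registrant name are invented examples.
const flaggedRegistrants = {
  '10.99999': 'Example Predatory Publisher',
};

// Extract the registrant prefix (the part before the first slash).
function doiPrefix(doi) {
  const m = /^(10\.\d{4,9})\//.exec(doi);
  return m ? m[1] : null;
}

console.log(doiPrefix('10.99999/example.2020.001'));      // '10.99999'
console.log(flaggedRegistrants[doiPrefix('10.99999/x')]); // 'Example Predatory Publisher'
```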

The script can also classify DOIs through "starting patterns", but this is trickier. For example, Chaos, Solitons & Fractals has DOIs like  or . These have starting patterns of  and , which will not match other journals. However, this is very tricky to determine, as those patterns can vary over time, and can be hard to recognize as meaningful patterns (here  is related to the ISSN of that specific journal, and isn't just a random string like ). They could also be so closely related to the patterns of other journals as to cause a collision.
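A starting-pattern rule can be sketched as a regex anchored on the beginning of the DOI, covering the prefix plus a journal-specific suffix fragment. The prefix and suffix here ('10.99999', 'j.fake') are invented for illustration:

```javascript
// Hypothetical "starting pattern" rule: anchored at the start of the DOI,
// matching the registrant prefix plus a journal-specific suffix fragment.
// '10.99999' and 'j.fake' are made-up stand-ins, not real patterns.
const journalPattern = /^10\.99999\/j\.fake\./i;

console.log(journalPattern.test('10.99999/j.fake.2020.123456'));  // true: this journal
console.log(journalPattern.test('10.99999/j.other.2020.123456')); // false: same registrant, other journal
```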

False positives
Because the script looks for strings that correspond to URL domains anywhere in the URL, it can match the URLs of other websites. For example, the script cannot distinguish between the two links below. Both will be highlighted as 'deprecated', even though Alexa.com is not.
 * → Moon cheese cures cancer!
 * → Deprecated.com traffic analysis
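The mechanics of this false positive can be sketched as follows (assumed rule shape: the domain is matched as a substring anywhere in the URL):

```javascript
// Sketch of the false positive described above (assumed rule shape):
// the regex looks for the domain anywhere in the URL, so a page ABOUT
// deprecated.com on another site also matches.
const deprecatedRule = /\bdeprecated\.com\b/i;

const genuine = 'https://deprecated.com/moon-cheese-cures-cancer';
const falsePositive = 'https://www.alexa.com/siteinfo/deprecated.com';

console.log(deprecatedRule.test(genuine));       // true: correctly flagged
console.log(deprecatedRule.test(falsePositive)); // also true, even though alexa.com itself is fine
```

Anchoring the regex on the URL's hostname would avoid this, at the cost of missing mentions in plain reference text.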

False negatives
Because the script looks for sources that are often or generally problematic in some way, sources that are generally acceptable (e.g. Toxicology Reports) will not get flagged when they are misused or when a specific article is unreliable. For example,  is a retracted paper. The script has no way of knowing this, and thus will not flag the paper as problematic.

Likewise, an op-ed published in The New York Times is only as reliable as the contributor who wrote it, but the script has no way of knowing whether an article is an op-ed or a regular article, and will not flag it. A "regular" New York Times article might also fail higher sourcing requirements like WP:BLP or WP:MEDRS; the script will not flag those either.

Likewise, if no URL/DOI is provided, the source will not get flagged. For example, the  is from an iMedPub journal, a subsidiary of the (in)famous OMICS Publishing Group predatory publisher, and does not get flagged because of the lack of a recognizable URL/DOI.

Comments
For technical reasons, the script will also sometimes highlight entire comments made in ordered and unordered lists (i.e. comments that start with  or ). This can be avoided by giving the actual link, in which case only the link will be highlighted. You can also use a [.] instead of a dot to suppress the highlighting.
 * Keep When searching for sources, I found something on Deprecated.com that would indicate that the Foobarological Remedies are responsible for over 25% of remissions. This should count for meeting WP:N. User:Example (talk) 17:29, 19 August 2020 (UTC)
 * Actually, that site is not a reliable source, and does not establish notability, much less efficacy per WP:MEDRS. User:Example2 (talk) 18:29, 19 August 2020 (UTC)
 * Keep When searching for sources, I found something on Deprecated.com that would indicate that the Foobarological Remedies are responsible for over 25% of remissions. This should count for meeting WP:N. User:Example (talk) 17:29, 19 August 2020 (UTC)
 * Actually, that site is not a reliable source, and does not establish notability, much less efficacy per WP:MEDRS. User:Example2 (talk) 18:29, 19 August 2020 (UTC)
 * Keep When searching for sources, I found something on Deprecated[.]com that would indicate that the Foobarological Remedies are responsible for over 25% of remissions. This should count for meeting WP:N. User:Example (talk) 17:29, 19 August 2020 (UTC)
 * Actually, that site is not a reliable source, and does not establish notability, much less efficacy per WP:MEDRS. User:Example2 (talk) 18:29, 19 August 2020 (UTC)

Another alternative is to use  instead of   or   to start the comments.
 * Keep When searching for sources, I found something on Deprecated.com that would indicate that the Foobarological Remedies are responsible for over 25% of remissions. This should count for meeting WP:N. User:Example (talk) 17:29, 19 August 2020 (UTC)
 * Actually, that site is not a reliable source, and does not establish notability, much less efficacy per WP:MEDRS. User:Example2 (talk) 18:29, 19 August 2020 (UTC)

However, this will cause accessibility problems, suppress such highlighting in the entire comment chain (including future comments), and will cause the script to not warn you (or anyone else) that a problematic source was mentioned (including on later comments), so this method is not normally recommended.

Custom rules
It is possible to define your own set of additional rules (so that, for example, you could test a new rule locally before proposing it). These rules are applied after the default rules, so if a link matches both a default rule and a custom rule, only the default rule's formatting will be applied. To add custom rules, create the page Special:MyPage/unreliable-rules.js and add the following:

unreliableCustomRules = [
	{
		comment: 'Name of the rule', // Will show as a tooltip
		regex: /regex rules/i,
		css: {CSS style to apply to links that match the rule},
		filter: (filter to use for the rule, optional),
	},
];

See the section below for concrete examples. You may add additional rule blocks by copying and pasting the code between the curly braces multiple times. Make sure that each closing curly brace has a comma after it. You can also look at User:Headbomb/unreliable.js for other examples (search for the phrase ; your custom rules should be formatted the same way).

Example 1: Bypass highlighting
For example, if you do not wish to have Google Books links highlighted in yellow, you can add the following to Special:MyPage/unreliable-rules.js:

unreliableCustomRules = [
	{
		comment: 'Plain google books',
		regex: /\b(books\.google)/i,
		css: { "background-color": "" }
	},
];

and the background colour #fffdd0 will no longer be applied. If you instead want to change Google Books links to a different colour with a red border, like #7cfc00, then use

instead of

in the above example.

Example 2: Add a source
If you have a specific source that needs to be added, you should generally ask for it to be added on the talk page of the script (if obvious) or at WP:RSN (if consensus is needed); this way everyone using the script can benefit from its detection. However, if the source doesn't warrant being flagged by the script for everyone, but you'd like it to be flagged for you (for example, Biodiversity Heritage Library and ChemSpider links), you can create your own rules by adding the following to Special:MyPage/unreliable-rules.js:

unreliableCustomRules = [
	{
		comment: 'Biodiversity Heritage Library',
		regex: /\b(biodiversitylibrary\.com)/i,
		css: { "background-color": "#40e0d0" }
	},
	{
		comment: 'ChemSpider',
		regex: /\b(chemspider\.com)/i,
		css: { "background-color": "#d8bfd8" }
	},
];

and these links will be highlighted in #40e0d0 and #d8bfd8 respectively.

Source code
While I (Headbomb) came up with the idea for the script and am the person maintaining it, the basic script was designed by SD0001, with refinements by Jorm and creffett. Anything clever in the code is from them. I'm mostly just maintaining the list of sources to be covered.