Open peer review

Open peer review is an umbrella term for various possible modifications of the traditional scholarly peer review process. The three most common modifications to which the term is applied are:
 * Open identities: Authors and reviewers are aware of each other's identity.
 * Open reports: Review reports are published alongside the relevant article (rather than being kept confidential).
 * Open participation: The wider community (and not just invited reviewers) is able to contribute to the review process.

These modifications are intended to address various perceived shortcomings of the traditional scholarly peer review process, in particular its lack of transparency, lack of incentives for reviewers, wastefulness, and susceptibility to bullying and harassment.

Definitions
Open identities: Open peer review may be defined as "any scholarly review mechanism providing disclosure of author and referee identities to one another at any point during the peer review or publication process". The reviewers' identities may or may not be disclosed to the public. This is in contrast to the traditional peer review process, in which reviewers remain anonymous to everyone but the journal's editors, while the authors' names are disclosed from the beginning.

Open reports: Open peer review may be defined as making the reviewers' reports public, instead of disclosing them to the article's authors only. This may include publishing the rest of the peer review history, i.e. the authors' replies and editors' recommendations. Most often, this concerns only articles that are accepted for publication, and not those that are rejected.

Open participation: Open peer review may be defined as allowing self-selected reviewers to comment on an article, rather than (or in addition to) having reviewers who are selected by the editors. This assumes that the text of the article is openly accessible. The self-selected reviewers may or may not be screened for their basic credentials, and they may contribute either short comments or full reviews.

History
In 1999, the open access journal Journal of Medical Internet Research was launched, which from its inception decided to publish the names of the reviewers at the bottom of each published article. Also in 1999, the British Medical Journal moved to an open peer review system, revealing reviewers' identities to the authors but not the readers, and in 2000, the medical journals in the open access BMC series published by BioMed Central launched using open peer review. As with the BMJ, the reviewers' names are included on the peer review reports; in addition, if the article is published, the reports are made available online as part of the "pre-publication history".

Several other journals published by the BMJ Group allow optional open peer review, as does PLoS Medicine, published by the Public Library of Science. The BMJ's Rapid Responses allows ongoing debate and criticism following publication.

In June 2006, Nature launched an experiment in parallel open peer review: some articles that had been submitted to the regular anonymous process were also available online for open, identified public comment. The results were less than encouraging – only 5% of authors agreed to participate in the experiment, and only 54% of those articles received comments. The editors have suggested that researchers may have been too busy to take part and were reluctant to make their names public. The knowledge that articles were simultaneously being subjected to anonymous peer review may also have affected the uptake.

In February 2006, the journal Biology Direct was launched by BioMed Central, adding another alternative to the traditional model of peer review. If authors can find three members of the Editorial Board who will each return a report or will themselves solicit an external review, the article will be published. As with the open-review journal Philica, reviewers cannot suppress publication, but in contrast to Philica, no reviews are anonymous and no article is published without being reviewed. Authors have the opportunity to withdraw their article, to revise it in response to the reviews, or to publish it without revision. If the authors proceed with publication of their article despite critical comments, readers can clearly see any negative comments along with the names of the reviewers. In the social sciences, there have been experiments with wiki-style, signed peer reviews, for example in an issue of the Shakespeare Quarterly.

In 2010, the BMJ began publishing signed reviewers' reports alongside accepted papers, after determining that telling reviewers that their signed reviews might be posted publicly did not significantly affect the quality of the reviews.

In 2011, Peerage of Science, an independent peer review service, was launched with several non-traditional approaches to academic peer review. Most prominently, these included the judging and scoring of the accuracy and justifiability of peer reviews, and the concurrent use of a single peer review round by several participating journals. Peerage of Science went out of business only a few years after it was founded, because it could attract neither enough publishers nor enough reviewers.

Starting in 2013 with the launch of F1000Research, some publishers have combined open peer review with post-publication peer review by using a versioned article system. At F1000Research, articles are published before review, and invited peer review reports (and reviewer names) are published with the article as they come in. Author-revised versions of the article are then linked to the original. A similar post-publication review system with versioned articles is used by ScienceOpen, launched in 2014.

Also in 2013, researchers from the College of Information and Computer Sciences at the University of Massachusetts Amherst founded the OpenReview website to host anonymized review reports together with articles; as of 2023, it is popular among computer scientists.

In 2014, eLife implemented an open peer review system, under which the peer review reports and authors' responses are published as an integral part of the final version of each article.

Since 2016, Synlett has been experimenting with closed crowd peer review: the article under review is sent to a pool of more than 80 expert reviewers, who then collaboratively comment on the manuscript.

In an effort to address issues with the reproducibility of research results, some scholars are asking that authors agree to share their raw data as part of the peer review process. As far back as 1962, for example, a number of psychologists have attempted to obtain raw data sets from other researchers, with mixed results, in order to reanalyze them. A recent attempt resulted in only seven data sets out of fifty requests. The notion of obtaining, let alone requiring, open data as a condition of peer review remains controversial. In 2020, reviewers' lack of access to raw data led to article retractions in the prestigious journals The New England Journal of Medicine and The Lancet. Many journals now require access to raw data to be included in peer review.

Adoption by publishers
These publishers and journals operate various types of open peer review:
 * Publishers:
   * BioMed Central
   * BMJ Group
   * Copernicus Publications
   * European Molecular Biology Organization (EMBO)
   * Frontiers
   * MDPI (authors have the option to publish peer review reports, etc.)
   * Nature Research
   * Open Research Europe (the European Union publishing platform for Horizon Europe and Horizon 2020 projects)
 * Journals:
   * eLife
   * GigaScience
   * PeerJ
   * PLOS
   * ReScience C
   * SciPost
   * Semantic Web journal by IOS Press
   * WikiJournal

Peer review at The BMJ, BioMed Central, EMBO, eLife, ReScience C, and the Semantic Web journal involves posting the entire pre-publication history of the article online, including not only signed reviews of the article, but also its previous versions and in some cases names of handling editors and author responses to the reviewers. Furthermore, the Semantic Web journal publishes reviews of all submissions, including rejected ones, on its website, while eLife plans to publish the reviews not only for published articles, but also for rejected articles.

The European Geosciences Union operates public discussions in which open peer review is conducted before suitable articles are accepted for publication in its journals.

Sci, an open access journal covering all research fields, adopted a post-publication public peer review (P4R) model: after a brief and limited initial check for scientific soundness, proper reporting, plagiarism and offensive material, manuscripts are made immediately visible on the journal's online platform and opened to public review by the entire community.

In 2021, the authors of nearly half of the articles published by Nature chose to publish the reviewer reports as well. The journal considers this an encouraging trial of transparent peer review.

Open peer review of preprints
Some platforms, including some preprint servers, facilitate open peer review of preprints.
 * Beginning in 2007, the platform SciRate allowed registered users to recommend articles posted on the arXiv preprint server, displaying the number of recommendations or "scites" each preprint had received.
 * Since 2013, the platform OpenReview has provided a flexible system for performing open peer review, with various choices about "who has access to what information, and when". This platform is commonly used by computer science conferences.
 * In 2017, the platform PREreview was launched to empower diverse and historically excluded communities of researchers (particularly those at the early stages of their careers) to find a voice, train, and engage in open peer review of preprints. Reviewers can review preprints from over 20 preprint servers on the platform.
 * In 2019, the preprint server bioRxiv started allowing reviews to be posted alongside preprints, in addition to allowing comments on preprints. The reviews can come from journals or from platforms such as Review Commons.
 * In 2019, Qeios launched a multidisciplinary, open-access scientific publishing platform that allows the open peer review of both preprints and final articles.
 * In 2020, in the context of the COVID-19 pandemic, the platform Outbreak Science Rapid PREreview was launched in order to perform rapid open peer review of preprints related to emerging outbreaks. The platform initially worked with preprints from medRxiv, bioRxiv and arXiv.

Argued
Open identities have been argued to incite reviewers to be "more tactful and constructive" than they would be if they could remain anonymous, while also allowing authors to accumulate enemies who may try to keep their papers from being published or their grant applications from being successful.

Open peer review in all its forms has been argued to favour more honest reviewing and to prevent reviewers from pursuing their individual agendas.

An article by Lonni Besançon et al. has also argued that open peer review helps evaluate the legitimacy of manuscripts that involve editorial conflicts of interest; the authors argue that the COVID-19 pandemic has spurred many publishers to open up their review process, increasing its transparency.

Observed
In an experiment with 56 research articles accepted by the Medical Journal of Australia in 1996–1997, the articles were published online together with the peer reviewers' comments; readers could email their comments and the authors could amend their articles further before print publication. The investigators concluded that the process had modest benefits for authors, editors and readers.

Some studies have found that open identities lead to an increase in the quality of reviews, while other studies find no significant effect.

Open peer review at BMJ journals has lent itself to randomized trials to study open identity and open report reviews. These studies did not find that open identities and open reports significantly affected the quality of review or the rate of acceptance of articles for publication, and there was only one reported instance of a conflict between authors and reviewers ("adverse event"). The only significant negative effect of open peer review was "increasing the likelihood of reviewers declining to review".

In some cases, open identities have helped detect reviewers' conflicts of interests.

Open participation has been criticised as being a form of popularity contest in which well-known authors are more likely than others to get their manuscripts reviewed. However, even with this implementation of open peer review, both authors and reviewers acknowledged that open reviews could lead to higher-quality reviews, foster collaboration and reduce the "cite-me" effect.

According to a 2020 Nature editorial, experience from Nature Communications negates the concerns that open reports would be less critical, or would require an excessive amount of work from reviewers.

Because reviewer comments are published, it is possible to conduct quantitative studies of the peer review process. For example, a 2021 study found that scrutiny by more reviewers generally does not correlate with more impactful papers.