Gates Open Research

8 things you should know about open peer review.

12 October, 2021

open peer review

Ready to get to grips with open peer review? In this blog post, we highlight 8 things you should know about open peer review and its role in the open research process.

Open peer review refers to the various possible modifications of the traditional single- or double-blind peer review process that together make peer review more transparent. Open peer review is considered a more progressive approach than traditional models. Arguably, increased transparency in the peer review process not only leads to a greater understanding of the published research, but also to more constructive peer reviews and public recognition for reviewers.

Just like traditional peer review models, open peer review is a key pillar of research communication. Scholars, scientists, and the public alike rely on peer review to uphold research integrity and ensure that published research is valid and trustworthy. There are a range of key differences between open peer review and blinded, closed models. Here, we’ll take a look at 8 essential facts you should know about open peer review.

#1: There are over 20 definitions of open peer review

It may be a surprise to some that there is no one universally agreed-upon definition of open peer review. One 2017 study found 22 distinct configurations of 7 key open peer review traits. According to this study, open peer review refers to some combination of openness involving:

  • author and reviewer identities
  • peer review reports
  • participation of the wider research community in peer review
  • reciprocal discussion between authors and reviewers
  • publication of research in advance of peer review
  • post-publication commenting
  • publication platforms that separate the review process from the publication process
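Since each of these traits can be adopted independently, an OPR "model" is effectively a subset of the traits. A minimal sketch of that idea (trait labels are paraphrased from the list above, not the study's exact terms):

```python
from itertools import combinations

# The seven openness traits listed above (labels paraphrased).
TRAITS = [
    "open_identities",
    "open_reports",
    "open_participation",
    "open_interaction",
    "open_pre_review_manuscripts",
    "open_final_version_commenting",
    "open_platforms",
]

def possible_configurations(traits):
    """Count every non-empty subset of traits a journal could adopt."""
    return sum(1 for r in range(1, len(traits) + 1)
               for _ in combinations(traits, r))

# 7 yes/no traits allow 127 non-empty combinations; the 2017 study
# observed only 22 of them in journals' actual policies.
print(possible_configurations(TRAITS))  # → 127
```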

Gates Open Research operates a post-publication, open peer review model. Articles are published online and then undergo formal, open peer review by invited reviewers. Author and reviewer identities are open and the peer review reports are published alongside the article. Furthermore, the wider research community can get involved using the open commenting system. Authors are also encouraged to respond to reviewer reports directly on the platform.

#2: Open peer review reports are published alongside the research and given a unique DOI

Open peer review reports are just as accessible to any reader as the research itself. This allows readers to see the range of reviews the papers receive – positive, negative, and neutral – which often reflects the real breadth of expert opinion in controversial and cutting-edge areas of science.

On the Gates Open Research platform, all peer review reports are published alongside the research and given a unique digital object identifier (DOI). This means that the peer review report can be cited independently from the article. The full citation for a peer review report can be obtained by clicking the Cite button next to the peer review report.
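As an illustration of what citing a review report via its own DOI makes possible, the sketch below assembles a plain-text citation from report metadata. The function name, the fields, and the example DOI are all hypothetical; the platform's Cite button produces its own official format.

```python
def cite_review_report(reviewer, year, article_title, version, doi):
    """Assemble a plain-text citation for a published peer review report.
    Field names and output format are illustrative only, not an official style."""
    return (f"{reviewer} ({year}). Peer Review Report For: {article_title} "
            f"[version {version}]. https://doi.org/{doi}")

# Hypothetical example values:
print(cite_review_report("Reviewer A", 2021, "An Example Study", 1,
                         "10.0000/example.r1"))
```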

#3: Open peer review means open identities, not only open reports

When invited reviewers submit their reports, their names and affiliations are also published. Incorporating open identities into the peer review model serves multiple purposes. Firstly, peer reviewers volunteer a considerable amount of time reviewing their peers’ work and providing suggestions for improvements. With open peer review, the reviewer can be recognized and acknowledged for their time and expertise. Secondly, when reviewers’ identities are known, there is an element of accountability to peer review. As the reviews themselves are open to public scrutiny, open peer review can improve the quality of the peer-review process itself. Additionally, unreported conflicts of interest may also be spotted by the wider research community. Finally, open identities arguably lead to more honest reviewing.

#4: Open peer review isn’t limited to just traditional research articles

On Gates Open Research, all article types go through the same open peer review system, including:

  • Research Articles
  • Method Articles
  • Study Protocols
  • Software Tool Articles
  • Registered Reports

Extending peer review beyond the validation of traditional Research Articles helps to ensure that all research outputs, no matter what format, can be trusted.

#5: Open peer review can take place following publication

With traditional peer review models, the journey from submission to peer review, and then to publication can take months—if not years. Conducting peer review after publication removes the delay for others who can benefit from accessing the work during the review period. Furthermore, this prevents research from being held up by a single reviewer and allows others in the field to assess the work for themselves and start building on it. On the Gates Open Research platform, an article’s peer review status is clearly signposted next to its title to ensure that readers are aware of which stage in the peer review process it has reached.

#6: Open peer review offers a unique opportunity for early career researchers (ECRs)

Early career researchers (ECRs) can participate in the open peer review process through co-reviewing. Co-reviewing is when the invited reviewer works with a colleague (often a more junior member of their team) to assess a manuscript together. Sometimes the invited reviewer will bring in a co-reviewer with specific expertise, to ensure all aspects of the article can be assessed fairly. On Gates Open Research, the names of all co-reviewers are listed with the reviewer reports. This allows ECRs to receive credit for their labors, form connections with others in their field, and build their review portfolio.

#7: Open peer review involves participation from the wider research community

With open commenting, anyone in the research community can contribute to reviewing an article – not just the invited reviewers. This encourages open scientific discussion that both engages the scientific community and serves to improve the research. Anyone who wishes to comment on an article will be asked to declare any competing interests, along with their full name and affiliation.

#8: Open peer review can take place over a series of article versions and authors can respond

Authors can choose to submit a new version of their article to Gates Open Research, either to address peer review comments or simply to share further updates. Moreover, authors are encouraged to respond to peer review reports openly on the platform, so both the reviewers and any other readers can see what changes have been made or the reasons why an author may have decided not to implement a reviewer’s suggestion. Additionally, this facilitates scientific discussion between authors and peer reviewers. Finally, once a new version is published, it undergoes open peer review again.

Open peer review puts transparency center stage. Now you’re up to speed with what open peer review looks like in practice, why not explore the wider benefits of open research or some of the myths surrounding open data?


Jessica Truschel

Senior Digital Marcomms Executive, F1000


Open Peer Review: Making Scientific Research More Transparent

by Facundo Santomé and Jonathan Steffen

When thinking about Open Peer Review, consider the following quotation.

“As you sit on the hillside, or lie prone under the trees of the forest, or sprawl wet-legged on the shingly beach of a mountain stream, the great door, that does not look like a door, opens.”

This quotation, from Stephen Graham’s 1926 classic The Gentle Art of Tramping, may seem a curious way to commence a discussion of the benefits of the Open Peer Review process. It contains, however, an observation that could be a motto for the Open Science movement: “The great door, that does not look like a door, opens.”

Graham is thinking of the pleasures of lying under an open sky and following whatever thoughts might float into one’s consciousness. The “great door”, which does not resemble a door, “opens”. Far from the laboratory bench or the seminar room, the author imagines that understanding will come to minds that are open to receive it. The essential is a preparedness to engage with the world without expectation or prejudice. It is this openness which is at the heart of the Open Peer Review Process.

Open Science and Open Access

Open Science is at the heart of Open Access (OA) publishing. In the definition from UNESCO, “Open Science is the movement to make scientific research and data accessible to all. It includes practices such as publishing open scientific research, campaigning for open access and generally making it easier to publish and communicate scientific knowledge. Additionally, it includes other ways to make science more transparent and accessible during the research process. This includes open notebook science, citizen science, and aspects of open source software and crowdfunded research projects.”

UNESCO goes on to define four key advantages of this movement:

  • Greater availability and accessibility of publicly funded scientific research outputs;
  • Possibility for rigorous peer-review processes;
  • Greater reproducibility and transparency of scientific works;
  • Greater impact of scientific research.

Open Peer Review is designed to facilitate the “rigorous peer-review process” specified here. It ensures that science published via Open Access is thoroughly scrutinized and adjudged scientifically valid before being shared.

The Drive towards Open Peer Review

Human nature, however, dictates that many institutions with a long and worthy history are not free from fault or blemish. The same goes for anonymous peer review. The then editor of the British Medical Journal (BMJ), Richard Smith, announced in 1999 that the journal would be abolishing anonymous review, offering the rationale that “a court with an unidentified judge makes us think immediately of totalitarian states and the world of Franz Kafka.”

It may be argued that the editor of a given journal is the ultimate judge of whether or not a submission should be published, and that the reviewer merely offers guidance to help the editor formulate an opinion, but the point is clear: what is ultimately at issue here is accountability, and proponents of the Open Peer Review process will argue that accountability is not possible without transparency.

As supporters of the Open Science movement, MDPI encourages the Open Peer Review process. We believe it offers substantial benefits to the scientific community as a whole because it helps contextualize scientific research in a transparent manner and it encourages open discussion of new findings.

Open Peer Review: Modalities

Perhaps surprisingly for a concept that has been around for at least 30 years, the term ‘Open Peer Review’ does not have a universally accepted definition. Here is how Tony Ross-Hellauer describes the range of meanings:

“While for some the term refers to peer review where the identities of both author and reviewer are disclosed to each other, for others it signifies systems where reviewer reports are published alongside articles. For others it signifies both of these conditions, and for yet others it describes systems where not only ‘invited experts’ are able to comment. For still others, it includes a variety of combinations of these and other novel methods.”

At MDPI, we work mainly with one variant of this broad definition. It starts with a traditional peer review process (closed), after which the review reports (including the identities of the reviewers in some cases) are published alongside the authors’ responses and the article.

There has been extensive debate as to whether reviewers might be less willing to review if they know their review might be published (with or without their name). We have not seen this happening, although it is true that the number of signed review reports is still low compared to open review reports published anonymously. Last year, a total of 14,880 review reports were published with the name of the expert who reviewed the article.

Benefits of Open Peer Review

As indicated, one benefit of instilling more openness in the peer review process is greater transparency and trust. It provides the scientific community with a window into the editorial decision-making process.

Transparency is one of the fundamental pillars of science. Open Peer Review offers researchers the opportunity to have a fully transparent process guiding the assessment of their work. Having reviewers’ reports published alongside the article also helps contextualize research and gives readers the benefit of additional expert opinions.

Increasing transparency in the peer review process also leads to more constructive peer reviews. It encourages high-quality comments, generally improving the overall quality of both the review and the article itself.

Credit for reviewers is another of the main benefits. Reviewing papers for submission is demanding and time-consuming, and Open Peer Review allows reviewers to gain credit for their work. It also provides insight in the form of feedback to authors.

Open Peer Review at MDPI

The journal Life was a pioneer in offering this opportunity to its authors in 2014. The first article openly published with peer-review reports was a review by Nobel Laureate Werner Arber. The review reports were published as supplementary material to the review.

This practice soon proved a popular option and the initiative was extended to 14 journals the same year. By 2018 the option of Open Peer Review for submitted papers was available across the whole MDPI portfolio.

Authors published in MDPI journals can choose to publish review reports and author responses with the published paper (open reports). Reviewers have the choice to have their names listed on their published report (open identity).

In 2020, MDPI published a total of 34,293 articles in Open Peer Review. This accounted for 21% of the total number of articles published in that year. However, only 14,880 reports were published with open identity.

This initiative has taught us that authors’ acceptance of full transparency varies across different disciplines.

Authors in specific fields of biology and medicine are more likely to welcome such openness. This year, journals such as BioTech (56%), European Burn Journal (58%), Journal of Developmental Biology (41%), Livers (41%), Epigenomes (41%), Geriatrics (40%), Diabetology (40%), Infectious Disease Reports (40%), and the International Journal of Neonatal Screening (40%), among others, published more than 40% of their articles in Open Peer Review format.

In disciplines such as the physical sciences, though, the percentage of Open Peer Review has been much lower. Some of MDPI’s leading journals in these subjects, such as Energies (3%), Symmetry (3%), and Mathematics (3%), have seen less than 5% of their articles published together with their reports in 2021.

Working in the Service of our Contributors

MDPI will continue encouraging authors to choose Open Peer Review. However, we will remain flexible towards them and also towards reviewers regarding their preferences. Our goal is to offer the best possible scientific publishing service and to ensure that the peer review process is rigorous, transparent, and benefits the scientific community as a whole.

Lying on a hillside like Stephen Graham and staring at the sky in the hope that the great door which is not a door might open is very different from working in a laboratory, or in the field, or, indeed, in the editorial office of a scientific publisher. Yet so many burning questions exist to which science has yet to find answers. As a leading Open Access publisher, we believe in encouraging openness of thinking and communication in addressing those unanswered questions. The Open Peer Review process is fundamental to that aspiration.


Open peer review: promoting transparency in open science

  • Open access
  • Published: 26 May 2020
  • Volume 125, pages 1033–1051 (2020)


  • Dietmar Wolfram (ORCID: orcid.org/0000-0002-4991-276X)
  • Peiling Wang (ORCID: orcid.org/0000-0003-4202-7570)
  • Adam Hembree
  • Hyoungjoo Park (ORCID: orcid.org/0000-0003-4271-1196)


Open peer review (OPR), where review reports and reviewers’ identities are published alongside the articles, represents one of the last aspects of the open science movement to be widely embraced, although its adoption has been growing since the turn of the century. This study provides the first comprehensive investigation of OPR adoption, its early adopters and the implementation approaches used. Current bibliographic databases do not systematically index OPR journals, nor do the OPR journals clearly state their policies on open identities and open reports. Using various methods, we identified 617 OPR journals that published at least one article with open identities or open reports as of 2019 and analyzed their wide-ranging implementations to derive emerging OPR practices. The findings suggest that: (1) there has been a steady growth in OPR adoption since 2001, when 38 journals initially adopted OPR, with more rapid growth since 2017; (2) OPR adoption is most prevalent in medical and scientific disciplines (79.9%); (3) five publishers are responsible for 81% of the identified OPR journals; (4) early adopter publishers have implemented OPR in different ways, resulting in different levels of transparency. Across the variations in OPR implementations, two important factors define the degree of transparency: open identities and open reports. Open identities may include reviewer names and affiliation as well as credentials; open reports may include timestamped review histories consisting of referee reports and author rebuttals or a letter from the editor integrating reviewers’ comments. When and where open reports can be accessed are also important factors indicating the OPR transparency level. Publishers of optional OPR journals should add metric data in their annual status reports.


Introduction

Peer review represents one of the foundations of modern scholarly communication. The scrutiny of peers to assess the merits of research and to provide recommendations for whether research exhibits sufficient rigor and novelty to warrant publication is intended to reduce the risk of publishing research that is sloppy, erroneous or, at worst, fabricated. The process of peer review is intended to help improve the reporting of research and to weed out work that does not meet the research community’s standards for research production.

Traditionally, peer review uses forms of blinded review where parties involved remain anonymous to reduce bias in the evaluation process. The most extensive form of blinded review, triple blind, anonymizes the process so that the author(s), reviewer(s) and the handling editor(s) are not aware of each other’s identities. A more common implementation is double blind peer review, where the author(s) and reviewer(s) are not aware of each other’s identities. To ensure author anonymity, authors must remove all content that might identify them to any reviewer. Single blind review is also commonly practiced, where reviewers are aware of the identities of the authors, but the authors do not know who has reviewed their manuscript. The question arises whether blinded peer review reduces bias and results in a more objective review. For authors, blinded reviews are like a black box. Blinding of reviewer identities may allow reviewers to use their anonymity to deliver more critical reviews or to write reviews that lack rigor because authors and readers will not know who the reviewers are. On the other hand, requiring reviewers to identify themselves may encourage greater accountability or could cause reviewers to blunt their criticisms (van Rooyen et al. 1999 ).

The open science movement has endeavored to increase the transparency of the production of scientific knowledge and to make products of scientific inquiry more broadly available. The most visible aspect of the open science movement to date has been open access (OA), where the products of scholarship are made freely available through open access journals or repositories. More recently, efforts have extended to the availability of open data and software, where datasets are shared and re-used. One of the last components of open science to be adopted is open peer review (OPR), where aspects of the peer review process, which have traditionally been hidden or anonymous, are made public.

Debate about the benefits of and concerns about OPR have been evident in scholarly communication. Malone ( 1999 ) believed that a fully open system increases responsibility and accountability and protects all parties more equitably: “Openness in peer review may be an idea whose time has come. What do you think?” (p. 151). At the 2016 Annual Meeting of the Association for Information Science and Technology, a panel of well-known scientists and editors engaged in a conversation and debate with conference attendees on the emerging open peer review innovation in the era of open science (Wang and Wolfram 2016 ). Similarly, at the 8th Peer Review Congress ( 2017 ), leaders in academic publishing held a panel on “Transparency in Peer Review.” The panelists discussed the various shades or spectrum of transparency in open peer review practices. Also touched upon was the lack of transparency in research proposal reviews, especially for private foundations. Attendees at the Congress raised another important question: “Should there also be transparency in reviewing reports of rejected manuscripts if they are a part of the scholarly ecosystem?” Launched in 2015, Peer Review Week ( 2017 ) set its theme for 2017 as Transparency in Review. Clobridge ( 2016 ) compared the benefits and challenges of OPR for authors, reviewers, and readers. She also cited three major players of OPR, PeerJ , F1000Research , and ScienceOpen . She noted that “Open peer review, while still a relatively new phenomenon, is catching the interest of many researchers and appears to be gaining momentum as part of the next wave of open knowledge and open science” (p. 62).

Will OPR become a more common scholarly practice like open access and open data in open science? Further research is needed to understand the concept of OPR and its diverse implementations by publishers as well as the perceptions and attitudes of scientists as authors and reviewers. The purpose of this study is to conduct a thorough search for and analysis of current OPR journals to address the following research questions:

What is the current state of OPR?

What has been the trend for OPR adoption?

Who are the early adopters of OPR?

Which disciplines have adopted OPR?

Which publishers are the front runners or leaders in OPR adoption?

How transparent are the emerging OPR implementations?

Do these journals adopt open reports?

Do these journals adopt open identities?

Literature review

In the era of digital open science, OA journals have mushroomed on the Web. Do these journals provide access to quality research? Does this openness extend to peer review and, if so, how is peer review conducted by these OA journals? In a sting-operation experiment, Science correspondent John Bohannon ( 2013 ) found that of the 304 versions of a fabricated paper with flawed research submitted to 304 OA journals, 255 submissions received a decision (the mean for acceptance was 40 days; the mean for rejection was 24 days). Surprisingly, 157 journals accepted a version of the paper. Was this reflected in the peer reviews? Only 36 reviews recognized the paper’s scientific problems whereas “about 60% of the final decisions occurred with no sign of peer review” (p 64). Rupp et al. ( 2019 ) concluded “although predatory publishing did not exist ten years ago, today, it represents a major problem in academic publishing” (p 516). There is an “apparent rise in scientific fraud” (Naik 2011 ) as well as peer review fraud. A “peer review ring” scandal resulted in the retraction of 60 articles at once by a prestigious journal (Barbash 2014 ). BioMed Central discovered fake peer reviewers involved in 50 manuscripts and took actions to investigate and retract 43 papers (Lawrence 2015 ). Haven et al. ( 2019 ) report from their survey and focus group that “Biomedical researchers and social science researchers were primarily concerned with sloppy science and insufficient supervision. Natural sciences and humanities researchers discussed sloppy reviewing and theft of ideas by reviewers, a form of plagiarism” (Abstract, Results).

The mainstream peer review systems in scientific and scholarly communication typically operate anonymously (Kriegeskorte 2012 ). This established, blind peer review model for journals has been criticized as being a flawed process (Smith 2006 ) or a broken system (Belluz et al. 2016 ). Peer review bias and unfairness exist to varying degrees in different disciplines (Lee et al. 2013 ; Rath and Wang 2017 ). Is there a way to restore the trust in peer review for scientific and scholarly publishing? Pioneers and innovators believe that transparency is the key (Fennell et al. 2017 ).

OPR initiatives and practices

A small number of pioneering journals have been offering forms of OPR since the turn of the century. Launched in 2001, the journal Atmospheric Chemistry and Physics , was among the first OA OPR journals (Pöschl and Koop 2008 ), along with 36 journals published by BioMed Central ( https://www.biomedcentral.com/journals-a-z ).

More than 10 years ago, Nature conducted a four-month trial of a hybrid model in which the manuscripts underwent formal closed review by referees and were posted to a preprint site for open review by community readers. The exploratory results showed limited use in improving the process. (Opening up peer review 2007 ). In January 2016, Nature Communications started a new OPR trial where the authors could decide on a blind or open review model at submission time and have their review reports published upon the acceptance of the manuscript while the reviewers could decide if they would remain anonymous or sign the review reports (Nature 2015 ). One year into the trial, 60% of the 787 published papers had open reports (Nature 2016 ). Four years later, Nature announced that it would add eight Nature Research journals to the trial project beginning in February 2020. The announcement reports that in 2018, 70% of the trial journal articles published open reports; 98% of the authors who published their reviewer reports responded they would do so again. Over the four years, 80% of papers had at least one referee named, which seemed to corroborate the results of a 2017 survey of Nature referees: the majority favored experimenting with alternative and more transparent models (Nature 2020 ).

F1000 beta-tested an open research platform as F1000Research in 2012. Articles submitted to F1000Research are published within 6–14 days and then undergo a fully transparent peer review process during which each reviewer’s recommendation and report are published alongside the article; the process is not moderated by an editor. A key feature of this post-publication OPR model is that F1000Research does not make decisions on acceptance or rejection. Instead, it applies an indexing rule based on the review results: a minimum of 2 approved ratings, or 1 approved plus 2 approved with reservations. Another distinct feature is that the review process is fully transparent and open in real time, with both open identities and open reports ( https://f1000research.com/for-referees/guidelines ).
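The indexing threshold described above can be sketched as a simple predicate. This is a paraphrase of the stated rule, not F1000Research's actual implementation:

```python
def meets_indexing_threshold(approved: int, approved_with_reservations: int) -> bool:
    """True once an article has enough positive reviews to be indexed:
    at least 2 'approved' ratings, or 1 'approved' plus
    2 'approved with reservations'."""
    return approved >= 2 or (approved >= 1 and approved_with_reservations >= 2)

# A single 'approved' plus one 'approved with reservations' is not yet enough...
print(meets_indexing_threshold(1, 1))  # → False
# ...but a second 'approved with reservations' tips it over the threshold.
print(meets_indexing_threshold(1, 2))  # → True
```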

Choosing a middle ground, PeerJ launched a new optional OPR journal in 2013; as of this writing, 80% of authors have chosen open reports, and 40% of reviewers have signed review reports ( https://peerj.com/benefits/review-history-and-peer-review/ ). Adopting a similar model, the publisher MDPI first announced optional post-publication OPR in 2014 by the journal Life and by 2018 all journals adopted optional OPR. Rittman ( 2018 ) reports that 23% of MDPI journal papers published at least one review with open identities. The percentage of the 14 early OPR MDPI journals with open reports include Publications (60%), Dentistry (52%), Medical Sciences (51%), Quantum Beam Science (48%), Life (46%), Brain Sciences (44%), J (43%), Behavioral Sciences (41%), Economies (40%), Cosmetics (39%), Administrative Sciences (38%), Condensed Matter (37%), Animals (34%) and Atoms (33%). EMBO Press reports that currently, 95% of their authors chose to publish review reports alongside their papers (EMBO Press 2020 ).

Another option for open reports, in addition to appearing alongside the article (e.g., PeerJ ) or in a stand-alone volume (e.g., Elsevier), is for reviewers to deposit their review reports to a research partnership service such as Publons.com. Here the decision to publish reports is made by the reviewers rather than the authors or publishers, given that Publons was created to credit reviewers and authenticate their claims. Recently, Wiley partnered with Publons for their OPR initiatives with 40 participating journals (Wiley 2018 ). Wiley’s prestigious journal Clinical Genetics was the pioneering journal for this initiative (Graf 2019 ). As of March 2020, Wiley added 10 titles in early 2020 to expand this initiative (Moylan 2020 ).

OPR research

As an innovation in peer review, OPR pursues transparency and openness to improve the process (Wang et al. 2016a , b ). Transparency in peer review was rigorously studied by researchers for the journal BMJ in the 1990s before the first journals implemented OPR. These early research examples that studied the effect of making reviewer identities known to authors or posting reviewer names with the paper concluded that these practices had no effect on the quality of the reviews (Godlee et al. 1998 ; van Rooyen et al. 1999 ). Walsh et al. ( 2000 ) conducted a controlled trial in British Journal of Psychiatry to investigate whether open peer review was feasible. Of the 322 reviewers, 245 (76%) agreed to sign their reviews. A total of 408 unsolicited manuscripts of original research were randomly assigned to the two groups of reviewers. To evaluate the reviews, a seven-item instrument was used to compare the quality of the reviews: importance of research question, originality, methodology, presentation, constructiveness of comments, substantiation of comments, and interpretation of results; in addition, the tone of the review was rated. With cautious notes, the researchers reported that the signed reviews were more courteous and of higher quality than unsigned reviews. Bornmann et al. ( 2012 ) compared the reviewer comments of a closed peer review journal and an open peer review journal. They found that the reviewer comments in the open review journal were significantly longer than the reviewer comments in the closed review journal.

Since then, a few studies have investigated author and reviewer attitudes towards OPR, the characteristics of open reviews, and methods of OPR adoption by existing and new journals. In 2012, Elsevier began a pilot OPR project with selected trial journals (Mehmani and van Rossum 2015). A survey of editors, authors, and reviewers of the five participating trial journals was conducted in 2015 to assess the impact of open review (Mehmani 2016). Forty-five percent of the reviewers revealed their identities. Most reviewers (95%) commented that publishing review reports had no influence on their recommendations. Furthermore, 33% of the editors identified an overall improvement in review quality, and 70% of these editors said that the open review reports were more in-depth and constructive. Only a small proportion of the authors indicated that they would prefer not to publish in open review journals. Mehmani also reported high usage of the review reports, measured by clicks to the reports, which indicated the value of open review to readers.

At a webinar sponsored by Elsevier to discuss how to improve transparency in peer review, Agha (2017) reported on the experience of two Elsevier pilot OPR journals (International Journal of Surgery and Annals of Medicine and Surgery) that published peer reviewer reports as supplemental volumes. He concluded: “60% of the authors like it or like it a lot and 35% are more likely to publish because of it.” Bravo et al. (2019) observed and analyzed Elsevier’s pilot project of five OPR journals from 2015 to 2017. To compare referee behavior before and after OPR, the dataset included 9220 submissions and 18,525 reviews from 2010 to 2017. They found “that publishing reviewer reports did not significantly compromise referees’ willingness to review, recommendations, or turn-around time. Younger and non-academic scholars were more willing to accept invitations to review and provided more positive and objective recommendations. Male referees tended to write more constructive reports during the pilot. Only 8.1% of referees agreed to reveal their identity in the published report.” (Abstract). Bravo et al. also published the review reports of their own paper alongside it. Wang et al. (2016a, b) analyzed the publicly available reports of the optional-OPR journal PeerJ for the journal’s first three years (2013–2016). They found that the majority of the papers (74%) published during this period had open reports, 43% of which had open identities.

If transparency in peer review is the key to tackling the various issues facing the current peer review system, will authors and reviewers embrace OPR? Several large-scale surveys have collected data on attitudes towards OPR, with diverse findings. Mulligan et al. (2013) found that only 20% of respondents were in favor of making reviewer identities known to the authors of reviewed manuscripts; 25% of respondents were in favor of publishing signed review reports. In 2016, the OpenAIRE consortium conducted a survey of OPR perceptions and attitudes, inviting participation through social media, distribution lists, and publishers’ newsletters. Of the 3062 valid responses, 76% reported having taken part in an OPR process as an author, reviewer, or editor. The results show that respondents were more willing to support open reports (59%) than open identities (31%), and that the majority (74%) believe reviewers should be given the option to make their identities open (Ross-Hellauer et al. 2017). Another survey of European researchers, conducted by the European Union’s OpenUP Project in 2017, received 976 valid responses. Its results also show that respondents support open reports (39%) more than open identities (29%), and it reports a gender difference in support for open identities (35% of female researchers versus 26% of male researchers) (Görögh et al. 2019).

A recent survey by ASAPbio ( 2018 ) asked authors and reviewers in the life sciences about their perspectives on OPR. Of the 358 authors, the majority were comfortable (20.67%) or very comfortable (51.96%) with publishing their recent paper’s peer reviews with referees’ names; when asked about the same reviews to be published without referees’ names, the number dropped but still represented the majority: 19.56% were comfortable and 37.71% were very comfortable. Of the 291 reviewers, the majority would be comfortable (32.30%) or very comfortable (40.21%) with posting their last peer review anonymously given the opportunity to remove or redact appraisals or judgments of importance; regarding signing the same review, 28.15% of respondents were comfortable and 32.30% were very comfortable. These results suggest that the majority of the authors are willing to publish their papers’ review reports, with a preference for signed reviews; the majority of the reviewers are willing to have their review reports published without sensitive information, with a preference for anonymity.

The analysis of nearly 2600 responses to Wiley’s 2019 Open Research Survey indicates that the respondents’ preferred peer review models are double-blind (79%), transparent (44%), and single-blind (34%). Twenty-eight percent of the respondents were not aware of the transparent review model (Moylan 2019 ).

OPR conceptualization and implementation

Despite the growing interest in OPR, there still is no uniform definition of OPR or generally agreed upon best implementation model. Ford ( 2013 ) reviewed the literature on the topic to define and characterize OPR. Acknowledging the diverse views of OPR, she states “the process incorporates disclosure of authors’ and reviewers’ identities at some point during an article’s review and publication” (p. 314). She further characterized OPR by openness (i.e., signed review, disclosed review, editor-mediated review, transparent review, and crowd-sourced/public review), and timing (pre-publication, synchronous, and post-publication).

Ross-Hellauer ( 2017 ) conducted a systematic literature review and identified seven elements based on 22 definitions of OPR. Of the seven elements, open identities and open reports are considered core elements to recognize OPR journals. The other five elements in the order of frequency of occurrences include open participation , open interaction , open pre-review manuscripts , open final-version commenting , and open platforms/decoupled review . These elements formed a framework for two surveys conducted by OpenAIRE (Ross-Hellauer et al. 2017 ) and OpenUP (Görögh et al. 2019 ). Similarly, Tennant et al. ( 2017 ) provided a comprehensive review of journals’ peer review practices from the past to the present, which they published in the OPR journal F1000Research . Taking a much broader perspective, they examined the pros and cons of open reviews, including public commentary and staged publishing.

Fresco-Santalla and Hernandez-Perez (2014) illustrated how OPR has been manifested by different journals: open reviews (for all or specific papers), signed reviews (obligatory, pre- or post-publication), readership access to review reports (required or optional), and readership comments (pre- or post-publication). Wang and Tahamtan (2017) identified 155 OPR journals, the majority of which were in medicine and related fields, and found considerable variation in how these journals implemented OPR. According to Tattersall (2015), there were ten leading OPR platforms.

This research focuses on the two core elements of OPR journals that Ross-Hellauer ( 2017 ) identified: (1) open identities, where reviewer names were made public; (2) open reports, where the original reviews or integrated reviews were publicly available. In addition, we considered when a journal adopted OPR, the journal’s discipline coverage, and its publisher. For included OPR journals, authors’ rebuttals were not considered in this study, nor were open comments from registered or unregistered readers. This study did not include journals that implemented only one of the following OPR elements in Ross-Hellauer ( 2017 ): open participation, open interaction, open pre-review manuscripts, open final-version commenting and open platforms/decoupled review.

Data collection

Although a few journal directory sources attempt to identify OPR (e.g., the Directory of Open Access Journals and Transpose), there is no established standard for describing aspects of OPR systematically. Journal records are submitted by users, and the schemas are open to interpretation. To identify relevant OPR journals, we used multiple search strategies and tracked different sources. The Directory of Open Access Journals (DOAJ) indexes more than 14.5 thousand journals and nearly 4.8 million articles. From the results of an advanced search for journals with the filter set to “open peer review,” we retrieved 133 OPR journals. Some DOAJ entries were blogs rather than venues for the publication of research and were thus excluded. Each journal was accessed to verify whether it publishes open identities or open reports; those misclassified were removed from the dataset. Several websites about peer review and scientific publishing were scanned periodically to keep current on OPR developments: ASAPbio (Accelerating Science and Publication in biology); the International Congress on Peer Review and Scientific Publication; and Peer Review Week. Transpose, a database of journal policies on peer review and pre-printing (https://transpose-publishing.github.io/#/), was a particularly rich source for identifying candidate journals, but many records were not verified by the publishers or editors, and many duplicated or erroneous records had to be corrected by checking the original journals.
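The screening step described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' actual workflow, and the record field names (`title`, `review_process`) are assumptions rather than DOAJ's real schema.

```python
# Hypothetical sketch: filter candidate journal records (e.g., exported from
# DOAJ or Transpose) down to those whose stated review process suggests OPR,
# then flag each hit for manual verification against the journal's website.
def screen_candidates(records):
    """Return records whose review-process description suggests OPR."""
    keywords = ("open peer review", "open identities", "open reports")
    candidates = []
    for rec in records:
        process = rec.get("review_process", "").lower()
        if any(k in process for k in keywords):
            # Directory records are user-submitted, so every hit still
            # needs manual verification (misclassifications are common).
            candidates.append({**rec, "verified": False})
    return candidates

sample = [
    {"title": "Journal A", "review_process": "Open peer review"},
    {"title": "Journal B", "review_process": "Double blind peer review"},
]
print([r["title"] for r in screen_candidates(sample)])  # ['Journal A']
```

The `verified` flag mirrors the paper's two-stage process: keyword screening first, then a manual check of each journal's published articles.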

Data verification and cleaning

This study used two criteria to select OPR journals, open identities and open reports; at least one of these two core elements had to be implemented for a journal to qualify as an OPR journal. Data from different sources needed to be transformed and verified. As of 23 November 2019, the Transpose database listed 294 OPR journals that adopted open identities and 232 OPR journals that publish open reports, many of which were misclassified, perhaps due to the crowdsourced nature of the database and some contributors’ difficulty distinguishing OA from OPR. Unexpectedly, the publisher field was another source of confusion. For example, the newly launched journal Geochronology listed the European Geosciences Union (EGU) as the publisher, while the journal’s website named Copernicus Publications. Therefore, each OPR journal’s website was visited to verify the data. Some journals (e.g., several published by Copernicus Publications and journals by Kowsar) indicated in their editorial policies that they follow OPR. To identify the year each journal started or transitioned to OPR, we accessed issues of the journals to find open reports or open identities in the published articles. If none of a journal’s articles published review reports or reviewer identities as of December 2019, the journal was excluded. Further efforts were made to search the websites of publishers of known OPR journals to identify additional OPR journals not indexed in DOAJ or Transpose. For example, Transpose listed 10 OPR journals for Wiley, but Wiley’s website news pointed to an Excel file of 40 OPR trial journals. We also searched newsletters and lists related to peer review, from which we identified OPR adoption, for example, by PLOS in 2019.

Identification of the year a journal began OPR could be a difficult and time-consuming task if a journal did not provide the precise date it adopted OPR. In these cases, we manually checked each issue to find the earliest OPR article. If a journal publisher clearly posted information about when OPR was adopted on their editorial or peer review policy page, we used that year (e.g., Kowsar and Wiley).

In this paper, we updated the dataset reported in Wolfram et al. ( 2019 ), which was collected in 2018 and consisted of 20 publishers and 174 OPR journals. The final dataset for this expanded study includes 38 publishers and 617 OPR journals as of December 2019. Data were stored in an Excel spreadsheet and were analyzed using cross-tabulations, queries, and qualitative assessment of relevant journal content. Stored information included: journal metadata, year of first OPR use, publisher (name and country of headquarters), policy for reviewer identity, policy for report availability, and high-level journal discipline.
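The cross-tabulations mentioned above were performed in Excel; the same kind of tally can be sketched with the Python standard library. The journal entries below are invented examples for illustration, not records from the study's dataset.

```python
# Minimal sketch of a policy cross-tabulation: count journals by the pair
# (open-identity policy, open-report policy). Entries are illustrative only.
from collections import Counter

journals = [
    {"name": "J1", "identities": "mandated", "reports": "mandated"},
    {"name": "J2", "identities": "optional", "reports": "optional"},
    {"name": "J3", "identities": "optional", "reports": "mandated"},
    {"name": "J4", "identities": "mandated", "reports": "mandated"},
]

crosstab = Counter((j["identities"], j["reports"]) for j in journals)
for (ident, rep), n in sorted(crosstab.items()):
    print(f"identities={ident:9s} reports={rep:9s} n={n}")
```

Each cell of such a table corresponds to one policy combination, the same structure the paper reports for its 617 journals.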

Descriptive data

The growth of OPR adoption, measured by both existing and new journals, is summarized in Fig. 1 by broad discipline. The journals were classified into six broad topical areas using a modified form of the DOAJ classification scheme to determine which disciplinary areas have adopted OPR. Most journals did not report when they adopted OPR or whether they had always used it. First OPR usage was therefore confirmed by searching early issues of each journal to identify when OPR practices began. In many cases, OPR adoption coincided with the first journal issue.

Fig. 1 Growth of OPR journals by discipline groups

The early adopters of OPR can be traced back to the beginning of the 2000s. The journals Atmospheric Chemistry and Physics and European Cells & Materials each implemented a different OPR model, although both launched their first issues in 2001. In the same year, 36 OPR journals published by BioMed Central implemented yet another model. Since then, there has been steady growth in the number of journals adopting OPR, most noticeably in the Medical and Health Sciences and, over the past 10 years, in the Natural Sciences. Growth has accelerated dramatically since 2017, during which time the total number of OPR journals has more than doubled. The disciplinary distribution of OPR journals appears in Table 1. For each discipline group, the first OPR year and number of articles suggest how OPR is being adopted. Medical and Health Sciences had the most early adopters.

A summary of the most prolific publishers contributing to OPR and their headquarters’ countries appears in Table 2. Although many journals today attract an international audience and are managed by international teams of researchers, the prevalence of OPR journals associated with publishers based in Europe stands out. Twenty-four of the 38 identified publishers (63.2%) are based in Europe, and they account for 445 of the 617 titles (72.1%). Although these publishers are based in Europe, many of the journals they publish may originate from other areas of the world (e.g., Kowsar). Furthermore, 500 of the OPR journals (81.0%) are published by just five publishers (MDPI, SDI, BioMed Central, Frontiers Media S.A., and Kowsar). This points to the important role publishers have played to date in promoting OPR.

OPR transparency in current practice

A fundamental principle of OPR is transparency. This includes open identities and/or open reports. Publishers and editors of journals adopted different levels of transparency, where one or both of the transparency elements may be optional or required (e.g., EMBO Press  2020 ). Table 3 reports the adoption of open reports based on the broad discipline of the journals. The percentage of mandatory open reports is highest in the Medical and Health Sciences (64.0%), and second highest in the Multidisciplinary category (50.0%). Mandatory open reports are much lower for Humanities (14.3%) and Technology (5.7%), where optional open reports are more common. The availability of mandated or optional open identities was much more common across all disciplines, with only 9 journals (8 from the Natural Sciences and 1 from Medical and Health Sciences) requiring anonymity. Summary data for open identity adoption by discipline appear in Table 4 .

Open identities may be mandated, optional (decided by the reviewer) or anonymous. Similarly, open reports may be mandated, optional (decided by the author or editor), or not available. The frequency of each combination appears in Table 5 . When reviewers remain anonymous and their reports are not made available, this is traditional blind peer review (the lower right cell). The vast majority of OPR journals (608 or 98.5%) either require reviewers to identify themselves (268 or 43.4%) or allow reviewers to choose whether to identify themselves (340 or 55.1%). Similarly, 536 (86.9%) of the journals either require reports to be open (274 or 44.4%) or allow authors or editors to choose whether to make the reports open (259 or 42.3%). Only 189 (30.6%) journals require both open identities and open reports.
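The open-identity proportions reported for Table 5 can be reproduced directly from the counts given in the text (n = 617 OPR journals); a quick sketch:

```python
# Counts taken from the text: 268 journals mandate open identities, 340 make
# them optional, and 189 require both open identities and open reports.
TOTAL = 617

def pct(n, total=TOTAL):
    """Percentage of all OPR journals, rounded to one decimal place."""
    return round(100 * n / total, 1)

print(pct(268 + 340))  # 98.5 -> some form of open identities (mandated or optional)
print(pct(268))        # 43.4 -> mandated open identities
print(pct(340))        # 55.1 -> optional open identities
print(pct(189))        # 30.6 -> both open identities and open reports required
```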

Transparency of the emerging OPR implementation approaches

The current OPR landscape is complex and exhibits a variety of configurations, ranging from opening some aspects of the established blind-review process to a fully transparent process. Although there is no simple way to define the emerging OPR practices, a descriptive framework focusing on how open identities and open reports are fulfilled during the review process, and on what end products are available for open access, is depicted in Fig. 2.

Fig. 2 Process–product approaches

At the implementation level, an OPR journal needs to decide:

Who makes decisions: reviewer, author, or editor/journal;

When the decision is made for a specific core element: pre-, post-, or concurrent with the review process;

What is contained in open reports: original reports, a consolidated letter, or invited commentaries by reviewers who made significant contributions to the paper’s revision;

Where the open reports can be accessed.

These four factors can potentially define the level of transparency that a journal puts into practice for OPR. For example, F1000Research is the most transparent OPR journal because its peer review process is entirely open; both referee identities and review comments are instantly accessible alongside the manuscript while it is being reviewed and revised. By contrast, the OPR journals published by Frontiers publish each paper with only its reviewers’ names, a minimal level of open identity. The process and its main product remain largely closed to the readers for whom the articles are published.
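The four factors can be captured in a simple record type. The sketch below is my own encoding: the field names are invented labels, and the two example configurations merely paraphrase the descriptions of F1000Research and Frontiers given above.

```python
# A hedged sketch of the who / when / what / where framework as a record type.
from dataclasses import dataclass

@dataclass
class OPRConfig:
    who_decides: str      # who decides: reviewer, author, or editor/journal
    when: str             # pre-, post-, or concurrent with the review process
    report_content: str   # original reports, consolidated letter, commentary, or none
    report_access: str    # where open reports can be accessed, if anywhere

# Illustrative encodings of the two contrasting examples from the text.
f1000 = OPRConfig("journal", "concurrent", "original reports", "alongside article")
frontiers = OPRConfig("journal", "post", "none", "not available")
```

Comparing the two records makes the transparency continuum concrete: every field of the F1000Research configuration is open, while only reviewer names are open in the Frontiers configuration.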

The emerging models varied in terms of transparency. Figure  3 shows four representative implementations:

Frontiers’ OPR journals publish only referee identities alongside articles without open reports as an open identities-only model;

PeerJ provides optional open identities to referees and optional open reports to authors, representing a range of journals adopting this model;

BMC’s OPR journals publish both open identities and open reports alongside articles;

F1000Research , the first of its kind, makes the review process itself open in addition to open identities and open reports. As a post-publication OPR platform, F1000Research makes no acceptance or rejection decision as a result of peer review, but an article will not be indexed in any bibliographic databases unless, within a defined timeframe, it passes a threshold of two approved (✔✔) verdicts, or one approved (✔) plus two approved with reservations (??).
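The indexing threshold described above reduces to a small predicate. The verdict strings below are illustrative stand-ins for the platform's approval statuses, not F1000Research's actual data model.

```python
# Sketch of the indexing threshold: an article passes with two "approved"
# verdicts, or one "approved" plus two "approved with reservations".
def passes_threshold(verdicts):
    approved = verdicts.count("approved")
    reservations = verdicts.count("approved_with_reservations")
    return approved >= 2 or (approved >= 1 and reservations >= 2)

print(passes_threshold(["approved", "approved"]))                    # True
print(passes_threshold(["approved",
                        "approved_with_reservations",
                        "approved_with_reservations"]))              # True
print(passes_threshold(["approved", "approved_with_reservations"]))  # False
```

Note that reservations alone never suffice: at least one unqualified approval is required.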

Fig. 3 OPR models as implemented by publishers

This study represents the first comprehensive investigation of the scope and depth of OPR adoption in the open science era. Since the BMJ experiments with open reviews more than 20 years ago, the adoption of OPR has gone from 38 journals in 2001 to at least 617 journals by the end of 2019. Figure  1 demonstrates that there has been steady growth in the number of OPR journals over time, led by journals in Medical and Health Sciences and the Natural Sciences, but with much higher growth since 2017. This growth has been prompted by a small number of publishers. The remaining disciplines have been much slower and later to adopt OPR. The Humanities have different scholarship cultures as compared to the Natural Sciences and have been slow in adopting open access overall (Eve 2017 ; Gross and Ryan 2015 ).

Several publishers have served as pioneers and early promoters of OPR. The five publishers with the most OPR journals (MDPI, SDI, BioMed Central, Frontiers Media S.A., and Kowsar) have adopted different implementations of OPR. BioMed Central, one of the earliest OPR journal publishers in this study, and SDI require both open reports and open identities. Kowsar requires open reports but makes referee identities optional. MDPI makes open reports and open identities optional for authors and reviewers, respectively. Frontiers Media S.A. requires open identities but does not provide open reports for its OPR journals.

More than 60% of the publishers in this study, who publish more than 70% of the OPR journals identified, are based in Europe, signifying Europe’s leading role in the OPR movement. This strong European effort is also seen in the larger open science movement, where organizations such as OpenAIRE and OpenUP are investigating all aspects of the movement, including OPR. Eleven of the identified publishers are based in the United States, indicating a growing interest in adopting OPR outside of Europe as well. Publishers based in other countries have been slower to adopt forms of OPR, as evidenced by the single representative publisher from each of these nations.

Multiple OPR practices emerge from the analysis of the data that show different levels of transparency in implementation. The level of transparency can be characterized along a continuum. The most transparent model is the concurrent open review process exemplified by F1000Research , where reviewers’ identities and reports are instantly available alongside manuscripts and are published upon submission following initial format checking. Another model that promotes total transparency, exemplified by many BioMed Central journals, provides access to the complete report history and author exchanges as well as open identities alongside the published articles, after acceptance. The next several implementations that allow authors and/or reviewers to participate in open review decisions during the process include: mandated open reports but optional open identities (e.g., Kowsar journals), mandated open reports without open identities (e.g., the journal Ledger ), and optional open reports with optional open identities (e.g., PeerJ ). The most limited implementation, used by the Frontiers Media S.A. journals, is a closed review process with the published articles including only the names of the reviewers.

Two recommendations arise from the findings:

Publishers should make their OPR information (policies, open reports, open identities) more accessible and should more prominently display their OPR status and adoption. This information was sometimes buried and difficult to locate.

A repository or registry of OPR journals that provides key elements relevant to OPR is needed. Information contained in sources such as DOAJ and Transpose is limited and frequently incorrect.

The adoption of the OPR innovation is growing, largely spurred by a small number of publishers, primarily based in Europe. To date, OPR has been adopted mostly by journals in the Medical and Health Sciences and the Natural Sciences. However, OPR journals still represent a very small percentage of scholarly journals overall. The fact that there are multiple approaches to adopting OPR indicates that there is no consensus at present regarding best practices. The highest level of OPR transparency includes open identities along with open reports, but only a minority of the OPR journals identified have adopted complete transparency.

Limitations of the present research must be recognized. Currently, there is no universal way to identify journals that adopt OPR. Our approach was to cast a broad net using multiple sources to identify candidate OPR journals, which is time-consuming and often hit-or-miss. It is possible that we missed OPR journals not indexed by the databases searched or published by publishers outside our dataset, even though we expanded our searches to known OPR publishers to ensure inclusion. Similarly, given the growth in the number of OPR journals over the past couple of years, the findings presented here represent a snapshot as of late 2019; the OPR landscape is changing quickly. As with any indexing source, there may also be a regional or language bias: additional OPR journals may not be evident due to a lack of familiarity with the publication language. Finally, although most publishers post annual reports with metric data including the number of articles, citation counts, Journal Impact Factor, rejection rate, etc., they lack annual OPR metrics on the number or percentage of articles with optional open reports and open identities; both are essential for documenting OPR adoption.

The next phase of this research is examining open report contents using text mining approaches to determine if there are quantitative and qualitative differences in the open reviews based on the OPR approaches used. A scoring instrument is being developed and tested to measure different models.

Data availability

A csv file of the journal data can be found at: https://doi.org/10.5281/zenodo.3737197 .

This paper represents a greatly expanded version of a study presented at the 17th International Society for Scientometrics and Informetrics Conference held in Rome, Italy in September 2019 (Wolfram et al. 2019 ).

Agha, R. (2017). Publishing peer review reports. Webinar: Transparency in Peer Review . https://researcheracademy.elsevier.com/navigating-peer-review/fundamentals-peer-review/transparency-peer-review .

ASAPbio (2018). Transparency, recognition, and innovation in peer review in the life sciences (February 2018)/Peer review survey results. https://asapbio.org/peer-review/survey .

Barbash, F. (2014). Scholarly journal retracts 60 articles, smashes ‘peer review ring’, The Washington Post , July 10. https://www.washingtonpost.com/news/morning-mix/wp/2014/07/10/scholarly-journal-retracts-60-articles-smashes-peer-review-ring/?utm_term=.4ab26f14adb9.

Belluz, J., Plumer, B., & Resnick, B. (2016). The 7 biggest problems facing science, according to 270 scientists. Vox . https://www.vox.com/2016/7/14/12016710/science-challeges-research-funding-peer-review-process .

Bohannon, J. (2013). Who’s afraid of peer review? A spoof paper concocted by Science reveals little or no scrutiny at many open-access journals. Science , 342 , 60–65. https://www.sciencemag.org/content/342/6154/60.full.pdf .

Bornmann, L., Wolf, M., & Daniel, H. D. (2012). Closed versus open reviewing of journal manuscripts: how far do comments differ in language use? Scientometrics, 91 , 843–856. https://doi.org/10.1007/s11192-011-0569-5 .

Bravo, G., Grimaldo, F., López-Iñesta, E., Mehmani, B., & Squazzoni, F. (2019). The effect of publishing peer review reports on referee behavior in five scholarly journals. Nature Communications, 10, 322. https://doi.org/10.1038/s41467-018-08250-2 .

Clobridge, A. (2016). Open peer review: the next wave in open knowledge? Online Searcher: Information Discovery, Technology, Strategies, 40 (4), 60–62.

Eve, M. P. (2017). Open access publishing models and how OA can work in the humanities. Bulletin of the Association for Information Science & Technology, 43 (5), 16–20. https://doi.org/10.1002/bul2.2017.1720430505 .

EMBO Press. (2020). Transparent Peer Review. https://www.embopress.org/policies .

Fennell, C., Corney, A., & Ash E. (2017). Transparency—the key to trust in peer review, Elsevier Connect, https://www.elsevier.com/connect/transparency-the-key-to-trust-in-peer-review .

Ford, E. (2013). Defining and characterizing open peer review: A review of the literature. Journal of Scholarly Publishing, 44 (4), 311–326. https://doi.org/10.3138/jsp.44-4-001 .

Fresco-Santalla, A., & Hernández-Pérez, T. (2014). Current and evolving models of peer review. The Serials Librarian, 67 (4), 373–398. https://doi.org/10.1080/0361526X.2014.985415 .

Graf, C. (2019) Why more journals are joining our transparent peer review pilot. Director, Research Integrity and Publishing Ethics at Wiley. September 20, 2019. https://www.wiley.com/network/researchers/latest-content/why-more-journals-are-joining-our-transparent-peer-review-pilot .

Gross, J., & Ryan, J. C. (2015). Landscapes of research: perceptions of open access (OA) publishing in the arts and humanities. Publications, 3 , 65–88. https://doi.org/10.3390/publications3020065 .

Godlee, F., Gale, C. R., & Martyn, C. N. (1998). Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA, 280 , 237–240.

Görögh, E., Schmidt, B., Banelytė, V., Stanciauskas, V., & Woutersen-Windhouwer, S. (2019). OpenUP Deliverable D3.1—Practices, evaluation and mapping: Methods, tools and user needs. OpenUP Project. https://doi.org/10.5281/zenodo.2557272 .

Haven, T., Tijdink, J., Pasman, H. R., et al. (2019). Researchers’ perceptions of research misbehaviours: a mixed methods study among academic researchers in Amsterdam. Research Integrity and Peer Review, 4 (1), 25. https://doi.org/10.1186/s41073-019-0081-7 .

Kriegeskorte, N. (2012). Open evaluation: a vision for entirely transparent post-publication peer review and rating for science. Frontiers in Computational Neuroscience, 6 (79), 2–18. https://doi.org/10.3389/fncom.2012.00079 .

Lawrence, R. (2015). Preventing peer review fraud: F1000Research, the F1000 Faculty and the crowd. https://blog.f1000.com/2015/04/09/preventing-peer-review-fraud-f1000research-the-f1000-faculty-and-the-crowd/ .

Lee, C. J., Sugimoto, C. R., Zhang, C., & Cronin, B. (2013). Bias in peer review. Journal of the American Society for Information Science and Technology, 64 (1), 2–17. https://doi.org/10.1002/asi.22784 .

Malone, R. E. (1999). Should peer review be an open process? Journal of Emergency Nursing, 25 (2), 150–152.

Mehmani, B. (2016). Is open peer review the way forward? https://www.elsevier.com/reviewers-update/story/innovation-in-publishing/is-open-peer-review-the-way-forward .

Mehmani, B., & van Rossum, J. (2015). Elsevier trials publishing peer review reports as articles. https://www.elsevier.com/reviewers-update/story/peer-review/elsevier-pilot-trials-publishing-peer-review-reports-as-articles .

Moylan, E. (2019). How do researchers feel about open peer review? Wiley. September 17, 2019. https://www.wiley.com/network/researchers/submission-and-navigating-peer-review/how-do-researchers-feel-about-open-peer-review .

Moylan, E. (2020). Progressing towards transparency—more journals join our transparent peer review pilot. Wiley. March 5, 2020. https://www.wiley.com/network/researchers/submission-and-navigating-peer-review/progressing-towards-transparency-more-journals-join-our-transparent-peer-review-pilot .

Mulligan, A., Hall, L., & Raphael, E. (2013). Peer review in a changing world: An international study measuring the attitudes of researchers. Journal of American Society for Information Science and Technology, 64 , 132–161. https://doi.org/10.1002/asi.22798 .

Naik, G. (2011). Mistakes in scientific studies surge. The Wall Street Journal , 10 August 2011.

Nature (2015). Transparent peer review at Nature Communications. Nature Communications, 6 . https://doi.org/10.1038/ncomms10277 .

Nature (2016). Transparent peer review one year on. Nature Communications, 7 . https://doi.org/10.1038/ncomms13626 .

Nature (2020). Nature will publish peer review reports as a trial, Editorial 05 February 2020. Nature , 578. https://www.nature.com/articles/d41586-020-00309-9 .

Opening up peer review. (2007). Nature Cell Biology, 9 , 1. https://doi.org/10.1038/ncb0107-1 .

Peer Review Congress (2017). Under the microscope: Transparency in peer review. Panel after the Peer Review Congress. Peer Review Congress, Chicago, 10–12, September 2017. Panel chaired by Alice Meadows (ORCID) with panellists: Irene Hames (Board member of Learned Publishing), Elizabeth Moylan (BMC), Andrew Preston (Publons), and Carly Strasser (Moore Foundation). https://peerreviewweek.files.wordpress.com/2017/05/prw2017-panelists22.pdf . Video at https://www.youtube.com/watch?v=8x1dho6HRzE .

Peer Review Week. (2017). Transparency In Review is focus for Peer Review Week 2017. https://peerreviewweek.files.wordpress.com/2016/06/prw_2017-_press_release-5-sept.pdf .

Pöschl, U., & Koop, T. (2008). Interactive open access publishing and collaborative peer review for improved scientific communication and quality assurance. Information Services & Use, 28 , 105–107. https://doi.org/10.3233/ISU-2008-0567 .

Rath, M. & Wang, P. (2017). Open peer review in the era of open science: A pilot study of researchers’ perceptions (Poster) In Proceedings of the Joint Conference on Digital Libraries (JCDL) , 317–318.

Rittman, M. (2018) Opening up Peer Review. https://blog.mdpi.com/2018/10/12/opening-up-peer-review/ .

Ross-Hellauer, T. (2017). What is open peer review? A systematic review. F1000Research . 2017 apr; 6:588. Available from: https://doi.org/10.12688/f1000research.11369.1 . PMID: 28580134

Ross-Hellauer, T., Deppe, A., & Schmidt, B. (2017). Survey on open peer review: Attitudes and experience amongst editors, authors and reviewers. PLoS ONE, 12 (12), e0189311. https://doi.org/10.1371/journal.pone.0189311 .

Rupp, M., Anastasopoulou, L., Wintermeyer, E., Malhaan, D., Khassawna, T. E., & Heiss, C. (2019). Predatory journals: A major threat in orthopaedic research. International Orthopaedics, 43 , 509–517. https://doi.org/10.1007/s00264-018-4179-1 .

Smith, R. (2006). Peer review: a flawed process at the heart of science and journals. Journal of the Royal Society of Medicine, 99 , 178–182. https://doi.org/10.1258/jrsm.99.4.178 .

Tattersall, T. (2015). For what it’s worth—the open peer review landscape. Online Information Review, 39 (5), 649–663. https://doi.org/10.1108/OIR-06-2015-0182 .

Tennant, J. P., Dugan J. M., Graziotin D., Jacques, D. C., Waldner, F., Mietchen, D., … Colomb, J. (2017). A multi-disciplinary perspective on emergent and future innovations in peer review. F1000Research 2017 , 6:1151, https://doi.org/10.12688/f1000research.12037.3 .

van Rooyen, S., Godlee, F., Evans, S., Black, N., & Smith, R. (1999). Effect of open peer review on quality of reviews and on reviewers’ recommendations: a randomised trial. BMJ, 318 , 23–27. https://doi.org/10.1136/bmj.318.7175.23 .

Wang, P., Rath, P., Deike, D., & Wu, Q. (2016a). Open peer review: An innovation in scientific publishing. 2016 iConference.  https://core.ac.uk/download/pdf/158312603.pdf.

Wang, P., You, S., Rath, M., & Wolfram, D. (2016b). Open peer review in scientific publishing: A web mining study of peerj authors and reviewer. Journal of Data and Information Science , 1 (4), 60–80. https://content.sciendo.com/view/journals/jdis/1/4/article-p60.xml .

Wang, P., & Tahamtan, I. (2017). The state-of-the-art of Open Peer Review: Early adopters. (Poster Paper) Proceedings of the 2017 Annual Meeting of The Association for Information Science & Technology , October 27—November 1, Washington DC.

Wang, P., & Wolfram, D. (2016). The last frontier in open science: Will open peer review transform scientific and scholarly publishing? at the 2016 Annual Meeting of the Association for Information Science and Technology , October 14–18, 2016, Copenhagen, Denmark. [Panellists: Jason Hoyt, PeerJ; Ulrich Pöschl, Max Planck; Peter Ingwersen, Royal School of Denmark & Richard Smith, retired editor of The BMJ; discussant: Marcia Bates, University of California, Log Angeles].

Walsh, E., Rooney, M., Appleby, L., & Wilkinson, G. (2000). Open peer review: A randomised controlled trial. The British Journal of Psychiatry, 176 , 47–51. https://doi.org/10.1192/bjp.176.1.47 .

Wiley (2018). Bringing greater transparency to peer review: Wiley and Clarivate analytics partner to launch innovative open peer review, 13 September 2018, https://newsroom.wiley.com/press-release/bringing-greater-transparency-peer-review-wiley-and-clarivate-analytics-partner-launch .

Wiley (2020). A list of participating OPR journals is referred at https://authorservices.wiley.com/Reviewers/journal-reviewers/what-is-peer-review/index.html (direct link: https://authorservices.wiley.com/asset/photos/reviewers.html/Journals%20Included%20in%20Transparent%20Peer%20Review%20Pilot.xlsx ).

Wolfram, D., Wang, P., & Park, H. (2019). Open Peer Review: The current landscape and emerging models. Proceedings of the 17th International Society for Scientometrics and Informetrics Conference . (pp. 387–398).


This research was partially funded by a University of Wisconsin-Milwaukee Research Growth Initiative Grant.

Author information

Authors and affiliations

School of Information Studies, University of Wisconsin-Milwaukee, Milwaukee, WI 53211, USA: Dietmar Wolfram & Hyoungjoo Park

School of Information Sciences, University of Tennessee, Knoxville, TN 37996, USA: Peiling Wang & Adam Hembree


Corresponding author

Correspondence to Dietmar Wolfram.

Ethics declarations

Conflicts of interest

All authors declare that they have no conflict of interest.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Wolfram, D., Wang, P., Hembree, A. et al. Open peer review: promoting transparency in open science. Scientometrics 125 , 1033–1051 (2020). https://doi.org/10.1007/s11192-020-03488-4


Received : 10 December 2019

Published : 26 May 2020

Issue Date : November 2020



Keywords

  • Open peer review
  • Scholarly communication
  • Journal editorial policies
  • Peer review transparency
  • Transparent review models




What is open peer review? A systematic review

Affiliation

  • Göttingen State and University Library, University of Göttingen, Göttingen, 37073, Germany.
  • PMID: 28580134
  • PMCID: PMC5437951
  • DOI: 10.12688/f1000research.11369.2

Background: "Open peer review" (OPR), despite being a major pillar of Open Science, has neither a standardized definition nor an agreed schema of its features and implementations. The literature reflects this, with numerous overlapping and contradictory definitions. While for some the term refers to peer review where the identities of both author and reviewer are disclosed to each other, for others it signifies systems where reviewer reports are published alongside articles. For others it signifies both of these conditions, and for yet others it describes systems where not only "invited experts" are able to comment. For still others, it includes a variety of combinations of these and other novel methods. Methods: Recognising the absence of a consensus view on what open peer review is, this article undertakes a systematic review of definitions of "open peer review" or "open review", to create a corpus of 122 definitions. These definitions are systematically analysed to build a coherent typology of the various innovations in peer review signified by the term, and hence provide the precise technical definition currently lacking. Results: This quantifiable data yields rich information on the range and extent of differing definitions over time and by broad subject area. Quantifying definitions in this way allows us to accurately portray exactly how ambiguously the phrase "open peer review" has been used thus far, for the literature offers 22 distinct configurations of seven traits, effectively meaning that there are 22 different definitions of OPR in the literature reviewed. Conclusions: I propose a pragmatic definition of open peer review as an umbrella term for a number of overlapping ways that peer review models can be adapted in line with the aims of Open Science, including making reviewer and author identities open, publishing review reports and enabling greater participation in the peer review process.

Keywords: Open Science; open peer review; publishing; research evaluation; scholarly communication.


Conflict of interest statement

Competing interests: This work was conducted as part of the OpenAIRE2020 project, an EC-funded initiative to implement and monitor Open Access and Open Science policies in Europe and beyond.

Figure 1. Definitions of OPR in the literature by year.

Figure 2. Breakdown of OPR definitions by source.

Figure 3. Breakdown of OPR definitions by disciplinary scope.

Figure 4. Breakdown of OPR definitions by type of material being reviewed.

Figure 5. Distribution of OPR traits amongst definitions.

Figure 6. Prevalence of traits (as percentage) within definitions by disciplinary focus of definition.

Figure 7. Unique configurations of OPR traits within definitions.

Figure 8. Five schools of thought in Open Science (CC BY-NC, Fecher & Friesike, 2013).

Similar articles

  • Ross-Hellauer T, Deppe A, Schmidt B. Survey on open peer review: Attitudes and experience amongst editors, authors and reviewers. PLoS One. 2017;12(12):e0189311. doi: 10.1371/journal.pone.0189311. PMID: 29236721.
  • Ford E. Open peer review at four STEM journals: an observational overview. F1000Res. 2015;4:6. doi: 10.12688/f1000research.6005.2. PMID: 25767695.
  • Bruce J, Russell EM, Mollison J, Krukowski ZH. The measurement and monitoring of surgical adverse events. Health Technol Assess. 2001;5(22):1-194. doi: 10.3310/hta5220. PMID: 11532239.
  • MacKinnon K, Marcellus L, Rivers J, Gordon C, Ryan M, Butcher D. Student and educator experiences of maternal-child simulation-based learning: a systematic review of qualitative evidence protocol. JBI Database System Rev Implement Rep. 2015;13(1):14-26. doi: 10.11124/jbisrir-2015-1694. PMID: 26447004.
  • Kirman CR, Simon TW, Hays SM. Science peer review for the 21st century: Assessing scientific consensus for decision-making while managing conflict of interests, reviewer and process bias. Regul Toxicol Pharmacol. 2019;103:73-85. doi: 10.1016/j.yrtph.2019.01.003. PMID: 30634024.
  • Moreau D, Wiebels K. Nine quick tips for open meta-analyses. PLoS Comput Biol. 2024;20(7):e1012252. doi: 10.1371/journal.pcbi.1012252. PMID: 39052540.
  • Waltman L, Kaltenbrunner W, Pinfield S, Woods HB. How to improve scientific peer review: Four schools of thought. Learn Publ. 2023;36(3):334-347. doi: 10.1002/leap.1544. PMID: 38504796.
  • Joubert G. A health sciences researcher's experience of manuscript review comments, 2020-2022. S Afr Fam Pract (2004). 2023;65(1):e1-e5. doi: 10.4102/safp.v65i1.5753. PMID: 37916700.
  • Ross-Hellauer T, Bouter LM, Horbach SPJM. Open peer review urgently requires evidence: A call to action. PLoS Biol. 2023;21(10):e3002255. doi: 10.1371/journal.pbio.3002255. PMID: 37792683.
  • Krebs CE, Camp C, Constantino H, Courtot L, Kavanagh O, McCarthy J, Ort MJ, Sarasija S, Trunnell ER. Author Guide for Addressing Animal Methods Bias in Publishing. Adv Sci (Weinh). 2023;10(30):e2303226. doi: 10.1002/advs.202303226. PMID: 37649154.



What is Open Science?

Peer review is the process by which an author's scholarly work, research, or ideas are critically assessed by experts in the relevant field before publication, ensuring accuracy and credibility. IntechOpen is dedicated to publishing high-quality content and, as a member of the Committee on Publication Ethics (COPE), aims to ensure the objectivity and integrity of the peer review process. All IntechOpen reviewers and editors are instructed to review submissions in line with the COPE Ethical Guidelines for Peer Reviewers.

Open peer review is a collaborative approach that promotes transparency and accountability in the evaluation and validation of scientific research. In this process, reviewer comments, identities, and sometimes even pre-publication versions of the manuscript are openly shared, aligning with the principles of Open Science. By making traditionally hidden aspects of peer review publicly available, open peer review encourages broader participation in research assessment, fostering engagement, trust, and a more inclusive and collaborative scientific community.


  • Open access
  • Published: 27 February 2019

Guidelines for open peer review implementation

  • Tony Ross-Hellauer (ORCID: orcid.org/0000-0003-4470-7027) & Edit Görögh

Research Integrity and Peer Review, volume 4, article number 4 (2019)


Open peer review (OPR) is moving into the mainstream, but it is often poorly understood and surveys of researcher attitudes show important barriers to implementation. As more journals move to implement and experiment with the myriad of innovations covered by this term, there is a clear need for best practice guidelines to guide implementation. This brief article aims to address this knowledge gap, reporting work based on an interactive stakeholder workshop to create best-practice guidelines for editors and journals who wish to transition to OPR. Although the advice is aimed mainly at editors and publishers of scientific journals, since this is the area in which OPR is at its most mature, many of the principles may also be applicable for the implementation of OPR in other areas (e.g., books, conference submissions).


Openness in peer review has been labelled an ‘increasing trend’ [1]. Although it has been a feature of publishers such as BMJ and BMC (formerly BioMed Central) for almost 20 years, it has gained ground recently, spurred by the Open Science agenda for increased transparency and participation in scientific processes. Many publishers and journals already run some form of open peer review, including BMC (owned by Springer Nature), BMJ, Copernicus, eLife, EMBO Press, F1000Research, Nature Communications, Royal Society Open Science and PeerJ. In 2018, an open letter was published in Nature [2] calling for publishers to begin to publish peer review reports. Editors and publishers representing over 100 journals have so far signed to acknowledge that they have either already implemented, or plan to implement, the publication of peer review reports.

Open peer review (henceforth OPR) can take place at different stages of the review process—pre- or post-publication—offering extended communication and knowledge exchange between researchers. Platforms and publishers implement OPR tools to encourage wider and more transparent discourse within the review process. Yet the openness of these systems often differs in terms of what is revealed, to whom, and when. To bring clarity to how the term ‘open peer review’ is used, a systematic analysis by one of the current authors examined definitions of OPR in the literature, identifying seven core traits used in 22 distinct configurations [3]. Across all definitions, the main core elements were revealing reviewer identities (open identities) and publishing reviews (open reports).
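The typology behind the ‘22 distinct configurations of seven traits’ finding is, at heart, a set-counting exercise: each definition invokes some subset of the seven traits, and distinct subsets count as distinct configurations. A minimal Python sketch of that idea follows; the trait names echo the seven core traits, but the sample definitions are hypothetical illustrations, not data from the study.

```python
from collections import Counter

# The seven core OPR traits (names paraphrased from the review; illustrative).
TRAITS = {
    "open identities", "open reports", "open participation",
    "open interaction", "open pre-review manuscripts",
    "open final-version commenting", "open platforms",
}

# Hypothetical sample: each "definition" is the set of traits it invokes
# (the real corpus contains 122 definitions yielding 22 configurations).
definitions = [
    {"open identities", "open reports"},
    {"open identities"},
    {"open identities", "open reports"},  # same configuration as the first
    {"open reports", "open final-version commenting"},
]

# Distinct configurations = distinct trait subsets, order-insensitive.
configurations = {frozenset(d) for d in definitions}

# Trait prevalence: in how many definitions does each trait appear?
prevalence = Counter(trait for d in definitions for trait in d)

print(len(configurations))            # 3 distinct configurations here
print(prevalence["open identities"])  # appears in 3 of the 4 definitions
```

Run against the real corpus, this kind of tally would reproduce the study's headline numbers; the toy data simply shows that duplicate trait sets collapse to one configuration while prevalence counts every occurrence.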

Attitudes to the elements of OPR vary greatly among researchers. A recent study by one of the current authors [4] found the majority of respondents to be in favour of open peer review becoming mainstream scholarly practice, with high levels of support for most traits of OPR, particularly open interaction, open reports and final-version commenting (although not for opening reviewer identities to authors). Other studies also report a generally positive attitude towards open peer review [5, 6]. Opening up identities and reports are similarly presented as primary issues in numerous studies. Baggs et al.'s 2008 analysis of nursing journals framed transparency and revealing reviewer identities as holding the potential to increase professionalism, communication, accountability and fairness; however, the majority of respondents in that study preferred closed identities in the review process, to avoid interpersonal conflict and political issues [7]. A very recent Publons study on the global state of peer review reinforces these findings, but suggests that attitudes towards OPR are shifting, as the new generation of researchers is more likely to review for journals with OPR options [6]. Furthermore, publishing channels with clear open access and open peer review policies are becoming a major factor in the choice of dissemination tools and services: in a recent study of F1000Research authors, the majority of respondents rated the transparency of the peer review system, including revealing reviewers' names, as an important or very important reason for publishing with that venue [5].

OPR is attracting increasing attention, but the term covers a diverse cluster of interrelated yet distinct innovations that can be combined in myriad ways. Hence, any publisher wishing to move in this direction faces crucial choices about which elements of openness to embrace, and these decisions will in turn expose them to potential advantages and disadvantages for the quality of their peer review systems. Which OPR system is optimal, for which communities and in which circumstances? How should these systems be implemented, and what opportunities and pitfalls should be recognised? Given such growing interest in open peer review, combined with such a proliferation of options for how to ‘open’ peer review, it becomes urgent to offer clear guidelines for those publishers and editors interested in taking up such practices.

This article addresses this need. It presents a series of structured guidelines for publishers and editors on introducing the various open peer review traits. The guidelines were produced in close collaboration with a group of experts in peer review, and especially open peer review, from publishing and research. Methods included background research, expert interviews and an expert synthesis/validation workshop. The guidelines are intended for those who oversee the peer review of manuscripts for publication and are considering introducing more transparency or inclusivity to their peer review processes by implementing any of the innovations grouped under the term ‘open peer review’. The paper first gives general advice that cuts across all OPR elements, before detailing specific advice for each element. Although the advice is directed mainly at editors and publishers of scientific journals, since this is the area in which OPR is at its most mature, many of the principles may also be applicable to the implementation of OPR in other areas (e.g., books, conference submissions).

What issues with peer review does open peer review address?

Peer review is a method of scholarly quality assurance that serves to validate the soundness, substance and originality of a work, to assess and help improve it until it meets the required standards for these criteria, and sometimes to select for ‘appropriateness’ or ‘fit’ for certain venues. Peer review, as it relates to scholarly publishing, is here understood as the formal scholarly process whereby an editor sends copies of a manuscript to neutral third parties judged knowledgeable enough to comment on its quality and suitability for publication. Standard peer review is typically:

Anonymous: either reviewer identities are kept from authors (single-blind) or authors and reviewers are unknown to each other (double-blind). In some cases, authors' identities are also hidden from editors (triple-blind)

Confidential: the process takes place behind closed doors (or, rather, password privileges) and reviews are not published

Selective: reviewers are chosen by the editor

As noted, open peer review is a complex phenomenon that represents a range of possible innovations to standard peer review processes, each of which addresses different issues with standard peer review. Following the summary presented in [3], we offer a non-systematic review of the issues that the various traits of open peer review are proposed to address:

Accountability: The increased transparency offered by open identities and reports could increase accountability and make reviewer conflicts of interest more apparent. Open participation could reduce possible problems with biases or elitism associated with editorial selection of reviewers [8]; on the other hand, it could facilitate engagement by those with conflicts of interest (particularly where anonymity is allowed). Open identities are sometimes theorised to discourage reviewers from making strong criticisms, especially against higher-status colleagues—if true (and there is little evidence against which to judge this), this could subvert review by weakening criticism [3].

Bias: Open reports allow the scientific community to examine how publication decisions were made. However, open identities remove the anonymity for reviewers (single-blind) or authors and reviewers (double-blind) which has traditionally been used to counteract such biases [9, 10].

Inconsistency: Open identities and open reports could improve the quality of reviews, encouraging reviewers to be more thorough in their assessments (although there is too little evidence to say if this is the case) [11]. Open participation, by increasing the number of potential reviewers, could lead to more thorough review processes [12] (although note that open participation processes often fail to attract large numbers of comments). Some evidence suggests open interaction could increase the accuracy of reviews [13].

Time: Peer review often takes a long time. Publishing manuscripts online in advance of peer review, either as pre-prints or as part of the publisher workflow, speeds up dissemination and (in disciplines like physics) enables researchers to claim priority in a finding [14]. Open platforms could help avoid cycles of review, where articles are submitted to various journals before finally being published and are reviewed anew each time. However, open identities and open reports could increase delays by increasing the number of reviewer invitations needed to secure the required number [11], and open interaction could delay processes by leading to cycles of comments back and forth between reviewers and authors [15].

Incentive: If review reports were published alongside reviewer names, it would be easier for researchers to claim credit for these activities, thus incentivising review [16]. Open participation could incentivise researchers by allowing them to seek out works they want to review.

Wasted effort: Rather than hiding the useful contextual information contained in peer review, open reports would make this available [17].

The guidelines were created in close consultation with a group of experts. The views of expert participants were sought via an interactive meeting, a brief pre-meeting questionnaire and subsequent sharing of drafts of the guidelines with all participants for feedback. Participants were chosen for their expertise in peer review—especially open peer review—and, although care was taken to include conservative voices, the topic and aim of the meeting attracted people ‘open’ to open peer review. The aim was to have most large publishers represented. The 15 experts who took part in the workshop, and others who contributed to the pre-meeting questionnaire and commented on drafts of the guidelines, are listed in the ‘Acknowledgements’ section of this article. All consented to be named. They represent experts in peer review and open peer review from many major publishers (BMC (part of Springer Nature), BMJ, Copernicus Publications, eLife, Elsevier, F1000Research, Hindawi, MDPI, Nature (part of Springer Nature), PLOS, Royal Society Open Science, Taylor & Francis, Wiley), along with representatives from Publons and the PEERE research consortium. All data were collected and analysed by the current authors, Dr. Tony Ross-Hellauer (male) and Dr. Edit Görögh (female), researchers in Open Science with rich previous experience in performing qualitative analysis in these areas. Seven of the participants were already known to the authors, but otherwise no relationships were established prior to study commencement. This prior familiarity means, however, that the authors were to varying degrees aware of the broad stance of some participants towards open peer review before the study commenced.

Scoping interviews

Initial scoping of the issues was conducted via three semi-structured interviews with academic publishing professionals whose publishing portfolios included open peer review journals. Participants for this stage were chosen for their familiarity with the theme; all those approached agreed to be interviewed. Interviews were conducted by the current authors between 5 and 18 December 2017 via Skype. No audio or video recordings were made, but the interviewers took detailed notes of participants' answers. Each interview lasted between 30 and 60 minutes. Notes were then shared with participants for any post-interview comments, corrections or additions.

Pre-meeting questionnaire

A week in advance of the workshop, a short online pre-meeting questionnaire was distributed to workshop attendees via Google Forms, consisting of three open questions: ‘If a publisher or journal is interested in implementing OPR in some form, what advice would you give them on how to get started?’, ‘What opportunities should they look out for?’ and ‘What pitfalls should they look out for?’. In total, 14 responses were received between 16 and 23 March 2018. No personal data save names and email addresses were collected, and all data were kept and stored in accordance with data protection regulations. The authors of this study then collaboratively coded the answers to the three open questions, iteratively grouping them using a grounded theory approach. This resulted in a list of preliminary common categories and themes, which were then used as the basis for further investigation at the interactive workshop: technological/process issues; engaging with and listening to communities; being pragmatic (where to get started); the biggest drivers for selling the concept; the biggest problems to watch out for; and how to set goals and evaluate performance.

Interactive workshop

The interactive workshop took place on 27 March 2018 at Springer Nature's Stables venue in London, UK. Participants were identified via purposive sampling, with the aim of including representatives of most of the major publishers who run or are experimenting with open peer review processes, but also publishers with more traditional peer review systems, as well as funders and researchers who had previously demonstrated interest in these issues. Twenty invitations were issued via email, of which two were declined (for lack of time) and two received no response. One confirmed participant was unable to attend due to illness, meaning that 15 participants took part in the workshop (eight male and seven female). All represented major publishers, except for one participant from Publons (a peer review analytics company) and one active peer review researcher (from the PEERE network).

The meeting took the form of an interactive workshop which lasted 90 min. No audio or video recording of the workshop was made, but extensive field notes were kept by the researchers. No one else was present besides the participants and researchers. The session was moderated by Edit Gorogh with note-taking by Tony Ross-Hellauer and additional facilitation from Elisabeth Moylan, one of the attendees. Edit Gorogh first presented the motivation for the exercise. Participants were then asked, in pairs, to consider the questions: ‘For those who have implemented/experimented with OPR, what one thing would you have done differently?’ or ‘For those without OPR experience, what’s the one thing you’d really like to know first?’ Each pair was then asked to report back to the group the main points of their discussion, enabling free-form discussion of the main points of interest.

The group was then split into two, and each group was asked to discuss and record its main advice under each of the headings identified as common themes from the interviews and questionnaire. After 15 min of group discussion, each group reported back to the whole group, whereupon each could give feedback on the other’s answers. In a final stage, participants split into four self-selected sub-groups, each of which provided advice specific to individual OPR traits (open identities, open reports, open interaction, open participation/pre-review manuscripts). Again, each sub-group reported back to the whole group for further discussion. The researchers kept notes on major and minor themes throughout the discussion.

Iterative drafting of guidelines text

Following the workshop, the authors of this study used the workshop outputs to further refine the structure and content of the guidelines. First, notes taken during the meeting were collated, and major and minor themes identified using a grounded theory approach employing open and axial coding. A first draft of the guidelines was then written and shared with all expert participants via Google Docs for collaborative feedback and further refinement. Following submission of this manuscript, and in response to useful critical comments from two very thorough reviewers, a revised draft was created in which the advice was further refined. This version was then shared with all participants in December 2018 for further feedback before resubmission.

General advice on implementing open peer review (OPR)

A) Set your open peer review goal(s)

A1. Decide what you would like to achieve with OPR

Any journal editor or publisher wishing to implement some form of OPR would be well-advised to first do their homework. What do you want to achieve? How? For which reasons? Answering these questions first will enable you to orient your engagement with OPR. Examine which particular aspects of your peer review processes you would like to improve. For example, do you want to increase the transparency of your processes, give credit to peer reviewers, enable greater participation, or just speed up the peer review process? Being clear on these primary goals is vital.

A2. Acquaint yourself with the differences between the elements of OPR

As discussed above, ‘open peer review’ can mean different things to different people. As a first step, familiarise yourself with the differences between each of these elements. For example, one of the current authors created a taxonomy of seven core traits:

Open identities: authors and reviewers are aware of each other’s identity.

Open reports: review reports are published alongside the relevant article.

Open participation: the wider community are able to contribute to the review process.

Open interaction: direct reciprocal discussion between author(s) and reviewers, and/or between reviewers, is allowed and encouraged.

Open pre-review manuscripts: manuscripts are made immediately available (e.g., via preprint servers like arXiv) in advance of formal peer review procedures.

Open final version commenting: review or commenting on final ‘version of record’ publications.

Open platforms (‘decoupled review’): review is facilitated by a different organisational entity than the venue of publication.

Read widely to familiarise yourself with the pros and cons of each of these elements (a primer with links to some literature is given above in the ‘What issues with peer review does open peer review address?’ section, and a longer list of secondary reading is included in Additional file 1).

A3. Decide which elements you would like to implement

Being clear on your primary goals and relating them to specific elements of OPR will enable you to begin to build a provisional strategic plan for OPR implementation. Further refine this by studying existing models and OPR implementations through publisher websites, published literature, presentations and online resources. Use industry contacts and discussion platforms to discuss and learn lessons from publishers and journals who have already implemented OPR procedures. Be aware that the resource and time commitment is dependent on the elements selected—disclosing names of reviewers is relatively straightforward whereas publishing a full peer review history for each paper requires significant investment (see also the ‘Assess technological feasibility of various options’ and ‘Assess the costs of various options’ sections below).

B) Listen to research communities

B1. Be conscious of, and sensitive to, community differences

Be conscious that there will be differences in perceptions and willingness among different research communities. For instance, some disciplines have a stronger tradition of double- or even triple-blind review, which may lead to more resistance to openness in peer review. If you are a publisher overseeing peer review at many journals, consider starting with particular disciplines that are more open to trialling OPR, especially those where other journals in the field already use OPR (although note that this may be challenging for broad-scope interdisciplinary journals).

B2. Consider surveying community opinions

Consider directly surveying community opinions regarding open peer review models to gauge attitudes. This may work especially well for journals with close-knit communities—for example, society journals, which regularly seek feedback from authors or society members regarding journal policies. Alternatively, or as a complement to this strategy, consider targeted ‘qualitative interviews’ to gather insights from those with particularly strong opinions regarding open peer review.

B3. Communicate your goal with the stakeholders and research community

Engage journal communities—firstly by consulting your editorial board and reviewers to get them on board with the idea. It may be necessary to ‘sell’ the benefits of opening up peer review and provide reassurances. A committed and engaged Editor who can drive such discussions may help here. Find keen researchers to work with and gauge interest in the model among communities the journal serves. Let reviewers, authors and readers know in advance, and if you are unsure of how such developments might be received, consider announcing plans in a journal editorial and seeking community feedback. In any case, include requests for community feedback in any such announcements to ensure alignment with researcher attitudes.

C) Plan technologies and costs

C1. Assess technological feasibility of various options

A deciding factor in prioritising which elements of openness to include will be the technical capabilities of your systems. Whether you are a small publisher using open source software or a large publisher using one of the major manuscript handling services, OPR elements may be difficult and/or expensive to implement if your electronic editorial office and production/publication systems and workflows cannot currently be easily configured for them.

C2. Assess the costs of various options

It is important to recognise potential costs in advance. As things stand, there is a lack of infrastructure to facilitate automated workflows for many of the elements of OPR. Hence, development costs may be a major barrier—especially for smaller players. Ask yourself: Which options does your system already support, and do you have the technical staff or resources to fund system development? Consider also that costs will likely not only be in initial implementation (e.g., custom system development), but also ongoing support costs (e.g., staffing). If your needs would require significant custom system adaptations from a third-party service provider, you might consider partnering with other publishers who use these services to spread the costs of implementing these changes. Alternatively, some platforms are now offering specific OPR functionality to work together with more traditional publishing services. In any case, be aware that there will usually be different ways, with differing levels of elegance and cost, to implement OPR options. For instance, publication of peer review information could be as simple as manually compiling review components and publishing a single document as a supplementary file, or as complex as an automated (XML) workflow where each element is published separately (see also the ‘Open reports’ section).

C3. Consider workaround options for piloting

If you are just experimenting with OPR, rather than immediately extending your whole publication architecture, it may be better to start small with workarounds. Be aware, however, that ad hoc workarounds may produce a less smooth user experience, which could affect uptake and user attitudes and so become an inhibiting factor in the success of the experiment. One solution here would be for a third-party OPR platform to offer its service as a plug-in to existing workflows for conducting such experiments.

D) Be pragmatic in your approach

D1. Set priorities and consider a phased approach

Be flexible and choose your battles carefully. Change is difficult and you may run into problems if you try too many things at once. Your communities may be more receptive to some elements than others, and so, prioritising the areas you would like to change and being prepared to compromise from the ideal situation or at least take a phased approach may help you maintain traction and community buy-in. It will also make it easier to systematically assess the success or otherwise of any particular innovation.

D2. Consider making options optional or piloting them first

For elements you would like to introduce but think might prove controversial, you could make them optional. This signals your support for the innovation while still allowing reviewers or authors to opt out. Note, however, that default policies may significantly affect outcomes: an opt-in default is likely to lead to lower participation than an opt-out default, for instance. If reactions among research communities are uncertain, consider introducing OPR through a pilot study with an accompanying survey for participants, which shows that final decisions will be based upon real experiences while allowing the journal to experiment with the confidence of the community.

E) Further communicate the concept

E1. Engage the community, especially via ‘open champions’

Once you have decided on the model you would like to move to and prioritised which OPR elements to implement, you will still need to sell your communities on the concept. As a general strategy, engage with the research community to find academics who are enthusiastic about OPR to act as ‘open champions’ advocating to their peers, for example by engaging people who responded positively to your initial community consultation in step B. Moreover, the arguments above in favour of the various aspects of OPR will help sell the concept, especially regarding increasing transparency, enhancing credit for review activities and (although this is an understudied area) potentially enhancing the quality of reviews.

E2. Be aware that communication is key and terminology is important

Misunderstandings could derail processes. As the stewards of the peer review process, publishers and editors have a duty of care to ensure reviewers and authors fully understand the systems of peer review in which they participate and its potential advantages and disadvantages. Use editorials, webinars, infographics and/or blog posts to articulate decisions and justify why these decisions have been made. Formulate clear policies which are easily findable on journal webpages for authors and reviewers.

F) Evaluate performance

F1. Have a clear framework for assessing success

There is a need to track review quality and acceptance rates to monitor how OPR affects processes. As noted above, it is good to decide on a vision for the kind of peer review you want in the context of your end-to-end publication workflow and then prioritise goals towards that vision. A key part of this planning is deciding how you will define and evaluate success: have a clear framework specifying which measures will be assessed and for which population clusters. Systematically collect data and study the impact of the practice on journal performance. Key questions could include the following: Is review quality improved? Is it more difficult to find reviewers? Are review times affected? Are open reports being consulted and re-used? It is also advisable to consult your journal community once the new process has been in place for some time, perhaps via survey, to gauge how attitudes are developing. It is important to establish ex ante which quantifiable measures or performance indicators will be used for internal analysis. Outcomes should always be considered on an appropriate time scale, however: change takes time.
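As a purely hypothetical illustration of such quantifiable indicators, the sketch below computes a few of the measures suggested above (reviewer acceptance rate, turnaround time, report re-use) from invented per-manuscript records; all field names and figures are assumptions for the example, not drawn from any real editorial system.

```python
# Illustrative only: hypothetical per-manuscript records with made-up fields.
from statistics import mean

records = [
    {"invites_sent": 6, "reviews_received": 2, "days_to_decision": 41, "report_views": 120},
    {"invites_sent": 4, "reviews_received": 2, "days_to_decision": 35, "report_views": 45},
    {"invites_sent": 9, "reviews_received": 3, "days_to_decision": 60, "report_views": 0},
]

# Is it more difficult to find reviewers? (accepted invitations / sent invitations)
acceptance_rate = sum(r["reviews_received"] for r in records) / sum(r["invites_sent"] for r in records)

# Are review times impacted? (mean days from submission to decision)
mean_days = mean(r["days_to_decision"] for r in records)

# Are open reports being consulted? (share of articles whose reports were viewed at all)
share_reports_viewed = mean(1 if r["report_views"] > 0 else 0 for r in records)
```

Comparing such indicators before and after an OPR change, on the same journal and over a comparable period, is one simple way to make the ‘framework for assessing success’ concrete.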

F2. Accept that change takes time, but adjust if necessary

Bear in mind that cultural change takes time, and so, even where uptake is not as quick as hoped, the broader ethical aims of transparency and accountability in scholarly publishing might make persistence desirable in spite of low uptake. However, if things really are not working, it may be necessary to re-evaluate your goals in light of lessons learned. For example, revisit the advice in the ‘Set priorities and consider a phased approach’ and ‘Consider making options optional or piloting them first’ sections to consider phasing individual elements or making them optional.

F3. Share your results with the community

Giving updates on progress will sustain community engagement, keeping authors, reviewers, editors and publishing staff informed about your initiative. These updates will also help others decide whether and how to implement similar approaches. There is currently a lack of robust scientific evidence on the efficacy of many OPR traits. Once enough evidence has been gathered, consider writing up the results as a scientific study for peer-reviewed publication. Alternatively, consider partnering with peer review researchers from the start to ensure data is well-formed for such analyses and to enable rigorous external scientific analysis.

Advice on implementing specific elements of open peer review

G) Open identities

Open identities peer review is review in which authors and reviewers are aware of each other’s identities. Reviews with open identities may be more constructive in tone [18]. There is some evidence [11, 19] that finding reviewers willing to review openly might be more difficult, although others have found no such negative consequences [18]. If you opt for open identities, it may be advisable to do the following:

G1. Devise strategies to compensate for the possibility that open identities might make it harder to find reviewers

Create a new standard reviewer invitation email that includes a clear description of the open identities review process and its potential advantages and disadvantages, as well as a standard follow-up text that goes deeper into these issues for those who are reticent. If you are keen to invite a specific person who is hesitant, be ready to persuade them by explaining further or, for example, offering more time to complete the review.

G2. Be alert to possible negative interactions and have a workflow for dealing with them

A common concern regarding open identities is that junior researchers who give negative reviews to more senior colleagues may face retaliation in some form. While it is important to note that there is at present only anecdotal evidence of this, such concerns seem nonetheless very common and underlie much of the negative response to open identities. It is hence essential to deal with these concerns early on to set authors and reviewers at ease and limit the risk to which researchers who take part in review processes are exposed. Have in place clear processes for dealing with any reviewer concerns and encourage any reviewers experiencing negative consequences to contact the journal as a matter of academic ethics. Publishers should also use their experiences to contribute to the evidence base on this issue by monitoring whether open identities leads to more positive reviews overall, for example.

G3. Enable credit

Wherever reviewer names are disclosed along with publication, be sure to use identifiers (e.g., ORCID) to link that activity to reviewer profiles and further enable credit and career evaluation. Assigning persistent identifiers like ORCID is a crucial element of current publishing best practice. Moreover, if your journal does not yet partner with Publons, consider enabling this partnership to further support reviewer credit.

G4. Consider piloting or making open identities optional

This point was made above for OPR elements in general, but is worth applying to open identities specifically, since this is one of the most controversial elements of OPR. If you are interested but not ready to fully commit to open identities, as suggested above, you could start small with a pilot and scale up. Alternatively, you could allow reviewers to opt in or out (as happens, for example, at eLife and MDPI). As noted above, to signal the journal’s support for the concept while allowing reviewers choice, you could make open identities the default but enable reviewers to opt out. Another possibility is to maintain a standard single- or double-blind review process but publish reviewer names alongside the final article (the practice at the publisher Frontiers). Bear in mind, though, that such conditions could introduce bias, whereby reviewers who are inclined to be more lenient towards a manuscript may be more likely to agree to review openly.

H) Open reports

Open reports peer review is where review reports (either full reports or summaries) are published alongside the relevant article (an example is shown below in Fig. 1). Often, although not always (e.g., EMBO reports), reviewer names are published alongside the reports. The main benefit of this measure lies in making currently invisible but potentially useful scholarly information available for re-use. Being able to examine the normally behind-the-scenes discussions and processes of improvement and assessment increases transparency and accountability, and publishing reports can further incentivise peer reviewers by making their review work a more visible part of their scholarly activities (thus enabling reputational credit via linking reviews to researchers’ ORCID or Publons profiles, for example).

Fig. 1 Screenshot of example published peer review report on F1000Research [22]

H1. Meet industry best-practice for publishing review reports

There is not yet an established industry standard for how to publish peer review reports, but this situation is now changing. The current best advice comes from Beck et al.’s 2018 article ‘Publishing peer review materials’ [20], whose first version (currently open for community review) advises that best practice is to assign individual DOIs to reports. In this way, review reports become a citable, discoverable and creditable part of the scholarly record in their own right. The authors see three routes to achieving this:

‘Peer review materials are attached to the article as a single or numerous PDFs. Whether these materials are pulled together into one document or attached as separate documents, there should be some defined mechanism in the JATS XML tagging that would support the capture of any available metadata and identify these files in a machine-readable and interoperable way for publishers to tag this content appropriately.

Peer review materials are appended to the article within the full text (so all is machine readable) as a sub-article component of the XML.

Peer review materials are full-text XML ‘articles’ or ‘commentaries’ in their own right that link bidirectionally to the main article.’

The options are presented in order from most basic to most complex, but also from least to most desirable: best practice would be option 3, while the most pragmatic is option 1. In addition, machine-readable metadata should accompany the content.
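As a rough illustration of option 2 (review material as a sub-article within the article XML), the following Python sketch builds a minimal JATS-style fragment using only the standard library. The element and attribute names (`sub-article`, `front-stub`, `article-id`, `contrib`) follow common JATS conventions, but the exact tagging, and the DOI shown, are illustrative assumptions; consult the JATS schema and your publisher’s tagging guidelines before adopting any particular structure.

```python
# Sketch only: assembles a minimal JATS-style <sub-article> for a peer review
# report. Element/attribute names follow JATS conventions but are not a
# validated tagging scheme; the DOI and reviewer name are invented examples.
import xml.etree.ElementTree as ET

def review_sub_article(report_doi: str, reviewer_name: str, body_text: str) -> ET.Element:
    sub = ET.Element("sub-article", {"article-type": "referee-report"})
    front = ET.SubElement(sub, "front-stub")
    doi = ET.SubElement(front, "article-id", {"pub-id-type": "doi"})
    doi.text = report_doi  # an individual DOI makes the report citable in its own right
    contrib_group = ET.SubElement(front, "contrib-group")
    contrib = ET.SubElement(contrib_group, "contrib", {"contrib-type": "reviewer"})
    name = ET.SubElement(contrib, "string-name")
    name.text = reviewer_name
    body = ET.SubElement(sub, "body")
    para = ET.SubElement(body, "p")
    para.text = body_text
    return sub

report = review_sub_article("10.9999/example-review-1", "A. Reviewer",
                            "The methods are sound; minor revisions suggested.")
xml_string = ET.tostring(report, encoding="unicode")
```

In a production workflow this fragment would be embedded in the article’s full-text XML and validated against the JATS DTD, rather than emitted standalone as here.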

H2. Be aware of potential challenges in publishing reports

Beck et al. [20] report that where publishers had existing workflows to prepare review content for publication, this could be done in minutes. Others reported spending 20 to 40 min per article on tasks such as removing ‘boilerplate text’ from reports, compiling content from multiple locations and editorial checks, including reviewing or redacting sensitive information or inflammatory language. Clear policies should be in place for editors handling or overseeing any derogatory or defamatory remarks, and these should be publicly available on the peer review policy pages of the journal website. Where published reports are changed or redacted as a result, a disclaimer could be added to indicate this. It is also important to ensure coherent version management so that reports can be linked to specific versions of manuscripts. A related issue is the status of confidential reviewer messages to the editor: should these be allowed, and will they be published later? If not, consider adding a further disclaimer to published reports that such comments have been omitted. Finally, as there is not yet infrastructure to enable publishing of reports in a scalable and sustainable way, there may be substantial resource and cost commitments involved in publishing peer review information, which could be a significant barrier for smaller players (see the ‘Plan technologies and costs’ section for suggestions on how to deal with this).

I) Open participation, pre-review manuscripts and open final version commenting

Open participation peer review allows the wider community to contribute to the review process. This can happen either before formal publication (by making a pre-review manuscript openly available online as a preprint or discussion paper) or after publication (by enabling comments on the publisher website or via a third-party platform like PubPeer Footnote 1). Whereas in traditional review editors identify and invite specific parties (peers) to review, open participation processes allow any interested members of the scholarly community, or other interested parties from outside traditional scholarly circles, to participate in the review process, either by contributing full, structured reviews or shorter comments. Open participation processes are often used as a complement to traditional, invited reviews. Crowdsourcing reviewers in this way in theory ensures that fields do not become too insular or self-referential, enabling cross-disciplinary perspectives and potentially increasing the number of researchers who can contribute to the quality assurance of manuscripts.

I1. Decide who can comment

A key decision here is whether to open comments to anybody (anonymous or registered) or to require some credentials before allowing comments. Various options are available depending on your communities. At Copernicus Publications, for example, reviewers can remain anonymous but open commentators on discussion papers must disclose their identities [12] (Fig. 2). MDPI’s Preprints service Footnote 2 originally allowed only registered users to comment, but this condition was recently relaxed; despite concerns that this would lead to lower-quality comments, MDPI in fact reports having had few problems so far. One consideration is that indexing services that accept comments (as, for example, PubMed Central Footnote 3 accepts comments from F1000Research Footnote 4) may require a named individual and their affiliation.

Fig. 2 Screenshot of comment threads of open participation, pre-publication community discussion on an article in the Copernicus journal Biogeosciences (https://www.biogeosciences.net/15/4955/2018/bg-15-4955-2018-discussion.html)

I2. Consider how to foster uptake

A further crucial issue is that open participation processes often experience low uptake, which is why they are typically used as a complement to a parallel process of solicited peer review. At the open access journal Atmospheric Chemistry and Physics (ACP), which publishes pre-review discussion papers for community comments, only about one in five papers receives comments [6]. Open participation review is therefore arguably better seen as a complement to, rather than a replacement for, invited peer review. In any case, some mediation of the community will help to stimulate engagement. Such mediation could take the form of reaching out to potential commentators directly to ask them to comment, or highlighting conversations via other channels such as social media to entice others to engage.

J) Open interaction

In traditional peer review, reviewers and authors correspond only with editors. Reviewers have no contact with other reviewers, and authors usually have no opportunity to directly question or respond to reviewers. Open interaction peer review allows and encourages direct reciprocal discussion between reviewers and/or between author(s) and reviewers.

J1. Decide which workflow to enable

Allowing interaction between authors and reviewers, or between reviewers themselves, is another way to ‘open up’ the review process, enabling editors and reviewers to work with authors to improve their manuscript. It is therefore important to decide which workflow you will follow. Examples of journals that enable pre-publication interaction between reviewers are the EMBO Journal Footnote 5 and eLife. Footnote 6 Frontiers Footnote 7 has gone a step further, including an interactive collaboration stage with dialogue between authors, reviewers and editor(s).

J2. Be alert to how this may affect editorial workloads

While this extended dialogue might be expected to increase the editorial workload in some parts of the process, publishers practicing such methods actually report that they can also reduce workload in other parts. For example, the eLife consultation approach involves more work upfront (i.e., the consultation process and the drafting of the consensus decision letter), but time is saved later on if the editor decides on the revised version rather than sending back to the referees [ 21 ].

| Item | Guideline | Completed |
| --- | --- | --- |
| **General advice** | | |
| A) | Set your open peer review goal(s) | |
| A1. | Decide what you’d like to achieve with OPR | |
| A2. | Acquaint yourself with the differences between the elements of OPR | |
| A3. | Decide which elements you would like to implement | |
| B) | Listen to research communities | |
| B1. | Be conscious of, and sensitive to, community differences | |
| B2. | Consider surveying community opinions | |
| B3. | Communicate your goal with the stakeholders and research community | |
| C) | Plan technologies and costs | |
| C1. | Assess technological feasibility of various options | |
| C2. | Assess the costs of various options | |
| C3. | Consider work-around options for piloting | |
| D) | Be pragmatic in your approach | |
| D1. | Set priorities and consider a phased approach | |
| D2. | Consider making options optional or piloting them first | |
| E) | Further communicate the concept | |
| E1. | Engage the community, especially via ‘open champions’ | |
| E2. | Be aware that communication is key and terminology is important | |
| F) | Evaluate performance | |
| F1. | Have a clear framework for assessing success | |
| F2. | Accept that change takes time, but adjust if necessary | |
| F3. | Share your results with the community | |
| **Trait-specific advice** | | |
| G) | Open identities | |
| G1. | Devise strategies to compensate for the possibility that open identities might make it harder to find reviewers | |
| G2. | Be alert to possible negative interactions and have a workflow for dealing with them | |
| G3. | Enable credit | |
| G4. | Consider piloting or making open identities optional | |
| H) | Open reports | |
| H1. | Meet industry best-practice for publishing review reports | |
| H2. | Be aware of potential challenges in publishing reports | |
| I) | Open participation, pre-review manuscripts & open final version commenting | |
| I1. | Decide who can comment | |
| I2. | Consider how to foster uptake | |
| J) | Open interaction | |
| J1. | Decide which workflow to enable | |
| J2. | Be alert to how this may affect editorial workloads | |

Conclusions

Open peer review is moving into the mainstream, but it is often poorly understood and surveys of researcher attitudes show important barriers to implementation. As more journals move to implement and experiment with the myriad of innovations covered by this term, there is a clear need for best practice guidelines to guide implementation. This article has aimed to address this knowledge gap, reporting work based on literature research, expert interviews and an interactive stakeholder workshop to create best-practice guidelines for editors and journals who wish to transition to OPR. The guidelines offer practical and pragmatic advice to these purposes at both a general level and for specific OPR traits. Main points of guidance are (a) set open peer review goal(s), (b) listen to research communities, (c) plan technologies and costs, (d) be pragmatic in approach, (e) further communicate the concept and (f) evaluate performance.

It is important to recognise some limitations of our approach to the creation of these guidelines, however. Firstly, our approach was to bring initial ideas forward individually and anonymously, but then to iteratively develop the guidelines collaboratively with our whole group of experts. While we believe this open, collaborative phase enabled rich and in-depth discussion, it is possible that these group dynamics resulted in some ‘bandwagon effects’ whereby our experts influenced each other in their stated opinions. Secondly, these guidelines have not yet been pilot-tested, as would happen in the development of, for example, formal reporting guidelines. Nonetheless, given the range of experience and knowledge that has been incorporated here, we are confident the guidelines represent the best advice available for the implementation of open peer review at this time. The area of open peer review is fast evolving. As journals and publishers look to experiment with new processes, it is our hope these guidelines prove useful in setting expectations and guiding best practice.

Footnotes

1. https://pubpeer.com/
2. https://www.preprints.org/
3. https://www.ncbi.nlm.nih.gov/pmc/
4. https://f1000research.com/
5. http://emboj.embopress.org/
6. https://elifesciences.org/
7. https://www.frontiersin.org/

Abbreviations

OPR: Open peer review



Acknowledgements

Thanks to the experts who participated in the London meeting: Tiago Barros (Publons), Elisa de Ranieri (Nature), Lynsey Haire (Taylor & Francis), Michael Markie (F1000), Catriona McCallum (Hindawi), Bahar Mehmani (Elsevier), Elizabeth Moylan (then BMC, now Wiley), Iratxe Puebla (PLOS), Martyn Rittman (MDPI), Peter Rodgers (eLife), Jeremy Sanders (Royal Society Open Science), Richard Sands (BMJ), Flaminio Squazzoni (PEERE), Xenia van Edig (Copernicus Publications), Michael Willis (Wiley), as well as to Theodora Bloom (BMJ), Sara Schroter (BMJ) and Phil Hurst (Royal Society) who did not attend the meeting but gave online input and commented on drafts of the guidelines. We also extend our very grateful thanks to our handling editor at Research Integrity & Peer Review, Elizabeth Wager, as well as our two very thorough peer reviewers, Mario Malicki and Virginia Barbour. The very helpful comments of all three really added to the structure and elaboration of the guidelines.

This work was supported by the OpenUP project, funded by the European Union’s Horizon 2020 research and innovation programme under grant agreement No 710722.

TRH is Senior Researcher at Know-Center GmbH, Graz, Austria. The Know-Center is funded within the Austrian COMET program—Competence Centers for Excellent Technologies – under the auspices of the Austrian Federal Ministry of Transport, Innovation and Technology, the Austrian Federal Ministry of Economy, Family and Youth, and the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.

Availability of data and materials

Data and background materials are available via Zenodo: https://doi.org/10.5281/zenodo.2301842

Author information

Authors and affiliations

Know-Center GmbH and Graz University of Technology, Inffeldgasse 13, 8010, Graz, Austria

Tony Ross-Hellauer

State and University Library Goettingen, Goettingen, Germany

Edit Görögh


Contributions

TRH and EG contributed to the design and implementation of the study and to the analysis of the results. TRH drafted the manuscript. TRH and EG approved the final draft of the manuscript.

Corresponding author

Correspondence to Tony Ross-Hellauer.

Ethics declarations

Ethics approval and consent to participate

Not applicable

Consent for publication

Competing interests

TRH is Editor-in-Chief of Publications (ISSN 2304-6775), an open access journal on scholarly publishing published quarterly by MDPI.

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Additional file

Additional file 1: Background resources for open peer review implementation. (DOCX 24 kb)

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated.


About this article

Cite this article

Ross-Hellauer, T., Görögh, E. Guidelines for open peer review implementation. Res Integr Peer Rev 4, 4 (2019). https://doi.org/10.1186/s41073-019-0063-9


Received: 04 October 2018

Accepted: 05 February 2019

Published: 27 February 2019

DOI: https://doi.org/10.1186/s41073-019-0063-9


  • Peer review
  • Open peer review
  • Scholarly publishing
  • Open science

Research Integrity and Peer Review

ISSN: 2058-8615


What Is Peer Review? | Types & Examples

Published on December 17, 2021 by Tegan George . Revised on June 22, 2023.

Peer review, sometimes referred to as refereeing , is the process of evaluating submissions to an academic journal. Using strict criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

There are various types of peer review. The main difference between them is to what extent the authors, reviewers, and editors know each other’s identities. The most common types are:

  • Single-blind review
  • Double-blind review
  • Triple-blind review
  • Collaborative review
  • Open review

Relatedly, peer assessment is a process where your peers review something you’ve written, based on a set of criteria or benchmarks from an instructor. They then offer constructive criticism, compliments, or guidance to help you improve your draft.

Table of contents

  • What is the purpose of peer review?
  • Types of peer review
  • The peer review process
  • Providing feedback to your peers
  • Peer review example
  • Advantages of peer review
  • Criticisms of peer review
  • Other interesting articles
  • Frequently asked questions about peer reviews

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the manuscript. For this reason, academic journals are among the most credible sources you can refer to.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.


Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymized) review. Here, the names of the reviewers are not known by the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymized comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymized) review, both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that this mitigates any risk of prejudice on the side of the reviewer, while protecting the nature of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

Triple-blind (or triple anonymized) review, in which the identities of the author, reviewers, and editors are all concealed, does exist, but it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimizes potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymize everyone involved in the process.

In collaborative review, authors and reviewers interact with each other directly throughout the process. However, the identity of the reviewer is not known to the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can mitigate the need for multiple rounds of editing and minimize back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Lastly, in open review, all parties know each other’s identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • The editor then performs an initial screening and decides whether to reject the manuscript and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.
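The steps above can be sketched as a toy pipeline. This is purely illustrative: the function and decision names are invented, and no real editorial system is this simple.

```python
from enum import Enum, auto

class Decision(Enum):
    REJECT = auto()
    SEND_TO_REVIEW = auto()

def run_peer_review(manuscript, screen, reviewers):
    """Toy model of submission -> editorial screening -> review -> revision."""
    if screen(manuscript) is Decision.REJECT:
        return None, "rejected at editorial screening"
    # Each reviewer callable returns a list of requested edits
    feedback = [comment for review in reviewers for comment in review(manuscript)]
    revised = f"{manuscript} (revised: {'; '.join(feedback)})"
    return revised, "returned to author for edits and resubmission"

# Hypothetical usage with stand-in editor and reviewers
revised, status = run_peer_review(
    "Draft v1",
    screen=lambda m: Decision.SEND_TO_REVIEW,
    reviewers=[lambda m: ["clarify methods"], lambda m: ["fix citation style"]],
)
print(revised)
```

In practice each stage can loop (multiple review rounds), but the same submit-screen-review-revise shape underlies most journal workflows.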

The peer review process

In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarize the argument in your own words

Summarizing the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organized. One strategy is to start out with any major issues and then flow into the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

Tip: Try not to focus too much on the minor issues. If the manuscript has a lot of typos, consider making a note that the author should address spelling and grammar issues, rather than going through and fixing each one.

The best feedback you can provide is anything that helps them strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticized, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the “compliment sandwich,” where you “sandwich” your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

  • Easy to understand
  • Constructive

Below is a brief annotated research example of the kind a peer reviewer might comment on.

Influence of phone use on sleep

Studies show that teens from the US are getting less sleep than they were a decade ago (Johnson, 2019). On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

  • Group 1 was not allowed to use their phone before bedtime.
  • Group 2 used their phone for 1 hour before bedtime.
  • Group 3 used their phone for 3 hours before bedtime.

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime. In the morning, their Fitbit showed the number of hours they’d slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used to compare Group 1 with Group 2, and Group 1 with Group 3. The first t test showed no significant difference (p > .05) in hours slept between Group 1 (M = 7.8, SD = 0.6) and Group 2 (M = 7.0, SD = 0.8). The second t test showed a significant difference (p < .01) between Group 1 (M = 7.8, SD = 0.6) and Group 3 (M = 6.1, SD = 1.5).

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for 0 to 1 hours.
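The comparison above is a standard independent-samples t test. As a minimal sketch in plain Python, using Welch's unequal-variances formula (the numbers below are invented for illustration and are not the study's data):

```python
import math
from statistics import mean, stdev

def welch_t(sample_a, sample_b):
    """Welch's t statistic and approximate degrees of freedom
    for two independent samples with possibly unequal variances."""
    na, nb = len(sample_a), len(sample_b)
    va, vb = stdev(sample_a) ** 2, stdev(sample_b) ** 2
    se2 = va / na + vb / nb  # squared standard error of the mean difference
    t = (mean(sample_a) - mean(sample_b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Invented sleep-hours data for two small groups
group_1 = [7.9, 7.4, 8.1, 7.6, 8.0, 7.8]  # no phone before bed
group_3 = [6.0, 5.5, 6.8, 6.2, 5.9, 6.3]  # 3 hours of phone use

t, df = welch_t(group_1, group_3)
print(f"t = {t:.2f}, df = {df:.1f}")
```

A large |t| relative to the Welch degrees of freedom corresponds to a small p value; in practice you would obtain the p value directly with `scipy.stats.ttest_ind(group_1, group_3, equal_var=False)` or a t distribution table.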

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

  • Protects the quality of published research

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarized or duplicated research from being published.

  • Gives you access to feedback from experts in your field

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

  • Helps you identify any weaknesses in your argument

Peer review acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

  • Reviewer bias

The more impartial double-blind system is not yet very common, which can leave room for bias in reviewing. A common criticism is that an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

  • Delays in publication

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published. There is also high risk of publication bias , where journals are more likely to publish studies with positive findings than studies with negative findings.

  • Risk of human error

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Discourse analysis
  • Cohort study
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias
  • Social desirability bias

Peer review is a process of evaluating submissions to an academic journal. Utilizing rigorous criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication. For this reason, academic journals are often considered among the most credible sources you can use in a research project, provided that the journal itself is trustworthy and well-regarded.

In general, the peer review process follows these steps:

  • First, the author submits the manuscript to the editor.
  • The editor then performs an initial screening and decides whether to reject the manuscript and send it back to the author, or send it onward to the selected peer reviewer(s).
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

A credible source should pass the CRAAP test and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.

Cite this Scribbr article

If you want to cite this source, you can copy and paste the citation or click the “Cite this Scribbr article” button to automatically add the citation to our free Citation Generator.

George, T. (2023, June 22). What Is Peer Review? | Types & Examples. Scribbr. Retrieved August 12, 2024, from https://www.scribbr.com/methodology/peer-review/



What is open peer review? A systematic review


Background: “Open peer review” (OPR), despite being a major pillar of Open Science, has neither a standardized definition nor an agreed schema of its features and implementations. The literature reflects this, with a myriad of overlapping and often contradictory definitions. While the term is used by some to refer to peer review where the identities of both author and reviewer are disclosed to each other, for others it signifies systems where reviewer reports are published alongside articles. For others it signifies both of these conditions, and for yet others it describes systems where not only “invited experts” are able to comment. For still others, it includes a variety of combinations of these and other novel methods.

Methods: Recognising the absence of a consensus view on what open peer review is, this article undertakes a systematic review of definitions of “open peer review” or “open review”, to create a corpus of 122 definitions. These definitions are then systematically analysed to build a coherent typology of the many different innovations in peer review signified by the term, and hence provide the precise technical definition currently lacking.

Results: This quantifiable data yields rich information on the range and extent of differing definitions over time and by broad subject area. Quantifying definitions in this way allows us to accurately portray exactly how ambiguously the phrase “open peer review” has been used thus far, for the literature offers a total of 22 distinct configurations of seven traits, effectively meaning that there are 22 different definitions of OPR in the literature.

Conclusions: Based on this work, I propose a pragmatic definition of open peer review as an umbrella term for a number of overlapping ways that peer review models can be adapted in line with the ethos of Open Science, including making reviewer and author identities open, publishing review reports and enabling greater participation in the peer review process.


Competing interests: No competing interests were disclosed.

Competing interests: I am Executive Editor of The BMJ, which operates a version of open peer review, and I have previously been employed by PLOS and BioMed Central which operate different versions.

Competing interests: I am a consultant for Frontiers Media SA, an Open Access publisher with its own system of Open Peer Review.

Author information

This is an open access article distributed under the terms of the Creative Commons Attribution Licence, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


A must-read for those who are interested in open peer review (OPR), a prerequisite for scholarly communication in the post-print 21st century. It’s the first paper I have read in which the author tries to analyse and compare the manifold definitions and practices of open peer review at different (mega-)journals and platforms.


The fundamentals of open access and open research

What is open access and open research?

Open access (OA) refers to the free, immediate, online availability of research outputs such as journal articles or books, combined with the rights to use these outputs fully in the digital environment. OA content is open to all, with no access fees.

Open research goes beyond the boundaries of publications to consider all research outputs – from data to code and even open peer review. Making all outputs of research as open and accessible as possible means research can have a greater impact, and help to solve some of the world’s greatest challenges.

How can I publish my work open access?

As the author of a research article or book, you have the ability to ensure that your research can be accessed and used by the widest possible audience. Springer Nature supports immediate Gold OA as the most open, least restrictive form of OA: authors can choose to publish their research article in a fully OA journal, a hybrid or transformative journal, or as an OA book or OA chapter.

Alternatively, where articles, books or chapters are published via the subscription route, Springer Nature allows authors to archive the accepted version of their manuscript on their own personal website or their funder’s or institution’s repository, for public release after an embargo period (Green OA). Find out more.

Why should I publish OA?

  • Studies have shown that open access articles are viewed and cited more often than articles behind a paywall.
  • Open access publications and data enable researchers to carry out collaborative research on a global scale.
  • Content is available to those who can't access subscription content.
  • With Creative Commons licences, researchers are empowered to build on existing research quickly.
  • Open access journals that cross multiple disciplines help researchers connect more easily and provide greater visibility of their research.
  • Open access journals and books comply with major funding policies internationally.

What are Creative Commons licences?

Open access works published by Springer Nature are published under Creative Commons licences. These provide an industry-standard framework to support re-use of OA material. Please see Springer Nature’s guide to licensing, copyright and author rights for journal articles and books and chapters for further information.

How do I pay for open access?

As costs are involved in every stage of the publication process, authors are asked to pay an open access fee in order for their article to be published open access under a Creative Commons licence. Springer Nature offers a free open access support service to make it easier for our authors to discover and apply for funding to cover article processing charges (APCs) and/or book processing charges (BPCs). Find out more.

What is open data?

We believe that all research data, including research files and code, should be as open as possible and want to make it easier for researchers to share the data that support their publications, making them accessible and reusable. Find out more about our research data services and policies.

What is a preprint?

A preprint is a version of a scientific manuscript posted on a public server prior to formal peer review. Once posted, the preprint becomes a permanent part of the scientific record, citable with its own unique DOI. Early sharing is recommended as it offers an opportunity to receive feedback on your work, claim priority for a discovery, and help research move faster. In Review is one of the most innovative preprint services available, offering real-time updates on your manuscript's progress through peer review. Discover In Review and its benefits.

What is open peer review?

Open peer review refers to the process of making peer reviewer reports openly available. Many publishers and journals offer some form of open peer review, including BMC, which was one of the first publishers to open up peer review, in 1999. Find out more.

Blog posts on open access from "The Source"

How to publish open access with fees covered

Could you publish open access with fees covered under a Springer Nature open access agreement?

Celebrating our 2000th open access book

We are proud to celebrate the publication of our 2000th open access book. Take a look at how we achieved this milestone.

Why is Gold OA best for researchers?

Explore the advantages of Gold OA, by reading some of the highlights from our white paper "Going for Gold".

How researchers are using open data in 2022

How are researchers using open data in 2022? Read this year's State of Open Data Report, providing insights into the attitudes, motivations and challenges of researchers towards open data.

Ready to publish?

BMC

A pioneer of open access publishing, BMC is committed to innovation and offers an evolving portfolio of some 300 journals.

Discover Journals

Got a discovery you're ready to share with the world? Publish your findings quickly and with integrity, never letting good research go to waste.

Nature Research

Open research is at the heart of Nature Research. Our portfolio includes  Nature Communications ,  Scientific Reports  and many more.

Springer

Springer offers a variety of open access options for journal articles and books across a number of disciplines. 

Palgrave Macmillan

Palgrave Macmillan is committed to developing sustainable models of open access for the HSS disciplines.

Apress

Apress is dedicated to meeting the information needs of developers, IT professionals, and tech communities worldwide.

Discover more tools and resources along with our author services

  • Author services
  • Early Career Resource Center
  • Journal Suggester
  • Using Your ORCID ID
  • The Transfer Desk

Tutorials and educational resources:

  • How to Write a Manuscript
  • How to submit a journal article manuscript
  • Nature Masterclasses

© 2024 Springer Nature

Open peer review

Making it easier to trust research by increasing transparency in peer review


What is open peer review?

Open peer review is an open research practice that enables transparency and accountability in the peer review process. Typically, it refers to any peer review model that makes aspects of the peer review process publicly available before or after publication. 

An open peer review model may include any or all the following features: 

  • Open reports: Peer review reports are published for anyone to read. 
  • Open identities: Author and/or reviewer identities are shared publicly. 
  • Open interaction: Authors, editors, and reviewers engage in open discussion. 
  • Open participation: The wider research community are given a forum to comment on the research. 
  • Preprint open peer review: Manuscripts are shared via preprint servers ahead of any formal peer review procedures. 
  • Post-publication commenting: Readers and authors can respond to reviewer reports post-publication.
  • Decoupled peer review: The peer review process is facilitated by a different organizational entity than the venue of publication. 


What is the purpose of open peer review?

Open peer review aims to tackle various perceived shortcomings of the traditional scholarly peer review process, including wastefulness, poor incentives, and the lack of transparency. 

Advocates of open peer review argue that increased transparency in peer review leads to an improved understanding of published research, more constructive reviews, and well-deserved credit for reviewers. 

Unlike closed peer review, where the peer review process is hidden behind closed doors, open peer review widens access to the peer review process, particularly if both reviewer reports and names are published alongside the research.  

As such, this commitment to enhanced transparency offers members of the research community and beyond greater insight into the peer review process, including the editorial decision-making process and reviewer feedback. 

What are the different types of peer review?

Community peer review

The manuscript is fact-checked and published online, where others in the research community can add comments through a forum. Community peer review fosters scientific discussion and collaboration and allows for faster sharing of research findings.

Post-publication peer review

Following publication, expert reviewers assess the research in an open forum. The names of the reviewers are usually published along with their comments.

Decoupled peer review

The peer review process is managed by an independent service, not the venue of publication. Authors then submit their peer-reviewed work to a publication. Decoupled peer review enables authors to enhance a manuscript before submitting it to a traditional journal.

Benefits of open peer review

Open peer review recognizes the importance of reviewer feedback, the integrity of the review process, and the vital role that it plays in building trust in research.  

Enhances public engagement

Open reviewer reports and comments from the wider research community help those outside academia to contextualize research through the benefit of additional expert opinion. 

Professional development

With open peer review, the review process is transformed into an opportunity for professional development where everyone can learn from reviewer feedback.   

Higher quality reviews

Some studies have shown that open peer review leads to more constructive feedback, more comments on methods, and more substantiating evidence from reviewers to support their assessments.

Credit for reviewers

By allowing reviewers to share their names alongside reports, reviewers can further demonstrate their reputation in their field and their value to their institution. 

Inspires constructive collaboration

By widening access to the peer review process, authors, reviewers, and readers can engage in constructive dialogue on a global scale. 

Enhances accountability

When reviewer reports and identities are shared openly, it reduces the potential for bias or conflict of interest, thereby upholding the integrity of the review process. 

Challenges of open peer review

Concerns over honest feedback

It's a common concern among researchers that critical reviewers will suffer career consequences. This is especially troubling for early career researchers who depend on senior researchers for opportunities and advancement. Yet open peer review also presents opportunities for improved peer review practices: by making the process fully open, reviewers are held accountable for their feedback, and any bias or unprofessional behavior becomes more visible and therefore harder to engage in.

Misconceptions around rigor

Another concern among researchers is the view that open peer review is less rigorous than closed peer review. However, open peer review reports are often more valuable for researchers when compared to closed peer review reports, perhaps because reviewers know anyone can see their feedback. In fact, due to the increased visibility and transparency, authors often receive a more considered, rigorous review, with constructive suggestions for improvements to increase the quality of their research. 

Authors not understanding its value

Researchers can also be hesitant to publish in open peer review publications because they don’t understand its value to authors. Yet, authors gain a lot from open peer review because they can read the full reviewer report and reply to the reviewer with comments when they submit a revision. Open peer review also helps authors to better understand the reviewer’s point of view, which can be challenging in closed models of peer review where reports are not always shared fully with authors.    

What can be peer reviewed openly?

Many forms of research can be peer reviewed transparently, including traditional Research Articles.

Here are just some of the article types that can be peer reviewed openly on an F1000 publishing venue. 


Becoming a peer reviewer

There’s no single route to becoming a peer reviewer, but if you would like to get involved in peer reviewing, then the below practical steps will help you get started:

Accept invitations for relevant papers or express interest. Some publications have registration forms you can submit to volunteer as a peer reviewer, although the author may have the final say on who reviews their paper.

Assess the manuscript in good time. If you have been invited to peer review, you will be expected to evaluate a manuscript in a fair, timely, and robust manner. Before you agree, ensure you can perform the review within the time frame and state any conflicts of interest. 

Write the peer review report and sign your name. When drafting your peer review report, give yourself enough time to write an organized review that includes constructive and fair feedback, and specific examples where possible. The critique should be useful and help the authors improve their manuscript. 

Co-reviewing and open peer review

Efforts to include early career researchers (ECRs) in the peer review process are on the rise. Many funders and other organizations now offer training and resources for ECRs looking to become peer reviewers.  

There are many benefits to involving junior researchers in the peer review process, including expanding the pool of well-trained reviewers and encouraging greater diversity in peer review. ECRs are considerably more diverse than senior researchers in terms of gender and ethnicity, so their inclusion can help diversify research. Yet, their involvement in peer review is often overlooked.   

With open peer review, ECRs receive the recognition they deserve for the work they do, as they can openly share their names and reviewer reports alongside manuscripts that have been published open access.

How common is open peer review?

Adoption of open peer review has been growing since the turn of the century, and it is now widely embraced: many publishers already implement some form of open peer review. Yet the level of openness often differs in terms of what is revealed, to whom, and when.

This shift towards open peer review shows no sign of slowing down, with even more publishers looking to incorporate open peer review into their publishing models. In 2018, editors and publishers from over 100 publications signed an open letter to acknowledge that they have started, or plan to start, publishing peer review reports openly. 

Expand your open research knowledge


When did peer review start: the origins and evolution of peer review through time

Peer review is not just quality control, it is the backbone of modern scientific and academic publishing, ensuring the validity and credibility of research articles. While it may seem like…


How to respond to peer reviewers comments: top tips on addressing reviewer feedback

The peer review process is a fundamental component of scholarly publishing, ensuring the quality and credibility of academic research. After submitting your manuscript to a publishing venue, it undergoes rigorous…


How to write a peer review report: tips and tricks for constructive reviews

Peer review is an integral part of scholarly communication and academic publishing. A key player in this process is the peer reviewer, who is typically a recognized expert in the…


Published: March 1999

Pros and cons of open peer review

Nature Neuroscience volume 2, pages 197–198 (1999)


Anonymous peer review, despite the criticisms often leveled against it, is used in more or less the same form by the great majority of scientific journals. The British Medical Journal (BMJ), however, has recently taken the bold step of abolishing referee anonymity, and now requires all referees to identify themselves to the authors. The editor, Richard Smith, justifies this move primarily on ethical grounds, arguing [1] that "a court with an unidentified judge makes us think immediately of totalitarian states and the world of Franz Kafka". Many other journals, including Nature Neuroscience, will await the results of this experiment with interest. Yet, whatever the results, there are a number of reasons to think that open review may not be the best solution for all journals.

Few would deny that peer review, as currently practiced, has its drawbacks. There have been a number of studies on the effectiveness of peer review, mainly in the clinical literature (see http://www.wame.org for details and references), and some have found evidence of systematic biases among referees; one study, for instance, reported that US referees were more positive than non-US referees toward papers from US authors [2], and another found evidence of bias against female applicants in grant review [3]. Even if similar biases have not been demonstrated in basic science journals, it would seem complacent to deny the possibility that they might exist. Moreover, it is understandable that some authors are uncomfortable with a system in which their identities are known to the referees while the latter remain anonymous. Authors may feel themselves defenseless against what they see as the arbitrary behavior of referees who cannot be held accountable by the authors for unfair comments.

We believe, however, that some of the arguments against anonymous review are misplaced, at least as they pertain to scientific journals. Although the review process is often compared with a court trial, the analogy is inappropriate. In contrast to the law, journals are part of a pluralistic system in which authors themselves choose the standards by which they wish to be judged. This is not to deny that publication in prestigious journals is important for career advancement, but the power wielded by even the most influential journals is nowhere near absolute. The ultimate source of a journal's influence lies with the credibility of its editorial process, and its prestige derives largely from the quality of the papers it accepts for publication. In this sense, journals have only as much power as the scientific community chooses to grant them.

The primary role of the review process is, or should be, to help the editors decide which papers to publish. Filtering information is an important function of any journal, but this is particularly true for a journal such as Nature Neuroscience that aspires to attract a broad readership to its papers. Therefore, we look to our referees not only to identify technical flaws, but also to advise us about a paper's novelty, significance and likely interest to our readers. Referees should bear in mind that we receive many times more papers than we can publish, and that for every paper that is accepted, another—invisible to them—must be rejected to make space for it. We also ask referees to advise us whether a paper that is not yet acceptable is nevertheless potentially important, and if so, how it could be improved. In some cases, this may be simply a matter of rewriting the paper to make it clearer; in others it may mean requesting many additional experiments from the authors. It is also widely felt that the review process should help the authors of rejected papers to revise the paper for resubmission elsewhere. However, although improving papers is undoubtedly an additional benefit of the review system, we do not consider this to be its primary purpose.

Given this background, what are the arguments for opening up the review process so that authors know their referees' identities? Advocates of open review argue that openness will force referees to think more carefully about the scientific issues and to write more thoughtful reviews; it may also help to expose possible conflicts of interest in some cases. A few referees always sign their reviews as a matter of principle, and many more do so in specific cases, for instance when they wish to discuss the results directly with the authors, when they feel obliged to disclose a possible conflict, or when they believe that their identities will be obvious in any case. These are all legitimate reasons for openness, but the published literature gives little support to the idea that a general policy of disclosure improves the overall quality of reviews, as the BMJ acknowledges in its editorial [1]. The main argument for more openness is ethical, that it is fundamentally unfair for authors to be exposed to the judgment of someone acting behind the screen of anonymity.

Yet it is important to remember that decisions are not made by referees, but rather by editors. The editors take responsibility for their decisions, and they are accountable for the quality of the advice on which those decisions are based. Most of the abuses that an open review system is intended to prevent—hostile comments, unsubstantiated criticisms, excessive delay of competitors' manuscripts—can also be prevented by a careful editor.

We believe that there are several strong arguments against open review. For one thing, it may lead to serious problems in finding appropriate referees. The BMJ claims that, since it opened up its peer-review process, only a small percentage (about 2%) of referees have refused to review because of the change in editorial policy. However, an informal poll of some of our referees suggests that this might not be the case for basic science journals. Many of the people we contacted said they would refuse to review certain papers if their names were revealed. It might be especially difficult to find referees for authors who hold positions of power and influence, or for those who are considered quarrelsome or vindictive by their peers. In particular, younger, less-established scientists (who are reported [4] to be among the best reviewers) would be reluctant to reveal themselves, for fear of retaliation from their more powerful colleagues. Even if they did review papers, it might be hard for them to be fully honest, knowing that the person they are reviewing may be evaluating their grants and recommending them for tenure. Anonymous review remains an important corrective for such unequal power relationships.

The opportunities for nepotism will also be increased by an open review system. It seems to be widely believed that more prominent authors receive preferential treatment in the review process, and although editors can try to minimize this tendency it may be impossible to eliminate altogether. It is very common, for instance, to receive reports that begin along the lines, "This is an excellent study from one of the leading groups in the field...", and although a case can be made that the authors' previous track record is relevant when judging their latest work, the editor must not allow this to become a dominant factor. In an open system, however, it seems almost inevitable that the opportunities to reciprocate favors over time will lead to referees placing more rather than less weight on an author's identity.

The biggest problem with open review, at least from an editor's point of view, is that it is likely to lead to more bland, even timid, reviews. Referees will be more likely to restrict their comments to technical concerns that are easily defended, rather than advising on necessarily more subjective issues such as conceptual novelty and general interest. Several referees commented that a policy of forced openness would cause reviews to resemble letters of recommendation, which are often so inflated as to be useless. The only way for the editor to get an honest opinion might then be to call the referee and get comments 'off the record', defeating the point of peer review altogether. Peter Strick, editor-in-chief of the Journal of Neurophysiology, notes that when the journal tried encouraging voluntary open review a decade ago, the editors quickly realized that this system created more problems than it solved, including bland and cautious reviews. The journal also experienced an occasional breakdown of the peer-review process, in which authors and referees bypassed the editors completely in negotiating how a paper should be revised.

If complete openness is not the answer, how can the review process be improved? One solution that is occasionally proposed is the opposite—a completely closed system in which referees (and perhaps even editors) are blind to the identities of the authors. However, this seems most unlikely to work; self-identifying clues are often an essential part of a manuscript, and if the referees have enough knowledge of the authors to hold any prejudice (whether positive or negative), they are also likely to be able to guess the authors' identities. We believe that the present framework—authors identified to referees who remain anonymous—is the only workable one, and that any efforts to improve the system should focus on how anonymous referees can be helped to do a better job.

Reviewing well requires a substantial commitment of time and energy, and it is generally admitted that being a good referee does not lead to any tangible rewards with respect to career advancement. Why are people willing to expend so much unrecognized effort? For some, it is as simple as civic duty and a feeling that they owe their colleagues the same type of treatment that they would wish for their own manuscripts. Some are motivated by loyalty to the journal or to a particular editor. Many referees enjoy having early access to new and interesting papers in their field. This is of course a slippery slope, given that access to privileged information can easily lead to its abuse. Other motives are more obviously problematic, for instance using refereeing as a way of blocking the dissemination of ideas opposed to one's own, or currying favor with editors who will be making decisions about one's own papers in the future. Ultimately, the peer review system will stand or fall on the availability of good citizens whose judgment and ethics can be trusted. However, like many other tasks critical for the success of science, reviewing skills are acquired haphazardly, usually from a limited number of mentors whose own approach may or may not be ideal.

We are exploring ways to improve our peer review system. Soon we plan to include on our web site a new Guide to Referees, which we hope will be helpful to referees, authors and readers alike in explaining how we reach our editorial decisions and what types of advice we find most useful. Over the next few months, we also plan to ask some of our authors—both rejected and accepted—to rate the performance of their referees. We hope this information will be useful for several purposes: it will help us in selecting referees, it will allow us to provide feedback for any interested referees on the perceived quality of their reports, and it will also allow us to recognize and thank those referees whose reports are felt to be the most useful. We welcome suggestions ([email protected]) as to how we can improve our review process, and best represent the interests of authors, referees, and readers.

References

1. Smith, R. BMJ 318, 4–5 (1999).
2. Link, A. M. JAMA 280, 246–247 (1998).
3. Wenneras, C. & Wold, A. Nature 387, 341–343 (1997).
4. Evans, A. T., McNutt, R. A., Fletcher, S. W. & Fletcher, R. H. J. Gen. Intern. Med. 8, 422–428 (1993).

Cite this article: Pros and cons of open peer review. Nat Neurosci 2, 197–198 (1999). https://doi.org/10.1038/6295


Benefits and Drawbacks of Open Peer Review and Its Impact on Research Quality

By Charlesworth Author Services

05 September, 2023

The peer review process has provided a trusted framework for evaluating scholarly work for decades. Traditionally, reviewers' names are kept secret so that they feel secure enough to give authors honest feedback. While this method has its merits, it also raises questions about potential biases and the authenticity of critiques. In contrast, Open Peer Review (OPR) is gaining prominence as a transformative alternative, addressing these concerns through transparency and collaboration.

Open Peer Review and Its Rising Prominence in Scholarly Publishing

Open Peer Review (OPR) is a modern approach to the traditional peer review process, aimed at enhancing transparency and accountability within scholarly publishing. In OPR, authors' and reviewers' identities are disclosed during evaluation. This change from anonymous reviews is gaining popularity because it can enhance feedback quality, promote collaboration, and create an open environment.

The concept of OPR fits the principles of Open Science, promoting transparent and collaborative research. OPR aims to reduce bias and bolster peer review's reliability by involving wider stakeholders. This change aligns with the evolving scholarly communication landscape, where researchers are increasingly emphasizing the importance of sharing not only their research findings but also the process that led to those findings.

Benefits of Open Peer Review

OPR not only reveals the identities of authors and reviewers but also extends to the practice of publishing review reports. This innovative method holds a range of benefits that are transforming the landscape of scholarly communication:

1. Transparency and Accountability: OPR creates a transparent environment where reviewers' identities are disclosed. This transparency in the peer review process ensures that reviewers are accountable for their comments and critiques, reducing the potential for biased or unfair assessments. Authors can interact with reviewers, fostering productive discussions and effective revisions.

2. Enhanced Constructive Feedback: In the OPR model, reviewers are motivated to offer detailed insights due to the visibility of their contributions. Authors gain valuable feedback, improving the quality of their work. This iterative feedback loop facilitates impactful revisions, improving research outputs.

3. Strengthened Credibility and Trustworthiness: Transparency is key to credibility. OPR strengthens peer review's credibility by revealing the thorough evaluation process. Review reports give readers insights, aiding assessment and fostering trust in academia.

4. Recognition and Credit for Reviewers: Traditional anonymous peer review overlooks reviewers' contributions. OPR not only encourages experts to engage actively in the review process but also allows them to receive due credit for their valuable contributions. Reviewers can enhance their reputation and gain acknowledgment within academia.

5. Evolving Peer Review for the Digital Age: As research communication evolves in the digital age, OPR aligns with the principles of Open Science, complementing the drive for open access, data sharing, and collaboration.

Drawbacks and Concerns of Open Peer Review 

While OPR offers a promising alternative to traditional anonymous peer review, it's not without its challenges and concerns. Let’s delve into some of the drawbacks and potential issues associated with this approach: 

1. Fear of Retaliation or Judgment: Reviewers may avoid frank feedback, fearing harm to relationships or backlash. Early career researchers might hesitate to engage in open critique with senior colleagues due to power dynamics, potentially stifling meaningful debate and exchange of ideas.

2. Delays in the Peer Review Process: Increased interactions between authors and reviewers might extend the review process, affecting the timely dissemination of research findings. 

3. Uneven Participation and Potential Bias: OPR’s adoption varies by discipline and region. This imbalance may introduce biases, underrepresenting certain fields or regions. Controversial research might face intense scrutiny, potentially overlooking groundbreaking work.

4. Burden on Reviewers: OPR’s focus on detailed, constructive feedback might overwhelm reviewers. This could discourage participation, reducing available experts.

5. Potential for Conflicts of Interest: In OPR, it might be challenging to manage conflicts of interest effectively. The disclosure of identities could increase the risk of personal or professional conflicts impacting the review process.

Impact of Open Peer Review on Research Quality

OPR shapes both peer review and research quality. Its impact includes: 

• Unearthing Errors and Flaws through Transparency: Transparency is a powerful tool for enhancing research quality. OPR exposes research to a wider pool of experts, making mistakes and flaws more likely to be caught, and the collective scrutiny of reviewers and the wider scientific community leads to more accurate published findings.

• Collaboration Enhancing Comprehensive Evaluations: OPR fosters collaboration among reviewers, whose discussions produce more comprehensive evaluations and more thorough feedback on the research.

• Iterative Feedback Refines Research Outputs: OPR’s iterative feedback loop empowers authors to improve the quality of their work. Authors receive not just comments for revision but also engage in meaningful discussions about their work.
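To make the post-publication, open-identity workflow concrete, here is a minimal sketch of how such a model fits together: the article is published first, named reviewer reports are attached afterwards alongside it, and open community comments accumulate. All class and field names here are hypothetical illustrations, not any platform's actual schema:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ReviewReport:
    reviewer_name: str    # open identity: the reviewer is named, not anonymous
    recommendation: str   # e.g. "approved" or "approved-with-reservations"
    report_text: str      # published alongside the article

@dataclass
class Article:
    title: str
    authors: list[str]
    published: date                                    # published BEFORE formal review
    reviews: list[ReviewReport] = field(default_factory=list)
    comments: list[str] = field(default_factory=list)  # open community commenting

    def review_status(self) -> str:
        # Post-publication model: the article is already visible while it
        # awaits invited reviewers' reports.
        if not self.reviews:
            return "awaiting peer review"
        if all(r.recommendation == "approved" for r in self.reviews):
            return "approved"
        return "under revision"
```

The key difference from a pre-publication model is visible in `review_status`: an article with no reports yet is already public, merely labelled as awaiting review rather than hidden from readers.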

Although OPR is not as established as traditional review methods, its adoption is steadily increasing. Notably, several prominent journals such as Royal Society Open Science, Nature Communications, EMBO, eLife, and the PLOS journals are actively embracing various forms of OPR. As these influential platforms incorporate OPR, they pave the way for a new era of scholarly discourse, where the open exchange of ideas and accountability redefine the landscape of peer review.


Keene State College


Searching Databases & Finding Peer-Reviewed Articles


Definition: Peer-Reviewed

A peer-reviewed article is from a publication that has been through the peer-review process. This process subjects an author's scholarly work, research, or ideas to the scrutiny of others who are experts in the same field (peers). Peer review is considered necessary to ensure academic quality in some fields.

The Center for Research and Writing Research Desk can help you find peer-reviewed publications on the library's databases.

You will sometimes find peer-reviewed articles when you search on Google. Publishers of peer-reviewed articles often charge lots of money for full-text access, so you may only be allowed to read the abstract (a summary) of the article for free. You never have to pay for the full-text! The library can provide you access to the full-text for free through our databases or through inter-library loan.

  • Worksheet: Introduction to Searching Databases
  • Guide: Getting Started with Research


  • Sort by subject to select the best Library and Open Source databases for your subject.

Find Peer-Reviewed Articles through the library's Discovery tool - the "Google" for the Library. Discovery searches for a little bit of everything - Articles, Books, EBooks, Movies, etc.

Search Discovery on the Library's Home page.

https://scholar.google.com/

Searches for articles, theses, books, abstracts and court opinions, from academic publishers, professional societies, online repositories, universities and other web sites. Full-text not always available for free.

  • Connect Mason Library to your Google Scholar account to get FREE access to materials that you would otherwise have to pay for - see this handout (You may have to use Inter-library loan ).
  • Search Tips for Google Scholar https://scholar.google.com/intl/en/scholar/help.html

If you have a citation:

  • Check the Journal Search

Or you may have to use Inter-Library Loan

  • How to use Interlibrary Loan

Off Campus Access to Library Databases

See the Getting Started with Research Guide for more help.

More Videos on Searching:

  • Using Search Results
  • Using Subject Terms part 1
  • Using Subject Terms part 2
  • Create an EBSCO Account and Use Custom Folders
  • Last Updated: May 9, 2024 10:05 AM
  • URL: https://library.keene.edu/searching-peer-review



It Takes a Village! Editorship, Advocacy, and Research in Running an Open Access Data Journal


Article outline:

1. Introduction
2. A Place for Data Sharing and Data Journals
3. Intertwining Editorship, Advocacy, and Research
3.1. Outreach and Social Media Presence
3.2. Special Collections
3.3. Scientometric Research
3.4. Grant Activities
4. Sharing for the Future
Author Contributions, Acknowledgments, Conflicts of Interest



Share and Cite

Wigdorowitz, M.; Ribary, M.; Farina, A.; Lima, E.; Borkowski, D.; Marongiu, P.; Sorensen, A.H.; Timis, C.; McGillivray, B. It Takes a Village! Editorship, Advocacy, and Research in Running an Open Access Data Journal. Publications 2024, 12, 24. https://doi.org/10.3390/publications12030024


Peer review will only do its job if referees are named and rated

We need a mechanism whereby academics can build a public reputation as referees and receive career benefits for doing so, says Randy Robertson.


Last year, a splashy headline in USA Today caught my attention: “Penis length has grown 24 per cent in recent decades. That may not be good news.” Science journalism being what it is, the article links not to the meta-analysis that drew these conclusions but to Stanford Medicine’s advertisement of it. Still, the original article, “Worldwide Temporal Trends in Penile Length: A Systematic Review and Meta-Analysis” , does indeed contend that “erect penile length increased 24 per cent over the past 29 years”.

Hmm. If you’re sceptical, so was I – and, sure enough, looking over the meta-analysis and checking the original studies, I found a few problems. First, while the authors claim to have included only studies in which investigators did the measurements, at least three of the largest they draw on were based on self-report – which, for obvious reasons, often proves unreliable. Second, there was no consistent method of measurement, with most studies not even noting the method used, rendering comparisons impossible. Finally, the authors inflated the total number of members measured. 

In case you’re wondering, I’m not a part of the Data Colada sleuthing team . I’m an English professor at a liberal arts college.

I sent my concerns to the corresponding author and then to the journal’s editor. The rhetoric of their response was fine: the authors acknowledged the problems and even thanked me for pointing them out, which must have been hard. Nonetheless, though they vowed to revise the article, neither they nor the journal editor has yet published a correction eight months on.


What distinguishes this case from the raft of flawed studies that critics have exposed in recent years is that this study is a meta-analysis, the supposed gold standard in science. If meta-analyses, which are designed to weed out poorly conducted experiments, are themselves riddled with rudimentary mistakes, science is in deeper trouble than we thought.

The humanities, naturally, are even worse. Historians and literary scholars wrest quotes from context with abandon and impunity. Paraphrase frequently proves inaccurate. Textual evidence is cherry-picked and quoted passages are amputated at the most convenient joint.

One lesson to draw, of course, is caveat lector : readers should be vigilant, taking nothing on faith. But if we all need to exercise rigorous peer review every time we read a scholarly journal, then the original peer review process becomes redundant. The least that reviewers should do is to check that authors are using their sources appropriately. If an English professor could see the penis paper’s grave errors, how on earth did the peer reviewers not see them?

Some suggest abandoning pre-publication review in favour of open post-publication “curation” by the online crowd. But this seems a step too far, even in a digital environment, likely leaving us awash in AI-generated pseudo-scholarship.

Better to re-establish a reliable filter before publication. Good refereeing does not mean skimming a manuscript so you can get on with your own work. Neither does it mean rejecting a submission because you don’t like the result. It means embracing the role of mentor, checking the work carefully and providing copious suggestions for revision, both generous and critical. In essence, it is a form of teaching.


The problem is that it is little regarded on the tenure track. Conducting rigorous peer review is unglamorous and unheralded labour; one earns many more points for banging out articles with eye-popping titles, even though a healthy vetting process is necessary for individual achievement to be meaningful.

We need to raise the stakes for reviewers by insisting on publishing their names and, ideally, their reports, too, as some journals are already doing. Anonymous referees get no recognition for their labours, but, contrariwise, their reputations remain untarnished when they approve shabby work. Neither encourages careful review. Anonymity should be available exceptionally, for reviewers worried about being harassed by third parties when the topic is especially contentious and for junior scholars concerned about retaliation from seniors.

Optimistically, two natural consequences of public reviewing would be thoroughness and civility. What’s more, peer reviewers would enter into a reputation economy that drew on the power of the networked public sphere. Journals should offer space for readers to comment on published work, including on the published referee reports, helping to sort strong referees from weak ones.

Editors would also have at their disposal a wide swathe of signed referee reports from across their field on which to draw when deciding whom to task with vetting new submissions. As it stands, aside from the habit of tapping personal and professional acquaintances, editors tend to rely on scholarly reputation, handing a few “star” academics disproportionate control over what is published – even though such figures are not necessarily good editors of others’ work, any more than they are necessarily good teachers. Generating and critiquing scholarship require different skill sets.

Editors should not extend invitations to peer reviewers who have repeatedly overlooked flagrant mistakes, as determined by post-publication review. On the positive side, high-quality reviews should count as scholarship, not just service to the profession, as they form an integral part of scholarly production. And if book reviews merit a distinct CV section, so do peer reviews.
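One way the reputation economy proposed above could be operationalised: aggregate reader ratings of published, signed referee reports into per-reviewer scores that editors could consult when choosing whom to invite. This is a minimal illustrative sketch, assuming a simple 1–5 rating scale; the function name and weighting are invented, not any journal's actual system:

```python
from collections import defaultdict
from statistics import mean

def reviewer_scores(ratings):
    """Aggregate reader feedback on signed referee reports.

    ratings: list of (reviewer_name, rating) pairs, rating on a 1-5 scale,
    collected via post-publication comments on published reports.
    Returns {reviewer: (mean_rating, report_count)} so an editor can weigh
    both the quality and the length of a referee's public track record.
    """
    by_reviewer = defaultdict(list)
    for name, rating in ratings:
        by_reviewer[name].append(rating)
    return {name: (round(mean(rs), 2), len(rs)) for name, rs in by_reviewer.items()}
```

Reporting the count alongside the mean matters: a referee with one glowing rating should not automatically outrank one with a long, consistently solid record.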

No doubt plenty of scholars continue to offer valuable peer review, but plenty do not. And it is clear that, in this case, too, it will take more than self-reporting to identify who genuinely falls into which category.

Randy Robertson is associate professor in the department of English and creative writing at Susquehanna University, Pennsylvania.


EJIFCC, v.25(3); 2014 Oct

Peer Review in Scientific Publications: Benefits, Critiques, & A Survival Guide

Jacalyn Kelly

1 Clinical Biochemistry, Department of Pediatric Laboratory Medicine, The Hospital for Sick Children, University of Toronto, Toronto, Ontario, Canada

Tara Sadeghieh

Khosrow Adeli

2 Department of Laboratory Medicine and Pathobiology, University of Toronto, Toronto, Canada

3 Chair, Communications and Publications Division (CPD), International Federation of Clinical Chemistry and Laboratory Medicine (IFCC), Milan, Italy

The authors declare no conflicts of interest regarding publication of this article.

Peer review has been defined as a process of subjecting an author’s scholarly work, research or ideas to the scrutiny of others who are experts in the same field. It functions to encourage authors to meet the accepted high standards of their discipline and to control the dissemination of research data to ensure that unwarranted claims, unacceptable interpretations or personal views are not published without prior expert review. Despite its widespread use by most journals, the peer review process has also been widely criticized, both for the slowness with which it allows new findings to be published and for perceived bias on the part of editors and/or reviewers. Within the scientific community, peer review has become an essential component of the academic writing process. It helps ensure that papers published in scientific journals answer meaningful research questions and draw accurate conclusions based on professionally executed experimentation. Submission of low-quality manuscripts has become increasingly prevalent, and peer review acts as a filter to prevent this work from reaching the scientific community. The major advantage of a peer review process is that peer-reviewed articles provide a trusted form of scientific communication. Since scientific knowledge is cumulative and builds on itself, this trust is particularly important. Despite the positive impacts of peer review, critics argue that the peer review process stifles innovation in experimentation and acts as a poor screen against plagiarism. Despite its drawbacks, no foolproof system has yet been developed to take the place of peer review; however, researchers have been looking into electronic means of improving the process. Unfortunately, the recent explosion in online-only/electronic journals has led to mass publication of a large number of scientific articles with little or no peer review. This poses significant risk to advances in scientific knowledge and its future potential.
The current article summarizes the peer review process, highlights the pros and cons associated with different types of peer review, and describes new methods for improving peer review.

WHAT IS PEER REVIEW AND WHAT IS ITS PURPOSE?

Peer Review is defined as “a process of subjecting an author’s scholarly work, research or ideas to the scrutiny of others who are experts in the same field” (1). Peer review is intended to serve two primary purposes. Firstly, it acts as a filter to ensure that only high quality research is published, especially in reputable journals, by determining the validity, significance and originality of the study. Secondly, peer review is intended to improve the quality of manuscripts that are deemed suitable for publication. Peer reviewers provide suggestions to authors on how to improve the quality of their manuscripts, and also identify any errors that need correcting before publication.

HISTORY OF PEER REVIEW

The concept of peer review was developed long before the scholarly journal. In fact, the peer review process is thought to have been used as a method of evaluating written work since ancient Greece ( 2 ). The peer review process was first described by a physician named Ishaq bin Ali al-Rahwi of Syria, who lived from 854-931 CE, in his book Ethics of the Physician ( 2 ). There, he stated that physicians must take notes describing the state of their patients’ medical conditions upon each visit. Following treatment, the notes were scrutinized by a local medical council to determine whether the physician had met the required standards of medical care. If the medical council deemed that the appropriate standards had not been met, the physician in question could face a lawsuit from the maltreated patient ( 2 ).

The invention of the printing press in 1453 allowed written documents to be distributed to the general public ( 3 ). At this time, it became more important to regulate the quality of the written material that became publicly available, and editing by peers increased in prevalence. In 1620, Francis Bacon wrote the work Novum Organum, where he described what eventually became known as the first universal method for generating and assessing new science ( 3 ). His work was instrumental in shaping the Scientific Method ( 3 ). In 1665, the French Journal des sçavans and the English Philosophical Transactions of the Royal Society were the first scientific journals to systematically publish research results ( 4 ). Philosophical Transactions of the Royal Society is thought to be the first journal to formalize the peer review process in 1665 ( 5 ), however, it is important to note that peer review was initially introduced to help editors decide which manuscripts to publish in their journals, and at that time it did not serve to ensure the validity of the research ( 6 ). It did not take long for the peer review process to evolve, and shortly thereafter papers were distributed to reviewers with the intent of authenticating the integrity of the research study before publication. The Royal Society of Edinburgh adhered to the following peer review process, published in their Medical Essays and Observations in 1731: “Memoirs sent by correspondence are distributed according to the subject matter to those members who are most versed in these matters. The report of their identity is not known to the author.” ( 7 ). The Royal Society of London adopted this review procedure in 1752 and developed the “Committee on Papers” to review manuscripts before they were published in Philosophical Transactions ( 6 ).

In its systematized and institutionalized form, peer review has developed immensely since the Second World War, at least partly due to the large increase in scientific research during this period ( 7 ). It is now used not only to ensure that a scientific manuscript is experimentally and ethically sound, but also to determine which papers sufficiently meet the journal’s standards of quality and originality before publication. Peer review is now standard practice at most credible scientific journals, and is an essential part of determining the credibility and quality of work submitted.

IMPACT OF THE PEER REVIEW PROCESS

Peer review has become the foundation of the scholarly publication system because it effectively subjects an author’s work to the scrutiny of other experts in the field. Thus, it encourages authors to strive to produce high quality research that will advance the field. Peer review also supports and maintains integrity and authenticity in the advancement of science. A scientific hypothesis or statement is generally not accepted by the academic community unless it has been published in a peer-reviewed journal ( 8 ). The Institute for Scientific Information ( ISI ) only considers journals that are peer-reviewed as candidates to receive Impact Factors. Peer review is a well-established process which has been a formal part of scientific communication for over 300 years.

OVERVIEW OF THE PEER REVIEW PROCESS

The peer review process begins when a scientist completes a research study and writes a manuscript that describes the purpose, experimental design, results, and conclusions of the study. The scientist then submits this paper to a suitable journal that specializes in a relevant research field. The editors of the journal will review the paper to ensure that the subject matter is in line with that of the journal, and that it fits with the editorial platform; papers that do not meet these criteria are rejected without external review. If the journal editors feel the paper sufficiently meets these requirements and is written by a credible source, they will send the paper to accomplished researchers in the field for a formal peer review. Peer reviewers are also known as referees (this process is summarized in Figure 1 ). The role of the editor is to select the most appropriate manuscripts for the journal, and to implement and monitor the peer review process. Editors must ensure that peer reviews are conducted fairly, and in an effective and timely manner. They must also ensure that there are no conflicts of interest involved in the peer review process.

Figure 1: Overview of the review process

When a reviewer is provided with a paper, he or she reads it carefully and scrutinizes it to evaluate the validity of the science, the quality of the experimental design, and the appropriateness of the methods used. The reviewer also assesses the significance of the research, and judges whether the work will contribute to advancement in the field by evaluating the importance of the findings and determining the originality of the research. Additionally, reviewers identify any scientific errors and any references that are missing or incorrect. Peer reviewers give recommendations to the editor regarding whether the paper should be accepted, rejected, or improved before publication in the journal. The editor will mediate the author-referee discussion in order to clarify the priority of certain referee requests, suggest areas that can be strengthened, and overrule reviewer recommendations that are beyond the study’s scope ( 9 ). If the paper is accepted on the reviewers’ recommendation, it moves into the production stage, where it is edited and formatted before final publication in the scientific journal. An overview of the review process is presented in Figure 1 .

WHO CONDUCTS REVIEWS?

Peer reviews are conducted by scientific experts with specialized knowledge of the content of the manuscript, as well as by scientists with a more general knowledge base. Peer reviewers can be anyone who has competence and expertise in the subject areas that the journal covers. Reviewers range from early-career researchers to established leaders in the field. Often, the younger reviewers are the most responsive and deliver the best-quality reviews, though this is not always the case. On average, a reviewer will conduct approximately eight reviews per year, according to a study on peer review by the Publishing Research Consortium (PRC) ( 7 ). Journals will often have a pool of reviewers with diverse backgrounds to allow for many different perspectives. They will also keep a rather large reviewer bank, so that individual reviewers do not become burnt out, overwhelmed, or time-constrained by reviewing multiple articles simultaneously.

WHY DO REVIEWERS REVIEW?

Referees are typically not paid to conduct peer reviews, and the process takes considerable effort, so the question arises as to what incentive referees have to review at all. Some feel an academic duty to perform reviews, reasoning that if their peers are expected to review their papers, then they should review the work of their peers as well. Reviewers may also have personal contacts with editors, and may want to assist as much as possible. Others review to keep up to date with the latest developments in their field, as reading new scientific papers is an effective way to do so. Some scientists use peer review as an opportunity to advance their own research, since it stimulates new ideas and allows them to read about new experimental techniques. Other reviewers are keen on building associations with prestigious journals and editors and becoming part of their community, as reviewers who show dedication to a journal are sometimes later hired as editors. Some scientists see peer review as a chance to become aware of the latest research before their peers, and thus be the first to develop new insights from the material. Finally, in terms of career development, peer reviewing can be desirable as it is often noted on one’s resume or CV. Many institutions consider a researcher’s involvement in peer review when assessing their performance for promotions ( 11 ). Peer reviewing can also be an effective way for a scientist to show their superiors that they are committed to their scientific field ( 5 ).

ARE REVIEWERS KEEN TO REVIEW?

A 2009 international survey of 4,000 peer reviewers, conducted by the charity Sense About Science and presented at the British Science Festival at the University of Surrey, found that 90% of reviewers were keen to peer review ( 12 ). One third of respondents said they were happy to review up to five papers per year, and a further one third were happy to review up to ten.

HOW LONG DOES IT TAKE TO REVIEW ONE PAPER?

On average, it takes approximately six hours to review one paper ( 12 ); however, this number may vary greatly depending on the content of the paper and the nature of the peer reviewer. One in every 100 participants in the Sense About Science survey claimed to have taken more than 100 hours to review their last paper ( 12 ).

HOW TO DETERMINE IF A JOURNAL IS PEER REVIEWED

Ulrichsweb is a directory that provides information on over 300,000 periodicals, including information regarding which journals are peer reviewed ( 13 ). After logging into the system using an institutional login (e.g., from the University of Toronto), search terms, journal titles or ISSN numbers can be entered into the search bar. The database provides the title, publisher, and country of origin of the journal, and indicates whether the journal is still actively publishing. A black book symbol (labelled ‘refereed’) indicates that the journal is peer reviewed.

THE EVALUATION CRITERIA FOR PEER REVIEW OF SCIENTIFIC PAPERS

As previously mentioned, when a reviewer receives a scientific manuscript, he/she will first determine if the subject matter is well suited for the content of the journal. The reviewer will then consider whether the research question is important and original, a process which may be aided by a literature scan of review articles.

Scientific papers submitted for peer review usually follow a specific structure that begins with the title, followed by the abstract, introduction, methodology, results, discussion, conclusions, and references. The title must be descriptive and include the concept and organism investigated, and potentially the variable manipulated and the systems used in the study. The peer reviewer evaluates whether the title is sufficiently descriptive, clear, and concise. A study by the National Association of Realtors (NAR) published by the Oxford University Press in 2006 indicated that the title of a manuscript plays a significant role in determining reader interest, as 72% of respondents said they could usually judge whether an article would be of interest to them based on the title and the author, while 13% of respondents claimed to always be able to do so ( 14 ).

The abstract is a summary of the paper, which briefly mentions the background or purpose, methods, key results, and major conclusions of the study. The peer reviewer assesses whether the abstract is sufficiently informative and if the content of the abstract is consistent with the rest of the paper. The NAR study indicated that 40% of respondents could determine whether an article would be of interest to them based on the abstract alone 60-80% of the time, while 32% could judge an article based on the abstract 80-100% of the time ( 14 ). This demonstrates that the abstract alone is often used to assess the value of an article.

The introduction of a scientific paper presents the research question in the context of what is already known about the topic, in order to identify why the question being studied is of interest to the scientific community, and what gap in knowledge the study aims to fill ( 15 ). The introduction identifies the study’s purpose and scope, briefly describes the general methods of investigation, and outlines the hypothesis and predictions ( 15 ). The peer reviewer determines whether the introduction provides sufficient background information on the research topic, and ensures that the research question and hypothesis are clearly identifiable.

The methods section describes the experimental procedures and explains why each experiment was conducted. It also lists the equipment and reagents used in the investigation. The methods section should be detailed enough that it can be used to repeat the experiment ( 15 ). Methods are written in the past tense and in the active voice. The peer reviewer assesses whether the appropriate methods were used to answer the research question, and whether they were described in sufficient detail. If information is missing from the methods section, it is the peer reviewer’s job to identify what details need to be added.

The results section is where the outcomes of the experiment and trends in the data are explained without judgement, bias or interpretation ( 15 ). This section can include statistical tests performed on the data, as well as figures and tables in addition to the text. The peer reviewer ensures that the results are described with sufficient detail, and determines their credibility. Reviewers also confirm that the text is consistent with the information presented in tables and figures, and that all figures and tables included are important and relevant ( 15 ). The peer reviewer will also make sure that table and figure captions are appropriate both contextually and in length, and that tables and figures present the data accurately.

The discussion section is where the data is analyzed. Here, the results are interpreted and related to past studies ( 15 ). The discussion describes the meaning and significance of the results in terms of the research question and hypothesis, and states whether the hypothesis was supported or rejected. This section may also provide possible explanations for unusual results and suggestions for future research ( 15 ). The discussion should end with a conclusions section that summarizes the major findings of the investigation. The peer reviewer determines whether the discussion is clear and focused, and whether the conclusions are an appropriate interpretation of the results. Reviewers also ensure that the discussion addresses the limitations of the study, any anomalies in the results, the relationship of the study to previous research, and the theoretical implications and practical applications of the study.

The references are found at the end of the paper, and list all of the information sources cited in the text to describe the background, methods, and/or interpret results. Depending on the citation method used, the references are listed in alphabetical order according to author last name, or numbered according to the order in which they appear in the paper. The peer reviewer ensures that references are used appropriately, cited accurately, formatted correctly, and that none are missing.

Finally, the peer reviewer determines whether the paper is clearly written and if the content seems logical. After thoroughly reading through the entire manuscript, they determine whether it meets the journal’s standards for publication, and whether it falls within the top 25% of papers in its field ( 16 ) to determine priority for publication. An overview of what a peer reviewer looks for when evaluating a manuscript, in order of importance, is presented in Figure 2 .

Figure 2: How a peer reviewer evaluates a manuscript

To increase the chance of success in the peer review process, the author must ensure that the paper fully complies with the journal guidelines before submission. The author must also be open to criticism and suggested revisions, and learn from mistakes made in previous submissions.

ADVANTAGES AND DISADVANTAGES OF THE DIFFERENT TYPES OF PEER REVIEW

The peer review process is generally conducted in one of three ways: open review, single-blind review, or double-blind review. In an open review, both the author of the paper and the peer reviewer know one another’s identity. Alternatively, in single-blind review, the reviewer’s identity is kept private, but the author’s identity is revealed to the reviewer. In double-blind review, the identities of both the reviewer and the author are kept anonymous. Open peer review is advantageous in that it discourages the reviewer from leaving malicious comments, being careless, or procrastinating over completion of the review ( 2 ). It encourages reviewers to be open and honest without being disrespectful. Open reviewing also discourages plagiarism amongst authors ( 2 ). On the other hand, open peer review can also prevent reviewers from being honest for fear of developing a bad rapport with the author. The reviewer may withhold or tone down their criticisms in order to be polite ( 2 ). This is especially true when younger reviewers are given a more esteemed author’s work, in which case the reviewer may be hesitant to provide criticism for fear that it will damage their relationship with a superior ( 2 ). According to the Sense About Science survey, editors find that completely open reviewing decreases the number of people willing to participate, and leads to reviews of little value ( 12 ). In the aforementioned PRC study, only 23% of authors surveyed had experience with open peer review ( 7 ).

Single-blind peer review is by far the most common. In the PRC study, 85% of authors surveyed had experience with single-blind peer review ( 7 ). This method is advantageous as the reviewer is more likely to provide honest feedback when their identity is concealed ( 2 ). This allows the reviewer to make independent decisions without the influence of the author ( 2 ). The main disadvantage of reviewer anonymity, however, is that reviewers who receive manuscripts on subjects similar to their own research may be tempted to delay completing the review in order to publish their own data first ( 2 ).

Double-blind peer review is advantageous as it prevents the reviewer from being biased against the author based on their country of origin or previous work ( 2 ). This allows the paper to be judged based on the quality of the content, rather than the reputation of the author. The Sense About Science survey indicates that 76% of researchers think double-blind peer review is a good idea ( 12 ), and the PRC survey indicates that 45% of authors have had experience with double-blind peer review ( 7 ). The disadvantage of double-blind peer review is that, especially in niche areas of research, it can sometimes be easy for the reviewer to determine the identity of the author based on writing style, subject matter or self-citation, and thus, impart bias ( 2 ).

Masking the author’s identity from peer reviewers, as is the case in double-blind review, is generally thought to minimize bias and maintain review quality. A study by Justice et al. in 1998 investigated whether masking author identity affected the quality of the review ( 17 ). One hundred and eighteen manuscripts were randomized; 26 were peer reviewed as normal, and 92 were moved into the ‘intervention’ arm, where editor quality assessments were completed for 77 manuscripts and author quality assessments were completed for 40 manuscripts ( 17 ). There was no perceived difference in quality between the masked and unmasked reviews. Additionally, the masking itself was often unsuccessful, especially with well-known authors ( 17 ). However, a previous study conducted by McNutt et al. had different results ( 18 ). In this case, blinding was successful 73% of the time, and they found that when author identity was masked, the quality of review was slightly higher ( 18 ). Although Justice et al. argued that this difference was too small to be consequential, their study targeted only biomedical journals, and the results cannot be generalized to journals of a different subject matter ( 17 ). Additionally, there were problems masking the identities of well-known authors, introducing a flaw in the methods. Regardless, Justice et al. concluded that masking author identity from reviewers may not improve review quality ( 17 ).

In addition to open, single-blind and double-blind peer review, there are two experimental forms of peer review. In some cases, following publication, papers may be subjected to post-publication peer review. As many papers are now published online, the scientific community has the opportunity to comment on these papers, engage in online discussions and post formal reviews. For example, the online publishers PLOS and BioMed Central have enabled scientists to post comments on published papers if they are registered users of the site ( 10 ). Philica is another journal launched with this experimental form of peer review. Only 8% of authors surveyed in the PRC study had experience with post-publication review ( 7 ). Another experimental form of peer review, called dynamic peer review, has also emerged. Dynamic peer review is conducted on websites such as Naboj, which allow scientists to review articles posted to preprint repositories ( 19 ). The review is a continuous process, which allows the public to see both the article and the reviews as the article is being developed ( 19 ). Dynamic peer review helps prevent plagiarism, as the scientific community will already be familiar with the work before the peer-reviewed version appears in print ( 19 ). Dynamic review also reduces the time lag between manuscript submission and publishing. An example of a preprint server is arXiv, developed by Paul Ginsparg in 1991, which is used primarily by physicists ( 19 ). These alternative forms of peer review are still experimental and not yet established. Traditional peer review is time-tested and still highly utilized. All methods of peer review have their advantages and deficiencies, and all are prone to error.

PEER REVIEW OF OPEN ACCESS JOURNALS

Open access (OA) journals are becoming increasingly popular as they allow the potential for widespread distribution of publications in a timely manner ( 20 ). Nevertheless, there can be issues regarding the peer review process of open access journals. In a study published in Science in 2013, John Bohannon submitted 304 slightly different versions of a fictional scientific paper (written by a fake author, working out of a non-existent institution) to a selected group of OA journals. This study was performed in order to determine whether papers submitted to OA journals are properly reviewed before publication, in comparison to subscription-based journals. The journals in this study were selected from the Directory of Open Access Journals (DOAJ) and Beall’s List, a list of journals that are potentially predatory, and all required a fee for publishing ( 21 ). Of the 304 journals, 157 accepted the fake paper, suggesting that acceptance was based on financial interest rather than the quality of the article itself, while 98 journals promptly rejected the fakes ( 21 ). Although this study highlights useful information on the problems associated with lower-quality publishers that do not have an effective peer review system in place, the article also generalizes the study results to all OA journals, which can be detrimental to the general perception of OA journals. Two limitations of the study made it impossible to accurately determine the relationship between peer review and OA journals: 1) there was no control group of subscription-based journals, and 2) the fake papers were sent to a non-randomized selection of journals, resulting in bias.

JOURNAL ACCEPTANCE RATES

Based on a recent survey, the average acceptance rate for papers submitted to scientific journals is about 50% ( 7 ). Of all submitted manuscripts, 20% are rejected prior to review and 30% are rejected following review ( 7 ). Within the 50% that are accepted, 41% of all submissions are accepted on the condition of revision, while only 9% are accepted without any request for revision ( 7 ).
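These figures are internally consistent, which a quick arithmetic check confirms (a minimal sketch using only the percentages quoted above):

```python
# PRC survey figures, expressed as percentages of all submitted manuscripts
rejected_before_review = 20
rejected_after_review = 30
accepted_with_revision = 41
accepted_without_revision = 9

accepted = accepted_with_revision + accepted_without_revision  # 41 + 9
rejected = rejected_before_review + rejected_after_review      # 20 + 30

print(accepted)             # 50: matches the reported ~50% acceptance rate
print(rejected)             # 50: the other half of submissions
print(accepted + rejected)  # 100: the four categories cover all submissions
```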

SATISFACTION WITH THE PEER REVIEW SYSTEM

Based on a recent survey by the PRC, 64% of academics are satisfied with the current system of peer review, and only 12% claimed to be ‘dissatisfied’ ( 7 ). The large majority, 85%, agreed with the statement that ‘scientific communication is greatly helped by peer review’ ( 7 ). There was a similarly high level of support (83%) for the idea that peer review ‘provides control in scientific communication’ ( 7 ).

HOW TO PEER REVIEW EFFECTIVELY

The following are ten tips on how to be an effective peer reviewer as indicated by Brian Lucey, an expert on the subject ( 22 ):

1) Be professional

Peer review is a mutual responsibility among fellow scientists, and scientists are expected, as part of the academic community, to take part in peer review. If one is to expect others to review their work, they should commit to reviewing the work of others as well, and put effort into it.

2) Be pleasant

If the paper is of low quality, suggest that it be rejected, but do not leave ad hominem comments. There is no benefit to being ruthless.

3) Read the invite

When a journal emails a scientist to ask them to conduct a peer review, the majority will provide a link to either accept or decline the invitation. Do not respond to the email; respond via the link.

4) Be helpful

Suggest how the authors can overcome the shortcomings in their paper. A review should guide the author on what is good and what needs work from the reviewer’s perspective.

5) Be scientific

The peer reviewer plays the role of a scientific peer, not an editor for proofreading or decision-making. Don’t fill a review with comments on editorial and typographic issues. Instead, focus on adding value with scientific knowledge and commenting on the credibility of the research conducted and conclusions drawn. If the paper has a lot of typographical errors, suggest that it be professionally proof edited as part of the review.

6) Be timely

Stick to the timeline given when conducting a peer review. Editors track who is reviewing what and when and will know if someone is late on completing a review. It is important to be timely both out of respect for the journal and the author, as well as to not develop a reputation of being late for review deadlines.

7) Be realistic

The peer reviewer must be realistic about the work presented, the changes they suggest, and their own role. Peer reviewers may set the bar too high for the paper they are reviewing by proposing overly ambitious changes, which editors must then override.

8) Be empathetic

Ensure that the review is scientific, helpful and courteous. Be sensitive and respectful with word choice and tone in a review.

9) Be open

Remember that both specialists and generalists can provide valuable insight when peer reviewing. Editors will try to get both specialised and general reviewers for any particular paper to allow for different perspectives. If someone is asked to review, the editor has determined they have a valid and useful role to play, even if the paper is not in their area of expertise.

10) Be organised

A review requires structure and logical flow. A reviewer should proofread their review before submitting it for structural, grammatical and spelling errors as well as for clarity. Most publishers provide short guides on structuring a peer review on their website. Begin with an overview of the proposed improvements; then provide feedback on the paper structure, the quality of data sources and methods of investigation used, the logical flow of argument, and the validity of conclusions drawn. Then provide feedback on style, voice and lexical concerns, with suggestions on how to improve.

In addition, the American Physiological Society (APS) recommends in its Peer Review 101 Handout that peer reviewers should put themselves in both the editor’s and the author’s shoes to ensure that they provide what both the editor and the author need and expect ( 11 ). To please the editor, the reviewer should ensure that the peer review is completed on time, and that it provides clear explanations to back up recommendations. To be helpful to the author, the reviewer must ensure that their feedback is constructive. It is suggested that the reviewer take time to think about the paper; they should read it once, wait at least a day, and then re-read it before writing the review ( 11 ). The APS also suggests that graduate students and researchers pay attention to how peer reviewers edit their work, as well as to which edits they find helpful, in order to learn how to peer review effectively ( 11 ). Additionally, it is suggested that graduate students practice reviewing by editing their peers’ papers and asking a faculty member for feedback on their efforts. It is recommended that young scientists offer to peer review as often as possible in order to become skilled at the process ( 11 ). The majority of students, fellows and trainees do not get formal training in peer review, but rather learn by observing their mentors. According to the APS, one acquires experience through networking and referrals, and should therefore try to strengthen relationships with journal editors by offering to review manuscripts ( 11 ). The APS also suggests that experienced reviewers provide constructive feedback to students and junior colleagues on their peer review efforts, and encourage them to peer review in order to demonstrate the importance of this process in improving science ( 11 ).

The peer reviewer should only comment on areas of the manuscript that they are knowledgeable about ( 23 ). If there is any section of the manuscript they feel they are not qualified to review, they should mention this in their comments and not provide further feedback on that section. The peer reviewer is not permitted to share any part of the manuscript with a colleague (even if they may be more knowledgeable in the subject matter) without first obtaining permission from the editor ( 23 ). If a peer reviewer comes across something they are unsure of in the paper, they can consult the literature to try and gain insight. It is important for scientists to remember that if a paper can be improved by the expertise of one of their colleagues, the journal must be informed of the colleague’s help, and approval must be obtained for their colleague to read the protected document. Additionally, the colleague must be identified in the confidential comments to the editor, in order to ensure that he/she is appropriately credited for any contributions ( 23 ). It is the job of the reviewer to make sure that the colleague assisting is aware of the confidentiality of the peer review process ( 23 ). Once the review is complete, the manuscript must be destroyed and cannot be saved electronically by the reviewers ( 23 ).

COMMON ERRORS IN SCIENTIFIC PAPERS

When performing a peer review, there are some common scientific errors to look out for. Most of these errors are violations of logic and common sense: these may include contradictory statements, unwarranted conclusions, suggestion of causation when there is only support for correlation, inappropriate extrapolation, circular reasoning, or pursuit of a trivial question ( 24 ). It is also common for authors to suggest that two variables are different because the effects of one variable are statistically significant while the effects of the other variable are not, rather than directly comparing the two variables ( 24 ). Authors sometimes overlook a confounding variable and do not control for it, or forget to include important details on how their experiments were controlled or the physical state of the organisms studied ( 24 ). Another common fault is the author’s failure to define terms or use words with precision, as these practices can mislead readers ( 24 ). Jargon and/or misused terms can be a serious problem in papers. Inaccurate statements about specific citations are also a common occurrence ( 24 ). Additionally, many studies produce knowledge that can be applied to areas of science outside the scope of the original study; it is therefore better for reviewers to look at the novelty of the idea, conclusions, data, and methodology, rather than to scrutinize whether or not the paper answered the specific question at hand ( 24 ). Although it is important to recognize these points, when performing a review it is generally better practice for the peer reviewer not to focus on a checklist of things that could be wrong, but rather to carefully identify the problems specific to each paper and continuously ask themselves whether anything is missing ( 24 ). An extremely detailed description of how to conduct peer review effectively is presented in the paper How I Review an Original Scientific Article by Frederic G. Hoppin, Jr. It can be accessed through the American Physiological Society website under the Peer Review Resources section.

CRITICISM OF PEER REVIEW

A major criticism of peer review is that there is little evidence the process actually works: that it is an effective screen for good-quality scientific work, or that it improves the quality of the scientific literature. As a 2002 study published in the Journal of the American Medical Association concluded, ‘Editorial peer review, although widely used, is largely untested and its effects are uncertain’ ( 25 ). Critics also argue that peer review is not effective at detecting errors. Highlighting this point, an experiment by Godlee et al. published in the British Medical Journal (BMJ) inserted eight deliberate errors into a paper that was nearly ready for publication, and then sent the paper to 420 potential reviewers ( 7 ). Of the 420 reviewers who received the paper, 221 (53%) responded. The average number of errors spotted by reviewers was two, no reviewer spotted more than five, and 35 reviewers (16%) spotted none at all.
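The reported rates are internally consistent, as a quick check shows (a minimal sketch using only the counts given above; note that the 16% figure is a share of the 221 responders, not of all 420 reviewers):

```python
# Counts reported for the Godlee et al. BMJ error-detection experiment
sent = 420       # reviewers who received the manuscript
responded = 221  # reviewers who returned a review
found_none = 35  # responders who spotted none of the eight errors

response_rate = responded / sent          # 0.526...
missed_all_rate = found_none / responded  # 0.158...

assert round(response_rate * 100) == 53   # the reported 53% response rate
assert round(missed_all_rate * 100) == 16 # the reported 16%, relative to responders
```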

Another criticism is that peer review is not conducted thoroughly by some scientific conferences, whose goal is to attract large numbers of submitted papers. Such conferences often accept any paper sent in, regardless of its credibility or the prevalence of errors, because the more papers they accept, the more money they can make from author registration fees ( 26 ). This misconduct was exposed by three MIT graduate students, Jeremy Stribling, Dan Aguayo and Maxwell Krohn, who developed a simple computer program called SCIgen that generates nonsense papers and presents them as scientific papers ( 26 ). A nonsense SCIgen paper submitted to a conference was promptly accepted. In 2014, Nature reported that French researcher Cyril Labbé had discovered sixteen SCIgen nonsense papers published by the German academic publisher Springer ( 26 ), while over 100 nonsense papers generated by SCIgen had been published by the US Institute of Electrical and Electronic Engineers (IEEE) ( 26 ). Both organizations have been working to remove the papers. Labbé developed a program to detect SCIgen papers and has made it freely available, at http://scigendetect.on.imag.fr/main.php, to help publishers and conference organizers avoid accepting nonsense work in the future ( 26 ).

Additionally, peer review is often criticized for being unable to reliably detect plagiarism. However, many believe that detecting plagiarism cannot practically be included as a component of peer review. As explained by Alice Tuff, development manager at Sense About Science, ‘The vast majority of authors and reviewers think peer review should detect plagiarism (81%) but only a minority (38%) think it is capable. The academic time involved in detecting plagiarism through peer review would cause the system to grind to a halt’ ( 27 ). To help address this issue, publishing house Elsevier began developing electronic plagiarism tools with the help of journal editors in 2009 ( 27 ).

It has also been argued that peer review lowers research quality by limiting creativity amongst researchers. Proponents of this view claim that peer review deters scientists from pursuing innovative research ideas and bold research questions with the potential to produce major advances and paradigm shifts in the field, because they believe such work will likely be rejected by their peers upon review ( 28 ). Indeed, in some cases peer review may result in the rejection of innovative research, as some studies may not seem particularly strong initially, yet may yield very interesting and useful developments when examined under different circumstances or in the light of new information ( 28 ). Scientists who do not believe in peer review argue that the process stifles the development of ingenious ideas, and thus the release of fresh knowledge and new developments into the scientific community.

Peer review is also criticized on the grounds that the number of people competent to conduct reviews is small relative to the vast number of papers that need reviewing. An enormous number of papers are published each year (1.3 million papers in 23,750 journals in 2006), more than the available pool of competent peer reviewers could possibly have reviewed ( 29 ). Thus, people who lack the required expertise to analyze the quality of a research paper are conducting reviews, and weak papers are being accepted as a result. It is now possible to publish any paper in an obscure journal that claims to be peer-reviewed, though the paper or journal itself could be substandard ( 29 ). On a similar note, the US National Library of Medicine indexes 39 journals that specialize in alternative medicine, and though they all identify themselves as “peer-reviewed”, they rarely publish any high-quality research ( 29 ). This highlights the fact that peer review of more controversial or specialized work is typically performed by people who are interested in it and hold views or opinions similar to the author’s, which can bias their reviews. For instance, a paper on homeopathy is likely to be reviewed by fellow practicing homeopaths, and thus is likely to be accepted as credible, though other scientists may find the paper to be nonsense ( 29 ). In some cases, papers are initially published but their credibility is challenged at a later date, and they are subsequently retracted. Retraction Watch is a website dedicated to revealing papers that have been retracted after publishing, potentially due to improper peer review ( 30 ).

Additionally, despite its many positive outcomes, peer review is criticized for delaying the dissemination of new knowledge into the scientific community, and for being an unpaid activity that takes scientists’ time away from activities they would otherwise prioritize, such as research and teaching, for which they are paid ( 31 ). As described by Eva Amsen, Outreach Director for F1000Research, peer review was originally developed as a means of helping editors choose which papers to publish when journals had to limit the number of papers they could print in one issue ( 32 ). Nowadays, however, most journals are available online, either exclusively or in addition to print, and many have very limited print runs ( 32 ). Since journals no longer face page limits, any good work can and should be published. Consequently, being selective to save space in a journal is no longer a valid reason for peer reviewers to reject a paper ( 32 ). Nevertheless, some reviewers have used this excuse when they have personal ulterior motives, such as getting their own research published first.

RECENT INITIATIVES TOWARDS IMPROVING PEER REVIEW

F1000Research was launched in January 2013 by Faculty of 1000 as an open access journal that immediately publishes papers (after an initial check to ensure that the paper is in fact produced by a scientist and has not been plagiarised), and then conducts transparent post-publication peer review ( 32 ). F1000Research aims to prevent delays in new science reaching the academic community that are caused by prolonged publication times ( 32 ). It also aims to make peer reviewing more fair by eliminating any anonymity, which prevents reviewers from delaying the completion of a review so they can publish their own similar work first ( 32 ). F1000Research offers completely open peer review, where everything is published, including the name of the reviewers, their review reports, and the editorial decision letters ( 32 ).

PeerJ was founded by Jason Hoyt and Peter Binfield in June 2012 as an open access, peer-reviewed scholarly journal for the Biological and Medical Sciences ( 33 ). PeerJ selects articles to publish based only on scientific and methodological soundness, not on subjective determinants of ‘impact’, ‘novelty’ or ‘interest’ ( 34 ). It works on a “lifetime publishing plan” model which charges scientists for publishing plans that give them lifetime rights to publish with PeerJ, rather than charging them per publication ( 34 ). PeerJ also encourages open peer review, and authors are given the option to post the full peer review history of their submission with their published article ( 34 ). PeerJ also offers a pre-print review service called PeerJ Pre-prints, in which paper drafts are reviewed before being sent to PeerJ to publish ( 34 ).

Rubriq is an independent peer review service designed by Shashi Mudunuri and Keith Collier to improve the peer review system ( 35 ). Rubriq is intended to decrease redundancy in the peer review process so that the time lost in redundant reviewing can be put back into research ( 35 ). According to Keith Collier, over 15 million hours are lost each year to redundant peer review, as papers get rejected from one journal and are subsequently submitted to a less prestigious journal where they are reviewed again ( 35 ). Authors often have to submit their manuscript to multiple journals, and are often rejected multiple times before they find the right match. This process could take months or even years ( 35 ). Rubriq makes peer review portable in order to help authors choose the journal that is best suited for their manuscript from the beginning, thus reducing the time before their paper is published ( 35 ). Rubriq operates under an author-pay model, in which the author pays a fee and their manuscript undergoes double-blind peer review by three expert academic reviewers using a standardized scorecard ( 35 ). The majority of the author’s fee goes towards a reviewer honorarium ( 35 ). The papers are also screened for plagiarism using iThenticate ( 35 ). Once the manuscript has been reviewed by the three experts, the most appropriate journal for submission is determined based on the topic and quality of the paper ( 35 ). The paper is returned to the author in 1-2 weeks with the Rubriq Report ( 35 ). The author can then submit their paper to the suggested journal with the Rubriq Report attached. The Rubriq Report will give the journal editors a much stronger incentive to consider the paper as it shows that three experts have recommended the paper to them ( 35 ). Rubriq also has its benefits for reviewers; the Rubriq scorecard gives structure to the peer review process, and thus makes it consistent and efficient, which decreases time and stress for the reviewer. 
Reviewers also receive feedback on their reviews and, most significantly, are compensated for their time ( 35 ). Journals benefit as well: they receive pre-screened papers, which reduces the number of submissions, many of which would ultimately be rejected, sent to their own reviewers ( 35 ). This can reduce reviewer fatigue and ensure that only higher-quality articles reach their peer reviewers ( 35 ).

According to Eva Amsen, peer review and scientific publishing are moving in a new direction, in which all papers will be posted online, and a post-publication peer review will take place that is independent of specific journal criteria and solely focused on improving paper quality ( 32 ). Journals will then choose papers that they find relevant based on the peer reviews and publish those papers as a collection ( 32 ). In this process, peer review and individual journals are uncoupled ( 32 ). In Keith Collier’s opinion, post-publication peer review is likely to become more prevalent as a complement to pre-publication peer review, but not as a replacement ( 35 ). Post-publication peer review will not serve to identify errors and fraud but will provide an additional measurement of impact ( 35 ). Collier also believes that as journals and publishers consolidate into larger systems, there will be stronger potential for “cascading” and shared peer review ( 35 ).

CONCLUDING REMARKS

Peer review has become fundamental in assisting editors to select credible, high-quality, novel and interesting research papers for publication in scientific journals, and in ensuring the correction of any errors or issues present in submitted papers. Though the peer review process still has flaws and deficiencies, a more suitable screening method for scientific papers has not yet been proposed or developed. Researchers have begun, and must continue, to look for ways of addressing the current issues with peer review, to move it towards a foolproof system that releases only quality research papers into the scientific community.

What is open peer review? A systematic review

Tony Ross-Hellauer Roles: Conceptualization, Data Curation, Formal Analysis, Investigation, Methodology, Project Administration, Resources, Software, Supervision, Validation, Visualization, Writing – Original Draft Preparation, Writing – Review & Editing


This article is included in the Research on Research, Policy & Culture gateway.

Keywords: open peer review, Open Science, scholarly communication, research evaluation, publishing

Revised Amendments from Version 1

The description of traditional peer review in the Background section has been revised to clarify the role of peer review in scholarly communication. The methodology section has been expanded to more completely describe the search strategy and inclusion criteria for the study. A new section and figure have been added to the results section to examine disciplinary differences amongst definitions. One figure was previously incorrect, as it included an extra row. The figure (Figure 6 in version 1; Figure 7 in version 2) has now been corrected. Two new sections have been added to the discussion which make clearer (1) the particular problems with traditional peer review that each OPR trait aims to address, and (2) how each trait can be related to the broader agenda of Open Science (a new figure is also added). The conclusion has been expanded to further clarify the article's findings and limitations. A Conflict of Interest statement has been added to more explicitly acknowledge the author’s relationship to OpenAIRE.


Introduction

  “Open review and open peer review are new terms for evolving phenomena. They don’t have precise or technical definitions. No matter how they’re defined, there’s a large area of overlap between them. If there’s ever a difference, some kinds of open review accept evaluative comments from any readers, even anonymous readers, while other kinds try to limit evaluative comments to those from ‘peers’ with expertise or credentials in the relevant field. But neither kind of review has a special name, and I think each could fairly be called ‘open review’ or ‘open peer review’.” - Peter Suber, email correspondence, 2007 1 .

As with other areas of “open science” ( Pontika et al. , 2015 ), “open peer review” (OPR) is a hot topic, with a rapidly growing literature that discusses it. Yet, as has been consistently noted ( Ford, 2013 ; Hames, 2014 ; Ware, 2011 ), OPR has neither a standardized definition, nor an agreed schema of its features and implementations. The literature reflects this, with a myriad of overlapping and often contradictory definitions. While the term is used by some to refer to peer review where the identities of both author and reviewer are disclosed to each other, for others it signifies systems where reviewer reports are published alongside articles. For others it signifies both of these conditions, and for yet others it describes systems where not only “invited experts” are able to comment. For still others, it includes a variety of combinations of these and other novel methods. The previous major attempt to resolve these elements systematically to provide a unified definition ( Ford, 2013 ), discussed later, unfortunately ultimately confounds rather than resolves these issues.

In short, things have not improved much since Suber made his astute observation. This continuing imprecision grows more problematic over time, however. As Mark Ware notes, “it is not always clear in debates over the merits of OPR exactly what is being referred to” ( Ware, 2011 ). Differing flavours of OPR include independent factors (open identities, open reports, open participation, etc.), which have no necessary connection to each other, and very different benefits and drawbacks. Evaluation of the efficacy of these differing variables and hence comparison between differing systems is therefore problematic. Discussions are potentially side-tracked when claims are made for the efficacy of “OPR” in general, despite critique usually being focussed on one element or distinct configuration of OPR. It could even be argued that this inability to define terms is to blame for the fact that, as Nicholas Kriegskorte has pointed out, “we have yet to develop a coherent shared vision for ‘open evaluation’ (OE), and an OE movement comparable to the OA movement” ( Kriegeskorte, 2012 ).

To resolve this, I undertake a systematic review of the definitions of “open peer review” or “open review”, to create a corpus of more than 120 definitions. These definitions have been systematically analysed to build a coherent typology of the many different innovations in peer review signified by the term, and hence provide the precise technical definition that is currently lacking. This quantifiable data yields rich information on the range and extent of differing definitions over time and by broad subject area. Based on this work, I propose a pragmatic definition of OPR as an umbrella term for a number of overlapping ways that peer review models can be adapted in line with the aims of Open Science, including making reviewer and author identities open, publishing review reports and enabling greater participation in the peer review process.

1. Problems with peer review

Peer review is the formal quality assurance mechanism whereby scholarly manuscripts (e.g. journal articles, books, grant applications and conference papers) are made subject to the scrutiny of others, whose feedback and judgements are then used to improve works and make final decisions regarding selection (for publication, grant allocation or speaking time). Peer review usually performs two distinct functions: (1) technical evaluation of the validity or soundness of a work in its methodology, analysis and argumentation (answering the question “is it good scholarship?”), and (2) assisting editorial selection by assessing the novelty or expected impact of a work (“is it exciting, innovative or important scholarship?”, “is it right for this journal, conference or funding call?”). The two processes need not be entwined, and some journals such as PLOS ONE and PeerJ, have begun to adopt models where reviewers are asked to focus only on technical soundness.

This broad system is perhaps more recent than one might expect, with its main formal elements only in general use since the mid-twentieth century in scientific publishing ( Spier, 2002 ). Researchers agree that peer review per se is necessary, but most find the current model sub-optimal. Ware’s 2008 survey, for example, found that an overwhelming majority (85%) agreed that “peer review greatly helps scientific communication” and that even more (around 90%) said their own last published paper had been improved by peer review. Yet only around two thirds (64%) declared themselves satisfied with the current system of peer review, and less than a third (32%) believed this system was the best possible ( Ware, 2008 ). A recent follow-up study by the same author reported a slight increase in the desire for improvements in peer review ( Ware, 2016 ).

Widespread beliefs that the current model is sub-optimal can be attributed to the various ways in which traditional peer review has been subject to criticism. These criticisms apply to differing levels, with some concerning the work of peer reviewers themselves, and others more concerned with editorial decisions based upon or affecting peer review. I next give a brief overview of these various criticisms of traditional peer review:

Unreliability and inconsistency: Reliant upon the vagaries of human judgement, the objectivity, reliability, and consistency of peer review are subject to question. Studies show reviewers’ views tend to show very weak levels of agreement ( Kravitz et al. , 2010 ; Mahoney, 1977 ), at levels only slightly better than chance ( Herron, 2012 ; Smith, 2006 ). Studies suggest decisions on rejection or acceptance are similarly inconsistent. For example, Peters and Ceci’s classic study found that eight out of twelve papers were rejected for methodological flaws when resubmitted to the same journals in which they had already been published ( Peters & Ceci, 1982 ). This inconsistency is mirrored in peer review’s inability to prevent errors and fraud from entering the scientific literature. Reviewers often fail to detect major methodological failings ( Schroter et al. , 2004 ), with eminent journals (whose higher rejection rates might suggest more stringent peer review processes) seeming to perform no better than others ( Fang et al. , 2012 ). Indeed, Fang and Casadevall found that the frequency of retraction is strongly correlated with the journal impact factor ( Fang & Casadevall, 2011 ). Whatever the cause, recent sharp rises in the number of retracted scientific publications ( Steen et al. , 2013 ) testify that peer review sometimes fails in its role as the gatekeeper of science, allowing errors and fraudulent material to enter the literature. At an editorial level, peer review’s other role, of guiding decisions that should in theory filter the best work into the best journals, also seems to be found wanting. Many articles in top journals remain poorly cited, while many of the most highly-cited articles in their fields are published in lower-tier journals ( Jubb, 2016 ).

Delay and expense: The period from submission to publication at many journals can often exceed one year, with much of this time taken up by peer review. This delay slows down the availability of results for further research and professional exploitation. The work undertaken in this period is also expensive, with the global costs of reviewers’ time estimated at £1.9bn in 2008 ( Research Information Network [RIN], 2008 ), a figure which does not take into account the coordinating costs of publishers, or the time authors spend revising and resubmitting manuscripts ( Jubb, 2016 ). These costs are greatly exacerbated by the current system in which peer review is managed by each journal, such that the same manuscript may be peer reviewed many times over as it is successively rejected and resubmitted until it finds acceptance. It could be argued that these issues relate more to editorial process than peer review per se . However, as we shall see, various new publishing models which encompass innovations in peer review (including open peer review), have the potential to address such issues.

Lack of accountability and risks of subversion: The “black-box” nature of traditional peer review gives reviewers, editors and even authors a lot of power to potentially subvert the process. At the editorial level, lack of transparency means that editors can unilaterally reject submissions or shape review outcomes by selecting reviewers based on their known preference for or aversion to certain theories and methods ( Travis & Collins, 1991 ). Reviewers, shielded by anonymity, may act unethically in their own interests by concealing conflicts of interest. Smith, an experienced editor, for example, reports reviewers stealing ideas and passing them off as their own, or intentional blocking or delaying publication of competitors’ ideas through harsh reviews ( Smith, 2006 ). Equally, they may simply favour their friends and target their enemies. Authors, meanwhile, can manipulate the system by writing reviews of their own work via fake or stolen identities ( Kaplan, 2015 ).

Social and publication biases: Although often idealized as impartial, objective assessors, in reality studies suggest that peer reviewers may be subject to social biases on the grounds of gender ( Budden et al. , 2008 ; Lloyd, 1990 ; Tregenza, 2002 ), nationality ( Daniel, 1993 ; Ernst & Kienbacher, 1991 ; Link, 1998 ), institutional affiliation ( Dall’Aglio, 2006 ; Gillespie et al. , 1985 ; Peters & Ceci, 1982 ), language ( Cronin, 2009 ; Ross et al. , 2006 ; Tregenza, 2002 ) and discipline ( Travis & Collins, 1991 ). Other studies suggest so-called “publication bias”, where prejudices against specific categories of works shape what is published. Publication bias can take many forms. First is a preference for complexity over simplicity in methodology (even if inappropriate, c.f. Travis & Collins, 1991 ) and language ( Armstrong, 1997 ). Next, “confirmatory bias” is theorized to lead to conservatism, biasing reviewers against innovative methods or results contrary to dominant theoretical perspectives ( Chubin & Hackett, 1990 ; Garcia et al. , 2016 ; Mahoney, 1977 ). Finally, factors like the pursuit of “impact” and “excellence” ( Moore et al. , 2017 ) mean that editors and reviewers seem primed to prefer positive results over negative or neutral ones ( Bardy, 1998 ; Dickersin et al. , 1992 ; Fanelli, 2010 ; Ioannidis, 1998 ), and to disfavour replication studies ( Campanario, 1998 ; Kerr et al. , 1977 ).

Lack of incentives : Traditional peer review provides little in the way of incentives for reviewers, whose work is almost exclusively unpaid and whose anonymous contributions cannot be recognised and hence rewarded ( Armstrong, 1997 ; Ware, 2008 ).

Wastefulness: Reviewer comments often add context or point to areas for future work. Reviewer disagreements can expose areas of tension in a theory or argument. The behind-the-scenes discussions of reviewers and authors can also guide younger researchers in learning review processes. Readers may find such information helpful and yet at present, this potentially valuable additional information is wasted.

In response to these criticisms, a wide variety of changes to peer review have been suggested (see the extensive overviews in Tennant et al. , 2017 ; Walker & Rocha da Silva, 2015 ). Amongst these innovations, many have been labelled as “open peer review” at one time or another. As we shall see, these innovations labelled as OPR in fact encompass a wide variety of discrete ways in which peer review can be “opened up”. Each of these distinct traits are theorized to address one or more of the shortcomings listed above, but no trait is claimed to address all of them and sometimes their aims may be in conflict. These points will be addressed fully in the discussion section.

2. The contested meaning of open peer review

The diversity of the definitions provided for open peer review can be seen by examining just two examples. The first one is, to my knowledge, the first recorded use of the phrase “open peer review”:

“[A]n open reviewing system would be preferable. It would be more equitable and more efficient. Knowing that they would have to defend their views before their peers should provide referees with the motivation to do a good job. Also, as a side benefit, referees would be recognized for the work they had done (at least for those papers that were published). Open peer review would also improve communication. Referees and authors could discuss difficult issues to find ways to improve a paper, rather than dismissing it. Frequently, the review itself provides useful information. Should not these contributions be shared? Interested readers should have access to the reviews of the published papers.” ( Armstrong, 1982 )

“[O]pen review makes submissions OA [open access], before or after some prepublication review, and invites community comments. Some open-review journals will use those comments to decide whether to accept the article for formal publication, and others will already have accepted the article and use the community comments to complement or carry forward the quality evaluation started by the journal. ” ( Suber, 2012 )

Within just these two examples, there are already a multitude of factors at play, including the removal of anonymity, the publishing of review reports, interaction between participants, crowdsourcing of reviews, and making manuscripts public pre-review, amongst others. But each of these are distinct factors, presenting separate strategies for openness and targeting differing problems. For example, disclosure of identities aims usually at increasing accountability and minimizing bias, c.f. “referees should be more highly motivated to do a competent and fair review if they may have to defend their views to the authors and if they will be identified with the published papers” ( Armstrong, 1982 ). Publication of reports, on the other hand, also tackles problems of incentive (reviewers can get credit for their work) and wastefulness (reports can be consulted by readers). Moreover, these factors need not necessarily be linked, which is to say that they can be employed separately: identities can be disclosed without reports being published, and reports published with reviewer names withheld, for example.

This diversity has led many authors to acknowledge the essential ambiguity of the term “open peer review” ( Hames, 2014 ; Sandewall, 2012 ; Ware, 2011 ). The major attempt thus far to bring coherence to this confusing landscape of competing and overlapping definitions, is Emily Ford’s paper “Defining and Characterizing Open Peer Review: A Review of the Literature” ( Ford, 2013 ). Ford examined thirty-five articles to produce a schema of eight “common characteristics” of OPR: signed review, disclosed review, editor-mediated review, transparent review, crowdsourced review, prepublication review, synchronous review, and post-publication review. Unfortunately, however, Ford’s paper fails to offer a definitive definition of OPR, since despite distinguishing eight “common characteristics” of OPR, Ford nevertheless tries to reduce it to merely one: open identities: “Despite the differing definitions and implementations of open peer review discussed in the literature, its general treatment suggests that the process incorporates disclosure of authors’ and reviewers’ identities at some point during an article’s review and publication” (p. 314). Summing up her argument elsewhere, she says: “my previous definition … broadly understands OPR as any scholarly review mechanism providing disclosure of author and referee identities to one another” ( Ford, 2015 ). But the other elements of her schema do not reduce to this one factor. Many definitions do not include open identities at all. This hence means that although Ford claims to have identified several features of OPR, she in fact is asserting that there is only one defining factor (open identity), which leaves us where we started. 
Ford’s schema is also problematic elsewhere: it lists “editor-mediated review” and “pre-publication review” as distinguishing characteristics, despite these being common traits of traditional peer review; it includes questionable elements such as the purely “theoretical” “synchronous review”; and some of its characteristics do not seem to be “base elements”, but complexes of other traits – for example, the definition of “transparent review” incorporates other characteristics such as open identities (which Ford terms “signed review”) and open reports (“disclosed review”).

Method: A systematic review of previous definitions

To resolve this ambiguity, I performed a review of the literature for articles discussing “open review” or “open peer review”, extracting a corpus of 122 definitions of OPR. I first searched Web of Science (WoS) for TOPIC: (“open review” OR “open peer review”), with no limitation on date of publication, yielding a total of 137 results (searched on 12th July 2016). These records were then each individually examined for relevance and a total of 57 were excluded. 21 results (all BioMed Central publications) had been through an OPR process (which was mentioned in the abstract) but did not themselves touch on the subject of OPR; 12 results used the phrase “open review” to refer to a literature review with a flexible methodology; 12 results concerned the review of objects other than those classed as in scope (i.e. academic articles, books, conference submissions, data); examples included guidelines for clinical or therapeutic techniques, standardized terminologies, patent applications, and court judgements; 7 results were not in the English language; and 5 results were duplicate entries in WoS. This left a total of 80 relevant articles which mentioned either “open peer review” or “open review”.

The same search terms were applied to find sources in other academic databases (Google Scholar, PubMed, ScienceDirect, JSTOR and Project Muse). In addition, the first 10 pages of search results for these terms in Google and Google Books (search conducted 18th July 2016) were examined to find references in “grey literature” (blogs, reports, white papers) and books respectively. Finally, the author examined the reference sections of identified publications, especially bibliographies and literature reviews, to find further references. Duplicate results were discarded and the above exclusion criteria applied to add a further 42 definitions to the corpus. The dataset is available online ( Ross-Hellauer, 2017 , http://doi.org/10.5281/zenodo.438024 ).

Each source was then individually examined for its definition of OPR. Where no explicit definition (e.g. “OPR is …”) was given, implicit definitions were gathered from contextual statements. For instance, “reviewers can notify the editors if they want to opt-out of the open review system and stay anonymous” ( Janowicz & Hitzler, 2012 ) is taken to endorse a definition of OPR as incorporating open identities. In a few cases, sources defined OPR in relation to the systems of specific publishers (e.g., F1000Research, BioMed Central and Nature), and so were taken to implicitly endorse those systems as definitive of OPR.

In searching only for the terms “open review” and “open peer review”, the study explicitly limits itself only to that literature which uses these terms. It is hence important to note that it is likely that other studies have described or proposed innovations to peer review which have aims similar to those identified by this study. However, if they have not explicitly used the label “open review” or “open peer review” in conjunction with these systems, those studies would necessarily fall outside of scope. For example, “post-publication peer review” (PPPR) is clearly a concept closely-related to OPR, but unless sources explicitly equate the two, sources discussing PPPR are not included in this review. It is acknowledged that this focus on the distinct usages of the term OPR, rather than on all sources which touch on the various aims and ideas which underlie such systems, limits the scope of this study.

The number of definitions of OPR over time shows a clear upward trend, with the most definitions in a single year coming in 2015. The distribution shows that except for some outlying definitions in the early 1980s, the phrase “open peer review” did not really enter academic discussion until the early 1990s. At that time, the phrase seems to have been used largely to refer to non-blinded review (i.e. open identities). There is then a big upswing from the early-mid 2000s onwards, which perhaps correlates with the rise of the openness agenda (especially open access, but also open data and open science more generally) over that period ( Figure 1 ). Most of the definitions, 77.9% (n=95), come from peer-reviewed journal articles, with the second largest sources being books and blog posts. Other sources include letters to journals, news items, community reports and glossaries ( Figure 2 ). As shown in Figure 3 , the majority of definitions (51.6%) were identified to be primarily concerned with peer review of Science, Technology, Engineering and Medicine (STEM) material, while 10.7% targeted Social Sciences and Humanities (SSH) material. The remainder (37.7%) were interdisciplinary. Meanwhile, regarding the target of the OPR mentioned in these articles ( Figure 4 ), most referred to peer review of journal articles (80.7%), 16% did not specify a target, and a small number of articles also referred to review of data, conference papers and grant proposals.

Figure 1. Definitions of OPR in the literature by year.

Figure 2. Breakdown of OPR definitions by source.

Figure 3. Breakdown of OPR definitions by disciplinary scope.

Figure 4. Breakdown of OPR definitions by type of material being reviewed.

Of the 122 definitions identified, 68.0% (n=83) were explicitly stated, 37.7% (n=46) implicitly stated, and 5.7% (n=7) contained both explicit and implicit information.

The extracted definitions were examined and classified against an iteratively constructed taxonomy of OPR traits. Nickerson et al. (2013) advise that the development of a taxonomy should begin by identifying the appropriate meta-characteristic – in this case distinct individual innovations to the traditional peer review system. An iterative approach then followed, in which dimensions given in the literature were applied to the corpus of definitions and gaps/overlaps in the OPR taxonomy identified. Based on this, new traits or distinctions were introduced so that in the end, a schema of seven OPR traits was produced:

Open identities: Authors and reviewers are aware of each other’s identity.

Open reports: Review reports are published alongside the relevant article.

Open participation: The wider community are able to contribute to the review process.

Open interaction: Direct reciprocal discussion between author(s) and reviewers, and/or between reviewers, is allowed and encouraged.

Open pre-review manuscripts: Manuscripts are made immediately available (e.g., via pre-print servers like arXiv) in advance of any formal peer review procedures.

Open final-version commenting: Review or commenting on final “version of record” publications.

Open platforms (“decoupled review”): Review is facilitated by a different organizational entity than the venue of publication.

The core traits are easily identified, with just three covering more than 99% of all definitions: Open identities combined with open reports cover 116 (95.1%) of all records. Adding open participation leads to a coverage of 121 (99.2%) records overall. As seen in Figure 5 , open identities is by far the most prevalent trait, present in 90.1% (n=110) of definitions. Open reports is also present in the majority of definitions (59.0%, n=72), while open participation is part of around a third. Open pre-review manuscripts (23.8%, n=29) and open interaction (20.5%, n=25) are also a fairly prevalent part of definitions. The outliers are open final-version commenting (4.9%) and open platforms (1.6%).
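If each definition is represented as the set of traits it includes, prevalence and cumulative coverage of the kind reported above are straightforward to compute. A minimal sketch with an invented five-definition toy corpus (not the actual 122-definition dataset):

```python
# Toy corpus: each definition reduced to its set of OPR traits.
definitions = [
    {"identities"},
    {"identities", "reports"},
    {"identities", "reports", "participation"},
    {"participation", "interaction"},
    {"pre-review manuscripts"},
]

def prevalence(trait):
    """Share of definitions that include the given trait."""
    return sum(trait in d for d in definitions) / len(definitions)

def coverage(traits):
    """Share of definitions containing at least one of the given traits."""
    hits = sum(bool(d & set(traits)) for d in definitions)
    return hits / len(definitions)
```

For this toy data, `prevalence("identities")` is 3/5 and adding further traits to `coverage()` can only increase the share of definitions covered, mirroring the cumulative figures in the text.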

Figure 5. Distribution of OPR traits amongst definitions.

If we break down these traits by the disciplinary focus of the definition source, we observe some interesting differences between STEM- and SSH-focused sources ( Figure 6 ). Of those sources whose definitions were identified to be primarily concerned with peer review of SSH-subject material, we observe that in comparison to STEM, there is less emphasis on open identities (present in 84.6% of SSH-focused definitions compared to 93.7% of STEM-focused definitions) and open reports (38.5% SSH vs. 61.9% STEM). Three traits were much more likely to be included in SSH definitions of OPR, however: open participation (53.8% SSH vs. 25.4% STEM), open interaction (30.8% SSH vs. 20.6% STEM), and open final-version commenting (15.4% SSH vs. 3.2% STEM). The other traits, open pre-review manuscripts and open platforms, were similar across both groups. Although these differences seem to hint at a slightly different understanding of OPR between the disciplines, we should be careful in generalizing too strongly here: firstly, because splitting scholarship into these two broad groups risks levelling the wealth of disciplinary specificity within these categories; secondly, because the number of SSH-specific sources (13) was small.

Figure 6. Prevalence of traits (as percentage) within definitions by disciplinary focus of definition.

The various ways these traits are configured within definitions can be seen in Figure 7 . Quantifying definitions in this way allows us to accurately portray exactly how ambiguously the phrase “open peer review” has been used thus far, for the literature offers a total of 22 distinct configurations of seven traits, effectively meaning that there are 22 different definitions of OPR in the literature examined here.

Figure 7. Unique configurations of OPR traits within definitions.

The distribution of traits shows two very popular configurations and a variety of rarer ones, with the most popular configuration (open identities) accounting for one third (33.6%, n=41) and the second-most popular configuration (open identities, open reports) accounting for almost a quarter (23.8%, n=29) of all definitions. There then follows a “long-tail” of less-frequently found configurations, with more than half of all configurations being unique to a single definition.
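Counting distinct configurations amounts to hashing each definition's trait set and tallying. A small sketch, again with invented toy data standing in for the corpus:

```python
from collections import Counter

# Toy trait sets standing in for the corpus definitions.
definitions = [
    frozenset({"identities"}),
    frozenset({"identities"}),
    frozenset({"identities", "reports"}),
    frozenset({"identities", "reports", "participation"}),
]

configs = Counter(definitions)   # each distinct trait set is one configuration
n_distinct = len(configs)        # 3 distinct configurations in this toy data
top_config, top_count = configs.most_common(1)[0]
```

Applied to the real dataset, the same tally yields the 22 distinct configurations and the two dominant ones reported above.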

Discussion: The traits of open peer review

I next offer a detailed analysis of each of these traits, detailing the issues they aim to resolve and the evidence to support their effectiveness.

Open identities

Open identity peer review, also known as signed peer review ( Ford, 2013 ; Nobarany & Booth, 2015 ) and “unblinded review” ( Monsen & Van Horn, 2007 ), is review where authors and reviewers are aware of each other’s identities. Traditional peer review operates as either “single-blind”, where authors do not know reviewers’ identities, or “double-blind”, where both authors and reviewers remain anonymous. Double-blind reviewing is more common in the Arts, Humanities and Social Sciences than it is in STEM (science, technology, engineering and medicine) subjects, but in all areas single-blind review is by far the most common model ( Walker & Rocha da Silva, 2015 ). A main reason for maintaining author anonymity is that it is assumed to tackle possible publication biases against authors with traditionally feminine names, from less prestigious institutions or non-English speaking regions ( Budden et al. , 2008 ; Ross et al. , 2006 ). Reviewer anonymity, meanwhile, is presumed to protect reviewers from undue influence, allowing them to give candid feedback without fear of possible reprisals from aggrieved authors. Various studies have failed to show that such measures increase review quality, however ( Fisher et al. , 1994 ; Godlee et al. , 1998 ; Justice et al. , 1998 ; McNutt et al. , 1990 ; van Rooyen et al. , 1999 ). As Godlee and her colleagues have said, “Neither blinding reviewers to the authors and origin of the paper nor requiring them to sign their reports had any effect on rate of detection of errors. Such measures are unlikely to improve the quality of peer review reports” ( Godlee et al. , 1998 ). Moreover, factors such as close disciplinary communities and internet search capabilities, mean that author anonymity is only partially effective, with reviewers shown to be able to identify authors in between 26 and 46 percent of cases ( Fisher et al. , 1994 ; Godlee et al. , 1998 ).

Proponents of open identity peer review argue that it will enhance accountability, further enable credit for peer reviewers, and simply make the system fairer: “most importantly, it seems unjust that authors should be “judged” by reviewers hiding behind anonymity” ( van Rooyen et al. , 1999 ). Open identity peer review is argued, moreover, to potentially increase review quality, as it is theorised that reviewers will be more highly motivated and invest more care in their reviews if their names are attached to them. Finally, a reviewer for this paper advises that “proponents of open identity review in medicine would also point out that it makes conflicts of interest much more apparent and subject to scrutiny” ( Bloom, 2017 ). Opponents counter this by arguing that signing will lead to poorer reviews, as reviewers temper their true opinions to avoid causing offence. To date, studies have failed to show any great effect in either direction ( McNutt et al. , 1990 ; van Rooyen et al. , 1999 ; van Rooyen et al. , 2010 ). However, since these studies derive from only one disciplinary area (medicine), the results cannot be taken as representative and hence further research is undoubtedly required.

Open reports

Open reports peer review is where review reports (either full reports or summaries) are published alongside the relevant article. Often, although not in all cases (e.g., EMBO reports, http://embor.embopress.org ), reviewer names are published alongside the reports. The main benefits of this measure lie in making currently invisible but potentially useful scholarly information available for re-use. There is increased transparency and accountability that comes with being able to examine normally behind-the-scenes discussions and processes of improvement and assessment, and a potential to further incentivize peer reviewers by making their peer review work a more visible part of their scholarly activities (thus enabling reputational credit).

Reviewing is hard work. Research Information Network reported in 2008 that a single peer review takes an average of four hours, at an estimated total annual global cost of around £1.9 billion ( Research Information Network, 2008 ). Once an article is published, however, these reviews usually serve no further purpose than to reside in publishers’ long-term archives. Yet those reviews contain information that remains potentially relevant and useful in the here-and-now. Often, works are accepted despite the lingering reservations of reviewers. Published reports can enable readers to consider these criticisms themselves, and “have a chance to examine and appraise this process of ‘creative disagreement’ and form their own opinions” ( Peters & Ceci, 1982 ). Making reviews public in this way also adds another layer of quality assurance, as the reviews are open to the scrutiny of the wider scientific community. It could also increase review quality, as the thought of their words being made publicly available could motivate reviewers to be more thorough in their review activities. Moreover, publishing reports also aims at raising the recognition and reward of the work of peer reviewers. Adding review activities to the reviewer’s professional record is common practice; author identification systems currently also add mechanisms to host such information (e.g. via ORCID) ( Hanson et al. , 2016 ). Finally, open reports give young researchers a guide (to tone, length, the formulation of criticisms) to help them as they begin to do peer review themselves.

The evidence base against which to judge such arguments is not great enough to enable strong conclusions, however. Van Rooyen and her colleagues found that open reports correlate with higher refusal rates amongst potential reviewers, as well as an increase in time taken to write reviews, but no concomitant effect on review quality ( van Rooyen et al. , 2010 ). Nicholson and Alperin’s small survey, however, found generally positive attitudes: “researchers … believe that open review would generally improve reviews, and that peer reviews should count for career advancement” ( Nicholson & Alperin, 2016 ).

Open participation

Open participation peer review, also known as “crowdsourced peer review” ( Ford, 2013 ; Ford, 2015 ), “community/public review” ( Walker & Rocha da Silva, 2015 ) and “public peer review” ( Bornmann et al. , 2012 ), allows the wider community to contribute to the review process. Whereas in traditional peer review editors identify and invite specific parties (peers) to review, open participation processes invite interested members of the scholarly community to participate in the review process, either by contributing full, structured reviews or shorter comments. According to Fitzpatrick & Santo (2012) , the rationale for opening up the pool of reviewers in this way is that “fields can often become self-replicating, as they limit the input that more horizontally-organized peer groups – such as scholars from related disciplines and interdisciplines, and even members of more broadly understood publics – might play in the development of scholarly thought” ( Fitzpatrick & Santo, 2012 ).

In practice, it may be that comments are open to anybody (anonymous or registered), or some credentials might first be required (e.g., ScienceOpen requires an ORCID profile with at least five published articles). Open participation is often used as a complement to a parallel process of solicited peer review. It aims to resolve possible conflicts associated with editorial selection of reviewers (e.g. biases, closed networks, elitism) and possibly improve the reliability of peer review by increasing the number of reviewers ( Bornmann et al. , 2012 ). Reviewers can come from the wider research community, as well as those traditionally under-represented in scientific assessment, including representatives from industry or members of special-interest groups, for example patients in the case of medical journals ( Ware, 2011 ). This has the potential to open the pool of reviewers beyond those identified by editors to include all potentially interested reviewers (including those from outside academia), and hence increase the number of reviewers for each publication (though in practice this is unlikely). Evidence suggests this practice could help increase the accuracy of peer review. For example, Herron (2012) produced a mathematical model of the peer review process which showed that “the accuracy of public reader-reviewers can surpass that of a small group of expert reviewers if the group of public reviewers is of sufficient size”, although only if the number of reader-reviewers exceeded 50.
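Herron's actual model is more involved, but the core intuition – that enough moderately accurate reader-reviewers can, by majority, outperform a small expert panel – can be illustrated with a simple Condorcet-style calculation. The accuracy figures (0.85, 0.60) and panel sizes below are illustrative assumptions, not Herron's parameters:

```python
from math import comb

def majority_accuracy(p, n):
    """Probability that a majority of n independent reviewers,
    each correct with probability p, reaches the right verdict (n odd)."""
    assert n % 2 == 1
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

panel = majority_accuracy(0.85, 3)    # a small panel of expert reviewers
crowd = majority_accuracy(0.60, 151)  # many less-expert reader-reviewers
# With a large enough crowd, majority accuracy overtakes the panel's.
```

Under these assumptions the 151-strong crowd's majority verdict is more accurate than the three-expert panel's, while a crowd of only 51 is not, echoing the threshold effect Herron describes.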

Criticisms of open participation routinely focus on questions about reviewers’ qualifications to comment and the incentives for doing so. Given that disciplines are subject to increasingly narrow specialization, especially in the sciences ( Casadevall & Fang, 2014 ), it can be objected that those who lack intimate knowledge of the particular methods and objects of that field will simply be unable to properly evaluate findings. As Stevan Harnad has said: “it is not clear whether the self-appointed commentators will be qualified specialists (or how that is to be ascertained). The expert population in any given speciality is a scarce resource, already overharvested by classical peer review, so one wonders who would have the time or inclination to add journeyman commentary services to this load on their own initiative” ( Harnad, 2000 ). Here, we might reflect on whether this is one reason why open participation seems to be a more central part of conceptions of OPR in the social sciences and humanities than in STEM subjects. As we saw above, open participation is actually the second most popular trait in definitions stemming from sources with an SSH focus, appearing in more than half of those definitions, as compared to just a quarter of definitions that focused specifically on STEM subjects (although, again, we must remind ourselves that the small number of SSH definitions means we should not draw overly strong conclusions based on this finding). As Fitzpatrick and Santo argue, in the humanities, peer review “often focuses on originality, creativity, depth and cogency of argument, and the ability to develop and communicate new connections across and additions to existing texts and ideas”. This is contrasted to the sciences, where peer review is more concretely focused on “verification of results or validation of methodologies” ( Fitzpatrick & Santo, 2012 ).
Assessment of narrative cogency and the interconnection of ideas are more transferable across domains than are knowledge of discipline-specific methods and tools. To be sure, both play a role in all scholarship, but since the former play a larger role in SSH, this may be a motivating factor in increased interest in open participation in those disciplines.

Another issue for open participation is that difficulties have been reported in motivating self-selecting commentators to take part and deliver useful critique. Nature , for example, ran an experiment from June to December 2006 inviting submitting authors to take part in an experiment where open participation would be used as a complement to a parallel process of solicited peer reviews. Nature judged the trial to have been unsuccessful due to the small number of authors wishing to take part (just 5% of submitting authors), the small number of overall comments (almost half of articles received no comments) and the insubstantial nature of most of the comments that were received ( Fitzpatrick, 2011 ). At the open access journal Atmospheric Chemistry and Physics (ACP), which publishes pre-review discussion papers for community comments, only about one in five papers is commented upon ( Pöschl, 2012 ). Bornmann et al. (2012) conducted a comparative content analysis of the ACP’s community comments and formal referee reviews and concluded that the latter – tending to focus more on formal qualities, conclusions and potential impact – better supported the selection and improvement of manuscripts. This all suggests that although open participation might be a worthwhile complement to traditional, invited peer review, it is unlikely to be able to fully replace it.

Open interaction

Open interaction peer review allows and encourages direct reciprocal discussion between reviewers, and/or between author(s) and reviewers. In traditional peer review, reviewers and authors correspond only with editors. Reviewers have no contact with other reviewers, and authors usually have no opportunity to directly question or respond to reviewers’ comments. Allowing interaction amongst reviewers, or between authors and reviewers, is another way to “open up” the review process, enabling editors and reviewers to work with authors to improve their manuscript. The motivation for doing so, according to Armstrong (1982) , is to “improve communication. Referees and authors could discuss difficult issues to find ways to improve a paper, rather than dismissing it”. In the words of Kathleen Fitzpatrick (2012), such interaction can foster “a conversational, collaborative discourse that not only harkens back to the humanities’ long investment in critical dialogue as essential to intellectual labor, but also models a forward-looking approach to scholarly production in a networked era.”

Some journals enable pre-publication interaction between reviewers as standard ( Hames, 2014 ). The EMBO Journal , for example, enables “cross-peer review,” where referees are “invited to comment on each other’s reports, before the editor makes a decision, ensuring a balanced review process” ( EMBO Journal, 2016 ). At eLife , reviewers and editor engage in an “online consultation session” where they come to a mutual decision before the editor compiles a single peer review summary letter for the author to give them a single, non-contradictory roadmap for revisions ( Schekman et al. , 2013 ). The publisher Frontiers has gone a step further, including an interactive collaboration stage that “unites authors, reviewers and the Associate Editor – and if need be the Specialty Chief Editor – in a direct online dialogue, enabling quick iterations and facilitating consensus” ( Frontiers, 2016 ).

Perhaps even more so than other areas studied here, evidence to judge the effectiveness of interactive review is scarce. Based on anecdotal evidence, Walker & Rocha da Silva (2015) advise that “[r]eports from participants are generally but not universally positive”. To the knowledge of the author, the only experimental study that has specifically examined interaction among reviewers or between reviewers and authors is that of Jeffrey Leek and his colleagues, who performed a laboratory study of open and closed peer review based on an online game and found that “improved cooperation does in fact lead to improved reviewing accuracy. These results suggest that in this era of increasing competition for publication and grants, cooperation is vital for accurate evaluation of scientific research” ( Leek et al. , 2011 ). Such results are encouraging, but hardly conclusive. Hence, there remains much scope for further research to determine the impact of cooperation on the efficacy and cost of the review process.

Open pre-review manuscripts

Open pre-review manuscripts are manuscripts that are made immediately openly accessible (via the internet) in advance of, or in synchrony with, any formal peer review procedures. Subject-specific “preprint servers” like arXiv.org and bioRxiv.org , institutional repositories, catch-all repositories like Zenodo or Figshare and some publisher-hosted repositories (like PeerJ Preprints ) allow authors to short-cut the traditional publication process and make their manuscripts immediately available to everyone. This can be used as a complement to a more traditional publication process, with comments invited on preprints and then incorporated into redrafting as the manuscript goes through traditional peer review with a journal. Alternatively, services which overlay peer-review functionalities on repositories can produce functional publication platforms at reduced cost ( Boldt, 2011 ; Perakakis et al. , 2010 ). The mathematics journal Discrete Analysis , for example, is an overlay journal whose primary content is hosted on arXiv ( Day, 2015 ). The recently released Open Peer Review Module for repositories, developed by Open Scholar in association with OpenAIRE, is an open source software plug-in which adds overlay peer review functionalities to repositories using the DSpace software ( OpenAIRE, 2016 ). Another innovative model along these lines is that of ScienceOpen , which ingests article metadata from preprint servers and contextualizes it by adding altmetrics and other relational information, before offering authors peer review.

In other cases, manuscripts are submitted to publishers in the usual way but made immediately available online (usually following some rapid preliminary review or “sanity check”) before the start of the peer review process. This approach was pioneered with the 1997 launch of the online journal Electronic Transactions in Artificial Intelligence (ETAI), where a two-stage review process was used. First, manuscripts were made available online for interactive community discussion, before later being subject to standard anonymous peer review. The journal stopped publishing in 2002 ( Sandewall, 2012 ). Atmospheric Chemistry and Physics uses a similar system of multi-stage peer review, with manuscripts being made immediately available as “discussion papers” for community comments and peer review ( Pöschl, 2012 ). Other prominent examples are F1000Research and the Semantic Web Journal .

The benefits to be gained from open pre-review manuscripts are that researchers can assert their priority in reporting findings – they needn’t wait for the sometimes seemingly endless peer review and publishing process, during which they might fear being scooped. Moreover, getting research out earlier increases its visibility, enables open participation in peer review (where commentary is open to all), and perhaps even, according to Pöschl (2012) , increases the quality of initial manuscript submissions. Finally, making manuscripts openly available in advance of review allows comments to be posted as they are received, either from invited reviewers or the wider community, and enables readers to follow the process of peer review in real time.

Open final-version commenting

Open final-version commenting is review or commenting on final “version of record” publications. If the purpose of peer review is to assist in the selection and improvement of manuscripts for publication, then it seems illogical to suggest that peer review can continue once the final version-of-record is made public. Nonetheless, in a literal sense, even the declared fixed version-of-record continues to undergo a process of improvement (occasionally) and selection (perpetually).

The internet has hugely expanded the range of channels through which readers can offer feedback on scholarly works. Where before only formal routes like letters to the journal or commentary articles offered readers a voice, now a multitude of channels exist. Journals are increasingly offering their own commentary sections. Walker & Rocha da Silva (2015) found that of 53 publishing venues reviewed, 24 provided facilities to enable user comments on published articles – although these were typically not heavily used. Researchers seem to see the worth of such functionalities, with almost half of respondents to a 2009 survey believing supplementing peer review with some form of post-publication commentary to be beneficial ( Mulligan et al. , 2013 ). But users can “publish” their thoughts anywhere on the Web – via academic social networks like Mendeley , ResearchGate and Academia.edu , via Twitter , or on their own blogs. In this sense, peer review can be decoupled not only from the journal, but also from any particular platform. The reputation of a piece of work continues to evolve as long as it remains the subject of discussion. Considering final-version commenting an active part of an ongoing, perpetual peer review in this wider sense might thus encourage an adjustment in our conception of peer review, away from seeing it as a distinct process that leads up to publication and towards seeing it as a continuous process of appraisal and improvement. Improvements based on feedback happen most obviously in the case of so-called “living” publications, like the Living Reviews group of three disciplinary journals in the fields of relativity, solar physics and computational astrophysics, which publish invited review articles whose authors regularly update them to incorporate the latest developments in the field. And even where the published version is anticipated to be the final version, it remains open to future retraction or correction.
Such changes are often fueled by social media, as in the 2010 case of #arseniclife, where social media critique of flaws in the methodology of a paper claiming to show a bacterium capable of growing on arsenic resulted in refutations being published in Science. The Retraction Watch blog is dedicated to publicizing such cases.

An important platform in this regard has been PubPeer , which proclaims itself a “post-publication peer review platform”. When its users swarmed to critique a Nature paper on STAP (Stimulus-Triggered Acquisition of Pluripotency) cells, PubPeer argued that its “post-publication peer review easily outperformed even the most careful reviewing in the best journal. The papers’ comment threads on PubPeer have attracted some 40000 viewers. It’s hardly surprising they caught issues that three overworked referees and a couple of editors did not. Science is now able to self-correct instantly. Post-publication peer review is here to stay” ( PubPeer, 2014 ).

Open platforms (“decoupled review”)

Open platforms peer review is review facilitated by a different organizational entity than the venue of publication. Recent years have seen the emergence of a group of dedicated platforms which aim to augment the traditional publishing ecosystem by de-coupling review functionalities from journals. Services like RUBRIQ and Peerage of Science offer “portable” or “independent” peer review. A similar service, Axios Review , operated from 2013 to 2017. Each platform invites authors to submit manuscripts directly to them, then organises review amongst their own community of reviewers and returns review reports. In the case of RUBRIQ and Peerage of Science, participating journals then have access to these scores and manuscripts and so can contact authors with a publishing offer or to suggest submission. Axios, meanwhile, directly forwarded the manuscript, along with reviews and reviewer identities, to the author’s preferred target journal. The models vary in their details – RUBRIQ, for example, pays its reviewers, whereas Axios operated on a community model where reviewers earned discounts on having their own work reviewed – but all aim in their ways to reduce inefficiencies in the publication process, especially the problem of duplication of effort. Whereas in traditional peer review a manuscript could undergo peer review at several journals, as it is submitted and rejected, then submitted elsewhere, such services need just one set of reviews which can be carried over to multiple journals until a manuscript finds a home (hence “portable” review).

Other decoupled platforms aim at solving different problems. Publons seeks to address the problem of incentive in peer review by turning peer review into measurable research outputs. Publons collects information about peer review from reviewers and publishers to produce reviewer profiles which detail verified peer review contributions that researchers can add to their CVs. Overlay journals like Discrete Mathematics, discussed above, are another example of open platforms. Peter Suber (quoted in Cassella & Calvi, 2010) defines the overlay journal as “An open-access journal that takes submissions from the preprints deposited at an archive (perhaps at the author’s initiative), and subjects them to peer review…. Because an overlay journal doesn’t have its own apparatus for disseminating accepted papers, but uses the pre-existing system of interoperable archives, it is a minimalist journal that only performs peer review.” Finally, there are the many venues through which readers can now comment on already-published works (see also “open final-version commenting” above), including blogs and social networking sites, as well as dedicated platforms such as PubPeer.

Which problems with traditional peer review do the various OPR traits address?

I began by sketching out various problems with traditional peer review and noted that OPR, in its various incarnations, has been proposed as a solution to many of these problems, but that no individual trait addresses all of them, and that their aims may sometimes be in conflict. Which traits address which of the problems identified above? Which might actually exacerbate them? Based on the foregoing, I here present this summary:

Unreliability and inconsistency: Open identities and open reports are theorized to lead to better reviews, as the thought of having their name publicly connected to a work or seeing their review published encourages reviewers to be more thorough. There is at present too little evidence to judge if this is actually so, however. Open participation and open final-version commenting are theorized to possibly improve the reliability of peer review by increasing the number of potential reviewers, especially from different disciplinary backgrounds. In practice, open participation struggles to attract reviewers in most cases and thus is probably not a sustainable replacement for invited peer review, although it is perhaps a worthwhile supplement to it. Some evidence suggests that open interaction between reviewers and authors could lead to improved reviewing accuracy.

Delay and expense: Open pre-review manuscripts sharply reduce the time before research is first publicly available and may increase the overall quality of initial submissions. Open platforms can help overcome the “waterfall” problem, where individual articles go through multiple cycles of review and rejection at different journals. In principle, open participation could reduce the need for editorial mediation in finding reviewers, but in practice any reduction of costs is questionable, as open participation can fail to attract reviewers and in any case, editorial mediation will continue to be necessary to facilitate discussion and arbitrate disputes. Open identities and open reports might actually exacerbate problems of delay and expense, as it seems invited reviewers are currently less inclined to review under such circumstances. Finally, open interaction – by necessitating more back and forth between reviewers and authors, and more editorial mediation – might lead to longer reviewing times.

Lack of accountability and risks of subversion: Open identities and reports can increase accountability through increased transparency and by making any conflicts of interest more immediately apparent to authors and future readers. Open participation could overcome problems associated with editorial selection of reviewers (e.g. biases, closed-networks, elitism). However, in opening up participation to the wider community, it might actually increase engagement by those with conflicts of interest. Where anonymity is possible, this may be particularly problematic. Moreover, lack of anonymity for reviewers in open identities review might subvert the process by discouraging reviewers from making strong criticisms, especially against higher-status colleagues.

Social and publication biases: Open reports add another layer of quality assurance, allowing the wider community to scrutinize reviews and so examine decision-making processes. However, open identities removes the anonymity conditions for reviewers (single-blind) or authors and reviewers (double-blind) which are traditionally in place to counteract social biases (although there is not strong evidence that such anonymity has been effective).

Lack of incentives: Open reports linked to open identities enable higher visibility for peer review activities, allowing review work to be cited in other publications and in career development activities linked to promotion and tenure. Open participation could in principle increase incentives to peer review by enabling reviewers to select the works they consider themselves qualified to judge; in practice, however, experience to date suggests that reviewers are less likely to review under this condition.

Wastefulness: Open reports make currently invisible but potentially useful scholarly information available for re-use, as well as providing young researchers a guide (to tone, length, the formulation of criticisms) to help them as they begin to do peer review themselves.

This synthesis allows us to draw the following conclusions: (1) the individual traits of OPR can be argued to address many of the problems with traditional peer review, but (2) differing traits address differing problems in differing ways, (3) no trait addresses all problems, and in fact (4) individual traits may actually exacerbate problems in some areas. Assessing this already complex landscape is made yet more problematic by the fact that (5) there is often little evidence to support or challenge many of these claims. There is hence a pressing need for more research to empirically evaluate the efficacy of differing traits in resolving these issues.

Open Science as the unifying theme for the traits of OPR

The traits that we have identified to be part of definitions of OPR are disparate in their aims and implementation. Is there any common thread between them? I would argue yes: they each aim to bring peer review more into line with the emergent agenda of Open Science. To advance this argument, I’ll next briefly describe this movement and its underlying aims, and then relate each OPR trait to this agenda.

Open Science is the name given to a broad movement to reshape scholarly communication. As the English word “science” traditionally excludes the humanities and social sciences, the phenomenon is often referred to by more explicitly inclusive terms like “open scholarship” or “open research”. As “Open Science” is the more common term, I shall use it here, but it should be read as referring to research from all academic disciplines.

Open Science encompasses a variety of practices, usually including areas like open access to publications, open research data, open source software/tools, open workflows, citizen science, open educational resources, and alternative methods for research evaluation including open peer review (Pontika et al., 2015). The aims and assumptions underlying the push to implement these various practices have been analysed by Fecher & Friesike (2013), whose analysis of the literature found five broad concerns, or “schools of thought” (Figure 8). These are:

Figure 8. Five schools of thought in Open Science (CC BY-NC, Fecher & Friesike, 2013).

Democratic school: Believing that there is an unequal distribution of access to knowledge, this area is concerned with making scholarly knowledge (including publications and data) available freely for all.

Pragmatic school: Following the principle that the creation of knowledge is made more efficient through collaboration and strengthened through critique, this area seeks to harness network effects by connecting scholars and making scholarly methods transparent.

Infrastructure school: This thread is motivated by the assumption that efficient research requires readily available platforms, tools and services for dissemination and collaboration.

Public school: Based on the recognition that true societal impact requires societal engagement in research and readily understandable communication of scientific results, this area seeks to bring the public to collaborate in research through citizen science, and make scholarship more readily understandable through lay summaries, blogging and other less formal communicative methods.

Measurement school: Motivated by the acknowledgement that traditional metrics for measuring scientific impact have proven problematic (by being too heavily focused on publications, often only at the journal-level, for instance), this strand seeks “alternative metrics” which can make use of the new possibilities of digitally networked tools to track and measure the impact of scholarship through formerly invisible activities.

The traits of OPR, in differing yet overlapping ways, each aim to bring greater transparency, accountability, inclusivity and/or efficiency to the restricted model of traditional peer review. The traits of OPR can be fitted into Fecher & Friesike’s Open Science schema as follows:

Democratic school: Open reports make a further class of scholarly products – the review reports themselves – available to all.

Pragmatic school: Open identities foster increased accountability by linking scholars’ names to their judgements; open reports increase transparency by opening review reports to readers; open interaction fosters increased collaboration between authors, reviewers and editors in the process of evaluation and revision of scholarship; open pre-review manuscripts enable the earlier dissemination of results.

Infrastructure school: Open platforms can make peer review more efficient by decoupling it from journals.

Public school: Open participation and final-version commenting bring greater inclusivity to peer review by expanding the potential pool of reviewers, including to those outside traditional research actors.

Measurement school: Open identities, open reports and open platforms (e.g., Publons) enable peer review activities to be more clearly monitored and taken into account in impact-measurement activities.

We have seen that the definition of “open peer review” is contested ground. My aim here has been to provide some clarity as to what is being referred to when this term is used. This is especially important since interest in the term (measured via references in the literature) is growing rapidly. By analyzing 122 separate definitions from the literature I have identified seven different traits of OPR, which all aim to resolve differing problems with traditional peer review. Amongst the corpus of definitions there are 22 unique configurations of these traits, meaning 22 distinct definitions of OPR in the reviewed literature. Across all definitions, the core elements are open identities and open reports, with one or both present in over 95% of the definitions examined. Among the other elements, open participation is the next most common, and should perhaps be considered a core trait in the social sciences and humanities (SSH). Further secondary elements are open interaction and open pre-review manuscripts. Fringe elements include open final-version commenting and open platforms.

Given that OPR is such a contested concept, in my view the only sensible way forward is to acknowledge the ambiguity of this term, accepting that it is used as an umbrella concept for a diverse array of peer review innovations. Although it could be argued that merely accepting the status quo in this way does not help resolve possible confusion regarding usage, I would argue that quantifying the ambiguity of usage and mapping the distinct traits enables future discussion to start from a firmer basis that (1) acknowledges that people often mean different things when they use this term, and (2) clarifies in advance exactly which OPR traits are under discussion.

Being clear about these distinct traits will enable us to treat the ambiguity of OPR as a feature and not a bug. The large number of possible configurations of options presents a tool-kit for differing communities to construct open peer review systems that reflect their own needs, preferences and goals. The finding that there seems to be a difference in interpretation between disciplines (for example, that open participation seems more central to conceptions of OPR in SSH than in STEM) reinforces this view. Moreover, disambiguating these traits will enable more focused analysis of the extent to which they are actually effective in countering the problems they are claimed to address. This is particularly urgent because, as we have seen, there is often little evidence to support or refute many of these claims.

Based upon this analysis I offer the following definition:

OPR definition: Open peer review is an umbrella term for a number of overlapping ways that peer review models can be adapted in line with the aims of Open Science, including making reviewer and author identities open, publishing review reports and enabling greater participation in the peer review process. The full list of traits is:

Open identities: Authors and reviewers are aware of each other’s identities.

Open reports: Review reports are published alongside the relevant article.

Open participation: The wider community is able to contribute to the review process.

Open interaction: Direct reciprocal discussion between author(s) and reviewers, and/or between reviewers, is allowed and encouraged.

Open pre-review manuscripts: Manuscripts are made immediately available in advance of any formal peer review procedures.

Open final-version commenting: Review or commenting on final “version of record” publications.

Open platforms (“decoupled review”): Review is facilitated by a different organizational entity than the venue of publication.

Data availability

Dataset including full data files used for analysis in this review: http://doi.org/10.5281/zenodo.438024 (Ross-Hellauer, 2017).

1 This quote was found on the P2P Foundation Wiki (http://wiki.p2pfoundation.net/Open_Peer_Review, accessed 18th July 2016). Its provenance is uncertain, even to Suber himself, who recently advised in personal correspondence (19th August 2016): “I might have said it in an email (as noted). But I can’t confirm that, since all my emails from before 2009 are on an old computer in a different city. It sounds like something I could have said in 2007. If you want to use it and attribute it to me, please feel free to note my own uncertainty!”

Competing interests

This work was conducted as part of the OpenAIRE2020 project, an EC-funded initiative to implement and monitor Open Access and Open Science policies in Europe and beyond.

Grant information

This work is funded by the European Commission H2020 project OpenAIRE2020 (Grant agreement: 643410, Call: H2020-EINFRA-2014-1).

The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Acknowledgements

The author thanks Birgit Schmidt (University of Goettingen), Arvid Deppe (University of Kassel), Jon Tennant (Imperial College London, ScienceOpen), Edit Gorogh (University of Goettingen) and Alessia Bardi (Istituto di Scienza e Tecnologie dell'Informazione) for discussion and comments that led to the improvement of this text. Birgit Schmidt created Figure 1.

Supplementary material

Supplementary file 1: PRISMA checklist. The checklist was completed with the original copy of the manuscript.


Supplementary file 2: PRISMA flowchart showing the number of records identified, included and excluded.

  •   Armstrong JS: Barriers to Scientific Contributions: The Authors Formula. Behav Brain Sci. Cambridge University Press (CUP). 1982; 5 (02): 197–199. Publisher Full Text
  •   Armstrong JS: Peer Review for Journals: Evidence on Quality Control Fairness, and Innovation. Sci Eng Ethics. Springer Nature. 1997; 3 (1): 63–84. Publisher Full Text
  •   Bardy AH: Bias in reporting clinical trials. Br J Clin Pharmacol. Wiley-Blackwell. 1998; 46 (2): 147–50. PubMed Abstract | Publisher Full Text | Free Full Text
  •   Bloom T: Referee Report For: What is open peer review? A systematic review [version 1; referees: 1 approved, 3 approved with reservations]. F1000Res. 2017; 6 : 588. Publisher Full Text
  •   Boldt A: Extending ArXiv.Org to Achieve Open Peer Review and Publishing. J Scholarly Publ. University of Toronto Press Inc. (UTPress), 2011; 42 (2): 238–42. Publisher Full Text
  •   Bornmann L, Herich H, Joos H, et al. : In Public Peer Review of Submitted Manuscripts How Do Reviewer Comments Differ from Comments Written by Interested Members of the Scientific Community? A Content Analysis of Comments Written for Atmospheric Chemistry and Physics . Scientometrics. Springer Nature. 2012; 93 (3): 915–29. Publisher Full Text
  •   Budden AE, Tregenza T, Aarsen LW, et al. : Double-blind review favours increased representation of female authors. Trends Ecol Evol. 2008; 23 (1): 4–6. PubMed Abstract | Publisher Full Text
  •   Campanario JM: Peer Review for Journals as It Stands Today-Part 1. Sci Commun. SAGE Publications. 1998; 19 (3): 181–211. Publisher Full Text
  •   Casadevall A, Fang FC: Specialized science. Infect Immun. 2014; 82 (4): 1355–1360. PubMed Abstract | Publisher Full Text | Free Full Text
  •   Cassella M, Calvi L: New Journal Models and Publishing Perspectives in the Evolving Digital Environment. IFLA Journal. SAGE Publications. 2010; 36 (1): 7–15. Publisher Full Text
  •   Chubin DE, Hackett EJ: Peerless Science: Peer Review and US Science Policy. Suny Press, 1990. Reference Source
  •   Cronin B: Vernacular and Vehicular Language. J Am Soc Inf Sci Technol. Wiley-Blackwell. 2009; 60 (3): 433. Publisher Full Text
  •   Dall’Aglio P: Peer Review and Journal Models. ArXiv:Physics/0608307, 2006. Reference Source
  •   Daniel HD: Guardians of Science. Wiley-Blackwell, 1993. Publisher Full Text
  •   Day C: Meet the Overlay Journal. Phys Today. AIP Publishing, 2015. Publisher Full Text
  •   Dickersin K, Min YI, Meinert CL: Factors influencing publication of research results. Follow-up of applications submitted to two institutional review boards. JAMA. American Medical Association (AMA). 1992; 267 (3): 374–8. PubMed Abstract | Publisher Full Text
  •   EMBO Journal: About | The EMBO Journal [WWW Document]. 2016; (accessed 8.24.16). Reference Source
  •   Ernst E, Kienbacher T: Chauvinism. Nature. Springer Nature. 1991; 352 (6336): 560. Publisher Full Text
  •   Fanelli D: Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data. Edited by Enrico Scalas. PLoS One. Public Library of Science (PLoS). 2010; 5 (4): e10271. PubMed Abstract | Publisher Full Text | Free Full Text
  •   Fang FC, Casadevall A: Retracted Science and the Retraction Index. Infect Immun. American Society for Microbiology. 2011; 79 (10): 3855–59. PubMed Abstract | Publisher Full Text | Free Full Text
  •   Fang FC, Steen RG, Casadevall A: Misconduct Accounts for the Majority of Retracted Scientific Publications. Proc Natl Acad Sci U S A. Proceedings of the National Academy of Sciences. 2012; 109 (42): 17028–33. PubMed Abstract | Publisher Full Text | Free Full Text
  •   Fecher B, Friesike S: Open Science: One Term, Five Schools of Thought. In: Bartling, S. and Friesike (Eds.), Opening Science. New York, NY: Springer, 2013; 17–47. Publisher Full Text
  •   Fisher M, Friedman SB, Strauss B: The Effects of Blinding on Acceptance of Research Papers by Peer Review. JAMA. American Medical Association (AMA). 1994; 272 (2): 143–46. PubMed Abstract | Publisher Full Text
  •   Fitzpatrick K: Planned Obsolescence. New York, NY: NYU Press, 2011. Reference Source
  •   Fitzpatrick K, Santo A: Open Review, A Study of Contexts and Practices. Report. 2012. Reference Source
  •   Ford E: Defining and Characterizing Open Peer Review: A Review of the Literature. J Scholarly Publ. University of Toronto Press Inc. (UTPress), 2013; 44 (4): 311–26. Publisher Full Text
  •   Ford E: Open peer review at four STEM journals: an observational overview [version 2; referees: 2 approved, 2 approved with reservations]. F1000Res. F1000 Research Ltd. 2015; 4 : 6. PubMed Abstract | Publisher Full Text | Free Full Text
  •   Frontiers: About Frontiers Academic Journals and Research Community. 2016. Reference Source
  •   Garcia JA, Rodriguez-Sanchez R, Fdez-Valdivia J: Authors and Reviewers Who Suffer from Confirmatory Bias. Scientometrics. Springer Nature. 2016; 109 (2): 1377–95. Publisher Full Text
  •   Gillespie GW, Chubin DE, Kurzon GM: Experience with NIH Peer Review: Researchers Cynicism and Desire for Change. Sci Technol Hum Val. 1985; 10 (3): 44–54. Publisher Full Text
  •   Godlee F, Gale CR, Martyn CN: Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA. American Medical Association (AMA). 1998; 280 (3): 237–40. PubMed Abstract | Publisher Full Text
  •   Hames I: The Changing Face of Peer Review. Sci Ed. Korean Council of Science Editors. 2014; 1 (1): 9–12. Publisher Full Text
  •   Hanson B, Lawrence R, Meadows A, et al. : Early Adopters of ORCID Functionality Enabling Recognition of Peer Review: Two Brief Case Studies. Learn Publ. Wiley-Blackwell. 2016; 29 (1): 60–63. Publisher Full Text
  •   Harnad S: The Invisible Hand of Peer Review. Journal (On-line/Unpaginated). Exploit Interactive. 2000. Reference Source
  •   Herron DM: Is expert peer review obsolete? A model suggests that post-publication reader review may exceed the accuracy of traditional peer review. Surg Endosc. Springer Nature. 2012; 26 (8): 2275–80. PubMed Abstract | Publisher Full Text
  •   Ioannidis JP: Effect of the statistical significance of results on the time to completion and publication of randomized efficacy trials. JAMA. American Medical Association (AMA). 1998; 279 (4): 281–6. PubMed Abstract | Publisher Full Text
  •   Janowicz K, Hitzler P: Open and Transparent: the Review Process of the Semantic Web Journal. Learn Publ. Wiley-Blackwell. 2012; 25 (1): 48–55. Publisher Full Text
  •   Jubb M: Peer Review: The Current Landscape and Future Trends. Learn Publ. Wiley-Blackwell. 2016; 29 (1): 13–21. Publisher Full Text
  •   Justice AC, Cho MK, Winker MA, et al. : Does masking author identity improve peer review quality? A randomized controlled trial. PEER Investigators. JAMA. American Medical Association (AMA). 1998; 280 (3): 240–2. PubMed Abstract | Publisher Full Text
  •   Kaplan S: Major Publisher Retracts 64 Scientific Papers in Fake Peer Review Outbreak. Washington Post, 2015. Reference Source
  •   Kerr S, Tolliver J, Petree D: Manuscript Characteristics Which Influence Acceptance for Management and Social Science Journals. Acad Manage J. The Academy of Management. 1977; 20 (1): 132–41. Publisher Full Text
  •   Kravitz RL, Franks P, Feldman MD, et al. : Editorial peer reviewers' recommendations at a general medical journal: are they reliable and do editors care? PLoS One. Public Library of Science (PLoS). 2010; 5 (4): e10072. PubMed Abstract | Publisher Full Text | Free Full Text
  •   Kriegeskorte N: Open evaluation: a vision for entirely transparent post-publication peer review and rating for science. Front Comput Neurosci. Frontiers Media SA. 2012; 6 : 79. PubMed Abstract | Publisher Full Text | Free Full Text
  •   Leek JT, Taub MA, Pineda FJ: Cooperation between referees and authors increases peer review accuracy. PLoS One. Public Library of Science (PLoS). 2011; 6 (11): e26895. PubMed Abstract | Publisher Full Text | Free Full Text
  •   Link AM: US and non-US submissions: an analysis of reviewer bias. JAMA. 1998; 280 (3): 246–7. PubMed Abstract | Publisher Full Text
  •   Lloyd ME: Gender factors in reviewer recommendations for manuscript publication. J Appl Behav Anal. Society for the Experimental Analysis of Behavior. 1990; 23 (4): 539–43. PubMed Abstract | Free Full Text
  •   Mahoney MJ: Publication Prejudices: An Experimental Study of Confirmatory Bias in the Peer Review System. Cognit Ther Res. 1977; 1 (2): Springer Nature: 161–75. Publisher Full Text
  •   McNutt RA, Evans AT, Fletcher RH, et al. : The effects of blinding on the quality of peer review. A randomized trial. JAMA. American Medical Association (AMA). 1990; 263 (10): 1371–6. PubMed Abstract | Publisher Full Text
  •   Monsen ER, Van Horn L: Research: Successful Approaches. American Dietetic Association; 2007. Reference Source
  •   Moore S, Neylon C, Eve MP, et al. : Excellence R Us: University Research and the Fetishisation of Excellence. Palgrave Commun. Springer Nature. 2017; 3 : 16105. Publisher Full Text
  •   Mulligan A, Hall L, Raphael E: Peer Review in a Changing World: An International Study Measuring the Attitudes of Researchers. J Am Soc Inf Sci Technol. Wiley-Blackwell. 2013; 64 (1): 132–61. Publisher Full Text
  •   Nicholson J, Alperin JP: A Brief Survey on Peer Review in Scholarly Communication. The Winnower, 2016. Reference Source
  •   Nickerson RC, Varshney U, Muntermann J: A Method for Taxonomy Development and Its Application in Information Systems. Eur J Inf Syst. Springer Nature. 2013; 22 (3): 336–59. Publisher Full Text
  •   Nobarany S, Booth KS: Use of Politeness Strategies in Signed Open Peer Review. J Assoc Inf Sci Technol. Wiley-Blackwell. 2015; 66 (5): 1048–64. Publisher Full Text
  •   OpenAIRE: OpenAIRE’s Experiments in Open Peer Review / Report. Zenodo. 2016. Publisher Full Text
  •   Perakakis P, Taylor M, Mazza M, et al. : Natural Selection of Academic Papers. Scientometrics. Springer Nature. 2010; 85 (2): 553–59. Publisher Full Text
  •   Peters DP, Ceci SJ: Peer-Review Practices of Psychological Journals: The Fate of Published Articles Submitted Again. Behav Brain Sci. Cambridge University Press (CUP). 1982; 5 (02): 187–195. Publisher Full Text
  •   Pontika N, Knoth P, Cancellieri M, et al. : Fostering Open Science to Research Using a Taxonomy and an ELearning Portal. In Proceedings of the 15th International Conference on Knowledge Technologies and Data-Driven Business - i-KNOW 15 . Association for Computing Machinery (ACM). 2015. Publisher Full Text
  •   Pöschl U: Multi-stage open peer review: scientific evaluation integrating the strengths of traditional peer review with the virtues of transparency and self-regulation. Front Comput Neurosci. Frontiers Media SA. 2012; 6 : 33. PubMed Abstract | Publisher Full Text | Free Full Text
  •   PubPeer: Science Self-Corrects – Instantly. PubPeer: The Online Journal Club. 2014. Reference Source
  •   Research Information Network: Activities, Costs and Funding Flows in the Scholarly Communications System in the UK: Report Commissioned by the Research Information Network (RIN). 2008. Reference Source
  •   Ross JS, Gross CP, Desai MM, et al. : Effect of blinded peer review on abstract acceptance. JAMA. American Medical Association (AMA). 2006; 295 (14): 1675–80. PubMed Abstract | Publisher Full Text
  •   Ross-Hellauer T: Review of Definitions of Open Peer Review in the Scholarly Literature 2016. 2017. Data Source
  •   Sandewall E: Maintaining Live Discussion in Two-Stage Open Peer Review. Front Comput Neurosci. Frontiers Media SA. 2012; 6 : 9. PubMed Abstract | Publisher Full Text | Free Full Text
  •   Schekman R, Watt F, Weigel D: The eLife approach to peer review. eLife. eLife Sciences Organisation Ltd. 2013; 2 : e00799. PubMed Abstract | Publisher Full Text | Free Full Text
  •   Schroter S, Black N, Evans S, et al. : Effects of Training on Quality of Peer Review: Randomised Controlled Trial. BMJ. BMJ. 2004; 328 (7441): 673–70. PubMed Abstract | Publisher Full Text | Free Full Text
  •   Smith R: Peer review: a flawed process at the heart of science and journals. J R Soc Med. SAGE Publications. 2006; 99 (4): 178–82. PubMed Abstract | Free Full Text
  •   Spier R: The History of the Peer-Review Process. Trends Biotechnol. Elsevier BV. 2002; 20 (8): 357–58. PubMed Abstract | Publisher Full Text
  •   Steen RG, Casadevall A, Fang FC: Why has the number of scientific retractions increased? Edited by Gemma Elizabeth Derrick. PLoS One. Public Library of Science (PLoS). 2013; 8 (7): e68397. PubMed Abstract | Publisher Full Text | Free Full Text
  •   Suber P: Open Access. Cambridge, MA: MIT Press, 2012. Reference Source
  •   Tennant JP, Dugan JM, Graziotin D, et al. : A multi-disciplinary perspective on emergent and future innovations in peer review [version 1; referees: 2 approved with reservations]. F1000Res. 2017; 6 : 1151. Publisher Full Text
  •   Travis GD, Collins HM: New Light on Old Boys: Cognitive and Institutional Particularism in the Peer Review System. Sci Technol Hum Val. 1991; 16 (3). Publisher Full Text
  •   Tregenza T: Gender Bias in the Refereeing Process? Trends Ecol. 2002; 17 (8): 349–350. Publisher Full Text
  •   van Rooyen S, Delamothe T, Evans SJ: Effect on peer review of telling reviewers that their signed reviews might be posted on the web: randomised controlled trial. BMJ. BMJ. 2010; 341 : c5729. PubMed Abstract | Publisher Full Text | Free Full Text
  •   van Rooyen S, Godlee F, Evans S, et al. : Effect of open peer review on quality of reviews and on reviewers' recommendations: a randomised trial. BMJ. BMJ. 1999; 318 (7175): 23–27. PubMed Abstract | Publisher Full Text | Free Full Text
  •   Walker R, Rocha da Silva P: Emerging trends in peer review-a survey. Front Neurosci. Frontiers Media SA. 2015; 9 : 169. PubMed Abstract | Publisher Full Text | Free Full Text
  •   Ware M: Peer Review: Benefits, Perceptions and Alternatives. Publishing Research Consortium 4, 2008. Reference Source
  •   Ware M: Peer Review: Recent Experience and Future Directions. New Review of Information Networking. Informa UK Limited. 2011; 16 (1): 23–53. Publisher Full Text
  •   Ware M: Peer Review Survey 2015. Publishing Research Consortium. 2016. Reference Source

Comments on this article (1)

  • Reader Comment, 07 Sep 2017, Mick Watson, The Roslin Institute, University of Edinburgh, UK: My comments on this are well summarized in the opinion piece I published in Genome Biology some time ago: https://genomebiology.biomedcentral.com/articles/10.1186/s13059-015-0669-2 Competing Interests: No competing interests were disclosed.

Open Peer Review

Competing Interests: No competing interests were disclosed.


Competing Interests: I am a full-time journal editor, employed by The BMJ which does research in this area (and is cited in the article)

Reviewer Expertise: I am a journal editor. I have operated a couple of different variants of open peer review.

Reviewer Expertise: library and information science, scholarly communication, scholarly publishing

  • The definition of open science needs to be clearly stated in the Introduction in order to strengthen the frame of the whole paper. Is the definition you are using of open science fully accepted and not contested? If so, then great, but if not, then it becomes murkier and you might want to spend time unpacking the tension there. Also in the last sentence of the Intro, what is that ethos of open science?
  • Would it be useful to unpack some counter arguments on the reasons peer review in its current state of blinded does not work? For example, in the delay and expense portion, how does flipping the model to use APCs change the cost at all? And what happens to unfunded research when the model is flipped? Does that create a disparity that only well funded research is readily available? This might create yet another stratification of scholarly publishing and science communication, which one would assume open science is trying to diminish. I realize that you do some of this in the discussion section, but I find there is a gap in the discussion of the economic argument.
  • I appreciate your thoughtful criticisms of past works that have been unable to do what you are doing in this article. Being the author of one of them, however, I would like to make some points.
  • I would like to point out that while I understand the lack of a definition in my authored article (Defining & Characterizing Open Peer Review, 2013) is problematic, it was never my intent to fully describe it; rather, I had to use a scope for my systematic review, and that scope was identity disclosure. Please note that in my concluding remarks on that paper I recommended a definition be more tightly defined, and it never claimed to define it wholesale.
  • Your research does a good job picking up the task that other papers were unable to accomplish.
  • I would like to hear more in your methodology section about the searching for and selection of social sciences and humanities literature, as I think there might be some gaps in your data set based on this approach. You provide your search terms for Web of Science, but not the other databases and search engines. Including this would strengthen your methodology section. To me, the treatment of social sciences and humanities in this study is one of its weaknesses.
  • Please outline the limitations of your research method.
  • The methods section should be strengthened for better understanding of social sciences and humanities approaches, as well as limitations, for the paper to be more scientifically sound.
  • I am not a statistician, nor am I a quantitative researcher, so I cannot provide a robust review of your results when it comes to these facets.
  • Figures and tables are helpful to translate findings and ideas presented.
  • This section is well organized and easy to understand. I appreciate the presentation of criticisms of OPR in this section.
  • Open participation and open interaction sections would be greatly enhanced and do a great service to your consideration of the social sciences and humanities disciplines if you engaged with Fitzpatrick’s work presented in the Mellon White paper as well as the Logos article. Additionally, there is an article not included in your data set (was it out of scope?), in Social Epistemology, that may help. It would be good to engage more deeply with the question: Is OPR changing the role and purpose of peer review itself? There seems to be evidence of this in the mentorship offered at eLife, the encouragement of reviewers to engage with one another at Frontiers, and generally in the collaborative approaches to review that OPR enables. In my view these approaches make peer review more robust, including more than just vetting, fact checking, and some substantive critical feedback. To this end, you will need to more clearly define in your introduction and throughout the paper the assumed purpose of peer review, which you offer us in the open final-version commenting portion of the discussion section.
  • Open platforms section: I agree that today platforms are an enormous part of our work in communicating science and engaging with our colleagues across the globe. That being said, I would like to point out that the process of peer review could be completely decoupled from a platform. The reason I mention this is that for some individuals and perhaps some disciplines, it might be difficult to get one’s head around the distinction between a peer review process and its technological implementation. To me they are distinct, and it is merely digital technology that assists us in allowing OPR to unfold. It behooves us to shy away from techno-determinism when it comes to the possibilities presented by OPR.
  • I think you have a solid finding, but I would like to point out one more quibble. “Open science” is not a term embraced in the social sciences and humanities. Again, since you are couching your definition under the ethos of open science, you will need to better describe open science and build a bridge for the social sciences and humanities disciplines. If an overarching definition of OPR is to be fully accepted by all disciplines, it needs to be inclusive of all of them. This is where the tension lies: the community-based aspects of OPR in the social sciences and humanities (digital humanities?) are much more pronounced in the meaning-making of the process. How can you better acknowledge the disciplinary tensions in the paper? Or would you like to scope your findings differently?
  • This paper is well written and organized logically, which makes it quite readable and easy to follow.
  • The main weakness of your paper is the lack of nuance addressed between STEM and the social sciences and humanities disciplines. Engaging with the tension between the approaches to and understanding of peer review and OPR in different disciplines will greatly strengthen your paper.
  • Presenting better the limitations of your method and clarifying your method as noted above will help scope the paper to be more scientifically sound.
  • Finally, clearly define and scope Open Science so that your proposed definition is more understandable. This will greatly strengthen not only the paper, but the definition itself.

Are the rationale for, and objectives of, the Systematic Review clearly stated?

Are sufficient details of the methods and analysis provided to allow replication by others?

Is the statistical analysis and its interpretation appropriate?

I cannot comment. A qualified statistician is required.

Are the conclusions drawn adequately supported by the results presented in the review?

Reviewer Expertise: library and information science, scholarly communication, scholarly publishing

  • Author Response 01 Sep 2017, Tony Ross-Hellauer, OpenAIRE / Uni. Goettingen, Germany

Emily Ford: “Introduction: The definition of open science needs to be clearly stated in the Introduction in order to strengthen the frame of the whole paper. Is the definition you are using of open science fully accepted and not contested? If so, then great, but if not, then it becomes murkier and you might want to spend time unpacking the tension there. Also in the last sentence of the Intro, what is that ethos of open science?”

Tony Ross-Hellauer: I'd like to thank the reviewer for their very thoughtful and helpful comments. The inclusion of more consideration of the SSH perspective, especially, definitely strengthens the paper.

EF: “Introduction: Background - Would it be useful to unpack some counter-arguments on the reasons peer review in its current blinded state does not work? For example, in the delay and expense portion, how does flipping the model to use APCs change the cost at all? And what happens to unfunded research when the model is flipped? Does that create a disparity whereby only well-funded research is readily available? This might create yet another stratification of scholarly publishing and science communication, which one would assume open science is trying to diminish. I realize that you do some of this in the discussion section, but I find there is a gap in the discussion of the economic argument.”

TRH: Open Participation relies to an extent on OA (I have added a sentence on this), but I’m afraid I don't see a further connection here. Although OPR is of course related to OA journals (in that they have tended to be more likely to experiment with OPR), surely if the same system of (traditional, blinded) peer review is in use, the basic costs (for review) will be the same? I agree that a fully APC-based OA model of publishing has the potential to exclude less well-resourced institutions (especially outside the developed West), but I do not follow how this wider argument is connected to OPR. In any case, I believe these considerations fall out of scope of this review (although it would be interesting to follow them up elsewhere).

EF: “Introduction: Contested Meaning - I appreciate your thoughtful criticisms of past works that have been unable to do what you are doing in this article. Being the author of one of them, however, I would like to make some points. I would like to point out that while I understand the lack of a definition in my authored article (Defining & Characterizing Open Peer Review, 2013) is problematic, it was never my intent to fully describe it; rather, I had to use a scope for my systematic review, and that scope was identity disclosure. Please note that in my concluding remarks on that paper I recommended a definition be more tightly defined, and it never claimed to define it wholesale. Your research does a good job picking up the task that other papers were unable to accomplish.”

TRH: Thanks for clarifying this.

EF: “Methodology - I would like to hear more in your methodology section about the searching for and selection of social sciences and humanities literature, as I think there might be some gaps in your data set based on this approach. You provide your search terms for Web of Science, but not the other databases and search engines. Including this would strengthen your methodology section. To me, the treatment of social sciences and humanities in this study is one of its weaknesses. Please outline the limitations of your research method. The methods section should be strengthened for better understanding of social sciences and humanities approaches, as well as limitations, for the paper to be more scientifically sound.”

TRH: I have expanded the methodology section to better specify search terms and databases used, and included a statement regarding limitations of the search strategy.

EF: “Results - I am not a statistician, nor am I a quantitative researcher, so I cannot provide a robust review of your results when it comes to these facets. Figures and tables are helpful to translate findings and ideas presented.”

TRH: No response required.

EF: “Open participation and open interaction sections would be greatly enhanced and do a great service to your consideration of the social sciences and humanities disciplines if you engaged with Fitzpatrick’s work presented in the Mellon White paper as well as the Logos article. Additionally, there is an article not included in your data set (was it out of scope?), in Social Epistemology (http://dx.doi.org/10.1080/02691728.2010.498929), that may help. It would be good to engage more deeply with the question: Is OPR changing the role and purpose of peer review itself? There seems to be evidence of this in the mentorship offered at eLife, the encouragement of reviewers to engage with one another at Frontiers, and generally in the collaborative approaches to review that OPR enables. In my view these approaches make peer review more robust, including more than just vetting, fact checking, and some substantive critical feedback. To this end, you will need to more clearly define in your introduction and throughout the paper the assumed purpose of peer review, which you offer us in the open final-version commenting portion of the discussion section.”

TRH: I have added consideration of disciplinary differences in the results and discussion sections. In particular, I have added a new figure to show the breakdown of traits by discipline, and added more consideration of the philosophical reasons to consider open participation and interaction. The question of whether OPR is changing the role of peer review per se is an excellent one, but I feel it is out of scope for this paper (which is already long enough!). The article by Fitzpatrick in Social Epistemology was deemed out of scope as it does not mention OPR (save for one mention in a block of quoted text) - the ideas underlying "peer-to-peer review" are no doubt related to the idea of OPR, but the scope here is only those papers which discuss OPR and give an explicit or implicit definition.

EF: “Open platforms section: I agree that today platforms are an enormous part of our work in communicating science and engaging with our colleagues across the globe. That being said, I would like to point out that the process of peer review could be completely decoupled from a platform. The reason I mention this is that for some individuals and perhaps some disciplines, it might be difficult to get one’s head around the distinction between a peer review process and its technological implementation. To me they are distinct, and it is merely digital technology that assists us in allowing OPR to unfold. It behooves us to shy away from techno-determinism when it comes to the possibilities presented by OPR.”

TRH: This is an important point - I have added a sentence to the section on final version commenting: "In this sense, peer review can be decoupled not only from the journal, but also from any particular platform."

EF: “Conclusion - I think you have a solid finding, but I would like to point out one more quibble. ‘Open science’ is not a term embraced in the social sciences and humanities. Again, since you are couching your definition under the ethos of open science, you will need to better describe open science and build a bridge for the social sciences and humanities disciplines. If an overarching definition of OPR is to be fully accepted by all disciplines, it needs to be inclusive of all of them. This is where the tension lies: the community-based aspects of OPR in the social sciences and humanities (digital humanities?) are much more pronounced in the meaning-making of the process. How can you better acknowledge the disciplinary tensions in the paper? Or would you like to scope your findings differently?”

TRH: I have added an explicit reference to Open Science, specifically noting that I use the term to include all academic disciplines. I've also added reference to the disciplinary differences found, and included a concluding note that this area needs further research.

EF: “Final thoughts - This paper is well written and organized logically, which makes it quite readable and easy to follow. The main weakness of your paper is the lack of nuance addressed between STEM and the social sciences and humanities disciplines. Engaging with the tension between the approaches to and understanding of peer review and OPR in different disciplines will greatly strengthen your paper. Presenting better the limitations of your method and clarifying your method as noted above will help scope the paper to be more scientifically sound. Finally, clearly define and scope Open Science so that your proposed definition is more understandable. This will greatly strengthen not only the paper, but the definition itself.”

TRH: Restatement of the above points - see answers above.

Competing Interests: Article author

Reviewer Expertise: Peer review innovations

  • Author Response 01 Sep 2017, Tony Ross-Hellauer, OpenAIRE / Uni. Goettingen, Germany

Bahar Mehmani: “Tony provides an overview of different definitions of Open Peer Review, acknowledging the ambiguity of the term ‘open peer review’ and the probable impact of such ambiguity on evaluation of the efficiency of open peer review. The author has then created seven OPR traits based on a WoS data-driven taxonomy. In conclusion, though, he suggests accepting the existing ambiguity is the only way: ‘given this is such a contested concept, in my view the only sensible way forward is to acknowledge the ambiguity of this term, accepting that it is used as an umbrella concept for a diverse array of peer review innovations’. This doesn’t seem to solve the issue he raises at the beginning. On the contrary, considering all peer review innovations as forms of OPR might worsen the current situation. Initiatives like ‘registered reports’, ‘shortening review deadlines’ or ‘reviewer recognition’, which are meant to address result-biased peer review, incentivize reviewers, and boost review speed, respectively, are certainly peer review innovations but cannot be directly considered part of the umbrella term of open peer review. Hence the article deserves a stronger conclusion, including perhaps a suggested guideline of clarifying what type of ‘open peer review’ is meant ahead of any evaluation/discussion, using the author’s seven classifications.”

Tony Ross-Hellauer: I thank the reviewer for their time and care in preparing this helpful review report, which has helped to improve the second version. I understand the concern that merely accepting ambiguity is no way to overcome confusion. I share this view and have now added/altered text to make my intentions clearer: "Given that OPR is such a contested concept, in my view the only sensible way forward is to acknowledge the ambiguity of this term, accepting that it is used as an umbrella concept for a diverse array of peer review innovations. Although it could be argued that merely accepting the status quo in this way does not help resolve possible confusion regarding usage, I would argue that quantifying the ambiguity of usage and mapping the distinct traits enables future discussion to start from a firmer basis that (1) acknowledges that people often mean different things when they use this term, and (2) clarifies in advance exactly which OPR traits are under discussion."

BM: “The author can surely strengthen the conclusion by highlighting the lack of evidence for efficiency claims about some of the 7 traits mentioned in the article.”

TRH: I have included this point in the expanded conclusion.

BM: “I suggest double-checking the term ‘post-publication peer review’ in the WoS search results. The author reports he has used ‘open final-version commenting’ for this process in making the taxonomy, reporting only 6 results. However, ‘post-publication peer review’ seems to produce a higher number of documents: searching the term in Scopus results in 55 documents (11 for 2015 and 19 for 2016). This might impact the result reported in Fig. 6 reporting unique configurations of OPR.”

TRH: Although there is obvious overlap between the terms "open peer review" and "post-publication peer review", and so the literature on PPPR is useful to understand these phenomena, this study is only an attempt to categorize uses of the former term - hence, articles on PPPR which did not mention OPR were out of scope.

BM: “Although the author reports 22 different definitions there, the figure shows 23 of them.”

TRH: The original figure was wrong and has been corrected. The final row of the original figure was a duplicate of the 12th entry (n=1, open reports, participation, manuscripts). I thank the reviewer very much for their attention to detail in spotting this silly error! Hence, it is correct that there are 22 different definitions.

BM: “It is also worth mentioning that most of the OPR initiatives mentioned in the article do not directly address all of the shortcomings of the current peer review process, mainly because one should distinguish between the editorial process and the peer review process. Issues such as delay and expense are universal to single/double/open peer review as they are part of the editorial process.”

TRH: I address this point with added text in the background and conclusion sections.

Competing Interests: Article author
  • Would this meet most established criteria for a systematic review? Although the author completes a PRISMA checklist, he also notes firstly that he searched Web of Science, then that he added a bunch of other sources, and finally, “This set of articles was further enriched with 42 definitions from sources found through searching for the same terms in other academic databases (e.g., Google Scholar, JSTOR, disciplinary databases), Google (for blog articles) and Google Books (for books), as well as following citations in relevant bibliographies and literature reviews." This suggests that it is not really systematic (although the author is to be applauded for providing the data he worked from). There is a lack of clarity about the universe of literature that is being assessed, and of details about how it was assessed. In my view, the article is still a worthwhile undertaking, despite being non-systematic, but the title ought to reflect this.  
  • In my view the author does not pay enough attention to one important variant of open review, namely real-time review in the open, in which either invited reviewers or ‘the crowd’ comment on an article, with comments being posted as they are ready, rather than at the end of a formal process of peer review and decision-making. Conversely, the inclusion of ‘open platforms’ seems very confusing to me, as they do not have many of the criteria of openness that define the other flavours of open review that the author describes, but instead are about decoupling peer review from publication. Indeed Figure 6 makes plain that as a trait open platforms only occur twice in the 122 definitions considered. I would bet that the ‘real-time’ variant occurs far more often, and ought to be included as one of the key traits within the umbrella definition.  
  • In a paper that aims to define open peer review, it is unfortunate that the author doesn’t spend longer considering alternative definitions of peer review. Throughout the article he appears to conflate editorial selection (whether a journal accepts a manuscript) with technical review (whether the work is sound and properly reported). Thus when he talks about the “problems with peer review” he is sometimes talking about reviewers not spotting technical problems, sometimes about editors rejecting articles that don’t suit their taste, and sometimes about authors going through cycles of editorial rejection to achieve a high impact publication. Conflating these various things does not provide a sound foundation on which to build a definition of open peer review. This conflation is made worse when, for example, it is implied that the only reason for retraction is error. Thus most of the first three paragraphs of the ‘background’ section, and much of what follows about biases, incentives and wastefulness are muddled, and the references and evidence do not all support the broad claims made about ‘peer review’ (itself an ‘umbrella term’).
  • Abstract and later: The author needs to decide throughout the article whether he is singular (as he appears to be) or uses the royal ‘we’.  
  • p7 – it is not helpful to list the seven types of openness without definitions. Even if the lengthy discussion of them follows later, I was desperate for some brief definition, in particular for the last two, and was labouring under a misapprehension about the meaning of the fourth, until much later in the article when I discovered what was meant by ‘open pre-review manuscripts’.    
  • p8 – proponents of open identity review in medicine would also point out that it makes conflicts of interest much more apparent and subject to scrutiny.  
  • p8 – some journals use open reports without open identities – i.e. posting reports with published articles but without identifying the reviewer (e.g. http://embor.embopress.org/about#Transparent_Process). The author writes as if open reports must always have open identities.  
  • p10 – I think ‘open pre-review manuscripts’ is the wrong name for what the author is describing. At first I thought this meant the practice of posting the authors’ version of a manuscript alongside the peer review history (as is done by The BMJ, for example). But I think the author means ‘Open posting before formal review’ (which some call preprints). He might like to consider the suggestions by Neylon et al. about this issue of terminology (http://biorxiv.org/content/early/2016/12/09/092817)  
  • p10 – I wonder if the author has any evidence that PubPeer has been ‘a major influence’?  
  • p11 – I don’t think ‘open platforms’ is the right term either. (Although as noted above, I don’t really think this section belongs in the discussion at all, if it is to remain I would strongly recommend renaming it.) In publishing terms a platform is ‘where’ you publish articles, and the author is here discussing an aspect of how you get to the point of publication, and in particular peer review services (which as far as I can tell de facto meet only rather limited criteria of openness). I think what the author is describing is peer review options decoupled from journals (see Priem and Hemminger, Front Comput Neurosci 2012; 6: 19) and as noted I don’t understand why these have a place in a definition of open peer review.  
  • p11 – Conclusion. I don’t believe the author presents a unified definition of open peer review, for all the reasons discussed above, but he does present most of the traits that together come under the umbrella term.

Not applicable

Competing Interests: I am Executive Editor of The BMJ, which operates a version of open peer review, and I have previously been employed by PLOS and BioMed Central which operate different versions.

  • Author Response 01 Sep 2017, Tony Ross-Hellauer, OpenAIRE / Uni. Goettingen, Germany

Theodora Bloom: “This is an interesting paper addressing a question that is important to journal editors and publishers as well as the wider ‘open science’ community, namely what is meant by open peer review. I have three significant concerns that need to be addressed, followed by more minor annotations and comments.”

Tony Ross-Hellauer: I'd like to thank the reviewer for their time and care in undertaking this review, which has helped strengthen the paper, especially with regards to the suggestions for strengthening the analysis of the problems with peer review.

TB: Would this meet most established criteria for a systematic review? Although the author completes a PRISMA checklist, he also notes firstly that he searched Web of Science, then that he added a bunch of other sources, and finally, “This set of articles was further enriched with 42 definitions from sources found through searching for the same terms in other academic databases (e.g., Google Scholar, JSTOR, disciplinary databases), Google (for blog articles) and Google Books (for books), as well as following citations in relevant bibliographies and literature reviews." This suggests that it is not really systematic (although the author is to be applauded for providing the data he worked from). There is a lack of clarity about the universe of literature that is being assessed, and of details about how it was assessed. In my view, the article is still a worthwhile undertaking, despite being non-systematic, but the title ought to reflect this.

TRH: I have strengthened the description in the methods section to address this criticism. I do believe this article meets the criteria for PRISMA systematic reviews (the relevant sections from the PRISMA checklist being: "7. Describe all information sources (e.g., databases with dates of coverage, contact with study authors to identify additional studies) in the search and date last searched"; and "8. Present full electronic search strategy for at least one database, including any limits used, such that it could be repeated.")

TB: “In my view the author does not pay enough attention to one important variant of open review, namely real-time review in the open, in which either invited reviewers or ‘the crowd’ comment on an article, with comments being posted as they are ready, rather than at the end of a formal process of peer review and decision-making. Conversely, the inclusion of ‘open platforms’ seems very confusing to me, as they do not have many of the criteria of openness that define the other flavours of open review that the author describes, but instead are about decoupling peer review from publication. Indeed Figure 6 makes plain that as a trait open platforms only occur twice in the 122 definitions considered. I would bet that the ‘real-time’ variant occurs far more often, and ought to be included as one of the key traits within the umbrella definition.”

TRH: I thank the reviewer for pointing out the gap in not mentioning what they term "real-time review in the open" - but I would respectfully disagree with the reviewer that this feature constitutes a core trait. In my view it depends upon other traits, including open reports, commenting, participation and especially open pre-review manuscripts. As this feature depends (in my view) most fully upon open pre-review manuscripts (since the manuscript would need to be online to begin the process), I have included mention of this option in the discussion section for that trait with the added text: "Finally, making manuscripts openly available in advance of review allows comments to be posted as they are received, either from invited reviewers or the wider community, enabling readers to follow the process of peer review in real-time."

TB: “In a paper that aims to define open peer review, it is unfortunate that the author doesn’t spend longer considering alternative definitions of peer review. Throughout the article he appears to conflate editorial selection (whether a journal accepts a manuscript) with technical review (whether the work is sound and properly reported). Thus when he talks about the “problems with peer review” he is sometimes talking about reviewers not spotting technical problems, sometimes about editors rejecting articles that don’t suit their taste, and sometimes about authors going through cycles of editorial rejection to achieve a high impact publication. Conflating these various things does not provide a sound foundation on which to build a definition of open peer review. This conflation is made worse when, for example, it is implied that the only reason for retraction is error. Thus most of the first three paragraphs of the ‘background’ section, and much of what follows about biases, incentives and wastefulness are muddled, and the references and evidence do not all support the broad claims made about ‘peer review’ (itself an ‘umbrella term’).”

TRH: Since the aim of this article was to clarify, I am very grateful to the reviewer for pointing out the ways in which this section actually confuses things. It was certainly not my intention to insinuate that OPR is a panacea for all problems with traditional peer review. I have added text to address the reviewer's comments: (a) the description of traditional peer review in the Background section has been revised to clarify the role of peer review in scholarly communication; (b) two new sections have been added to the discussion which make clearer (1) the particular problems with traditional peer review that each OPR trait aims to address, and (2) how each trait can be related to the broader agenda of Open Science (a new figure is also added).

TB: “Abstract and later: The author needs to decide throughout the article whether he is singular (as he appears to be) or uses the royal ‘we’.”

TRH: Corrected.

TB: “p7 – it is not helpful to list the seven types of openness without definitions. Even if the lengthy discussion of them follows later, I was desperate for some brief definition, in particular for the last two, and was labouring under a misapprehension about the meaning of the fourth, until much later in the article when I discovered what was meant by ‘open pre-review manuscripts’.”

TRH: Definitions added.

TB: “p8 – proponents of open identity review in medicine would also point out that it makes conflicts of interest much more apparent and subject to scrutiny.”

TRH: Added text to address this: "Finally, a reviewer for this paper advises that 'proponents of open identity review in medicine would also point out that it makes conflicts of interest much more apparent and subject to scrutiny' (Bloom, 2017)."

TB: “p8 – some journals use open reports without open identities – i.e. posting reports with published articles but without identifying the reviewer (e.g. http://embor.embopress.org/about#Transparent_Process). The author writes as if open reports must always have open identities.”

TRH: Added sentence to clarify: "Often, although not in all cases (e.g., EMBO reports, http://embor.embopress.org), reviewer names are published alongside the reports."

TB: “p10 – I think ‘open pre-review manuscripts’ is the wrong name for what the author is describing. At first I thought this meant the practice of posting the authors’ version of a manuscript alongside the peer review history (as is done by The BMJ, for example). But I think the author means ‘Open posting before formal review’ (which some call preprints). He might like to consider the suggestions by Neylon et al. about this issue of terminology (http://biorxiv.org/content/early/2016/12/09/092817)”

TRH: I agree the terminology could be misconstrued, but am not sure the reviewer's suggestion is preferable (the phrase "posting" could be thought ambiguous) - to avoid ambiguity for future readers, I have added definitions on p7 where I introduce the terms.

TB: “p10 – I wonder if the author has any evidence that PubPeer has been ‘a major influence’?”

TRH: I've weakened the terminology to "An important platform in this regard has been the independent platform PubPeer".

TB: “p11 – I don’t think ‘open platforms’ is the right term either. (Although as noted above, I don’t really think this section belongs in the discussion at all, if it is to remain I would strongly recommend renaming it.) In publishing terms a platform is ‘where’ you publish articles, and the author is here discussing an aspect of how you get to the point of publication, and in particular peer review services (which as far as I can tell de facto meet only rather limited criteria of openness).
I think what the author is describing is peer review options decoupled from journals (see Priem and Hemminger, Front Comput Neurosci 2012; 6: 19) and as noted I don’t understand why these have a place in a definition of open peer review.”

TRH: I respectfully disagree with the reviewer here - the word "platform" seems to be used more broadly - e.g., PubPeer considers itself an "online platform for post-publication peer review" - however, I agree that the terminology might be confusing and so have changed the wording to "Open platforms (“decoupled review”)". The point about open platforms being a fringe trait is valid (I've added text to the open platforms section in the discussion to strengthen the acknowledgement of this) - it was included simply because it was observed to be part of the two definitions cited.

TB: “p11 – Conclusion. I don’t believe the author presents a unified definition of open peer review, for all the reasons discussed above, but he does present most of the traits that together come under the umbrella term.”

TRH: I have removed the term "unified" as this is no doubt contentious.

Competing Interests: Article author
  • P4: I suggest the author replaces “Unaccountability” with “Lack of accountability”
  • P6: In the methods, there seem to be two literature surveys, the first by OpenAire (never mentioned again in the rest of the article), the second by the author. The author should clarify exactly who did what and how he used the OpenAire search
  • P6: The text at the top of column 2 starts in the middle of a sentence. I think something is missing.
  • P7: In Figure 4, it is not clear what is the metric. Is it the number of Journal Articles/Grant proposals etc. or is it the number of distinct definitions found in journals etc? It would be good to clarify what is meant by “Data,Journal Articles”
  • P7: The author writes that “for the literature offers a total of 22 distinct configurations of seven traits, effectively meaning that there are 22 different definitions of OPR in the literature." In reality he found 22 configurations in his, necessarily limited survey. I am certain the literature contains many more. I suggest he corrects his initial statement to make this clear.
  • P7: The definition of “Open identities”, “Open Reports” etc. is given in the discussion. I suggest it would be useful to insert the definitions, earlier, immediately after the introduction of the schema (column 2 p 7)
  • P8: It might be worth mentioning that some publishers (like Frontiers) favor a system of Open Peer Review which publishes reviewers’ names, only when articles are accepted, thereby avoiding the risk of self-censorship by critical reviewers. 

Competing Interests: I am a consultant for Frontiers Media SA, an Open Access publisher with its own system of Open Peer Review

  • Author Response 01 Sep 2017, Tony Ross-Hellauer, OpenAIRE / Uni. Goettingen, Germany

Richard Walker: “This is a useful, well-written article that helps to clarify some of the “fuzziness” concerning the concept of “Open Peer Review”. The author makes a systematic search of the literature, fully and correctly detailing his methods in the Supplementary Materials. His main conclusion is that “Open peer review is an umbrella term for a number of overlapping ways that peer review models can be adapted in line with the ethos of Open Science…”. The data from the systematic review fully justifies this conclusion. The references are comprehensive and up to date. On this basis, I believe the quality of the article is already sufficient to justify publication. I would like, nonetheless, to suggest some possibilities for improvement.”

Tony Ross-Hellauer: I'd like to personally thank the reviewer for their time and care in undertaking this review, which has helped strengthen the paper, especially with regards to the suggestions for strengthening the conclusion.

RW: Weak conclusions - The author’s conclusions, while correct, are weak. The rapid growth in references to Open Peer Review in the literature suggests that interest in OPR is growing rapidly. It would be useful to point this out. It would also be useful to point out that 110/122 references in his survey talk about “Open Identities” and 72 talk about “Open Reports”. This suggests to me that the core sense of Open Peer Review lies precisely in the use of Open Identities and Open Reports and that other aspects are more peripheral. If this were my article (which it is not) I would make this core/periphery distinction more explicit. The author observes correctly that there is still very little evidence about the effectiveness or otherwise of different forms of Open Peer Review. This is another issue that it would be good to bring out in the conclusions.

TRH: I agree that the conclusion was weak and welcome the suggestions for improvement. I have written an extended conclusion which I believe addresses all these points.

RW: “Power distribution” - The author claims that the configurations of OPR traits “follow a power-law distribution”. Readers will understand what he means. However a power law is a functional relationship between two quantities – and here I see only one (the number of configurations). Power laws play no further part in the author’s argument. So I suggest it would be better to avoid the term. What the author could say, correctly, is that there are a couple of very common configurations and a lot of rarer ones. This links to the idea of “core” and “peripheral” concepts of OPR.

TRH: Text changed to "The distribution of traits shows two very popular configurations and a variety of rarer ones ..."

RW: Reasons for open reports - The author correctly argues that Open Identities provide an incentive to reviewers to do their work thoroughly. I suggest that the same applies to “Open Reports”. No reviewer wants to expose himself/herself as lazy or blatantly unfair.

TRH: Agreed - I have added text to this effect: "It could also increase review quality, as the thought of their words being made publicly available could motivate reviewers to be more thorough in their review activities."

RW: “P4: I suggest the author replaces ‘Unaccountability’ with ‘Lack of accountability’”

TRH: Text changed as suggested.

RW: “P6: In the methods, there seem to be two literature surveys, the first by OpenAire (never mentioned again in the rest of the article), the second by the author. The author should clarify exactly who did what and how he used the OpenAire search”

TRH: There was only one lit review, done by the main author as part of the OpenAIRE project - for clarity, I've changed the reference to OpenAIRE to the first person singular ("I").

RW: “P6: The text at the top of column 2 starts in the middle of a sentence. I think something is missing.”

TRH: I've changed the structure of this sentence to be less confusing: "Sixty-eight percent (n=83) of the 122 definitions identified were explicitly stated, 37.7% (n=46) implicitly stated, and 5.7% (n=7) contained both explicit and implicit information."

RW: “P7: In Figure 4, it is not clear what is the metric. Is it the number of Journal Articles/Grant proposals etc. or is it the number of distinct definitions found in journals etc? It would be good to clarify what is meant by “Data, Journal Articles””

TRH: This is explained in the text at the bottom of p5 of v1: "Meanwhile, regarding the target of the OPR mentioned in these articles (Figure 4), most were referring to peer review of journal articles (80.7%), with 16% not specifying a target, and a small number of articles also referring to review of data, conference papers and grant proposals."

RW: “P7: The author writes that “for the literature offers a total of 22 distinct configurations of seven traits, effectively meaning that there are 22 different definitions of OPR in the literature." In reality he found 22 configurations in his, necessarily limited, survey. I am certain the literature contains many more. I suggest he corrects his initial statement to make this clear.”

TRH: Added text to sentence to read "in the literature examined here."

RW: “P7: The definition of “Open identities”, “Open Reports” etc. is given in the discussion. I suggest it would be useful to insert the definitions earlier, immediately after the introduction of the schema (column 2 p 7)”

TRH: Text added as suggested.
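As an aside, the explicit/implicit counts quoted in the revised sentence above can be checked with one line of inclusion-exclusion arithmetic, since the 7 definitions containing both explicit and implicit information are counted in both the 83 and the 46. A minimal sketch, using only the numbers quoted in the response:

```python
# Counts as quoted in the revised sentence: 122 definitions in total,
# 83 explicit, 46 implicit, 7 counted in both categories.
total = 122
explicit, implicit, both = 83, 46, 7

# Inclusion-exclusion: every definition is explicit, implicit, or both,
# so explicit + implicit double-counts the "both" group exactly once.
assert explicit + implicit - both == total

# Percentages as reported in the text (68%, 37.7%, 5.7%).
print(round(100 * explicit / total),
      round(100 * implicit / total, 1),
      round(100 * both / total, 1))
```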
RW: “P8: It might be worth mentioning that some publishers (like Frontiers) favor a system of Open Peer Review which publishes reviewers’ names only when articles are accepted, thereby avoiding the risk of self-censorship by critical reviewers.”

TRH: This is a good suggestion.

Richard Walker: “This is a useful, well-written article that helps to clarify some of the “fuzziness” concerning the concept of “Open Peer Review”. The author makes a systematic search of the literature, fully and correctly detailing his methods in the Supplementary Materials. His main conclusion is that “Open peer review is an umbrella term for a number of overlapping ways that peer review models can be adapted in line with the ethos of Open Science…”. The data from the systematic review fully justifies this conclusion. The references are comprehensive and up to date. On this basis, I believe the quality of the article is already sufficient to justify publication. I would like, nonetheless, to suggest some possibilities for improvement.”

Tony Ross-Hellauer: I'd like to personally thank the reviewer for their time and care in undertaking this review, which has helped strengthen the paper, especially with regards to the suggestions for strengthening the conclusion.

RW: Weak conclusions - The author’s conclusions, while correct, are weak. The rapid growth in references to Open Peer Review in the literature suggests that interest in OPR is growing rapidly. It would be useful to point this out. It would also be useful to point out that 110/122 references in his survey talk about “Open Identities” and 72 talk about “Open Reports”. This suggests to me that the core sense of Open Peer Review lies precisely in the use of Open Identities and Open Reports and that other aspects are more peripheral. If this were my article (which it is not) I would make this core/periphery distinction more explicit.

Reviewer Status

Alongside their report, reviewers assign a status to the article.

Reviewer Reports

Four reports were received on the original version (27 Apr 17) and three on the revision (31 Aug 17).

  • Richard Walker, Swiss Federal Institute of Technology in Lausanne, Geneva, Switzerland
  • Theodora Bloom, The BMJ, London, UK
  • Bahar Mehmani, RELX Group, Amsterdam, The Netherlands
  • Emily Ford, Portland State University, Portland, USA




  • Research Article
  • Biochemistry and Chemical Biology
  • Structural Biology and Molecular Biophysics

Observing one-divalent-metal-ion-dependent and histidine-promoted His-Me family I-PpoI nuclease catalysis in crystallo

  • Caleb Chang (corresponding author), Department of Biosciences, Rice University, United States
  • Open access


Peer review process

Contents: Reviewer #1 (public review); Reviewer #2 (public review); Author response.

Version of Record: This is the final version of the article.

  • Boston University, United States
  • Axel T Brunger, Stanford University School of Medicine, Howard Hughes Medical Institute, United States

This study is convincing because they performed time-resolved X-ray crystallography under different pH conditions using active/inactive metal ions and PpoI mutants, as with the activity measurements in solution in conventional enzymatic studies. Although the reaction mechanism is simple and may be a little predictable, the strength of this study is that they were able to validate that PpoI catalyzes DNA hydrolysis through "a single divalent cation" because time-resolved X-ray study often observes transient metal ions which are important for catalysis but are not predictable in previous studies with static structures such as enzyme-substrate analog-metal ion complexes. The discussion of this study is well supported by their data. This study visualized the catalytic process and mutational effects on catalysis, providing new insight into the catalytic mechanism of I-PpoI through a single divalent cation. The authors found that His98, a candidate proton acceptor in the previous experiments, also affects the Mg2+ binding for catalysis without the direct interaction between His98 and the Mg2+ ion, suggesting that "Without a proper proton acceptor, the metal ion may be prone for dissociation without the reaction proceeding, and thus stable Mg2+ binding was not observed in crystallo without His98". In the future, this interesting feature observed in I-PpoI should be investigated by biochemical, structural and computational analyses using other one-metal-ion-dependent nucleases.

Most polymerases and nucleases use two or three divalent metal ions in their catalytic functions. The family of His-Me nucleases, however, use only one divalent metal ion, along with a conserved histidine, to catalyze DNA hydrolysis. The mechanism has been studied previously but, according to the authors, it remained unclear. By use of time-resolved X-ray crystallography, this work convincingly demonstrated that only one M2+ ion is involved in the catalysis of the His-Me I-PpoI nuclease, and proposed concerted functions of the metal and the histidine.

This work performs mechanistic studies, including the number and roles of metal ion, pH dependence, and activation mechanism, all by structural analyses, coupled with some kinetics and mutagenesis. Overall, it is a highly rigorous work. This approach was first developed in Science (2016) for a DNA polymerase, in which Yang Cao was the first author. It has subsequently been applied to just 5 to 10 enzymes by different labs, mainly to clarify two versus three metal ion mechanisms. The present study is the first one to demonstrate a single metal ion mechanism by this approach.

Furthermore, on the basis of the quantitative correlation between the fraction of metal ion binding and the formation of product, as well as the pH dependence, and the data from site specific mutants, the authors concluded that the functions of Mg2+ and His are a concerted process. A detailed mechanism is proposed in Figure 6.

Even though there are no major surprises in the results and conclusions, the time-resolved structural approach and the overall quality of the results represent a significant step forward for the Me-His family of nucleases. In addition, since the mechanism is unique among different classes of nucleases and polymerases, the work should be of interest to readers in DNA enzymology, or even mechanistic enzymology in general.

Weaknesses:

Two relatively minor issues are raised here for consideration by the authors:

p. 4, last para, lines 1-2: "we next visualized the entire reaction process by soaking I-PpoI crystals in buffer....". This is a little over-stated. The structures being observed are not reaction intermediates. They are mixtures of substrates and products in the enzyme-bound state. The progress of the reaction is limited by the progress of soaking of the metal ion. Crystallography has just been used as a tool to monitor the reaction (and provide structural information about the product). It would be more accurate to say that "we next monitored the reaction progress by soaking...."

p. 5, beginning of the section. The authors on one hand emphasized the quantitative correlation between Mg ion density and the product density. On the other hand, they raised the uncertainty in the quantitation of Mg2+ density versus Na+ density, thus they repeated the study with Mn2+ which has distinct anomalous signals. This is a very good approach. However, still no metal ion density is shown in the key figure 2A. It will be clearer to show the progress of metal ion density in a figure (in addition to just plots), whether it is Mg or Mn.

Revised version: The authors have properly revised the paper in response to both questions raised in the weakness section. The first issue is an important clarification for others working on similar approaches also. For the second issue, the metal ion density is nicely shown in Fig. S4 now.

The following is the authors’ response to the original reviews.

Public Reviews: Reviewer #1 (Public Review): This study is convincing because they performed time-resolved X-ray crystallography under different pH conditions using active/inactive metal ions and PpoI mutants, as with the activity measurements in solution in conventional enzymatic studies. Although the reaction mechanism is simple and may be a little predictable, the strength of this study is that they were able to validate that PpoI catalyzes DNA hydrolysis through "a single divalent cation" because time-resolved X-ray study often observes transient metal ions which are important for catalysis but are not predictable in previous studies with static structures such as enzyme-substrate analog-metal ion complexes. The discussion of this study is well supported by their data. This study visualized the catalytic process and mutational effects on catalysis, providing new insight into the catalytic mechanism of I-PpoI through a single divalent cation. The authors found that His98, a candidate proton acceptor in the previous experiments, also affects the Mg2+ binding for catalysis without the direct interaction between His98 and the Mg2+ ion, suggesting that "Without a proper proton acceptor, the metal ion may be prone for dissociation without the reaction proceeding, and thus stable Mg2+ binding was not observed in crystallo without His98". In the future, this interesting feature observed in I-PpoI should be investigated by biochemical, structural, and computational analyses using other metal-ion-dependent nucleases.

We appreciate the reviewer for the positive assessment as well as all the comments and suggestions.

Reviewer #2 (Public Review): Summary: Most polymerases and nucleases use two or three divalent metal ions in their catalytic functions. The family of His-Me nucleases, however, use only one divalent metal ion, along with a conserved histidine, to catalyze DNA hydrolysis. The mechanism has been studied previously but, according to the authors, it remained unclear. By use of time-resolved X-ray crystallography, this work convincingly demonstrated that only one M2+ ion is involved in the catalysis of the His-Me I-PpoI nuclease, and proposed concerted functions of the metal and the histidine. Strengths: This work performs mechanistic studies, including the number and roles of metal ion, pH dependence, and activation mechanism, all by structural analyses, coupled with some kinetics and mutagenesis. Overall, it is a highly rigorous work. This approach was first developed in Science (2016) for a DNA polymerase, in which Yang Cao was the first author. It has subsequently been applied to just 5 to 10 enzymes by different labs, mainly to clarify two versus three metal ion mechanisms. The present study is the first one to demonstrate a single metal ion mechanism by this approach. Furthermore, on the basis of the quantitative correlation between the fraction of metal ion binding and the formation of product, as well as the pH dependence, and the data from site-specific mutants, the authors concluded that the functions of Mg2+ and His are a concerted process. A detailed mechanism is proposed in Figure 6. Even though there are no major surprises in the results and conclusions, the time-resolved structural approach and the overall quality of the results represent a significant step forward for the Me-His family of nucleases. In addition, since the mechanism is unique among different classes of nucleases and polymerases, the work should be of interest to readers in DNA enzymology, or even mechanistic enzymology in general.

Thank you very much for your comments and suggestions.

Weaknesses: Two relatively minor issues are raised here for consideration: p. 4, last para, lines 1-2: "we next visualized the entire reaction process by soaking I-PpoI crystals in buffer....". This is a little over-stated. The structures being observed are not reaction intermediates. They are mixtures of substrates and products in the enzyme-bound state. The progress of the reaction is limited by the progress of the soaking of the metal ion. Crystallography has just been used as a tool to monitor the reaction (and provide structural information about the product). It would be more accurate to say that "we next monitored the reaction progress by soaking....".

We appreciate the clarification regarding the description of our experimental approach. We agree that our structures do not represent reaction intermediates but rather mixtures of substrate and product states within the enzyme-bound environment. We have revised the text accordingly to more accurately reflect our methodology.

p. 5, the beginning of the section. The authors on one hand emphasized the quantitative correlation between Mg ion density and the product density. On the other hand, they raised the uncertainty in the quantitation of Mg2+ density versus Na+ density, thus they repeated the study with Mn2+ which has distinct anomalous signals. This is a very good approach. However, there is still no metal ion density shown in the key Figure 2A. It will be clearer to show the progress of metal ion density in a figure (in addition to just plots), whether it is Mg or Mn.

Thank you for your insightful comments. We recognize the importance of visualizing metal ion density alongside product density data. To address this, we included Figure S4 to present Mg2+/Mn2+ and product densities concurrently.

Reviewer #1 (Recommendations For The Authors): (1) Figure 6. I understand that pre-reaction state (left panel) and Metal-binding state (two middle panels) are in equilibrium. But can we state that the Metal-binding state (two middle panels) and the product state (right panel) are in equilibrium and connected by two arrows?

Thank you for your comments. We agree that the DNA hydrolysis reaction process may not be reversible within the I-PpoI active site. To clarify, we removed the backward arrows between the metal-binding state and the product state. In addition, we thank the reviewer for suggesting a name for the middle state and agree that it is better to label it. We added the metal-binding state label in the revised Figure 6 and also added “on the other hand, optimal alignment of a deprotonated water and Mg2+ within the active site, labeled as metal-binding state, leads to irreversible bond breakage (Fig. 6a)” within the text.

(2) The section on DNA hydrolysis assay (Materials and Methods) is not well described. In this section, the authors should summarize the methods for the experiments in Figure 4 AC, Figure 5BC, Figure S3C, Figure S4EF, and Figure S6AB. The authors presented some graphs for the reactions. For clarity, the author should state in the legends which experiments the results are from (in crystallo or in solution). Please check and modify them.

Thank you for the suggestion. We have added four paragraphs to detail the experimental procedures for experiments in these figures. In addition, we have checked all of the figure legends and labeled them as “in crystallo or in solution.” To clarify, we also added “in crystallo” or “solution” in the corresponding panels.

(3) The authors showed the anomalous signals of Mn2+ and Tl+. The authors should mention which wavelength of X-rays was used in the data collections to calculate the anomalous signals.

Thank you for the suggestion. We have included the X-ray wavelength in the legends of all figures containing anomalous maps; the anomalous data were all collected at an X-ray wavelength of 0.9765 Å.

(4) The full names of "His-Me" and "HNH" are necessary for a wide range of readers.

Thank you for the suggestion. We have included the full nomenclature for His-Me (histidine-metal) nucleases and HNH (histidine-asparagine-histidine) nuclease.

(5) The authors should add the side chain of Arg61 in Figure 1E because it is mentioned in the main text.

Thank you for the suggestion. We have added Arg61 to Figure 1E.

(6) Figure 5D. For clarity, the electron densities should cover the Na+ ion. The same request applies to WatN in Figure S3B.

Thank you for catching this detail. We have added the electron density for the Na+ ion in Figure 5D and WatN in Figure S3B.

(7) At line 269 on page 8, what is "previous H98A I-PpoI structure with Mn2+"? Is the structure 1CYQ? If so, it is a complex with Mg2+.

Thank you for catching this detail. We have edited the text to “previous H98A I-PpoI structure with Mg2+.”

(8) At line 294 on page 9, "and substrate alignment or rotation in MutT (66)." I think "alignment of the substrate and nucleophilic water" is preferred rather than "substrate alignment or rotation".

Thank you for the suggestion. We have edited the text to “alignment of the substrate and nucleophilic water.”

(9) At line 305 on page 9, "Second, (58, 69-71) single metal ion binding is strictly correlated with product formation in all conditions, at different pH and with different mutants (Figure 3a and Supplementary Figure 4a-c) (58)". The references should be cited in the correct positions.

Thank you for catching this typo. We have removed the references.

(10) At line 347 on page 10, "Grown in a buffer that contained (50 g/L glucose, 200 g/L α-lactose, 10% glycerol) for 24 hrs." Is this sentence correct?

Thank you for catching this detail. We have corrected the sentence.

(11) At line 395 on page 11, "The His98Ala I-PpoI crystals of first transferred and incubated in a pre-reaction buffer containing 0.1M MES (pH 6.0), 0.2 M NaCl, 1 mM MgCl2 or MnCl2, and 20% (w/v) PEG3350 for 30 min." In the experiments using this mutant, does a pre-reaction buffer contain MgCl2 or MnCl2?

Thank you for bringing this to our attention. We performed two sets of experiments: (1) metal ion soaking in 1 mM Mn2+, performed similarly to WT, in which the pre-reaction buffer did not contain Mn2+; and (2) imidazole soaking, in which 1 mM Mn2+ was included in the pre-reaction buffer. We reasoned that Mn2+ would not bind or promote reaction with His98Ala I-PpoI, but that pre-incubation might help populate Mn2+ within the lattice for better imidazole binding. However, neither Mn2+ nor imidazole was observed. We have added experimental details for both experiments with His98Ala I-PpoI.

(12) In the figure legends of Figure 1, is the Fo-Fc omit map shown in yellow not in green? Please remove (F) in the legends.

We have changed the Fo-Fc map to be shown in violet. We have also removed (f) from the figure legends.

(13) I found descriptions of "MgCl". Please modify them to "MgCl2".

Thank you for catching these details. We have modified all “MgCl” to “MgCl2.”

(14) References 72 and 73 are duplicated.

We have removed the duplicated reference.

Reviewer #2 (Recommendations For The Authors): p. 9, first paragraph, last three lines: "Thus, we suspect that the metal ion may play a crucial role in the chemistry step to stabilize the transition state and reduce the electronegative buildup of DNA, similar to the third metal ion in DNA polymerases and RNaseH." This point is significant but the statement seems a little uncertain. You are saying that the single metal plays the role of two metals in polymerase, in both the ground state and the transition state. I believe the sentence can be stronger and more explicit.

Thank you for raising this point. We suspect the single metal ion in I-PpoI is different from the A-site or B-site metal ion in DNA polymerases and RNaseH, but similar to the third metal ion in DNA polymerases and nucleases. As we stated in the text,

(1) the metal ion in I-PpoI is not required for substrate alignment. The water molecule and substrate can be observed in place even in the absence of the metal ion. In contrast, the A-site and B-site metal ions in DNA polymerases and RNaseH are required for aligning the substrates.

(2) Moreover, the appearance of the metal ion is strictly correlated with product formation, similar to the third metal ion in DNA polymerases and RNaseH.

To emphasize our point, we have revised the sentence as

“Thus, similar to the third metal ion in DNA polymerases and RNaseH, the metal ion in I-PpoI is not required for substrate alignment but is essential for catalysis. We suspect that the single metal ion helps stabilize the transition state and reduce the electronegative buildup of DNA, thereby promoting DNA hydrolysis.”

Minor typos: p. 2, line 4 from bottom: due to the relatively low resolution...

Thank you for catching this. We have edited the text to “due to the relatively low resolution.”

Figure 4F: What is represented by the pink color?

The structures are color-coded as 320 s at pH 6 (violet), 160 s at pH 7 (yellow), and 20 s at pH 8 (green). We have included the color information in the figure legend and made the labeling clearer in the panel.

p. 9, first paragraph, last line: ...similar to the third...

Thank you for catching this. We have edited the text.
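As background to the anomalous-map responses above, the reported data-collection wavelength can be converted to photon energy via E = hc/λ. A minimal sketch of the conversion (the hc constant and the Mn K-edge value of roughly 6.54 keV are standard reference figures, not taken from the article):

```python
# Convert an X-ray wavelength in Ångström to photon energy in keV
# using E = hc / λ, with hc ≈ 12.3984 keV·Å.
HC_KEV_ANGSTROM = 12.3984

def photon_energy_kev(wavelength_angstrom: float) -> float:
    """Photon energy (keV) for a given X-ray wavelength (Å)."""
    return HC_KEV_ANGSTROM / wavelength_angstrom

# The wavelength reported for the anomalous data collections.
energy = photon_energy_kev(0.9765)
print(f"{energy:.2f} keV")  # 12.70 keV
```

At about 12.7 keV the beam sits well above the Mn K-edge (~6.54 keV), which is consistent with Mn2+ producing a distinct anomalous signal at this wavelength.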


  • Open access
  • Published: 08 August 2024

Evidence for motivational interviewing in educational settings among medical schools: a scoping review

  • Leonard Yik Chuan Lei,
  • Keng Sheng Chew,
  • Chee Shee Chai &
  • Yoke Yong Chen

BMC Medical Education, volume 24, article number 856 (2024)


Motivational interviewing (MI) is a person-centred approach focused on empowering and motivating individuals for behavioural change. Medical students can utilize MI in patient education to engage with patients’ chronic health ailments and maladaptive behaviours. This scoping review was conducted to (1) determine the types of MI (conventional, adapted, brief and group MI) education programs in medical schools, and the delivery modalities and teaching methods used; (2) classify educational outcomes on the basis of Kirkpatrick’s hierarchy; and (3) determine the key elements of MI education via the FRAMES (feedback, responsibility, advice, menu of options, empathy, self-efficacy) model.

This scoping review was conducted via the framework outlined by Arksey and O’Malley. Two online databases, CINAHL and MEDLINE Complete, were searched to identify MI interventions in medical education. Further articles were selected from bibliography lists and the Google Scholar search engine.

From an initial yield of 2019 articles, 19 articles were included. First, the publication dates show a bimodal distribution, with most articles published in two periods: 2004–2008 and 2019–2023. Second, none of the studies included in this review used conventional MI; instead, they utilized a variety of MI adaptation techniques. Third, most studies used face-to-face training in MI, whereas only one study used online delivery. Fourth, most studies used a variety of interactive experiences to teach MI. Next, all studies reported outcomes at Kirkpatrick’s Level 2, but only 4 studies reported outcomes at Kirkpatrick’s Level 3. According to the FRAMES model, all studies (n = 19; 100%) reported the elements of responsibility and advice. The element that was reported the least was self-efficacy (n = 12; 63.1%).

Our findings suggest that motivational interviewing can be taught effectively in medical schools via adaptations to MI and a variety of teaching approaches. However, there is a need for further research investigating standardized MI training across medical schools, the adequate dose for training in MI and the implementation of reflective practices. Future studies may benefit from exploring and better understanding the relationship between MI and self-efficacy in their MI interventions.


Motivational interviewing (MI) is a person-centred approach that focuses on empowering and motivating individuals for behavioural change [1]. Undoubtedly, the empathetic approach of MI in clinical settings fosters a supportive environment that encourages discussion of the benefits of enhanced self-care [2]. In this context, MI practitioners utilize a set of essential skills encapsulated by the acronym “OARS”, which stands for O = open-ended questions, A = affirmations, R = reflections and S = summaries, to promote active listening [3]. MI was developed primarily for the treatment of addiction disorders but has since progressed to include other physical and mental ailments as well [4]. In a study on MI interventions in alcoholism, Miller & Sanchez [68] identified six common motivational elements that should be covered, represented by the acronym “FRAMES”, where F = feedback (e.g., personalized feedback on the impacts of alcoholism on the client’s own experiences, as opposed to providing generic information); R = responsibility (e.g., empowering clients to make their own choices and take responsibility for their change process); A = advice (e.g., effectively given in a nondirective and noncoercive manner); M = menu (e.g., offering a variety of choices on transition methods and plans); E = empathy (e.g., rendering empathic, reassuring and reflective listening); and S = self-efficacy (e.g., supporting clients to succeed in a specified goal). This review used the FRAMES model to determine the key elements of MI education. FRAMES was a predecessor to MI and was initially designed to address drinking problems [5]; however, it is also used in other health issues, such as decreasing stroke risk [6] and substance use screening and brief intervention [7]. The FRAMES model offers a structure that can be used to improve the delivery of MI by ensuring that key elements of MI are present in educational interventions.

Mechanisms of motivational interviewing

Frey et al. [ 8 ] developed the mechanisms of motivational interviewing (MMI) framework, which describes the mechanisms underlying fidelity of practice in MI: a technical component, a relational component and MI-inconsistent practices [ 8 ]. The technical component consists of the interviewer's ability to evaluate the participant's language relating to a specific behaviour-change target and then build a conversation that evokes change talk. The relational component includes respect for the participant's self-determination, appropriate empathy, and equal partnership. MI-inconsistent behaviours include confrontation, offering unsolicited advice, and persuasion. Identifying and understanding these mechanisms of change allows MI users and researchers to focus on them during training, which can improve outcomes and fidelity [ 8 ].

Types of motivational interviewing

MI can be categorized into four types: conventional, adaptive, brief, and group. Conventional MI is an evidence-based approach and directive form of interviewing developed by Miller & Rollnick [ 9 ]. Throughout the course of MI, four important tasks occur: engaging (building mutual relationships), focusing (setting goals), evoking (developing clients’ motivations for change) and planning (negotiating change) [ 9 ]. In this review, the term conventional MI is defined as an approach that utilizes MI-consistent tasks and behaviours in multiple sessions that target an identified population of clients.

Adapted MI consists of culturally sensitive MI and digitally supported interventions that can be used as adjuncts to a primary behavioural program [ 10 ]. This review defines adapted MI as any adaptation that tailors MI culturally to the setting or delivers it through technology (e.g., computers, smartphones, applications, videos and audio). It also includes adaptations made to structured curricula, such as using role plays or real patient interactions to facilitate the learning of MI.

Brief MI varies in length, ranging from 5 to 90 minutes, reflecting the lack of an accepted definition [ 10 ]. This review defines brief MI as MI delivered in brief consultations over typically fewer sessions (e.g., 1–2) than conventional MI (e.g., 3–4 or more).

Group MI applies the MI spirit, processes and methods to groups of clients to increase motivation for change and promote beneficial collaboration among participants and practitioners in a shared setting [ 11 ]. This review defines group MI as MI adapted for a group format that remains MI consistent (e.g., applying MI principles, spirit and techniques in its delivery).

Additionally, MI can be used in patient education to help patients better manage chronic health conditions and maladaptive behaviours. Behavioural change is vital in the recovery course of many mental and physical disorders, as a change to a healthier lifestyle has been shown to significantly decrease chronic disease risk [ 12 ]. More than 120 studies have demonstrated the efficacy of MI in addressing a wide range of problematic behaviours, such as substance abuse and risky behaviour, as well as in promoting healthy behaviours [ 13 ]. There is specific evidence for the effectiveness of MI across different health behaviours (substance abuse, risky behaviours and promoting healthy behaviours) for each type of MI: conventional, adapted, brief and group. For conventional MI, research has shown effectiveness in treating substance abuse [ 14 ], reducing risky behaviours in human immunodeficiency virus (HIV)-positive men [ 15 ] and promoting physical activity in older adults [ 16 ]. Adapted MI has demonstrated effectiveness in reducing alcohol problems in women [ 17 ], reducing risky sexual behaviours and psychological symptoms in HIV-positive older adults [ 18 ] and promoting self-management to reduce BMI and improve lifestyle adherence with a computer assistant [ 19 ]. Brief MI has been effective in reducing alcohol misuse in college students with attention deficit hyperactivity disorder (ADHD) [ 20 ] and improving engagement in physical activity in patients with low physical activity levels [ 21 ]. Group MI is effective in treating drug use among women [ 22 ], reducing risky sexual behaviour among adolescents [ 23 ] and improving self-efficacy and oral health behaviours among pregnant women [ 24 ].

Unhealthy lifestyle-linked behaviours are common preventable risk factors underlying the majority of noncommunicable diseases and their associated mortality and morbidity [ 25 ]. MI provides an approach for healthcare providers to help patients explore and resolve their ambivalence toward changing unhealthy lifestyle behaviour [ 27 ]. Studies have reported the effectiveness of teaching MI to medical students [ 4 , 26 , 28 , 29 , 30 ]. Given the prevalence and widespread application of MI in healthcare settings, it is therefore important that MI be taught in the initial stages of medical education.

In a recent systematic review, Kaltman and Tankersley [ 31 ] reviewed MI curricula in undergraduate medical education (UME) and reported important findings. Their results suggest that participation in an MI curriculum is generally linked to enhanced MI-related knowledge and skills in the short term. They also noted that 1) the MI curricula were heterogeneous; 2) the curricula differed in timing, duration and number of sessions; 3) the studies employed multiple pedagogies; and 4) the quality of the evaluations and research evidence varied. However, their review was limited to MI-specific outcomes such as knowledge, skills, attitudes towards, and self-efficacy in implementing MI. It did not stratify or explore studies by type of MI (conventional, adapted, brief, or group), nor did it investigate the key elements of MI education as described by the FRAMES model. This scoping review aimed to bridge that knowledge gap. Specifically, the objectives of this study were to 1) determine the types of MI education programs in medical schools, the delivery modalities, and the teaching methods used; 2) classify educational outcomes on the basis of Kirkpatrick's hierarchy [ 32 ]; and 3) determine the key elements of MI education covered via the FRAMES model.

This scoping review adopted the five-step methodological framework of Arksey and O'Malley: 1) define the research objectives; 2) identify relevant studies; 3) select studies based on the eligibility criteria; 4) chart and analyse the data; and 5) collate, summarize, and disseminate the results.

Eligibility criteria

Relevant peer-reviewed articles on MI studies conducted in medical education settings, published in academic journals in the English language, with no time limit on publication period, were identified. Studies involving nonmedical students were excluded, as was grey literature such as conference proceedings, technical reports, videos, and informal communications. Studies in languages other than English were also excluded. The search strategy was guided by the methodology of Aromataris and Riitano [ 33 ]. The Boolean operators and keywords used were ("medical education" OR "medical teaching*" OR "medical graduate*" OR "medical postgraduate*" OR "medical student*") AND ("motivational interview*" OR "motivational enhanc*" OR "motivational chang*" OR "motivational behavior") AND ("psycholog*" OR "health*"). The search covered the Medical Literature Analysis and Retrieval System Online (MEDLINE Complete) and the Cumulative Index of Nursing and Allied Health Literature (CINAHL Complete) databases via EBSCOhost, across all study designs (i.e., quantitative, qualitative, and mixed methods). The protocol, including the objectives and eligibility criteria for study selection, was developed a priori before the search was conducted. The reference lists of the selected studies were checked for additional sources, including traditional and systematic reviews. Articles that met the eligibility criteria were selected by consensus among the authors and were charted according to the Preferred Reporting Items for Systematic reviews and Meta-analysis extension for Scoping Reviews (PRISMA-ScR) guidelines [ 34 ]. The first author conducted the searches and screened the articles using the search strategy and the inclusion and exclusion criteria stated above. This process resulted in the identification of 59 articles.
The decision process resulted in 19 studies being included in this review based on the inclusion and exclusion criteria. The data were extracted and charted by the first author. The following data were extracted: 1) the study characteristics of the identified articles (publication year, country of origin, type of MI, and medical student phase) and 2) a detailed description of the key findings of the articles (i.e., author, year, objectives, participants, delivery, duration, teaching methods, assessments, and educational outcomes based on Kirkpatrick's hierarchy). A proforma developed by all the authors was used to extract and chart the data. The study characteristics are charted in Table 1 , and detailed descriptions of the key findings are charted in Table 2 . The other authors assisted in identifying the specific data elements to be charted in Tables 1 and 2 , and all the authors contributed to analysing the charted data to ensure consistency and accuracy. The outcomes of the educational interventions were classified under the four levels of Kirkpatrick's hierarchy. Studies classified as Level 3 consist of simulations and observations of behaviours in activities (e.g., roleplay, standardized patients, real patients) after a learning activity such as a workshop. Although Level 3 is usually linked to students applying what they have acquired in training to job settings, our classification extends to controlled settings simulating real-life applications. The most recent search of MEDLINE Complete, CINAHL Complete and Google Scholar was carried out in October 2023.

From an initial pool of 2,019 articles, after removing duplicates and screening for relevance, 19 articles were included in this review. The detailed selection process is illustrated in the PRISMA flow diagram in Fig. 1 .

Fig. 1 PRISMA flow diagram

Characteristics of the identified articles

The study characteristics, country of origin, and phase of study are presented in Table 1 . Detailed descriptions of the key findings of these articles (i.e., author, year, objectives, participants, delivery, duration, teaching methods, assessments, and educational outcomes based on Kirkpatrick's hierarchy) are provided in Table 2 . Most of the studies were published in two periods, 2004–2008 and 2019–2023, with each period accounting for 31.5% of the total articles. The majority of MI studies originated from the US (57.8%).

Types and characteristics of MI

With respect to the first research objective, none of the 19 studies in this scoping review conducted conventional MI. Rather, most studies in this scoping review used adapted MI ( n =8; 42.1%) [ 4 , 36 , 38 , 42 , 44 , 46 , 47 , 49 ], followed by group MI ( n =7; 36.8%) [ 26 , 29 , 35 , 40 , 45 , 48 , 39 ] and brief MI ( n =4; 21%) [ 37 , 41 , 43 , 50 ].

Adapted MI was utilized in 8 studies. This approach includes any adaptation that tailors MI culturally to the setting or delivers it through technology (e.g., computers, smartphones, applications, videos and audio), as well as adaptations made to structured curricula, such as using role plays with standardized patients or real patient interactions to facilitate the learning of MI. Specifically, 5 studies [ 36 , 38 , 42 , 44 , 47 ] adapted their curricula to teach MI via role play with standardized patients or real patients, and 3 studies [ 4 , 46 , 49 ] utilized technological adaptations and blended learning (face-to-face and online).

Group MI, which adapts MI for a group format while remaining MI consistent (e.g., applying MI principles, spirit and techniques in its delivery), was carried out in 7 studies. Two studies [ 26 , 45 ] used training workshops to teach and practice MI in smaller groups. The remaining 5 studies [ 29 , 35 , 39 , 40 , 48 ] used a small-group format consisting of lectures, roleplay, a case-based curriculum and demonstrations.

Brief MI provides brief consultations over typically fewer sessions (e.g., 1–2) than conventional MI (e.g., 3–4 or more) and was conducted in 4 studies. Two studies [ 41 , 51 ] delivered a single session of MI training within two hours. Another study [ 50 ] conducted four 10–15-minute sessions, totalling less than 1 hour of training. Opheim et al. [ 43 ] conducted a four-hour workshop on MI, a relatively brief training intervention.

More than half of the studies focused on clinical medical students ( n =10; 52.6%) [ 4 , 35 , 37 , 38 , 41 , 42 , 43 , 45 , 46 , 49 ], and the least studied group was the combination of preclinical and clinical students ( n =2; 10.5%) [ 40 , 47 ]. The number of participants ranged from 17 to 339 students, with a median of 93. The most common delivery mode was face-to-face learning ( n =15; 78.9%) [ 26 , 29 , 35 , 36 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 47 , 48 , 51 ], followed by blended learning ( n =3; 15.7%) [ 4 , 46 , 49 ]; the least common was online learning ( n =1; 5.2%) [ 50 ]. The duration of intervention for brief MI ( n =4; 21.0%) [ 37 , 41 , 43 , 50 ] ranged from 10 minutes to 2 hours per session, whereas the durations of adapted MI ( n =8; 42.1%) [ 4 , 36 , 38 , 42 , 44 , 46 , 47 , 49 ] and group MI ( n =7; 36.8%) [ 26 , 29 , 35 , 40 , 39 , 45 , 48 ] ranged from 3 to 12 hours. The teaching methods included workshops, lectures, videos, role plays, demonstrations, interviews, interactive exercises, small- and large-group activities, simulated patients, and online forums.

Classifying educational outcomes based on Kirkpatrick’s hierarchy

With respect to the second research objective (i.e., classifying educational outcomes on the basis of Kirkpatrick's hierarchy [ 32 ]), all 19 studies [ 4 , 26 , 29 , 35 , 36 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 ] were categorized at Kirkpatrick's Level 2 (knowledge/skills/attitudes), and 16 of the 19 studies [ 4 , 26 , 29 , 35 , 36 , 39 , 40 , 43 , 44 , 45 , 46 , 47 , 48 , 49 , 50 , 51 ] were categorized at Kirkpatrick's Level 1. Only 4 of the 19 studies [ 35 , 38 , 41 , 47 ] were categorized at Kirkpatrick's Level 3 (behaviour). One study [ 38 ] compared the effectiveness of standardized patients versus role play with colleagues and reported that both were equally effective for teaching basic MI skills to medical students; the students were evaluated in a simulated environment, demonstrating their MI skills in student roleplay or with standardized patients. Because the sessions focused on demonstrating skills in a simulated setting, the students' behaviour (i.e., adherence to MI skills) was evaluated and improved via the educational intervention. In another study, Bell et al. [ 35 ] investigated the use of a curriculum teaching medical students the principles of MI to increase their knowledge, skills and confidence in counselling patients for health behaviour change; video-recorded interactions between students and patients indicated that students could effectively apply MI skills with real patients. None of the included studies reported outcomes at Level 4 (results).

Key elements of the reported FRAMES model and assessment methods used

With respect to the third research objective, all 6 elements of the FRAMES model were covered in 9 of the 19 studies [ 4 , 29 , 35 , 36 , 39 , 40 , 44 , 45 , 51 ], 5 elements were identified in another 4 studies [ 26 , 41 , 48 , 49 ], and 4 elements were identified in 4 studies [ 42 , 43 , 46 , 47 ]. The most reported elements, covered in all 19 studies, were responsibility and advice ( n =19; 100%), and the least reported element was self-efficacy, covered in only 12 studies ( n =12; 63.1%) [ 4 , 29 , 35 , 36 , 39 , 40 , 41 , 44 , 45 , 46 , 48 , 51 ]. Figure 2 shows additional details on the key elements present in the MI interventions.

Fig. 2 Important elements of MI interventions ( n  = 19) identified as “reported” via the FRAMES model

The primary assessment method used across the studies was pre- and posttest surveys, which measured knowledge ( n =10; 52.6%), skills ( n =5; 26.3%) and attitudes ( n =3; 15.8%) pertaining to MI. The specific instruments employed for focused assessments were (1) the Motivational Interviewing Treatment Integrity (MITI) scale, measuring fidelity of MI, in 5 of 19 studies ( n =5; 26.3%); (2) the Video Assessment of Simulated Encounters (VASE-R), measuring MI skills, in 2 studies ( n =2; 10.5%); (3) the Behaviour Change Counselling Index (BECCI), measuring the practitioner's skill and competence in delivering effective MI, in 2 studies ( n =2; 10.5%); (4) the Objective Structured Clinical Examination (OSCE), measuring clinical competence, in 2 studies ( n =2; 10.5%); (5) the Motivational Interviewing Knowledge and Attitudes Test (MIKAT), measuring the practitioner's knowledge of and attitudes toward MI, in 1 study ( n =1; 5.2%); (6) the Motivational Interviewing Skill Code (MISC), measuring adherence to MI, in 1 study ( n =1; 5.2%); (7) the Calgary-Cambridge Observation Guide (C-CG), measuring communication skills between practitioners and patients, in 1 study ( n =1; 5.2%); (8) the Motivational Interviewing Confidence Scale (MICS), measuring confidence in health behaviour change dialogues, in 1 study ( n =1; 5.2%); and (9) the Jefferson Scale of Physician Empathy (JSPE), measuring empathy in patient care among health practitioners, in 1 study ( n =1; 5.2%).

Our scoping review sheds light on current trends and key findings regarding the types of MI education programs in medical schools and the delivery modalities and teaching methods used, classifies educational outcomes on the basis of Kirkpatrick's hierarchy [ 32 ], and determines the key elements of MI education covered via the FRAMES model. First, there appears to be a bimodal distribution of published articles across the two periods 2004–2008 and 2019–2023. Second, none of the included studies used conventional MI; instead, they utilized a variety of MI adaptations. Third, most studies used face-to-face training in MI, whereas only one study used online delivery. Fourth, most studies used a variety of interactive experiences to teach MI. Next, all studies reported outcomes at Kirkpatrick's Level 2, but only 4 studies reported outcomes at Kirkpatrick's Level 3. Finally, the most covered elements of MI training were responsibility and advice ( n =19; 100%), and the least covered was self-efficacy ( n =12; 63.1%) [ 4 , 29 , 35 , 36 , 39 , 40 , 41 , 44 , 45 , 46 , 48 , 51 ]. This review expands the evidence on MI interventions in medical schools. Our findings generally suggest that MI can be taught effectively in medical schools. Furthermore, we provide several recommendations for further research to improve the implementation of MI in medical schools.

There appears to be a bimodal distribution of published articles across the two periods, i.e., between 2004 and 2008 and between 2019 and 2023, with a decline in publications between 2009 and 2019. This decline could be due to the shift in the applications of MI beyond treating addictive behaviours to a broad range of other behavioural conditions [ 52 ], such as school education [ 53 , 54 , 55 ], lifestyle coaching [ 56 , 57 , 58 ], probation and parole [ 59 , 60 ] and digital health care and telemedicine [ 61 , 62 ]. From 2019 onwards, however, there was an increasing trend in the number of published articles on MI training for medical students. This could be attributed to the MI Network of Trainers (MINT) providing virtual MI training during the COVID-19 pandemic in 2020 and 2021 [ 52 ], which facilitated remote participation.

Types of MI education programs in medical schools

None of the studies included in this review used conventional MI; instead, they utilized a variety of MI adaptations. Most studies [ 4 , 36 , 38 , 42 , 44 , 46 , 47 , 49 ] used adapted MI for their MI training, possibly because of the need to tailor MI programs to fit medical school curricula. Medical students carry extensive academic responsibilities and clinical rotations [ 63 ], which contribute to this adaptation of MI. Indeed, the lack of harmonization of training methods among medical schools has made it difficult to identify the optimal approach to teaching MI to medical students [ 31 ]. Furthermore, there is no consensus on the standard dose of MI training that is adequate or mandatory for learners to acquire sufficient skilfulness in the practice of MI [ 9 ]. Moreover, medical schools face time constraints and limited MI teaching opportunities because of their hectic curriculum schedules [ 41 ]. This may explain the variety of MI adaptations noted in this review. Future research could address the lack of harmonization in MI training methods and emphasize building and employing standardized MI training with adequate dosing across medical schools.

Delivery modalities and teaching methods used

In the present review, the delivery modalities used to train medical students in MI varied across studies. Most studies [ 26 , 29 , 35 , 36 , 38 , 39 , 40 , 41 , 42 , 43 , 44 , 45 , 47 , 48 , 51 ] delivered face-to-face training on MI to clinical medical students. This aligns with the current literature, which suggests that MI is a complex communication skill [ 57 ] that is reported to be taught more effectively in face-to-face sessions [ 64 ]. Only one study [ 50 ] used a fully online approach to teach MI to medical students. A systematic review suggested that for an online MI intervention to be effective, significant emphasis on fidelity and training procedures is required [ 65 ]. In a recent comparative study, Schaper et al. [ 66 ] reported similar effects of online and face-to-face training on MI skills and spirit among general practitioners. Future studies could compare online versus face-to-face MI training for medical students, with an emphasis on fidelity and training procedures.

A large proportion of the studies in this review used a variety of teaching approaches (e.g., workshops, role play, standardized patients, and small- and large-group sessions) to teach MI. This aligns with Kolb's experiential learning cycle [ 67 ], in which learning occurs as knowledge is formed through the transformation of experience. The model comprises four phases: concrete experience (having an experience), reflective observation (reflecting on the experience), abstract conceptualization (learning from the experience), and active experimentation (applying what has been learned). Medical students who engage in Kolb's learning cycle [ 67 ] via interactive activities, reflection and simulated or real-life settings are likely to develop good MI skills. Future research should ground MI training in educational theory, for example by implementing structured reflective exercises in MI education.

Educational outcomes based on Kirkpatrick’s hierarchy

Our review shows that all the studies reported outcomes at Kirkpatrick's Level 2, suggesting that medical students acquired the intended knowledge, skills, and attitudes. Only 4 studies reported outcomes at Kirkpatrick's Level 3, which evaluates the degree to which students apply their learning in simulated or real-world settings. Three of these studies [ 38 , 41 , 47 ] demonstrated behavioural improvement by observing students applying their learned skills in realistic settings with standardized or real patients. The fourth study [ 35 ] revealed improvements in students' MI skills with real patients in diverse settings, such as traditional health behaviour interventions targeting alcohol, tobacco and weight loss. Future studies should include longitudinal evaluations of the effectiveness of MI skills.

Key elements of MI education covered via the FRAMES model

According to the FRAMES model [ 68 ], all the included studies reported the elements of responsibility and advice ( n =19; 100%) in MI training. In this context, responsibility refers to responsibility for the learner's growth being shared between learner and teacher. This could be attributed to the move towards competency-based medical education, which emphasizes shared responsibility among students while incorporating student-centric learning techniques and formative assessment as vital elements of the learning process [ 69 ]. In other words, the high reporting of 'responsibility' and 'advice' suggests that current MI training places significant emphasis on medical students taking ownership of their learning and decision-making processes ('responsibility'). Moreover, from a patient education perspective, empowering patients to take ownership of their health [ 70 ] and effectively guiding them toward positive behavioural changes through good advice delivered in a nonconfrontational manner is a basic tenet of MI ('advice').

The least reported element in the included studies [ 4 , 29 , 35 , 36 , 39 , 40 , 41 , 44 , 45 , 46 , 48 , 51 ] was self-efficacy. This may be because MI training focuses less on self-efficacy and instead emphasizes other elements, such as empathy, open-ended questioning and reflective listening. An educational theory linked to self-efficacy is social cognitive theory, within which self-efficacy is defined as a person's belief in their ability to carry out the behaviours required to reach desired goals and their perception of their ability and skills to manage their environment [ 71 , 72 ]. Continued research into integrating social cognitive theory into MI training could help practitioners comprehend the role and importance of self-efficacy in behaviour change and reflective practice. The lower reporting of 'self-efficacy' might also indicate a potential gap in MI training: self-efficacy is essential because it relates to the practitioner's confidence in their ability to implement MI techniques effectively and facilitate behaviour change in patients. Addressing this gap in future research could produce more competent and confident practitioners who are better equipped to handle challenging patient interactions and support positive health outcomes. Future studies can also use FRAMES to guide research design and interventions and to investigate which aspects of FRAMES in MI training are most effective within the limited time frame of medical curricula.

Limitations

This scoping review is subject to several limitations. We included only English-language studies in which medical students were the target participants. We did not include grey literature or other non-peer-reviewed articles, which might have biased the outcomes. Most of the studies focused on evaluating learner knowledge and skills in MI, which might limit conclusions about the practical application of MI with real patients. The first author alone conducted the search and screening of the articles, which may introduce selection bias and reduce the reliability of the study selection process. The protocol for this review was developed before the search was initiated but was not registered or published online, which increases the risk of selective reporting. The database search was limited to MEDLINE Complete and CINAHL Complete, accessed via EBSCOhost, and the search engine Google Scholar; other relevant databases, such as PsycINFO and ERIC, were not included, potentially resulting in missed articles. Kirkpatrick's hierarchy was used to assess educational outcomes, an approach that may neglect other core aspects of educational interventions. Furthermore, although the search spanned various countries, most of the included studies were from the USA ( n =11; 57.8%) or Germany ( n =4; 21.0%); the lack of studies from other regions may limit the generalizability of the findings.

Our findings suggest that motivational interviewing can be taught effectively in medical schools via adaptations of MI and a variety of teaching approaches. However, further research is needed on standardized MI training across medical schools, the adequate dose of MI training and the implementation of reflective practices supported by educational learning theories. Longitudinal studies could assess the long-term effectiveness of MI training. Future studies may also benefit from exploring and better understanding the relationship between MI and self-efficacy in their MI interventions. The FRAMES model can be used to guide research and to explore which of its aspects are optimally delivered within the limited time frame of medical curricula.

Availability of data and materials

All data generated or analysed during this study are included in this published article.

Abbreviations

Attention-deficit/hyperactivity disorder

Behaviour Change Counselling Index

Brief motivational interviewing

Course Experience Questionnaire

Calgary-Cambridge Observation Guide

Cumulative Index of Nursing and Allied Health Literature

Feedback, Responsibility, Advice, Menu of Options, Empathy, Self-Efficacy

Human immunodeficiency virus

Helpful response questionnaire

Jefferson Scale of Physician Empathy

Large Group Activities

Learning Outcomes Questionnaire

Medical Literature Analysis and Retrieval System Online

Motivational interviewing

Motivational Interviewing Confidence Scale

Motivational Interviewing Knowledge and Attitudes Test

Motivational interviewing network of trainers

Motivational interviewing skill code

Motivational interviewing treatment integrity

Mechanisms of Motivational Interview

O = open-ended questions, A = affirmations, R = reflections, and S = summaries to promote active listening

Objective Structured Clinical Examination

Preferred Reporting Items for Systematic reviews and Meta-analysis extension for Scoping Reviews guidelines

Small Group Activities

Simulated Patient

Tobacco Intervention Basic Skills

Theory of Planned Behaviour

Video Assessment of the Simulated Encounter

Spencer JC, Wheeler SB. A systematic review of Motivational Interviewing interventions in cancer patients and survivors. Patient Educ Couns. 2016;99(7):1099–105.


Gabbay RA, Kaul S, Ulbrecht J, Scheffler NM, Armstrong DG. Motivational interviewing by podiatric physicians: a method for improving patient self-care of the diabetic foot. J Am Podiatr Med Assoc. 2011;101(1):78–84.

Miller W, Rollnick S. Motivational interviewing: preparing people for change. 2nd ed. New York: The Guilford Press; 2002.


Erschens R, Fahse B, Festl-Wietek T, Herrmann-Werner A, Keifenheim KE, Zipfel S, et al. Training medical students in motivational interviewing using a blended learning approach: a proof-of-concept study. Front Psychol. 2023;14:1204810.

Searight HR. Counseling patients in primary care: evidence-based strategies. Am Fam Phys. 2018;98(12):719–28.

Miller ET, Spilker J. Readiness to change and brief educational interventions: successful strategies to reduce stroke risk. J Neurosci Nurs. 2003;35(4):215–22.

Jaguga F, Ott MA, Kwobah EK, Apondi E, Giusto A, Barasa J, et al. Adapting a substance use screening and brief intervention for peer-delivery and for youth in Kenya. SSM Ment Health. 2023;4:100254.

Frey AJ, Lee J, Small JW, Sibley M, Owens JS, Skidmore B, et al. Mechanisms of motivational interviewing: a conceptual framework to guide practice and research. Prev Sci. 2021;22(6):689–700.

Miller WR, Rollnick S. Motivational interviewing: Helping people change and grow. New York: Guilford Publications; 2023.

Rimayanti MU, O’Halloran PD, Shields N, Morris R, Taylor NF. Comparing process evaluations of motivational interviewing interventions for managing health conditions and health promotions: a scoping review. Patient Educ Counsel. 2022;105(5):1170–80.

Centis E, Petroni ML, Ghirelli V, Cioni M, Navacchia P, Guberti E, et al. Motivational interviewing adapted to group setting for the treatment of relapse in the behavioral therapy of obesity. A clinical audit. Nutrients. 2020;12(12):3881.

Ford ES, Bergmann MM, Kröger J, Schienkiewitz A, Weikert C, Boeing H. Healthy living is the best revenge: findings from the European Prospective Investigation Into Cancer and Nutrition-Potsdam study. Arch Intern Med. 2009;169(15):1355–62.

Lundahl BW, Kunz C, Brownell C, Tollefson D, Burke BL. A meta-analysis of motivational interviewing: twenty-five years of empirical studies. Res Soc Work Pract. 2010;20:137–60.

Carroll KM, Ball SA, Nich C, Martino S, Frankforter TL, Farentinos C, et al. Motivational interviewing to improve treatment engagement and outcome in individuals seeking treatment for substance abuse: a multisite effectiveness study. Drug Alcohol Depend. 2006;81(3):301–12.

Rongkavilit C, Wang B, Naar-King S, Bunupuradah T, Parsons JT, Panthong A, et al. Motivational interviewing targeting risky sex in HIV-positive young Thai men who have sex with men. Arch Sex Behav. 2015;44(2):329–40.

Sönmez Sari E, Kitiş Y. The effect of nurse-led motivational interviewing based on the trans-theoretical model on promoting physical activity in healthy older adults: a randomized controlled trial. Int J Nurs Pract. 2024;30(2):e13252.

Polcin D, Witbrodt J, Nayak MB, Korcha R, Pugh S, Salinardi M. Characteristics of women with alcohol use disorders who benefit from intensive motivational interviewing. Subst Abus. 2022;43(1):23–31.

Lovejoy TI. Telephone-delivered motivational interviewing targeting sexual risk behavior reduces depression, anxiety, and stress in HIV-positive older adults. Ann Behav Med. 2012;44(3):416–21.

Blanson Henkemans OA, van der Boog PJ, Lindenberg J, van der Mast CA, Neerincx MA, Zwetsloot-Schonk BJ. An online lifestyle diary with a persuasive computer assistant providing feedback on self-management. Technol Health Care. 2009;17(3):253–67.

Meinzer MC, Oddo LE, Vasko JM, Murphy JG, Iwamoto D, Lejuez CW, et al. Motivational interviewing plus behavioral activation for alcohol misuse in college students with ADHD. Psychol Addict Behav. 2021;35(7):803–16.

Waite I, Grant D, Mayes J, Greenwood S. Can a brief behavioural change intervention encourage hospital patients with low physical activity levels to engage and initiate a change in physical activity behaviour? Physiotherapy. 2020;108:22–8.

Oveisi S, Stein L, Babaeepour E, Araban M. The impact of motivational interviewing on relapse to substance use among women in Iran: a randomized clinical trial. BMC Psychiatry. 2020;20:1–7.

Schmiege SJ, Magnan RE, Yeater EA, Ewing SWF, Bryan AD. Randomized trial to reduce risky sexual behavior among justice-involved adolescents. Am J Prev Med. 2021;60(1):47–56.

Saffari M, Sanaeinasab H, Mobini M, Sepandi M, Rashidi-Jahan H, Sehlo MG, et al. Effect of a health-education program using motivational interviewing on oral health behavior and self-efficacy in pregnant women: a randomized controlled trial. Eur J Oral Sci. 2020;128(4):308–16.

Riley L, Guthold R, Cowan M, Savin S, Bhatti L, Armstrong T, et al. The World Health Organization STEPwise approach to noncommunicable disease risk-factor surveillance: methods, challenges, and opportunities. Am J Public Health. 2016;106(1):74–8.

D’Urzo KA, Flood SM, Baillie C, Skelding S, Dobrowolski S, Houlden RL, et al. Evaluating the implementation and impact of a motivational interviewing workshop on medical student knowledge and social cognitions towards counseling patients on lifestyle behaviors. Teach Learn Med. 2020;32(2):218–30.

Levensky ER, Forcehimes A, O’Donohue WT, Beitz K. Motivational interviewing: an evidence-based approach to counseling helps patients follow treatment recommendations. Am J Nurs. 2007;107(10):50–8.

Berger DJ, Nickolich S, Nasir M. Introduction to tobacco cessation and motivational interviewing: evaluation of a lecture and case-based learning activity for medical students. Cureus. 2024;16(2):e53704.

Edwards EJ, Arora B, Green P, Bannatyne AJ, Nielson T. Teaching brief motivational interviewing to medical students using a pedagogical framework. Patient Educ Counsel. 2022;105(7):2315–9.

Manojna GS, Madhavi BD. Effectiveness of teaching motivational interview technique to third professional year medical students to improve counseling skills–An interventional study. MRIMS J Health Sci. 2024:10.4103. [Epub ahead of print] June 19, 2024.

Kaltman S, Tankersley A. Teaching motivational interviewing to medical students: a systematic review. Acad Med. 2020;95(3):458–69.

Kirkpatrick JD, Kirkpatrick WK. Kirkpatrick’s four levels of training evaluation. Alexandria, VA: Association for Talent Development; 2016.

Aromataris E, Riitano D. Constructing a search strategy and searching for evidence. A guide to the literature search for a systematic review. Am J Nurs. 2014;114(5):49–56.

Tricco A, Lillie E, Zarin W, O’Brien K, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Ann Intern Med. 2018;169:467–73.

Bell K, Cole BA. Improving medical students’ success in promoting health behavior change: a curriculum evaluation. J Gen Intern Med. 2008;23(9):1503–6.

Brown RL, Pfeifer JM, Gjerde CL, Seibert CS, Haq CL. Teaching patient-centered tobacco intervention to first-year medical students. J Gen Intern Med. 2004;19(5):534–9.

Martino S, Haeseler F, Belitsky R, Pantalon M, Fortin AI. Teaching brief motivational interviewing to Year three medical students. Med Educ. 2007;41(2):160–7.

Mounsey AL, Bovbjerg V, White L, Gazewood J. Do students develop better motivational interviewing skills through role-play with standardised patients or with student colleagues? Med Educ. 2006;40(8):775–80.

Poirier MK, Clark MM, Cerhan JH, Pruthi S, Geda YE, Dale LC. Teaching motivational interviewing to first-year medical students to improve counseling skills in health behavior change. Mayo Clin Proc. 2004;79:327–31.

White LL, Gazewood JD, Mounsey AL. Teaching students behavior change skills: description and assessment of a new Motivational interviewing curriculum. Med Teach. 2007;29(4):e67-71.

Haeseler F, Fortin AHT, Pfeiffer C, Walters C, Martino S, Haeseler F, et al. Assessment of a motivational interviewing curriculum for year 3 medical students using a standardized patient case. Patient Educ Counsel. 2011;84(1):27–30.

Lim BT, Moriarty H, Huthwaite M. ‘Being-in-role’: a teaching innovation to enhance empathic communication skills in medical students. Med Teach. 2011;33(12):e663-9.

Opheim A, Andreasson S, Eklund AB, Prescott P. The effects of training medical students in motivational interviewing. Health Educ J. 2009;68(3):170–8.

Brogan Hartlieb K, Engle B, Obeso V, Pedoussaut MA, Merlo LJ, Brown DR. Advanced patient-centered communication for health behavior change: motivational interviewing workshops for medical learners. MedEdPORTAL. 2016;12:10455.

Gecht-Silver M, Lee D, Ehrlich-Jones L, Bristow M. Evaluation of a motivational interviewing training for third-year medical students. Fam Med. 2016;48(2):132–5.

Kaltman S, WinklerPrins V, Serrano A, Talisman N. Enhancing motivational interviewing training in a family medicine clerkship. Teach Learn Med. 2015;27(1):80–4.

Purkabiri K, Steppacher V, Bernardy K, Karl N, Vedder V, Borgmann M, et al. Outcome of a four-hour smoking cessation counselling workshop for medical students. Tob Induc Dis. 2016;14:37.

Jacobs NN, Calvo L, Dieringer A, Hall A, Danko R. Motivational interviewing training: a case-based curriculum for preclinical medical students. MedEdPORTAL. 2021;17:11104.

Keifenheim KE, Velten-Schurian K, Fahse B, Erschens R, Loda T, Wiesner L, et al. “A change would do you good”: Training medical students in Motivational Interviewing using a blended-learning approach - a pilot evaluation. Patient Educ Counsel. 2019;102(4):663–9.

Plass AM, Covic A, Lohrberg L, Albright G, Goldman R, Von Steinbüchel N. Effectiveness of a minimal virtual motivational interviewing training for first years medical students: differentiating between pre-test and then-test. Patient Educ Counsel. 2022;105(6):1457–62.

Martino S, Haeseler F, Belitsky R, Pantalon M, Fortin AHT. Teaching brief motivational interviewing to Year three medical students. Med Educ. 2007;41(2):160–7.

Miller WR. The evolution of motivational interviewing. Behav Cogn Psychother. 2023:1–17.

Small JW, Frey A, Lee J, Seeley JR, Scott TM, Sibley MH. Fidelity of motivational interviewing in school-based intervention and research. Prev Sci. 2021;22(6):712–21.

Rollnick S, Kaplan SG, Rutschman R. Motivational interviewing in schools: conversations to improve behavior and learning. Guilford Publications; 2016.

Pincus R, Bridges CW, Remley TP. School counselors using motivational interviewing. J Prof Counsel Pract Theory Res. 2018;45(2):82–94.

Lin CH, Chiang SL, Heitkemper MM, Hung YJ, Lee MS, Tzeng WC, et al. Effects of telephone-based motivational interviewing in lifestyle modification program on reducing metabolic risks in middle-aged and older women with metabolic syndrome: a randomized controlled trial. Int J Nurs Stud. 2016;60:12–23.

Dobber J, Latour C, Snaterse M, van Meijel B, ter Riet G, Scholte op Reimer W, et al. Developing nurses’ skills in motivational interviewing to promote a healthy lifestyle in patients with coronary artery disease. Eur J Cardiovasc Nurs. 2019;18(1):28–37.

Almansour M, AlQurmalah SI, Abdul Razack HI. Motivational interviewing-an evidence-based, collaborative, goal-oriented communication approach in lifestyle medicine: a comprehensive review of the literature. J Taibah Univ Med Sci. 2023;18(5):1170–8.

Polcin DL, Korcha R, Witbrodt J, Mericle AA, Mahoney E. Motivational Interviewing Case Management (MICM) for persons on probation or parole entering sober living houses. Crim Justice Behav. 2018;45(11):1634–59.

Sarpavaara H. The causes of change and no-change in substance users’ talk during motivational interviewing in the probation service in Finland. Int J Offender Ther Comp Criminol. 2017;61(4):430–44.

Bottel L, te Wildt BT, Brand M, Pape M, Herpertz S, Dieris-Hirche J. Telemedicine as bridge to the offline world for person affected with problematic internet use or internet use disorder and concerned significant others. Digit Health. 2023;9:20552076221144184.

Creech SK, Pulverman CS, Kahler CW, Orchowski LM, Shea MT, Wernette GT, et al. Computerized intervention in primary care for women veterans with sexual assault histories and psychosocial health risks: a randomized clinical trial. J Gen Intern Med. 2022;37(5):1097–107.

Duthie CJ, Cameron C, Smith-Han K, Beckert L, Delpachitra S, Garland SN, et al. Reasons for why medical students prefer specific sleep management strategies. Behav Sleep Med. 2024;22(4):516–29.

Clancy R, Taylor A. Engaging clinicians in motivational interviewing: comparing online with face-to-face post-training consolidation. Int J Ment Health Nurs. 2016;25(1):51–61.

Frost H, Campbell P, Maxwell M, O’Carroll RE, Dombrowski SU, Williams B, et al. Effectiveness of Motivational Interviewing on adult behaviour change in health and social care settings: a systematic review of reviews. PloS one. 2018;13(10):e0204890.

Schaper K, Woelber JP, Jaehne A. Can the spirit of motivational interviewing be taught online? A comparative study in general practitioners. Patient Educ Counsel. 2024;125:108297.

Kolb DA. Experiential learning: experience as the source of learning and development. Upper Saddle River: Prentice Hall; 1984.

Miller WR, Sanchez VC. Motivating young adults for treatment and lifestyle change. 1994.

Frank JR, Mungroo R, Ahmad Y, Wang M, De Rossi S, Horsley T. Toward a definition of competency-based education in medicine: a systematic review of published definitions. Med Teach. 2010;32(8):631–7.

Lim E, Wynaden D, Heslop K. Using Q-methodology to explore mental health nurses’ knowledge and skills to use recovery-focused care to reduce aggression in acute mental health settings. Int J Ment Health Nurs. 2021;30(2):413–26.

Bandura A. Self-efficacy: toward a unifying theory of behavioral change. Psychol Rev. 1977;84(2):191.

Lönnfjord V, Hagquist C. The psychometric properties of the Swedish version of the general self-efficacy scale: a Rasch analysis based on adolescent data. Curr Psychol. 2018;37:703–15.


Acknowledgements

The authors would like to thank Universiti Malaysia Sarawak for the support provided for this publication.

Open Access funding provided by Universiti Malaysia Sarawak.

Author information

Authors and Affiliations

Faculty of Medicine and Health Sciences, Universiti Malaysia Sarawak (UNIMAS), Kota Samarahan, Sarawak, 94300, Malaysia

Leonard Yik Chuan Lei, Keng Sheng Chew, Chee Shee Chai & Yoke Yong Chen


Contributions

LLYC, the first author, made significant contributions to developing the idea, conducting the searches and analysis, and drafting the manuscript. KSC and CYY contributed significantly to the conceptualization, alignment and review of the manuscript. KSC, CYY and CCS participated in the analysis and writing of the manuscript. All authors have read and approved the final manuscript.

Corresponding author

Correspondence to Leonard Yik Chuan Lei .

Ethics declarations

Ethics approval and consent to participate

This study did not require ethical approval or consent to participate.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Lei, L., Chew, K., Chai, C. et al. Evidence for motivational interviewing in educational settings among medical schools: a scoping review. BMC Med Educ 24 , 856 (2024). https://doi.org/10.1186/s12909-024-05845-w


Received : 11 April 2024

Accepted : 30 July 2024

Published : 08 August 2024

DOI : https://doi.org/10.1186/s12909-024-05845-w


  • Scoping review
  • Motivational behaviour
  • Motivational change
  • Motivational enhancement
  • Medical education
  • Medical teaching

BMC Medical Education

ISSN: 1472-6920


  • Open access
  • Published: 07 August 2024

Management training programs in healthcare: effectiveness factors, challenges and outcomes

  • Lucia Giovanelli 1 ,
  • Federico Rotondo 2 &
  • Nicoletta Fadda 1  

BMC Health Services Research volume 24, Article number: 904 (2024)


Different professionals working in healthcare organizations (e.g., physicians, veterinarians, pharmacists, biologists, engineers, etc.) must be able to properly manage scarce resources to meet increasingly complex needs and demands. Due to the lack of specific courses in curricular university education, particularly in the field of medicine, management training programs have become an essential element in preparing health professionals to cope with global challenges. This study aims to examine factors influencing the effectiveness of management training programs and their outcomes in healthcare settings, at middle-management level, in general and by different groups of participants: physicians and non-physicians, participants with or without management positions.

A survey was used to gather information from a purposive sample of healthcare professionals attending management training programs in Italy. Factor analysis, a set of ordinal logistic regressions and an unpaired two-sample t-test were used for data analysis.
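As a minimal illustration of the unpaired two-sample t-test named in the methods (with entirely synthetic data; the group labels and numbers are illustrative and do not come from the study), one could compare a survey outcome between participants with and without management positions:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical Likert-style outcome scores for two participant groups
with_position = rng.normal(loc=4.0, scale=0.6, size=60)
without_position = rng.normal(loc=3.6, scale=0.6, size=80)

# Welch's unpaired two-sample t-test (no equal-variance assumption)
t_stat, p_value = stats.ttest_ind(with_position, without_position,
                                  equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

A p-value below the chosen significance level would support a group difference in the outcome; the paper's actual analysis also used factor analysis and ordinal logistic regressions, which this sketch does not reproduce.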

The findings show that diversity of pedagogical approaches and tools, debate, and class homogeneity are important effectiveness factors. Lower competencies held before the training program and problems of dialogue and discussion during the course are conducive to the introduction of innovative practices. Interpersonal and career outcomes are greater for those holding management positions.

Conclusions

The study reveals four profiles of participants with different gaps and needs. Training programs should be tailored to participants’ profiles, in terms of pedagogical approaches and tools, and should preserve class homogeneity in terms of professional background and management level to facilitate constructive dialogue and a solution-finding approach.


Several healthcare systems worldwide have identified management training as a precondition for developing appropriate strategies to address global challenges: on the one hand, poor health service outcomes despite increased health expenditure (particularly for pharmaceuticals), personnel shortages and low productivity; on the other, unequal quality of, and access to, healthcare across the population [ 1 ]. The sustainability of health systems itself seems to depend on leaders, at all levels of health organizations, who can properly manage scarce resources to meet increasingly complex health needs and demands, while motivating health personnel under growing stress and steering their behaviors towards the system’s goals, in order to drive the transition towards more decentralized, interorganizational and patient-centered care models [ 2 ].

Recently, professional training as an activity aimed at increasing learning of new capabilities (reskilling) and improving existing ones (upskilling) during the lifetime of individuals (lifelong learning) has been identified by the European Commission as one of the seven flagship programs to be developed in the National Recovery and Resilience Plans (NRRP) to support the achievement of European Union’s goals, such as green and digital transitions, innovation, economic and social inclusion and occupation [ 3 ]. As a consequence, many member states have implemented training programs to face current and future challenges in health, which often represents a core mission in their NRRPs.

The increased importance of developing management training programs is also related to the rigidity and narrow focus of university degree courses in medicine, which do not provide physicians with the basic tools for fulfilling managerial roles [ 4 ]. Furthermore, taking on these roles does not automatically fill existing gaps in management capabilities and skills [ 5 ]. Several studies have demonstrated that, in the health setting, management competencies are influenced by positions and management levels as well as by organizational and system features [ 6 , 7 ]. Hence, training programs aimed at increasing management competencies cannot be developed without considering these differences.

To date, few studies have focused on investigating management training programs in healthcare [ 8 ]. In particular, much more investigation is required on methods, contents, processes and challenges determining the effectiveness of training programs addressed to health managers by taking into account different environments, positions and management levels [ 1 ]. A gap also exists in the assessment of management training programs’ outcomes [ 9 ]. This study aims to examine factors influencing the effectiveness and outcomes of management training, at the middle-management level, in healthcare. It intends to answer the following research questions: which factors influence the management training process? Which relationships exist between management competencies held before the program, factors of effectiveness, critical issues encountered, and results achieved or prefigured at the end of the program? Are there differences, in terms of factors of effectiveness, challenges and outcomes, between the following groups of management training programs’ participants: physicians and non-physicians, participants with or without management positions?

Management training in healthcare

Currently, there is a wide debate about the added value of management to health organizations [ 10 ] and thus about the importance of spreading management competencies within health organizations to improve their performance. Through a systematic review, Lega et al. [ 11 ] highlighted four approaches to examine the impact of management on healthcare performance, focusing on management practices, managers’ characteristics, engagement of professionals in performance management and organizational features and management styles.

Although findings have not always been univocal, several studies suggest a positive relationship between management competencies and practices and outcomes in healthcare organizations, both from a clinical and financial point of view [ 12 ]. Among others, Vainieri et al. [ 13 ] found, in the Italian setting, a positive association between top management’s competencies and organizational performance, assessed through a multidimensional perspective. This study also reveals the mediating effect of information sharing, in terms of strategy, results and organization structure, in the relationship between managerial competencies and performance.

The key role of management competencies clearly emerges for health executives, who have to turn system policies into a vision, and then articulate it into effective strategies and actions within their organizations to steer and engage professionals [ 14 , 15 , 16 , 17 , 18 , 19 ]. However, health systems are increasingly complex and continually changing across contexts and health service levels. This means the role of health executives is evolving as well and identifying the capacities they need to address current and emerging issues becomes more difficult. For instance, a literature review conducted by Figueroa et al. [ 20 ] sheds light on priorities and challenges for health leadership at three structural levels: macro context (international and national), meso context (organizations) and micro context (individual healthcare managers).

Doctor-managers are requested to carry out both clinical tasks and tasks related to budgeting, goal setting and performance evaluation. As a consequence, a growing stream of research has asked whether managers with a clinical background actually affect healthcare performance outcomes, but studies have produced inconclusive findings. In relation to this topic, Sarto and Veronesi [ 21 ] carried out a literature review showing a generally positive impact of clinical leadership on different types of outcome measures, with only a few studies reporting negative impacts on financial and social performance. Morandi et al. [ 22 ] focused on doctor-managers who have become middle managers and investigated the potential bias in performance appraisal due to the mismatch between self-reported and official performance data. At the individual level, the role played by managerial behavior, training, engagement, and perceived organizational support was analyzed. Among other indications, they suggested that training programs should be revised to reduce bias in performance appraisal. Tasi et al. [ 23 ] conducted a cross-sectional analysis of the 115 largest U.S. hospitals, divided into physician-led and non-physician-led, which revealed that physician-led hospital systems have higher quality ratings across all specialities and more inpatient days per hospital bed than non-physician-led hospitals. No differences between the groups were found in total revenue and profit margins. The main implication of their study is that hospital systems may benefit from physician leadership to improve the quality and efficiency of care delivered to patients, as long as education and training adequately prepare physicians for such roles. The main issue, as also observed by others [ 4 , 24 ], is that university education in medicine still includes little focus on aspects such as collaborative management, communication and coordination, and leadership skills.
Such a circumstance motivates the call for further training. Regarding the implementation of training programs, Liang et al. [ 1 ] have recently shown how it is hindered, among others, by a lack of sufficient knowledge about needed competencies and existing gaps. Their analysis, which focuses on senior managers from three categories in Chinese hospitals, shows that before commencing the programs senior managers had not acquired adequate management competencies either through formal or informal training. It is worth noticing that significant differences exist between hospital categories and management levels. For this reason, they recommend using a systemic approach to design training programs, which considers different hospital types, management levels and positions. Yarbrough et al. [ 6 ] examined how competence training worked in healthcare organizations and the competencies needed for leaders at different points of their careers at various organizational levels. They carried out a cross-sectional survey of 492 US hospital executives, whose most significant result was that competence training is effective in healthcare organizations.

Walston and Khaliq [ 25 ], from a survey of 2,001 hospital CEOs across the US, concluded that the greatest contribution of continuing education is to keep CEOs updated on technological and market changes that impact their current job responsibilities. Conversely, it does not seem to be valued for career or succession planning. Regarding the methods of continuing education, increasing use of internet-based tools was found. Walston et al. [ 26 ] identified the factors affecting continuing education, finding, among others, that CEOs from for-profit and larger hospitals tend to take less continuing education, whereas senior managers’ commitment to continuing education is influenced by region, gender, the CEO’s personal continuing education hours and the focus on change.

Furthermore, the principles that inspire modern healthcare models, such as dehospitalization, horizontal coordination and patient-centeredness, imply the increased importance of middle managers, within single structures but also along clinical pathways and projects, to create and sustain high performances [ 27 , 28 , 29 ].

Whaley and Gillis [ 8 ] investigated the development of training programs aimed at increasing managerial competencies and leadership of middle managers, both from clinical and nonclinical backgrounds, in the US context. By adopting the top managers’ perspective, they found a widespread difficulty in aligning training needs and program contents. A 360° assessment of the competencies of Australian middle-level health service managers from two public hospitals was then conducted by Liang et al. [ 7 ] to identify managerial competence levels and training and development needs. The assessment found competence gaps and confirmed that managerial strengths and weaknesses varied across management groups from different organizations. In general, several studies have shown that leading at various organizational levels, in healthcare, does not necessarily require the same levels and types of competencies.

Liang et al. [ 30 ] explored the core competencies required for middle to senior-level managers in Victorian public hospitals. By adopting mixed methods, they confirmed six core competencies and provided guidance to the development of the competence-based educational approach for training the current and future management workforce. Liang et al. [ 31 ] then focused on the poorly investigated area of community health services, which are one of the main solutions to reducing the increasing demand for hospital care in general, and, in particular, in the reforms of the Australian health system. Their study advanced the understanding of the key competencies required by senior and mid-level managers for effective and efficient community health service delivery. A following cross-sectional study by AbuDagga et al. [ 32 ] highlighted that some community health services, such as home healthcare and hospice agencies, also need specific cultural competence training to be effective, in terms of reducing health disparities.

Using both qualitative and quantitative methods, Liang et al. [ 33 ] developed a management competence framework. Such a framework was then validated on a sample of 117 senior and middle managers working in two public hospitals and five community services in Victoria, Australia [ 34 ]. Fanelli et al. [ 35 ] used mixed methods to identify the following specific managerial competencies, which healthcare professionals perceive as crucial to improve their performance: quality evaluation based on outcomes, enhancement of professional competencies, programming based on process management, project cost assessment, informal communication style and participatory leadership.

Loh [ 5 ], through a qualitative analysis conducted in Australian hospitals, examined the motivation behind the choice of medically trained managers to undertake postgraduate management training. Notable results of the analysis include the fact that doctors often move into management positions without first undertaking training, and that clinical experience alone does not yield the required management competencies. The analysis also shows that effective postgraduate management training for doctors requires a combination of theory and practice, and that doctors choose to undertake training mostly to gain credibility.

Ravaghi et al. [ 36 ] conducted a literature review to assess the evidence on the effectiveness of different types of training and educational programs delivered to hospital managers. The analysis identifies a set of aspects that are impacted by training programs. Training programs focus on technical, interpersonal and conceptual skills, and positive effects are mainly reported for technical skills. Numerous challenges are involved in designing and delivering training programs, including lack of time, difficulty in applying competencies in the workplace (also due to position instability), continuous changes in the health system environment, and lack of support from policymakers. One of the more common flaws is that managers are mainly trained as individuals, although they work in teams. The implications of the study are that increased investments and large-scale planning are required to develop the knowledge and competencies of hospital managers. Another shortcoming concerns the outcome measurement of training programs, a usually neglected issue in the literature [ 9 ]. It also emerges that the best-performing training programs are specific, structured and comprehensive.

Kakemam and Liang [ 2 ] conducted a literature review to shed light on the methods used to assess management competencies and, thus, professional development needs in healthcare. Their analysis confirms that most studies focus on middle and senior managers and demonstrates great variability in assessment methods and processes. As a consequence, they elaborate a framework to guide the design and implementation of management competence studies in different contexts and countries.

Overall, the literature has long pointed out that developing and strengthening the competencies and skills of health managers is a core goal for increasing the efficiency and effectiveness of health systems, and that management training is crucial for achieving it [ 37 ]. The reasons can be summarized as follows: university education has scarcely been able to provide physicians and, in general, health operators with adequate, or at least basic, managerial competencies and skills; over time, professionals have found themselves in increasingly complex and rapidly changing working environments, requiring greater management responsibilities as well as new competencies and skills; and in many settings, for instance in Italy, delays in enforcing the laws that require attendance of specific management training courses to take up a leadership position have hindered the acquisition of new competencies, and the improvement of existing ones, by those already managing health organizations, structures and services.

For the purposes of this study, management competencies refer to the possession and ability to use skills and tools for service organization and service planning, control and evaluation, evidence-informed decision-making and human resource management in the healthcare field.

Management training in the Italian National Health System

The reform of the Italian National Health System (INHS), implemented by Legislative Decree No. 502/1992 and inspired by neo-managerial theories, introduced the role of the general manager and assigned new responsibilities to managers.

However, the inadequate performance achieved in the first years of the reform's application highlighted a cultural gap that made the normative adoption of managerial approaches and tools unproductive at the operational level. Legislation evolved accordingly, and management training became mandatory for holding management positions. Decree-Law No. 583/1996 (converted into Law No. 4/1997) provided that the requirements and criteria for access to the top management level were to be determined. Presidential Decree No. 484/1997 therefore determined these requirements, as well as the requirements and criteria for access to the middle-management level of the INHS’ healthcare authorities. This regulation also imposed the acquisition of a specific management training certificate, dictated rules concerning the duration, contents and teaching methods of the management training courses issuing this certificate, and indicated the requirements for attendance. Immediately afterwards, Legislative Decree No. 229/1999 amended the discipline of medical management and the health professions and promoted continuous training in healthcare. It also regulated management training, which became an essential requirement for the appointment of health directors and directors of complex structures in the healthcare authorities for the categories of physicians, dentists, veterinarians, pharmacists, biologists, chemists, physicists and psychologists.

The second pillar of the INHS reform was regionalization. The Regions therefore had to organize the courses for achieving management training certificates on the basis of specific agreements with the State, which regulated their contents, methodology, duration and certification procedures. The State-Regions Conference approved the first interregional agreement on management training in July 2003, whereas the State-Regions Agreement of 16 May 2019 regulated the training courses. The mandatory contents of management training outlined the skills and behaviors expected not only of general managers and other key top management players (Health Director, Administrative Director and Social and Health Director), but also of all middle managers.

A survey was used to gather information from a purposive sample of professionals in the healthcare field taking part in management training programs. In particular, a structured questionnaire was submitted to 140 participants enrolled in two management programs organized by an Italian university: a second-level specializing master’s course and a training program carried out in collaboration with the Region. The programs award participants the title needed to be appointed director of a ward or administrative unit in a public healthcare organization, and share the same scientific committee, teaching staff, administrative staff and venue. The respondents’ profile is shown in Table  1 .

It is worth pointing out that the teaching staff is characterized by diversity: teachers have different educational and professional backgrounds, are practitioners or academics, and come from different Italian regions.

The questionnaire was administered and completed in person and online between November 2022 and February 2023. All participants took part in the analysis voluntarily and gave their consent, having been guaranteed total anonymity.

The questionnaire, which was developed for this study and based on the literature, consisted of 64 questions divided into the following five sections: participant profile (10 items), management competencies held by participants before the training program (4 items), effectiveness factors of the training program (23 items), challenges to effectiveness (10 items), and outcomes of the training program (17 items) (an English-language version of the questionnaire is attached to this paper as a supplementary file). In particular, the second section aimed to shed light on the management competencies participants held before the start of the training program and how they were acquired; the third section aimed to collect participants’ opinions on how the program was conducted and the factors influencing its effectiveness; the fourth section aimed to collect participants’ opinions on the main obstacles encountered during the program; and the fifth section aimed to reveal the main outcomes of the program in terms of knowledge, skills, practices and career.

Except for those of the first section, which collected personal information, all the items of the next four categories (management competencies, effectiveness factors, challenges and outcomes) were measured on a 5-point Likert scale. To ensure that the content of the questionnaire was appropriate, clear and relevant, a pre-test was conducted in October 2022 by asking four academics and four practitioners, both physicians and non-physicians, with and without management positions, to fill it out. The aim was to understand whether the questionnaire really addressed the information needs behind the study and was easily and correctly understood by respondents. The individuals involved in the pre-test were therefore asked to fill it out simultaneously but independently, and at the end of the compilation a focus group including them and the three authors was used to collect their opinions and suggestions. After this phase, the following changes were made: in the ‘Participant profile’ section, ‘Veterinary medicine’ was added to the fields accounting for the ‘Educational background’ (item 3); in Sect. 2, the explanation given to ‘basic management competencies’ was modified and aligned with what is required by Presidential Decree No. 484/1997; in Sect. 3, item 25 was added to capture a missing aspect that respondents considered important, and brackets were added to the descriptions of items 15, 16 and 29 to clarify the concepts of mixed and homogeneous classes and of pedagogical approaches and tools; in Sect. 4, the words ‘find the energy required’ were added to the description of item 40 to avoid confusion with items 38 and 39, whereas brackets were added to items 41 and 45 to provide more explanation; in Sect. 5, brackets were added to the description of item 51 to increase clarity, and the last item was divided into two (now items 63 and 64) to distinguish the training program’s impact on career at different times.

With reference to the methods, a factor analysis based on the principal component method was first conducted within each section of the questionnaire (again excluding the first), in order to reduce the number of variables and shed light on the factors influencing the management training process. Bartlett's sphericity test and the Kaiser–Meyer–Olkin (KMO) value were computed to assess sampling adequacy, whereas factors were extracted following the Kaiser criterion (i.e., eigenvalues greater than unity) and the total variance explained. The rotation method used was Varimax with Kaiser normalization, except for the second section (i.e., management competencies held by participants before the training program), which did not require rotation since a single factor emerged from the analysis. Bartlett's sphericity test was statistically significant ( p  < 0.001) in all sections, KMO values were all greater than 0.65 (average value 0.765), and the total variances explained were all greater than 65% (average value of approximately 70.89%), which are acceptable values for such an analysis.
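The extraction step just described (principal components of the correlation matrix, Kaiser criterion, share of variance explained) can be sketched in a few lines. This is a minimal illustration on synthetic Likert-style data, not the study's actual code; the item structure, loadings and sample size below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for Likert-scale responses (117 respondents x 6 items);
# two latent factors induce two correlated blocks among the items.
latent = rng.normal(size=(117, 2))
loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.9, 0.0],
                     [0.1, 0.8], [0.0, 0.7], [0.0, 0.9]])
items = latent @ loadings.T + 0.4 * rng.normal(size=(117, 6))

# Principal component extraction on the correlation matrix.
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]          # sort components descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Kaiser criterion: retain components with eigenvalues greater than unity.
retained = eigvals > 1.0
explained = eigvals[retained].sum() / eigvals.sum()
print(f"factors retained: {retained.sum()}, variance explained: {explained:.1%}")
```

A Varimax rotation would then be applied to the retained loadings to aid interpretation; that step is omitted here for brevity.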

Second, a set of ordinal logistic regressions was performed to assess the relationships between the management competencies held before the start of the course, the effectiveness factors, the challenges, and the outcomes of the training program.

The factors that emerged from the factor analysis were used as independent variables, whereas some significant outcome items accounting for different performance aspects were selected as dependent variables: improved management competencies, innovation practices, professional relationships, and career prospects. Ordered logit regressions were used because the dependent variables (outcomes) were measured on ordinal scales. Some control variables for the respondent profiles were included in the regression models: age, gender, educational background, management position, and working in the healthcare field.
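The proportional-odds (ordered logit) model underlying these regressions can be illustrated from first principles: the probability that the ordinal outcome falls at or below category k is a logistic function of a threshold minus the linear predictor. The sketch below fits such a model on synthetic data by direct likelihood maximization; in practice a statistical package would be used, and all data and coefficients here are invented for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic CDF

rng = np.random.default_rng(0)

# Synthetic data: one 5-point ordinal outcome, two predictors (e.g. factor scores).
n, k_cats = 300, 5
X = rng.normal(size=(n, 2))
true_beta = np.array([1.0, -0.5])
latent = X @ true_beta + rng.logistic(size=n)
y = np.digitize(latent, bins=[-2.0, -0.5, 0.5, 2.0])  # codes 0..4

def nll(params):
    """Negative log-likelihood of the proportional-odds (ordered logit) model.

    Thresholds are parameterised as theta_0 plus positive increments so they
    stay strictly increasing during optimisation.
    """
    beta = params[:2]
    theta = np.cumsum(np.concatenate([[params[2]], np.exp(params[3:])]))
    eta = X @ beta
    # P(y <= k) = logistic(theta_k - x'beta); pad with 0 and 1 at the ends.
    cum = expit(theta[None, :] - eta[:, None])
    cum = np.hstack([np.zeros((n, 1)), cum, np.ones((n, 1))])
    probs = cum[np.arange(n), y + 1] - cum[np.arange(n), y]
    return -np.log(np.clip(probs, 1e-12, None)).sum()

x0 = np.zeros(2 + (k_cats - 1))  # 2 coefficients, 4 threshold parameters
res = minimize(nll, x0, method="BFGS")
beta_hat = res.x[:2]
theta_hat = np.cumsum(np.concatenate([[res.x[2]], np.exp(res.x[3:])]))
print("estimated coefficients:", beta_hat)
```

The proportional-odds assumption tested in the study corresponds to the single `beta` vector being shared across all category thresholds.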

With the aim of understanding which explanatory variables could exert an influence, a backward elimination method was used, retaining variables with significance values below 0.20 ( p  < 0.20). Table 4 shows the results of the regressions with the independent variables obtained following this criterion. For all four models, the null hypothesis could not be rejected ( p  > 0.05), meaning that the proportional odds assumption behind the ordered logit regressions holds. Third and last, an unpaired two-sample t-test was used to examine the differences between groups of participants in the management training programs, selected on the basis of two criteria: physicians versus non-physicians, and participants with versus without management positions.
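The backward elimination procedure (repeatedly dropping the least significant predictor until all remaining p-values fall below the 0.20 threshold) is model-agnostic. A minimal sketch with an ordinary least-squares fit on synthetic data follows, since the mechanics are the same regardless of the underlying regression; the data and predictor set are invented for the example.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic predictors: x0 and x1 matter, x2 and x3 are pure noise.
n = 200
X = rng.normal(size=(n, 4))
y = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(size=n)

def ols_pvalues(X, y):
    """Two-sided t-test p-values for each OLS coefficient (incl. intercept)."""
    Xd = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(Xd, y, rcond=None)
    resid = y - Xd @ beta
    dof = len(y) - Xd.shape[1]
    sigma2 = resid @ resid / dof
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(Xd.T @ Xd)))
    return 2 * stats.t.sf(np.abs(beta / se), dof)

cols = list(range(X.shape[1]))
while cols:
    p = ols_pvalues(X[:, cols], y)[1:]  # skip the intercept
    worst = int(np.argmax(p))
    if p[worst] < 0.20:                 # threshold used in the study
        break
    cols.pop(worst)                     # drop the least significant predictor

print("retained predictors:", cols)
```

One variable is removed per iteration and the model is refitted, so the surviving set always satisfies the threshold jointly rather than marginally.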

First, descriptive statistics are useful for understanding the aspects participants considered most and least important by category. This can be done by focusing on the items of the four sections of the questionnaire (excluding the first, which depicts participant profiles) that were given the highest and lowest scores at the sample level and by different groups of participants (physicians and non-physicians, participants with or without management positions). Table 2 summarizes the mean values and standard deviations of these highest and lowest scores by group. Focusing on management competencies, all groups reported having mainly acquired them through professional experience, except for non-physicians, who attributed greater significance to postgraduate training programs, with a mean value of 3.05 out of 5. All groups agreed on the poor role of university education in providing management competencies, with mean values for the sample and all four groups below 2.5. It is worth noting that this item exhibits the lowest value for physicians (1.67) and the highest for non-physicians (2.37). In addition, physicians are the group attributing the lowest values to postgraduate education and professional experience for acquiring management competencies. With reference to effectiveness factors, all groups also agree on the necessity of mixing theoretical and practical lessons during the training program, with mean values well above 4.5, whereas the exclusive use of self-assessment is generally viewed as the most ineffective practice, except by non-physicians, who attribute the lowest value to remote lessons (mean 1.82). Among the challenges, the whole sample, physicians and participants without management positions see the lack of financial support from their organization as the main problem (mean 4.10), while non-physicians and participants with management positions believe it is the lack of time, with mean values of 3.75 and 4, respectively.
All agree that dialogue and discussion during the course have been the least relevant of the problems, with mean values below 1.5. Outcomes show generally high values: even the lowest-rated items have mean values around 3.5. It is worth noting that an increased understanding of healthcare systems has been the main benefit gained from the program, with mean values equal to or higher than 4.50. The lowest positive impact is attributed by all attendees to improved relationships with superiors and top management, with mean values between 3.44 and 3.74, with the exception of participants without management positions, who mention improved career prospects.

To shed light on the factors influencing the management training process, the findings of the factor analyses conducted by category are reported. Starting from the management competencies held before the training program, the following single factor was extracted from the four items, named and interpreted as follows:

Basic management competencies, which measures the level of management competencies acquired by participants through higher education, post-graduate training and professional experience.

The effectiveness factors are then grouped into six factors, named and explained as follows:

Diversity and debate, which aggregates five items assessing the importance of diversity in participants’ and teachers’ educational and professional backgrounds and in pedagogical approaches and tools, as well as the level of participant engagement and discussion during lessons and in carrying out the project work required to complete the program.

Specialization, which includes three items accounting for a robust knowledge of healthcare systems by focusing on teachers’ profiles and lessons’ theoretical approaches.

Lessons in presence, which groups three items explaining that in-presence lessons increase learning outcomes and discussion among participants.

Final self-assessment, made up of three items asserting that learning outcomes should be assessed by participants themselves at the end of the course.

Written intermediate assessment, composed of two items explaining that mid-term assessments should only be written.

Homogeneous class, which is made up of a single component accounting for participants’ similarity in terms of professional backgrounds and management levels, tasks and responsibilities.

The challenges are aggregated into the following four factors:

Lack of time, which includes three items reporting scarce time and energy for lessons and study.

Problems of dialogue and discussion, which groups three items focusing on difficulties in relating to and debating with other participants and teachers.

Low support from organization, which is made up of two items reporting poor financial support and low value given to the initiative from participants’ own organizations.

Organizational issues, which aggregates two items reflecting scarce flexibility and collaboration from superiors and colleagues in participants’ own organizations, and unfamiliarity with studying.

Table 3 shows the component matrix with saturation coefficients and factors obtained for the management competencies held before the training program (unrotated), effectiveness factors (rotated), and challenges (rotated).

A set of ordinal logistic regressions was performed to examine the relationships between the management competencies held before the start of the course, the effectiveness factors, the challenges and the outcomes of the training program. The results, shown in Table  4 , are articulated into four models, one for each selected outcome. In model 1, the factors ‘diversity and debate’ ( p  < 0.001), ‘written intermediate assessment’ ( p  < 0.05) and ‘homogeneous class’ ( p  < 0.001) have a significant positive impact on the improvement of management competencies, which is also increased by low values attributed to ‘problems of dialogue and discussion’ ( p  < 0.01). In model 2, the change of professional practices in light of lessons learned during the program, selected as an innovation outcome, is positively affected by ‘diversity and debate’ ( p  < 0.001), ‘homogeneous class’ ( p  < 0.05) and ‘organizational issues’ ( p  < 0.01), while it is negatively influenced by a high value of ‘basic management competencies’ held before the course ( p  < 0.05). In model 3, ‘diversity and debate’ ( p  < 0.001) and ‘homogeneous class’ ( p  < 0.01) have a significant positive effect on the improvement of professional relationships as well, which is instead negatively affected by ‘lessons in presence’ ( p  < 0.05). Finally, in model 4, the career prospects outcome benefits from ‘diversity and debate’ ( p  < 0.05) and ‘homogeneous class’ ( p  < 0.01), since both factors exert a positive effect, whereas ‘low support from organization’ negatively influences career prospects ( p  < 0.001). Table 4 also shows that the null hypothesis of the LR test of proportionality of odds across the response categories cannot be rejected for any model (all four p  > 0.05).

Finally, it is worth noting that none of the control variables reflecting the respondent profiles (age, gender, management position, working in the healthcare field, and educational background) was found to be statistically significant. These variables are not reported in Table  4 because regression models were obtained following a backward elimination method, as explained in the method section.

Finally, the t-test reveals significant differences between physicians and non-physicians, as well as between participants with and without management positions. Table 5 shows only the statistically significant t-test figures with regard to the competencies held before attending the course, the effectiveness factors, the challenges of the training program, and the outcomes achieved. In the first comparison, non-physicians show higher management competencies at the start of the program, with a mean value of 0.31, while physicians suffer from less support from their own organization, with a mean value of 0.13 compared to -0.18 for non-physicians. In the second comparison, participants with management positions have higher management competencies at the start of the program (0.19 versus -0.13) and suffer more from lack of time, with a mean value of 0.23 compared to -0.16 for participants without managerial positions. As for the effectiveness factors, participants with management positions exhibit a lower mean value for written mid-term assessments, -0.24 versus the 0.17 reported by participants without management positions. Conversely, the final self-assessment at the end of the program is rated higher by participants with management positions, 0.24 compared to -0.17 for participants without management positions. The latter category feels the problem of low support from their organizations more acutely, with a mean value of 0.16 compared to -0.23, and is slightly less motivated by possible career improvement, with a mean value of 3.31 compared to the 3.73 reported by participants with management positions.
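For illustration, the unpaired two-sample comparison used above can be sketched on invented standardized factor scores for two hypothetical groups; the group sizes and mean shift below are assumptions of the sketch, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

# Illustrative standardized factor scores for two groups of participants
# (e.g. with vs. without management positions); sizes and means are invented.
with_mgmt = rng.normal(loc=0.2, scale=1.0, size=60)
without_mgmt = rng.normal(loc=-0.15, scale=1.0, size=80)

# Unpaired two-sample t-test, equal variances assumed (the classic form).
t_stat, p_value = stats.ttest_ind(with_mgmt, without_mgmt, equal_var=True)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```

Because the factor scores are standardized, group means directly express deviations from the sample average, which is why the reported values in Table 5 cluster around zero.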

The results stemming from the different analyses are now considered and interpreted in the light of the extant literature. Personal characteristics such as gender and age, in contrast to what Walston et al. [ 26 ] found for executives’ continuing education, and professional characteristics such as seniority and working in the public or private sector, do not seem to affect participation in management training programs.

The findings clearly show the outstanding importance of ‘diversity and debate’ and ‘class homogeneity’ as effectiveness factors, since they positively impact all outcomes: competencies, innovation, professional relationships and career. These factors capture two complementary aspects. On the one hand, participants’ and teachers’ different backgrounds provide the class with a wider pool of resources and expertise, while the use of pedagogical tools fostering discussion enriches the educational experience and stimulates creativity. On the other hand, given the high level of professionalism in the setting, sharing common management levels means having similar tasks and responsibilities and facing similar problems. Consequently, speaking the same language leads to deeper knowledge and effective technical solutions.

In relation to the improvement of management competencies, the critical role of a good class atmosphere, that is, the absence of problems of dialogue and discussion, also emerges. ‘Diversity and debate’ and ‘class homogeneity’, as explained before, seem to contribute to this, since they enhance freedom of expression and fair confrontation, leading to improved learning outcomes. It is interesting to note that problems of dialogue and discussion turned out to be the least relevant challenge across the sample.

Two interesting points come from the factors affecting innovation. First, it seems that lower competencies before the training program lead to the development of more innovative practices. The reason is that holding fewer basic competencies means a greater scope for action once new capabilities are learned: the spirit of openness is conducive to breaking down routines, and innovative practices previously hindered by a lack of knowledge and tools can thus be introduced. This extends the findings of previous studies, since the employment of competencies in the workplace is influenced by professionals’ starting competence endowment [ 36 ], and those showing gaps have more room to recover, also in terms of motivation to change, that is, understanding the importance of meeting current and future challenges [ 26 ]. Second, more innovative practices are introduced by participants perceiving more organizational issues. This may reveal, on the one side, a stronger individual motivation towards professional growth among participants who suffer from a lack of flexibility and collaboration from their own superiors and colleagues. In this regard, poor tolerance, flexibility and permissions in the workplace act as a stimulus to innovation, which can be viewed as a way of challenging the status quo. On the other side, in line with the above-mentioned concept, this confirms that unfamiliarity with studying increases the innovative potential of participants.
Since this study reveals that physicians are neither adequately educated from a management point of view nor incentivized to attend postgraduate training programs, it points out how important it is to extend continuing education to all health professional categories [ 25 , 26 ].

The topic of the competencies held by different categories needs more attention. The study reveals that physicians and participants without management positions start the program with fewer basic competencies. At the sample level, higher education is viewed as the most ineffective tool for providing such competencies, whereas professional experience is seen as the best way to acquire them. Notably, non-physicians give the highest value to postgraduate education, which suggests they are the most interested in, or incentivized for, continuing education. Although holding managerial positions does not automatically mean having higher competencies [ 5 ], it is evident that such professional experience contributes to filling existing gaps. Physicians stand out as the category for which university education, postgraduate education and professional experience exert the lowest impact on management competence improvement. Considering the relationship between competencies held before the course and innovation, as described above, engaging physicians in training programs, even more so if they do not have management responsibilities, has a major impact on health organizations’ development prospects. The findings also point out that effective management training requires a combination of theory and practice for all categories of professionals, not just for physicians, as observed by Loh [ 5 ].

The main outcome, in general and for all participant categories, is an increased understanding of how healthcare systems work, which precedes increased competencies. This confirms the importance of knowledge of the healthcare environment [ 31 ] and clarifies the order of the aspects impacted by training programs as reported by Ravaghi et al. [ 36 ]: first conceptual, then technical, and finally interpersonal. However, interpersonal outcomes are far greater for those holding management positions, which extends the findings of Liang et al. [ 31 ]. In particular, participants already managing units report the greatest impacts in terms of ability to understand colleagues’ problems, improvement of professional relationships and collaboration with colleagues from other units. Understandably, participants with management positions, more than others, feel the lack of collaborative and communication skills, which represents one of the main flaws of university education in the field of medicine [ 4 ] and is also often neglected in management training [ 36 ]. This also confirms that different management levels have specific competence requirements and education needs [ 6 , 7 ].

It is then important to discuss the negative effect of lessons in presence on the improvement of professional relationships. At first glance it may seem strange, but its real meaning emerges from a comprehensive interpretation of all the findings. First, it does not mean that remote lessons are more effective: as an effectiveness factor, they are attributed very low values and, for all categories of participants, lower values than those attributed to in-presence and hybrid lessons. Non-physicians, in particular, attribute them the lowest value of all. At most, remote lessons are viewed as convenient rather than effective. The negative influence of lessons in presence can instead be explained by the fact that a specific category, i.e., those with management positions, rates this aspect as much more important than other participants do and, as reported above, gains much more benefit in terms of improved relationships from management training. Participants with management positions, due to their tasks and responsibilities, suffer more than others from the lack of time to devote to course participation. For them, as for non-physicians, lack of time represents the main challenge to effectively attending the course. In the literature, this problem is well documented, and lack of time is also viewed as a challenge to applying the skills learned during the course [ 36 ]. Considering that class discussion and homogeneity contribute to fostering relationships, a comprehensive reading of the findings reveals that, due to workload, participants with management positions find remote lessons particularly convenient and still effective. Furthermore, if the class is formed by participants sharing similar professional backgrounds and management levels, debate is not precluded and interpersonal relationships improve as a consequence.
From the observation of single items, it can be concluded that participants with management positions, and in general those with higher basic management competencies at the start of the program, prefer more flexible and leaner training programs: intermediate assessment through conversation, self-assessment at the end of the course, a more concentrated lesson schedule and greater use of remote lessons.

Differently from what Walston and Khaliq [ 25 ] found, the findings highlight that participants with management positions value the impact of management training on career prospects positively. These participants are also those most supported by their own organizations. Conversely, the lack of support, especially in terms of inadequate funds devoted to these initiatives, strongly affects physicians and participants without management positions, which clarifies what this challenge is about and who is mainly affected by it [ 36 ]. Low incentives mean having attended fewer training programs in the past, which, together with less management experience, explains why they have developed fewer competencies. Among the outcomes of the training program, the little attention paid by organizations is also evidenced by the lowest values being attributed by all categories, except participants without management positions, to the improvement of relationships with superiors and top management.

In general, the study contributes to a better understanding of the outcomes of management training programs in healthcare and their determinants [ 9 ]. In particular, it sheds light on gaps and education needs [ 1 ] by category of health professionals [ 2 ]. The research findings have major implications for practice, which can be drawn after identifying the four profiles of participants revealed by the study. All profiles share common characteristics, such as the value given to debate, to the diversity of pedagogical approaches and tools and to class homogeneity, as well as the need for a deeper comprehension of healthcare systems. However, they present characteristics that determine specific issues and education gaps, which are summarized as follows:

Physicians without management positions: low competencies at the start of the program and scarce incentives for attending the course from their own organization;

Physicians with management positions: they partially compensate for competence gaps through professional experience, suffer from lack of time, and are motivated by the chance to improve their career prospects;

Non-physicians without management positions: they partially fill competence gaps through postgraduate education, suffer from lack of time, and have scarce incentives for attending the course from their own organization;

Non-physicians with management positions: they partially bridge competence gaps through postgraduate education and professional experience, are the most affected by a lack of time, and are motivated by the chance to improve their career prospects.

Recommendations are outlined for different levels of action:

For policymakers, it is suggested to strengthen the ability of higher education courses in medicine and related fields to advance the understanding of healthcare systems’ structure and operation, as well as of their current and future challenges. The main goal of such a new approach to curriculum design should be the provision of adequate management competencies.

For healthcare organizations, it is suggested to incentivize the acquisition of management competencies by all categories of professionals through postgraduate education and training programs. This means supporting them from both a financial and an organizational point of view, for instance through more flexible working conditions. Special attention should be paid to physicians who, even without executive roles, manage resources and directly affect the organization's effectiveness and efficiency through their day-to-day activity, and who hold the greatest innovative potential within the organization. As for executives, especially in the current changing context of healthcare systems, far greater attention should be paid to fostering interpersonal skills, in terms of communication and cooperation.

For those designing training programs, it is suggested to tailor courses to participants’ profiles, using different pedagogical approaches and tools, for instance in terms of teacher composition, lesson delivery methods and learning assessment methods, while preserving class homogeneity in terms of professional background and management level to facilitate constructive dialogue and solution-finding. Designing ad hoc training programs would also make it possible to meet participants’ needs from an organizational point of view, for instance in terms of program length and lesson concentration.

Limitations

This study has some limitations, which pave the way for future research. First, it is context-specific by country, since it was carried out within the INHS, which requires health professionals to attend management training programs in order to hold certain positions. It is also context-specific by training program, since it focuses on programs that qualify participants for appointment as director of a ward or administrative unit in a public healthcare organization. This determines the kind of management competencies included in the study, namely those mandatorily required for this middle-management category. There is therefore a need to extend the research and test these findings on different types of management training programs, participants and countries. Second, the study is based on a survey of participants’ perceptions, which raises two unavoidable issues: although grounded in the literature and pre-tested, the questionnaire may fail to measure what it intends to, or to capture detailed and nuanced insights from respondents, and responses may be affected by biases due to reactive effects. Third, a backward elimination method was adopted to select variables in model building. While this technique balances model simplicity and fit, it is not without drawbacks. Its advantages include starting with all variables included, removing the least important early, and leaving the most important in; its main disadvantage is that once a variable is deleted it cannot re-enter the model, even if it becomes significant later [ 38 ]. For these reasons, future research is intended to draw on new data sources, such as teachers’ perspectives and official assessments, and on different variable selection strategies.
A combination of qualitative and quantitative methods for data elaboration could then deepen the analysis of the relationships between motivations, effectiveness factors and outcomes. Furthermore, since the investigation of competence development, the acquisition of new competencies and the transfer of acquired competencies was beyond the purpose of this study, a longitudinal approach will be used to collect data from participants attending future training programs, in order to track changes and identify patterns.
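As a generic illustration of the backward elimination logic discussed above (not the authors' actual model-building code), the following sketch starts from the full model and drops one predictor at a time; adjusted R² is assumed here as the drop criterion, whereas applied studies often use p-value thresholds instead. Note how the procedure exhibits exactly the limitation cited from [ 38 ]: a dropped variable can never re-enter the model.

```python
import numpy as np

def adjusted_r2(X, y):
    """Fit ordinary least squares and return the adjusted R^2."""
    n, k = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    r2 = 1.0 - ss_res / ss_tot
    return 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)

def backward_eliminate(X, y, names):
    """Backward elimination: repeatedly drop the predictor whose removal
    most improves adjusted R^2, stopping when no removal helps.
    Once dropped, a variable never re-enters the model."""
    keep = list(range(X.shape[1]))
    best = adjusted_r2(X[:, keep], y)
    while len(keep) > 1:
        trials = [(adjusted_r2(X[:, [j for j in keep if j != i]], y), i)
                  for i in keep]
        score, drop = max(trials)
        if score <= best:      # no single removal improves the fit criterion
            break
        keep.remove(drop)      # permanent deletion -- the known drawback
        best = score
    return [names[i] for i in keep]

# Usage with simulated data: y depends on x1 and x2 only; x3 is pure noise.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 2.0 * X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=200)
print(backward_eliminate(X, y, ["x1", "x2", "x3"]))
```

The informative predictors survive elimination because removing either of them sharply degrades adjusted R², while the noise predictor is a candidate for early removal.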

Availability of data and materials

An English-language version of the questionnaire used in this study is attached to this paper as a supplementary file. The raw data collected via the questionnaire are not publicly available due to privacy and other restrictions. However, datasets generated and analyzed during the current study may be available from the corresponding author upon reasonable request.

Abbreviations

INHS: Italian National Health System

KMO: Kaiser–Meyer–Olkin

NRRP: National Recovery and Resilience Plan

References

1. Liang Z, Howard PF, Wang J, Xu M, Zhao M. Developing senior hospital managers: does ‘one size fit all’? – evidence from the evolving Chinese health system. BMC Health Serv Res. 2020;20(281):1–14. https://doi.org/10.1186/s12913-020-05116-6.

2. Kakemam E, Liang Z. Guidance for management competency identification and development in the health context: a systematic scoping review. BMC Health Serv Res. 2023;23(421):1–13. https://doi.org/10.1186/s12913-023-09404-9.

3. European Commission. Annual Sustainable Growth Strategy. 2020. https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELEX:52020DC0575&from=en

4. Blakeney EAR, Ali HN, Summerside N. Sustaining improvements in relational coordination following team training and practice change: a longitudinal analysis. Health Care Manag Rev. 2021;46(4):349–57. https://doi.org/10.1097/HMR.0000000000000288.

5. Loh E. How and why medically-trained managers undertake postgraduate management training – a qualitative study from Victoria. J Health Organ Manag. 2015;29(4):438–54. https://doi.org/10.1108/jhom-10-2013-0233.

6. Yarbrough LA, Stowe M, Haefner J. Competency assessment and development among health-care leaders: results of a cross-sectional survey. Health Serv Manag Res. 2012;25(2):78–86. https://doi.org/10.1258/hsmr.2012.012012.

7. Liang Z, Blackstock FC, Howard PF, Briggs DS, Leggat SG, Wollersheim D, Edvardsson D, Rahman A. An evidence-based approach to understanding the competency development needs of the health service management workforce in Australia. BMC Health Serv Res. 2018;18(976):1–12. https://doi.org/10.1186/s12913-018-3760-z.

8. Whaley A, Gillis WE. Leadership development programs for health care middle managers: an exploration of the top management team member perspective. Health Care Manag Rev. 2018;43(1):79–89. https://doi.org/10.1097/HMR.0000000000000131.

9. Campbell C, Lomperis A, Gillespie K, Arrington B. Competency-based healthcare management education: the Saint Louis University experience. J Health Adm Educ. 2006;23:135–68.

10. Issel ML. Value added of management to health care organizations. Health Care Manag Rev. 2020;45(2):95. https://doi.org/10.1097/HMR.0000000000000280.

11. Lega F, Prenestini A, Spurgeon P. Is management essential to improving the performance and sustainability of health care systems and organizations? A systematic review and a roadmap for future studies. Value Health. 2013;16(1 Suppl):S46–51. https://doi.org/10.1016/j.jval.2012.10.004.

12. Bloom N, Propper C, Seiler S, Van Reenen J. Management practices in hospitals. Health, Econometrics and Data Group (HEDG) Working Papers 09/23, HEDG, c/o Department of Economics, University of York; 2009.

13. Vainieri M, Ferrè F, Giacomelli G, Nuti S. Explaining performance in health care: how and when top management competencies make the difference. Health Care Manag Rev. 2019;44(4):306–17. https://doi.org/10.1097/HMR.0000000000000164.

14. Del Vecchio M, Carbone C. Stabilità dei Direttori Generali nelle aziende sanitarie. In: Anessi Pessina E, Cantù E, editors. Rapporto OASI 2002. L’aziendalizzazione della sanità in Italia. Milano: Egea; 2002. p. 268–301.

15. McAlearney AS. Leadership development in healthcare: a qualitative study. J Organ Behav. 2006;27:967–82.

16. McAlearney AS. Using leadership development programs to improve quality and efficiency in healthcare. J Healthcare Manag. 2008;53:319–31.

17. McAlearney AS. Executive leadership development in U.S. health systems. J Healthcare Manag. 2010;55:207–22.

18. McAlearney AS, Fisher D, Heiser K, Robbins D, Kelleher K. Developing effective physician leaders: changing cultures and transforming organizations. Hosp Top. 2005;83(2):11–8.

19. Thompson JM, Kim TH. A profile of hospitals with leadership development programs. Health Care Manag. 2013;32(2):179–88. https://doi.org/10.1097/HCM.0b013e31828ef677.

20. Figueroa C, Harrison R, Chauhan A, Meyer L. Priorities and challenges for health leadership and workforce management globally: a rapid review. BMC Health Serv Res. 2019;19(239):1–11. https://doi.org/10.1186/s12913-019-4080-7.

21. Sarto F, Veronesi G. Clinical leadership and hospital performance: assessing the evidence base. BMC Health Serv Res. 2016;16(169):85–109. https://doi.org/10.1186/s12913-016-1395-5.

22. Morandi F, Angelozzi D, Di Vincenzo F. Individual and job-related determinants of bias in performance appraisal: the case of middle management in health care organizations. Health Care Manag Rev. 2021;46(4):299–307. https://doi.org/10.1097/HMR.0000000000000268.

23. Tasi MC, Keswani A, Bozic KJ. Does physician leadership affect hospital quality, operational efficiency, and financial performance? Health Care Manag Rev. 2019;44(3):256–62. https://doi.org/10.1097/hmr.0000000000000173.

24. Hopkins J, Fassiotto M, Ku MC. Designing a physician leadership development program based on effective models of physician education. Health Care Manag Rev. 2018;43(4):293–302. https://doi.org/10.1097/HMR.0000000000000146.

25. Walston SL, Khaliq AA. The importance and use of continuing education: findings of a national survey of hospital executives. J Health Admin Educ. 2010;27(2):113–25.

26. Walston SL, Chou AF, Khaliq AA. Factors affecting the continuing education of hospital CEOs and their senior managers. J Healthcare Manag. 2010;55(6):413–27. https://doi.org/10.1097/00115514-201011000-00008.

27. Garman AN, McAlearney AS, Harrison MI, Song PH, McHugh M. High-performance work systems in healthcare management, part 1: development of an evidence-informed model. Health Care Manag Rev. 2011;36(3):201–13. https://doi.org/10.1097/HMR.0b013e318201d1bf.

28. MacDavitt K, Chou S, Stone P. Organizational climate and healthcare outcomes. Joint Comm J Qual Patient Saf. 2007;33(S11):45–56. https://doi.org/10.1016/s1553-7250(07)33112-7.

29. Singer SJ, Hayes J, Cooper JB, Vogt JW, Sales M, Aristidou A, Gray GC, Kiang MV, Meyer GS. A case for safety leadership training of hospital managers. Health Care Manag Rev. 2011;36(2):188–200. https://doi.org/10.1097/HMR.0b013e318208cd1d.

30. Liang Z, Leggat SG, Howard PF, Lee K. What makes a hospital manager competent at the middle and senior levels? Aust Health Rev. 2013;37(5):566–73. https://doi.org/10.1071/AH12004.

31. Liang Z, Howard PF, Koh L, Leggat SG. Competency requirements for middle and senior managers in community health services. Aust J Prim Health. 2013;19(3):256–63. https://doi.org/10.1071/PY12041.

32. AbuDagga A, Weech-Maldonado R, Tian F. Organizational characteristics associated with the provision of cultural competency training in home and hospice care agencies. Health Care Manag Rev. 2018;43(4):328–37. https://doi.org/10.1097/HMR.0000000000000144.

33. Liang Z, Howard PF, Leggat SG, Bartram T. Development and validation of health service management competencies. J Health Organ Manag. 2018;32(2):157–75. https://doi.org/10.1108/JHOM-06-2017-0120.

34. Howard PF, Liang Z, Leggat SG, Karimi L. Validation of a management competency assessment tool for health service managers. J Health Organ Manag. 2018;32(1):113–34. https://doi.org/10.1108/JHOM-08-2017-0223.

35. Fanelli S, Lanza G, Enna C, Zangrandi A. Managerial competences in public organisations: the healthcare professionals’ perspective. BMC Health Serv Res. 2020;20(303):1–9. https://doi.org/10.1186/s12913-020-05179-5.

36. Ravaghi H, Beyranvand T, Mannion R, Alijanzadeh M, Aryankhesal A, Belorgeot VD. Effectiveness of training and educational programs for hospital managers: a systematic review. Health Serv Manag Res. 2020;34(2):1–14. https://doi.org/10.1177/0951484820971460.

37. Woltring C, Constantine W, Schwarte L. Does leadership training make a difference? J Public Health Manag Prac. 2003;9(2):103–22.

38. Chowdhury MZI, Turin TC. Variable selection strategies and its importance in clinical prediction modelling. Fam Med Comm Health. 2020;8(1):1–7. https://doi.org/10.1136/fmch-2019-000262.


Acknowledgements

Not applicable.

Funding

DM 737/2021, resources 2022–2023. Funded by the European Union – NextGenerationEU.

Author information

Authors and affiliations

Department of Economics and Business, University of Sassari (Italy), Via Muroni 25, Sassari, 07100, Italy

Lucia Giovanelli & Nicoletta Fadda

Department of Humanities and Social Sciences, University of Sassari (Italy), Via Roma 151, 07100, Sassari, Italy

Federico Rotondo


Contributions

All the authors made substantial contributions to the design and drafting of the manuscript: LG and FR conceptualized the study; FR and NF conducted the analysis and investigation and wrote the original draft; LG, FR and NF reviewed and edited the original draft; and LG supervised the whole process. All the authors read and approved the final manuscript.

Corresponding author

Correspondence to Federico Rotondo .

Ethics declarations

Ethics approval and consent to participate

The research involved human participants. All authors certify that participants decided to take part in the analysis voluntarily and provided informed consent to participate. Participants were granted total anonymity and were adequately informed of the aims, methods, institutional affiliations of the researchers and any other relevant aspects of the study. In line with the Helsinki Declaration and the Italian legislation (acknowledgement of EU Regulation no. 536/2014 on January 31st, 2022 and Ministerial Decree of November 30th, 2021), ethical approval by a committee was not required since the study was non-medical and non-interventional.

Consent for publication

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Giovanelli, L., Rotondo, F. & Fadda, N. Management training programs in healthcare: effectiveness factors, challenges and outcomes. BMC Health Serv Res 24, 904 (2024). https://doi.org/10.1186/s12913-024-11229-z


Received : 15 January 2024

Accepted : 20 June 2024

Published : 07 August 2024

DOI : https://doi.org/10.1186/s12913-024-11229-z


Keywords

  • Management training programs
  • Healthcare professionals
  • Factors of effectiveness

BMC Health Services Research

ISSN: 1472-6963
