A workshop report from the Open Science Festival Hannover
Scientific peer review – the process in which a researcher’s work is scrutinized by other researchers in the field – is the most widely used tool for quality control in scientific publishing. It is considered a safeguarding mechanism to ensure scientific integrity and maintain a high standard of scientific publications. To achieve these goals, peer review must be a dialogue between authors and reviewers in which reviewers provide comments and ask questions so that authors can use the results of this dialogue to address deficiencies and improve their work. Yet, when talking to active researchers, you might get the impression that peer review is not even close to such a fruitful dialogue. Instead, you might end up with a description of a dysfunctional process characterized by a lack of transparency and efficiency that cannot deliver on the promise of maintaining high scientific standards.
To change the current reality of peer review, many movements and initiatives have been created in recent years. Initiatives such as Publish Your Reviews or Berlin Exchange/Berlin Exchange Medicine attempt to promote the transition toward a culture that allows more transparent and open evaluation of scholarly work. However, changing a system that has long been viewed as the “gold standard” in the world of scientific publication will not be achieved by a few players alone. Therefore, it is of utmost importance for such initiatives to reach out to active researchers and explore how they can take the first small steps in their work toward Open Science.
We had just such a chance at the first German Open Science Festival in Hannover! Together with Ludo Waltman, Professor of Quantitative Science Studies at Leiden University and co-founder of Publish Your Reviews, we had the opportunity to co-host a workshop about Transparency and Openness in Peer Review.
The participants in our workshop were a diverse group of early career and established researchers from both the humanities and STEM (Science, Technology, Engineering and Mathematics), as well as research administrators and librarians. To get to know them a little better, we started the workshop with a short survey about the participants’ existing knowledge of and experience with preprints. It quickly revealed that while the participants were quite aware of many problems of the traditional way of scientific publishing and peer review, most of them had never actually engaged in alternative forms such as preprinting or open peer review.
Based on these outcomes, it was probably a good idea that Ludo continued the workshop with a brief introduction to the basic concepts of preprints and open peer review before offering an outlook on the potential future of scholarly communication. The materials from Ludo’s presentation can be found here.
After this general introduction by Ludo, it was our turn to present BEM. Here, we discussed the ideas behind our initiative and provided concrete examples of how we implement the concepts that Ludo introduced earlier. BEM not only offers students the opportunity to publish their research and receive feedback in a modern open peer review process, but it also enables them to become active, critical players in the process of gaining scientific knowledge by acting as student peer reviewers while still at university. At the end of our input session, we talked about the necessity of making open science the core of every academic curriculum. If you are interested in our presentation, you can download it here. We have written down our arguments on why this is so crucial in our position paper “Open Science by Default.”
After providing some input to the participants, we wanted to hear about their experiences and opinions on the current state and the future of scientific publishing. Therefore, we split up into smaller groups to discuss preprinting and preprint feedback in more detail. During those discussions, we learned from our participants that many felt the current publish-or-perish system did not allow them the latitude to experiment with new ways of sharing their research or evaluating that of others. The question “What keeps you from preprinting your own work?” was commonly answered with concerns about disadvantages arising from preprinting. The participants voiced fears that publishing their work as a preprint would prevent them from publishing it later in a peer-reviewed journal, or that their results might even be stolen by competing researchers. Others simply stated that the enormous pressure to publish does not leave them much time to consider such opportunities. However, there were also participants who had already published preprints themselves and had positive experiences with preprinting.
Turning the discussion to methods for giving feedback on preprints, we talked about public review – a method in which the scientific community gives feedback by commenting directly on selected aspects of the research rather than reviewing the whole work. Several platforms can be used for public review, including social media such as Twitter or dedicated platforms like PubPeer. Another interesting possibility is that preprint servers or journals themselves provide a platform for public review of preprints – this is the model we are currently using at BEM.
The participants seemed quite interested in public review. They pointed out how useful such a model can be, since it allows several reviewers to engage in a discussion, which can help authors understand how to improve their work. However, they also voiced various concerns about the details of this model. Once again, researchers in early career stages pointed out that leaving comments on someone else’s work would take away time they need to spend on their own research without providing any real benefit to them. Another point of discussion was the question of who is responsible for ensuring that platforms for public review are not misused, for example, by leaving misleading or harmful comments about the research. In classical peer review, it is the editor’s responsibility to carefully select referees and ensure that the submitted reviews fulfill minimum quality standards. In public review of preprints, however, there is not necessarily an editor who can take up this responsibility, resulting in a significant challenge that may be addressed in a variety of ways. Some participants suggested giving the platforms that provide the infrastructure for public review the responsibility for moderating their contents, while others suggested creating a system with somewhat limited access in which only verified or actively invited researchers can leave comments.
After discussing those and various other topics in smaller groups, we met back in the plenary to sum up and share the results of our discussions. In this plenary discussion, two topics seemed to be of special interest to the participants. The first was the broad discussion of open vs. closed peer review. While we at BEM, Publish Your Reviews, and many other Open Science organizations are strong advocates for open peer review, some workshop participants, especially those from a more traditional STEM background, argued that a closed, potentially double-anonymous (sometimes referred to as double-blind) peer review might be a fairer approach. One argument in favor of this is that referees, especially when reviewing the work of more junior researchers, might not feel comfortable giving negative feedback openly, as it might feel like they were putting the authors down too harshly. This could especially be a problem when a peer review leads to the rejection of a submitted manuscript.
The second big topic of the plenary discussion focused on an alternative form of scientific publishing, which breaks the large narrative usually told in a scientific publication down into several smaller modules – such as project outlines, annotated datasets, descriptions of methodologies, and data analyses – which are each published and reviewed separately and at subsequent points in time. It was mentioned that a peer-reviewed publication currently serves as the “certified version of record,” which is relied on when scientific results inform policymaking or serve as the foundation of, e.g., court decisions. However, other participants argued that traditional scientific publications nowadays often considered to be of “high quality” or to offer “definitive answers” have frequently been reviewed only superficially and are thus not very suitable as such a “certified version of record.”
After the plenary discussion, we ended the workshop with a presentation of the Publish Your Reviews initiative by Ludo. Publish Your Reviews is an initiative of ASAPbio that encourages reviewers to share their peer reviews of preprints so that readers and further experts can participate in the discussion around a scholarly publication. Furthermore, bringing peer review into the public allows referees to get some form of acknowledgment for their valuable, important, but usually unpaid review work. Ludo invited the workshop participants to consider signing the Publish Your Reviews pledge.
Looking back at the issues discussed and the questions asked during the workshop, it becomes obvious that there is a definite need for change toward a fairer way of publishing and evaluating scientific work – one that creates an environment in which ideas like those discussed in the workshop can be implemented.
The current system for assessing a researcher’s achievements is centered squarely on publication metrics. It does not acknowledge contributions to transparent or sustainable research, but only the quantity of publications and citations. In such a system, we cannot blame (early career) researchers who have to advance their scientific careers for not being more informed about or engaged in Open Science practices. Only if senior researchers, research administrators, and funding organizations understand that there is more to good research than a large number of publications in high-ranking journals can we hope to see a change toward a fairer culture of research evaluation.
Such a change will not come on its own. Thus, different initiatives and organizations, such as the Coalition for Advancing Research Assessment (CoARA), have formed to shape an environment of research evaluation beyond quantitative publication metrics. And of course, we at Publish Your Reviews and BEM will continue promoting Open Science with a special focus on transparent peer review. Events such as the Open Science Festival are excellent opportunities to connect within the Open Science community and to reach out and convey our messages to a broader audience. We greatly enjoyed participating in the festival and especially all the interesting and fruitful discussions we had. We want to thank Leibniz University Hannover and the Leibniz Information Centre for Science and Technology for the great organization of the event, and we are eagerly looking forward to next year’s Open Science Festival.