Authors (in alphabetical order; underlined are the main authors of the blog post): Charlotte Buus Jensen, Valentino Cavalli, Maria Cruz, Raman Ganguly, Madeleine Huber, Mojca Kotar, Iryna Kuchma, Peter Löwe, Inge Rutsaert, Melanie Stummvoll, Gintare Tautkeviciene, Marta Teperek, Hannelore Vanhaverbeke

On 1 December 2017 Maria Cruz and Marta Teperek facilitated a workshop titled Evaluation of Research Careers fully acknowledging Open Science Practices. It was part of the larger Digital Infrastructures for Research 2017 conference in Brussels, Belgium. The workshop was attended by about 15 people from various backgrounds: library professionals, repository managers, research infrastructure providers, members of international networks for research organisations, and others. Below is a summary of what happened at the workshop, the key discussions, and suggested next steps.

Rationale for the workshop

The workshop was inspired by the report “Evaluation of Research Careers fully acknowledging Open Science Practices”, published by the European Commission’s Working Group on Rewards under the Open Science Policy Platform. Noting that “exclusive use of bibliometric parameters as proxies for excellence in assessment (…) does not facilitate Open Science”, the report concludes that “a more comprehensive recognition and reward system incorporating Open Science must become part of the recruitment criteria, career progression and grant assessment procedures…” However, to make this a reality, multiple stakeholders need to be involved and to take appropriate steps to recognise and implement open science practices. The workshop aimed to develop roadmaps for some of these stakeholders, to identify ways of effectively engaging with them, and to discuss their possible goals and actions.

What happened on the day

The initial plan was to look into four different stakeholder groups: research institutions, funding bodies and governments, principal investigators, and publishers. However, given that only about 15 people attended, and in order to ensure group work and interaction between participants, it was decided to focus solely on the first two stakeholder groups: research institutions, and funding bodies and governments. These stakeholders were also identified in the original EC report.

The participants split into two teams, each trying to create a roadmap for a different stakeholder group using collaborative Google Docs. To start with, the teams tried to address the following four questions for their stakeholders:

  1. What methods could be used to effectively engage with this stakeholder group and to ensure that they are willing to implement Open Science Practices in their research evaluation?
  2. What should be the goals for this stakeholder to fully implement Open Science Practices in research evaluation? What are the key milestones?
  3. What will be the main barriers to implementation of these goals and how to overcome them?
  4. Propose metrics which could be used to assess this stakeholder’s progress towards implementation of Open Science Practices in their research evaluation practices.

Subsequently, the groups swapped stakeholders, and reviewed and commented on the work of the other group, enriching the roadmaps and adding a broader perspective. The workshop concluded with a reporting session which brought the two groups together and allowed the attendees to engage in discussion.

Key observations about successfully engaging with research institutions

The participants identified internal and external drivers important for engaging with research institutions and encouraging them to change their academic rewards systems to ones based on open science practices. Not surprisingly, requirements for open science from funding bodies and governments were at the very top of the list of external drivers. If funders start using commitment to open science practices as a funding criterion, institutions will have no choice but to reward researchers for open science in order to continue securing funding.

One of the most appealing internal drivers discussed was lobbying within institutions by prominent researchers who are themselves committed to open science: this could not only help institutions roll out policy changes, but also demonstrate to younger researchers that commitment to open science might be valuable for their careers.

Key observations about successfully engaging with funding bodies and governments

Interestingly, external drivers were also seen as important factors in engaging with funding bodies and governments. Joint statements from several academic institutions were mentioned as tangible ways to establish effective collaborations with funding bodies. There therefore seems to be a need for synergy between institutions and funding bodies/governments. In addition, it was stressed that better networks between international funding agencies and governments might also lead to cross-fertilisation of ideas and the exchange of good practice. For example, the European Commission could advise Member States to develop national policies on open science.

The lack of credible metrics to measure commitment to open science practices was discussed as one of the main barriers that might discourage funders and governments from changing academic rewards systems.

Can quality be measured with quantitative metrics?

The initial discussion about the lack of credible evaluation metrics as a potential barrier preventing funding bodies and governments from changing their academic rewards systems led to a longer debate about the usefulness of metrics in open science in general. One participant suggested that a new metric, analogous to the journal impact factor but tailored to research data, could potentially offer a solution. However, others felt that it might simply be inappropriate to measure qualitative outcomes with quantitative metrics, and that such an approach risks replicating all the flaws of metrics based on the journal impact factor. It was proposed that, instead, high-quality peer review of selected outputs should be emphasised, promoted, and rewarded.

Next steps

The short-term aim is to share the outcomes of this workshop with the authors of the European Commission’s Working Group on Rewards report “Evaluation of Research Careers fully acknowledging Open Science Practices”.

In addition, roadmaps for the two remaining stakeholder groups (publishers and principal investigators) need to be drafted. Moreover, as pointed out by workshop participants, even though it may be impossible (or undesirable) to create metrics for commitment to open science practices, it would still be valuable to develop frameworks for the different stakeholders, providing them with broad guidelines as to what kinds of achievements could be rewarded. The same frameworks could also be used by researchers as a source of inspiration and motivation for open science.

Finally, one of the key drivers for change identified during the workshop was funding bodies offering pilot funding schemes to which only researchers able to demonstrate commitment to openness could apply. Such funding schemes would not only allow the community to learn suitable ways of assessing open science practices, but would also provide researchers practising open science with immediate benefits and much-needed recognition.