We need to move beyond managing research by the numbers

Very few aspects of our lives escape numerical assessment for comparative purposes. The immediate challenge for humanity is to keep the reproduction number of Covid-19, known as R0, as low as possible to prevent its spread.

This fixation on numbers certainly extends to academic research. Many measures of research output, quality and impact are used to guide recruitment, funding and promotion. And while it may seem odd to question that in the midst of a viral pandemic, I think now is exactly the right time.

There is no virtue in reducing rich complexity to mundane metrics. Haydn wrote 107 symphonies and Beethoven produced only nine – yet while most of us can hum at least one of Beethoven’s symphonies, few of us can remember Haydn’s. This, in essence, is the problem with using numbers as the sole measure of the value of something as complex as what emerges from the human imagination.

Unfortunately, measuring the quality of research is no different. Measures such as the number of scholarly publications, the impact factor of the journals in which articles appear and the citations those articles accumulate are widely used indicators of academic excellence and impact. But by themselves, these “traditional” measures do not capture the finer aspects of research quality, can trivialize in-depth scholarship, vary widely by discipline, and are so far removed from expressing public value that they are very limited in their usefulness.
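To see why the journal impact factor is such a blunt instrument, it helps to look at how it is calculated. The sketch below implements the standard two-year formula; the journal and the citation figures are hypothetical.

```python
def impact_factor(citations_to_prior_two_years, citable_items_prior_two_years):
    """Two-year journal impact factor for year Y: citations received in
    year Y to items the journal published in years Y-1 and Y-2, divided
    by the number of citable items it published in those two years."""
    return citations_to_prior_two_years / citable_items_prior_two_years

# Hypothetical journal: 600 citations in 2020 to its 2018-19 papers,
# of which there were 200 citable items.
print(impact_factor(600, 200))  # 3.0
```

Note that this is an average over an entire journal: it says nothing about the quality, or even the citations, of any individual article published in it.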

One initiative seeking to move research evaluation away from metrics and back toward expert peer review is the San Francisco Declaration on Research Assessment (DORA). Its aim is to transform the way research quality is assessed, creating a culture in which metrics are no longer the primary determinant in research promotion and funding.

In a growing global groundswell, DORA has been signed by more than 15,000 individuals and 1,800 institutions since its launch in 2012. The University of Melbourne has now become the first university in Australia to sign, following the Australian Academy of Science and the National Health and Medical Research Council (NHMRC).

Numerical measures are popular with policymakers because they give a sense of objectivity and are easier and faster to apply than a more in-depth review of the research by experts. But we should always keep Goodhart’s Law in mind: “When a measure becomes a target, it ceases to be a good measure.”

Economists who study research metrics find that academics are reluctant to embark on bold or risky research in case it fails to produce papers that boost their citation count and h-index, and/or appear in high-impact journals (the impact factor being a measure originally designed not to assess the quality of research but to help market journals to librarians).
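For readers unfamiliar with it, the h-index is the largest number h such that a researcher has h papers with at least h citations each. A minimal sketch, using made-up citation counts:

```python
def h_index(citations):
    """Return the h-index: the largest h such that h papers
    have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4.
print(h_index([10, 8, 5, 4, 3]))  # 4
```

The definition makes the incentive plain: a steady stream of modestly cited papers raises an h-index faster than a single risky, slow-burning piece of work, however influential it eventually proves.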

A second problem with metrics is that they disadvantage academics who take time out to raise a family or who focus on teaching. They also favor those who publish large volumes of minor work, describe a widely used method or technology, or write many review articles, over those who publish in-depth research on difficult problems with long-term impact.

Numerical measures are also poor at reflecting the best interdisciplinary research, and they fail to capture some of our most important work: research that improves clinical practice, reforms social policy, or generates research tools and software that others can use.

Our challenge is to decouple judgments of research quality from numerical measures that reward caution and short-termism and that exclude certain types of research and researcher. The UK’s Research Excellence Framework, for example, explicitly does not use journal impact factors to assess research quality or impact. In Australia we have a similar approach in the government’s Engagement and Impact assessment and the NHMRC’s “impact statement”, introduced as part of grant applications.

Australia’s Chief Scientist, Alan Finkel, has called for the introduction of a “rule of five”. This would require researchers seeking employment, promotion or funding to submit their best five research papers from the past five to ten years, along with a description of the research, its impact, and how they contributed to what is often a team effort.

The Walter and Eliza Hall Institute in Melbourne and the Francis Crick Institute in London both support their researchers by instituting seven-year appointments for new laboratory heads, removing the constraints imposed by the need to publish for the next job or to secure another three-year grant.

The biggest problems of our time – emerging diseases, statelessness, climate change, social equity and new sources of energy, to name a few – demand the attention of our best researchers, unconstrained by simplistic measures of performance and success.

A more holistic judgment of the quality and impact of research, informed but not driven by metrics, will give our research community greater confidence to take the risks needed to break paradigms and change the world.

Jim McCluskey is Deputy Vice-Chancellor (Research) at the University of Melbourne.

