Research Fortnight: All assessment systems come at a price

This article by Maddalaine Ansell, chief executive of University Alliance, was originally published in Research Fortnight on 19 July 2016. 

Researchers love to hate the Research Excellence Framework. Some say that it is overly burdensome and expensive. Others think it creates the wrong incentives, encouraging universities to game their way to better results. Still others criticise it for driving a wedge between researchers selected for assessment and those left out.

University managers, policymakers and funders, on the other hand, believe that it is a useful management tool that has increased research quality, helped justify funding and shaped national priorities. Perhaps because of this, ministers have been clear that it’s here to stay. The results of the next exercise are expected—probably—by 2021.

It’s easy to pick holes. Working out how to improve the REF—the task Nicholas Stern faces—is far harder. The many, sometimes contradictory, submissions to his review may or may not have been helpful.

The biggest bone of contention is whether to assess institutional performance through the mandatory submission of all staff, or whether universities may continue to select staff and units of assessment. In our submission to the Stern review, University Alliance supported the latter.

The present approach produces granular data for funding allocation. This ensures that research excellence is identified and rewarded wherever it exists, even in smaller institutions that have historically received little or no funding. It also means staff can move fluidly between research, teaching and knowledge exchange.

This is particularly useful for universities with strong links to industry, where staff may do commercially sensitive research that they cannot publish. Requiring all staff to be submitted to the REF would discourage this flexibility.

Those arguing for institutional-level assessment portray it as less time-consuming, less divisive, and less vulnerable to gaming. This may all be true, although the games may just change, rather than vanish. Nevertheless, it is not surprising that the loudest advocates of this approach are the universities that receive the lion’s share of the funding. Institutional-level assessment would make their share grow even larger.

This would not help the government’s aim of creating a dynamic research base. Allocating funds on the basis of scale or track record would disadvantage new challengers. It is better to have a system where every institution has to stay on its toes.

Universal staff submissions would also bring administrative challenges. Submitting all staff would mean that panels would have a great deal more to assess, at greater expense. Seeking to reduce the burden by sampling could produce some very odd results in universities whose staff have diverse career portfolios, including spells in industry. Developing a robust system by 2021 might be tricky.

The present government understands that our research strength depends on diversity and dynamism, and that maintaining them requires both core and project funding. Without this dual support, the less research-intensive institutions could not develop their competitiveness. Maintaining selective staff assessment would do most to promote excellence through competition.

The higher education and research bill that is before the UK parliament reiterates the commitment to this dual support system and seeks to give it statutory protection. The plan to create Research England—a separate entity within the proposed umbrella body UK Research and Innovation (UKRI)—to take responsibility for the REF from the Higher Education Funding Council for England is also helpful.

This is not to say that the REF is perfect as it is. Combining assessment of a research unit’s environment with its support for creating impact—conveyed separately through the impact statement in REF 2014—would be more efficient and give a more holistic picture of how universities are trying to ensure their research makes a difference. We might also usefully broaden the definition of impact, and create alignment with the Teaching Excellence Framework, by including research-informed teaching as a valid impact.

The biggest win, however, might come from making better use of the data generated by the REF to understand not just individual institutions but the research base as a whole. Transferring core funding allocations to UKRI—which will also administer project-based funding—provides an opportunity to create a specialist national research analysis unit to study all forms of research performance and capability across the whole sector. This would improve the evidence base for publicly funded research in the round.

Any robust system is going to come at a cost. The important thing is to make sure that the benefits justify that cost. The present system creates a more or less level playing field that supports a dynamic research base, allows challenges to emerge and ensures excellence is funded wherever it is found. Let’s build on these strengths, not undermine them.