Benchmarking research management – end of Day 1

01 Dec 2016 18:52 | Olaf Svenningsen (Administrator)

The first day of the NUAS workshop "How to Measure the Value and Quality of Research and Innovation Services" in Reykjavik has just ended. The input from DARMA's members has been extremely useful – thank you to all, and please keep the comments coming! It is still too early to draw any general conclusions – we were divided into working groups, and I only know what went down in the one I participated in – but you can follow the day somewhat through my tweets. Here is a brief summary of the day:

First up was Martin Kirk of the University of British Columbia in Canada, who gave an overview of how research administration is benchmarked and evaluated in Canada – thoroughly, and with a degree of detail that I don't think we will see in Denmark in the near future (maybe not in the far future either...). Martin described the metrics, KPIs, etc. and showed results from the "U15 Admin Capacity Benchmarking" (all documents will be made available, including to DARMA's members).

Some of my notes: all benchmarking and assessment absolutely needs to be underpinned by reliable, coherent and consistent data, and the best-functioning units tend to be moderately staffed and have very well-functioning support systems. Researcher satisfaction surveys are also critical to assessing and developing services.

The next speaker was Simon Kerridge of the University of Kent, well known to DARMA. Simon presented the British perspective, with emphasis on The Metric Tide report – the name says a lot. He reinforced many of Martin's messages, e.g. that the infrastructure and the way data is collected are crucial and need to be consistent and comparable. Simon concluded that, at present, qualitative assessments (satisfaction surveys) are the best available indicators.

The third speaker was Koen Verhoef from the Netherlands, who addressed the inherent complexity of measuring and evaluating innovation and knowledge transfer. Many of his points echoed the previous speakers', and his statement that the Netherlands is moving from metrics towards more sophisticated impact assessment stuck in my mind.

After lunch we heard presentations from two researchers on their view of research services – both good, interesting and recognizable. Andrew Telles of Göteborgs universitet then introduced the group work, using chocolate cakes to illustrate how even something very simple can be difficult to capture with metrics; the message being that the reasoning behind the metrics is as important as, if not more important than, the metrics themselves.

Then followed a long, intense afternoon of group discussions. I was in the group on the theme "Pre-award: Metrics and KPIs for measuring quality and success of pre-award services – University and Society". At this moment I can't summarize the discussions with any degree of justice, but topics included using the Snowball Metrics approach, and always capturing input, process and output metrics, as any one of those alone will paint a misleading picture. How to educate your leaders, the university executive, to make well-informed, sensible strategic decisions was another topic. Finally, we concluded that quantitative indicators alone mean little, if anything, unless qualified by qualitative indicators or aspects.

The discussion continues and wraps up tomorrow, and a summary will be posted here, so stay tuned.
