Benchmarking research management – Day 2 and summary of workshop

09 Dec 2016 14:07 | Olaf Svenningsen (Administrator)

A week has passed since the NUAS workshop How to Measure the Value and Quality of Research and Innovation Services in Reykjavik ended. For me, these have been intense and busy days, with, among other things, the DARMA-Innovation Fund Denmark workshop, which I will write about in another blog post.

One very important piece of information is that the follow-up NUAS workshop on this theme will be held in Göteborg, Sweden, on 31 August and 1 September 2017. DARMA will organize a Danish workshop for members on this theme during the first half of 2017; details will follow as soon as possible.

On Day 2 of the NUAS workshop, the results of the previous day's group efforts were collected and discussed in a plenary session. I have not yet received the presentations from the organizers, so my own photos of some of the slides (bad, and photos 1 and 2 are worse than bad) will have to do for the moment. Please feel free to comment or add to the discussion in the comment field below, or contact me directly.


The presentations followed the structure of the working groups, starting with Pre-award theme 1 (internal resources and costs). This group discussed the mission (slide 1), which tools could be used (slide 2), made some remarks on the size of the team (slide 3), and finally suggested some possible metrics (slide 4), which focused on what was not done.

Pre-award theme 2 (the perspective of those we support) followed, starting with a brief overview of pre-award tasks (slide 5) and the goal of the group's work (slide 6). This group came up with a shortlist of suggested KPIs, which included the big theme of the plenary session: customer satisfaction surveys (slide 7).

Pre-award theme 3 (a broader context) was the group I participated in. We stated that all evaluations/metrics must cover input, process and output, not just one of them (e.g. output, as in the number of awards), and that quantitative indicators cannot stand alone – qualitative aspects must be included (slide 8). Some examples of possible metrics were presented, among them recruitment of high-potentials (slide 9) and linking researchers (slide 10).

In the discussion, it was re-emphasized that the input–process–output logic is necessary, and that consistent, coherent and reliable data are a prerequisite if comparison between universities is an objective. This raised the point that it is crucial to decide beforehand what purpose the metrics will serve; if this is not decided, the risk of creating perverse incentives is considerable.


Results from the post-award groups were then presented, and I did not catch all the slides, but here are some of the main conclusions:

  • Theme 1 – Impact of processes; what could be worked with (slide 11)
  • Theme 2 (user perspective) – focused largely on user satisfaction (slide 12)
  • Theme 3 (broader context) – satisfaction surveys figured again, but also a number of suggested KPIs (slide 13)
  • Customer satisfaction got its own slide; how-to (slide 14)
  • Start-up meetings were another theme addressed separately (slide 15)


The plenary session finished with a presentation from the groups working with innovation. Since innovation and technology transfer are not a core focus of DARMA, I will not summarize that discussion, but all the presentations, including the innovation slides, will be made available on DARMA's web pages as soon as the association receives them from the NUAS organizers.

© DARMA – CVR: 35977880
