
Benchmarking research management – NUAS and DARMA

24 Nov 2016 08:04 | Olaf Svenningsen (Administrator)
One week from now, I will travel to Iceland to attend the NUAS workshop How to Measure the Value and Quality of Research and Innovation Services in Reykjavik. It is the first of two NUAS workshops aimed at defining how to measure what we all do: research management/service/support. You can read the workshop program here, and how the work groups and themes are defined in this PDF file. The second workshop will be held in Göteborg, Sweden, in the spring of 2017.

Since how things are measured will determine what is actually done – as Simon Kerridge emphasized at DARMA's Annual Meeting this year – this may be one of the most important processes for DARMA's members in the near future.

DARMA's board has decided that I will participate in the Reykjavik workshop – I will comment on the progress "live" on Twitter and summarize here on DARMA.dk. In early 2017, DARMA will organize a Danish workshop on this theme, where the outcome of the Reykjavik workshop will be discussed with the purpose of providing input to the Göteborg meeting.

It would be very helpful for me to get as much input as possible from you, DARMA's members. Please read the program and workshop document (links above), and either write your comments below or send them to me in an e-mail. I will value all input very much!

Comments

  • 24 Nov 2016 11:02 | Torben Høøck Hansen
    Looking at theme 1, I cannot help but feel the discussions could turn into a mess; metrics such as how much money was raised, success rates, and the number of applications are not a true measure of the support staff’s quality (or the researchers’, for that matter). While simple to collect, this kind of metric is influenced by a great number of things outside the support staff’s control and influence. The most obvious is of course the evaluators; yes, they react to applications we are assisting with, but they are an erratic factor. Support staff and researchers can make a flawless application based on the most excellent science, and still meet defeat due to some ‘quirky’ evaluator. So yes to being measured, but I would want to be measured for the work I do, not for what is outside my/the support unit’s control and not directly related to my/the support unit’s competences.
    • 24 Nov 2016 22:25 | Henrik Engell-Hedager
      Hi Torben, this is why the NUAS workshop is so important. If we/DARMA do not start the discussion of being measured now, we run exactly the risk of being measured wrongly. Everyone agrees on aiming for high quality in both research and research management/administration. The NUAS workshops will hopefully help secure that.
    • 24 Nov 2016 23:29 | Olaf Svenningsen (Administrator)
      You make a very good and important point: it is necessary to evaluate research support on what research support does and/or delivers, but there is a tendency to use metrics and KPIs that are derivative. I share your fear that we may be measured on things that we do not in fact deliver, like the number of proposals or the number of awards. The necessary starting point is to define what research support/research management delivers, and then discuss how that can be measured/evaluated. At a NORDP workshop last year, this issue was discussed in depth, and the conclusion was that quantitative measures – e.g. KPIs – alone simply can't be used; some sort of assessment or qualitative evaluation is required.
      • 25 Nov 2016 10:33 | Torben Høøck Hansen
        I can see we are all three thinking along the same lines. And I am very pleased to hear that my fears match the reasons for having the workshop. The demand for accountability is seeping down through institutions, and while I find it both relevant and acceptable to assess the value of our work, I find the use of metrics, KPIs etc. not only useless, but outright dangerous. And it is ironic that research institutions, which always claim that their impact cannot be measured the same way as a sausage factory’s, stoop to such measurements when it comes to internal matters. But as the saying goes, the priest’s children are always the most ill-behaved.
    • 30 Nov 2016 10:30 | Marie Terpager
      Agreed
  • 30 Nov 2016 10:22 | Anonymous
    Good points made – KPIs etc. are here to stay for some time, so efforts to address these in the best possible way are appreciated.
  • 30 Nov 2016 10:31 | Anonymous
    The topic is highly interesting, though also quite difficult to measure. As Torben Høøck mentions, it can easily turn into a mess with a focus on very many parameters. However, for me, it all comes down to one number: the success rate (based on the number of applications, not the amount of money).

    In principle, I'm not interested in how many applications have been submitted if they aren't good. Part of my job is to stop researchers from aiming too broadly ("skyde med spredehagl" – firing with buckshot). Instead, each and every application should be focused towards a single foundation (e.g. knowing the background of the board, whether the science is within the charter ("fundats"), etc.).
    In principle, I'm not interested in how much money has been granted, as this could rely on one single large grant or the very opposite.
    What matters is the number of applications that are good on all relevant levels (e.g. consortium, scientific strength, outreach, impact, etc.) and that are aimed at a specific foundation. And the evaluation of "good" sums up to: the success rate.


    Benchmarking of research support can be relevant for at least two purposes:
    - “Globally” - comparison between institutions.
    - “Locally” - effect of research support at the individual institution. E.g. comparison of the success rate with and without research support, including how much support a given researcher/application has had.

    As interesting as global benchmarking can/will be, local benchmarking could reveal the importance of research support (hopefully :) ) and where future efforts should be directed.
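
    To illustrate the "local" comparison above, a minimal sketch in Python could look like the following; the applications list, the 'supported' flag and all figures are hypothetical stand-ins for whatever an institution actually records, not real data:

        # Minimal sketch of the "local benchmarking" idea: compare success rates
        # (granted / submitted applications) with and without research support.
        # All data below are hypothetical illustrations, not real figures.

        applications = [
            {"supported": True,  "granted": True},
            {"supported": True,  "granted": False},
            {"supported": True,  "granted": True},
            {"supported": False, "granted": False},
            {"supported": False, "granted": True},
            {"supported": False, "granted": False},
        ]

        def success_rate(apps):
            """Share of submitted applications that were granted."""
            return sum(a["granted"] for a in apps) / len(apps) if apps else 0.0

        with_support = [a for a in applications if a["supported"]]
        without_support = [a for a in applications if not a["supported"]]

        print(f"With support:    {success_rate(with_support):.0%}")     # 67% in this toy data
        print(f"Without support: {success_rate(without_support):.0%}")  # 33% in this toy data

    Even such a simple comparison would of course have to control for field, funder and application size before it says anything about the support unit itself.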
  • 30 Nov 2016 10:42 | Nicolaj Tofte Brenneche
    Dear Olaf and colleagues,
    Thank you very much for your invitation to join this very important discussion.

    The desire to have clear performance metrics is well known, and so are their unintended consequences. The point referred to in the blog post, that 'how things are measured will determine what is actually done', is a standard Performance Management argument (what you measure is what you get). However, this is not always the case. Measures do of course influence decisions and practices, but what is important to notice are their unintended effects on behaviour. The Dean of Education at CBS, Jan Molin, some time ago wrote a very thoughtful text about this issue, which I strongly recommend everyone to read. It puts the management desire for measurements into perspective: http://www.cbs.dk/cbs/organisation/direktion/nyheder/lever-numerologernes-tid

    For our work as research supporters, I think that we need to be very aware that the practices of and demands on research support are heterogeneous: It makes a difference whether you work with medical science or humanities, in a small or large university, in a centralised or decentralised support structure, to mention only a few parameters. Furthermore, the wider landscape we operate in is dynamic and the preconditions for success are shifting over time.

    I believe that, in a Danish research support context, we are in a phase where we see a growing orientation towards professionalisation. With this development comes a growth in self-awareness and identity, educational solutions, career tracks, research support structures and a common nomenclature. If we are to develop 'performance measures' in the coming years, it would be relevant to systematise and benchmark the professional training of research support officers and thus take the professionalisation process to the next level. This would be much more fruitful than the construction of some overarching, heterogeneity-ignorant KPI.
  • 30 Nov 2016 10:52 | Helen Korsgaard
    I agree with you all here. I find it wrong to evaluate support by counting consultations with funders or researchers. Then the focus will be on counting rather than on the quality of the consultations. That will decrease rather than increase the number of grants that we all want to see increase. As far as I know, there is a lot of research within this area, new public management, that proves this point.
  • 30 Nov 2016 13:17 | Torben Høøck Hansen
    Provoked by Olaf's mail, I can only say that anybody suggesting to measure by one simple metric (let me guess – how much funding over a given period of time?) cannot be taken seriously, no matter that person’s position at any university. To simplify things in such an extreme way would be comparable to measuring ALL research by the number of Nature papers only.
    If we look at what we support, namely research, it should be noted that it is measured in a vast number of ways. Some are metrics, some are assessments, some are qualitative, and some are simply the words and judgements of individuals/peers. So perhaps it is time that university administration, research support included, is measured and discussed with the same degree of professionalism.
  • 30 Nov 2016 13:32 | Anonymous
    In addition to the broad range of good comments already posted, we also think it is important (this from a very local perspective) to discuss the role of research support in stimulating researchers' motivation to engage in research applications, and in cultivating a culture and nurturing an environment that encourage and facilitate project engagement. The users' assessment of research support, as well as their actual use of it and the nature of that use, could also be vital elements in the current discussion.
    When discussing research support, it is also vital that we as support staff continually develop in close dialogue with stakeholders such as applicants, evaluators, foundations etc. Concretely, for instance, by inviting applicants to discussions at network meetings, having foundation board members give talks at information meetings, etc. These are not new measures of course, but they are often seen more as measures to inform rather than measures that actually develop the competencies of support staff. These are examples of activities that also create value in the development of research support staff.
  • 30 Nov 2016 14:11 | Annedorte Vad
    I very much look forward to this seminar and the discussions with colleagues. I am very much in favour of statistics, spreadsheets and numbers. But ONLY as a tool to keep track of what has been done and to use for planning what to do next. I use data for this with great joy. I fear someone (management) wants to turn it into a simple measure of success, and I don't like the consequences. I will repeat the famous quote: 'Not everything that counts can be counted, and not everything that can be counted counts', which is spot on for this kind of thinking.
