Knowledge That Counts: Points Systems and the Governance of Danish
Universities
Susan Wright
Introduction
The term ‘governance’ as applied to universities has more than one meaning. It was once widely
used from the fourteenth to sixteenth centuries in England to mean the way an institution like a
university was run, how a landed estate or even a whole country was kept in good order, and how
an individual conducted business by maintaining ‘wise self-command’ (Oxford English
Dictionary 1989 VI:710). In almost all contexts – except universities – these meanings had fallen
into desuetude by the eighteenth century, only suddenly to burst back into use in the 1990s. Their
decline coincided with governing becoming the specialized role of a ‘government’ which,
through the machinery of a centralized bureaucracy, managed the population and economy of a
nation state. The resurgence of ‘governance’ in the 1990s heralded a change in the political
order, when
‘government’ … becomes less identified with ‘the’ government – national
government – and more wide ranging. ‘Governance’ becomes a more relevant
concept to refer to some forms of administrative or regulatory capacities.
(Giddens 1998: 32–33)
There were three main characteristics of this shift from government to governance in the 1990s.
First, instead of the bureaucratic management of a society, governments increasingly
accomplished the maintenance of order and the delivery of services through networks of
agencies and actors operating on global, national and local scales and including trans-national
agencies, international corporations, state and public institutions, arms-length agencies, and civil
society organizations (Rhodes 1997). Governments were to encourage enterprise and
competition by contracting out service delivery to such networks of partners (known in Canada
as alternative service delivery [ASD]) (Osborne and Gaebler 1992). Second, what had to be
governed were no longer clear organisational structures but this network of often obscure
linkages. Contracting organisations were free to manage their own production processes or enter
subcontracts with others. Government tried to maintain control through technocratic measures
such as setting performance targets and key performance indicators, conducting audits, checking
contract compliance, and basing payment on the number and quality of outputs (Dean 1999).
Often these technocratic measures acted, in Foucault’s terms, as ‘political technologies’ (Dreyfus
and Rabinow 1982: 196) in that the political and ideological aims of government were not made
explicit but were embedded in the detailed operations of these apparently politically neutral and
purely administrative systems. Third, this system of governing relied on individuals’ freely
exercising their own agency, but, often learning from the pedagogies embedded in political
technologies, they were to exercise their freedom in ways that achieved the government’s vision
of order and contributed to the international success of the competition state (Rose 1989,
Pedersen 2011).
This new meaning of governance echoed the old in that it spanned the three scales of the self-
management of individuals, the running of institutions, and the ordering of a country, now part of
a reconceptualised space of global competition. But between the old and the new meanings of
governance there was an important shift in who had the power to define ‘good governance’. It
was no longer up to people or institutions to maintain their own ‘wise self-command’ in a
bottom-up fashion. Now ‘good governance’ was defined ‘top-down’ and was achieved when the
government’s ideas of the proper order of the country were enacted in the management of
organizations and the conduct of individuals. The apotheosis of this art of government was to
find a single technical measure that would operate on all three scales at once and that would
simultaneously order the competitive state, the enterprising organization, and the
‘responsibilized’ individual according to the government’s ideological and political vision.
This chapter will focus on universities, one of the few institutions that has kept alive the
original idea of governance when it otherwise fell into disuse.1 In that original sense, governance
refers to the array of ways that a university orders its own affairs by managing its relations with
the state, maintaining its own internal organization, and instilling certain values and expectations
of individual conduct. Now this meaning of governance is overlain by the resurgent meaning, in
which it is government that defines the contribution of universities to the competitive state, the
ways that the institution should be organized and managed, and the appropriate behaviour for
‘responsible’ academics and students to adopt. As will be discussed in this chapter, the Danish
government’s reforms of universities are a good example of the introduction of this top-down
form of governance. In particular, the Danish government’s system for allocating a scale of
points for different kinds of research publications was a political technology that aimed to bring
the ordering of the sector as a whole, individual institutions, and academic staff into alignment.
The government used the points system to establish competition for funding between
universities, which was considered a necessary pre-requisite for them to perform well on the
world stage; it made clear to newly appointed strategic leaders what priorities to set for their
organization; and every individual quickly learnt what is expected of them to maximize ‘what
counts.’ In short, the points system was an attempt, through a single mechanism, to set up an
institutional circuit that took governance from the world stage to the self-management of the
individual on the front line and back.
Systems of governance do not always work as designed. The chapter will start by setting out
the two strands of thinking that informed the university reforms in Denmark. One strand was the
reform of the public sector to create a competition state, and the other strand refocused the work
of universities on what the government deemed necessary for Denmark to succeed in a global
knowledge economy and maintain its position as one of the richest countries in the world. In
both strands of the reforms, performance indicators, such as the points system, became an
important mechanism of university governance. The second section summarizes the long process
of designing the points system for the government to use in funding algorithms for the sector,
and for university leaders to use as a tool of management. The third section is based on fieldwork
in a faculty which had long used such points systems. Academics had internalized the system’s
priorities, but had also internalized conflicts between their own motivation and the system’s
incentives, with resultant high levels of stress. The fourth section, based on fieldwork in another
faculty where the points system was a new phenomenon, explores the ways that academics used
different combinations of pragmatic accommodation and principled resistance to the system’s
imperatives, until finally it was withdrawn.2
Governance and the Global Knowledge Economy
A major reform of university governance in Denmark started with a University Law in 2003.
This law was in keeping with the wider reform of the public sector that the finance ministry had
been developing since the 1980s (Wright and Ørberg 2008). Under this approach, called ‘Aim and
Frame Steering’ (mål- og rammestyring), ministers were no longer to run the bureaucratic delivery of services.
Instead, they were to focus on formulating the political goals for their sector and the legal and
budget framework through which they were to be realised. The delivery of these services and the
achievement of the political goals were then contracted out to agencies. In a process Pollitt et al.
(2001) call ‘agentification,’ parts of the bureaucracy and other state-run organizations, like
universities, were turned into such agencies, with the legal status of a person and the power to
engage in contracts with the ministry. The ministry steered these agencies by writing clear
performance goals into the contracts along with numerical and quality measures for their
achievement. For example, the ministry’s contracts with universities contain long lists of the
numbers and percentage rise in outputs of graduates and PhDs, publications, externally funded
projects, and so on to be achieved within a defined period. The state auditor annually checks the
universities’ reports on the fulfilment of these contracted targets. Output and performance
measures have also become more important in the allocation of state funding, on which the
universities are predominantly reliant. Payments for teaching were already (since 1994) entirely
based on the numbers of students who passed their exams each year. Following the 2003 law, the
ministry worked on defining and weighting the criteria for increasingly basing the rest of their
funding on outputs and for allocating this funding competitively between the universities. As will
be shown below, a points system based on the number of publications and proxies for their
‘quality’ became a key mechanism for shifting towards output and performance payments in the
government’s new way of steering the university as one of its public sector ‘service providers.’
While these changes to the steering of universities were clearly part of a reform of the whole
public sector, the minister for research also tied them closely into a strategy for Denmark’s future
economic success. Denmark had been an avid participant in the work of the Organisation for
Economic Co-operation and Development (OECD), which through the 1990s promoted the idea
that the future lay in a global economy operating on a new resource – ‘knowledge.’ This idea
was taken up by other transnational organizations like the European Union (EU), the World
Economic Forum (WEF), and the World Bank (WB). They argued that a future global
knowledge economy was both inevitable and fast approaching. Each country’s economic
survival, they maintained, lay in its ability to generate a highly skilled workforce capable of
developing new knowledge and transferring it quickly into innovative products and new ways of
organising production. The OECD in particular developed policy guidance for its members (the
thirty richest countries in the world) to make the reforms deemed necessary to survive this global
competition. It measured and ranked their performance and galvanized national ministers into an
emotionally charged competition for success and avoidance of the ignominy of failure.
Universities were thrust centre stage in this vision of the future. They were to ‘drive’ their
country’s efforts to succeed in the global knowledge economy. As well as aiming to attract the
‘brightest brains’ through the fast growing and lucrative international trade in students, many
governments set a target for 50 per cent of school leavers to gain higher education, and sought to
reform education so that students not only acquired high-level cognitive skills, but also the
‘transferable’ skills thought necessary for employment in a global knowledge economy. Policy
makers widely adopted the idea that university research should shift from Mode 1 (motivated by
disciplinary agendas) to Mode 2 (motivated by social need) (Gibbons et al. 1994). In a
bowdlerized version of this argument, the Danish government’s catchword for their university
reform was ‘From idea to invoice,’ arguing that academics should develop closer relations with
industry and focus on results that would lead to innovations. The OECD developed checklists
and tool kits, guidance and best practice to help governments reform universities. These included
changing the management of universities to make them capable of entering into partnerships
with industry and the state and of delivering the performance these partners expected.
The Danish University Law in 2003 brought the agendas for both the competition state and
the global knowledge economy to bear on university management. Whereas previously
academic, administrative and technical staff and students had elected the leaders and decision
making bodies at every level of the organization, all these were abolished, apart from elected
study boards, which continued to be responsible for the design, running, and quality of education
programmes. Now a governing board, with a majority of members appointed from outside the
university, appointed the rector, like a CEO of a company. He or she appointed deans, who
appointed heads of department. In what was called ‘unified management’ (enstrenget ledelse),
each leader was accountable to, and had an obligation of loyalty towards, the superior who had
appointed him or her, and was no longer, as in the previous structure, primarily accountable to
the people he or she led. Although a later amendment required the ‘unified management’ to
involve employees in decisions, the faculty and departmental boards and their rights and powers
which had involved members of the university in decision making had been abolished. For the
first time, the rector now spoke ‘on behalf of’ or even ‘as’ the university, as a coherent and
centrally managed organization (Ørberg 2007). This was a clear break with the idea of the
university as a community of academics, administrators, and students.
By changing the legal status, state steering, financing, and management of universities, the
minister claimed he was ‘setting universities free’: he was making them into agencies with
the power to enter contracts with the state, industry, and other organizations, and giving
the new leaders ‘freedom to manage’ – it was up to them how they ran ‘their’ organization as
long as they delivered on contracts. With the rector as the head of a strongly line-managed and
coherent organization, empowered to decide on the strategic use of the university’s funding and
acting as an interlocutor with the ministry, politicians, and industry, the minister claimed that
government could restore its trust in universities. When, shortly afterwards, the minister initiated
mergers between universities and with government research institutes, he felt at least three
Danish universities were now capable of appearing within the top ten in Europe measured by one
of the world ranking tables (Kofoed and Larsen 2010). In his view, universities now had the kind
of organization needed to drive Denmark’s efforts to succeed in the global knowledge economy
and could be trusted with increased government funding to that end. A Globalization Council
was established by the prime minister and produced a strategy that argued that Denmark’s
continuing status as one of the world’s wealthiest countries largely depended on the performance
of its universities (Government of Denmark 2006). To achieve this, a ‘Globalization Pool’ during
the years 2010–12 substantially increased university budgets. In the government’s view, to
incentivize Danish universities to become ‘Global Top Level Universities’, this funding had to
be allocated competitively and on the basis of ‘quality indicators’ (Government of Denmark
2006: 22). Right from the start, academics were worried that the indicators would not just be
used to establish competition within the sector, but as tools for internal management, to allocate
funding between faculties and departments, to incentivize the behaviour of individual staff, and
even to inform hiring and firing decisions (Emmeche 2009b). The ministry’s steering group stated explicitly that the
‘quality indicators’ were expected to have an effect on the behaviour of individual researchers,
motivating them to publish their research in the most prestigious ‘publication channels’ that can
be used to compare research quality internationally (FI 2007; FI 2009b). In the ministry’s task of
devising the output indicators and the formula for the competitive funding system, the agendas of
the public sector reforms and the preparation for the global knowledge economy came together.
By choosing indicators that counted in the world rankings, restructured the sector competitively,
and made clear to each individual what counts, it seemed they had found a mechanism which
brought these three elements of governance into alignment.
Devising a System for Competitive Allocation of Funding
The process of devising indicators that would mobilise the whole university sector, the internal
organisation of each institution and each individual academic and would improve Denmark’s
standing in the global university rankings is presented diagrammatically in Figure 1.
[Figure 1. Institutional circuitry: the Danish research points system from individual
performance to world rankings]
In autumn 2006, the ministry started to look for ‘quality’ indicators for teaching, knowledge
transfer (videnspredning) and research on which to allocate funding competitively between
universities. In negotiation with Danish Universities, it was decided that, for teaching, the
existing calculation of outputs – the number of students who passed their year’s exams – could
also be used as a measure of ‘quality’. This was doubted by some academics who had argued
repeatedly that a system which rewarded faster throughput of students with fewer dropouts and
fewer failures might improve ‘value for money’ but might also, perversely, incentivise the
lowering of standards. The government rejected this argument, claiming it could rely on
academics’ professionalism to maintain standards.3 Paradoxically, the government both designed
indicators to change academics’ behaviour, but also depended on academics resisting these
incentives. The ministry set up working groups to devise new quality indicators for outputs in
knowledge transfer (videnspredning) and research. The knowledge transfer working party
produced a report that was criticized for poorly defining activities, which ranged from industrial
innovation to enhancing public debate and democracy. Eventually, knowledge transfer was
dropped as an indicator.
The working party charged with devising an indicator for research quality began reviewing
available European models. They rejected the U.K.’s Research Assessment Exercise, based on
peer review panels, as too costly in staff time. The Leuven model combined a number of
indicators – PhD completions, external funding, and citation rates for publications. Research
commissioned by the humanities faculties of Danish universities showed that measures based on
commercially produced citation indexes were inappropriate for the humanities, as humanities
faculty published very little in the international journals covered by those firms (Faurbæk 2007).4
It was agreed that there should be one measure for all disciplines. Therefore, the working party
adapted the Norwegian model (Schneider 2009), which allocated differential points to journal
articles, chapters in edited volumes, and monographs depending on whether they were ‘top level’
or not and peer reviewed or not. In this model, ‘quality’ is not assessed directly but relies on the
journal’s or publisher’s peer-reviewing and ‘international’ status (defined as in an international
language and with under two-thirds of contributors from the same country). The Australian
system of auditing and ranking universities called Excellence in Research for Australia (ERA)
entailed similar ranked lists of journals until the minister cancelled them at the last minute. He
said this was because university managers were using the lists in an ‘ill-informed and undesirable
way’ to set academics targets for publications in top ranked journals (Carr 2011). In contrast, the
Danish government’s aim was for managers and academics to treat measures as targets.
The Danish model required all academics to enter their publications into their university’s
database each year. These would be put together as a national database and points allocated to
each publication according to an authorized list of which journals and publishers were ‘level 1’
or ‘level 2’. Level 2 journals were defined as the leading international journals that published the
top 20 per cent of the ‘world production’ of articles in a field. To create this authorized list, in
late 2007 the ministry, with the agreement of Danish Universities, set up 68 disciplinary groups
involving 360 academics. They delivered their lists to the ministry in March 2009. The ministry
found that the same journal could appear on two lists at different levels – presumably because it
was central to one discipline but more peripheral to another. When the ministry published its
consolidated list on its web site, immediately 58 of the 68 chairs signed a petition saying it was
not an appropriate tool for distributing funding and asking the ministry to remove the list from its
web site (Forskerforum 2009a; Richter and Villesen 2009). One disciplinary group found 89 of
the journals they had put in the ‘lower level’ had been upgraded to ‘top level’ whilst 30 of their
most important journals had been downgraded (Richter and Villesen 2009). In another
disciplinary group, seven coffee table magazines suddenly appeared in the ‘top level.’ No Danish
journals or Danish publishers appeared as ‘top level’ at all, disadvantaging subjects such as
Danish language, literature, history, and law (Larsen, Mai, Ruus, Svendsen and Togeby 2009).
Overall, one per cent of all the journals academics had selected as important had disappeared
(Larsen et al. 2009). The press confronted the minister, who admitted, ‘It’s not as easy as one
may think to make a ranking list of 20,000 journals,’ and the list disappeared from the ministry’s
web site (Richter 2009; ForskerForum 2009b). The discipline groups were asked to re-work their
lists, but this time each journal was allocated to a specific discipline to avoid overlaps. They
delivered their lists again in September 2009, but 32 of the disciplinary group chairs signed a
statement that they could not vouch for this indicator and their advice was not to use it for
funding allocation (Emmeche 2009b: 2). The disciplinary groups had worked for two years and
still only listed journals; there were no lists of all the publishing houses for monographs and
edited volumes relevant to each discipline, let alone decisions about which of them were ‘level 1’
and ‘level 2.’ The ministry therefore published the ideal version of the points system alongside a
‘temporary’ one. By default, the temporary list seems to have become permanent. It notably
downgraded the points for monographs and edited volumes, which are the publication outlets
used predominantly by the humanities (see Table 1).
Note: In addition, a PhD thesis initially earned two points, a ‘habilitation’ or professorial thesis
five points, and a patent one point. (PhD theses were later removed from the points system to
avoid double counting, as ‘completed PhDs’ was already a category used for the distribution of
the block grant; see Table 2.) Source: FI 2009b.
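The tallying logic of such a points system can be sketched as follows. The point values and the fractional co-author counting are hypothetical placeholders modelled loosely on the Norwegian scheme, not the ministry’s actual Table 1 weights:

```python
# Sketch of a publication points tally in the Norwegian/Danish style.
# The point values are HYPOTHETICAL placeholders; the real weights were
# fixed by the ministry's authorized list (FI 2009b).
POINTS = {
    ("journal_article", 1): 1.0,  # 'level 1' journal
    ("journal_article", 2): 3.0,  # 'level 2' (top level) journal
    ("book_chapter", 1): 0.5,
    ("book_chapter", 2): 2.0,
    ("monograph", 1): 5.0,
    ("monograph", 2): 8.0,
}

def publication_points(pub_type, level, n_authors=1):
    """Points for one publication, divided equally among co-authors
    (fractional counting, as in the Norwegian model)."""
    return POINTS[(pub_type, level)] / n_authors

def total_points(publications):
    """Sum points over (type, level, n_authors) records for one researcher."""
    return sum(publication_points(t, lvl, n) for t, lvl, n in publications)

pubs = [("journal_article", 2, 2),  # level 2 article with two co-authors
        ("monograph", 1, 1),
        ("book_chapter", 1, 3)]
print(round(total_points(pubs), 2))  # prints 6.67
```

Whatever the exact weights, the structure is the same: ‘quality’ enters only through the level of the outlet, never through an assessment of the work itself.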
Now that the ministry had its lists and could calculate the research points for each university
each year, it had to decide what weight to give these points in the funding allocation model. An
allocation model had already been developed in the late 1990s, based on 50% for teaching, 40%
for external funding and 10% for PhD completions, but this was only used to distribute marginal
amounts in an ad hoc fashion (Schneider & Aagaard 2012: 195). Now the ministry proposed that
research points should be given a 50 per cent weighting, teaching 30 per cent, and knowledge
transfer 20 per cent, but Danish Universities rejected this. In 2009 Danish Universities finally
suggested (echoing the Leuven model) that the indicators should be teaching (45 per cent), PhD
completions (ten per cent), and research (45 per cent). But they argued that research should be
divided into 35 per cent for funding from external sources (e.g., contracts with industry or grants
from the research council) and the research publication points should only be given a ten per cent
weighting, although this would increase gradually to 25 per cent. The government agreed to this
proposal (FI 2009a).
Table 2. Weighting of indicators (per cent) in the formula for competitive allocation of basic grant

        Teaching   Externally funded   Research publication   Completed
                   research            points                 PhDs
2010       45            35                    10                 10
2011       45            30                    15                 10
2012       45            20                    25                 10

Source: FI 2009a
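The mechanics of this weighted formula can be sketched as follows, using the 2012 weights from Table 2; the universities, their indicator figures, and the pool size are all invented for illustration:

```python
# Sketch of the competitive allocation formula with the 2012 weights
# (Table 2). Each university's share of the competitive pool is a
# weighted sum of its fractional share on each of the four indicators.
# All institutional figures below are invented.
WEIGHTS = {"teaching": 0.45, "external_funding": 0.20,
           "publication_points": 0.25, "completed_phds": 0.10}

universities = {
    "Uni A": {"teaching": 500, "external_funding": 120,
              "publication_points": 900, "completed_phds": 60},
    "Uni B": {"teaching": 300, "external_funding": 200,
              "publication_points": 700, "completed_phds": 40},
}

def allocation_shares(unis, weights):
    """Weighted sum of each university's fractional share per indicator."""
    totals = {k: sum(u[k] for u in unis.values()) for k in weights}
    return {name: sum(w * u[k] / totals[k] for k, w in weights.items())
            for name, u in unis.items()}

pool = 100.0  # hypothetical competitive pool, million kroner
for name, share in allocation_shares(universities, WEIGHTS).items():
    print(f"{name}: {share * pool:.1f} million kroner")
```

Because each indicator’s shares sum to one and the weights sum to one, the shares always exhaust the pool exactly: the formula redistributes a fixed sum according to relative performance rather than rewarding absolute output.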
The final stage in setting up this system depended on gaining the agreement of enough
political parties to give the proposal a majority in parliament. Danish Universities finally backed
the minister’s ‘authorized list’ and competitive funding formula even though there was still
disquiet among members of the disciplinary groups. The spokesperson of the Radical Liberals,
who had been holding out against this process, took Danish Universities’ approval to mean that
‘the universities’ had approved, seriously misunderstanding that Danish Universities was the
voice of the rectors and that, under the 2003 University Law, universities no longer had mechanisms for
speaking collegially. She finally acceded on 5 November 2009, just in time for the system to be
implemented in the Finance Law from January 2010 (FI 2009a).
The new competitive funding formula would not be applied to the universities’ existing block
grants, but only to additional funding, called ‘the globalisation pool’. The political parties agreed
a document (Ministry of Science, Technology and Development 2009), which explained that
they would increase funding for research and development by 10,000 million kroner over three
years, so that public funding of research would meet the Barcelona target of 1 per cent of
GNP.5 Of this extra funding, 67 per cent was allocated to special initiatives like upgrading
laboratories (1,000 million kroner each year), Danish participation in international innovation
partnerships (30–90 million kroner each year) or collaboration with the private sector (130–190
million kroner each year) and around 200 million kroner per year was used to increase the
teaching output payment per student passing exams in the humanities and social sciences. Thirty-
two per cent of the extra funding was allocated to research. But of this, about a third was
allocated to ‘strategic research’, and further earmarked for the government’s priority research
areas (e.g., bio-products and food research received 50–70 million kroner each year). A further
third was allocated to special programmes in ‘free research’ (Research Council competitive
grants which are responsive to researchers’ initiatives but for which demand far outstrips
supply). This meant, as shown in Table 3, that the globalisation pool increased the universities’
annual basic grant by very little – an increase of 7.8% from 2009 to 2010 and by much smaller
amounts in the following years. Initially only 3.9% of the basic grant was allocated between the
universities on the basis of the points system to which so much administrative and academic
effort had been devoted over the previous three years, although that had risen to 8.9% by 2012.
Even more importantly, an evaluation of the points system in 2012 revealed that the
redistribution effect of the points system, compared to the previous method of allocating the
basic grant, was only 1.6 per cent. That is, it redistributed only about 11.5 million kroner out of 720
million kroner in 2012, the year when it was most significant (Sivertsen and Schneider 2012:
23).
Table 3. Universities’ Block Grant (Basisbevilling) 2006–12 (in billion kroner)

                                   2006   2007   2008   2009   2010     2011     2012
Total block grant for research      6.2    6.5    6.9    7.5    7.7      8.0      8.1
Increase on previous year            –      –      –      –     7.8%     3.7%     1.25%
Of which, competitive allocation     –      –      –      –     0.300    0.570    0.720
based on bibliometric points                                   (3.9%)   (7.1%)   (8.9%)

Sources: For 2006–09, the 2012 budget law. For 2010–12, Sivertsen and Schneider 2012: 23, Table 2.5.
Clearly, even a very small financial incentive suffices to establish a competitive ethos between
universities. For some universities, which historically received comparatively little basic funding
from the government, this new source of funding from research publications could be an
important additional income. But as other universities followed suit, and all increased their
research output, they would find themselves competing over a finite pool in a zero-sum game. As
each university increased their research points, the value of each point would decline, yet they
would have to keep up the pace of the treadmill, ever increasing their research output and their
points score, so as to maintain their position relative to the other universities, and their share of
the competitive funding.
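The arithmetic of this treadmill can be illustrated with a minimal sketch (all figures invented): because the pool is fixed, funding tracks each university’s relative share of points, so a uniform increase in output changes nothing except the value of a point:

```python
# Zero-sum dynamic of a fixed competitive pool: if every university
# raises its points output by the same factor, every share, and hence
# every grant, is unchanged, while the kroner value of a point falls.
def shares(points):
    total = sum(points.values())
    return {uni: p / total for uni, p in points.items()}

pool = 300.0  # million kroner in the competitive pool (invented)
before = {"Uni A": 1000, "Uni B": 800, "Uni C": 600}
after = {uni: p * 1.5 for uni, p in before.items()}  # all publish 50% more

for label, points in [("before", before), ("after", after)]:
    grants = {uni: round(pool * s, 1) for uni, s in shares(points).items()}
    per_point = round(pool / sum(points.values()), 4)
    print(label, grants, "kroner per point:", per_point)
```

Running the sketch shows identical grants in both rounds while the per-point value drops from 0.125 to 0.0833 million kroner: every university must keep increasing its output simply to stand still.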
True to the new system of ‘Aim and Frame’ steering, the minister used his contracts with
university leaders to commit them to use output and ‘quality’ indicators to create a competitive
ethos throughout their organization. In its contract with the Minister for the period 2006–8, the
university on which this chapter is focused committed itself to developing internal systems for
allocating research funding according to ‘international quality criteria’ in 2007 and to distribute
up to ten per cent of its budget between faculties on the basis of these criteria in 2008. The
rector’s contracts with faculty deans further outsourced this commitment to allocate funding
competitively between departments. For example, the humanities faculty contract obliged the
dean to allocate ten per cent of funding between departments on ‘quality’ criteria in 2008 and the
faculty’s research committee learnt that if they did not develop a method for allocating funding
based on research quality by spring 2007, the rector would withhold 6.3 million kroner from the