UNIVERSITY OF MANCHESTER
MANCHESTER BUSINESS SCHOOL
Service Quality in Higher Education:
The students’ viewpoint
A dissertation submitted to the University of Manchester for the degree of Bachelor of Science in the Faculty of
Humanities
May 2012
DANIEL JAKE BEAUMONT BSc (Honours) in Management
7408488
Supervisor: Dr Anna Goatman
Statement of Originality
This dissertation is my own original work and has not been submitted for any assessment or award at the University
of Manchester or any other university.
Acknowledgements
I would like to thank Dr. Anna Goatman for her invaluable input and phenomenal support throughout the process of
completing this dissertation.
Abstract
In light of the imminent rise in tuition fees, university funding cuts and fears of
declining student numbers, gaining a sustainable competitive advantage in the higher
education sector is at the forefront of many universities’ agendas. In what can be
categorised as an extremely intangible service sector, one way that a university can
differentiate its service offering from the competition is through the provision of
excellent service quality. This study investigates perceptions of service quality at the
University of Manchester, collecting viewpoints from undergraduate students from
different academic year groups.
The research was gathered through the use of focus groups as the primary data
collection method, incorporating both quantitative and qualitative techniques to
triangulate the methodology and increase the credibility of findings. By using
Importance-Performance Analysis to examine the data, the findings indicate that
perceptions of different service quality characteristics are complex, varying in terms
of importance and performance, whilst also displaying disparity between different
academic year groups. Despite this, a set of ‘core’ characteristics has been
uncovered, which all students deemed important to their university experience,
regardless of which academic year group they were part of.
This study provides university service management with a ‘snapshot’ of the current
provision of service quality at the University of Manchester. It also offers suggestions
that could be implemented to improve service quality, given the limited resources
available to management. Due to the dynamic nature of service quality, it is essential
to conduct further research to build on this study, in order to ensure that the
university remains competitive in what is an increasingly turbulent environment.
Contents
1. Introduction
1.1 The UK Higher Education Sector
1.2 The Higher Education Sector and Service Quality
1.3 The University of Manchester
1.4 Study Structure
2. Literature Review
2.1 Introduction
2.2 The Nature of Services
2.3 The Construct of Service Quality
2.4 Measuring Service Quality
2.5 Chapter Summary
3. Research Objectives and Questions
4. Methodology
4.1 Introduction
4.2 Methodological Stance
4.3 Rationalising the Methodological Approach
4.4 The Research Process: Focus Groups
4.5 Data Analysis Techniques
4.6 Ethical Considerations
4.7 Chapter Summary
5. Discussion
5.1 Introduction
5.2 The Importance and Performance of Service Quality Characteristics
5.3 Importance-Performance Analysis and Problematic Areas
5.4 Differences between Students’ Perceptions of Service Quality
5.5 Suggestions for University Service Management
5.6 Chapter Summary
6. Conclusion
6.1 Introduction
6.2 Conclusions Pertaining to Research Questions
programme issues and understanding. Evaluating service quality and understanding
how these dimensions impact service quality can enable higher education institutions
to efficiently design the service delivery process (Abdullah, 2006a). This is important
given the current economic climate since many UK universities are facing substantial
funding cuts (Section 1.1.2). In addition, rising tuition fees have the potential to
disenchant students from higher education (Section 1.1.1), making it even more
crucial to consider the provision of service quality. Furthermore, it is important to
satisfy students, since satisfied students will recommend the service to other
prospective students and will also be more likely to continue the relationship with the
service provider (Munteanu et al., 2010). Therefore, since the student is the main
recipient of the service, it becomes even more crucial to understand service quality
and its influence on the service delivery process, in an attempt to fulfil students’
needs more effectively.
2.4.7 Existing Approaches to Service Quality Measurement in Higher Education
Many higher education institutions use in-house evaluation techniques to measure
the quality of education provided to students, as well as an assessment of student
satisfaction (Munteanu et al., 2010). They tend to perform an evaluation of other
aspects of the student experience beyond the assessment of the quality of teaching
and learning (Aldridge and Rowley, 1998). According to Gibson (2010), higher
education institutions normally use a survey to obtain student feedback on a
particular academic programme. These normally incorporate questions relating to
various aspects of the programme, including questions relating to satisfaction with
regards to the overall academic experience. In addition, the survey tends to include
corroborating questions, for instance, whether or not the student would recommend
the programme to others (Gibson, 2010). In contrast, Manchester Business School obtains feedback from students by distributing a standard course-specific survey at the
end of each semester, requiring students to evaluate their experience of a particular
module (Appendix C). In turn, completed questionnaires are collected and analysed;
results are sent to faculties, as well as the course leaders (Palihawadana and
Holmes, 1996).
Alternatively, some universities, such as the University of Manchester, also utilise
external bodies to measure their service quality. An example of this is the National
Student Survey (NSS) (Appendix D). The NSS provides an opportunity for final year
students to give opinions on what they liked about their institution and course, as well
as things that could be improved (Student Survey, 2011). Subsequently, feedback is
used to compile year-on-year comparative data, which prospective students can use
to make informed choices in regards to where and what to study (Student Survey,
2011). Meanwhile, universities can use the data to enhance the student’s university
experience. Finally, the survey is administered by an independent market research
agency (e.g. Ipsos MORI), in order to ensure transparency and encourage a fair
assessment of each university (Student Survey, 2011).
2.4.8 The Search for a Uniform Measurement Model in Higher Education
There is an extensive amount of literature pertaining to the search for a general scale
and instrument for the measurement of service quality in all or a number of distinct
groups of service contexts (Aldridge and Rowley, 1998). Furthermore, Seth et al.
(2005, p. 933) state that:
“There does not seem to be a well-accepted conceptual definition
and model of service quality nor is there any generally accepted
operational definition of how to measure service quality.”
Although measurement scales such as SERVQUAL and SERVPERF were designed
as generic measures of service quality, it is important to view these instruments as
the basic platforms, which often require modification to fit the specific situation
(Abdullah, 2006a). More specifically, higher education institutions must focus their
attention on the dimensions perceived to be important rather than focusing on a
number of different attributes (Abdullah, 2006a). However, the approach of
attempting to determine what students perceive to be the important dimensions
through the use of surveys is questionable. According to Gruber et al. (2010), many
existing surveys are poorly designed, lack standardisation and give no evidence
concerning reliability or validity. Therefore, it is inevitable that problems pertaining to
the reliability and validity will arise when developing an instrument (e.g. the
HEdPERF model) that attempts to capture and model the complex and multifaceted
nature of service quality (Hill, 1995).
More sophisticated approaches to the construct of service quality within the service
encounter are required (Svensson, 2006). Abdullah (2006a) suggests that it may be
time to bury the existing instruments and attempt to reconstruct or redefine service
quality from a new and different perspective. However, instead of trying to generalise
and attempt to model service quality for a particular sector (e.g. higher education),
Sultan and Wong (2010) see service quality as a contextual issue since its
dimensions vary widely. Therefore, it could be more worthwhile to investigate service
quality based entirely on the situation at hand, since findings may vary from one
situation to the next. Carrillat et al. (2007) support this view suggesting that the
measurement of service quality should be adapted to the context of each study.
Customers do not perceive quality in a one-dimensional way but rather judge quality
based on multiple factors relevant to the context (Zeithaml et al., 2009).
2.5 Chapter Summary
The chapter has reviewed the literature regarding the nature of services, the
construct of service quality and the measurement of service quality. In summary, it
has been acknowledged that the construct of service quality is complex and multi-
faceted in nature, making it increasingly difficult to measure. It has also been
established that confining the measurement of service quality to its particular context
could be more useful than using a generic methodology (e.g. SERVQUAL).
A review of the literature has uncovered a gap that this research attempts to address.
It is evident that service quality is deemed an ‘elusive’ and ‘indistinct’ construct by
many authors (Bolton and Drew, 1991; Carman, 1990; Cronin and Taylor, 1992;
Parasuraman et al., 1988). Furthermore, there appears to be no definitive instrument
that accurately measures service quality (Clewes, 2003), since many measurement
instruments tend to be generic and subject to various criticisms in terms of their
reliability and validity. Accordingly, Abdullah (2006a) suggests that measuring service
quality using existing instruments is inadequate and that there is a need to explore
service quality from new perspectives. In consideration of these issues, a gap exists
to conduct research that investigates students’ perceptions of service quality, using
a combination of both quantitative and qualitative techniques applicable to the study
context, in order to provide service management at the University of Manchester with
fresh insights regarding the current provision of service quality.
3. Research Objectives and Questions
The literature review presented in the previous chapter has raised a number of
objectives and questions that this study seeks to investigate further. The fundamental
purpose of this study is to investigate students’ perceptions of service quality at the
University of Manchester.
Accordingly, the following objectives can be proposed:
- To identify the most and least influential characteristics of service quality as perceived by students studying at the University of Manchester.
- To determine whether dissimilarities exist in students’ perceptions of service quality across different academic year groups.
- To provide suggestions to university service management in an attempt to improve the service quality provided to students.
The study attempts to answer the following research questions:
1. In terms of importance and performance, how do students perceive
different characteristics of service quality at the University of
Manchester?
2. Do discrepancies exist between students’ perceptions of service quality
across different academic year groups?
The findings from the research questions listed above enable the final research question to
be answered:
3. How can service management at the University of Manchester improve
the level of service quality provided to students?
4. Methodology
4.1 Introduction
The following chapter outlines the methodology and research techniques adopted to
answer the research questions proposed in Chapter 3. The chapter begins by
explaining the methodological stance of the researcher, justifying the course of research, which dictates subsequent methodology decisions (Malhotra and Birks,
2007). Following this, the use of focus groups as the primary data collection method
is rationalised. Subsequently, the focus group data collection procedure is outlined,
illustrating the sample, procedure and issues faced in the data collection process.
Next, reliability and validity are considered, outlining the measures undertaken to
maintain each of these. Finally, the ethical issues concerning the research are
evaluated and the chapter concludes with a summary, which attempts to review the
methodology and offer improvements for future research.
4.2 Methodological Stance
The methodological stance of a researcher asserts how researchers view the world
and what their assumptions and beliefs are concerning their existence (Saunders et
al., 2009). Therefore, when conducting research, it is important to ensure that the
philosophical position of the researcher is properly considered since this underpins
the chosen research strategy (Saunders et al., 2009), ensuring that the phenomenon
being investigated is fully understood (Johnson and Clarke, 2006). In order to
determine the methodological stance of the researcher, two philosophical concepts
must be considered, namely, epistemology and ontology (Saunders et al., 2009).
Epistemology relates to the study of knowledge, its limitations and how the
researcher interprets knowledge. It is concerned with how knowledge is generated
and establishes which information is valid and which is not (Bryman and Bell, 2011).
Traditionally, the paradigms of positivism and interpretivism are associated with
epistemology (Malhotra and Birks, 2007). Generally, the paradigm determines which
research techniques are adopted for a study (Malhotra and Birks, 2007). However,
when choosing which paradigm is most applicable for research, Weber (2004)
suggests that the researcher’s beliefs and the purpose of the research dictate this
decision, as well as which method is most appropriate.
Positivism is an epistemological position, which advocates the application of methods
of the natural sciences to the study of social reality and beyond (Bryman and Bell,
2011). It is based on foundational principles that advocate the values of reason, truth
and validity (Hatch and Cunliffe, 2006). Therefore, a positivist perspective suggests
that human beings, their actions and institutions can be studied as objectively as the
natural world (Fisher and Buglear, 2007). The researcher that embraces a positivist
perspective believes that reality is ‘out there’ waiting to be captured (Malhotra and
Birks, 2007). In capturing reality, a positivist approach focuses on research that
involves scientific experimentation that places emphasis on a highly structured
methodology (Gummesson, 2005). Furthermore, the overriding purpose of a positivist
approach is to generate generalised laws that can be used to predict behaviour and
provide an explanation for marketing phenomena (Fisher and Buglear, 2007).
However, in critique of a positivist approach to research, Addis and Podesta (2005)
assert that such an approach reduces participants of the study to mere numbers;
disregarding any interactions they may have in the research process.
On the other hand, interpretivism is an epistemological perspective that advocates
the need to understand the differences between humans in their role as social actors
(Saunders, et al., 2009). Interpretivism states that individuals create, modify and
interpret the world, and that adopting such an approach is often useful for generating
knowledge that is said to be socially constructed. The interpretivist approach uses a
subjective approach to explore the world, suggesting that no independent objective
reality exists. Accordingly, the interpretivist researcher believes that there may be a
wide array of interpretations of realities. The interpretivist researcher does not set out
to test hypotheses, but instead explores the nature and interrelationships of
marketing phenomena (Malhotra and Birks, 2007).
Ontology is a branch of philosophy that is concerned with the study of reality and
dictates how a researcher approaches different phenomena (Saunders et al., 2009).
Ontology is concerned with the nature of social entities and asks how we perceive
objects to exist in the world (Bryman and Bell, 2011). It questions whether reality is
objective and exists regardless of our perception of it, or whether it is subjective and
only exists because we believe this to be so (Saunders et al., 2009). Therefore, a
researcher must question whether social entities should be considered as objective
entities that have a reality external to social actors, or whether they should be
considered as social constructions built up from the perceptions and actions of social
actors (Bryman and Bell, 2011).
Conventionally, an objectivist is likely to favour quantitative research while a
subjectivist will favour qualitative research. The main difference between quantitative
and qualitative research is that quantitative researchers employ measurement and
qualitative researchers do not (Bryman and Bell, 2011). However, Strauss and
Corbin (1998) state that the decision of whether to use either qualitative or
quantitative methods largely depends on the nature of the research problem.
Furthermore, Weber (2004) criticises the approach of sticking too rigidly to one
paradigm for research, suggesting that the best findings come from picking the most
appropriate methods that are relevant to the research problem at hand. Alternatively,
many authors suggest combining both quantitative and qualitative data since the two
approaches are complementary, suggesting that they should not be used in isolation
of each other (Jankowicz, 2005; Malhotra and Birks, 2007). Worryingly, many
researchers fall into the trap of adopting a rigid position in favour of either qualitative
or quantitative research (Malhotra and Birks, 2007), which can put the reliability and
validity of the research study in jeopardy.
Saunders et al. (2009) argue that the most appropriate philosophical stance depends
on the research objectives and questions. Based on this assertion, the researcher’s
beliefs and the literature presented above, this study adopts a positivist
epistemological perspective that focuses on an objective ontological reality. Utilising
an interpretivist approach for research would be inappropriate in this context, given
that this perspective centres on personal opinion and feelings instead of attempting
to establish objective reality. Instead, this study seeks to utilise a scientific approach
to research in order to achieve truth and uncover reality.
However, as illustrated above, it is crucial to avoid reducing participants of the study
to mere numbers and disregarding their interaction in the research process.
Therefore, since the approach taken depends on the research problem at hand, this
study contradicts the customary use of solely quantitative research for a positivist
approach and utilises a mixture of both qualitative and quantitative measurement
where necessary. Sticking too rigidly to one approach by using either qualitative or
quantitative measurement could jeopardise the findings and potentially impact the
reliability and validity of the research, which ultimately affects the credibility of the
study. Furthermore, it is reasonable to integrate qualitative research techniques
(conventionally associated with interpretivist research) in a positivist study. This is
supported by Onwuegbuzie and Leech (2005) who advocate that researchers should
learn to appreciate both quantitative and qualitative research, regardless of the
philosophical stance adopted by the researcher. The authors term adopters of this
approach the ‘pragmatic researcher’. This type of researcher tends to deal with
problems in a sensible and realistic manner that focuses more on practical rather
than theoretical considerations. After all, it is widely accepted that research
methodologies are merely tools that are designed to aid our understanding of the
world (Onwuegbuzie and Leech, 2005).
The ‘pragmatic researcher’ appreciates that incorporating both quantitative and
qualitative techniques in the same study can strengthen the validity of a
methodology, offsetting some of the limitations and problems associated with
individual research techniques (Sechrest and Sidani, 1995). In consideration of this
study, Onwuegbuzie and Leech (2005) add that the inclusion of qualitative data can
be particularly useful for explaining and validating relationships that have been
discovered by quantitative data, since relying on one type of data (i.e. number or
words) can be extremely limiting.
4.3 Rationalising the Methodological Approach
The methods and techniques that are most suitable for research depend on the
research problem and its purpose (Jankowicz, 2005). Therefore, with the study’s
philosophical stance, objectives and research questions in mind, this section of the
methodology explains the rationale for selecting focus groups as the primary data
collection tool for the methodology.
Focus groups as a research technique have long been prominent in marketing
studies (Krueger and Casey, 2009). Focus groups are a quick, flexible and
inexpensive method of gathering data from several respondents in a short period of
time (Ghauri and Gronhaug, 2010). Traditionally, focus groups are seen as an
interpretivist research technique and useful for exploratory research when little is
known about the phenomenon at hand (Stewart et al., 2007). However, as noted in Section 4.2, their application is not solely dedicated to qualitative research. In support
of this, Ghauri and Gronhaug (2010) state that although focus groups are mostly
used for collecting qualitative data, they can also be used to produce quantitative
data.
This study defies common conventions, utilising focus groups as the main research
technique, whilst researching from a positivist perspective. Saunders et al. (2009)
provide evidence to support this approach, suggesting that it is possible to quantify
qualitative data and convert it into numerical codes so that it can be analysed
statistically. Ghauri and Gronhaug (2010) provide additional support suggesting that
a researcher may collect and code the data that they have collected in such a
manner that would allow statistical analysis.
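The coding step described above can be illustrated with a minimal sketch. The rating labels, scale and scores below are hypothetical, invented for illustration only; they are not the study's actual coding scheme.

```python
# Hypothetical sketch of converting verbal focus-group ratings into numeric
# codes so they can be analysed statistically. Labels and scale are invented.
RATING_CODES = {
    "very unimportant": 1,
    "unimportant": 2,
    "neutral": 3,
    "important": 4,
    "very important": 5,
}

def code_responses(responses):
    """Convert a list of verbal ratings into numeric codes."""
    return [RATING_CODES[r.lower()] for r in responses]

def mean(codes):
    """Arithmetic mean of the numeric codes."""
    return sum(codes) / len(codes)

# Example: one characteristic rated by a six-person focus group.
group_ratings = ["Important", "Very important", "Neutral",
                 "Important", "Very important", "Important"]
codes = code_responses(group_ratings)
print(codes)        # [4, 5, 3, 4, 5, 4]
print(mean(codes))  # 4.166666666666667
```

Once coded in this way, the responses can be aggregated across groups and compared statistically, which is the basis for the quantitative side of the analysis.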
Given the nature of the study (i.e. cross-sectional) and the various resource
constraints (i.e. time, money, accessibility) placed upon the researcher, focus groups
appeared to be the most feasible method in comparison to other data collection
methods. For instance, attempting to distribute large questionnaires to students
across different academic years could have been problematic. There are multiple
reasons for this, including the issues associated with achieving a good representation
of the population and guaranteeing a high response rate (Malhotra and Birks, 2007),
as well as the difficulty of accurately interpreting the results from the questionnaires
(Ghauri and Gronhaug, 2010).
4.4 The Research Process: Focus Groups
The purpose of the study is to investigate students’ perceptions of service quality so
that suggestions can be provided to service management at the University of
Manchester. In addition, the study is cross-sectional, focusing on determining
whether perceptions change over the course of a student’s degree (i.e. Year 1
through to Year 3). Each of the focus groups was carried out over a two-week period
at the beginning of semester two (February 2012) using students enrolled at the
University of Manchester. Apart from one of the focus groups, the remaining five
lasted between one and one and a half hours, falling in line with
guidelines set out by Ghauri and Gronhaug (2010) who believe that focus groups
should last between 30 minutes and 2 hours to achieve the most promising results.
4.4.1 Sample
In total, 36 BSc Management students from Manchester Business School
participated in the study: 16 females and 20 males, aged 18-24 (Appendix E). This translates to six semi-structured focus groups of six participants each. In order to increase reliability and validity, two
focus groups were conducted for each year of study (i.e. Year 1, Year 2 and Year 3).
These characteristics fall in line with Malhotra and Birks (2007) who recommend that
focus groups should have between six and ten participants. Importantly, Morgan
(1998) asserts that groups of fewer than six are unlikely to generate the momentum
for a successful session, while groups of more than ten may be too crowded and may
not be conducive to a cohesive and natural discussion. More recently, Ghauri and
Gronhaug (2010) claim that a focus group that is too small (e.g. less than 5
participants) or too large (e.g. more than 10 participants) can make the focus group
ineffective as the participation of individuals can become too fragmented.
The participants for each focus group were selected using convenience sampling.
Convenience sampling is a non-probability sampling technique that is used to obtain
a sample of convenient elements at the researcher’s own discretion (Saunders et al.,
2009). In addition, convenience sampling is the least expensive and least time
consuming of all sampling techniques (Malhotra and Birks, 2007). Therefore, in an
attempt to maximise homogeneity between participants, a requirement of the sample
was that all participants were enrolled on the BSc Management degree programme
(including associated specialisms) and part of Manchester Business School. In
support of this, Hair et al. (2008) recommend that focus groups should be as
homogenous as possible. Furthermore, Krueger and Casey (2009) believe that it is
important that some kind of homogeneity exists between the participants, but with
enough difference to allow for variation of opinions and debate. Kitzinger (1994) claims that being with others who share similar experiences encourages those
participating to express, clarify, or even to develop particular perspectives. In
addition, commonality among group members is useful for avoiding conflict as well as
acting as a mechanism that encourages more in-depth and open discussion (Ghauri
and Gronhaug, 2010).
The researcher acknowledges that convenience sampling may not be representative
of a definable population (Ghauri and Gronhaug, 2010). Therefore, the researcher
appreciates the boundaries of the study and the potential bias attached to the results
in the conclusion of the study.
4.4.2 Procedure
Before the actual six focus groups were carried out, a pilot focus group was
conducted with a convenience sample of six final year business students to ensure
consistency in the format, design, layout and structure of the focus group.
Importantly, this was also used as an opportunity to confirm the service quality
characteristics used in the focus group (Appendix F), which were taken from the
literature (Appendix A and Appendix B) and adapted to this study. Participants also
had the opportunity to suggest additional characteristics if they felt that they had not
been brought up. Regardless, as Malhotra and Birks (2007) point out, the first focus
group should be treated as an experimental group. The intention here was to
ascertain whether the procedure worked, how participants reacted, how participants
perceived the service quality characteristics and how the moderator dealt with the
focus group (Malhotra and Birks, 2007). In essence, the pilot focus group aimed to
eliminate any confusion, in an attempt to improve the reliability and validity of each
future focus group.
All but one of the focus groups were carried out in a quiet study room in the John Rylands University Library. According to DiCicco-Bloom and Crabtree
(2006), it is important to ensure that the environment is familiar and comfortable each
time for all participants. Furthermore, Malhotra and Birks (2007) claim that a relaxed,
informal atmosphere helps group members to forget that they are being questioned
and observed. Finally, each focus group ended by summarising the main points that
had been covered and asking participants if this seemed an accurate summary.
For each of the main focus groups, each participant was welcomed, given an
overview, introduced to key terms (e.g. service quality) and informed of the ground
rules (Krueger and Casey, 2009). As group sessions are often unpredictable in terms
of the flow of conversation (Silverman, 2006), a topic agenda was utilised to ensure
that all of the necessary topics were covered. More specifically, the first part of each
focus group required participants to rate different service quality characteristics
based on how important they perceived them to be and how well they performed
(Appendix I). The second part of each focus group engaged participants in a lengthy
discussion to determine why they had rated characteristics the way they did, as well
as asking students to provide suggestions for improving the level of service quality at
the university.
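The rating exercise described above supplies the inputs for Importance-Performance Analysis. The sketch below is purely illustrative of how IPA classifies characteristics into quadrants once mean ratings are available; the characteristic names, scores and crosshair choice (grand means) are invented assumptions, not the study's own data or procedure.

```python
# Illustrative sketch of Importance-Performance Analysis quadrant
# classification. All data values and names below are invented.
def ipa_quadrant(importance, performance, imp_cut, perf_cut):
    """Classify one characteristic against the quadrant crosshairs."""
    if importance >= imp_cut and performance < perf_cut:
        return "Concentrate here"       # important but under-performing
    if importance >= imp_cut:
        return "Keep up the good work"  # important and performing well
    if performance >= perf_cut:
        return "Possible overkill"      # performing well but unimportant
    return "Low priority"               # unimportant and under-performing

# Hypothetical mean scores: (importance, performance) on a 1-5 scale.
characteristics = {
    "Teaching quality": (4.8, 4.1),
    "IT facilities": (4.2, 2.9),
    "Social events": (2.5, 3.8),
}

# Use the grand means as the crosshairs (one common convention).
imp_cut = sum(i for i, _ in characteristics.values()) / len(characteristics)
perf_cut = sum(p for _, p in characteristics.values()) / len(characteristics)

for name, (imp, perf) in characteristics.items():
    print(name, "->", ipa_quadrant(imp, perf, imp_cut, perf_cut))
```

Characteristics falling into the "Concentrate here" quadrant are the natural focus for management attention, since students rate them as important but under-performing.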
The researcher was the moderator for each focus group since similar demographics
were shared with participants, which allowed the researcher to relate to each of the
participants more readily (Krueger and Casey, 2009). Moreover, it is useful for the
moderator to have a good understanding of the group so that they can maintain
useful conversation and debate when the group is going off topic (Silverman, 2006).
Ghauri and Gronhaug (2010) also believe that the moderator is useful for ensuring
that focus groups remain effective and structured, and should only intervene if the discussion starts to stray off topic. Accordingly, the moderator attempted to
remain neutral throughout each focus group and recorded the discussion using a
dictaphone. As Krueger and Casey (2009) advocate, this was important so that the
conversation could be better managed without the need for note taking.
4.4.3 Maintaining Reliability and Validity
It was important to be aware of biases that could arise throughout the focus group process and therefore crucial to maintain validity and reliability. According to
Ghauri and Gronhaug (2010), validity refers to measures that capture what they are
supposed to capture whereas reliability considers the stability of measures. One
downside to the use of focus groups for collecting data is that it is very difficult to
summarise and categorise information that has been gathered (Ghauri and
Gronhaug, 2010), creating the possibility of biased results.
In order to enhance the reliability and validity of each focus group, data was collected
using a triangulation approach. Saunders et al. (2009) assert that triangulation is the
combination or use of two or more different data collection techniques within one
study of the same phenomenon. According to Ghauri and Gronhaug (2010), when
correctness or precision is important, it is logical to collect information using different
methods and angles. With this in mind, a combination of both quantitative and
qualitative research techniques was used within each focus group to embrace a
triangulated approach to data collection. First, a basic quantitative survey (Appendix F) was used in each focus group to collect student perceptions.
However, the researcher acknowledged that independently, surveys do not reveal
any reasoning behind the responses, commonly providing management with a simple
indication and no justification. The overriding purpose of using a survey within the
focus group was to encourage discussion. Therefore, the second part of the focus
group centred on discussing participants’ ratings from the surveys and also asking
them to provide suggestions for university service management. Each discussion
was also recorded to allow easier transcription of findings in the analysis.
In addition to attempting to triangulate the research procedure, two focus groups
were conducted for each year of study, thus repeating the data collection process to
ensure consistency and the elimination of potential anomalies. Furthermore, the
moderator ensured that each focus group was semi-structured in nature to help
prevent creating bias by unintentionally guiding responses. Despite these attempts to
improve reliability, the researcher acknowledges that the data collection process was
not perfect and that it could be subject to possible bias (e.g. demographic bias). That
said, Krueger and Casey (2009) claim that it is necessary to make trade-offs when
selecting participants and that this is acceptable as long as it is taken into account in
the analysis of data.
4.5 Data Analysis Techniques
It is appropriate to discuss the techniques that were used to analyse the collected data. Firstly, mean scores, variances and rankings were calculated for each academic year group for each of the service quality characteristics
(Appendix G). To achieve this, ratings from each participant across both focus
groups in each year were combined.
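The descriptive statistics described above can be sketched in a few lines of Python (a minimal illustration using only the standard library; the characteristic names and ratings below are hypothetical, not the study's data, and whether the study used sample or population variance is not stated):

```python
from statistics import mean, variance

# Hypothetical importance ratings (1-10) pooled across both focus
# groups in one academic year, keyed by service quality characteristic.
ratings = {
    "Quality of lectures": [9, 8, 9, 10, 8, 9],
    "Quality of seminars": [7, 8, 6, 9, 8, 7],
    "Campus location and layout": [5, 6, 5, 7, 6, 5],
}

# Mean and variance for each characteristic (sample variance here;
# this is an assumption about the study's method).
stats = {c: (round(mean(r), 2), round(variance(r), 2))
         for c, r in ratings.items()}

# Rank characteristics from most to least important by mean score.
ranked = sorted(stats, key=lambda c: stats[c][0], reverse=True)

for rank, c in enumerate(ranked, start=1):
    m, v = stats[c]
    print(f"{rank}  {c}: mean={m}, variance={v}")
```

A low variance, as discussed in Section 5.2.2, indicates that individual ratings cluster closely around the mean, i.e. that participants broadly agreed on a characteristic.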
Subsequently, Importance-Performance Analysis (IPA) was used to profile the data
for each academic year group. IPA is one of the most useful forms of analysis in
marketing research, combining information about customer perceptions and
importance ratings (Zeithaml et al., 2009). In this instance, IPA was used to link
perceptions of importance with perceptions of performance for different service
quality characteristics, as perceived by students (Section 5.1). The data were then mapped onto simple-to-read matrices that management can use to establish which service quality characteristics need addressing, which need maintaining and
which need de-emphasising. For instance, a characteristic that was perceived
extremely important but performed poorly would be considered as a problematic area
that management needed to address. Although students’ perceptions were measured using a scale ranging from 1 to 10 for both the importance (1 = Low Importance, 10 = High Importance) and performance (1 = Low Performance, 10 = High Performance) of characteristics, the scale was readjusted during the analysis to 4 - 10 to generate a better representation of the findings. The mean results used to plot
the data on the IPA matrices did not fall below 4, making it possible to map the data
using a shortened scale. This is taken into consideration when interpreting the results
in the discussion.
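The quadrant logic that each matrix encodes can be sketched as follows (a hypothetical illustration rather than the study's actual implementation; the midpoint of 7, taken as the centre of the shortened 4 - 10 scale, and all scores shown are assumptions):

```python
# Classify each characteristic into a conventional IPA quadrant by
# comparing its mean importance and mean performance against a midpoint.
MIDPOINT = 7.0  # assumed centre of the shortened 4-10 scale

def ipa_quadrant(importance: float, performance: float,
                 midpoint: float = MIDPOINT) -> str:
    """Return the conventional IPA quadrant label for one characteristic."""
    if importance >= midpoint:
        # High importance: either performing well, or a problematic area.
        return ("Keep up the good work" if performance >= midpoint
                else "Concentrate here")
    # Low importance: either over-served, or safely de-emphasised.
    return ("Possible overkill" if performance >= midpoint
            else "Low priority")

# Hypothetical (importance, performance) means on the 4-10 scale.
characteristics = {
    "The reputation of the university": (9.25, 8.42),
    "Prompt and efficient feedback on work": (8.92, 5.50),
    "Campus location and layout": (5.80, 8.00),
}

for name, (imp, perf) in characteristics.items():
    print(f"{name}: {ipa_quadrant(imp, perf)}")
```

A characteristic falling in the "Concentrate here" quadrant corresponds to the problematic areas discussed in Section 5.3: perceived as extremely important by students but performing poorly.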
It is acknowledged that each IPA matrix, without any further support, does not give
management an accurate interpretation of each service quality characteristic. As a
result, the recordings from each focus group discussion were transcribed and
organised into key themes that could be used in conjunction with each IPA matrix to
provide further evidence for the discussion. This supports the methodological stance
of the researcher, incorporating both quantitative and qualitative analysis techniques
in the same study to increase the credibility of the findings.
4.6 Ethical Considerations
Saunders et al. (2009) point out that ethical concerns can occur at all stages of a
research project: when seeking access, during data collection, as data is analysed
and when findings are reported. Ethical concerns include protecting the anonymity of
participants, honouring all statements and conducting research in a way that does
not embarrass or harm the participants (Malhotra and Birks, 2007). Thomas (2004)
postulates that it can be difficult to try to avoid ethical problems in marketing
research, making it increasingly important to consider ethics throughout the research
process. Moreover, Malhotra and Birks (2007), and later Ghauri and Gronhaug
(2010), advise that the ethical consideration process should begin during the design
stage of the research, since ethical issues can have a detrimental impact on time and resources if they are only considered at the final stage of the research process.
A researcher must take all possible precautions to inform and safeguard each
respondent (Ghauri and Gronhaug, 2010). Therefore, to ensure complete ethical
consideration, the research was conducted in line with the Data Protection Act
(1998), the Economic and Social Research Council (ESRC) research ethics
framework (2010) and the ethical research guidelines provided by the University of
Manchester. In particular, the Data Protection Act (1998) was followed to help
prevent the invasion of privacy of data held about participants. The act also ensures
that personal data must be: processed fairly, obtained for a specific purpose,
accurate, kept secure, kept up-to-date and kept no longer than necessary (Saunders
et al., 2009).
Each participant that was involved in the study was informed about the study’s
purpose, procedure and structure at the beginning of each focus group. Silverman
(2006) asserts that it is crucial for the participants to be aware of the purpose of the
study and how the research will be used to avoid any element of deception.
Additionally, Loue (2000) claims that participants must be respected and provided
with sufficient privacy and confidentiality to safeguard their interests. Therefore, it
was made clear to participants that their involvement was voluntary and that they
retained the right to withdraw from the study at their own discretion. Finally, all participants were assured that the data would remain completely anonymous and that any evidence would be destroyed on completion of the study.
4.7 Chapter Summary
As with any methodology, it is common for issues to arise throughout the data
collection process. Although the methodology proved to be a very challenging part of
the research study due to its unpredictable nature, only minor technological issues
were encountered.
This chapter has outlined the research plan and methodology used to address the
research questions that were proposed in Chapter 3. Firstly, the methodological
stance of the researcher was outlined, which influenced the rationale of using focus
groups as the primary data collection method. Following this, the data collection
process was outlined, describing and explaining the sample, procedure, problems
encountered and mechanisms used to maintain validity and reliability throughout the
process. Subsequently, the methods used to analyse the data were considered and
issues associated with the analysis were highlighted and justified. Finally, important
ethical issues relating to the study were considered, whilst listing the techniques and
procedures used to ensure the study remained within suitable ethical boundaries.
5. Discussion
5.1 Introduction
The purpose of this chapter is to analyse and discuss findings simultaneously,
allowing coherent presentation and interpretation of the results. It is logical to adopt
such an approach, as the discussion of results that follows the analysis is key to
understanding the findings at each stage. This allows each of the research questions
proposed in Chapter 3 to be answered more succinctly than if the chapters were
separated. In doing so, the discussion attempts to compare and contrast findings with
the relevant literature that was reviewed in Chapter 2, as well as identifying aspects
that are absent from the current scholarly literature.
The discussion is divided into four sections, namely, the Importance and
Performance of Service Quality Characteristics (Section 5.2), Importance-
Performance Analysis and Problematic Areas (Section 5.3), Differences between
Students’ Perceptions of Service Quality (Section 5.4), and Suggestions for
University Service Management (Section 5.5). The first three sections place
emphasis on tackling research questions one and two, while the final section focuses
on research question three, bringing in elements from the first three sections to
support each suggestion proposed to university service management.
5.2 The Importance and Performance of Service Quality Characteristics
5.2.1 Overview
This section focuses on research question one, determining what students perceived
to be the most important and best performing service quality characteristics, in order
to understand the current provision of service quality at the university. Baron et al.
(2009) provide support for the need to tackle this question, arguing that a good
starting point for service managers is to determine the level of quality that the
organisation should provide for different aspects of the service.
In general, the findings indicate that both importance and performance ratings for
different characteristics of service quality vary amongst students. In many instances,
the findings indicate that variances exist between students’ perceptions of the same
characteristic (Appendix G). For example, one student could consider the range of
teaching methods used by lecturers to be extremely important, whereas another
student could consider the same characteristic to be relatively unimportant. This
supports the work of Zeithaml et al. (2009), who suggest that customers have many service requirements, that characteristics are not of equal importance, and that one customer may consider a characteristic to be relatively unimportant while another regards the same factor as crucial. In terms of higher
education, this presents various implications for university service management
(Abdullah, 2006a), making it important for institutions to concentrate on the
characteristics perceived to be important rather than focusing on characteristics in an
ad-hoc manner. After all, knowing the relative importance of different characteristics
could enable the university’s limited resources to be allocated more efficiently,
stimulating the possibility of better service provision for students.
5.2.2 The Importance of Service Quality Characteristics
Despite the ‘elusive’ and ‘indistinct’ nature of service quality (Bolton and Drew, 1991;
Carman, 1990; Cronin and Taylor, 1992; Parasuraman et al., 1988), demonstrated by
discrepancies between students’ perceptions of different characteristics, the findings
indicate that there is a ‘core’ set of characteristics that are important to each
student’s university experience. Table 5.1 provides evidence to support this,
illustrating students’ perceptions of the eight most important service quality
characteristics. The data were extracted using the mean scores from each year of study to rank each characteristic by its importance. Moreover, the
corresponding variance was calculated for each characteristic to show how much
deviation existed between students’ responses for each characteristic.
Although the positioning of each characteristic varies, there are six characteristics
that are common to each academic year group. The remaining characteristics signify
specific needs for students in each year group, which are explored further in Section
5.4. Nevertheless, when questioned about the six characteristics, participants termed
these as ‘essential’. More specifically, one participant claimed that these
characteristics were:
“…fundamental to the university experience and extremely
important in achieving a good degree at university, which is the
primary objective for most students.” (Participant A – Year 2,
Group 1)
To add further credibility to the notion of a ‘core’ set of characteristics, the variance for each of these characteristics is relatively low, ranging between 0.33 and 1.11. To put this into perspective, the highest variance recorded was 9.24, for year 3 students’ perceptions of the performance of seminars (Appendix G). Such low variances suggest that students’ perceptions did not fluctuate significantly from the mean, demonstrating that participants were in agreement with regard to the rating of each of these characteristics, albeit some characteristics more than others.
Year 1
Rank  Characteristic  Mean  Variance
1  Knowledge and experience of academic staff  9.33  0.42
2  Quality of lectures  8.92  0.63
3  Relevance of course material  8.83  0.33
4  Internal student feedback systems  8.75  0.57
5  The reputation of the university  8.75  0.75
6  Social opportunities  8.75  0.75
7  Quality of seminars  8.58  0.99
8  Ability to understand student needs  8.42  0.81

Year 2
Rank  Characteristic  Mean  Variance
1  Relevance of course material  9.17  0.52
2  Internal student feedback systems  9.08  0.63
3  Prompt and efficient feedback on work  8.92  0.81
4  Knowledge and experience of academic staff  8.67  0.97
5  The reputation of the university  8.42  0.99
6  Quality of seminars  8.25  1.11
7  Quality of lectures  7.83  0.70
8  Ability to deal with queries promptly and efficiently  7.83  0.52

Year 3
Rank  Characteristic  Mean  Variance
1  Quality of seminars  9.33  0.61
2  Internal student feedback systems  9.25  0.57
3  The reputation of the university  9.25  0.93
4  Careers service  9.25  0.75
5  Quality of lectures  9.25  0.75
6  Knowledge and experience of academic staff  8.92  0.81
7  Relevance of course material  8.83  0.70
8  Quality of academic facilities and learning resources  8.67  0.61

Table 5.1: Extract of Most Important Characteristics Data in Each Academic Year Group (see Appendix G for full data tables)
Cuthbert (1996a) argues that perceptions of service quality change over time. The
findings contradict this assertion, indicating that perceptions for certain
characteristics (i.e. ‘core’ characteristics) do not fluctuate greatly, remaining fairly
consistent as time progresses (i.e. year 1 through to year 3). Notwithstanding this
issue, the findings also show similarities with the results of a study conducted by
Oldfield and Baron (2000), who also investigated students’ perceptions of service
quality. As the findings demonstrate in this study, Oldfield and Baron (2000) also
established that students place different importance on service quality characteristics
and that perceived service quality could be grouped into three dimensions, namely,
requisite, acceptable and functional elements. In particular, Oldfield and Baron
(2000) classified requisite elements as those characteristics that were essential to
enable students to fulfil their study obligations (e.g. ‘knowledge of academic staff’,
‘queries are dealt with efficiently and promptly’, ‘academic staff deal with me in a
caring fashion’). Similarly, the findings of this study suggest that there appears to be
a set of ‘core’ characteristics that enable a student to fulfil their study obligations.
However, Oldfield and Baron (2000) only compared perceptions of first and final year
students, whereas this study investigated perceptions across each year of an
undergraduate degree.
5.2.3 Academic Characteristics
Four of the six essential characteristics identified in Table 5.1 (i.e. ‘knowledge and
experience of academic staff’, ‘quality of lectures’, ‘relevance of course material’ and
‘quality of seminars’) show strong resemblance to the academic aspect of a student’s
university experience. Similarly, Abdullah (2006a) identified ‘academic aspects’ as
one of six key dimensions when developing the HEdPERF scale, providing
evidence to support their relevance to a student’s university experience. Likewise,
the requisite elements identified in Oldfield and Baron’s (2000) study show that the
majority of these elements were academic related. In this study, three of the
characteristics fell into the ‘teaching section’ of the survey, while the remaining
characteristic fell into the ‘academic staff section’ of the survey (Appendix F). In
addition, numerous participants commented on the possibility that some of these
characteristics were linked, for example, one participant pointed out how the ‘range
of teaching methods used’ could affect the ‘quality of a lecture’. Another participant
believed that these characteristics were part of the university’s primary offering,
explaining that they best reflect what a student is paying for to study at university.
It was evident that participants were torn when deciding whether the ‘quality of
lectures’ or the ‘quality of seminars’ was more important to them. The general
consensus from the discussion was that the ‘quality of lectures’ would act as a bigger
determinant of the student’s final grade than the ‘quality of seminars’.
Regardless, several participants believed that these complemented each other,
pointing out that a good quality seminar is crucial for consolidating what had been
learnt in class. Despite this, a large number of participants also claimed that given
their experience of seminars and the lack of consistency in quality, good quality
lectures were more important to them.
Participants also perceived the ‘knowledge and experience of academic staff’ as
important across each academic year. The majority of participants believed that it
was important to be taught by leading doctors and professors who are at the forefront of
their subject fields. Participants also commented on a supposed linkage of this
characteristic with the quality of lectures and seminars, illustrating that this was
influential in determining the quality of the lecture or seminar. One participant
classified this as the most important characteristic, suggesting that if the lecturer is
knowledgeable and the information they are providing is up-to-date, useful and from
trustworthy sources, then a student is more likely to galvanise an interest in the
subject, which in turn, could have a positive influence on the performance of that
particular student. Finally, participants believed that the provision of relevant course
material is crucial, especially when applying for jobs in the future, due to the need to
apply what had been learnt in class to practical situations in an employment position.
5.2.4 Internal Student Feedback Systems
Participants believed that students should be seen as the university’s primary
customer. Gruber et al. (2010) provide support for this, claiming that students should
be seen as the primary target audience, stressing the need for academic
administrators to understand their requirements. One participant explained that:
“The university needs to focus on operating more as if it was a
business rather than a university. One of the most fundamental
tasks for any business is to understand customer needs and this
university is no exception.” (Participant G - Year 3, Group 2)
This supports the view of DeShields et al. (2005) who claim that it is crucial for higher
education management to apply market orientated principles and strategies that are
used in profit-making institutions. In conjunction with this, students should be at the
forefront of service quality design and involved in improving the service quality by
being able to provide feedback to management. In order to achieve this, participants
believed that it is important that a variety of feedback mechanisms are available. One
participant acknowledged the importance of ‘internal student feedback systems’
stating:
“…without any chance for students to give feedback, the university
would find it difficult to improve the provision of service quality,
which could result in the competition overtaking and potentially
jeopardising the university’s reputation.” (Participant C - Year 3,
Group 1)
This statement illustrates the importance of student feedback and its role in gaining a
competitive advantage in the higher education marketplace. Hill (1995) provides
support for this, claiming that students play a key role in the production and delivery
process of the service. Finally, many third year participants commented on the issue
that the NSS is the only mechanism that they could use to provide feedback on their
entire university experience. These participants also recognised that the NSS was
externally focused and did not collect any meaningful information that university
service management could use to improve service quality.
5.2.5 The University’s Reputation
For the majority of participants, ‘the reputation of the university’ was a considerably
important characteristic. Participants believed the university’s reputation was related
to employability and that it influenced their future job prospects. The overriding
purpose of attending university for many students is to increase the likelihood of
securing a job post-graduation. Therefore, participants were in agreement that they
wanted their university and degree classification to be recognised when applying for
jobs upon completion of their university degree. One participant claimed that:
“You could have an amazing time at a less reputable university, but
sooner or later it all comes down to employability, and this is not
going to work as well if the reputation of the university is not there.”
(Participant F - Year 3, Group 1)
Furthermore, an overwhelming 75% of participants believed that due to the relatively
intangible nature of education (Gruber et al., 2010) and the difficulty in defining and
measuring a service (Giese and Cote, 2000; Parasuraman et al., 1988), the
university’s reputation acted as a key search attribute (Zeithaml, 1981) when
deciding which university to study at. Nevertheless, Abdullah (2006a) provides
evidence to support the importance of this characteristic with the development of the
HEdPERF scale. The author’s study identified ‘reputation’ as one of six key
dimensions, which must be carefully evaluated when using the scale to gauge the
current level of service quality. Despite the evident importance of the university’s
reputation, another participant acknowledged the difficulty of not being able to
properly ascertain the true value of the university’s reputation until they have started
applying for jobs.
5.2.6 The Performance of Service Quality Characteristics
Understanding the performance of different service quality characteristics is critical to
enable university service management to understand how to improve service quality.
DeShields et al. (2005) provide support for this, claiming that institutions need to
continue to deliver a high quality service and satisfy students in order to succeed in a
competitive service environment. In particular, one participant explained that:
“Being in touch with the service and understanding how various
aspects perform is crucial for determining what improvements are
needed, in order to enhance the overall level of service quality and
to satisfy various stakeholders.” (Participant G - Year 3, Group 2)
Table 5.2 illustrates students’ perceptions of the eight best performing service quality
characteristics. By combining the performance results for each year group and
ranking them in accordance with their mean, it is evident that there are numerous
characteristics that perform consistently across different academic year groups.
Although the positioning of each characteristic varies, there are four characteristics
that are common to each academic year group (i.e. ‘the reputation of the university’,
‘knowledge and experience of academic staff’, ‘campus location and layout’, and
‘organisation and management of course’).
Year 1
Rank  Characteristic  Mean  Variance
1  The reputation of the university  8.75  0.93
2  Knowledge and experience of academic staff  8.67  0.97
3  Knowledge of administrative staff  8.58  0.99
4  Campus location and layout  8.42  1.17
5  Organisation and management of course  7.83  0.88
6  Ability to deal with queries promptly and efficiently  7.75  0.39
7  Provision of other facilities and services  7.50  2.09
8  Quality of academic facilities and learning resources  7.33  1.52

Year 2
Rank  Characteristic  Mean  Variance
1  Course flexibility  8.08  0.63
2  The reputation of the university  8.08  0.63
3  Knowledge and experience of academic staff  8.00  0.91
4  Campus location and layout  7.92  0.45
5  Physical appearance of university  7.75  0.39
6  Careers service  7.50  0.64
7  Organisation and management of course  7.42  0.99
8  Quality of academic facilities and learning resources  7.33  0.79

Year 3
Rank  Characteristic  Mean  Variance
1  Careers service  9.08  0.45
2  The reputation of the university  8.42  0.45
3  Knowledge and experience of academic staff  8.25  1.30
4  Course flexibility  8.17  0.70
5  Campus location and layout  8.00  1.09
6  Organisation and management of course  7.75  2.39
7  Quality of lectures  7.58  0.27
8  Relevance of course material  7.33  2.24

Table 5.2: Extract of Best Performing Characteristics Data in Each Academic Year Group (see Appendix G for full data tables)
5.2.7 The Reputation of the University
Participants from the focus groups believed that ‘the reputation of the university’ was
one of the best performing factors. As Table 5.2 illustrates, of the four common
characteristics across each academic year, ‘the reputation of the university’ appears
to be the highest scoring. Since all the participants involved in the study were part of
Manchester Business School (MBS), they commonly referred to their school’s
reputation rather than the reputation of the university. However, the general
consensus from each focus group was that the university was highly regarded and
respected, particularly MBS.
There is considerable evidence to support the university’s high
performance in this particular characteristic. It is evident that the reputation of the
university is rated highly by various independent organisations. According to Times
Higher Education World University Rankings (2012), the university placed 48th in the
world and 9th in Europe for 2012. In addition, QS World University Rankings (2012)
ranked the university 29th in the world for 2011/2012. Nevertheless, as numerous
participants correctly pointed out, the university will face the challenge of upholding
their reputation over the next few years, due to the rise in tuition fees (Section 1.1.1),
and its likely impact on the level of service quality sought by students.
Participants also rated the ‘knowledge and experience of academic staff’ highly,
regarding it as the second best performing of the common characteristics across
each academic year group. Interestingly, several participants brought up the
possibility that this particular characteristic is influenced by ‘the reputation of the
university’. They added that, to maintain a good reputation, the university needs to
employ people that are knowledgeable and experienced in their fields, since this is
something that will directly impact the reputation of the university.
5.2.8 Organisation and Management of Course
All participants were in agreement that the ‘organisation and management of the
course’ was another area in which the university performed well. Despite the large
number of people enrolled on the course, participants felt that the timetables, lectures and module choices were very well organised. Many participants brought up positive incidents with administrative staff, who had gone out of their way to deal with non-routine problems such as timetable clashes. In addition, numerous
participants commented on the university’s efficient and diverse use of channels to
communicate with students. For example, they felt that the SMS service that
informed students when a lecture is cancelled is a very efficient communication
method, realising that the majority of students have more immediate access to their
mobile phones. In addition to this, participants believed that that they are kept well
up-to-date with different events and opportunities by the university’s effective e-mail
system.
5.2.9 Campus Location and Layout
Participants across each academic year group felt that the ‘campus layout and
location’ was another well performing characteristic. However, it appears that many
students regard this characteristic as relatively unimportant, ranking 22nd for year 1
and 2 students, and 23rd for year 3 students, out of the 24 characteristics studied (Appendix
G). Several participants believed that the university is well networked by good local
bus services, which also integrate well with the city. Moreover, students also brought
up the provision of a free shuttle bus connecting North and South campus, and that
most of their lectures were conveniently located close to Manchester Business
School (MBS) on Oxford Road (participants were all MBS students). Accordingly,
good transportation links may make the campus location and layout less
important to students. One participant provided support for the lack of importance of
this particular characteristic, suggesting that it did not affect the provision of service
quality:
“Although the performance of the campus and layout exceeds what
I would have expected, it is not something that I perceive to be
extremely important to my university experience as this does not
really impact my ability to study at the university.” (Participant G -
Year 1, Group 2)
Interestingly, several participants pointed out that the ‘campus layout and location’
was extremely influential when originally making the decision to study at the
University of Manchester. This is perhaps a result of the lack of search properties
(Zeithaml, 1981) for many prospective students when evaluating university as a
service, resulting in them resorting to things that can be evaluated (e.g. campus
layout and location) in the absence of any tangible manifestation. In support of this,
one participant argued that:
“In first year I was more concerned with the location of different
amenities and facilities in relation to the campus. However, this was
purely a first year thing which has deteriorated in importance as the
years have gone on, despite it remaining a seemingly well
preserved aspect of the university’s service provision.” (Participant
A - Year 3, Group 1)
This demonstrates the need for the university to focus its limited resources on improving the more important service quality features that have a greater impact on students’ perceptions, rather than allocating resources to less important parts of the service. This falls in line with Zeithaml et al. (2009), who suggest that a common
mistake for managers is to try and improve the quality of service by spending
resources on the wrong initiatives, only to become discouraged because customer
perceptions of the organisation’s service do not improve.
5.3 Importance-Performance Analysis and Problematic Areas
5.3.1 Overview
The previous section focused on determining what the most important and best
performing characteristics were without considering whether a relationship existed
between the importance and performance of different characteristics. As a result,
Importance-Performance Analysis (IPA) was used as part of the data analysis,
combining the mean scores for both the importance and performance of each
characteristic and plotting them on an easy-to-read matrix for each academic year
group (Figure 5.1, Figure 5.2, and Figure 5.3). Each matrix provides university
service management with a simple visual interpretation of the current provision of
service quality, which could be beneficial for making more informed decisions in the
future.
More specifically, university service management can use each matrix to direct
attention to the service quality characteristics that need improving, as well as those
that should be maintained or de-emphasised. In this instance, each matrix maps the
relationship between students’ importance perceptions and students’ performance
perceptions for each service quality characteristic. Each IPA matrix identifies
problematic areas for management, which are those characteristics that are
perceived to be extremely important by students but perform poorly. The location of
problematic characteristics and the reasoning behind each is useful for backing up
the suggestions that are provided to management (see e.g. Section 5.5). The IPA
matrix for each academic year group is illustrated below (Appendix H for key).
Figure 5.1: Importance-Performance Matrix – Year 1
Figure 5.2: Importance-Performance Matrix – Year 2
Figure 5.3: Importance-Performance Matrix – Year 3
5.3.2 Areas to Maintain
Firstly, as mean scores for each service quality characteristic did not fall below 4,
each IPA matrix graphed data using a shortened scale (e.g. 4-10). This generated a
better representation of the results, making it easier for university service
management to locate areas to improve as well as areas to maintain. However, due
to the use of a shortened scale and the possibility of bias, the positioning of each
characteristic on each matrix is not meaningful without supporting justification.
Instead, the discussion points from each focus group must be used to support and
complement the IPA matrices, in order to provide better and more reasoned
suggestions for university service management.
5 Knowledge and Experience of Academic Staff
9 The Organisation and Management of the Course
11 The Quality of Academic and Learning Facilities
23 The University’s Reputation
Table 5.3: Areas to Maintain
As Table 5.3 illustrates, there are four characteristics that perform consistently
well (i.e. high importance and high performance) across each academic year group.
Section 5.2 provided reasoning for each of these characteristics. Nevertheless, it is
clear that management should maintain these in the short-term and turn their
attention to other characteristics that require more immediate attention. However, it is
important for management not to overlook these characteristics completely and
periodically review students’ perceptions to ensure their importance or performance
ratings do not change. Cuthbert (1996a) provides support for this, pointing out that
perceptions are varied and continuous, over months and years and are therefore
prone to change, illustrating the need to continually reassess students' perceptions.
Furthermore, the findings also indicate that ‘campus location and layout’ is a
characteristic that needs de-emphasising, plotting in the bottom right quadrant of
each IPA matrix. It is evident that this characteristic appears to perform well (Section
5.2.9) but is seen as relatively unimportant by students. In reality, it would be
impractical to attempt to de-emphasise this characteristic; therefore, this
characteristic should be something that university management should maintain.
5.3.3 Problematic Areas
The IPA matrices identified five characteristics (Table 5.4) that fell into the quadrant
representing areas to improve. Each of these characteristics was considered
extremely important by students but performed poorly. These characteristics are not
equal in terms of their importance and performance and those characteristics that are
closer to the top left corner of the quadrant indicate problematic areas that the
university should consider first (i.e. higher importance and lower performance). The
‘availability of academic staff’ is an example of a characteristic that fell into the
improve quadrant for each year's IPA matrix. However, this characteristic was
positioned in the bottom right corner of the quadrant on each matrix, suggesting that
it is not an issue of immediate priority for university management, as students
consider other characteristics to be both more important and worse performing.
2 Quality of Seminars
6 Availability of Academic Staff
8 Prompt and Efficient Feedback on Work
12 Access to Academic Facilities and Learning Resources
24 Internal Student Feedback Systems
Table 5.4: Problematic Areas
Due to the issues identified with the scale used for each IPA matrix, the discussion
now explores each of the problematic characteristics identified in Table 5.4 in more
detail. This is also useful to provide evidence to support suggestions made in Section
5.5 of this chapter.
5.3.4 The Quality of Seminars
There was a lengthy discussion concerning this particular characteristic in each of
the focus groups. On a positive note, the majority of participants believed that
seminars were extremely important, with the majority of participants emphasising
their relevance for consolidating what had been learnt in lectures. However, it is also
evident that there were conflicting views about the performance of seminars, with
many participants identifying the lack of consistency in regards to their quality. A
selection of participants believed that some of their seminar leaders fail to
consolidate or improve knowledge and that more often than not their seminar leaders
were of a poor standard. For several participants, the impact of just one bad
experience in a seminar significantly jeopardised their overall evaluation of the quality
of seminars. Accordingly, there were extremely diverse evaluations for this
characteristic since many participants found it difficult to evaluate. This is supported
by Gruber et al. (2010) who argue that higher education is predominantly intangible,
perishable and heterogeneous, resulting in aspects of the service experience varying
from one situation to the next and making them difficult to evaluate.
Many participants believed that most seminar leaders were knowledgeable but found
it difficult to convey their ideas and engage the seminar class in discussion. When
probed further, a selection of participants felt that some seminar leaders lacked the
necessary skills to evoke passion and stimulate participation of all group members.
Worryingly, one participant claimed that in some instances their motive for attending
one particular seminar was to simply register their attendance and ‘get a tick’,
claiming that they did not gain any value from the seminar. Moreover, the majority of
participants claimed that there appeared to be no evidence that seminar leaders had
been subject to any training. Several participants believed that this had a detrimental
affect upon the quality of the seminar and could be a plausible reason for the evident
lack of consistency.
In contrast, a handful of participants gave examples of positive seminar experiences
where their lecturer had taught their seminars. These participants believed that more
often than not this resulted in better coordination between the lecture and seminar.
Despite the issue associated with the competence of some seminar leaders,
participants appreciated the difficulty in achieving quality when seminars are heavily
based upon co-creation of value between the producer (seminar leader) and the
consumer (student). This is supported by Hill (1995) who recognises the complexity
associated with achieving quality, especially when the service does not just depend
on the service provider but also on the performance of the consumer. Worryingly, a
significant number of participants brought up the problem of unequal participation in
seminars and the issue of some students ‘freeriding’. Several participants pointed out
that it was commonplace that some people had not completed the required work,
which jeopardised the quality of the seminar. This is a problem that university
management needs to consider, given that the co-production of services is of greater
concern to organisations when customers are more involved in the production
process (Palmer, 2011).
5.3.5 Feedback on Work and Availability of Academic Staff
The participants identified that feedback given to students on their work is another
important but relatively low performing characteristic. The general consensus
amongst participants was that feedback was often delayed and there were
inconsistencies in the time taken to mark and return a piece of coursework, assignment
or examination. Several participants recalled experiences of poor promptness where
receiving the feedback had surpassed the promised window. One participant
provided an example of one member of staff that marked and returned their
coursework within a week of submitting it, whereas another member of staff greatly
surpassed their deadline, providing feedback four weeks late.
Aside from the issue of delays, numerous participants pointed out that feedback
tends to be generic, providing no real guidance for improvement. Many participants
believed that staff did not provide enough comments or useful comments that could
be used to improve their work in the future. According to several participants, there
seems to be an unwillingness amongst staff to provide extensive feedback to students
on an individual basis for assessed work. Worryingly, one participant felt that this was
the worst performing characteristic of service quality because of the nature of the
feedback received. When asked to elaborate, the participant pointed out that:
“… it is difficult to gain anything from the feedback received on work
at university. It is commonplace to receive a feedback sheet full of
ticks and one comment.” (Participant I - Year 3, Group 2)
Despite the issues associated with feedback, many participants were aware that
class sizes were large and finding a way for a lecturer to provide individualised
attention to every student is extremely problematic. One participant believed that this
problem linked to the ‘availability of academic staff’, which is another characteristic
that needs to be considered by university service management. In relation to this
issue, a significant number of participants claimed that the contact time with
academic staff was extremely limited in comparison to the number of people enrolled
on the module. They went on to explain that lecturers only offer a small and inflexible
selection of office hours, perhaps one to two hours per week, which could be
expected to cover up to 150 students on the larger and more popular modules. The
importance of receiving efficient feedback and individualised attention from academic
staff should not be underestimated. In support of this, Oldfield and Baron (2000) found that
the ‘provision of individualised attention’ fell into requisite elements, which were
important for allowing a student to fulfil their study obligations. Furthermore, some
participants believed that the potential marginal benefit of good feedback and access
to academic staff is relatively high and has a direct positive impact on the quality of
their next assignment.
5.3.6 Internal Student Feedback Systems
Of the characteristics that have been located in the improve quadrant, participants
perceived this to be the most important and worst performing characteristic across all
academic year groups. Participants in the focus groups believed that the university
does not take a customer-centric approach. They felt that the student should be seen
as the primary customer since they are the consumers of the service and without
them the service would not be able to function. This falls in line with Hill (1995) who
believes that students are the primary customers of higher education services.
Gruber et al. (2010) provides further support for this, suggesting that students need
to be seen as the primary target audience by universities and that there is a need for
academic administrators to focus on understanding their requirements.
A significant number of participants also commented on the lack of mechanisms in
place for students to give their opinions on the university. More specifically,
participants were under the impression that no feedback system existed, or at least
they were not aware of any, through which to give feedback on the complete student
experience. The majority of participants pointed out that end of unit questionnaires
were available for completion at the end of each semester, however, these were
course specific and did not assess the entire university experience. Moreover, a
number of participants brought up the backward-looking nature of this particular
feedback mechanism. They believed that there is no incentive for students to fill in
the questionnaire properly and provide the university with reliable feedback, since
these students do not reap the benefits of change due to a potential time lag and
difficulty implementing change immediately in such a rigid organisation.
Additionally, a handful of participants thought that end of unit questionnaires were
seen more as an administrative task than a true evaluation of the service
quality. These participants doubted the integrity of end of module questionnaires,
asking the question of how the university could act upon a mere rating for a particular
characteristic. Furthermore, these participants believed that these questionnaires do
not properly engage with students and are not designed with the students in mind;
rather collecting what the university perceives to be important. This is consistent with
Gruber et al. (2010), who claim that many existing surveys used by higher education
institutions are poorly designed, lack standardisation and give no evidence
concerning reliability or validity.
All participants in the third year focus groups mentioned the NSS as a means for
assessing the entire student experience at the university. However, many
participants appreciated that this was externally moderated and did not act as a
constructive feedback system for the university. When probed further about the validity of
the NSS, many participants stated that the NSS was not a fair reflection of their
university experience and that they felt pressurised when completing it. Many
participants did not want to give a bad interpretation of the university, as they were
under the impression that a bad perception of the university would ultimately affect
their own employability opportunities. Worryingly, several participants that had
completed the NSS survey admitted to not giving a truthful interpretation of the
university and exaggerating the quality of the university, portraying the university to
be better than it actually is. Interestingly, prospective students are one of the main
users of NSS data when looking to join a university and this data contributes in
forming their expectations. If they choose to attend the university, they may
experience negative disconfirmation (e.g. dissatisfaction), resulting from a mismatch
between their initial expectations and actual perceptions (Buttle, 1995).
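The disconfirmation logic referred to above (Buttle, 1995) can be sketched in a few lines. This is a minimal illustration of the concept only; the scores and the function name are hypothetical, and no claim is made about how disconfirmation would actually be measured in a higher education setting.

```python
# Simple illustration of the disconfirmation concept (Buttle, 1995):
# dissatisfaction arises when perceived service quality falls short
# of initial expectations. All values here are hypothetical.

def disconfirmation(expectation, perception):
    """Compare a perception score with a prior expectation score."""
    gap = perception - expectation
    if gap < 0:
        return "negative disconfirmation (dissatisfaction)"
    if gap > 0:
        return "positive disconfirmation (delight)"
    return "confirmation (expectations met)"

# A prospective student whose NSS-informed expectations exceed
# their eventual experience would fall into the first case:
print(disconfirmation(expectation=8.0, perception=6.0))
```

On this reading, inflated NSS scores raise prospective students' expectations and so make negative disconfirmation more likely once they enrol.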
5.3.7 Access to Academic Facilities and Learning Resources
‘Access to academic facilities and learning resources’ is a further characteristic that is
regarded as problematic. Generally, participants of each focus group believed that
the provision of facilities and resources is above average. However, one particular
issue weakened the performance of this characteristic for many participants.
Numerous participants commented on the issue of learning resources during
examination period at the end of semester one and semester two. During this time,
demand exceeds supply and there appears to be a limited supply of learning
resources (e.g. study areas, computers, books) available to fulfil a student's needs.
One participant recalled an occasion where they were queuing in the main library for
1 hour 45 minutes to get access to a university computer, while another participant
explained that on one occasion they had to work on the floor because the library was
too overcrowded. In addition to this issue, a selection of participants commented on
concerns regarding the availability of course textbooks for certain modules. They
believed that more often than not there are not enough course textbooks relative to the
number of students on the course and this becomes an even greater issue when
most textbooks are compulsory and can be priced anywhere between £20 and £50.
5.4 Differences between Students’ Perceptions of Service Quality
5.4.1 Overview
The following section focuses on addressing research question two. It is evident from
the findings that differences also exist between certain characteristics across
different academic year groups. Moreover, students have different perceptions
regarding the importance and performance of service quality characteristics on both
an intra and inter year basis, imposing various implications for university service
management.
5.4.2 The Complex Nature of Service Quality Perceptions
The literature highlighted that service quality in higher education is a complex and
multifaceted issue (Harvey and Green, 1993). This is supported by the findings,
which illustrate that perceptions change between different academic year groups (i.e.
on an inter-year basis). For example, third year participants perceived the careers
service to be more important than first year participants did. Such findings support
the view that service quality is context specific, and varies from place to place
depending on the context being studied. Sultan and Wong (2010) provide evidence
to support this view, stating that service quality should be seen as a contextual issue
since its dimensions vary widely. Zeithaml et al. (2009) provide further support,
postulating that customers do not perceive service quality in a one-dimensional
manner but rather judge quality based on multiple characteristics relevant to the
context. To add to the complexity of this issue, and in support of the notion of service
quality being context specific, the findings also suggest that students’ perceptions of
service quality also vary within each year group (i.e. on an intra-year basis). For
instance, one third year student could regard seminars as high performing whereas
another student in the same year could perceive them to be poor. This finding falls in
line with Lovelock and Wirtz (2011), who believe that quality means different things to
different people depending on the context being examined, and that two people can
have drastically different perceptions of the same service.
As a result of the context specific nature of service quality, it can be postulated that
service managers face considerable difficulty in producing a meaningful representation
of service quality within the same organisation. This challenges past studies
conducted by various researchers that have attempted to generalise the
conceptualisation and measurement of service quality by developing generic service
quality measurement scales (Section 2.4.4) that claim adaptability and versatility to
different service industries. However, despite numerous attempts by academics, no
single model of service quality is completely accepted (Clewes, 2003). Seth et al.
(2005) provide additional support for this, suggesting that there is not a generally
accepted model of service quality nor is there any generally accepted operational
definition of how to measure service quality. Instead, studies have suggested that
service quality scales need to be adapted to the study context (Carman, 1990;
Carilliat et al., 2007), providing further evidence to support the notion that service
quality is context specific.
In reference to a higher education context, the literature identified that Abdullah
(2006a, 2006b) created the HEdPERF tool, based on six dimensions (i.e. non-
academic aspects, academic aspects, reputation, access, programme issues and
understanding), to assess service quality in a higher education context. It is evident
that even industry specific instruments assume linearity when conceptualising and
measuring service quality. In reality it is impractical to assume that generic models
capture a detailed perspective of a complex sector such as higher education.
Instead, and as this study has found, it seems more feasible to adopt a context
specific view of service quality, using measurement technique(s) based on the
situation at hand, in order to be able to deal with the multifaceted, indistinct and
elusive nature of the construct (Bolton and Drew, 1991; Carman, 1990). Despite this,
it must be acknowledged that models may provide a good starting point for a
researcher wishing to measure service quality, by directing attention to various
issues that may need considering.
5.4.3 Variations in Students’ Perceptions Over Time
The literature identified that university service management tend to regard service
quality as uniform, assuming students require the same provision and failing to
acknowledge the possibility that perceptions may alter over time as a student
progresses through their undergraduate degree. Through the use of mean service
quality perception scores, the findings demonstrate that student perceptions of
certain characteristics alter in terms of importance and performance as students make
the transition from first year to third year of their undergraduate degree. This falls in
line with Cuthbert (1996a), who points out that service quality perceptions are varied
and continuous, over months and years and are therefore subject to change.
Furthermore, Berry et al. (1985) state that the quality of service can vary within the
same organisation. Therefore, university service management must be able to track
and manage perceptions as they change over time rather than assuming all
perceptions of each characteristic remain the same. The challenge here is to not only
meet students’ needs but to react to these needs as they alter over time.
Notable Trends

Characteristic                               I/P  Year 1  Year 2  Year 3  ↑/↓
Careers Service                               I    5.92    6.08    9.25    ↑
Social Opportunities                          I    8.75    7.25    4.50    ↓
Provision of other Facilities and Services    I    8.33    6.83    5.33    ↓
Quality of Lectures                           P    6.92    6.92    7.58    ↑
Quality of Seminars                           P    4.17    6.00    6.17    ↑

I – Importance, P – Performance, ↑ – Increase, ↓ – Decrease
Table 5.5: Notable Patterns Across Different Academic Year Groups
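The trend labels in Table 5.5 amount to a comparison of each characteristic's Year 1 and Year 3 mean scores. A minimal sketch of that labelling, using the means reproduced from the table, might look as follows (the function name is illustrative):

```python
# Hypothetical sketch of the trend labelling behind Table 5.5:
# each characteristic is marked as increasing or decreasing by
# comparing its Year 1 and Year 3 mean scores.

def trend(year1_mean, year3_mean):
    """Label the direction of change from Year 1 to Year 3."""
    if year3_mean > year1_mean:
        return "increase"
    if year3_mean < year1_mean:
        return "decrease"
    return "no change"

# (characteristic, I/P, Year 1 mean, Year 3 mean) from Table 5.5
rows = [
    ("Careers Service", "I", 5.92, 9.25),
    ("Social Opportunities", "I", 8.75, 4.50),
    ("Provision of other Facilities and Services", "I", 8.33, 5.33),
    ("Quality of Lectures", "P", 6.92, 7.58),
    ("Quality of Seminars", "P", 4.17, 6.17),
]

for name, kind, y1, y3 in rows:
    print(f"{name} ({kind}): {trend(y1, y3)}")
```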
Table 5.5 illustrates that 'social opportunities' and the 'provision of other facilities
and services' decreased in importance from year one to year three, whereas the
'careers service' increased in importance over the same period.
On the other hand, the ‘quality of lectures and seminars’ appeared to increase in
performance from year one through to year three. Based on this principle of change
over time, comparable findings can be witnessed in a study conducted by
Oldfield and Baron (2000). The authors investigated student perceptions of both first
year and final year students, establishing that perceptions of service quality changed over
time. Although the authors were unable to conclude definitively, their limited
comparative study revealed that acceptable elements (e.g. ‘availability of staff’ and
‘willingness of staff to provide individual attention’) showed a gradual increase,
becoming increasingly important, the longer the students had been on the course.
5.4.4 The Careers Service
As Table 5.5 illustrates, students’ perceptions of the careers service increased in
importance through year one to year three. More specifically, the findings highlight
that third year participants found the careers service extremely important in
comparison to participants in other years. When the careers service was discussed
with third year participants in the focus group, the general consensus was that the
careers service is essential, since a career is now more important to them than it had
been previously, considering each participant was approaching the end of their
undergraduate degree. When asked to elaborate, the majority of third year
participants said that they had been using the careers service, mainly for seeking
advice and guidance for a gap year, work experience and graduate schemes. One
participant explained:
“…having the support of the careers service is crucial for improving
my job prospects, especially in consideration of the competitive job
market and the evident impact of the recession.” (Participant G -
Year 3, Group 2)
In contrast, first year participants perceived the careers service to be less important
than the other two academic year groups did. When participants were asked to
discuss the careers service, it was clear that a career was not something at the
forefront of their agendas this early on in their degrees. To support this, the majority
of participants claimed that they had only used the careers service once or twice with
two participants admitting that they had never visited the careers service at all.
Despite this, first year participants agreed that over the course of the degree they
would be more inclined to use the careers service.
Finally, results from second year participants regarding the careers service appeared
to be very mixed. An extremely large variance of 11.36 supports the view that there is
large disparity between students’ perceptions of the importance of the careers
service. Evidently, some participants found the careers service very important
whereas others rated it as unimportant. When the careers service was discussed
with participants, it was clear that this disparity had arisen from a small selection of
participants that were seeking summer internships and had found the careers service
very useful. One participant explained:
“…the careers service provided access to a range of resources
such as blogs, industry guides and practice psychometric tests that
were extremely useful for securing a place on an internship
scheme.” (Participant D - Year 2, Group 1)
Each focus group identified that specific student needs across different academic
years influenced the importance of the careers service. Importantly, if the purpose
and importance of the careers service is promoted to first year students earlier and
more thoroughly then students’ perceptions of its importance might improve towards
those of third year students.
5.4.5 Social Opportunities and the Provision of Other Facilities
Table 5.5 highlights that both ‘social opportunities’ and the ‘provision of other facilities
and services’ decreased in importance from year one through to year three. Many
first year participants believed that the provision of social opportunities was
important for establishing networks and new friendships. Of the first year focus
group, 75% of participants were involved in either a society or sports team.
Correspondingly, social opportunities were more important to these students. In
addition, several first year participants identified the possibility of a positive
relationship between the importance of ‘social opportunities’ and the ‘provision of
other facilities and services’ (e.g. cafes, social areas, student accommodation). When
asked to elaborate, participants claimed that these facilities enhanced a student’s
access to social opportunities, providing a place for people to meet and socialise.
On the other hand, third year participants believed that their networks had already
been established; this, combined with the increased importance of their academic
studies, decreased the importance of 'social opportunities' to them. Several participants
pointed out that they had already worked their way up to high positions in various
societies and sports teams that they were involved in and that they were now at the
forefront of organising and providing many of the ‘social opportunities’ for first and
second year students. Furthermore, many third year participants also commented on
the decreased importance of the ‘provision of other facilities and services’. All third
year participants in this study lived in student houses and pointed out that the
provision of facilities such as student accommodation was no longer applicable to
them. Four of the twelve participants stated that they would be completing a
postgraduate degree and that student accommodation may become important to
them. However, this is beyond the scope of this study, since the purpose here is to
understand only undergraduate student perceptions.
Finally, the discussion with second year participants revealed that the general
consensus was that they were indifferent about ‘social opportunities’. Numerous
participants pointed out that second year was worth 25% of their final degree,
providing evidence to support the gradual decrease in importance from first to
second year. Moreover, several participants pointed out that they were using second
year as an opportunity to strike the appropriate balance between ‘social
opportunities’ and their studies.
5.4.6 Quality of Lectures and Seminars
Although consistent teaching should be provided across each year of study, the
findings show that the ‘quality of lectures and seminars’ were two performance
characteristics that increased in performance from year one through to year three.
Only third year participants could properly relate to the transition of quality in terms of
lectures and seminars from year one to year three. Accordingly, several participants
from the third year discussion pointed out that they had noticed a gradual
improvement in the ‘quality of seminars’. In particular, one participant provided
evidence for this, explaining that lecturers seemed to be much more willing to meet
students’ needs. They added that during semester one of year three, one of their
lecturers took all the seminars for a class size of approximately 170 students, which
did not occur in their first or second year at the university.
As with the ‘quality of seminars’, participants of the third year focus groups believed
that their lectures in third year were of better quality than they were in second or first
year. When asked to provide a reason for this, participants thought that management
might have more consideration for third year due to its increased importance, as well
as the impact that positive degree results will have on the university’s reputation.
The findings also show that the performance of the ‘quality of lectures’ for both year
one and two is the same. This provides evidence to suggest that the university may
place more emphasis on enhancing the ‘quality of lectures’ for third year students. In
support of this, one participant in third year commented on a positive approach taken
by one of their lecturers to improve the quality of their lectures. They stated:
“The lecturer used a range of methods to stimulate learning and
improve quality such as podcasts, case studies, news links and
other initiatives, including providing no lecture notes until after the
lecture had finished.” (Participant B - Year 3, Group 1)
Finally, the participant added that each of the seminar leaders was required to
attend each lecture and in some cases took some of the lectures that related to their
specialised fields. The participant believed that this improved the ‘quality of each
seminar’, as seminar leaders were more aware of what was covered in class and
able to relate to students’ needs much more readily.
5.5 Suggestions for University Service Management
5.5.1 Overview
The purpose of this section is to provide suggestions to university service
management that can be utilised to improve the level of service quality. This falls in
line with Baron et al. (2009) who point out that organisations are operating in
extremely tough environments, and service managers now realise that improving
service quality is crucial for gaining a competitive advantage. The problematic areas
uncovered in Section 5.3 are used as a basis to provide suggestions for
management. These are identified as the areas in which management will achieve the
greatest marginal benefit by focusing on improvement.
5.5.2 Tuition Fee Rise
Many English universities, including the University of Manchester, are increasing their
tuition fees to £9,000 as of September 2012. Part of the discussion in each focus
group centered on whether participants believed that the university’s current level of
service quality justified the increase in tuition. Worryingly, 28 of the 36 (i.e. 78%)
participants stated that they would not have chosen to attend the university if the
tuition fee stood at £9,000. The majority of participants could not justify the increase,
claiming that there is a clear misalignment between price and the level of service
quality offered by the university. Since higher education is a credence-based service
and has even been termed a ‘pure service’ by some authors (Oldfield and Baron,
2000), evaluation is increasingly difficult, resulting in prospective students relying on
aspects such as price to evaluate the service in the absence of any tangible
manifestation or when all other factors are equal (Palmer, 2011).
Despite this, the majority of participants believed that an increase in price should
encourage the university to improve the level of service quality. Participants pointed
out that an increase in price would increase a student’s expectations of the service
they received, which would pose problems for the university in terms of improving
service quality and meeting higher expectations. Palmer (2011) supports this,
illustrating that price influences customers’ perceptions of service quality, as well as
the service organisation’s ability to produce quality services. In terms of higher
education, since the price of the service influences a student’s expectations, it is
more likely that negative disconfirmation (i.e. dissatisfaction) will occur when actual
perceptions are lower than the student’s original expectations (Buttle, 1995),
presenting problems for university service management.
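The disconfirmation logic described above can be expressed as a simple gap between perception and expectation. The sketch below is illustrative only; the 1–7 scale and all scores are assumptions for the example, not data from the study.

```python
# Minimal sketch of the expectancy-disconfirmation idea discussed above.
# The 1-7 scale and the example scores are hypothetical, not the study's data.

def disconfirmation(expectation: float, perception: float) -> float:
    """Negative result -> dissatisfaction; positive -> satisfaction (cf. Buttle, 1995)."""
    return perception - expectation

# A tuition fee rise lifts expectations; if perceived quality stays the same,
# the gap turns negative (dissatisfaction).
before = disconfirmation(expectation=5.0, perception=5.5)  # perception exceeds expectation
after = disconfirmation(expectation=6.5, perception=5.5)   # raised expectation, same perception
print(before, after)
```

The point of the sketch is that dissatisfaction can arise without any fall in actual quality: raising the expectation term alone drives the gap negative.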
When asked about the impact of the tuition fees, participants believed that the
university would face problems in the short-term, especially in terms of those
students entering the university system in 2012. As students start to pay more money
for their higher education, their expectations are likely to be raised in terms of contact
time, resources and facilities (Key Note, 2011). Several participants believed that,
due to the likely delay in the implementation of service quality
changes, these students would experience a similar level of service quality for almost
three times the price. Numerous participants thought that the standard of service
quality would remain the same for the next two years or so and that improvements in
service quality would not be witnessed for some time. Accordingly, participants
believed that this could have a negative impact on the university’s reputation and
brand image. Furthermore, if the university does not react appropriately then this
could also damage their competitive position in the future. Palmer (2011) supports
this, suggesting that maintaining high price and low quality positions is not a
sustainable strategy for an organisation to follow in the long term.
5.5.3 Customer-Centric Approach
The discussion has already identified evidence to suggest that the university fails to
fully acknowledge the student as its primary customer (Section 5.3.6). Participants in
the focus groups believed that service quality is designed from the perspective of the
organisation rather than the perspective of the student. In reality, it is important for an
organisation to listen to the voice of the primary customer and to understand and serve
their needs (Nadiri et al., 2009). The student can indicate exactly what improvements
are needed as they experience the service first hand. As a result, the university must
consider adopting a customer-centric approach that focuses on acknowledging
students’ viewpoints when designing the service.
Gruber et al. (2010) provide further evidence to support a customer-centric approach,
suggesting the need for university service management to focus on understanding
student requirements instead of collecting data based on what the institution thinks
students perceive as important. This would enable the university to understand how
their students perceive the services offered, from which they may be able to adapt
their services in a way that stimulates better provision of service quality for students.
However, it is important to acknowledge the complex nature associated with
understanding students’ needs. The findings in Section 5.4 revealed that students’
perceptions of service quality varied over time and changed across different
years of study. As a result, it is impossible to satisfy all students’ needs and
management must carefully determine which customer needs must be fulfilled.
5.5.4 Service Quality Improvement Programme
It is clear that the university needs to understand service quality to be able to improve
it. Baron et al. (2009) support this notion, stressing that service quality does not come
about by chance and that an organisation needs to develop strategies to ensure that
they deliver consistent and high-quality services. This is no different for a higher
education institution such as the University of Manchester.
It is appropriate for the university to undertake a service quality improvement
programme, which is continually monitored and measured correctly. Zeithaml et al.
(1990) provide support for this, outlining a number of guidelines that the programme
must follow to increase the possibility of success. Firstly, the programme must be
varied and utilise a mixture of qualitative and quantitative research techniques since
each individual research method has its own limitations. Secondly, the measurement
of service quality must be ongoing since expectations and perceptions of customers
are dynamic and constantly changing. Accordingly, the university should utilise a
continuous approach, focusing on periodical evaluations of service quality throughout
the year rather than at the end of each semester. Thirdly, the service quality
improvement programme should be undertaken with employees (e.g. academic staff
and administration staff) as the closeness of staff to customers within the services
sector makes it important that they are asked about problems and possible
improvements as well as their personal motivations and requirements. Finally, results
must be shared with employees since this may improve employees’ performance in
delivering service quality if they are made aware of the results of studies of customer
expectations and complaints (Zeithaml et al., 1990).
As part of the service quality improvement programme, Lovelock and Wirtz (2011)
recommend the need for service management to provide three types of service
performance reports to assist an improvement programme: a monthly service
performance update, a quarterly service performance review and an annual service
performance report. These reports should be short and reader-friendly, focusing on
key indicators and providing easy to understand information for management to act
on.
Taking into account the components of a service quality programme and factors
influencing its success, it is necessary to outline the problematic areas that were
identified in Section 5.3, since these are useful for constructing suggestions that
management can include in their service quality improvement programme (Table
5.6).
Table 5.6: Problematic Areas Located from IPA
No. | Characteristic
2 | Quality of Seminars
6 | Availability of Academic Staff
8 | Prompt and Efficient Feedback on Work
12 | Access to Academic Facilities and Learning Resources
24 | Internal Student Feedback Systems
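As a rough illustration of how IPA assigns characteristics to quadrants, the sketch below classifies mean importance and performance scores. The cut-off values and every score shown are hypothetical assumptions for the example, not figures from the study.

```python
# Illustrative sketch of Importance-Performance Analysis (IPA) quadrant
# classification. All scores are hypothetical values on a 1-7 scale.

# characteristic -> (mean importance, mean performance)
scores = {
    "Quality of Seminars": (6.4, 3.1),
    "Reputation": (5.8, 6.2),
    "Prompt and Efficient Feedback on Work": (6.1, 2.8),
    "Campus Location and Layout": (4.9, 5.7),
}

# Crosshairs placed at the scale midpoint (grand means are another common choice).
IMP_CUT, PERF_CUT = 4.0, 4.0

def quadrant(importance: float, performance: float) -> str:
    """Classify a characteristic into one of the four IPA quadrants."""
    if importance >= IMP_CUT and performance < PERF_CUT:
        return "Concentrate here"       # high importance, low performance: problematic
    if importance >= IMP_CUT:
        return "Keep up the good work"  # high importance, high performance
    if performance < PERF_CUT:
        return "Low priority"           # low importance, low performance
    return "Possible overkill"          # low importance, high performance

for name, (imp, perf) in scores.items():
    print(f"{name}: {quadrant(imp, perf)}")
```

The problematic areas in Table 5.6 correspond to the "Concentrate here" quadrant: characteristics students rate as highly important but poorly performing.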
5.5.5 Feedback
This section addresses two problematic areas that relate to feedback: feedback
systems (i.e. ‘internal student feedback systems’) and feedback received on work
(i.e. ‘prompt and efficient feedback on work’). It is clear that two key issues underpin
the need to improve ‘internal student feedback systems’. First of all, there is no
internal system in place that allows students to evaluate the whole student
experience. Secondly, the university seemingly fails to adopt a customer-centric
approach.
Several participants in the focus group suggested the creation of an internal
feedback system that allowed students to evaluate the whole university experience.
They explained that this could be integrated online within the student portal to
provide all students with easy access. When probed further, participants suggested
that the university could build in a range of methods such as Likert-scale based
questionnaires, Critical Incident Techniques and online forums that students could
use to post their issues that need addressing. As well as collecting feedback using
online channels, participants suggested that university management could conduct a
set of periodic focus groups (e.g. on a monthly basis) with a chosen set of students
from each year of study to better understand the experience from the students’ point
of view as they progress through their undergraduate degree. Also, the continued
use of external bodies such as NSS will encourage a more rounded interpretation of
service quality, complementing the internal measures that the university decides to
adopt.
There is no single best way to measure service quality as all methods have
limitations (Clewes, 2003); however, a triangulated approach that adopts a range of
methods could serve as a means of reducing the possibility of bias. This falls in line with
one of the recommendations made by Zeithaml et al. (1990), that a service quality
improvement programme should utilise a mixture of both qualitative and quantitative
research techniques. Moreover, Baron et al. (2009) provide additional support,
suggesting that approaches for measuring service quality are not mutually exclusive,
and that in practice, organisations use a combination of measurement
methodologies.
Several participants suggested the use of real-time continuous feedback systems,
using end of module feedback surveys as an example to illustrate this suggestion.
They believed that the current approach used to collect perceptions at the end of
modules was meaningless and did not motivate or incentivise students to provide
appropriate feedback as they did not reap the benefits of any of the improvements.
Many participants suggested that the delivery of this feedback mechanism must be
reconsidered, and that a continuous improvement approach is needed that provides
a platform for students to give feedback at any time throughout the semester.
Students should be provided with the option to log feedback, both negative and
positive, at the earliest possible opportunity. This would allow the lecturer to act upon
and improve the quality of their lectures or seminars as quickly and efficiently as
possible, rather than being made aware of, and possibly rectifying student
dissatisfaction at the end of a semester when it may be too late. This would help
better address the consequences of changing customer perceptions over time
(Section 5.4.3). This suggestion also falls in line with one of the requirements of a
service quality improvement programme that the measurement of service quality
must be ongoing and not just a snapshot of service quality at one point in time
(Zeithaml et al., 1990). In addition, not only can this suggestion be applied to end of
module feedback systems, but it could also be applied to other internal student
feedback systems that university service management decides to implement.
Feedback that students received on work was another characteristic that was
perceived to be problematic. Participants identified that it is increasingly common for
feedback not to be returned to students on time, as well as the issue of feedback being
too generic. Participants believed that feedback is extremely important to them and
suggested that academic staff need to focus on improving the quality of feedback,
providing students with more relevant feedback that they can use to improve
performance in subsequent assignments and examinations. To achieve this,
participants suggested that university service management should consider providing
more opportunities for students to receive feedback on their work. More specifically,
participants brought up the lack of feedback given on the examinations they complete. In
light of this, several participants suggested the provision of a post-exam
feedback lecture, which the lecturer could use to highlight what had been done
well, as well as to provide pointers for improving in future examinations. Moreover,
participants also brought up the idea of offering the opportunity of one-to-one
allocated appointments with lecturers to discuss assignments and coursework. One
participant claimed that one of their lecturers already did this, despite no formal
requirements being in place.
Finally, participants felt that feedback should be more prompt. However, many
participants did appreciate that delays with coursework were usually due to the sizes
of classes, sympathising with a lecturer that could have up to 150 students’
assignments to mark. Not only does this make it increasingly difficult to give
individualised feedback to each student, but it also increases the workload placed on
the lecturer. As a result, participants suggested that larger courses be reviewed and
possibly split into smaller classes, utilising more teaching staff. Although this is a very
optimistic suggestion, participants were convinced that lecturers would be able to
provide more individual attention to students, especially in terms of the feedback they
received on work.
5.5.6 Staff Development
One of the main issues raised by the majority of participants pertained to the lack of
consistency in the ‘quality of seminars’, despite being regarded as one of the most
important characteristics by participants. Although only the ‘quality of seminars’ was
located as a problematic area, participants brought up similar issues with the ‘quality
of lectures’, but agreed that lectures did not need to be improved as much as
seminars. Despite this, the majority of participants could recall at least one encounter
where they were dissatisfied with either a lecture or a seminar.
In order to tackle the issue of inconsistency with lectures and seminars, participants
suggested that a training academy should be introduced whereby all academic staff
are provided with formalised training to develop the appropriate skills to enhance the
delivery of teaching, in an attempt to meet a certain level of service quality. The
purpose of the training academy would be to improve their communication, team
working ability and presentation skills. In order to achieve this, the training academy
could be run by a mixture of external qualified personnel, as well as experienced
academic staff. The training academy could use a range of methods such as videos,
one-to-one training, group exercises and seminar classes. Furthermore, participants
suggested that the teaching academy could also encourage communication between
staff across the university and facilitate the diffusion of best practice principles (e.g.
teaching methods and techniques used). Transferring knowledge in this way could
increase transparency and allow staff to improve their understanding of the level of
quality that students expect. However, the university must consider the level of
standardisation, since introducing a formal training academy or following a ‘best
practice model’ could stifle creativity and in fact limit the ‘quality of a lecture or
seminar’. Therefore, the university faces the challenge of striking the appropriate
balance between the formality of training and the level of standardisation so that
unique teaching methods are not phased out.
In relation to seminars, numerous participants in the focus groups discussed the
notion of co-creation of value and acknowledged that unprepared students could
negatively affect the ‘quality of a seminar’. After all, service quality does not only
depend on the service provider, but also the performance of the consumer (Hill,
1995). Participants believed that group participation from all students in the seminar
positively influences the performance of seminars. Therefore, a suggestion made by
participants to overcome this issue involved incentivising seminars and rewarding
students for participating in class by allocating a proportion of their grade for that
particular module based on the completion of work and their contribution to
discussion within each seminar.
Finally, participants suggested that in every possible instance, lecturers should take
charge of each of the seminars for their module. They believed that lecturers are “more
in the know” and have a better understanding of how to integrate their own lecture
with the seminar. In cases where this is not possible, one participant used an
experience in one of their modules to provide the suggestion that seminar leaders
should take some of the lectures or at least have to attend the lectures. Many
participants believed that this would reduce the likelihood of a mismatch between the
quality of the lecture and a seminar, making it easier for seminar leaders to integrate
lecture material into seminars.
5.5.7 Further Suggestions
Participants also brought up a range of other interesting suggestions that may not be
of primary concern to the university but are factors that might be useful to consider.
As well as seminars and lectures, another issue brought up by participants pertained
to the performance and inconsistent nature of academic advisors, who are assigned
to students when they join the university. Although most participants that were
involved in the study were happy with their academic advisor, it was evident that
some participants were equally disappointed. As a result of inconsistencies,
participants suggested that as with academic staff, academic advisors should also
receive formal training and guidance. Participants believed that academic advisors
should be more proactive and take the role of a mentor, guiding students through
their university degree. Participants added that ‘progress meetings’ with academic
advisors should be made compulsory. Students need to be provided with the
opportunity to foster a good relationship with their academic advisors, in order to
understand how to progress properly through university.
Several participants also brought up the suggestion that the university needs to
provide more social opportunities. Although the majority of participants agreed that
the provision of societies was sufficient, many were astonished by how underutilised
the student union is. Participants felt that the student union is a big facility with a
large capacity but is not being marketed properly. According to numerous
participants, there appears to be considerable demand for the introduction of a
weekly social event using the union’s facilities.
Finally, a number of participants had suggestions regarding more practical
methods for assessment. Participants suggested assigning coursework projects to
students based on local firms in the Manchester area. This would create value for
local firms by helping them solve complex business problems, whilst offering
students a good opportunity to apply their skills in real-life business situations.
Accordingly, students would discover how the theory that they have learnt in class
could be applied in a business situation, whilst gaining experience to help bridge the
gap between the transition from university to employment.
5.6 Chapter Summary
This chapter has critically analysed the findings from each focus group, using
literature presented in Chapter 2. Section 5.2 focused on research question one,
identifying the most important and best performing characteristics as perceived by
students at the university. Subsequently, Section 5.3 combined the data to create
importance-performance matrices, which enabled the identification of problematic
areas, which the university needs to address. Section 5.4 focused entirely on
research question two, locating and explaining differences in students’ perceptions
from different academic year groups. Finally, Section 5.5 addressed the problematic
areas highlighted in Section 5.3 to provide suggestions that university management
could choose to adopt.
6. Conclusion
6.1 Introduction
This study has investigated perceptions of service quality at the University of
Manchester from the viewpoint of the student. It sought to uncover what students
perceived to be most important and best performing characteristics of service quality.
In addition, the study aimed to determine how these perceptions varied across
different academic years of study, whilst also identifying problematic areas that
contributed in generating suggestions for university service management to improve
the level of service quality.
This chapter begins by outlining the conclusions for each research question and
determining whether the study’s research objectives have been achieved.
Subsequently, the limitations of the research are presented. Finally, the chapter
concludes by providing potential avenues for future research.
6.2 Conclusions Pertaining to Research Questions
6.2.1 Important and Best Performing Service Quality Characteristics
This particular section focuses on research question one. With support from Zeithaml
et al. (2009), it can be concluded that students’ perceptions of both the importance
and performance of various service quality characteristics vary, with some
students perceiving certain characteristics to be more important than others. As a
result, there is a need for university service management to determine the
importance and performance of different service quality attributes and manage them
accordingly. Management must adjust the level of service quality for each
characteristic based on the importance and performance of that characteristic rather
than managing service quality in an ad-hoc manner. Knowing the relative importance
and performance of different characteristics could result in better resource allocation,
providing a greater marginal benefit in terms of service quality improvement, whilst
ensuring resources are not spent on the wrong initiatives.
Although the importance and performance of characteristics varied, extrapolation of
the data identified a ‘core’ set of 6 service quality characteristics that are perceived to
be essential to all students’ university experience. Further analysis identified that the
majority of these characteristics could be grouped as ‘academic’. With the support of
Oldfield and Baron (2000), it can be concluded that there appears to be certain
service quality characteristics that are part of the university’s ‘primary package’ and
essential in allowing a student to fulfil their study obligations at university.
In terms of the performance of different characteristics, it can be concluded that there
are four characteristics that students believed performed well: ‘reputation’, ‘knowledge
and experience of staff’, ‘campus location and layout’ and ‘organisation and management
of course’. Although further investigation is needed, it is also evident that there
appears to be relationships between some of the characteristics, demonstrating that
focusing on improving the service quality of one characteristic could have a positive
impact on another characteristic (e.g. the ‘knowledge of academic staff’ influences
the ‘quality of lectures’). This makes it important for the university to understand how
various characteristics impact each other, in order to become more tactical in
managing and improving service quality. Therefore, it is important to conduct
additional research in an attempt to better understand the relationships between
different characteristics.
6.2.2 Differences between Student Perceptions across Different Academic Year
Groups
This research question sought to determine where differences exist between
students’ perceptions of service quality across different academic years. It can be
concluded that two main themes can be extracted from the study’s findings: the
contextual nature of service quality and the effect of time on service quality
perceptions.
In terms of the context specific nature of service quality, the findings demonstrate
that students’ perceptions of service quality characteristics vary within the same
organisation (i.e. on both an intra and inter year basis). Based on these findings and
the complex nature of service quality, it can be concluded that perceptions of service
quality depend on the study context, varying depending on the situation at hand. This
presents university service management with the need to determine the most
appropriate way to accurately measure service quality. Notwithstanding this issue, it
can also be established that students’ perceptions of service quality change over
time. The study provides reasoned evidence to support this conclusion point,
demonstrating that perceptions of service quality change from year-to-year as a
student progresses through their undergraduate degree. Cuthbert (1996a) also
provided support for this, suggesting that in the context of higher education, students’
experiences are varied and continuous, over months and years. As a result,
university service management should not perceive service quality to be the same
across different academic year groups, but rather manage service quality on a year-
to-year basis.
Although these findings present various implications for management, university
service management must embrace them and adopt a continuous approach when
measuring students’ service quality perceptions. In order to take on such an
approach, management must closely monitor and track service quality perceptions,
altering the level of service quality in accordance with perceptions as they change over
time. This will ensure perceptions do not change too dramatically, which could result
in the possibility of the university losing its competitive advantage.
It is acknowledged that service quality is an ‘elusive’ and ‘indistinct’ construct where
huge ambiguity still exists (Bolton and Drew, 1991; Carman, 1990; Cronin and
Taylor, 1992; Parasuraman et al., 1988). The findings of this study contradict the
findings of previous research that suggest that service quality models and
questionnaires are appropriate for attempting to measure service quality perceptions
in higher education. Due to the context specific nature and complexity of service
quality, it is likely that a higher education institution wishing to measure perceptions
of service quality could achieve more credible results by using a range of
methodologies, triangulating the data collection process based on the situation at
hand, since there is no best way to measure service quality (Clewes, 2003).
6.2.3 Suggestions for University Service Management
The final research question sought to identify problematic areas and offer
suggestions for university service management to improve the provision of service
quality. The findings from the study identified that maintaining the current provision of
service quality could be problematic in the short term for the university, especially in
consideration of the imminent rise in tuition fees and the consequences of a
misalignment between price and quality. As a result, and with the guidance of
Zeithaml et al. (2009), it was recommended that a service quality programme must
be undertaken that monitors service quality periodically, involves employees and
utilises a mixture of quantitative and qualitative methods.
The study utilised IPA to integrate both the importance and performance data,
identifying characteristics that university service management needed to focus on
improving (i.e. high importance and low performance). Importantly, IPA was useful for
directing attention to the characteristics that need considering, providing
management with a convenient visual interpretation of the university’s service quality.
In doing so, IPA highlighted that some characteristics were more important than
others, introducing the need for management to prioritise characteristics. As a result,
suggestions were provided for those characteristics requiring more immediate
attention, including the introduction of a staff-training academy to combat
inconsistencies in seminars and lectures. However, the study highlighted that
management must consider the potential impact of introducing a training academy,
since the consequences of too much standardisation could negatively impact the
overall service. Finally, it was evident that the university needs to acknowledge the
student as their primary customer. Therefore, it can be concluded that the university
needs to adopt a customer-centric approach that involves students in service design
as much as possible.
6.3 Limitations of Study
As with any research project, this study has been subject to various limitations that
may have hindered its accuracy. Consequently, interpretation of findings should be
considered with caution since constraints including time and limited resources
accentuate the chance of methodological issues. Although the research attempted to
reduce issues through the use of a triangulation approach to research (e.g.
qualitative and quantitative methods), the boundaries of the study must be
acknowledged.
The research only considered a small sample of 36 students, 12 from each year of
study. In addition, the sample was based on a specific course (BSc Management)
within a school (i.e. Manchester Business School) of one university. Accordingly, it is
appreciated that the discussion revolves around a limited sample and it would not be
appropriate to generalise the findings of the study to all UK universities. At the same
time, it is important to not underestimate the significance of the findings. Instead, the
findings present a strong case for service quality, providing invaluable insights that
are specific to the University of Manchester, which service management could
consider when addressing service quality issues. As a result, this study acts as a
foundational basis that university service management can use as a starting point in
their quest to understand the complexities associated with service quality from the
viewpoint of students.
Although the researcher maintained best efforts to ensure that homogeneity existed
between participants, the use of a convenience sample could have introduced an
element of bias to the investigation. Due to the difficulty attracting participants,
especially first and second year students, convenience sampling techniques were
used at the discretion of the researcher to choose students rather than randomly
selecting students. Aside from this issue, the use of a shortened scale in each IPA
matrix may have represented each characteristic as more problematic than it
actually was, which may have introduced a further element of bias. Although
the researcher acknowledges that these issues could have skewed the results,
it is firmly believed that the discussion provides a good reflection of
students’ perceptions. This offers further evidence to support the findings that are
illustrated in each IPA matrix.
In hindsight, if the researcher had access to more time and resources, then a larger
sample (i.e. more focus groups) would have been used, as well as a more detailed
investigation into the relationship between different service quality characteristics.
This may have encouraged better understanding of service quality, yielding results
that are more generalisable.
6.4 Future Research Opportunities
Despite the limitations of this study, there is a range of interesting potential future
avenues for research. Although it is evident that this study has provided fresh
insights into what is a very topical issue, additional research could build on this,
enhancing the university’s understanding of service quality.
Considering tuition fees are set to rise in September 2012, there is potential to
replicate the study at a later date to assess whether students’ perceptions change
dramatically in response to the price increase. A repeat study of this kind would need
to be carried out at least a year on from the current study, since perceptions may take
some time to change. This would allow university management to monitor the change
in student perceptions, as the findings from the future study could be compared with
those from this study.
Additionally, there is potential to change the context of the study. The focus on
university education would remain, but there is an opportunity to measure the
perceptions of students from different faculties within the university to determine
whether disparity exists. At a broader level, a study could be undertaken at other
universities in the UK, and the perceptions of postgraduate students could also be
measured, since these may differ from those of undergraduate students. As Oldfield
and Baron (2000) point out, each replication would add to knowledge, and it would be
useful to see whether similar findings were uncovered in different contexts.
Finally, this study has focused only on the perceptions of students, considering them
the primary customer in a higher education context. It did not measure the
perceptions of other stakeholders in higher education (e.g. academic staff and
administrative staff). As Appleton-Knapp and Krentler (2006) point out, different
stakeholders hold different opinions, and it is natural for perceptions to vary between
stakeholder groups. Gruber et al. (2010) also suggest that every stakeholder in higher
education has their own view of service quality, shaped by their particular needs. As a
result, opportunities exist to investigate the service quality perceptions of academic
or administrative staff. Due to the unique nature of higher education as a service, the
provision of good service quality is largely dependent on employees. Therefore,
conducting similar studies with different stakeholders could yield useful insights for
university service management, as well as offering an opportunity to compare how
employees perceive service quality with the student perceptions captured in this
study.
7. Appendices
Appendix A: The SERVQUAL Instrument
Source: Parasuraman et al. (1988)
Appendix B: Variables and Dimensions for the HEdPERF Scale
Source: Abdullah (2006a)
Appendix C: Sample End of Unit Questionnaire
Appendix D: National Survey Questionnaire
Source: Ipsos MORI (2012)
Appendix E: Focus Group Details & Participant Demographics
Focus Group Details                                          Frequency
Total no. of participants                                    36
Participants per focus group                                 6
No. of focus groups conducted                                6
No. of focus groups conducted per year (Year 1, 2 & 3)       2

Participant Demographic Details                              Frequency    %
Gender Split
  Male Participants                                          20           56%
  Female Participants                                        16           44%
  Total                                                      36           100%
Age of Participants
  18                                                         4            11%
  19                                                         8            22%
  20                                                         14           39%
  21                                                         6            17%
  22+                                                        4            11%
  Total                                                      36           100%
Course Specialism
  Accounting & Finance                                       5            14%
  Decision Science                                           0            0%
  Human Resources                                            2            6%
  International Business Economics                           5            14%
  Marketing                                                  12           33%
  Operations & Technology                                    4            11%
  Innovation, Sustainability & Entrepreneurship              3            8%
  No Specialism                                              5            14%
  Total                                                      36           100%
Appendix F: Example Focus Group Survey
Appendix G: Importance-Performance Data - Year 1, Year 2 & Year 3