
Evaluating digital services: a Visitors and Residents approach

This document provides an overview of the qualitative and quantitative methods used during the Visitors and Residents project: what motivates engagement with the digital information environment. It also provides guidance on methodologies that you might wish to consider if undertaking a similar project.

    This resource was co-authored by Donna Lanclos (UNC Charlotte) and David White (University of

    Oxford), and also contains the work of Lynn S. Connaway, Erin M. Hood, and Carrie Vass. It has

    been developed as a result of a collaboration between Jisc, the University of Oxford, and OCLC, and

    in partnership with the University of North Carolina, Charlotte.

Contents

Evaluating digital services: a Visitors and Residents approach
Qualitative
    Research Ethics
    Conducting the Personal Interview
    Interviews
        Advantages of the Interviews
        Disadvantages of the Interview
    Semi-Structured Interviews
    Critical Incident Technique
        CIT in Virtual Reference Services
        Previous Uses of CIT
    Diaries
    Survey Research
Quantitative
    Applied Research
        Action Research
    Evaluative Research
    Evidence-Based Research
    Research Design
    Mixed Methods
    Data Collection Methods
    Sampling
        Types of Sampling
        Examples of Nonprobability Sampling
            Accidental Sample
            Quota Sample
            Snowball Sample
            Purposive Sample
            Self-selected Sample
            Incomplete Sample
        Examples of Probability Sampling
            Simple Random Sample (SRS)
            Systematic Sample
            Stratified Random Sample
            Cluster Sample
        Determining the Sample Size
        Sampling Error
        Nonsampling Error
    Questionnaire
    Focus Group Interviews
        Designing the Focus Group
References
Appendix A
    Moderating Focus Groups
        Issues to Consider
        What to Look For: Desirable Moderator Characteristics
        Moderator Listening Dos
        Moderator Listening Don'ts
Appendix B
    Listening Behaviors: Self-Assessment

http://www.jisc.ac.uk/whatwedo/projects/visitorsandresidents.aspx


Qualitative

The Visitors and Residents (V&R) project uses several qualitative methods: semi-structured interviews and diaries (White and Connaway 2011-2012). These data collection methods provide very rich data that help to identify how and why individuals get their information and engage with technology. In general, if your questions about a phenomenon are how and why questions, qualitative approaches are your best bet for acquiring answers. Done well, qualitative research will reveal the underlying attitudes and motivations of your users, which may well not be apparent using statistical methods.

Qualitative research is inductive and interpretive. Characteristically, it does not begin with a premise, but begins in exploration of social phenomena with a desire to discover (Davis, Gallardo, and Lachlan 2013). A qualitative perspective embraces interpretation and observation in a naturalistic environment, focusing on the way subjects describe, understand, and see the world (Connaway and Powell 2010; Davis, Gallardo, and Lachlan 2013). It is a more holistic approach to solving problems than quantitative methods (Connaway and Powell 2010). Where quantitative research often seeks to describe an objective reality, qualitative research seeks to understand lived reality by attending to the subjectivity of human experience and behaviour. Why do subjects act as they do? How is reality constructed in social interaction? (Davis, Gallardo, and Lachlan 2013). Qualitative research strives toward the goals of preserving human behavior, analyzing its qualities, and representing different worldviews and experiences (Davis, Gallardo, and Lachlan 2013, 41).

Qualitative methods are fitting when the phenomena being studied are social in nature, complex, and cannot be quantified (Connaway and Powell 2010). Therefore, it is acceptable to have small samples in qualitative research, as it is not always necessary to generalize the data to wider populations (Connaway and Powell 2010; Davis, Gallardo, and Lachlan 2013). This approach is particularly useful in exploratory research (Connaway and Powell 2010).

Qualitative researchers use various tools and techniques. Many of the techniques of qualitative researchers are drawn from sociology and anthropology (Connaway and Powell 2010). They range from traditional ones, such as observation and the interview, to less traditional ones, for example, mechanical recording and photography (Connaway and Powell 2010). Other instruments that are used by the qualitative researcher include: participation, interviews, focus group interviews, reviewing documents, gathering life histories, narrative forms of coding data, transcripts, field notes, and exploring one's own life (Davis, Gallardo, and Lachlan 2013). In qualitative research it is explicit that the researcher is a fundamental part of the investigative process (i.e., the interpretation of data and therefore the creation of knowledge). Qualitative research requires an interpretive leap, which is inherently subjective. Traditionally, quantitative methods have been used in library science. However, in recent years qualitative methods have been used more often (Connaway and Powell 2010).

Research Ethics

It is tremendously important when doing research about your patrons to pay attention to the requirements of ethical research behaviour standards. At its heart, basic research ethics is about being careful to gain informed consent, and to be considerate in your subsequent use and dissemination of the data. Every university has a human subjects Institutional Review Board, and any project should be submitted to the local IRB for approval before any research is underway. At a minimum, basic ethics training should be obtained by all of the individuals conducting the research.

In general, it is good practice to offer your participants incentives, because people are busy, and if we rely only on participants who are passionate enough about library services and resources to participate for free in our evaluations, we run the risk of ignoring large chunks of the population who may also need those same resources and services. We used incentives extensively in the Visitors and Residents (V&R) project. Interview participants were offered vouchers/gift certificates to local big box stores or online merchants for their time, each time they were interviewed. Generally we offered a $20 voucher to participants in the US, and a £15 voucher in the UK. For the survey, we will similarly be offering incentives to participants, in roughly those amounts.

    Incentives do not have to be that large, and can vary widely in amount and type. Some projects are

    successful with giving participants gift cards to local cafes or campus bookstores, or promotional

    library swag such as water bottles, flash drives, or even just food (the latter is particularly effective for

    focus groups). In all cases, incentives should not be seen as bribes towards a particular kind of

    information, but as an attempt to compensate (and thank) research participants for their time.

    Incentives should not be so large as to make people likely to say or do something that they would not

    ordinarily do.

    During the data collection process, it is important for the researcher to be sensitive about ethical

    issues, especially when dealing with human subjects (Connaway and Powell 2010; Creswell 2007).

The ethics of social research "is not about etiquette; nor is it about considering the poor hapless subject at the expense of science or society" (Sieber 1992, 5). Rather, it is about learning how to make social research work for all (Sieber 1992).

    The participants must be respected and not stereotyped (Creswell 2007). The participants need to be

    adequately compensated for their time and effort. They should be treated with respect. The

    researcher should know how to address potential legal issues, and how to keep the participants from

    risk (Creswell 2007).The researcher also should be sensitive to vulnerable populations, power

    imbalances, and possible risks to participants (Creswell 2007; Hatch 2002). No identifying information

    should be disclosed and the participants should also be informed that the collected data and study

    findings may be used in presentations and publications.

When collecting data it is necessary to be aware of the specific institution's Institutional Review Board (IRB) protocol (IRBs are also known as Human Subjects Committees, Human Investigation Committees, and Human Subjects Review Boards), as each institution may not have the same protocol. The U.S. government requires all universities and organizations that conduct research with human subjects, and that receive federal funding for that research, to have an IRB (Connaway and Powell 2010).

    Conducting the Personal Interview

All encounters with the research participants should be conducted in a welcoming, nonthreatening environment. The researcher should give a short, casual overview of the study (Connaway and Powell 2010). The importance of the individual's participation should be stressed, and anonymity or confidentiality should be assured (Connaway and Powell 2010). If the interview is going to be recorded, the participant should consent to the recording before the interview begins (Connaway and Powell 2010). The participant should also be asked to consent to the use of their information in the research report (Connaway and Powell 2010). Additionally, the interviewer should answer all appropriate questions about the study and produce proper credentials when necessary. Examples of questions the researcher should be prepared to answer include: "How did you happen to pick me?" "Who gave you my name?" "Why don't you go next door?" (Connaway and Powell 2010).

    Interviews

    Interviews can be among the most time-consuming ways of gathering data, and the amount of data

    they generate can be at times overwhelming. On the other hand, interview data can provide rich

    fodder for insights into human behavior, when done right. They are excellent instruments for revealing

    what you do not already know, but only if the questions are open, and not leading.

    An interview consists of a researcher asking a study participant questions while recording the

    responses (Davis, Gallardo, and Lachlan 2013). The researcher becomes the interviewer and the

    study participant becomes the interviewee. By nature, the interview is a communication practice

Advantages of the Interviews

Interviews generally achieve a higher response rate than questionnaires, and a sample resulting from a relatively high response rate is likely to be more representative of the population than would be a sample representing a relatively low response rate

    (Connaway and Powell 2010, 172). The personal contact of the interview helps to encourage persons

    to fully respond to the questions. Therefore, it is possible to employ interview schedules of greater

    length than comparable questionnaires, without jeopardizing a satisfactory response rate (Connaway

    and Powell 2010). The personal interaction also ensures that misunderstandings about questions are

more easily and readily cleared up. It is believed that the interviewee is better at revealing information that is complex or emotional when the interview is face-to-face (Connaway and Powell 2010).

    Disadvantages of the Interview

There are two potential problems with interviews: bias and mediation. Bias presents a real threat to the validity of interviews, particularly the bias introduced by the researcher (Connaway and Powell 2010, 171). Potential bias can be avoided when the researcher dresses inconspicuously and appropriately for the environment, when the interview is held in a private and informal setting, and when the researcher is careful to ensure that the responses of the interviewee are their own, not reflecting biases of the interviewer (Connaway and Powell 2010). Mediated communication, such as communication over the internet, reduces some of these biases by eliminating face-to-face contact. However, not all participants have access to, or can use, the internet, so it can produce an unrepresentative sample (Connaway and Powell 2010). Additionally, rapport and interpersonal relationships are often more difficult to develop through computer-mediated communication (Connaway and Powell 2010). On the other hand, internet or telecommunication interviews are more economical than face-to-face interviews.

    Semi-Structured Interviews

    The semi-structured interview is a main methodological strategy to be used within qualitative methods

    (Morse 2012). Semi-structured interviews are used when the interviewer knows what questions to

    ask, but does not know what answers to expect (Morse 2012, 88). The interviewer may have some

existing knowledge about the topic. Semi-structured interviews are interviews with guided, open-ended questions, asked in the same order in each interview (Morse 2012). Unlike structured interviews, responses to these questions may be probed, so that the interviewee has the liberty to respond as he or she desires (Morse 2012). Oftentimes, the semi-structured interview method is not the only method used for data collection. If it is the only method used, then either the sample obtained from the interview is sufficient, other methods are unsuited to gathering information from the population, or the population is delineated (Morse 2012).

    Critical Incident Technique

    This is a relatively efficient way of discovering what is important to the participant from their

    perspective. It is a way to encourage participants to recall a specific moment in time, to re-engage

    them with their thoughts and actions in the past.

    The Critical Incident Technique (CIT) is a qualitative technique that is used in business and marketing

    (Radford, Connaway, Radford, and Lingel 2013). It focuses on a memorable event or experience, and

    allows categories or themes to emerge from the recounting of that experience rather than to be

    imposed (Radford, Connaway, Radford, and Lingel 2013). The CIT consists of a set of procedures

    for collecting direct observations of human behavior in such a way as to facilitate their potential

    usefulness in solving practical problems and developing broad psychological principles (Flanagan

    1954, 327). It details a procedure for collecting information about events or incidents that have special

significance and meet defined conditions (Flanagan 1954).

    The CIT is used for gathering important facts about behaviour in certain, defined situations (Flanagan

    1954). The technique does not have a single set of rules to be followed for data collection, but rather


    is a flexible technique with principles that can be adapted and modified to meet the needs of the

    situation at hand (Flanagan 1954).

The technique was an outgrowth of studies conducted by the Aviation Psychology Program of the US Army Air Forces in WWII to analyze why trainees failed to learn to fly; it was subsequently incorporated in research for selecting pilots and for determining the reasons bombing missions failed. The individuals on the flight were asked to describe the officer's actions: What did he do? Ultimately, CIT helped to establish the critical requirements for combat leadership (Radford, Connaway, Radford, and Lingel 2013).

    CIT in Virtual Reference Services

The virtual reference services project, Seeking Synchronicity: Evaluating Virtual Reference Services from User, Non-User & Librarian Perspectives, sought to understand the study habits and needs of virtual reference service users and potential users in order to identify characteristics for informing library system and service development (Connaway and Radford 2011). The CIT questions asked the users to think about an experience where they felt they achieved (or did not achieve) a positive result after using a library reference service in any format, and asked the nonusers similar questions. The participants were to describe each interaction and identify what made these interactions positive or negative. One of the VRS users described how, during a library reference service chat, the librarian "threw in a cordial sign-off" and "encouraged me to pursue the reading," exclaiming that it was "like talking to a friendly librarian in person" (Connaway and Radford 2011; Radford, Connaway, and Shah 2011-2013).

    Previous Uses of CIT

Several studies have since utilized the CIT. For example, CLASP (the Connecting Libraries and Schools Project) used it to evaluate attitudes toward public libraries (Tice 2001). A sample of fifth and seventh grade students were asked to describe a good experience with the public library and then to describe a bad experience with a public library (Tice 2001). The students were then asked to relate whether or not CLASP had made a difference. Overall, CLASP had a more positive impact on fifth graders (39 percent) than it did on the seventh graders (25 percent) (Tice 2001). CIT also has been used to assess staff development needs and library decision-making, and as a tool for librarians entering management positions (Fisher and Oulton 1999). The technique was used to study the information seeking of LGBT youth (Hamer 2003). The interview schedule was created using CIT, a method that complemented the study's social constructivist perspective (Hamer 2003). Examples of the questions asked were: "What questions did you initially have about coming out? How did your questions change as you continued to think about coming out? How did you go about trying to find answers to these questions?" (Hamer 2003, 73). Lastly, CIT has also been used to investigate the information needs and information-seeking behaviours of university staff (Wilkins and Leckie 1997). In terms of collecting information, CIT can be used in surveys, interviews, questionnaires, and focus groups.

CIT also was used in the Visitors and Residents (V&R) project (White and Connaway 2011-2012). This study utilized methods such as diaries, interviews, and surveys, which allowed for triangulation of data. CIT was used when asking the diarists follow-up questions such as: "Think of a time when you had a situation where you needed answers or solutions and you did a quick search and made do with it. You knew there were other sources but you decided not to use them. Please include sources such as friends, family, teachers, coaches, etc." (White and Connaway 2011-2012).

    Diaries

    Qualitative data-gathering about the everyday lives and practices of patrons/users can be challenging.

This is particularly the case when researchers do not have easy access to homes and other places inhabited by students and faculty in the parts of the day when they are not at the university. Soliciting


    diaries can be a way for libraries and other institutions to find out about what patrons do and think,

    without requiring the constant presence of a researcher.

    Diaries are a relatively unobtrusive means of collecting data; they do not require intrusion into the

    lives of the participants (Berg 2001, 118). However, they are an active instrument, in that the

    researcher solicits diary entries from participants. Diaries are considered field probes for recording

    activities; blank-paged books (or other media) in which participants confide their innermost thoughts

(Lindlof and Taylor 2002; Zimmerman and Wieder 1977). The diary is a limited tool but can nonetheless

    provide insight into the cognitive and psychological lives of the participants (Berg 2001).

    The Visitors and Residents (V&R) project was envisioned as a longitudinal study, one where we not

    only gathered a rich set of data from individuals at a given point in time, but also followed certain

    individuals through time, to be able to track changes that might occur due to technological innovation,

    transitions to other educational stages, geographic moves, and so on (Connaway and Powell 2010;

    White and Connaway 2011-2012 ). Such tipping points cannot be made visible with one-time

    interviews alone.

One approach to the diary method is to allow the participant to keep the diary for about a week, and then submit it to the researcher. The participant may be instructed to simply note the occurrence of events of a certain type (including details of when, where, who, and how) (Lindlof and Taylor 2002, 118). This approach to diary keeping and collecting allows the researcher to note patterns of events that occur in a sample population, such as a family or an athletic team, and then schedule sessions of participant observation when the events occur (Lindlof and Taylor 2002). Interviewers can also use diary data to ask diarists about their attitudes or impressions, usually in a free narrative.

The diary can provide a large amount of information about participants' daily lives (Lindlof and Taylor

    2002). It is also a method in which the participants are mostly free to decide the nature of the data

    (Lindlof and Taylor 2002). However, a weakness of the diary method is that the participants may

    forget to write in their diaries, and they may not be completely truthful (Lindlof and Taylor 2002).

    For the V&R project, we initially proposed that we select a subset of interviewees from which to

    request diaries. Such self-reported documents have been used in other projects to expand the range

    of information that researchers can gather, and to compensate for the fact that we were not with our

    research participants on a regular basis in the course of their everyday lives (Connaway and Powell

    2010; Wildemuth 2009; Somekh and Lewin 2005). We wanted the diaries to be submitted once per

    month, after the initial interview, during a three-year period. The intent was not to dictate what form

    the diaries would take, because we hoped that by leaving it open-ended, we would get more

    information. We did not want to stifle input by being too prescriptive. We hoped to learn from the

    choices the diarists would make in terms of format and means of contact as well as content.

    This open-ended strategy was unsuccessful. We were sent lists of websites that people visited with

    little explanation of why or how they visited those sites. The rich datasets that we acquired from the

semi-structured interviews were not readily forthcoming in the diaries; the risks of engaging with diaries as data-gathering instruments (less-than-full participation, selective revelation of activities, etc.; cf. Connaway and Powell 2010) were fully realized. We needed to provide more structure in the

    longitudinal piece of the study.

    To that end, we decided on monthly interviews with a selection of individuals, via Skype, face-to-face,

    and on the phone (depending on their preference). We also generated a written version of the follow-

    up interview, to provide structure for those who preferred to do the diaries on their own. Incentives

were provided to diarist/interviewees, as was the case with the original sets of semi-structured interviews. The questions are explicitly linked to the lines of inquiry we began in the original interviews. With these follow-up interviews, we are using the critical incident technique (CIT) (Connaway and Powell


    2010, 110), to get interviewees to recall specific times when they engaged in information-seeking

    behaviours. Since the follow-ups are monthly, we ask the interviewees about recent incidents. Even

    though the reporting on their behaviour is not occurring in real time, it is providing longitudinal data

    about their behaviours, and how they change (or do not change) during a specified time period.

Survey Research

The survey is one of the most familiar research instruments in a library context. Surveys are particularly useful to institutions interested in gathering descriptive information from a broad swath of individuals. Keep in mind that the process of survey creation is iterative, and needs to balance the desire to gather large amounts of information from participants against the likelihood that individuals will complete the survey. Surveys can also generate a great deal of data, and it can be a challenge to organize and analyse all of it effectively.

    Survey research has been defined as the research strategy where one collects data from all or part

    of a population to assess the relative incidence, distribution, and interrelations of naturally occurring

variables (Kidder and Judd 1986; Connaway and Powell 2010, 78). The survey is one of the most efficient methods of gathering original data from individual subjects, especially when the subjects are dispersed over a large geographic area. Surveys are designed to measure attitudes, opinions, or

    personal views of a population (Babbie 2013, 254). The end result of the survey method is that, if

    properly done, it allows one to generalize from a smaller group to a larger group from which the

    subgroup has been selected (Babbie 2013, 107). Survey methodology is most commonly used in

    descriptive studies (Connaway and Powell 2010).

The Visitors and Residents (V&R) project includes a survey, which is intended to gather data that will allow us to provide a larger context for the rich interview data gathered from our initial group of participants (White and Connaway 2011-2012). The survey is online and will be distributed to 200 students and scholars (100 each from the US and UK, with 25 participants in each of the four educational stages) to broaden the scope of the study, and to compare the qualitative data collected in interviews and diaries to a larger sample. See Table 1. To date, our findings indicate that the patterns of behaviour revealed in the V&R research vary noticeably by the participants' educational stage and not by age, which can vary broadly within each educational stage for the study participants.

    Conducting a survey will allow us to evaluate the extent to which we can generalize those results

    (Connaway and Powell 2010).

Educational Stage   Definition
Emerging            Last year of high school/secondary school and first-year undergraduate college/university students
Establishing        Upper-division undergraduate college/university students
Embedding           Graduate students
Experiencing        Faculty

Table 1: Definitions of Educational Stages

    We began with the semi-structured interview questions from the first phases of the research project,

and added questions that arose in the course of analyzing the interview data, especially in the creation of the codebook. Our goal was to have a survey that takes approximately 30-45 minutes to


    complete. The survey was pre-tested with a selected group before we recruited for actual survey

participants. We also budgeted for incentives for the survey participants, because we have

    found that individuals are not apt to respond to calls for participation without some sort of

    compensation for their time. Our project is not unique in this respect.

The V&R survey will include a non-probability, purposive, stratified sample. We have specific goals

    for the stratifications, in that we are targeting US and UK populations within the four educational

    stages, with a specific number of participants within each category (Connaway and Powell 2010). This

    is a descriptive study (Connaway and Powell 2010), intended to give us a broader idea of what

    engagement with information and technology looks like in digital and face-to-face situations. The

    survey questions are a combination of open-ended critical incident questions, multiple choice, and

    Likert scale questions. The sequencing and grouping of the different types of questions by theme

    facilitates responding to the open-ended questions.
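To make the stratification concrete, the sketch below shows one possible way to track recruitment against the quotas described above (2 countries x 4 educational stages x 25 participants). It is a minimal illustration, not the project's actual recruitment tooling; the function and variable names are assumptions.

```python
# Minimal sketch (assumed names, not the project's tooling) of tracking a
# purposive, stratified quota: 2 countries x 4 educational stages x 25 people.
from collections import Counter

COUNTRIES = ["US", "UK"]
STAGES = ["Emerging", "Establishing", "Embedding", "Experiencing"]
QUOTA_PER_CELL = 25  # hypothetical constant mirroring the design described above

quotas = {(c, s): QUOTA_PER_CELL for c in COUNTRIES for s in STAGES}
recruited = Counter()

def try_recruit(country: str, stage: str) -> bool:
    """Accept a respondent only if their stratum still has open slots."""
    cell = (country, stage)
    if recruited[cell] < quotas[cell]:
        recruited[cell] += 1
        return True
    return False  # stratum is full; the respondent is screened out

# Example: screen a stream of sign-ups until each cell is filled.
signups = [("US", "Embedding"), ("UK", "Emerging"), ("US", "Embedding")]
for country, stage in signups:
    accepted = try_recruit(country, stage)
    print(country, stage, "accepted" if accepted else "screened out")
```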

    Quantitative

    Quantitative research is familiar to LIS professionals, and, indeed, to higher education. Assessment

requirements frequently require attention to that which can be counted (how many resources, how much time, how many graduates, etc.), and in those cases, quantitative approaches are appropriate.

    In the case of the Visitors and Residents (V&R) project, we were concerned with how and why

    questions, and used both qualitative and quantitative methods (White and Connaway 2011-2012 ).

    Some dichotomize research as either quantitative or qualitative (Connaway and Powell 2010).

Quantitative research takes a deductive, explanation-based point of view, practicing a postpositivist

    paradigm (Davis, Gallardo, and Lachlan 2013). It involves a highly structured problem-solving

    approach, relying on the quantification of concepts for purposes of evaluation and measurement

    (Glazier and Powell 1992). For example, it can be used when studying numerical data, or making an

    attempt to reduce words to numerical data (Davis, Gallardo, and Lachlan 2013). Quantitative research

is suitable where variables of interest can be quantified, where inferences drawn from samples can be extended to larger populations, and where hypotheses can be formulated and tested (Liebscher 1998). The main goal of this type of research is to represent and explain objective reality, with a goal

    of simplifying, organizing, predicting, and controlling human behavior (Davis, Gallardo, and Lachlan

    2013, 41).

    From an ontological perspective, a qualitative researcher might be more of an impressionist, whereas

    a quantitative researcher is typically more of a realist (Davis, Gallardo, and Lachlan 2013). A realist

    perspective seeks to explain phenomena, to predict and control, and represent objective reality

    (Davis, Gallardo, and Lachlan 2013). Quantitative researchers try to find answers to research

    questions oftentimes from the assumption that reality (and knowledge) can be known and measured

    (Davis, Gallardo, and Lachlan 2013). Knowledge is found, not created (Davis, Gallardo, and Lachlan

2013). Quantitative research is also deductive. The researcher begins the research with a basic premise or assumption about the phenomena being studied and then seeks to confirm or disconfirm

    it. It is best to use quantitative methods when desiring to learn about a large population of people, or

    when seeking to generalize data to a larger population of people or a similar situation (Davis,

    Gallardo, and Lachlan 2013).

    Tools used by quantitative researchers include statistical methods, experiments, interviews,

    questionnaires, surveys, theory testing, numerical coding, and secondary data analysis. They also

    typically seek to generalize to a larger group of people (Davis, Gallardo, and Lachlan 2013).

    Although quantitative and qualitative methods have distinct characteristics, we believe that both are

    useful, depending upon the research objectives and questions. The methods can be combined for a

mixed methods approach, which is what we have done with the V&R project (White and Connaway 2011-2012).


    We will be disseminating an online survey to 200 students and scholars (100 each from the US and

    UK, with 25 participants in each of the 4 educational stages) to broaden the scope of the study, and to

    compare the qualitative data collected in interviews and diaries to a larger sample (White and

Connaway 2011-2012). See Table 1. To date, our findings indicate that the patterns of behaviour revealed in the V&R research vary noticeably by the participants' educational stage and not by age, which can vary broadly within each educational stage for the study participants. Conducting a survey will allow us to evaluate the extent to which we can generalize those results (Connaway and Powell 2010).
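The observation that behaviour patterns vary by educational stage rather than age could, once survey data are collected, be examined with a simple cross-tabulation and chi-square test of association. The sketch below is purely illustrative, with invented counts and an assumed behaviour code; it is not the project's analysis, and a non-probability sample limits how far any such test can be generalized.

```python
# Hedged sketch with invented counts: is a coded behaviour (e.g., "consults a
# human source first") associated with educational stage? Illustrative only.
from scipy.stats import chi2_contingency

# Rows: educational stages; columns: behaviour observed / not observed.
observed = [
    [18, 7],   # Emerging
    [15, 10],  # Establishing
    [9, 16],   # Embedding
    [6, 19],   # Experiencing
]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A small p-value would suggest the behaviour differs across stages, subject to
# the caveat that a purposive sample does not support formal generalization.
```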

    Applied Research

    Applied research is usually more pragmatic than basic research, having very specific concerns

    (Connaway and Powell 2010). It takes the theory and concepts from basic research and, by formal

methods of inquiry, investigates real world phenomena (McClure 1989, 282). This specific type of research ideally provides information that can be used immediately to solve actual problems, and it may or may not have application beyond the specific study (Connaway and Powell 2010, 71). However, the utility of a theory can be tested in a real-world context in applied research (Davis, Gallardo, and Lachlan 2013). It can validate theories and lead to revisions of theories (Connaway and Powell 2010, 72). Therefore, applied research can add to the existing body of knowledge within a field

    (Connaway and Powell 2010).

    When considering the difference between basic and applied research, it is important to note that they

should not be assumed to be mutually exclusive, but rather two parts of a continuum (Connaway and Powell 2010, 71). The interplay between academics [basic researchers] and practitioners [applied researchers] can be extremely valuable and should be encouraged (Davis 1982, 96).

    Action Research

    Action research is sometimes treated as interchangeable with applied research (Connaway and

Powell 2010). However, action research is different from applied research as it has direct application to the immediate workplace of the researcher, whereas applied research may have the broader

    purpose of improving the profession at large (Perrault and Blazek 1997, 60). It can be thought of as

    participative organizational research, focused on problem definition and resolution; involving an

    external researcher who works with organizational members to arrive at workable solutions to their

    problems, within the framework of [a] theoretical perspective (Wilson 2008 ). Action research is

    practical, orderly, flexible, and adaptive, and empirical to a degree, but weak in internal and external

validity (Isaac and Michael 1995, 59).

Isaac and Michael (1995, 59) provide six basic steps of action research:

1. Articulating the problem or determining the goal
2. Looking over the literature
3. Formulating testable hypotheses
4. Arranging the research setting
5. Establishing measurement techniques and evaluation criteria
6. Analyzing the data and evaluating the results

The data from action research may turn directly into a new library service, for example, or may lead to improving or discontinuing an existing one (Connaway and Powell 2010).

    Evaluative Research

Wallace and Van Fleet (2012, 1) state, "Research and evaluation are the foundation of evidence-based practice." In today's economic environment, librarians must evaluate and make decisions based on evidence.


    Evaluative, or evaluation, research has a primary goal: to test the application of knowledge in a

    specific project or program (Connaway and Powell 2010). It is practical and utilitarian in nature; it is

    not as useful for the development of theoretical generalizations (Connaway and Powell 2010). In

    most evaluative studies there is an implicit, if not explicit, hypothesis in which the dependent variable

    is a desired value, goal, or effect such as better library skills and higher circulation statistics; the

independent variable is often a program or service (Connaway and Powell 2010, 73). It is much like basic research with regard to methods and techniques (Connaway and Powell 2010, 76).

    There are two types of evaluative research: summative and formative. Summative (e.g., outcome)

    research is characteristically quantitative and is concerned with effects of a program. Formative

    evaluation (e.g., process) is done during a program, and examines how well the program is working. It

    tends to be qualitative, and used more frequently for revising and improving programs. More specific

    types of evaluative research include the use of standards, performance measurement, and cost

    analysis (Connaway and Powell 2010, 74).

These studies typically have a rather high number of uncontrolled variables because they are carried out in real settings (Connaway and Powell 2010). They are also limited in time and space, and the researcher may have a vested interest in the project, making them susceptible to research bias (Connaway and Powell 2010).

    Recent attempts to evaluate the effectiveness of libraries have focused on their outcomes or actual

impact; an increasing number of researchers are attempting to determine how the lives of individuals

    are affected by their use of libraries and other information resources and services, as opposed to just

    stopping with the measurement of output or performance (Connaway and Powell 2010, 75).

    Evidence-Based Research

The call for evidence-based practice and decision-making in libraries began in the mid-2000s, and the publication of the journal Evidence Based Library and Information Practice in 2006, which provides a forum for librarians and information professionals to discover research that may contribute to decision-making in professional practice, promoted it (EBLIP 2013). It is a type of applied (or action) research

    whose popularity can be attributed to an economic climate that includes decreased budgets, requiring

librarians and information professionals to make decisions based on current and valid data and, ideally, to reduce costs (Connaway and Powell 2010). Evidence-based research mirrors both the

    efforts of the practitioners, who consume the results of research in making those decisions, and the

    efforts of applied researchers, who strive to produce the research evidence intended for use by

    practitioners (Eldredge 2006, 342). It has been used by practitioners, professional organizations, and

    researchers alike.

OCLC Online Computer Library Center, Inc. has developed several research activities working with libraries, museums, and archives to utilize the data they collect to provide intelligence for making informed decisions (OCLC Research 2013a). The OCLC research scientists and program officers and their colleagues have disseminated numerous papers, reports, and presentations demonstrating how these data can be used for the development of user-centred services and systems (OCLC

Research 2013b; OCLC Research 2013c). The frequency of literature that addresses evidence-

    based research illustrates the importance and interest of the method among information professionals

    (Connaway and Powell 2010).

    Research Design

    Research design refers to the entire process of research from conceptualizing a problem to writing

    research questions, and to data collection, analysis, interpretation, and report writing, not simply the

    methods such as data collection, analysis, and report writing (Bogdan and Taylor 1975; Creswell


2007). It is the logical sequence that connects the empirical data to a study's initial research question

and, ultimately, to its conclusions (Creswell 2007; Yin 2003, 20). It can be thought of as the structure upon which the research is built: the plan that facilitates the work to be done for the project. More

    specifically, it ensures that the evidence obtained permits the answering of the initial research

    question, as unequivocally as possible (De Vaus 2005).

    The research process should never end, but build on previous research. Scientific research should

    produce a circular movement from facts to hypotheses, to laws, to theories, and back to facts as the

    basis for the testing and refinement of more adequate hypotheses (Connaway and Powell 2010, 22).

Research design should build on previous research, shaping and reshaping its findings (Connaway and Powell 2010).

According to Connaway and Powell (2010), the research design includes the following elements:

1. Goals and objectives
2. Hypothesis
3. Assumptions
4. Definitions: the operational or working definitions for key terms used within the proposal
5. Methodology: the methodology section is where the researcher describes how the study will be organized and the situation in which the hypothesis will be tested; it should also provide details about the techniques and tools to be used for data collection
6. Data analysis: the rationale for your analysis should be specified

(A minimal sketch of these elements as a structured checklist follows below.)
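The sketch below is only one possible way to keep these design elements together as a simple, structured checklist while planning a study; the class and field names merely mirror the list above and are not a prescribed format.

```python
# Minimal sketch, not a prescribed format: fields mirror the design elements
# listed above (Connaway and Powell 2010); all example values are invented.
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class ResearchDesign:
    goals_and_objectives: list[str]
    hypothesis: str | None        # optional: exploratory studies may omit it
    assumptions: list[str]
    definitions: dict[str, str]   # working definitions for key terms
    methodology: str              # how the study will be organized and tested
    data_analysis: str            # rationale for the planned analysis


plan = ResearchDesign(
    goals_and_objectives=["Describe how engagement with technology varies by educational stage"],
    hypothesis=None,
    assumptions=["Participants report their own behaviour accurately enough to be useful"],
    definitions={"educational stage": "Emerging, Establishing, Embedding, or Experiencing"},
    methodology="Semi-structured interviews, diaries, and an online survey",
    data_analysis="Coded transcripts triangulated with survey cross-tabulations",
)
print(plan)
```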

    Theory plays a crucial role in the design of a research study (Connaway and Powell 2010). A good

    research design includes the broad philosophical and theoretical perspectives that can influence the

    quality and validity of a study (Creswell 2007). Theory helps to make research more productive in that

    it organizes a number of unassorted facts, laws, concepts, constructs, and principles into a

    meaningful and manageable form (Connaway and Powell 2010, 47). Theory helps explain the nature

of causal relationships between variables (Connaway and Powell 2010). The next step in the standard

    scientific method of inquiry is the formulation of theoretical hypotheses (Babbie 2010). Babbie defines

    hypothesis as a specified testable expectation about empirical reality that follows from a more

    general proposition (Babbie 2010, 46). Not every research study requires the development of formal

    hypotheses. Qualitative and exploratory research designs do not always warrant the development of a

    hypothesis (Connaway and Powell 2010).

    The basic scientific method of inquiry includes identification or development of a theoretical

    framework, identification of the problem, formulation of the hypothesis, and measurement.

    Measurement is the method to be used for data collection and analysis. The level of validity and

reliability needs to be known and should be built into the data collection and analysis.

    The research project is not likely to succeed unless careful attention has been paid to these steps.

    Yet it is tempting for the researcher to slight, or ignore, these steps in order to get involved in the

design of the study and the collection and analysis of the data. A well-developed concept and plan for the study will save time in the later stages of the research. "A question well-stated is a question half answered" (Connaway and Powell 2010, 67).

    Connaway and Powell (2010, 315) offer the following questions to evaluate the research design:

    1. Does the research design seem adequate and logical for the solution of the problem?

    2. Are the reasons for its choice adequately explained?

    3. Was the methodology explained in an understandable way so that it can be replicated?

    4. If important terms are used in an unusual sense, are they defined?

    5. Are the data collected adequate for the solution of the problem?

    The research questions for the Visitors and Residents (V&R) project developed from the questions

identified by Connaway and Dickey (2010) (White and Connaway 2011-2012). The research design


    includes both qualitative and quantitative methods to provide rich data from a non-probability,

    purposive stratified sample that can be compared to a larger sample.

    Mixed Methods

    Frequently the questions librarians want to answer about patron behavior are complex, and require

multiple approaches. In these cases, it is worthwhile to design mixed methods projects that incorporate inquiries that measure and count as well as ask open-ended, descriptive, or analytical

    questions. Done well, a mixed approach can yield a sophisticated picture of what is going on. The

    disadvantages are that such a combined approach is time consuming, and involves large chunks of

    data that need to be triangulated with each other. Mixed method projects frequently require the use of

qualitative data analysis software packages such as NVivo or ATLAS.ti, to assist with data management.
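As a small illustration of the kind of data management such triangulation involves, the sketch below merges hypothetical coded interview data with hypothetical survey responses for the same participants. The column names, codes, and values are assumptions made for illustration, not exports from the V&R project or from any particular coding package.

```python
# Minimal sketch, under assumed column names and invented values, of lining up
# qualitative codes with survey answers so the two strands can be compared.
import pandas as pd

# Hypothetical coding output: one row per participant per code applied.
codes = pd.DataFrame({
    "participant_id": ["p01", "p01", "p02", "p03"],
    "code": ["convenience", "trust_in_people", "convenience", "avoids_library"],
})

# Hypothetical survey export: one row per participant with a Likert item.
survey = pd.DataFrame({
    "participant_id": ["p01", "p02", "p03"],
    "stage": ["Emerging", "Embedding", "Experiencing"],
    "likert_library_first": [2, 4, 1],  # 1 = never ... 5 = always
})

# Cross-tabulate codes against educational stage to look for convergence
# (or divergence) between the qualitative and quantitative strands.
merged = codes.merge(survey, on="participant_id", how="left")
print(pd.crosstab(merged["code"], merged["stage"]))
print(merged.groupby("code")["likert_library_first"].mean())
```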

    A study that uses a mixed methods research approach (MMR) incorporates both qualitative and

    quantitative techniques of data collection (Connaway and Powell 2010). The nature of data collection

    is important as the research findings are affected by it; they can lose their validity (Connaway and

Powell 2010). Therefore, it is important for the researcher to select two or more data collection techniques and methods to measure the variables and test the hypothesis (i.e., triangulation) (Connaway and Powell 2010). According to Burgess, triangulation refers to three points of view within

    a triangle (Burgess 1984). The term mixed methods, used by Gorman and Clayton, allows the

    researcher to use a range of methods, data, investigators, and theories within a study (Gorman and

    Clayton 2005). For example, information about library use could be collected with questionnaires,

    interviews, documentary analysis, and observation [and] consistent findings among

different techniques [suggest] that the findings are reasonably valid (Connaway and Powell 2010,

    146). However, if there are inconsistencies with the results, it would indicate the need for further

    research (Connaway and Powell 2010). Morgan provides a discussion of the different approaches for

    combining qualitative and quantitative methods, as well as identifying the challenges of combining the

    two methods (Connaway and Powell 2010, 146; Morgan 2006).

According to Connaway and Powell (2010, 146), triangulation (i.e., multiple methods of data collection) can include:

    Interviews (individual and group)

    Observation

    Survey

    Documentary analysis

    Questionnaires

    Benefits of mixed methods include (Connaway and Radford 2013):

    Convergence, corroboration, correspondence, or complementarity.

    Development: the use of results from one method to help develop or inform another.

    Initiation: recasting of questions or results from one method to another.

Expansion: extend breadth and range of enquiry by using different methods.

    Validity is enhanced.

    Outcomes of mixed methods include (Connaway and Radford 2013; Todd 2008):

    Offset: weakness of one method can be compensated for in strengths of another.

    Completeness: a more comprehensive account.

    Explanation: one method helps explain findings of another.

    Unexpected results: surprising, intriguing, and adding to richness of findings.


The Visitors and Residents (V&R) project incorporates mixed methods, using semi-structured interviews with four different groups of individuals in different educational stages, diaries, and an online survey (White and Connaway 2011-2012). See Table 1. The critical incident technique was included in the questions for the semi-structured interviews, the diaries, and the online survey (White and Connaway 2011-2012). The multi-method design enables triangulation, which provides a cross-examination of the data analysis and results. The quantitative and qualitative methods, including ethnographic methods that devote individual attention to the subjects, yield a very rich data set enabling multiple methods of analysis (Connaway, Lanclos, White, Le Cornu, and Hood 2012).

    Key Questions for Mixed Methods (Connaway and Radford 2013):

    1. Use methods simultaneously or sequentially?

    2. Which method, if any, has priority? Why?

    3. Why mixing? E.g., triangulation, explanation, or exploration?

    4. How do mixed methods impact data analysis?

    Data Collection Methods

    Before data collection the researcher must visualise how the data are going to emerge, how the data

    will look, and what will be done with the data (Berg 2012). For example, will data be in an audio file, or

be written longhand in spiral notebooks (Berg 2012)? Oftentimes, when the raw data are collected,

    they are not immediately ready for analysis (Berg 2012). The data must be organized. For example,

    field notes must be typed up and interviews must be transcribed in order for the data to be managed.
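As a small illustration of getting raw data into a manageable state, the sketch below builds a simple manifest of recordings and their transcription status. The directory layout, file naming, and field names are assumptions for illustration only, not a format used by the project.

```python
# Minimal sketch (assumed paths and field names) of tracking which recordings
# still need to be transcribed before analysis can begin.
import csv
from pathlib import Path

DATA_DIR = Path("data/interviews")  # hypothetical location of audio files

rows = []
for audio in sorted(DATA_DIR.glob("*.mp3")):
    transcript = audio.with_suffix(".txt")
    rows.append({
        "participant_id": audio.stem,        # e.g. "uk_emerging_03" (invented)
        "audio_file": str(audio),
        "transcript_file": str(transcript),
        "transcribed": transcript.exists(),  # flags files still to be typed up
    })

with open("interview_manifest.csv", "w", newline="") as fh:
    writer = csv.DictWriter(
        fh, fieldnames=["participant_id", "audio_file", "transcript_file", "transcribed"]
    )
    writer.writeheader()
    writer.writerows(rows)
```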

    According to Creswell (2007) a circle of interrelated activities is the best way to display the data

    collection process. See Figure 1. It is a process of engaging activities that goes beyond just collecting

    data (Creswell 2007). The activities include locating the site/individual, storing data, resolving field

    issues, and recording information, collecting data, purposefully sampling, and gaining access and

    making rapport. These five approaches to inquiry differ in the diversity of information collected, the

    unit of study being examined, the extent of field issues discussed in the literature, and theintrusiveness of the data collection effort (Creswell 2007, 144).

Figure 1: The data collection process (after Creswell), showing the interrelated activities of gaining access and making rapport, locating the site/individual, purposefully sampling, collecting data, recording information, resolving field issues, and storing data.


    Sampling

    Thinking about sampling is important because at the end of the project, you want to be able to make

effective arguments about the representativeness of your research results. Good sampling is in part about avoiding problems of self-selection, for example, "fans" of the library volunteering to help out the library by participating in the research and thereby skewing the data in one way or another. Concerns about the representativeness of your data are important when identifying how students use an academic library. Observations should not be conducted only on a Saturday morning before exam week; they should also be conducted during different time periods, at different points in the semester, and at multiple locations within the library in order to obtain a more representative sample of how students use the library.

    Sampling often is one of the most vital steps in survey research. It is necessary to ensure the quality,

    validity, and credibility of research (Davis, Gallardo, and Lachlan 2013). Proper sampling ensures that

    what is said to be represented is appropriately represented (Davis, Gallardo, and Lachlan 2013).

Intensive sampling methods have been developed and used mainly within the context of survey research (Connaway and Powell 2010). However, different sampling techniques are equally applicable to other research methods, such as content analysis, experimentation, and even field research (Connaway and Powell 2010). It is necessary to have a basic understanding of some of the concepts and terms related to sampling before considering the standard techniques.

Connaway and Powell (2010) identify eight basic terms and concepts:

1. Universe: "the theoretical aggregation of all units or elements that apply to a particular survey" (116). For example, if a researcher were surveying college seniors, the universe would be all college seniors, regardless of their location.
2. Population: "the total of all cases that conform to a pre-specified criterion or set of criteria" (116). A population is a specific part of a universe, for example, college seniors in America. The population must be defined, with regard to the selection criteria, desired size, and the parameters of the survey population, before the sample is selected. Cost and access to the population are also good factors to consider when selecting a sample.
3. Population stratum: "a subdivision of a population based on one or more specifications or characteristics" (116). A population stratum could be college seniors in America who are under the age of 25.
4. Element: "an individual member or unit of a population" (116). Each individual American college senior under 25 would be considered an element.
5. Census: "a count or survey of all the elements of a population, and the determination of the distribution of their characteristics" (116). A census is typically not feasible.
6. Sample: "a selection of units from the total population to be studied" (116). A sample is preferable because it is less costly than studying an entire population. However, it may be impossible to determine how representative a sample is of the population.
7. Case: an individual member of the sample.
8. Sampling frame (i.e., population list): the actual list of units from which the sample, or some part of the sample, is selected.
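To make these terms concrete, here is a minimal sketch in Python using invented data; the registry, ages, and sample size are purely illustrative and are not drawn from the project:

import random

# Sampling frame: the actual list of population elements available to sample from.
# Here, a hypothetical registry of American college seniors (the population).
sampling_frame = [{"id": i, "age": random.randint(20, 40)} for i in range(5000)]

# Population stratum: the subset meeting one further specification,
# e.g. college seniors under the age of 25.
stratum_under_25 = [element for element in sampling_frame if element["age"] < 25]

# Sample: a selection of units drawn from the frame (a simple random sample here).
sample = random.sample(sampling_frame, 200)

# Case: an individual member of the sample.
first_case = sample[0]

print(len(sampling_frame), len(stratum_under_25), len(sample), first_case)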

Representativeness is vital to sampling (Connaway and Powell 2010). It means the researcher does not have to contact, or interview, an entire population in order to get accurate data about that population. In quantitative research designs the sample must be representative of, and generalisable to, the entire population. Generalisability ensures that the researcher's sample will apply to the other situations and people that the study represents (Davis, Gallardo, and Lachlan 2013).

The Visitors and Residents (V&R) project, which proposes that context and motive matter more to individuals' decisions about technology and digital spaces than does age or skill, utilized stratified, purposive, nonprobability sampling methods (White and Connaway 2011-2012).


Quota, purposive, and snowball sampling techniques were used.

The project has specific goals for the stratifications, in that it targets US and UK populations within four educational stages, with a specific number of participants within each category. See Table 1. This is a descriptive study (Connaway and Powell 2010), intended to identify individuals' engagement with information and technology in both digital and face-to-face situations. The project used both qualitative and quantitative methods. The qualitative instruments were semi-structured interviews, conducted with four different groups of individuals, each at a different educational stage, from both the US and the UK. The small sample size restricts the generalisability of the results of the semi-structured interviews and the monthly diaries or interviews. However, the findings provide a useful picture of how a specific group of individuals engage with technology and how these individuals' behaviours change, or do not change, during a three-year period. The quantitative tool was a survey, with a sample selected by quota sampling so that the results would be generalisable as well as comparative. The use of both the qualitative methods with a small sample size and the quantitative method with the large sample size can provide rich, thick data descriptions as well as numerical analyses and comparisons (Connaway, Lanclos, White, Le Cornu, and Hood 2012). If the quantitative data collected from the larger sample and the qualitative data collected from the small sample have little variance, it will be possible to generalise all of the findings (White and Connaway 2011-2012).

    Types of Sampling

There are two types of sampling methods: probability and nonprobability. When selection probabilities are unknown, "one cannot make legitimate use of statistical inference" (Connaway and Powell 2010, 117). With nonprobability sampling the researcher is not able to generalise from the sample to the population. However, social science research often is conducted under circumstances that do not permit the kinds of probability samples used in large-scale surveys, and in these situations nonprobability sampling is suitable (Babbie 2010, 192). A benefit of nonprobability samples is that they are often easier and less expensive to obtain than probability samples.

    Examples of Nonprobability Sampling

    Accidental Sample

    Accidental samples often are conducted on a first come, first serve type basis; having no real

    preferential selection of participants (Connaway and Powell 2010, 117). In utilizing an accidental

    sampling technique, the researcher simply selects the cases that are at hand until the sample

    researches a desired, designated size (Connaway and Powell 2010, 117). With an accidental

    sample, there is little, if any, assurance that the sample was representative of the larger population(Connaway and Powell 2010).

    Quota Sample

    Similar to accidental sampling, quota sampling addresses the issues of representativeness (Babbie

    2010, 194). However, the two methods approach the representativeness differently (Babbie 2010,

    194). Quota sampling begins with a matrix, or a table, describing the characteristics of the target

    population, whereas accidental sampling does not (Babbie 2010, 194). The researcher may need to

    know what part of the population is male and what part female as well as what proportions of each

    gender fall into various categories, such as age, education levels, and ethnic groups (Babbie 2010,

    194). Finding out these proportions is characteristic of quota sampling. Despite the fact that quota

    sampling resembles probability sampling, it has inherent complications. First, the quota frame (theproportions that different cells represent) must be accurate, and its often difficult to get up-to-date
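As an illustration only, the following Python sketch shows how a hypothetical quota frame might be turned into recruitment targets; the cells, proportions, and sample size are invented for the example and are not the V&R project's figures:

# Hypothetical quota frame: proportion of the target population in each cell
# (all figures invented for illustration).
quota_frame = {
    ("female", "under 25"): 0.30,
    ("female", "25 and over"): 0.25,
    ("male", "under 25"): 0.28,
    ("male", "25 and over"): 0.17,
}

sample_size = 200

# Convert the proportions into the number of participants to recruit per cell.
# Recruitment then continues until each cell's quota is filled.
quotas = {cell: round(proportion * sample_size)
          for cell, proportion in quota_frame.items()}

for cell, target in quotas.items():
    print(cell, target)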


Examples of Probability Sampling

Systematic Sample

With systematic sampling, every nth element of the population list is selected; for example, selecting every tenth element from a population list of 1,000 would have a sampling ratio of 1:10 and yield a sample of 100 (Connaway and Powell 2010, 123). A downfall to systematic sampling is that not every element has an equal chance of being drawn (Connaway and Powell 2010).

    Stratified Random Sample

This type of sampling is a modified version of systematic and simple random sampling. It "reduces the number of cases needed to achieve a given degree of accuracy" (Connaway and Powell 2010, 123). Before drawing a stratified random sample, the population elements must be divided into groups or categories, and random samples are then drawn from each group (Connaway and Powell 2010). Each element should appear in only one group, and different sampling methods can be used for each group (Connaway and Powell 2010). The two basic types of stratified random samples are proportional and disproportional (Connaway and Powell 2010).
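A minimal sketch in Python of a proportional stratified random sample, using invented strata and sizes rather than project data; each stratum contributes to the sample in proportion to its share of the population:

import random
from collections import defaultdict

# Invented population: each element belongs to exactly one stratum (educational stage).
population = (
    [{"id": i, "stage": "secondary school"} for i in range(0, 400)]
    + [{"id": i, "stage": "undergraduate"} for i in range(400, 900)]
    + [{"id": i, "stage": "postgraduate"} for i in range(900, 1000)]
)

sample_size = 100

# Group the population elements by stratum.
strata = defaultdict(list)
for element in population:
    strata[element["stage"]].append(element)

# Draw a simple random sample from each stratum, proportional to its size.
stratified_sample = []
for stage, elements in strata.items():
    n = round(sample_size * len(elements) / len(population))
    stratified_sample.extend(random.sample(elements, n))

print(len(stratified_sample))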

    Cluster Sample

Cluster samples are used when populations cannot easily be listed (Connaway and Powell 2010). For example, trying to sample the entire population of colleges in the United States is impractical; cluster sampling can therefore be used to obtain a reasonably representative sample of the population. In this type of sampling, the population is divided into clusters or groups, and the researcher then randomly selects a sample of those clusters (Connaway and Powell 2010).
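In the same illustrative spirit, a short Python sketch of two-stage cluster sampling with hypothetical clusters: clusters are selected at random first, and elements are then sampled within the chosen clusters:

import random

# Invented clusters: each college is a cluster containing its students.
clusters = {f"college_{c}": [f"student_{c}_{s}" for s in range(50)] for c in range(20)}

# Stage 1: randomly select a subset of clusters.
chosen_clusters = random.sample(list(clusters), 5)

# Stage 2: randomly select elements within each chosen cluster.
cluster_sample = []
for college in chosen_clusters:
    cluster_sample.extend(random.sample(clusters[college], 10))

print(chosen_clusters, len(cluster_sample))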

    Determining the Sample Size

The size of the sample is important, as it is essential for a representative sample and for reliable information about a population. The sample cannot be too small. However, it is unnecessary to obtain a larger sample than needed, as doing so increases the amount of time and money spent on a study (Connaway and Powell 2010). There are four criteria for determining a sample size: 1) the amount of precision required between the sample and the population; 2) the variability of the population, which influences the sample size needed to achieve a given level of accuracy; 3) the method of sampling to be used, which can affect the size of a suitable sample; and 4) the way in which the results are going to be analysed, which influences decisions about sample size (Connaway and Powell 2010).

Statistical formulas are used to find the appropriate sample size and the degree of accuracy with which a researcher wants to estimate a certain characteristic, or the variability, of a population (i.e., the standard deviation) (Connaway and Powell 2010). These formulas take into account the confidence level, which relates to whether differences between samples can be attributed to chance or to a real difference (Connaway and Powell 2010). "The confidence level is equal to 1 minus the level of significance or 1 minus the probability of rejecting a true hypothesis" (Connaway and Powell 2010, 129). Free sample size calculators, such as the one from DSS Research at http://www.dssresearch.com/toolkit/sscalc/size.asp, can also be used to determine sample size and the confidence level (Connaway and Powell 2010).
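One widely used formula of this kind estimates the sample size needed to estimate a population proportion, n0 = z^2 p(1-p)/e^2, with a finite population correction. The Python sketch below uses illustrative inputs and is offered only as an example of what such calculators compute, not as the project's own calculation:

import math

def sample_size(population_size, margin_of_error=0.05, z=1.96, p=0.5):
    """Estimate the sample size needed to estimate a population proportion.

    z = 1.96 corresponds to a 95% confidence level; p = 0.5 is the most
    conservative assumption about the population proportion.
    """
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite population
    n = n0 / (1 + (n0 - 1) / population_size)              # finite population correction
    return math.ceil(n)

# Illustrative figures only: a population of 10,000 with a 5% margin of error.
print(sample_size(10000))  # roughly 370 for these inputs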

    Sampling Error

Formulas are also used to find the sampling error (i.e., the standard error of the mean) (Connaway and Powell 2010). The sampling error represents "how much the average of the means of an infinite number of samples drawn from a population deviates from the actual mean of that same population" (Connaway and Powell 2010, 132). Finding this deviation is necessary because the mean of the sampling distribution is not always the mean of the actual population, hence a sampling error (Connaway and Powell 2010). Statisticians also use a point estimate or an interval estimate to arrive at an accurate mean for the population (Connaway and Powell 2010).
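In practice the standard error of the mean is usually estimated from a single sample as the sample standard deviation divided by the square root of the sample size. A minimal Python sketch with made-up data (not project data):

import math
import statistics

# Made-up sample of, say, weekly library visits reported by 12 respondents.
sample = [3, 5, 2, 4, 6, 3, 5, 4, 2, 7, 4, 3]

mean = statistics.mean(sample)
s = statistics.stdev(sample)                 # sample standard deviation
standard_error = s / math.sqrt(len(sample))  # estimated standard error of the mean

# A rough 95% interval estimate for the population mean (point estimate +/- 1.96 SE).
interval = (mean - 1.96 * standard_error, mean + 1.96 * standard_error)
print(round(mean, 2), round(standard_error, 2), interval)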


    Nonsampling Error

Error can also be due to problems of measurement rather than sampling: respondents may lie about their age or report figures inaccurately for various unknown reasons. It is therefore easy to see that determining nonsampling error can be difficult (Connaway and Powell 2010). When sampling error decreases, nonsampling error tends to increase (Connaway and Powell 2010).

    Questionnaire

The Visitors and Residents (V&R) project generated a questionnaire that then became our semi-structured interviews, online survey, and follow-up diary questions (White and Connaway 2011-2012).

The questionnaire is an instrument consisting of a series of questions that, once formulated, can be used in individual or focus group interviews, online or paper surveys, and diaries. The major steps involved in planning the questionnaire are not that different from the planning that should go into the early development of a research study (Connaway and Powell 2010). The first step is to know the purpose of the study, or to define the problem. Second, the researcher must consider the previous advice of experts and related existing research. The researcher must then hypothesize a solution to the proposed problem or define research questions. Fourth, the information needed to test the hypothesis must be identified; this includes how the data will be collected, organized, presented, and analyzed. Next, the researcher must identify the population to be sampled, or the research subjects, and consider whether or not they are accessible. Finally, the data collection method must be selected. The advantages and disadvantages of each data collection technique should be considered in order to make sure the most appropriate one is used (Connaway and Powell 2010).

The questionnaire must be constructed properly if it is to be successful. Questions can be arranged in tunnel format, in which "questions tend to be similar in terms of breadth and depth throughout the questionnaire" (Davis, Gallardo, and Lachlan 2013, 213). They can also be arranged in funnel format, beginning with broad questions and ending with narrower questions, or in inverted funnel format, which is the opposite (Davis, Gallardo, and Lachlan 2013). The types of questions asked must pertain to the type of information needed (Connaway and Powell 2010).

    The following are types of questions that can be asked (Connaway and Powell 2010):

    1. Factual

    2. Opinions

    3. Attitudes

    4. Information

    5. Self-perception

    6. Standards of Action

7. Past or Present Behaviour
8. Projective Questions

    Davis, Gallardo, and Lachlan (2013) offer five strategies for writing questions. Questions:

    1. should be clear

    2. must only be about one issue

    3. should avoid biased wording

    4. should avoid making assumptions

    5. should avoid offending participants if possible.

    The question format is also important, as it can affect obtaining the desired information.


The two basic types of questions are open-ended and fixed-response (or closed-ended) questions (Connaway and Powell 2010; Davis, Gallardo, and Lachlan 2013). An example of an open-ended question could be, "What do you think about the campus?" An example of a fixed-response question could be simply asking whether or not the participant has a smartphone. (For information about the advantages and disadvantages of both types of questions, see Connaway and Powell's (2010) Basic Research Methods for Librarians.)

Scales can also be utilized to obtain responses from participants. Types of scales include the itemized rating or specific category scale, the graphic rating scale, and the rank-order scale (Connaway and Powell 2010). The choice of scale should be relevant to the goals of the research questions and the study. The content and selection of questions is also important. In order to eliminate redundancy, the researcher can use a variable question matrix, which is simply a table with the questions numbered across one edge and the variables across the other (Connaway and Powell 2010). Questions should not be biased, and the participants should be able to answer them (i.e., have all the necessary information) (Connaway and Powell 2010). Also, each item should ask only one question. However, cross-checking questions can be utilized; these ask the same question as one or more other questions in order to check the consistency of a respondent's answers (Connaway and Powell 2010). Careful attention should also be given to question wording and the sequencing of the questions, so as not to compromise the study's validity and reliability (Connaway and Powell 2010).
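A minimal, hypothetical sketch of such a variable question matrix in Python; the question texts and variable names are invented, and the check simply flags variables that no question covers or that several questions cover:

# Rows are question numbers, columns are the variables each question measures
# (all invented for illustration).
question_matrix = {
    "Q1": ["age"],
    "Q2": ["device_ownership"],
    "Q3": ["device_ownership", "frequency_of_use"],
    "Q4": ["satisfaction"],
}

variables = ["age", "device_ownership", "frequency_of_use", "satisfaction", "discipline"]

for variable in variables:
    covering = [q for q, covered in question_matrix.items() if variable in covered]
    if not covering:
        print(f"{variable}: not covered by any question")
    elif len(covering) > 1:
        print(f"{variable}: covered by several questions {covering} (redundancy or cross-check)")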

Once the questionnaire is finalized, it should be checked by an expert who can catch methodological weaknesses, such as insufficient questions or poor scales (Connaway and Powell 2010). After the questionnaire is checked, it should be pre-tested, or used in a pilot study (Connaway and Powell 2010). This test gives the researcher an opportunity to increase reliability and validity, as it may point out weaknesses of the questionnaire. The sample used for the pre-test is often a nonprobability, or convenience, sample (Connaway and Powell 2010). Lastly, the questionnaire should receive final edits to formatting and wording.

    Focus Group Interviews

Focus group interviews can be used to identify the information-gathering patterns of scholars and other specific user groups. The participants could be asked to discuss "the sources they use to find information, what types of information they find most useful, how they evaluate the information they retrieve, and what resources or tools would facilitate information retrieval for their specific purposes" (Connaway and Powell 2010, 173; Connaway 1996). (For more specific examples of library uses of focus groups, see the fifth edition of Connaway and Powell's Basic Research Methods for Librarians.)

A form of focus group interview was used to test the Visitors and Residents (V&R) mapping exercises and to share our findings with experts within the library and information technology professions (White and Connaway 2011-2012). Three of the expert sessions were conducted: one at the EDUCAUSE Conference in Denver in November 2012 and two at the 2013 American Library Association (ALA) Annual Conference in Chicago in June and July 2013. Most recently, at ALA 2013, the sessions were conducted with 19 experts in areas such as information literacy, instruction, evaluation, and user experience. The participants represented a diverse selection of institutions from all regions of the United States, plus one from Australia. While the age range was also diverse, the gender balance was weighted much more heavily towards females, with only three males attending. The sessions ran for 90 minutes each and were conducted interactively, moving between presentation, mapping exercises, and group discussion.

Along with the clinical interview, the nondirective interview, and personal history, the focus group interview is a less structured interview (Connaway and Powell 2010; Connaway 1996). Less structured interviews are best utilized in the beginning stages of a qualitative study or investigation, as they can introduce the researcher to an otherwise unexplored topic or field (Connaway and Powell 2010; Connaway 1996).


Focus group interviews can help the interviewer facilitate complex decision making, examine existing research questions, and identify and address important issues (Connaway and Powell 2010; Connaway 1996). They are designed for the in-depth examination of a topic or situation of interest, to elicit the research participants' beliefs and feelings and to learn how these manifest in behaviour (Connaway and Powell 2010; Connaway 1996; Davis, Gallardo, and Lachlan 2013; Goldman and McDonald 1987). Focus group interviews are useful for developing and refining research instruments, such as interview schedules and questionnaires; getting participants' explanations of results from earlier studies; assessing research sites or study populations; and developing ideas and concepts (Connaway and Powell 2010; Connaway 1996). They can be used to replace the individual interview or a questionnaire, if warranted (Connaway and Powell 2010; Connaway 1996). They can also be used in combination with quantitative or other qualitative methods (Connaway and Powell 2010; Connaway 1996).

A focus group interview is a group discussion among five to twelve participants about a specific topic or situation, led by one or two moderators (Davis, Gallardo, and Lachlan 2013). Focus group discussions start out broadly and progressively narrow down to the focus of the research (Young 1993). Focus groups are beneficial when the interaction among participants will yield the best information, when there is not a lot of time to collect information, when the participants are cooperative with each other, and when participants interviewed one-on-one are hesitant to provide information (Creswell 2007; Kruger 1994; Morgan 1988; Stewart and Shamdasani 1990). They may also provide a more comfortable interview atmosphere for the participants.

Previously, focus group interviews were used mainly in media testing and marketing (Davis, Gallardo, and Lachlan 2013). Their (re)discovery by social scientists is fairly recent (Davis, Gallardo, and Lachlan 2013). Since then, the focus group interview has been used in academic, newspaper, hospital, public, and state libraries to gather information on users' perceptions of services and collections (Connaway and Powell 2010, 173; Connaway 1996). For example, focus group interviews have been used specifically to gather information about the beliefs and work of librarians, to evaluate online searching by end users, and in the research and development of online public access catalogues (Connaway and Powell 2010; Connaway 1996).

    The focus group interview has been used, and can be used, in all types of libraries (Connaway and

    Powell 2010; Connaway 1996). Library and information organizations can use the focus group

    interview to develop needs assessment, community analysis, and promotional strategies for new

    services (Connaway and Powell 2010, 173; Connaway 1996). This method can also be used in

    library and information science research to answer questions regarding the assessment of library

    services and resources, which includes online public access catalogues and online resources

    (Connaway and Powell 2010; Connaway 1996).

    Designing the Focus Group

    Before the focus group interview is facilitated, the researcher must decide what, specifically, is going

    to be studied and who is going to be interviewed. This is important, as the designing of the focus

    group depends on the objectives of the study and the type of participants (Connaway and Powell

    2010; Connaway 1996). It is essential to select participants who provide the most representative

    sample. However, the selection of the moderator is equally important. Preferably, the moderator will

    be an outside person, trained in focus group techniques, with good communication skills (Connaway

    and Powell 2010, 174; Connaway 1996). The moderator should begin the session with an appropriate

    introduction and ice breaker before shifting to the actual interview guide (Connaway and Powell 2010;

Connaway 1996). The interview guide, or discussion schedule, is created as a projective technique, designed to encourage a relaxed, free flow of associations and to evade participants' built-in censoring mechanisms (Connaway and Powell 2010; Connaway 1996; Young 1993). During the interview, the moderator serves as a facilitator and a type of gatekeeper. However, the moderator needs to keep from guiding and directing the conversation too much, listening mostly to the participants and not editing or judging their comments (Connaway and Powell 2010; Connaway 1996).


It is not unusual for the moderator to bring snacks to the session. A focus group interview usually lasts one or two hours, and it may be necessary to have more than one session (Connaway and Powell 2010; Connaway 1996).

    Creswell (2007) recommends six criteria to consider for a successful focus group:

    1. use dependable recording,

    2. design and use the interview protocol,

    3. refine the interview questions and the procedures further through pilot testing,

    4. determine the place for conducting the interview,

    5. obtain consent from the participants upon arriving at the interview site, and

    6. keep to the questions.

Recorders are often used in focus group interviews, as opposed to solely taking notes; the recording can be transcribed later (Connaway and Powell 2010; Connaway 1996).

Essentially, there are five guidelines for the person transcribing the focus group interview:

    1. trace the threads of an idea throughout the discussion,

    2. identify the subgroup or individual to whom an idea is important,

3. distinguish ideas held in common from those held by individuals,

    4. capture the vocabulary and style of the group, and

    5. distinguish among perceptions, feelings, and insights (Connaway and Powell 2010, 175;

    Connaway 1996).

Disadvantages of the Focus Group Interview

As mentioned, focus group interviews are beneficial to the researcher, as they provide an opportunity for in-depth discussion of a potentially unfamiliar topic. They can aid in the refinement of existing research questions or survey instruments and surface important information. However, there are a couple of disadvantages. During the session one participant may dominate, not giving more reserved individuals an opportunity to participate; the researcher must therefore be careful to encourage all participants to talk (Creswell 2007). Secondly, focus groups (i.e., unstructured surveys) are harder to administer and to analyse than more structured surveys (Connaway and Powell 2010; Connaway 1996).

    Below are tips specifically for the focus group moderator, and a listening behaviour exercise that

    gathers useful information about the focus group interview participants (Mild and Johnson-Jones

    2000).

References

Babbie, Earl R. 2010. The practice of social research. Belmont, CA: Wadsworth/Cengage Learning.

    Berg, Bruce L., and Howard Lune. 2012. Qualitative research methods for the social sciences.

    Boston: Pearson.

    Bogdan, Robert, and Steven J. Taylor. 1975. Introduction to qualitative research methods: A

    phenomenological approach to the social sciences. New York: Wiley.

    Brenner, Michael. 1985. Intensive interviewing. In The research interview: Uses and approaches, ed.

    Michael Brenner, Jennifer Brown, and David V. Canter, 147-162. London: Academic Press.

    Burgess, Robert G. 1984. In the field: An introduction to field research. London: Allen and Unwin.


    Connaway, Lynn Silipigni, and Marie L. Radford. 2011. Seeking synchronicity: Revelations and

    recommendations for virtual reference. Dublin, OH: OCLC Research.

    http://www.oclc.org/reports/synchronicity/full.pdf.

    Connaway, Lynn Silipigni, and Marie Radford. 2013. Academic library assessment: Beyond the

    basics. Presented at Marquette University, July 31, in Milwaukee, Wisconsin.

    Connaway, Lynn Silipigni, and Ronald R. Powell. 2010. Basic research methods for librarians. Santa

    Barbara, CA: Libraries Unlimited.

    Connaway, Lynn Silipigni, and Timothy J. Dickey. 2010. The digital information seeker: Report of

    findings from selected OCLC, RIN, and JISC user behavior projects.

    http://www.jisc.ac.uk/media/documents/publications/reports/2010/digitalinformationseekerreport.pdf.

    Connaway, Lynn Silipigni, Donna Lanclos, David S. White, Alison Le Cornu, and Erin M. Hood. 2012.

    User-centered decision making: A new model for developing academic library services and systems.

    IFLA 2012 Conference Proceedings, August 11-17, Helsinki, Finland.

Connaway, Lynn Silipigni. 1996. Focus group interviews: A data collection methodology for decision making. Library Administration and Management 10, no. 4: 231-239.

    Creswell, John W. 2007. Qualitative inquiry and research design: Choosing among five approaches.

    Thousand Oaks, CA: Sage.

    Davis, Charles H. 1982. Information science and libraries: A note on the contribution of information

    science to librarianship. The Bookmark 51/52: 84-96.

    Davis, Christine S., Heather L. Gallardo, and Kenneth L. Lachlan. 2013. Straight talk about

    communication research methods. Dubuque, IA: Kendall Hunt.

    De Vaus, David A. 2005. Research design. London: SAGE.

    EBLIP. 2013. Home. Evidence Based Library and Information Practice.

    http://ejournals.library.ualberta.ca/index.php/EBLIP/.

    Eldredge, Jonathan. 2006. Evidence based librarianship: The EBL process. Library Hi Tech 24, no. 3:

    341-354.

Fisher, Shelagh, and Tony Oulton. 1999. The critical incident technique in library and information management research. Education for Information 17, no. 2: 113-126.

Flanagan, John C. 1954. The critical incident technique. Psychological Bulletin 51, no. 4: 327-358.

    Glazier, Jack D., and Ronald R. Powell, eds. 1992. Qualitative research in information management.

    Englewood, CO: Libraries Unlimited.

    Goldman, Alfred E., and Susan S. McDonald. 1987. The group depth interview. Englewood Cliffs, NJ:

    Prentice Hall.

    Gorman, G. E., and Peter Clayton. 2005. Qualitative research for the information professional: A

practical handbook. London: Facet Publishing.