
Observations from the Field: Sharing a Literature Review Rubric

Rosemary Green
Mary Bowser

Shenandoah University

SUMMARY. This paper reports an ongoing research project aimed at developing an analytic rubric used with graduate literature reviews, focusing on the process of adapting and testing it as an instructional and assessment instrument. Early data are presented with explanations of instrument adaptation and our reflections on training and implementation. Emphasis is given to our techniques for transferring the project from one off-campus graduate program to a similar one, while continuing the faculty-librarian model of collaboration. We offer recommendations for modifications to the process and highlight strengths of the rubric. doi:10.1300/J111v45n01_10

KEYWORDS. Scoring rubric, faculty-librarian collaboration, information literacy, literature review, graduate education, academic writing

INTRODUCTION

Preparing a thesis may be the first large-scale writing task that many graduate students undertake, and often students are exposed to the scholarly writing process for the first time in their master's program. From the student's point of view, writing a thesis and reviewing the literature present opportunities for considerable anxiety, but direct guidance in graduate research writing may not be available (Bruce, 1994a; Green & Bowser, 2002). Graduate students are not necessarily equipped with the requisite skills to execute large research writing projects, including mastery of the literature, nor can it be presumed that they will be taught these skills during their program. Students' writing skills and knowledge about those conventions are assumed, yet they are expected to perform competently and to write a thesis relatively unassisted. Even such basic skills as citing and attributing proper reference to others' work may be taken for granted (Hendricks & Quinn, 2000). In actuality, the graduate student will find very little guidance for combining linguistic attributes with structure in order to complete a literature review (Swales & Lindemann, 2002).

[Haworth co-indexing entry note]: "Observations from the Field: Sharing a Literature Review Rubric." Green, Rosemary, and Mary Bowser. Co-published simultaneously in Journal of Library Administration (The Haworth Information Press, an imprint of The Haworth Press, Inc.) Vol. 45, No. 1/2, 2006, pp. 185-202; and: The Twelfth Off-Campus Library Services Conference Proceedings (ed: Julie A. Garrison) The Haworth Information Press, an imprint of The Haworth Press, Inc., 2006, pp. 185-202.

Available online at http://jla.haworthpress.com doi:10.1300/J111v45n01_10

Downloaded By: [Canadian Research Knowledge Network] At: 20:11 10 March 2009

In response to the difficulties observed when our graduate education students at Shenandoah University began constructing their literature reviews, we developed a rubric to guide students in constructing this section of the thesis. This paper reports an ongoing research project aimed at developing an instrument used with graduate literature reviews. In a faculty-librarian partnership, we created an instructional and assessment rubric with formative, summative, and evaluative applications. Students use the rubric as a writing guide, instructors use it to assess individual literature reviews, and evaluators use it to compare literature reviews across class groups. In an initial pilot phase, the rubric was used with off-campus graduate education students in team-taught thesis courses, and data were collected. On the basis of the adaptive construction of the rubric, a project at a second university was undertaken to test the rubric for transferability to a similar instructional context.

The process of adapting and testing the rubric at a second institution is the focus of this report. We present early data from "Best Practices University" (BPU) with explanations of instrument adaptation and our reflections on training and implementation. Emphasis is given to our techniques for transferring the project from one off-campus graduate program to a similar one, while continuing the faculty-librarian model of collaboration. We offer recommendations for modification to the process as we experienced it and highlight the strengths of the rubric.

UNDERTAKING THE LITERATURE REVIEW

As it is expected of graduate students, the literature review process requires accumulating, reading, comprehending, evaluating, organising, and synthesizing relevant literature. Knowledge of the empirical, theoretical, and methodological foundations of one's research is demonstrated in the literature review, a central feature in both the research proposal and completed thesis. The literature review contextualizes current issues, describing art and science, theory and practice. Undertaking the literature review eventually leads to determination of existing scholarship, support for problem formulation, and location of one's research within existing bodies of knowledge. The review of literature contributes insights or applications otherwise not identifiable in existing literature (Rumrill & Fitzgerald, 2001), grounding the new research and establishing its individuality. Furthermore, the literature review offers epistemological context, revealing the knowledge of previous researchers and knowledge of the review writer (Lea & Street, 1998). As students learn to speak from the text, relying less on received wisdom and personal opinion, they become critical readers of research (Riehl, Larson, Short, & Reitzug, 2000), able to situate an argument by evaluating and incorporating relevant literature.

Developing and composing the literature review takes place over a period of time, following a nonlinear path from emergence to completion. In a cycle punctuated by gathering, analysing, organising, drafting, and reflecting, the literature review process is characterized by intertwining functions. While the literature review is ubiquitous throughout most postgraduate research, literature about the literature review is negligent in addressing the complexity of the process and end product. Academic interest in the literature review as a topic for research is rather limited and seldom turns toward identifying, evaluating, and integrating past research (Bruce, 1994b; Cooper, 1988).

Descriptive or instructional reports address application of the literature review in scholarship and research (e.g., Miller & Crabtree, 2004; Rumrill & Fitzgerald, 2001); advice on construction of a review of the literature (e.g., Afolabi, 1992; Swales & Lindemann, 2002); and information literacy events such as acquiring, evaluating, and organising the literature as elements of the literature review process (e.g., Morner, 1993; Nimon, 2002). A second and smaller group of empirical studies focuses on the review of literature as a phenomenon that frames a research study or serves as an object of inquiry. Qian (2003) and Krishnan and Kathpalia (2002) reported small-scale studies in which literature review writing was used as a vehicle for understanding students' strategies for learning academic writing. As the focus of inquiry, investigations of the literature review and practices in teaching and learning the process appear in a comparably small cohort. Bruce's phenomenographic analysis of doctoral students' experiences of the literature (1994a) and subsequent reflection model (1994b, 1996) spoke to the consolidation of the gathering, evaluating, and synthesizing stages of the literature review process. Two studies reported by faculty-librarian teams employed criterion-referenced rubrics that assess the quality of literature reviews written by doctoral or master's students (Boote & Beile, 2005; Green & Bowser, 2003). Both articulated criteria for the graduate literature review and pointed to clearer standards and instructional strategies for achieving well-constructed literature reviews. In a single doctoral dissertation devoted to the influence and role of the doctoral literature review, Zaporozhetz (1987) reported on doctoral advisors' instructional practices and attitudes toward the review of literature, concluding that the literature review is given the least instructional consideration among dissertation chapters.

Literature from the past two decades offers very few discussions of teaching and learning the literature review process, with the assumption made that students bear sufficient skills and knowledge specific to researching, interpreting, and writing about scholarly literature (Dreifuss, 1981; Libutti & Kopala, 1995; Williams, 2000; Zaporozhetz, 1987). Authors of academic research belong to a scholarly club (Macauley, 2001a), writing for members of that club. Much of the research that is read by graduate students presumes an ability to use earlier research literature, yet novice researchers may experience difficulty deciphering research literature, perceiving it to be privileged discourse (Bruce, 1994b; Riehl et al., 2000). Hart (1998) noted that the quality of literature reviews produced by students varies considerably, and annotated bibliographies rather than critiques of research are all too common. The literature describes a constellation of assumptions regarding student preparedness for graduate research and writing. Added to the presumption that postgraduates are well equipped for the literature review task are further assumptions that they are able to negotiate the information environment and, overall, that they are sufficiently prepared to conduct graduate level research (Hernandez, 1985; Kiley & Liljegren, 1999). As integral to the research process as the literature review should be, students have been known to produce a problem statement, hypothesis, and research design in advance of reviewing relevant literature. The hypothesis or research problem should emerge naturally from the literature review, yet when the quality of graduate research is assessed, rarely is a qualifying literature review that supports a testable hypothesis articulated as a competency (Winston & Fields, 2003).

While most introductory research textbooks in education and the social and behavioral sciences contain a section on reviewing prior research (e.g., Creswell, 2005; Gall, Gall, & Borg, 2003), they concentrate to a greater degree on methods of data collection and analysis. In a review of research writing texts and handbooks, Paltridge (2002) found that less attention is afforded thesis or dissertation writing itself than other elements such as methods and analysis; the same can be said for the treatment of the literature review in these texts. Where the literature review is outlined, texts primarily emphasize procedures for literature gathering and interpreting. Recent texts in information science research (e.g., Gorman & Clayton, 2005; Powell & Connaway, 2004) outlined a similar approach to accumulating literature, and students are left to wonder how to treat that material once gathered. Mauch and Birch (1998) segregated the steps in literature searching from the written review of the literature; the benefits of reviewing the literature are outlined, but no clear strategies for organizing and writing are given. The preference in many texts is for literature searching and reviewing sources rather than constructing a written synthesis; perhaps the authors assume that, once the student has resources in hand, writing a literature review follows readily. Generally, criteria and prescriptions for researching and writing a scholarly literature review remain vague and diverse (Boote & Beile, 2005; Bruce, 1994c).

Three notable exceptions to these segregating approaches are Hart's (1998), Pan's (2003), and Galvan's (2005) guides, devoted entirely to writing the literature review from first draft to completed review. Throughout Doing a Literature Review, Hart offered formulae, checklists, and criteria intended to help students establish a supportable argument and develop attributes in academic writing. Galvan's Writing Literature Reviews and Pan's Preparing Literature Reviews presented complementary workbook approaches, recommending exercises in reading empirical research and writing evaluative essays. Galvan assured students that they will learn how to write a review of the literature, but he, too, distinguished literature seeking from writing about research and failed to define the literature review clearly. Much advice on literature reviewing is given, with little evidence that such advice is taken and, if so, to what effect.


INFORMATION LITERACY: GENERATING THE LITERATURE REVIEW

The literature consistently shows that the practice of reviewing the literature may have two meanings–accumulating relevant literature and writing about such literature. Cooper (1985) encapsulated these two meanings in his definition of a literature review:

First, a literature review uses as its database reports of primary or original scholarship, and does not report new primary scholarship itself . . . Second, a literature review seeks to describe, summarize, evaluate, clarify, and/or integrate the content of the primary reports. (p. 7)

To any student commencing the process, reviewing the literature means that topic and relevant sources must first be located, acquired, interpreted, and analysed.

A decade ago, Ackerson (1996) and Libutti and Kopala (1995) attested that little in the literature of the day addressed the uniqueness of graduate student literature seeking and literature review needs. At one time, information literacy initiatives in higher education concentrated largely on undergraduate education (Dewey, 2001; Grassian & Kaplowitz, 2001). Postgraduates' instructional needs have been overlooked, perhaps because of the assumption that commencing graduate students have already reached required levels of information skills, either by osmosis or through their undergraduate studies (Macauley, 2001a). Historically, efforts toward training and supporting research students have been extensions of undergraduate programs rather than separate programs based on specific needs of research postgraduates. That imbalance is being adjusted, and more initiatives with pedagogically appropriate foundations are now directed toward the master's and doctoral student (Genoni & Partridge, 2000). Effective information literacy instruction targeted toward the graduate learner incorporates methods such as peer collaboration, direct instruction, and a discipline-specific emphasis (Green & Bowser, 2002; Nimon, 2002; Smith & Oliver, 2005; Squires, 1998). Student-centered methods of delivering information literacy instruction include academic coursework (Bruce, 1990; Green, 2006; Kingston & Reid, 1987; Rader, 1990) that incorporates active learning, action research, or problem-based components (Bruce, 2001; Nimon, 2002).


Postgraduate researchers have highly specialized information needs that require skill sets that differ from those needed by undergraduates (Ackerson, 1996; Macauley, 2001b; Morner, 1993; Williams, 2000). Advanced information skills of identification, advanced browsing, critical evaluation, and presentation are particularly required of graduate researchers (Barry, 1997). Foreshadowing later research, Zaporozhetz (1987) and Alire (1984) spoke with groups of graduate students and found that the majority admitted a need for proper instruction in acquiring and using bibliographic sources. Macauley (2001a, 2001b) and Nimon (2002) documented interviews in which students acknowledge the need for information literacy skills in order to complete their dissertations successfully.

With information literacy integrating into the mission of higher education institutions, those in the community who share the same educational goals are brought together in more coordinated efforts. Expanding venues for information retrieval offer new opportunities for faculty-librarian cooperation. The literature documents an increase in information literacy partnerships established across academic units (Dewey, 2001; Doskatsch, 2003; Green & Bowser, 2002; Isbell & Broadus, 1995; Macauley & Cavanagh, 2001). Zaporozhetz (1987) concluded that "the process of writing the dissertation provides a unique interaction among advisors, doctoral candidates, and librarians" (p. 129). Bailey (1985), Bruce (2001), and Macauley and Cavanagh (2001) echoed this observation, describing partnerships shared by postgraduates, librarians, and academics. The impetus for many of these initiatives comes from librarians, especially regarding the stages at which students need and acquire information literacy skills. Nevertheless, academics and librarians agree upon the importance of information literacy to student learning (Bruce, 2001; Macauley, 2001b). Regardless of the pedagogical approach, information literacy training for research students must deal with effective information management in the domain of the students' discipline if the initial stages of identifying the research question and pursuing relevant literature are to proceed.

METHODS

This paper presents and updates ongoing research. With the intention of developing a rubric that could be used for both instructional and evaluation purposes, we first identified our focus as the quality of literature reviews written by students in our master's of education program at Shenandoah University and devised a scoring instrument to apply and scrutinize criteria relevant to thesis literature reviews. The rubric reflected our perspectives on the ways that students accumulate, evaluate, and use research sources in the process of reviewing literature. The first analytic rubric had twelve criteria measured in a range of three scores; we modified the rubric to include ten criteria and a five-score scale, reasoning that smaller differences between samples can be difficult to read in a shorter scale (Perlman, 2002). In a recent study (Green & Bowser, 2003), we used a ten-item, three-category, five-score rubric to rate literature reviews written for theses by master's of education students at our university. Five criteria articulated elements of content, three addressed presentation, and two described writing/format features. Scores ranged from a low of one to a high of five across deficient, undeveloped, emerging, developed, and exemplary scoring ranges; a total score of 50 was possible. All points on the scale and all criteria were clearly labeled and described. We continue to take the rubric through an applied cycle of data collection, interpretation, and revision.

An essential feature of the rubric continues to be the instructional framework in which it is implemented. The rubric is introduced to commencing master's students during the information literacy stages of literature review development, when students are initially gathering and evaluating research literature and preparing to draft their first syntheses of the literature. In later stages of the graduate program, a librarian-instructor collaborates with graduate education faculty to co-teach a thesis course that concentrates on the final stages of completing and writing up research projects. The rubric is used for teaching purposes to communicate clear guidelines for expectations of well-constructed literature reviews, and students use it consistently through literature seeking and writing phases until their theses are completed. Instructors score and match the rubric with each student's series of drafts, and students are able to see progress in their conceptualizing and writing. Additionally, students are aware of and attempt to meet the articulated assessment criteria of the rubric (Moskal & Leydens, 2002).

Introducing and Analyzing the Rubric at a Second Site

In 2003 we were invited to share the rubric and our instructional model with another, comparably sized institution. Initially, we recognized the feasibility of a joint venture because the structures of graduate education programs at Best Practices University (BPU) and Shenandoah University (SU) are similar. Graduate education programs in both institutions are targeted toward teacher practitioners, constructed around cohort models, share similar instructional methods, and use a combination of face-to-face methods and course management software to deliver instruction to off-campus graduate students. At both institutions, librarians collaborate with faculty to teach graduate education courses centered on information literacy and methods in gathering, evaluating, and synthesizing research literature. We welcomed this opportunity to test the rubric and our instructional model for transferability to a similar context and to examine the dependability of our research procedures. By examining the rubric in the new setting, we hoped also to study the rubric for flexibility as well as utility for formative, summative, and evaluative purposes.

As the joint project began to take shape, one critical difference between the two institutions became apparent. Graduate education students at SU organize their research projects around topics of personal interest and professional expertise, so each project is relatively unique. In contrast, all graduate education students at BPU are required to incorporate educational best practices into their scholarly projects. BPU students are grouped as they enter the program, students write initial literature reviews of single topics within best practices literature, and the capstone research projects and literature reviews are written by the student groups. Therefore, in the first months of this phase we concentrated on modifying and adapting our rubric. With a faculty-librarian team at BPU, we designed a best practices rubric to match expectations that their beginning students would incorporate the literature of best practices in K-12 teaching into initial project literature reviews. As we operationalized the criteria to fit the best practices context, six criteria from our original rubric were revised; of those, two content criteria and one composition criterion underwent substantive changes to accommodate the emphasis on current best practices literature. During the fall 2004 term, the BPU librarian-instructor taught new graduate education students in an introductory research course, using the best practices rubric to provide guidelines for constructing early literature reviews and to provide feedback on students' drafts.

Using the foundation of our original rubric, the three categories for criteria–content, presentation, and writing/format–were retained in the best practices rubric, as was the five-point scoring scheme. In the best practices rubric, content criteria address (1) inclusion of historical, theoretical and seminal literature; (2) inclusion of best practices literature; (3) quality of literature; (4) relevance of literature to current topic; and (5) relevance of studies to each other. Presentation criteria describe (6) organization; (7) transitions; and (8) justification for further research. Finally, the writing/format criteria articulate (9) clarity of writing and interpretation of literature; and (10) bibliographic (APA) format.
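The adapted rubric's organization lends itself to a simple data model. The sketch below is purely illustrative: the criterion and category names follow the list above and the score labels are those of the original SU rubric, but the Python structure and function names are our own invention, not part of the study.

```python
# Hypothetical sketch of the best practices rubric as a data structure.
# Criterion/category names follow the paper; everything else is illustrative.

CATEGORIES = {
    "content": [
        "Historical, theoretical, and seminal literature",
        "Best practices literature",
        "Quality of literature",
        "Relevance of literature to current topic",
        "Relevance of studies to each other",
    ],
    "presentation": [
        "Organization",
        "Transitions",
        "Justification for further research",
    ],
    "writing/format": [
        "Clarity of writing and interpretation of literature",
        "Bibliographic (APA) format",
    ],
}

# Five-point scale retained from the original SU rubric.
SCALE = {1: "deficient", 2: "undeveloped", 3: "emerging",
         4: "developed", 5: "exemplary"}

def total_score(scores):
    """Sum ten criterion scores (each 1-5); the maximum possible total is 50."""
    assert len(scores) == 10 and all(1 <= s <= 5 for s in scores)
    return sum(scores)

print(total_score([5] * 10))  # 50, an all-"exemplary" review
```

The three-category grouping makes it straightforward to report subtotals (content, presentation, writing/format) as well as the 50-point total used in the analyses below.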

Following implementation of the adapted rubric at BPU, we sought to test the efficacy of the instrument, instructional model, and our research methods. We engaged BPU raters to apply the rubric as an evaluative instrument, comparing samples of literature reviews written by BPU students using the rubric with samples written before the rubric was introduced into their graduate education program. We were conscious of the potential for imposing our biases and preferred to use independent raters rather than read the trial samples ourselves, so we continued to use that approach here. We prepared two BPU trainers, the graduate librarian-instructor and a member of the education faculty, who then trained their own faculty readers to read and rate samples of literature reviews using the best practices rubric. In the process of preparing the two BPU trainers to use the rubric, we checked for criterion-related reliability by matching their scores with our ratings of the same samples. Inter-rater reliability was also checked by determining that the two trainers reached consensus independently when rating literature review exemplars drawn from BPU students' work. The trainers proceeded to act as our agents and continued the project at their institution for one academic term. They randomly selected 16 samples, equally distributed between literature reviews written with and without the use of the rubric. Then they selected, trained, and paired eight raters from among their institution's graduate education faculty to read and score the samples. Each sample was read and scored twice. As our research agents, the graduate librarian and faculty member coded and distributed samples to their raters, then returned the scored rubrics to us for analysis.
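The inter-rater consensus check described above can be illustrated with a simple agreement measure. The paper does not specify a statistic beyond "reaching consensus," so the exact-and-adjacent agreement function below is a hypothetical sketch, and the criterion scores in it are invented:

```python
def adjacent_agreement(rater_a, rater_b, tolerance=1):
    """Fraction of criteria on which two raters' scores differ by at most `tolerance`.

    A common, simple consistency check for rubric scoring; the study's actual
    reliability procedure is not specified beyond "reaching consensus".
    """
    assert len(rater_a) == len(rater_b)
    agree = sum(1 for a, b in zip(rater_a, rater_b) if abs(a - b) <= tolerance)
    return agree / len(rater_a)

# Invented criterion scores (ten criteria, 1-5 scale) for two paired raters.
rater_1 = [3, 4, 3, 4, 3, 4, 3, 2, 4, 5]
rater_2 = [3, 3, 4, 4, 2, 4, 4, 3, 4, 4]
print(adjacent_agreement(rater_1, rater_2))               # 1.0 (exact or adjacent)
print(adjacent_agreement(rater_1, rater_2, tolerance=0))  # 0.4 (exact only)
```

In practice, a more formal statistic such as weighted kappa could be substituted; the point here is only that paired criterion scores lend themselves to a direct agreement check.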

Because the data were inconclusive, we decided to take these samples through another reading and analysis. We trained a second group of four raters, this time faculty and librarian readers from Shenandoah University. Again, criterion-related rater reliability and inter-rater reliability were confirmed by reaching consensus between experts and raters and between raters. In this reading, raters were not paired; instead, each of the 16 samples read by the first raters was read again by one new rater.


FINDINGS AND DISCUSSION

Sixteen literature reviews emphasizing best practices research and written by commencing master's of education students at BPU were scored twice by paired raters from their home institution. Several months later, the same samples were read a third time by raters selected from our university. The literature reviews were scored on a five-point scale according to criteria to include use of best practices literature and to accommodate early-stage literature reviews. A two-sample t test assuming unequal variances was conducted on the total score means, but no significant differences were found between scores of samples written without rubrics and scores of samples written with rubrics. When mean scores by criteria were examined, a range from a low of 2.8 (SD 1.3) for criterion 1, historical, theoretical, seminal literature, to a high of 3.8 (SD 1.1) for criterion 10, APA format, emerged first in samples of literature reviews written without the rubric. Among the samples of literature reviews written with the rubric, mean scores ranged from a low of 2.4 (SD 1.3), again criterion 1, to a high of 4.2 (SD 0.9), again criterion 10. Samples written with the use of the rubric showed slight gains in criterion 4, relevance of published studies to current topic, from 3.5 (SD 0.7) to 3.7 (SD 0.8), in criterion 8, justification for further research, from 2.6 (SD 1.3) to 3.2 (SD 1.2), and again in criterion 10, APA format, from 3.8 (SD 1.1) to 4.2 (SD 0.9). Table 1 describes these findings, derived from three readings of each group of sample literature reviews.
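The total-score comparison described above is a two-sample t test assuming unequal variances (Welch's test). A minimal sketch follows; the total scores used here are invented for illustration and are not the study's raw data:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic (unequal variances assumed)."""
    va, vb = variance(a), variance(b)  # sample variances (n - 1 denominator)
    na, nb = len(a), len(b)
    return (mean(a) - mean(b)) / ((va / na + vb / nb) ** 0.5)

# Invented total scores (max possible is 50) for illustration only.
without_rubric = [34, 28, 41, 30, 36, 25, 39, 33]
with_rubric = [36, 31, 40, 29, 38, 27, 42, 35]

t = welch_t(with_rubric, without_rubric)
print(round(t, 2))
```

A small t statistic relative to its critical value, as in the study's inconclusive result, indicates that the difference between group means is not distinguishable from sampling noise given the high standard deviations reported.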

Raters at Best Practices University consistently scored their students' literature reviews higher than did the Shenandoah University raters. While total scoring from BPU raters improved slightly in the samples written with the rubric, data from SU raters indicate that total scores decreased in the samples written with the rubric. High standard deviations in all scores are noted. Those findings are described in Table 2.

By sharing and adapting the rubric to a second setting, we introduced new variables into our project. For example, we learned that our original rubric was designed for students concluding their literature reviews and theses, not necessarily for students who were just beginning the literature review process as were those at Best Practices University. For this reason, further revisions to the rubric for both instructional and evaluative functions are needed. At BPU, adjunct education faculty as well as the librarian and faculty we trained taught the literature review process using the rubric, whereas at SU the same librarian-instructor and education faculty have consistently team-taught the thesis course, including formative and summative uses of the rubric. Faculty, librarians, and our trained raters at SU typically examine literature reviews in their final drafts rather than in the beginning stages, and that background possibly resulted in their assigning lower scores to the BPU first-stage literature reviews. Nevertheless, when taken together, the total scores suggest the students are able to demonstrate emergent capability in reviewing literature.

Our conclusions should be tempered with an appreciation for the preliminary nature of the revised rubric and its use in the second setting. Most significant, perhaps, is that BPU trainers, faculty, and students were new to the process of using formative and summative rubrics, and our literature review rubric was one of the first introduced into their graduate programs; consequently, a period of adjustment was to be expected. As an analytical tool, the rubric was newly introduced to the BPU environment, possibly accounting for the relatively broad range of scores given to individual criteria.

TABLE 1. Comparison of Samples Written Without and With Rubric, by Criterion

Rubric Criterion                                             Without Rubric   With Rubric
                                                             Mean (SD)        Mean (SD)
1. Historical & theoretical background; seminal literature   2.8 (1.3)        2.4 (1.3)
2. Best practices literature                                 3.0 (1.4)        3.0 (1.4)
3. Quality of literature                                     3.1 (1.3)        3.1 (1.4)
4. Relevance of published studies to current topic           3.5 (0.7)        3.7 (0.8)
5. Relevance of published studies to each other              3.3 (1.1)        3.1 (1.3)
6. Organization                                              3.7 (1.2)        3.3 (1.1)
7. Transitions                                               3.7 (1.1)        3.4 (1.1)
8. Justification for further research                        2.6 (1.3)        3.2 (1.2)
9. Clarity of writing and interpretation of literature       3.6 (1.0)        3.4 (0.8)
10. Bibliographic (APA) format                               3.8 (1.1)        4.2 (0.9)

Note: Values are rounded to tenths.

TABLE 2. Comparison of Total Scores of Samples Written Without and With Rubric, by Raters

                                           Best Practices      Shenandoah
                                           University Raters   University Raters
Without Rubric, Total Scores Mean (SD)     34.3 (9.3)          29.9 (7.4)
With Rubric, Total Scores Mean (SD)        34.9 (7.2)          28.0 (6.8)

Note: A total score of 50 points was possible.

We have reached encouraging conclusions regarding the abilities of novice graduate students to appropriate practices for reviewing the literature. They seem to be able to apply bibliographic formatting accurately and consistently, regardless of instructional model. In the introductory research class, students begin to use the literature to articulate the relationship between published research and their newly defined research topics and to justify their research studies. We would expect other criteria for quality and relevance of studies and for presentation to improve over time as students gain more experience with the literature. At the lowest points of standard deviation, mid-level, undeveloped to emerging scores are maintained for criteria 4, relevance of published studies to current topic, and 10, bibliographic format, and for criteria 6, organization, 7, transitions, and 9, clarity of writing and interpretation. These scores encourage us to conclude that these learners are able to demonstrate relevance of the literature using conventions required in early stages of academic writing, even though they may be entering the cycle of gathering, analyzing, and synthesizing research literature for the first time.

The process of sharing, adapting, and revising the original rubric for the second setting showed interesting and informative results, and we learned that circumstances affecting rater training and instrument use required significantly different approaches at each site. We were able to establish criterion-related and inter-rater reliability with two BPU trainers and with SU raters by seeking consensus at the time that training exemplars were read. However, inter-rater reliability at the training was not statistically determined with correlation coefficients. The variability of scores among raters and between groups of raters indicated that we should continue to examine training procedures throughout the reading process. Rater checking, begun during the training phase, should also continue throughout data collection.
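Had correlation coefficients been computed, inter-rater reliability for a pair of raters could be estimated as the Pearson correlation between their scores on the same set of reviews. A minimal sketch, using hypothetical paired scores rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two raters' scores on the same samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical total scores assigned by two raters to the same six reviews
rater_1 = [34, 28, 41, 30, 36, 25]
rater_2 = [32, 30, 39, 28, 37, 27]
print(f"r = {pearson_r(rater_1, rater_2):.2f}")
```

An r near 1.0 would indicate that the two raters rank the samples similarly; values well below that would flag the kind of rater variability the authors observed and support continued rater checking during data collection.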

We observed, too, that matters of course outcomes and program design affected the manner in which we modified and recommended using the rubric. Translations from classroom practice to examination of practice are not necessarily seamless. If the rubric is to serve dual functions as an instructional and evaluative instrument, it requires further modification and exploration. We found evidence of differences in the perceptual frameworks of trainers, raters, instructors, and student writers, emphasizing the need to operationalize criteria for literature reviews and for master's research projects as a whole. All participants in the rubric implementation process should be able to differentiate well-developed and comprehensible criteria, yet when the new rubric was put into practice, we observed that criteria were not necessarily understood consistently across participating groups, including students, instructors, and readers.

We question now whether a five-score rubric functions best as an assessment tool; returning to Perlman (2002), we recognize that long scales present difficulties in differentiating adequately between scale points and challenge raters to reach agreement on scores. The next phase of instrument development will examine whether the rubric as it is currently written functions equally well for both instruction and evaluation or whether two instruments are now required. As an unexpected outcome from the process of adapting our original rubric, we have been able to reorganize the first instrument's criteria and refine criteria descriptions. We realize, too, that the best practices rubric and our original rubric represent two instructional tools that may potentially serve as bookends in the process of literature review construction, one at the first draft phase and one at the final phase of completion.

CONCLUSION

Teaching and learning the literature review process in graduate education deserves exploration, discussion, and innovation. We have concluded that the body of research relevant to teaching, learning, and evaluating literature reviews is quite small. Despite this apparent lack of conversation, the importance of giving pedagogical support to improving the conceptualization and structure of literature reviews cannot be overstated. Sources included in the review are chosen largely based on research needs perceived by the graduate writer, following criteria that may or may not meet standards of objectivity. Depending on the skills and inclinations of the writer, justification for inclusion of literature in the review may be quite subjective, in part because graduate students may not be sufficiently guided in the process of reviewing the literature. Ideally, tools such as rubrics can be used as part of the instructional scheme of modeling good literature reviews, scaffolding methods for learning the process, and coaching students in the gradual, recursive process of writing and revising. Our study contributes to this discussion by proposing an instructional model, describing an analytical instrument for the process, and sharing the procedures by which both instruction and evaluation can be refined and distributed among colleagues.


REFERENCES

Ackerson, L. G. (1996). Basing reference service on scientific communication: Toward a more effective model for science graduate students. RQ: Reference Quarterly, 2, 248-260.

Afolabi, M. (1992). The review of related literature in research. International Journal of Information and Library Research, 4 (1), 59-66.

Alire, C. A. (1984). A nationwide survey of education doctoral students' attitudes regarding the importance of the library and the need for bibliographic instruction. Unpublished doctoral dissertation, University of Northern Colorado, Greeley.

Bailey, B. (1985). Thesis practicum and the librarian's role. Journal of Academic Librarianship, 11, 79-81.

Barry, C. (1997). Information skills for an electronic world: Training doctoral research students. Journal of Information Science, 23, 225-238.

Boote, D., & Beile, P. (2005). Scholars before researchers: On the centrality of the dissertation literature review in dissertation preparation. Educational Researcher, 34 (6), 3-15.

Bruce, C. (1990). Information skills coursework for postgraduate students: Investigation and response at the Queensland University of Technology. Australian Academic & Research Libraries, 21, 224-232.

Bruce, C. (1994a). Research students' early experiences of the dissertation literature review. Studies in Higher Education, 19, 217-229.

Bruce, C. (1994b). Supervising literature reviews. In O. Zuber-Skerritt & Y. Ryan (Eds.), Quality in postgraduate education (pp. 143-155). London: Kogan Page.

Bruce, C. (1994c). When enough is enough: Or how should research students delimit the scope of their literature review? In G. Ryan, P. Little & I. Dunn (Eds.), Challenging the conventional wisdom in higher education (pp. 435-439). Sydney: University of New South Wales.

Bruce, C. (1996). From neophyte to expert: Counting on reflection to facilitate complex conceptions of the literature review. In O. Zuber-Skerritt (Ed.), Frameworks for postgraduate education (pp. 239-253). Lismore, New South Wales: Southern Cross University.

Bruce, C. (2001). Faculty-librarian partnerships in Australian higher education: Critical dimensions. Reference Services Review, 29, 106-115.

Cooper, H. (1985). A taxonomy of literature reviews. Washington, DC: National Institute of Education.

Cooper, H. (1988). Organizing knowledge syntheses: A taxonomy of literature reviews. Knowledge in Society, 1, 104-126.

Creswell, J. W. (2005). Educational research: Planning, conducting, and evaluating quantitative and qualitative research (2d ed.). Upper Saddle River, NJ: Pearson.

Dewey, B. I. (2001). Library user education: Powerful learning, powerful partnerships. Lanham, MD: Scarecrow Press.

Doskatsch, I. (2003). Perceptions and perplexities of the faculty-librarian relationship: An Australian perspective. Reference Services Review, 31, 111-121.

Dreifuss, R. A. (1981). Library instruction and graduate students: More work for George. RQ: Reference Quarterly, 21, 121-123.


Gall, M. D., Gall, J. P., & Borg, W. R. (2003). Educational research: An introduction (7th ed.). Boston: Allyn and Bacon.

Galvan, J. L. (2005). Writing literature reviews: A guide for students of the social and behavioral sciences (3rd ed.). Glendale, CA: Pyrczak Press.

Genoni, P., & Partridge, J. (2000). Personal research information management: Information literacy and the research student. In C. Bruce & P. Candy (Eds.), Information literacy around the world: Advances in programs and research (pp. 223-235). Wagga Wagga, New South Wales: Centre for Information Studies, Charles Sturt University.

Gorman, G. E., & Clayton, P. (2005). Qualitative research for the information professional: A practical handbook (2d ed.). London: Facet.

Grassian, E. S., & Kaplowitz, J. R. (2001). Information literacy instruction: Theory and practice. New York: Neal-Schuman.

Green, R. (2006). Fostering a community of doctoral learners. Paper presented at the Eleventh Off-Campus Library Services Conference, Savannah, GA.

Green, R., & Bowser, M. (2002). Managing thesis anxiety: A faculty-librarian partnership to guide off-campus students through the thesis process. Journal of Library Administration, 37, 341-354.

Green, R., & Bowser, M. (2003, April). Evolution of the thesis literature review: A faculty-librarian partnership to guide off-campus research and writing. Paper presented at the Eleventh Annual Conference of the Association of College and Research Libraries, Charlotte, NC.

Hart, C. (1998). Doing a literature review: Releasing the social science research imagination. Thousand Oaks, CA: Sage.

Hendricks, M., & Quinn, L. (2000). Teaching referencing as an introduction to epistemological empowerment. Teaching in Higher Education, 5, 447-457.

Hernandez, N. (1985). The fourth, composite 'R' for graduate students: Research. University of Wyoming, Laramie (ERIC Document Reproduction Service No. ED267671).

Isbell, D., & Broadus, D. (1995). Teaching writing and research as inseparable: A faculty-librarian teaching team. Reference Services Review, 23 (4), 51-62.

Kiley, M., & Liljegren, D. (1999). Discipline-related models for a structured program at the commencement of a PhD. Teaching in Higher Education, 4, 61-75.

Kingston, P., & Reid, B. (1987). Instruction in information retrieval within a doctoral research program: A pertinent contribution? British Journal of Academic Librarianship, 2, 91-104.

Krishnan, L., & Kathpalia, S. S. (2002). Literature reviews in student project reports. IEEE Transactions on Professional Communication, 45, 187-197.

Lea, M. R., & Street, B. (1998). Student writing in higher education: An academic literacies approach. Studies in Higher Education, 23, 157-172.

Libutti, P., & Kopala, M. (1995). The doctoral student, the dissertation, and the library: A review of the literature. Social Science Reference Services, 48, 5-25.

Macauley, P. (2001a). Doctoral research and scholarly communication: Candidates, supervisors and information literacy. Unpublished doctoral thesis, Deakin University, Geelong, Victoria.

Macauley, P. (2001b). Menace, missionary zeal or welcome partner? Librarian involvement in the information literacy of doctoral researchers. New Review of Libraries and Lifelong Learning, 2, 47-66.


Macauley, P., & Cavanagh, A. K. (2001). Doctoral dissertations at a distance: A novel approach from down under. Journal of Library Administration, 32, 331-346.

Mauch, J. E., & Birch, J. W. (1998). Guide to the successful thesis and dissertation: Conception to publication. A handbook for students and faculty. New York: Marcel Dekker.

Miller, W. L., & Crabtree, B. F. (2004). Depth interviewing. In S. N. Hesse-Biber & P. Leavy (Eds.), Approaches to qualitative research (pp. 185-202). New York: Oxford University Press.

Morner, C. J. (1993). A test of library research skills for education doctoral students. Unpublished doctoral dissertation, Boston College, Boston.

Moskal, B. M., & Leydens, J. A. (2002). Scoring rubric development: Validity and reliability. In C. Boston (Ed.), Understanding scoring rubrics: A guide for teachers (pp. 25-33). College Park, MD: ERIC Clearinghouse on Assessment and Evaluation. (ERIC Document Reproduction Service No. ED471518).

Nimon, M. (2002). Preparing to teach "the literature review": Staff and student views of the value of a compulsory course in research education. Australian Academic & Research Libraries, 33, 168-179.

Paltridge, B. (2002). Thesis and dissertation writing: An examination of published advice and actual practice. English for Specific Purposes, 21, 125-143.

Pan, M. L. (2003). Preparing literature reviews: Qualitative and quantitative approaches (2d ed.). Glendale, CA: Pyrczak Press.

Perlman, C. (2002). An introduction to performance assessment scoring rubrics. In C. Boston (Ed.), Understanding scoring rubrics: A guide for teachers (pp. 5-13). College Park, MD: ERIC Clearinghouse on Assessment and Evaluation. (ERIC Document Reproduction Service No. ED471518).

Powell, R. R., & Connaway, L. S. (2004). Basic research methods for librarians (4th ed.). Westport, CT: Libraries Unlimited.

Qian, J. (2003). Chinese graduate students' experiences with writing a literature review. Unpublished master's thesis, Queen's University at Kingston, Ontario.

Rader, H. B. (1990). Bringing information literacy into the academic curriculum. College & Research Libraries News, 51, 879-880.

Riehl, C., Larson, C. L., Short, P. M., & Reitzug, U. C. (2000). Reconceptualizing research and scholarship in educational administration: Learning to know, knowing to do, doing to learn. Educational Administration Quarterly, 46, 391-427.

Rumrill, P. D., & Fitzgerald, S. M. (2001). Using narrative literature reviews to build a scientific knowledge base. Work, 16, 165-170.

Smith, J., & Oliver, M. (2005). Exploring behaviour in the online environment: Student perceptions of information literacy. ALT-J, Research in Learning Technology, 13 (1), 49-65.

Squires, D. (1998). The impact of new developments in information technology on postgraduate research and supervision. In J. A. Malone, B. Atweh & J. R. Northfield (Eds.), Research and supervision in mathematics and science education (pp. 299-322). Mahwah, NJ: Lawrence Erlbaum Associates.

Swales, J. M., & Lindemann, S. (2002). Teaching the literature review to international graduate students. In A. M. Johns (Ed.), Genre in the classroom: Multiple perspectives. Mahwah, NJ: Lawrence Erlbaum Associates.


Williams, H. C. (2000). User education for graduate students: Never a given, and not always received. In T. E. Jacobson & H. C. Williams (Eds.), Teaching the new library to today's users (pp. 145-172). New York: Neal-Schuman.

Winston, B. E., & Fields, D. L. (2003). Developing dissertation skills of doctoral students in an Internet-based education curriculum: A case study. American Journal of Distance Education, 17, 161-172.

Zaporozhetz, L. (1987). The dissertation literature review: How faculty advisors prepare their doctoral candidates. Unpublished doctoral dissertation, University of Oregon, Eugene.

doi:10.1300/J111v45n01_10
