Advisory Committee for Academic Assessment
Office of Academic Assessment

Kent State University
Kent, Ohio 44242

Contents

What is the purpose of this guide?
Why is assessment of student learning important?
How do we define the process of academic assessment?
What are the six steps to guide an assessment process?
Who is responsible for developing an assessment process for academic units?
What is the role of the Advisory Committee for Academic Assessment?
Step One: Identify Goals
Step Two: Identify Objectives
Step Three: Specify Approaches
Step Four: Specify Measures
Step Five: Evaluate & Share Results
Step Six: Make Changes
Glossary
Resources

What is the purpose of this guide?

The purpose of this guide is to help academic units develop and/or improve the process of assessing student learning.

Why is assessment of student learning important?

There are two main reasons why assessment of student learning is important.

Assessment is needed for improvement. Improvement, with its internal focus,

provides opportunities for the academic community to engage in self-reflection on its learning goals, to determine the degree to which these goals correspond to student and societal needs, and to evaluate whether students’ activities, products, or performances coincide with the academic community’s expectations;

offers information to students about the knowledge, skills, and other attributes they can expect to possess after successfully completing coursework and academic programs; and

establishes ways for academic units to understand the dimensions of student learning when seeking to improve student achievement and the educational process.

Assessment is needed for accountability. Accountability, with its external focus, provides

evidence of student achievement to accreditation groups, state legislators, and other stakeholders in education. Kent State University’s accreditation process, the Academic Quality Improvement Project (AQIP), holds the institution responsible for evidence, among other efforts, of the continuous improvement of student learning.

How do we define the process of academic assessment?

The process of assessment, as it is understood in this guide, recommends that academic units

undertake activities to clarify the needs of their students and faculty, relevant administrators, community members, and others involved with the outcomes of student learning,

decide what academic goals they value and thus what they expect students to learn,

determine and implement the approaches, methods, and measures best suited to evaluate the degree to which student learning outcomes meet these expectations, and finally

agree on ways to use this evidence to support improved student learning as well as an improved process for its assessment.

A comprehensive definition of assessment that portrays this process was proposed to the American Association for Higher Education by Thomas Angelo (AAHE Bulletin, November 1995, p. 7). The arrowed statements by ACAA following each component of the definition specify, broadly, actions to be taken.

“Assessment is an ongoing process aimed at understanding and improving student learning. It involves

making our expectations explicit and public;

→ All academic units should articulate clearly their learning goals as measurable objectives of their programs and ultimately of their courses, involving and sharing the development of these objectives with students, administrators, and other publics.

setting appropriate criteria and high standards for learning quality;

→ Standards of excellence are required when establishing learning expectations and the criteria for evaluating learning outcomes to assure the assessment process has as its primary goal the continuous improvement of the quality of student achievement.

systematically gathering, analyzing, and interpreting evidence to determine how well performance matches those expectations and standards;

→ Faculty members in academic units should use valid and reliable ways to assess student learning, employ multiple measures on a regular basis, and ascertain the degree to which learning outcomes coincide with objectives and standards agreed on by the unit.

and using the resulting information to document, explain, and improve performance.”

→ These results, then, provide the evidence to document and support explanations of student performance. Using these results, the unit has an opportunity to re-examine objectives, methods, and measures as feedback to help students to improve their learning.

What are the six steps to guide an assessment process?

1. Identify in broad terms what educational goals are valued

2. Articulate multiple measurable objectives for each goal

3. Select appropriate approaches to assess how well students are meeting the articulated objectives

4. Select appropriate measures that can be administered, analyzed, and interpreted for evidence of student learning outcomes

5. Communicate assessment findings to those involved in the process of assessment

6. Use feedback to make changes, inform curricular decisions, and reevaluate the assessment process with the intent to continuously improve the quality of student learning.
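
One way to make the cyclical character of these steps concrete is to model them as an ordered, repeating process. The short Python sketch below is purely illustrative; the step names are paraphrased from this guide, and nothing in the sketch is prescribed by ACAA.

# Illustrative sketch only: the six-step assessment cycle as an
# ordered, repeating process. Step names are paraphrased from this
# guide.

ASSESSMENT_STEPS = [
    "Identify goals",
    "Identify measurable objectives",
    "Specify approaches",
    "Specify measures",
    "Evaluate and share results",
    "Make changes",
]

def run_assessment_cycle(rounds: int) -> None:
    """Walk the six steps repeatedly; completing step six leads back
    to step one, reflecting continuous improvement."""
    for cycle in range(1, rounds + 1):
        for number, step in enumerate(ASSESSMENT_STEPS, start=1):
            print(f"Cycle {cycle}, Step {number}: {step}")

run_assessment_cycle(2)  # two passes through the cycle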

These six steps can be applied to different levels or areas of student learning, such as a program (e.g., a graduate program or a program designed for a Learning Community), a major field of study (e.g., music), or a course (e.g., English 10001). The steps are useful for programs in academic units that are degree granting as well as for the wide range of programs and processes that influence learning, such as distance learning, library resources, or liberal education courses.

Who is responsible for developing an assessment process for academic units?

Every academic unit will determine for itself the process most effective for the assessment of student learning in its programs.

Faculty are in the best position to determine educational values, to define measurable objectives to assess, to select methods and measures, and to use findings to improve student learning outcomes.

Faculty need to assure that the educational requirements of others, such as students, employers, or other knowledgeable persons, are considered in developing this process.

This effort is a university-wide one and should be consistent with our institutional mission and strategic directions.

What is the role of the Advisory Committee for Academic Assessment?

The Office of the Provost appointed the Advisory Committee for Academic Assessment (ACAA) in Spring 2000 for the purpose of

collaborating with the Office of Academic Assessment to support and coordinate a process to assess student academic achievement.

serving as one resource to academic units for the continuous improvement of the quality of education at Kent State University.

assisting in the development of materials such as this guide, workshops, and surveys to assist academic units in their preparation, implementation, and review of assessment plans.

This guide is a resource in support of assessment as are workshops and other events sponsored by ACAA.

Step One: Identify Goals

A goal is a statement expressing what ideals are to be achieved. Goal statements tend to be broadly philosophical, global, timeless, and not readily amenable to measurement. They capture the knowledge, skills, and values that students should acquire in a program or a course.

This first step in identifying goals requires faculty and others to reflect on questions such as the following:

What is the mission of this unit that guides and encapsulates the essence of learning: the knowledge, the skills, and the values or attitudes to be achieved by students?

Are these goals compatible with the mission of the university and its strategic plan?

The characteristics of goal statements should be the same whether the focus is at the level of the undergraduate major or minor, a specialized program, the graduate program, courses, or an entire unit. Several examples of broadly stated, philosophical, and hopeful goal statements follow.

“The mission of the School of Architecture and Environmental Design is to encourage development of inquiring, responsible persons who will dedicate themselves to the improvement of the quality of life, the enhancement of the physical environment and the protection of the public welfare as related to architecture and urban design. The values to be developed are spiritual as well as physical, social as well as economic, and aesthetic as well as technical.” (Extrapolated from the Undergraduate Catalog, 2001-2002, pg. 243)

“The mission of the English Department is to foster literacy in the broadest sense through appreciation of the written word.” (From the Academic Program Review: Self-Study Report, the B.A. in English, Literature and Creative Writing Options, 1995)

“The goal of the School of Art is to provide graduates with the ability to develop works of art that express ideas and personal feelings as well as analyze and interpret works of art made by others.” (Submitted as an example for this guide by a faculty member of the School of Art, 2001)

Learning goals allow us to share with others the ideals of student learning we hope to achieve and to indicate the consistency of these goals with the mission of the university and its strategic planning. The above goal statements reflect the characteristics of such aims: they are general, they are ideals hoped for, they are not time-bound, and, unfortunately, they are not amenable, as stated, to being measured.

Herein lies the rub. Because the intent of academic assessment is to support continuous improvement of student learning, we must derive from these goals elements that can be measured. If this does not occur, we cannot evaluate how well students are learning what we expect of them. For this reason, the second task in assessment planning requires the redefinition of goal statements as measurable objectives.

Step Two: Identify Objectives

The task at step two is to redefine broad, global goal statements by specifying them in terms that allow for evaluation of how well students are meeting these learning goals.

List the student learning objectives for this major (course). Learning objectives should specify the activities, products, or performances to be measured and evaluated and the criteria they must meet for success. Learning objectives state what students will know, understand, and be able to do when they complete this major (course).

Defining objectives requires faculty and others to reflect on the questions below:

How can the learning goals be stated as an activity, product, or performance that can be measured?

What will students know, understand, and be able to do when they complete studies within this academic unit?

Will the specified learning objectives provide direction for educational activities in the unit and inform students about the expectations of the faculty?

An example of the transformation of learning goal statements to learning objectives can be demonstrated using the examples of learning goals presented in Step One.

Goal: To develop “responsible persons who will dedicate themselves to the . . . enhancement of the physical environment.”

Learning Objective: Students will be able by their junior or senior years to critique various ethical and legal policies that impact the physical environment and defend, in both verbal and written work, their choices as to those that benefit this environment.

Goal: “To foster literacy through the appreciation of the written word.”

Learning Objective 1: Students will be able to master interpretive and analytical skills in writing about literature.

Learning Objective 2: Students will be able to critique and revise their own material.

Goal: “To design and develop works of art that express ideas and personal feelings” . . .

Learning Objective: Students will provide a body of their artwork, accompanied with narrative, that demonstrates independent artistic development and self-reflection.

Restating learning objectives with as much specificity as possible, by defining the criteria by which knowledge, performance, or values will be evaluated, assures objectivity and makes required standards apparent. For example, in the second objective for English above, the faculty need to make explicit the level of performance that distinguishes acceptable from unacceptable work. These revisions of objectives occur with discussion of what approaches or methods will be used as well as how objectives will be measured . . . the next steps in this process of assessment.

Step Three: Specify Approaches

Approaches define the procedures by which information is gathered, whereas measures (in Step Four) are the specific instruments used to provide data. Some typical approaches (methods) used to gather information on student learning include portfolios, capstone courses, standardized achievement tests, external reviews, internship performances, focus groups, and so on. Multiple approaches (methods) and administration times are essential to ensure that students who may perform poorly with one method or at one time have other opportunities to demonstrate their learning.

More than one approach should be used to evaluate an objective. A valuable way to avoid the possible bias of using one method is to employ alternative methods at different points in time. For example, with regard to the learning objective for the School of Art in Step Two, an approach might be student portfolios examined by external reviewers to judge whether the artwork meets the criteria specified. An additional method is a student survey at graduation to evaluate the degree to which students feel competent to perform the specified elements or criteria that reflect artistic development and self-reflection.

When selecting approaches to use, the following are some questions that need careful consideration.

How many faculty are willing to participate in the methods selected?

Will all students in a program or course be evaluated, or a sample of students?

How much time is involved? Determine how this will affect faculty, staff, and student work.

How much useable information already exists and is available to the unit?

What are any cost constraints? Are there university resources that can be used for support?

Will a study at one point in time (cross-sectional method) provide the information needed, or will following individual students or groups of students over several points in time (longitudinal method) be a more useful approach?

Does the unit prefer to use methods that rely primarily on numerical analysis (quantitative method) or on observations (qualitative method)?

Step Four: Specify Measures

The task at this step is to identify and use measures appropriate for assessing the level at which students have achieved the desired learning objectives. Needed now is agreement among faculty as to what evidence will show that students are achieving the skills, knowledge, and values important to the academic unit. This moves the assessment process from a focus on intended results, expressed as learning objectives, to the level of achieved results.

Many measures can evaluate the objectives for learning, but it is important not to depend on a single measure to provide data about what and how well students are learning. Doing so can result in misinformation. Just as students learn in different ways, students respond differently to various evaluation tools. Using varied measures over time, including performance measures, more accurately affirms change and growth in learning. This allows greater confidence when recommending changes in the learning and assessment processes. Multiple measures to evaluate the learning objective for the School of Architecture (in Step Two) are offered below as examples of several ways to provide data about the level of student learning, each related to the same learning objective.

Capstone Experience Evaluation measures, through explicitly defined criteria, the competency level at which students have mastered the knowledge, skills, and values that define the major. To evaluate the level at which students have mastered one facet of this objective (responsibility for enhancing the physical environment), seniors would write a paper. Faculty would establish a scoring guide of the essential elements used to judge this work. The elements might include knowledge of environmental policies, their historical development, their level of environmental impact, and students’ understanding and valuation of policies that benefit or threaten the environment. Each criterion/element would have a subscale, defined possibly as percentages or in specific terms, agreed to measure the competency of student work.
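
To make the idea of a scoring guide concrete, the short Python sketch below tallies a weighted rubric score for one paper. The criterion names, weights, and 1-to-5 subscale are hypothetical assumptions for illustration, not elements prescribed by this guide.

# Illustrative sketch of a capstone scoring guide (rubric). The
# criteria, weights, and 1-5 subscale are hypothetical assumptions.

CRITERIA_WEIGHTS = {
    "knowledge of environmental policies": 0.30,
    "historical development of policies": 0.20,
    "level of environmental impact": 0.25,
    "valuation of policies benefiting the environment": 0.25,
}

def rubric_score(ratings: dict) -> float:
    """Combine per-criterion faculty ratings (1-5 scale) into a
    weighted percentage score."""
    total = 0.0
    for criterion, weight in CRITERIA_WEIGHTS.items():
        rating = ratings[criterion]       # 1 (poor) to 5 (excellent)
        total += weight * (rating / 5.0)  # normalize to 0-1, then weight
    return round(100 * total, 1)

# Example: one senior paper as rated by a faculty panel.
paper_ratings = {
    "knowledge of environmental policies": 4,
    "historical development of policies": 3,
    "level of environmental impact": 5,
    "valuation of policies benefiting the environment": 4,
}
print(rubric_score(paper_ratings))  # 81.0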

Internally Developed Tests are established by consensus of members of the faculty to measure, as above, the level of knowledge students have about legal and ethical policies regarding the physical environment, how they would use policies to enhance the environment, and their personal values regarding this issue. To ensure questions focus on the objectives of the program, the test should evaluate elements that nearly all faculty in the program agree students should know, be able to apply, and value in relation to the learning objective.

Surveys, garnered often from alumni, employers, and students, indirectly measure through self-report the competency of student learning at various times during students’ academic careers or after completion of their studies. The School might choose a survey of graduates three years out to evaluate what responsibilities they are taking with regard to enhancing the environment, the degree to which they believe their studies fostered the direction they have taken, and the value of this objective to their job opportunities or community service.

Some questions that need to be considered when selecting measures include the following.

What schedule can be established to ensure an ongoing process of evaluation of objectives?

Will an externally developed test measure the specific goals and objectives of interest?

Are there faculty who can prepare internally developed tests or performance measures that are valid and reliable?

How can students be motivated to do their best on any measures assessing learning?

Step Five: Evaluate and Share Results

The task at this step is three-fold. First is to collect the information from measures that have been chosen to provide evidence of how close students’ actual learning comes to meeting the expected outcomes faculty and others have for a course, a major, or a program. Second is to evaluate what is found. Third is to share the findings.

Some questions that need consideration when planning this step follow.

In connection with the collection of measures,

Are faculty or other knowledgeable personnel available to supervise the collection of information intended to measure student learning?

Is space available to maintain information securely so as to assure confidentiality?

If measures are to be repeated on a regular basis, or longitudinal studies are to be done, is staff available to assemble the information in a useable fashion?

Are there software programs available that can assist in the storage of the data to be used?

Are students made aware of faculty advocacy of the assessment process and the agreed-upon measures to assess their learning? Faculty attitude and contribution are known to have an effect on student participation and motivation.

In connection with the interpretation of measures,

Are faculty available to score, analyze, and interpret the findings of the measures to be used? If not, are there university or college resources available to assist in these tasks?

If externally developed (standardized) tests are used, is there assurance that the interpretations from these tests will provide the information required?

In connection with the communication of results,

Who will have access to the findings from any studies of student learning?

Who are the audiences to whom the findings will be reported? How will communications differ for different audiences, e.g., for annual reports to the faculty, to the college, to ACAA, to students, or for marketing purposes?

Who will have responsibility for writing reports?

How will the unit assure a timely and accurate flow of information to those who will use this information?

How will findings be used and by whom?

Sharing results leads to the reevaluation of the assessment process to improve student learning.

Step Six: Make Changes

Academic assessment is an ongoing process that requires continuous reevaluation as to whether teaching and learning processes achieve the goals and objectives defined by faculty in the academic unit. When students succeed in achieving those goals and objectives, one might assume that the teaching and learning processes are functioning well. When students do not achieve those goals and objectives, changes should be made in teaching and learning processes. Reevaluation after changes are made will suggest if those changes were helpful to student learning. In this way, assessment creates a continuous cycle through these six steps in the assessment program and teaching/learning processes.

In making changes, faculty should consider the following two questions:

What elements of the teaching/learning processes should be added, deleted, or changed to improve student success?

Did the assessment plan for the academic unit produce results that have face validity? If not, why not?

Making changes to enhance student success requires reflection and thoughtful analysis foreshadowed in the actions suggested in the section on “How do we define the process of academic assessment?”

for academic units “to agree on ways to use this evidence to support improved student learning as well as an improved process for assessment”, and

to make use of findings to “provide the evidence to document and support explanations of student performance” and as “an opportunity to re-examine objectives, methods and measures as feedback to help students to improve their learning.”

Some believe that when the words “improvement” or “enhancement” are used, something must be wrong. That is not the case. Most faculty, for example, are accustomed to reviewing and looking to improve what occurs during class time, at the end of a course, or in committees that discuss curriculum, pedagogy, and other educational matters. The intent of step six is the same: to plan, often with others, new ways to accomplish their goals for students.

Some questions that need consideration at this juncture follow.

Do the objectives and findings define as well as answer the questions that are important to understanding and enhancing student learning?

Are faculty and students motivated to participate in the assessment process? If not, why not?

Has thought been given to the use of benchmarks based on comparable student groups?

Are there resources available to assist in areas of assessment that are found problematic?

Is there adequate support from the university to allow for continuous implementation and evaluation of the assessment plan?

Glossary

In developing this guide, it was necessary to define and explain terms as they seemed appropriate for its purposes. ACAA recognizes that some terms (concepts such as assessment, methods, measures, goals, objectives, and values, to cite a few) are understood in ways different from those used here. In an effort to be helpful, not arbitrary, the following explanations of terms as used in this guide are offered.

Approaches are the procedures used to gather the information needed to assess how well students have met the learning objectives. They are the course of action through which evidence about courses, programs, majors and the like will be gathered. To provide quality information, multiple approaches should be used.

Assessment refers to a continuous process instituted to understand and improve student learning. While academic units may find alternative pathways to arrive at this goal, this process needs to begin with articulation of educational goals for all programs and courses. These goals should be expressed as measurable objectives followed by the selection of reliable and valid methods and measures. After collecting, interpreting, and sharing findings, the aim is to use these learning outcomes to better understand how and what students learn, how well students are meeting expected objectives, as well as to develop strategies to improve the teaching and learning processes.

Benchmark is the actual measurement of group performance against an established standard or performance, often external.

Criterion is the standard of performance established as the passing score for a performance or other measures such as a test. The performance is compared to an expected level of mastery in an area rather than to other students’ scores.

Cross-Sectional Studies provide information about a group of students at one point in time.

Evaluate and Evaluation are terms used in this guide to indicate the interpreting of findings and are used as synonyms for the terms assess and assessment. ACAA is aware that many make a distinction between evaluation and assessment, the difference being that assessment is a process predicated on knowledge of intended goals or objectives while, in contrast, evaluation is a process concerned with outcomes without prior concern or knowledge about goals. That distinction is not used in this guide.

Goals are statements about the general academic aims or ideals to which an educational unit aspires. Goal statements allow us to share with others our hopes in regard to the learning achievements of our students. Further, goals at the unit level should align with the mission of the university. Goal statements are not amenable, as stated, to measurement.

Longitudinal studies provide information from the same group of students at several different points in time.

Measures are the specific instruments or performances used to provide data about learning. They are the tools that are to provide information as to the level of achieved results or outcomes. To avoid systematic bias in findings, multiple measures are required.

Methods - see approaches.

Objectives are the redefinition of learning goals in a way that permits their measurement. Objectives express the intended results or outcomes of student learning and clearly specify the criteria by which student knowledge, performance, or values will be evaluated.

Process is a method generally involving steps or operations that are ordered and/or interdependent.

Qualitative and Quantitative Research describe two research methods. Both are valuable as a means to assess student learning outcomes. In a practical and somewhat philosophical sense the difference is that quantitative research tries to make use of objective measures to test hypotheses and to allow for controlling and predicting learning. Qualitative research makes use of more subjective observations of learning.

Reliability is the extent to which studies or findings can be replicated.

Sampling consists of obtaining information from a portion of a larger group or population. When the selection of a sample is randomly chosen there is greater likelihood that the findings from the sample will be representative of the larger group.
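
For illustration, a simple random sample can be drawn in a few lines of Python; the roster size and sample size below are hypothetical.

import random

# Hypothetical roster of 200 students; a simple random sample of 30
# is drawn without replacement so that findings are more likely to
# represent the larger group.
students = [f"student_{i:03d}" for i in range(1, 201)]
sample = random.sample(students, k=30)
print(sample[:5])  # first five sampled student IDs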

Validity depends on demonstrating that a measure actually measures what it is purported to measure.

Resources

There are many sources of information and assistance available to help with the tasks suggested in this guide. A full listing of these, with brief comments as to their purpose, will be available at the ACAA website. A limited number of resources that may be immediately valuable are suggested here.

Quality Management Journal, 6(2), 9-21 (1999).

HOW TO IMPROVE TEACHING QUALITY

Richard M. Felder
Department of Chemical Engineering

North Carolina State University

Rebecca Brent
College of Engineering

North Carolina State University

An announcement goes out to the faculty that from now on the university will operate as a total quality management campus. All academic, business, and service functions will be assessed regularly, and quality teams will plan ways to improve them. A campus quality director and a steering team are named, with the director reporting to the Provost. All university departments appoint quality coordinators, who attend a one-day workshop on quality management principles and return to their departments to facilitate faculty and/or staff meetings at which quality improvement is discussed.

Many faculty members are irate. They argue that TQM was developed by and for industry to improve profits, industry and the university are totally different, and talking of students as "customers" is offensive and makes no sense. They make it clear that they will have nothing to do with this scheme and will view any attempt to compel them to participate as a violation of their academic freedom.

What happens then is…practically nothing. Some changes are made in business and service departments, some curricula are revised, and a few instructors make changes in what they do in their classrooms but most go on teaching the way they have always taught. After two or three years the steering committee writes its final report declaring the program an unqualified success and disbands, and life goes on.

Higher education discovered total quality management in the 1980s and quickly became enamored of it. Books like TQM for Professors and Students (Bateman and Roberts 1992) and Total Quality Management in Higher Education (Sherr and Teeter 1991) declared that TQM could serve as a paradigm for improving every aspect of collegiate functioning from fiscal administration to classroom instruction. Terms like "customer focus," "employee empowerment," "continuous assessment," and "Deming’s 14 principles" started appearing with regularity in education journals and in administrative pronouncements on campuses all over the country. Deming himself suggested the linkage between quality management principles and education, claiming that "…improvement of education, and the management of education, require application of the same principles that must be used for the improvement of any process, manufacturing or service" (Deming, 1994).

Some academic programs and many individual faculty members have tried applying quality principles in their work. Recent papers in engineering education describe quality-based models for classroom instruction (Jensen and Robinson 1995; Shuman et al. 1996; Stedinger 1996; Latzgo 1997; Karapetrovic and Rajamani 1998), curriculum reform and revision (Bellamy et al. 1994; Litwhiler and Kiemele 1994; Summers 1995; Houshmand et al. 1996; Shelnutt and Buch 1996), and department program planning and administration (Diller and Barnes 1994). Nevertheless, after more than a decade of such efforts, TQM has not established itself as the way many universities operate, especially in matters related to classroom instruction.

Our concern in this paper is specifically with teaching, as opposed to academic or research program structure and administration. We first consider how an instructor can improve the quality of instruction in an individual course, and then the more difficult question of how an academic organization (a university, college, or academic department) can improve the quality of its instructional program. In both cases, we examine the potential contribution of quality management principles to teaching improvement programs in light of the cultural differences between industry and the university.

IMPROVING TEACHING QUALITY IN AN INDIVIDUAL CLASS

We may define good teaching as instruction that leads to effective learning, which in turn means thorough and lasting acquisition of the knowledge, skills, and values the instructor or the institution has set out to impart. The education literature presents a variety of good teaching strategies and research studies that validate them (Campbell and Smith 1997; Johnson et al. 1998; McKeachie 1999). In the sections that follow, we describe several strategies known to be particularly effective.

Write instructional objectives

Instructional objectives are statements of specific observable actions that students should be able to perform if they have mastered the content and skills the instructor has attempted to teach (Gronlund 1991; Brent and Felder 1997). An instructional objective has one of the following stems:

At the end of this [course, chapter, week, lecture], the student should be able to ***

To do well on the next exam, the student should be able to ***

where *** is a phrase that begins with an action verb (e.g., list, calculate, solve, estimate, describe, explain, paraphrase, interpret, predict, model, design, optimize, …). The outcome of the specified action must be directly observable by the instructor: words like "learn," "know," "understand," and "appreciate," while important, do not qualify.

Following are illustrative phrases that might be attached to the stem of an instructional objective, grouped in six categories according to the levels of thinking they require.

1. Knowledge (repeating verbatim): list [the first five books of the Old Testament]; state [the steps in the procedure for calibrating a gas chromatograph].

2. Comprehension (demonstrating understanding of terms and concepts): explain [in your own words the concept of phototropism]; paraphrase [Section 3.8 of the text].

3. Application (solving problems): calculate [the probability that two sample means will differ by more than 5%]; solve [Problem 17 in Chapter 5 of the text].

4. Analysis (breaking things down into their elements, formulating theoretical explanations or mathematical or logical models for observed phenomena): derive [Poiseuille’s law for laminar Newtonian flow from a force balance]; simulate [a sewage treatment plant for a city, given population demographics and waste emission data from local manufacturing plants].

5. Synthesis (creating something, combining elements in novel ways): design [an elementary school playground given demographic information about the school and budget constraints]; make up [a homework problem involving material covered in class this week].

6. Evaluation (choosing from among alternatives): determine [which of several versions of an essay is better, and explain your reasoning]; select [from among available options for expanding production capacity, and justify your choice].

The six given categories are the cognitive domain levels of Bloom’s Taxonomy of Educational Objectives (Bloom 1984). The last three categories (analysis, synthesis, and evaluation) are often referred to as the "higher-level thinking skills."
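
For readers who want a concrete handle on the taxonomy, the short Python sketch below treats it as a lookup table from levels to action verbs. The verb sets are a small hypothetical sample, and the classifier is a crude heuristic for demonstration only.

# Minimal sketch: Bloom's cognitive levels mapped to sample action
# verbs. The verb sets are a small illustrative sample, not an
# exhaustive taxonomy.

BLOOM_LEVELS = {
    1: ("Knowledge", {"list", "state", "name", "define"}),
    2: ("Comprehension", {"explain", "paraphrase", "summarize"}),
    3: ("Application", {"calculate", "solve", "apply"}),
    4: ("Analysis", {"derive", "simulate", "model"}),
    5: ("Synthesis", {"design", "create", "compose"}),
    6: ("Evaluation", {"determine", "select", "justify"}),
}

def classify_objective(objective: str) -> str:
    """Guess the Bloom level of an instructional objective from its
    leading action verb (a crude heuristic for illustration)."""
    verb = objective.lower().split()[0]
    for level, (name, verbs) in BLOOM_LEVELS.items():
        if verb in verbs:
            return f"Level {level}: {name}"
    return "Unclassified (verb not in the sample lists)"

print(classify_objective("derive Poiseuille's law from a force balance"))
# Level 4: Analysis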

Well-formulated instructional objectives can help instructors prepare lecture and assignment schedules and facilitate construction of in-class activities, out-of-class assignments, and tests. Perhaps the greatest benefit comes when the objectives cover all of the content and skills the instructor wishes to teach and they are handed out as study guides prior to examinations. The more explicitly students know what is expected of them, the more likely they will be to meet the expectations.

Use active learning in class

Most students cannot stay focused throughout a lecture. After about 10 minutes their attention begins to drift, first for brief moments and then for longer intervals, and by the end of the lecture they are taking in very little and retaining less. A classroom research study showed that immediately after a lecture students recalled 70% of the information presented in the first ten minutes and only 20% of that from the last ten minutes (McKeachie 1999).

Students’ attention can be maintained throughout a class session by periodically giving them something to do. Many different activities can serve this purpose (Bonwell and Eison 1991; Brent and Felder 1992; Felder 1994a; Johnson et al. 1998; Meyers and Jones 1993), of which the most common is the small-group exercise. At some point during a class period, the instructor tells the students to get into groups of two or three and arbitrarily designates a recorder (the second student from the left, the student born closest to the university, any student who has not yet been a recorder that week). When the groups are in place, the instructor asks a question or poses a short problem and instructs the groups to come up with a response, telling them that only the recorder is allowed to write but any team member may be called on to give the response. After a suitable period has elapsed (which may be as short as 30 seconds or as long as 5 minutes—shorter is generally better), the instructor randomly calls on one or more students or teams to present their solutions. Calling on students rather than asking for volunteers is essential. If the students know that someone else will eventually supply the answer, many will not even bother to think about the question.

Active learning exercises may address a variety of objectives. Some examples follow.

Recalling prior material. The students may be given one minute to list as many points as they can recall about the previous lecture or about a specific topic covered in an assigned reading.

Responding to questions. Any questions an instructor would normally ask in class can be directed to groups. In most classes—especially large ones—very few students are willing to volunteer answers to questions, even if they know the answers. When the questions are directed to small groups, most students will attempt to come up with answers and the instructor will get as many responses as he or she wants.

Problem solving. A large problem can always be broken into a series of steps, such as paraphrasing the problem statement, sketching a schematic or flow chart, predicting a solution, writing the relevant equations, solving them or outlining a solution procedure, and checking and/or interpreting the solution. When working through a problem in class, the instructor may complete some steps and ask the student groups to attempt others. The groups should generally be given enough time to think about what they have been asked to do and begin formulating a response but not necessarily enough to reach closure.

Explaining written material. TAPPS (thinking-aloud pair problem solving) is a powerful activity for helping students understand a body of material. The students are put in pairs and given a text passage or a worked-out derivation or problem solution. An arbitrarily designated member of each pair explains each statement or calculation, and the explainer’s partner asks for clarification if anything is unclear, giving hints if necessary. After about five minutes, the instructor calls on one or two pairs to summarize their explanations up to a point in the text, and the students reverse roles within their pairs and continue from that point.

Analytical, critical, and creative thinking. The students may be asked to list assumptions, problems, errors, or ethical dilemmas in a case study or design; explain a technical concept in jargon-free terms; find the logical flaw in an argument; predict the outcome of an experiment or explain an observed outcome in terms of course concepts; or choose from among alternative answers or designs or models or strategies and justify the choice made. The more practice and feedback the students get in the types of thinking the instructor wants them to master, the more likely they are to develop the requisite skills.

Generating questions and summarizing. The students may be given a minute to come up with two good questions about the preceding lecture segment or to summarize the major points in the lecture just concluded.

Use cooperative learning

Cooperative learning (CL) is instruction that involves students working in teams to accomplish an assigned task and produce a final product (e.g., a problem solution, critical analysis, laboratory report, or process or product design), under conditions that include the following elements (Johnson et al. 1998):

1. Positive interdependence. Team members are obliged to rely on one another to achieve the goal. If any team members fail to do their part, everyone on the team suffers consequences.

2. Individual accountability. All team members are held accountable both for doing their share of the work and for understanding everything in the final product (not just the parts for which they were primarily responsible).

3. Face-to-face promotive interaction. Although some of the group work may be done individually, some must be done interactively, with team members providing mutual feedback and guidance, challenging one another, and working toward consensus.

4. Appropriate use of teamwork skills. Students are encouraged and helped to develop and exercise leadership, communication, conflict management, and decision-making skills.

5. Regular self-assessment of team functioning. Team members set goals, periodically assess how well they are working together, and identify changes they will make to function more effectively in the future.

An extensive body of research confirms the effectiveness of cooperative learning in higher education. Relative to students taught conventionally, cooperatively-taught students tend to exhibit better grades on common tests, greater persistence through graduation, better analytical, creative, and critical thinking skills, deeper understanding of learned material, greater intrinsic motivation to learn and achieve, better relationships with peers, more positive attitudes toward subject areas, lower levels of anxiety and stress, and higher self-esteem (Johnson et al. 1998; McKeachie 1999).

Formal cooperative learning is not trivial to implement, and instructors who simply put students to work in teams without addressing the five defining conditions of cooperative learning could be doing more harm than good. In particular, if team projects are carried out under conditions that do not ensure individual accountability, some students will inevitably get credit for work done by their more industrious and responsible teammates. The slackers learn little or nothing in the process, and the students who actually do the work justifiably resent both their teammates and the instructor.

The following guidelines suggest ways to realize the benefits and avoid the pitfalls of cooperative learning (Felder and Brent 1994; Johnson et al. 1998; Millis and Cottell 1998; NISE 1997).

Proceed gradually when using cooperative learning for the first time. Cooperative learning imposes a learning curve on both students and instructors. Instructors who have never used it might do well to try a single team project or assignment the first time, gradually increasing the amount of group work in subsequent course offerings as they gain experience and confidence.

Form teams of 3-4 students for out-of-class assignments. Teams of two may not generate a sufficient variety of ideas and approaches; teams of five or more are likely to leave at least one student out of the group process.

Instructor-formed teams generally work better than self-selected teams. Classroom research studies show that the most effective groups tend to be heterogeneous in ability and homogeneous in interests, with common blocks of time when they can meet outside class. It is also advisable not to allow underrepresented populations (e.g. racial minorities, or women in traditionally male fields like engineering) to be outnumbered in teams, especially during the first two years of college when students are most likely to lose confidence and drop out. When students self-select, these guidelines are often violated. One approach to team formation is to use completely random assignment to form practice teams, and then after the first class examination has been given, form new teams using the given guidelines.

Give more challenging assignments to teams than to individuals. If the students could just as easily complete assignments by themselves, the instructor is not realizing the full educational potential of cooperative learning and the students are likely to resent the additional time burden of having to meet with their groups. The level of challenge should not be raised by simply making the assignments longer, but by including more problems that call upon higher level thinking skills.

Help students learn how to work effectively in teams. Some instructors begin a course with instruction in teamwork skills and team-building exercises, while others prefer to wait for several weeks until the inevitable interpersonal conflicts begin to arise and then provide strategies for dealing with the problems. One technique is to collect anonymous comments about group work, describe one or two common problems in class (the most common one being team members who are not pulling their weight), and have the students brainstorm possible responses and select the best ones.

Take measures to provide positive interdependence. Methods include assigning different roles to group members (e.g. coordinator, checker, recorder, and group process monitor), rotating the roles periodically or for each assignment; providing one set of resources; requiring a single group product; and giving a small bonus on tests to groups in which the team average is above (say) 80%. Another powerful technique is jigsaw, in which each team member receives specialized training in one or another subtask of the assignment and must then contribute his or her expertise for the team product to receive top marks.

Impose individual accountability in as many ways as possible. The most common method is to give individual tests. In lecture courses, the course grade should be based primarily on the test results (e.g., 80% for the tests and 20% for team homework), so that students who manage to get a free ride on the homework will still do poorly in the course. Other techniques include calling randomly on individuals to present and explain team results; having each team member rate everyone’s contribution and combining the results with the team grade to determine individual assignment grades, and providing a last resort option of firing chronically uncooperative team members.
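
The grading arithmetic described above can be sketched in a few lines of Python. The 80/20 test/homework split follows the example in the text; the peer-rating multiplier is one common variant that is assumed here for illustration, not a formula the authors prescribe.

# Sketch of individual-accountability grading, assuming the 80/20
# test/homework split used as an example in the text. The peer-rating
# multiplier is an assumption, not the authors' prescribed formula.

def course_grade(test_avg: float, team_hw_avg: float,
                 peer_rating: float, team_avg_rating: float) -> float:
    """Weight individual tests at 80% and team homework at 20%,
    scaling the homework grade by a peer-rating factor."""
    factor = peer_rating / team_avg_rating     # >1 if rated above team average
    adjusted_hw = min(100.0, team_hw_avg * factor)
    return 0.8 * test_avg + 0.2 * adjusted_hw

# Example: strong tests, team homework of 90, rated slightly below
# the team's average contribution.
print(round(course_grade(85.0, 90.0, 3.5, 4.0), 1))  # 83.8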

Require teams to assess their performance regularly. At least two or three times during the semester, teams should be asked to respond to questions like "How well are we meeting our goals and expectations?" "What are we doing well?" "What needs improvement?" and "What (if anything) will we do differently next time?"

Do not assign course grades on a curve. If grades are curved, students have little incentive to help teammates and risk lowering their own final grades, while if an absolute grading system is used they have every incentive to help one another. If an instructor unintentionally gives a very difficult or unfair test on which the grades are abnormally low, points may be added to everyone’s score or a partial retest may be administered to bring the high mark or the average to a desired level.
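
The point-adding remedy for an abnormally difficult test can likewise be expressed directly; the scores and target average in this Python sketch are hypothetical.

# Sketch: adding points after an abnormally difficult test so the
# class average reaches a target value. Scores are shifted uniformly
# and capped at 100; no score is ever lowered.

def add_points(scores: list, target_avg: float) -> list:
    """Shift all scores up by the same amount so the mean reaches
    target_avg."""
    shift = max(0.0, target_avg - sum(scores) / len(scores))
    return [min(100.0, s + shift) for s in scores]

print(add_points([55.0, 60.0, 65.0], 75.0))  # [70.0, 75.0, 80.0]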

Survey the students after the first six weeks of a course. As a rule, the few students who dislike group work are quite vocal about it, while the many who see its benefits are quiet. Unless the students are surveyed during the course, the instructor might easily conclude from the complaints that the approach is failing and be tempted to abandon it.

Expect some students to be initially resistant or hostile to cooperative learning.

This point is crucial. Students sometimes react negatively when asked to work in teams for the first time. Bright students complain about being held back by their slower teammates; weaker or less assertive students complain about being discounted or ignored in group sessions; and resentments build when some team members fail to pull their weight. Instructors with experience know how to avoid most of the resistance and deal with the rest, but novices may become discouraged and revert to the traditional teacher-centered instructional paradigm, which is a loss both for them and for their students.

Cooperative learning is most likely to succeed if the instructor anticipates and understands student resistance: its origins, the forms it might take, and ways to defuse and eventually overcome it. Felder and Brent (1996) offer suggestions for helping students understand why they are being asked to work in groups and for responding to specific student complaints. These suggestions may not eliminate student resistance completely, but they generally keep it under control long enough for most students to start recognizing the benefits of working in teams.

Assessment and evaluation of teaching quality

Most institutions use only end-of-course student surveys to evaluate teaching quality. While student opinions are important and should be included in any assessment plan, meaningful evaluation of teaching must rely primarily on assessment of learning outcomes. Current trends in assessment reviewed by Ewell (1998) include shifting from standardized tests to performance-based assessments, from teaching-based models to learning-based models of student development, and from assessment as an add-on to more naturalistic approaches embedded in actual instructional delivery. Measures that may be used to obtain an accurate picture of students’ content knowledge and skills include tests, performances and exhibitions, project reports, learning logs and journals, metacognitive reflection, observation checklists, graphic organizers, interviews, and conferences (Burke, 1993).

A particularly effective learning assessment vehicle is the portfolio, a set of student products collected over time that provides a picture of the student’s growth and development. Panitz (1996) describes how portfolios can be used to assess an individual’s progress in a course or over an entire curriculum, to demonstrate specific competencies, or to assess the curriculum. Rogers and Williams (1999) describe a procedure to maintain portfolios on the World Wide Web.

Angelo & Cross (1993) outline a variety of classroom assessment techniques, all of which generate products suitable for inclusion in student portfolios. The devices they suggest include minute papers, concept maps, audiotaped and videotaped protocols (students reporting on their thinking processes as they solve problems), student-generated test questions, classroom opinion polls, course-related self-confidence surveys, interest/knowledge/skills checklists, and reactions to instruction.

Longitudinal study of the proposed instructional methods

In a study carried out at North Carolina State University, a cohort of students took five chemical engineering courses taught by the same instructor in five consecutive semesters. Active learning was used in all class sessions, and the students completed most of their homework assignments in cooperative learning teams. Both academic performance and student attitudes were assessed each semester for both the experimental cohort and a comparison cohort of students who proceeded through the traditionally-taught curriculum. Felder (1995, 1998) gives detailed descriptions of the instructional model and of the assessment procedures and results.

The experimental group entered the chemical engineering curriculum with credentials statistically indistinguishable from those of the comparison group and significantly outperformed the comparison group on a number of measures. Students in the experimental group generally earned higher course grades than comparison group students, even in chemical engineering courses that were not taught by the experimental course instructor. Comparison group students were roughly twice as likely to leave chemical engineering for any reason prior to graduation and almost three times as likely to drop out of college altogether. Anecdotal evidence strongly suggests that the experimental group outperformed the comparison group in developing skills in higher-level thinking, communication, and teamwork.

The attitudes of the two groups of students toward their education differed dramatically. Students in the experimental group gave significantly higher ratings to the quality of their course instruction, the student-friendliness of their academic environment, the level of peer support they enjoyed, and the quality of their investment in their chemical engineering education.

The value of TQM in improving classroom instruction

It is not difficult to find semantic links between teaching and total quality management. Almost every known strategy for teaching effectively cited in standard pedagogical references has counterparts on a list of TQM components compiled by Grandzol and Gershon (1997). Examples include writing instructional objectives (clarity of vision, strategic planning); student-centered instruction (customer focus, empowerment, driving out fear); collaborative or cooperative learning (adopting a new philosophy, teamwork); assessment (measurement, benchmarks, continuous improvement); and training and mentoring new faculty members (human resource development, employee training).

The question is, if effective teaching strategies are known and validated by extensive research (as they are), why not simply incorporate them into classroom instruction without an added layer of jargon? If all that is done is to choose a subset of TQM terms that map onto known effective teaching strategies and then apply the strategies in a single course—which is what most of the published studies in the education literature consist of—the TQM model adds no value. Perhaps more to the point, TQM is a collective strategy that has meaning only if it is agreed upon and implemented by the staff of an organization. Applying TQM terms to instruction in a single course by a single teacher may provide a good experience for the students, but it is not TQM.

In short, while improving the quality of classroom instruction is a worthwhile goal—arguably the most important goal that a university can adopt—there is no need to force-fit an industrial model or invent questionable analogies (e.g., students as "customers") to achieve it. TQM was developed by identifying problems with existing manufacturing practices and then applying a combination of sound economic and psychological principles to devise a better approach. Improving teaching requires identifying problems with existing academic practices and then applying a combination of sound educational and psychological principles to devise a better approach. Such approaches have already been devised. Why not just use them?

IMPROVING INSTITUTIONAL TEACHING PROGRAMS

The proper use of any of the instructional methods described in the preceding section improves the quality of learning that occurs in the classroom. If several of the methods are used in concert, the potential for improvement is all the greater. The quality of an institutional teaching program may therefore be improved by persuading as many faculty members as possible to use those methods in their classes and providing them with the training and support they will need to implement the methods successfully.

It would be nice if we could stop right there, but the problem is more complex. The presumption in everything just said is that both faculty members and administrators at the institution in question generally agree on a definition of "quality of learning" and on the importance of improving it. Unfortunately, this presumption rarely has a basis in fact. Much therefore remains to be said about how to improve an institutional teaching program (as opposed to teaching in a single class), including the potential role of total quality management.

As noted in the introduction, many campuses have experimented with TQM, provoking a great deal of faculty opposition in the process and having relatively little impact on what happens in most classrooms. The conflict between the TQM advocates and opponents reflects differences between the industrial culture where TQM was developed and the culture of the university. The conflict can easily turn what should be a united effort to improve the quality of education into a power struggle between faculty members and administrators. The consequence is that the introduction of TQM to the campus may work against the cause it was intended to promote.

It is not that there is anything wrong with quality management principles. We believe that they are firmly rooted in common sense and that systematically applying them is very likely to lead to improvements in university operations. However, undertaking the wholesale application of a paradigm developed for one culture—industry—to another culture—higher education—has pitfalls. In important ways, the two cultures are as different as automobiles are from students, and steps that may be feasible in one environment may be entirely inappropriate in the other. [Beaver (1994) makes this point tellingly. Some of the ideas we present in the next section draw on his observations.] Perhaps more to the point, the rhetoric of total quality management contains terms that are offensive to many faculty members, and their resentment of attempts to apply TQM language to their profession provokes fierce opposition to TQM-based strategies.

In the remainder of this section we review the cultural differences that give rise to the faculty opposition, and then suggest how the lessons of TQM may be applied to teaching program improvement in a manner much more likely to succeed.

Two different worlds

Every organization, be it a company, a corporate division, a university, a college, or an academic department, has both a stated mission, which is written for public consumption, and a true mission, which dictates how the organization allocates resources and rewards performance. The two missions may be the same or different. The working definition of "quality" within an organization is determined primarily by the organization’s true mission. The concept of the true mission is needed to explain the principal differences between the industrial and academic cultures that are related to quality management.

In industry, the true mission is relatively clear, and quality is relatively straightforward to define. In education, the true mission is complex and subject to endless debate, and quality is therefore almost impossible to define in an operationally useful manner.

Whatever the corporate mission statement may say, the true mission of a for-profit company is to maximize profits (more precisely, some measure of profitability). Setting aside altruistic objectives that may motivate individual company personnel, such goals as zero defects, customer satisfaction, staff empowerment, etc., are to the corporate mind simply means to the end of maximizing profits. "Quality" may be defined as any property of an industrial process or product that varies in a generally monotonic manner with profits. The goal of raising quality is therefore consistent with the mission of maximizing profits.
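
A rank correlation is one way to make "varies in a generally monotonic manner" operational, since it tests for monotonic rather than strictly linear association. A minimal sketch with invented quarterly data:

    # Invented quarterly data: a quality index and profits ($M).
    # Spearman's rho tests for a monotonic association.
    from scipy.stats import spearmanr

    quality = [0.82, 0.85, 0.87, 0.90, 0.91, 0.94]
    profits = [1.2, 1.5, 1.4, 1.9, 2.1, 2.4]

    rho, p = spearmanr(quality, profits)
    print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")
    # rho near +1 would support treating this quality measure as a
    # proxy that rises and falls with profitability.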

In education as in industry, the stated mission and the true mission may not coincide. The similarity ends there, however. The goals that constitute the educational mission of a university are extremely hard to pin down to everyone’s satisfaction. Is the goal to produce graduates who simply know a lot more than they did when they enrolled as freshmen? What is it that we want them to know? Do we wish to equip the students with the skills they will need to succeed as professionals? What skills would those be? Are they the same for all professions? Are we trying to produce "educated citizens"? Whose definition of "educated" will we adopt? Plato’s? Dewey’s? Allan Bloom’s? Is it our purpose to promote certain values in our graduates? Which ones?

Agreeing on educational goals is only the first step toward formulating an academic mission, however. In the modern university, teaching is just one of several important functions, the others being research, service to business and technology (e.g., through faculty consulting activities), and service to the community and society at large. The true mission of the university might involve maximizing research expenditures, tuition revenues, "productivity" (rate of production of graduates divided by faculty size), the institution’s ranking in U.S. News and World Report, national rankings of the football and basketball teams, and regional and national reputations of the undergraduate and graduate teaching programs. Many of these goals are unrelated and most of them compete for limited resources. Prioritizing them to arrive at a realistic teaching quality improvement program is a challenge unlike anything encountered in industry.

In industry, quality is relatively easy to assess. In education, even if a definition of quality can be formulated and agreed upon, devising a meaningful assessment process is a monumental task.

Quality control managers can easily count the number of television sets in a production run that malfunction, or the percentage of silicon dioxide films deposited on semiconductor wafers that fall outside pre-specified quality control limits, or the weekly volume of complaints about the promptness and effectiveness of repair service calls. The lower those values, the higher the quality of the process being assessed.
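
Each of those industrial measures reduces to counting how often a process falls outside its specification, a calculation that takes only a few lines. A minimal sketch with simulated film thicknesses and made-up control limits:

    # Simulated quality-control check: fraction of silicon dioxide
    # films whose thickness falls outside pre-specified limits.
    # All numbers are invented for illustration.
    import numpy as np

    rng = np.random.default_rng(seed=0)
    thickness_nm = rng.normal(loc=100.0, scale=3.0, size=500)

    lower, upper = 94.0, 106.0   # pre-specified control limits
    out_of_spec = (thickness_nm < lower) | (thickness_nm > upper)
    print(f"out-of-spec fraction: {out_of_spec.mean():.1%}")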

But what are the measures of quality in education? Assuming that the mission of a university includes the imparting of certain knowledge, skills, and (perhaps) values, a meaningful assessment process must include measuring the degree to which the students have acquired those attributes. Assessing knowledge is relatively straightforward, but methods for assessing skills are complex and time-consuming to administer, and valid means of assessing values do not exist.

In industry, the customer is relatively easy to identify and is always right, at least in principle. In education, those who might be identified as "customers" have contradictory needs and desires and may very well be completely wrong.

When an attempt is made to introduce TQM on a campus, the term "customer" probably provokes more faculty outrage than any other feature of the approach. Its use is taken as clear evidence that the proponents of the program do not understand the differences between an industrial organization and an educational institution.

This inference is understandable. If I manufacture automobiles, the customers are automobile buyers. If I produce semiconductor chips, the customers are the manufacturers of the products that use semiconductor chips. If I own a restaurant, the customers are the diners. If a significant number of my customers complain, it means that I am not doing an acceptable job, and unless I improve in a way that reduces the number of complaints, I will suffer negative consequences. Admittedly, the shareholders and/or the Board of Directors might also be considered my customers, but if the first group of customers is unhappy and I am operating in a competitive market, the second group will sooner or later also be unhappy.

If I am a faculty member, my "customers"—who include hirers of graduates, university administrators, governing boards, state legislatures, research funding agencies, parents, and students—want different and frequently contradictory things. Industry wants graduates who have good technical, communication, and teamwork skills and who can think critically and solve problems creatively. Administrators and governing boards want the university to have high national rankings (which are invariably based on research reputations), large amounts of external funding, and high "productivity," turning out as many graduates in as short a period of time as possible and at the lowest possible cost. Legislatures want the universities to be responsive to the taxpayers’ needs, which usually means having a strong but affordable undergraduate program. Funding agencies want results obtained quickly and cost-effectively. Parents want low tuition and graduation in four years or less. And then there are the students.

Students at a university want a bewildering variety of different and often contradictory things. Some want teaching that emphasizes the concrete and practical over the abstract and theoretical and that will prepare them for their chosen professions; others want a rigorous education that will prepare them to enter top graduate schools and then go on to research careers. Most dislike difficult homework assignments and examinations; a few welcome the challenge. Some like working in teams; others hate it. And so on.

In short, the "customers" of a university clearly cannot always be right, and they may sometimes be completely wrong. The goal of customer satisfaction that makes so much sense in a corporate environment consequently makes little sense at a university. It is little wonder that faculty members react negatively to the concept.

In industry, a clear chain of command usually exists, on paper and in fact. In education, a chain of command might exist on paper, but it is in fact relatively amorphous and nothing at all like its industrial counterpart.

Corporate executives who wish their subordinates to do things differently have both carrots and sticks at their disposal. Employees who make substantial contributions to meeting the goals of the company or of their superiors may be awarded bonuses, raises, and promotions. Those who fail to make such contributions may (leaving aside considerations related to unions) find themselves unemployed or relegated to undesirable positions as a consequence of their insubordination. For both of these reasons, if the CEO or the Board of Directors of a company decides that (for example) a TQM policy will be implemented, the policy is implemented, and staff members who fail to go along with it place themselves at risk.

Insubordination is not part of the normal vocabulary of administration-faculty relations. Administrators may make requests but they simply do not give orders to professors, and they have very little power to compel acceptance of their requests. They may award or deny merit raises to noncompliant faculty members but there is not much else they can do, especially if the faculty members are tenured. (Tenure has no counterpart in industry.) If they ask professors to do something that requires a substantial expenditure of time and/or effort—such as undertaking a quality-based teaching improvement program—they must somehow make a convincing case that doing it is in the professors’ best interests. Considering the low priority of teaching in most academic reward systems, that case can be extremely difficult to make.

Toward an effective institutional teaching improvement program

We have so far spoken only of changes in teaching methods, but improvements in instructional programs may also involve subject integration, just-in-time instruction, writing across the curriculum, or any of a variety of other non-traditional approaches that have been found to improve learning. In the final analysis, however, the quality of a teaching program is primarily related to the quality of the instruction that takes place in individual classrooms. For the new curricula and instructional methods to have the desired impact, a reasonable percentage of the faculty must participate willingly and competently in both their delivery and their assessment. If they do not, the curriculum structure and any other educational reforms will be largely irrelevant in the long run.

Most faculties have enough members who are sufficiently dedicated to teaching to participate voluntarily in pilot studies of new instructional programs, with minimal expectation of tangible reward. As many administrators have recently discovered, however, attracting and keeping enough faculty volunteers for a full-scale implementation of a new teaching program can be difficult or impossible, particularly if their participation is an add-on to all their other responsibilities and does not count toward tenure and promotion.

Administrators who wish to make major improvements in the quality of their teaching programs should therefore provide incentives for faculty members to participate in the new programs, such as salary supplements, travel or equipment funds, or release from service responsibilities. They should also assure the faculty members who carry the principal burden of teaching and assessment in the new programs that they will have the same opportunities for tenure, promotion, and merit raises as their more research-oriented colleagues now enjoy (Boyer 1990; Glassick et al. 1997; Felder 1994b). Unless this commitment is made and honored, attempts to implement a large-scale teaching improvement program are likely to consume an immense amount of time and effort and accomplish relatively little in the end.

Here, then, is our view of what can be done to improve the instructional program at a university. Each step requires agreement of the faculty members who must implement it and the administrators who must provide the necessary resources.

1. Faculty members and administrators define the knowledge, skills, and values that the graduates of the program should have.

2. With the assistance of experts in pedagogy and learning assessment, the faculty defines the instructional methods most likely to lead to the acquisition of the desired attributes, selects the methods needed to assess the effectiveness of the instruction, and estimates the resources (including provisions for faculty development) needed to implement both the instruction and the assessment.

3. The administration commits to provide both the necessary resources to initiate and sustain the program and appropriate incentives for faculty members to participate.

4. The faculty and administration formulate a detailed implementation plan.

5. The faculty implements the plan.

6. The faculty and administration assess the results and modify the plan as necessary to move closer to the desired outcomes.

Rogers and Sando (1996) present models for teaching program assessment that include recommendations for all but Step 3 of this list.
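
Steps 4 through 6 amount to an implement-assess-revise feedback loop. The sketch below is our own schematic rendering of that loop, not an algorithm drawn from the text; every function name, value, and stub is a placeholder:

    # Schematic rendering of steps 4-6 as a feedback loop.
    # All names, values, and stubs are placeholders.

    def implement(plan):
        """Step 5: the faculty delivers instruction per the plan."""

    def assess(plan):
        """Step 6: measure how closely outcomes match the desired
        knowledge, skills, and values. Stub score for illustration."""
        return 0.75

    def revise(plan, score):
        """Step 6: faculty and administration modify the plan."""
        return plan

    plan = {"methods": ["cooperative learning"],      # step 4 output
            "assessments": ["portfolios", "exams"]}
    TARGET = 0.90                                     # placeholder criterion

    for cycle in range(3):                            # e.g., annual reviews
        implement(plan)
        score = assess(plan)
        print(f"cycle {cycle + 1}: score {score:.2f}")
        if score >= TARGET:
            break
        plan = revise(plan, score)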

This six-step plan sounds like a TQM model, and of course it is. It can be put into effect perfectly well, however, in the context of the university culture, without ever mentioning customers, empowerment, bottom-up management, or any other TQM term whose applicability to education is questionable. Consensus on all of the issues involved in educational reform might or might not be achieved, but at least the dialogue would focus on the real issues rather than semantic red herrings.

Our recommendations for improving teaching quality finally come down to this. Instructors who wish to improve teaching in a course should consult the literature, see which instructional methods have been shown to work, and implement those with which they feel most comfortable. Total quality management need not enter the picture at all. An administration wishing to improve the quality of its instructional program should first commit to providing the necessary resources and incentives for faculty participation. Then, don’t talk about TQM—just do it.

REFERENCES

Angelo, T.A., and K.P. Cross. 1993. Classroom assessment techniques: A handbook for college teachers, 2d ed. San Francisco: Jossey-Bass.

Beaver, W. 1994. Is TQM appropriate for the classroom? College Teaching 42, no. 3:111-114.

Bellamy, L., D. Evans, D. Linder, B. McNeill, and G. Raupp. 1994. Active learning, team and quality management principles in the engineering classroom. Proceedings of the 1994 Annual Meeting of the American Society for Engineering Education. Washington, DC: ASEE.

Bloom, B.S. 1984. Taxonomy of educational objectives. Vol. 1, Cognitive domain. New York: Longman.

Bonwell, C.C., and J.A. Eison. 1991. Active learning: Creating excitement in the classroom. ASHE-ERIC Higher Education Report No. 1. Washington, DC: George Washington University.

Boyer, E.L. 1990. Scholarship reconsidered: Priorities of the professoriate. Princeton, NJ: Carnegie Foundation for the Advancement of Teaching.

Brent, R., and R.M. Felder. 1992. Writing assignments: Pathways to connections, clarity, creativity. College Teaching 40, no. 2:43-47.

Burke, K. 1993. The mindful school: How to assess thoughtful outcomes. Palatine, IL: IRI/Skylight Publishing.

Campbell, W.E., and K.A. Smith (Eds.). 1997. New paradigms for college teaching. Edina, MN: Interaction Book Company.

Deming, W.E. 1994. The new economics, 2d ed. Cambridge, MA: MIT Center for Advanced Engineering Studies. Cited in Latzko, 1997.

Ewell, P.T. 1998. National trends in assessing student learning. J. Engr. Education 87, no. 2:107-113.

Felder, R.M. 1994a. Any questions? Chem. Engr. Education 28, no. 3:174-175.

—. 1994b. The myth of the superhuman professor. J. Engr. Education 82, no. 2:105-110.

—. 1995. A longitudinal study of engineering student performance and retention. IV. Instructional methods and student responses to them. J. Engr. Education 84, no. 4:361-367.

—, and R. Brent. 1994. Cooperative learning in technical courses: Procedures, pitfalls, and payoffs. ERIC Document Reproduction Service, ED 377038.

—, and R. Brent. 1996. Navigating the bumpy road to student-centered instruction. College Teaching 44, no. 2:43-47.

—, and R. Brent. 1997. Speaking objectively. Chem. Engr. Education 31, no. 3:178-179.

—, G.N. Felder, and E.J. Dietz. 1998. A longitudinal study of engineering student performance and retention. V. Comparisons with traditionally-taught students. J. Engr. Education 87, no. 4:469-480.

Glassick, C.E., M.T. Huber, and G.I. Maeroff. 1997. Scholarship assessed: Evaluation of the professoriate. San Francisco: Jossey-Bass.

Grandzol, J.R., and M. Gershon. 1997. Which TQM practices really matter: An empirical investigation. Quality Management Journal 97, no. 4:43-59.

Gronlund, N.E. 1991. How to write and use instructional objectives, 4th ed. New York: Macmillan.

Jensen, P.A., and J.K. Robinson. 1995. Deming’s quality principles applied to a large lecture course. J. Engr. Education 84, no. 1:45-50.

Johnson, D.W., R.T. Johnson, and K.A. Smith. 1998. Active learning: Cooperation in the college classroom, 2d ed. Edina, MN: Interaction Press.

Latzko, W.J. 1997. Modeling the method: The Deming classroom. Quality Management Journal 5, no. 5:46-55.

McKeachie, W. 1999. Teaching tips, 10th ed. Boston: Houghton Mifflin.

Meyers, C., and T.B. Jones. 1993. Promoting active learning. San Francisco: Jossey-Bass.

Millis, B.J., and P.G. Cottell, Jr. 1998. Cooperative learning for higher education faculty. Phoenix: Oryx Press.

NISE (National Institute for Science Education). 1997. Collaborative learning: Small group learning page. <http://www.wcer.wisc.edu/nise/cl1/>

Panitz, B. 1996. The student portfolio: A powerful assessment tool. ASEE Prism 5, no. 7:24-29.

Rogers, G.M., and J.K. Sando. 1996. Stepping ahead: An assessment plan development guide. Terre Haute, IN: Rose-Hulman Institute of Technology.

Rogers, G.M., and J. Williams. 1999. Building a better portfolio. ASEE Prism 8, no. 5:30-32.

Shelnutt, J.W., and K. Buch. 1996. Using total quality principles for strategic planning and curriculum revision. J. Engr. Education 85, no. 3:201-207.

Shuman, L.J., C.J. Atman, and H. Wolfe. 1996. Applying TQM in the IE classroom: The switch to active learning. Proceedings of the 1996 Annual Meeting of the American Society for Engineering Education. Washington, DC: ASEE.

Stedinger, J.R. 1996. Lessons from using TQM in the classroom. J. Engr. Education 85, no. 2:151-156.

Summers, D.C.S. 1995. TQM education: Parallels between industry and education. Proceedings of the 1995 Annual Meeting of the American Society for Engineering Education. Washington, DC: ASEE.