This article was downloaded by: [112.209.40.91] On: 23 February 2013, At: 00:13. Publisher: Routledge. Informa Ltd Registered in England and Wales, Registered Number: 1072954. Registered office: Mortimer House, 37-41 Mortimer Street, London W1T 3JH, UK
Communication Education
Publication details, including instructions for authors and subscription information:
http://www.tandfonline.com/loi/rced20
Assessment of Oral Communication: A Major Review of the Historical Development and Trends in the Movement from 1975 to 2009
Sherwyn Morreale, Philip Backlund, Ellen Hay & Michael Moore
Version of record first published: 09 Mar 2011.
To cite this article: Sherwyn Morreale, Philip Backlund, Ellen Hay & Michael Moore (2011): Assessment of Oral Communication: A Major Review of the Historical Development and Trends in the Movement from 1975 to 2009, Communication Education, 60:2, 255-278
To link to this article: http://dx.doi.org/10.1080/03634523.2010.516395
Full terms and conditions of use: http://www.tandfonline.com/page/terms-and-conditions
This article may be used for research, teaching, and private study purposes. Any substantial or systematic reproduction, redistribution, reselling, loan, sub-licensing, systematic supply, or distribution in any form to anyone is expressly forbidden.
The publisher does not give any warranty express or implied or make any representation that the contents will be complete or accurate or up to date. The accuracy of any instructions, formulae, and drug doses should be independently verified with primary sources. The publisher shall not be liable for any loss, actions, claims, proceedings, demand, or costs or damages whatsoever or howsoever caused arising directly or indirectly in connection with or arising out of the use of this material.
Assessment of Oral Communication: A Major Review of the Historical Development and Trends in the Movement from 1975 to 2009
Sherwyn Morreale, Philip Backlund, Ellen Hay & Michael Moore
This comprehensive review of the assessment of oral communication in the commu-
nication discipline is both descriptive and empirical in nature. First, some background on
the topic of communication assessment is provided. Following the descriptive back-
ground, we present an empirical analysis of academic papers, research studies, and books
about assessing communication, all of which were presented or published from 1975 to
2009. We outline the results of content and thematic analyses of a database of 558
citations from that time period, including 434 national convention presentations, 89
journal articles, and 35 other extant books and publications. Three main themes and
eight subthemes are identified in the database, and trends evident in the resulting data
are considered. The study concludes with a discussion of the trends and overarching
themes gleaned from the research efforts, and the authors’ recommendations of best
practices for how to conduct oral communication assessment.
Keywords: Assessment; Evaluation; Communication Assessment; Communication Skills
Communication Education, Vol. 60, No. 2, April 2011, pp. 255-278
ISSN 0363-4523 (print)/ISSN 1479-5795 (online) © 2011 National Communication Association
DOI: 10.1080/03634523.2010.516395
in high school or college, create barriers to successful performance at the next level.
Student learning outcomes data are essential to better understand what is working and
what is not, to identify curricular and pedagogical weaknesses, and to use this
information to improve performance. (Kuh & Ikenberry, 2009, p. 1)
When assessment emerged as a national initiative, early mandates to assess student learning
were often perceived as an inappropriate expectation imposed on faculty by college
administrators and legislators external to their campuses. Many faculty members firmly believed their current
practices for grading knowledge and performance were quite sufficient. Times and
attitudes evolved, and assessment is now institutionalized on the majority of
American campuses (Ewell, 2009). As we will detail in the following report,
assessment in higher education and in the communication discipline has developed
considerably over the last 35 years. As a policy and practice in higher education,
assessment likely will be with us for years to come. Assessment is not going away,
nor should it.
Today a wide range of organizations external to campuses (regional accrediting
bodies, legislatures, state boards of education, and others) endorse and mandate
assessment. They require assessment as part of their accountability processes to
ensure faculty at institutions of higher education are doing their jobs well. These two
processes, assessment and accountability, may be a source of confusion for some,
because both are often collectively referred to as assessment. In reality, assessment
and accountability are two different processes but with one potentially embedded in
the other. Put simply, when we assess our own performance or that of our students, it
is assessment; when others assess our performance or that of our department,
program, or institution, it is accountability (Frye, 2006).
Another persistent source of confusion about these two processes is the language
used in their discussion and application. Precision of language about assessment
could support greater clarity of practice and perhaps more enthusiastic support and
participation. To that end and to inform the report that follows, Table 1 presents
some of the more commonly used terms in the assessment movement.
The present study examines the development and evolution of the assessment
movement in the communication discipline by providing a comprehensive overview
of historical trends in related scholarship over the last 35 years. The goal of this
research study is to serve the needs of scholars, teachers, and administrators who are
committed to engaging in oral communication assessment effectively, both in and
outside the communication discipline. We begin with some descriptive background
for this study and then outline the method and results of gathering and analyzing
data about assessing communication from national convention programs, educational
journals, and other books and publications.
Background to the Present Study
Some understanding of assessment in general, and in the communication discipline
in particular, is in order. A brief sketch of the history of the assessment movement
includes references to communication’s role therein. Then we detail the National
Table 1 Common Terms Used in the Assessment Initiative
Assessment and accountability: In general, when we assess our own performance, it is considered assessment; when others assess our performance, it is considered accountability. That is, assessment is a set of initiatives we take to monitor the results of our actions and to improve ourselves; accountability is a set of initiatives others take to monitor the results of our actions and to penalize or reward us accordingly.

Assessment: The systematic process of determining educational objectives, gathering, analyzing, and using information about student learning and learning outcomes to make decisions about programs, individual student progress, or accountability.

Measurement: The systematic investigation of people's attributes or behaviors.

Benchmark: A criterion-referenced objective standard that is used for comparative purposes. A program can use its own data as a baseline or benchmark against which to compare future performance. It can also use data from another program as a benchmark.

Direct assessment: Direct assessment of student learning requires students to display their knowledge and skills as they respond to or are evaluated using an assessment instrument. Objective tests, essays, presentations, and classroom assignments all meet this criterion.

Indirect assessment: Indirect assessments such as surveys and interviews ask students to reflect on their learning rather than demonstrate it.

Formative assessment: An assessment that is used for improvement (on an individual or program level) rather than for making final decisions or for accountability. It is also used to provide feedback to improve teaching, learning, and the curricula, as well as to identify students' strengths and weaknesses.

Summative assessment: A sum total or final product measure of achievement at the end of an instructional unit or course of study.

Performance-based assessment: An assessment technique involving the gathering of data through systematic observation of a behavior or process and evaluating these data based on a clearly articulated set of performance criteria to serve as the basis for evaluative judgments. Evaluating speeches is a good example of this type of assessment.

Evaluation: This term broadly covers all potential investigations of institutional functioning, based on formative, summative, or performance-based assessment processes. Evaluation may include assessment of learning, but it might also include nonlearning-centered investigations (e.g., satisfaction with instructional facilities).

Objectives: The specific knowledge, skills, or attitudes that students are expected to achieve through their college experience (e.g., any expected/intended student outcomes).

Outcomes: The results of instruction; the specific knowledge, skills, or developmental attributes that students actually develop through their college experience (viz., the assessment results).

Rubric: A scoring tool that lists the criteria for an assignment or task, or "what counts" (e.g., purpose, organization, and mechanics) in a piece of writing. A rubric also articulates gradations of quality for each criterion it contains, from excellent to poor.

Norm: An interpretation of scores on a measure that focuses on the rank ordering of students, not their performance in relation to criteria.

Value-added: The effects educational providers have had on students during their programs of study; the impact of participating in higher education on student learning and development above that which would have occurred through natural maturation. Value-added factors are usually measured as longitudinal change or difference between pretest and posttest.
Communication Association’s involvement in assessing communication programs
and student learning outcomes. We compare assessment of communication to that in
other disciplines and describe the current status of communication assessment
nationally.
General Historical Background
In the 1960s, the poor results of what was termed the choice-based curriculum led to the
calls for undergraduate curriculum reform in the 1980s. During that time, many students
were deemed not adequately prepared for college, and students graduating from
college sometimes lacked skills necessary for workplace success, a condition that
some might say has not changed sufficiently yet. Over 20 national reports on skills
assessment were published by a variety of associations and agencies between 1983 and
1989 (Hay, 1989, 1992).
As the 20th century drew to a close, interest in assessment continued to increase.
The Goals 2000: Educate America Act and President Bush's No Child Left Behind program
focused on improving education with an emphasis on assessing learning outcomes.
The Goals 2000 Act codified into law six national education goals developed in 1989
and added two goals to encourage parental participation and the professional
development of educators (‘‘Clinton intends,’’ 1993). The national goal on literacy
and lifelong learning was of particular importance to communication educators:
"The proportion of college graduates who demonstrate an advanced ability to think
critically, communicate effectively, and solve problems will increase substantially"
(Lieb, 1994, p. 1). Even though the "communicate effectively" portion of the objective
might not have received the full attention some communication educators believed it
deserved, it does provide a national rationale for assessing communication education.
National Communication Association (NCA) Assessment Initiatives
NCA, formerly known as the Speech Communication Association, has actively
developed a national assessment agenda since the 1970s. The Speech Communication
Association Task Force on Assessment and Testing, formed in 1978, was charged with
gathering, analyzing, and disseminating information about the testing of speech
communication skills (Backlund & Morreale, 1994). This task force has evolved into
the NCA Communication Assessment Division (CAD), which addresses activities
such as defining communication skills and competencies, publishing summaries of
assessment procedures and instruments, publishing standards for effective commu-
nication programs, and developing guidelines for program review. A significant
portion of the research on assessment supported by NCA has focused on two inter-
related areas: communication programs and student learning outcomes.
Communication program assessment. The purpose of program assessment is con-
tinuous improvement of departmental educational efforts through self-evaluation.
Such assessment provides an opportunity for department members to demonstrate
the unique contributions of their departments to administrators and to fend off
threats based upon fiscal constraints or political motivations. Program assessment
requires department members to examine curricula, educational experiences, and
student learning. Evaluation may be part of a campus-wide effort (Backlund, Hay,
Harper, & Williams, 1990) and/or part of a departmental initiative (Makay, 1997;
Shelton, Lane, & Waldhart, 1999). Work in this area is also associated increasingly
with mandates from state agencies and accreditation boards. NCA has a set of
Guidelines for Developing and Assessing Undergraduate Programs in Communication
available on the association’s website (NCA, 2008).
Communication learning outcomes assessment. The purpose of learning outcomes
assessment is to examine actual student learning in any course or other arena in
which teaching and learning may occur. Faculty ultimately must "own" assessment of
student learning; they are the ones who write student-focused learning objectives,
select appropriate instruments of assessment, collect, analyze, and interpret the data,
and then use the data for course and program improvement. Developing student-
learning outcomes in communication begins with defining communication compe-
tence as it relates to the desired educational outcomes of the instructional program.
Several publications, including the published proceedings of NCA’s national
assessment conference in 1993, discuss various approaches to examining commu-
and categories, and produce a comprehensive description of how communication
assessment has been approached over the years.
Content Analysis
First, a content analysis process was conducted to identify and count presentations
at conventions of the NCA and scholarly articles on communication assessment
in leading, national communication education journals. Additionally, a list was
developed of extant publications, such as books and monographs that address
assessment of communication. More specifically, for the time period from 1975 to
2009, we reviewed events and presentations listed in NCA and Speech Communica-
tion Association (SCA) convention programs. For the same time period, we
examined the tables of contents of Communication Education, the Association for
Communication Administration Bulletin, and Communication Teacher (formerly
Speech Communication Teacher). Some past issues of Speech Communication Teacher
were not included because they are not available in any electronic database. The tables
of contents for journals published by the four regional communication associations
also were reviewed but not included because most were not available electronically.
We also searched an array of databases and the Internet for books and other nonserial
publications, such as conference proceedings within and outside of the communica-
tion discipline. Assessment, evaluation, assessing, and evaluating were the initial
keywords used in this search. These keywords were adjusted during the data gathering
process based on the results of queries conducted in various databases. The results of
the content analysis processes were recorded in RefWorks, a computerized
bibliographic database system that is housed on the campus of one of the authors,
but was electronically accessible to all authors involved in this study. The resulting
database contained a total of 558 items, including 434 convention presentations, 89
journal articles, and 35 other books and nonserial publications. After developing the
database, we subjected the items in the database to thematic analysis using a
qualitative coding and categorizing process (Saldana, 2009).
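To make the bookkeeping behind these counts concrete, the sketch below shows one way a bibliographic database of this kind could be tallied by item type and by 5-year period. It is an illustrative sketch only; the field names, record structure, and toy entries are assumptions, not the authors' actual RefWorks data.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Item:
    """One bibliographic record in the assessment database (illustrative fields only)."""
    title: str
    year: int        # presentation or publication year, 1975-2009
    item_type: str   # "convention presentation", "journal article", or "book/other"

def tally_by_type(items):
    """Count records per source type (e.g., the reported 434/89/35 breakdown)."""
    return Counter(item.item_type for item in items)

def tally_by_period(items, start=1975, width=5):
    """Group records into 5-year periods such as 1975-1979, 1980-1984, and so on."""
    periods = Counter()
    for item in items:
        lo = start + ((item.year - start) // width) * width
        periods[f"{lo}-{lo + width - 1}"] += 1
    return periods

# Toy example with three hypothetical records:
db = [
    Item("Assessing the basic course", 1992, "convention presentation"),
    Item("Guidelines for program review", 1997, "journal article"),
    Item("Communication assessment handbook", 2006, "book/other"),
]
print(tally_by_type(db))    # counts per source type
print(tally_by_period(db))  # counts per 5-year period
```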
Thematic Analysis
Thematic analysis of the items in the database, including convention presentations,
journal articles, and other extant publications, was used to determine trends and
analyze patterns in the evolution of oral communication assessment from 1975 to
2009. The first step in the thematic analysis process involved identifying the general
theme or main focus of each item. Each of the four researchers in this study worked
independently, engaging in a first cycle coding process of all the items in the database
to develop a preliminary list of main themes. After comparing the four sets of themes,
three broad categories of themes emerged, focused on the why, what, and how of
assessing communication.
Next, the authors collaborated to identify subthemes for each of the three
categories and a description of each subtheme to be used in a second cycle coding
process. The goal of this collaboration was to agree on a set of subthemes that are
comprehensive and mutually exclusive. If the set of subthemes was comprehensive,
then each of the presentations, articles, and publications in the database could be
assigned to one of the themes and subthemes. If the subthemes were mutually
exclusive, it would improve the likelihood that each item would clearly fall into one
subtheme rather than another. According to Saldana (2009), the goal of this type of
coding and categorizing process is to organize and group similarly coded data into
categories or ‘‘families’’ (p. 8) because they share some characteristics. Table 2 presents
a description of the three categories or main themes and their associated subthemes.
A pilot test of these subthemes was conducted to ensure their viability before
engaging in second cycle coding of the entire database of articles, presentations, and
publications. All four raters categorized the same 32 items from the database.
Table 2 Themes and Subthemes Used for the Thematic Analysis Process
Theme 1: What is communication assessment, and why do we do it? This category of bibliographic items includes theoretical issues, fundamentals of oral communication assessment, and reflections about the oral communication process. Items also focus on why we engage in the communication process.

Subtheme A: General overview. These items discuss basic assessment concepts and assessment "language," and they reflect upon and provide a sense of where the movement is at or could be headed.

Subtheme B: Rationale. These items explain why we do communication assessment.

Theme 2: What is assessed? This category of bibliographic items includes the qualities, knowledge, abilities, and dispositions that are commonly assessed. These items focus more on what should be considered in the assessment process rather than how to do the assessment.

Subtheme C: Student learning outcomes. These items aid in defining what could and should be assessed. The items identify typical aspects of student learning that are or should be assessed.

Subtheme D: Program/departmental evaluation. These items focus on evaluation that occurs at the unit or programmatic level, and provide guidance in what qualities and procedures to consider in such a review or evaluation.

Theme 3: How is it assessed? This category of bibliographic items considers the "how to's" of specific assessment practices and processes.

Subtheme E: Assessment guidelines and frameworks. These items focus on how to develop and organize assessment efforts; they also explain how departments can gather and analyze assessment data.

Subtheme F: Assessment in specific contexts and courses. These items focus on assessment practices and processes with an emphasis on how to assess specific knowledge sets, behavioral skills, and dispositions across a range of situations and contexts (e.g., interpersonal, group, public, organizational, K-12).

Subtheme G: Assessment strategies and techniques. These items focus on data-gathering strategies (e.g., portfolio, survey, behavioral coding) and their use in both classroom and nonclassroom contexts (e.g., applied situations, communication centers and labs).

Subtheme H: Assessment instruments. These items focus on assessment instruments/measures, including the criteria for evaluating various assessment instruments.
Cronbach’s alpha was used to calculate the consistency of their ratings, and a
coefficient of .862 was achieved. Cronbach’s alpha coefficient is a measure of internal
consistency reliability, and it is useful for understanding the extent to which the
ratings from a group of judges are consistent (Stemler, 2004).
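As a point of reference, not drawn from the article itself, the standard formula for Cronbach's alpha, treating the k raters as the "items" of a scale, is

\alpha = \frac{k}{k-1}\left(1 - \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right)

where \sigma^{2}_{Y_i} is the variance of rater i's scores across the rated items and \sigma^{2}_{X} is the variance of the summed scores. Values near 1 indicate that the raters' scores vary together rather than idiosyncratically, which is how a coefficient such as .862 is read here.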
Since a reliable and consistent set of subthemes emerged in the pilot test,
Cronbach’s alpha was calculated to determine the two most consistent and reliable
pairs of coders who would engage in second cycle coding and work as two
independent teams to code half of the entire database of items. One pair of coders
achieved a reliability coefficient of .90 and the second pair a coefficient of .75.
The two teams then each took half of the database and used the set of subthemes in
Table 2 to code their items. Any items about which the two coders in a pair disagreed
were coded by a third coder in order to determine the subtheme for that item. The
results of content and thematic analysis are presented next followed by a discussion of
trends and overarching themes evident in the results.
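A minimal sketch of the adjudication rule just described, assuming the codes are simply the subtheme letters A through H; the function and names are hypothetical and only illustrate the logic, they are not taken from the study.

```python
def resolve_subtheme(code_1: str, code_2: str, third_coder) -> str:
    """Return the final subtheme for one item.

    code_1, code_2 -- subtheme letters ("A"-"H") assigned by the paired coders
    third_coder    -- callable returning the third coder's subtheme letter,
                      consulted only when the pair disagrees
    """
    if code_1 == code_2:
        return code_1
    return third_coder()

# Example: the pair disagrees ("C" vs. "E"), so the third coder's "C" stands.
print(resolve_subtheme("C", "E", third_coder=lambda: "C"))  # -> C
```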
Results
The results of the content analysis and coding of the NCA/SCA convention programs,
the national communication journals, and other books and publications provide a
detailed picture of how communication assessment has been approached
over the years. Patterns, in 5-year periods extending from 1975 to 2009, became
evident when the items in the database were coded into subthemes. In the appendix
to this article, full citations are provided for the journal references and books that
were identified and coded for this study. They are organized by theme and subtheme
to facilitate the reader’s ability to tie specific studies or books to a particular topic or
subtheme. A list of the 434 coded convention papers is available directly from the
authors of this study.
To interpret the results now presented in Tables 3-6, the eight subthemes, based on
subtheme letter (e.g., A, B, C), are briefly listed again here:
Theme One: What Is Communication Assessment, and Why Do We Do It?