Formative Assessments:
What, Why, and How
Learn what formative assessments are and why you should be doing them. Get tips for creating
your own. Understand how to use them properly to measure students’ performance against
learning outcomes, document students’ growth, and improve your teaching. Use the Impromptu
Formative Assessments and the collection of 95 Formative Assessment Options to get started!
KDP New Teacher Advocate • Winter 2014
By Marisa T. Cohen
FORMATIVE ASSESSMENT
Feedback as a Means of Formative Assessment
Many people view assessment as a one-time event that occurs at the culmination of a learning experience. However, it is better to conceptualize assessment as an ongoing process that guides the learning and teaching relationship so that all players involved gain valuable information about themselves and the knowledge they hold.
Types of Assessment

There are two main types of assessment: summative and formative. Summative assessment occurs after instruction in the form of a multiple-choice test or final exam that indicates the level of knowledge that the student has attained (Woolfolk, 2013). Formative assessment, on the other hand, occurs both before and during instruction. Its purpose is to guide the teacher in planning and preparing the lesson and to improve student learning.

It is not the assessments themselves that are formative or summative, but how they are used. Formative assessment can include multiple-choice exams, but instead of being given at the end of the unit, they are used before or during the unit to gather information about what the student
already knows. They are then used by the teacher to further mold lesson planning and to drive the learning process forward.

Assessment as a Garden

Shirley Clarke (2001) used a beautiful analogy of the school as a garden and the students as the plants. Summative assessment would simply involve measuring your plants. While it is certainly interesting to compare and analyze the measurements, it may not affect the future growth of these plants. Formative assessment deals with the process of taking care of these plants, by considering how to feed and water them so that you are tending to their needs, promoting their growth, and creating a beautiful garden. Formative assessment can tell you how students are benefitting from instruction as it is taking place and, in turn, help you adjust the lessons accordingly.

Components of Formative Assessment

For formative assessment to be beneficial, it is imperative to include the class in the learning process by providing them with a great deal of feedback. You also must spend time reflecting on what you are learning about your own teaching from the students’ feedback to you.

1. Provide detailed feedback to guide student learning. The more detailed the feedback you provide, the more effective it will be. Assigning a number or letter grade alone is not useful because it does not communicate to the students what they know or the areas in which they can improve. Rather, providing them with clear-cut comments in the margins of the assignment will enable them to edit their work and learn something about their skills in the process. Feedback also should give students some sense of how they are progressing toward the goal and what is still needed to reach it. You can even incorporate feedback in multiple-choice exams by having an open discussion of why certain answers are better than others. This creates a dialogue between teachers and students that can be used to clarify any misunderstandings.

2. Use students’ feedback to learn about your teaching. Another component of using formative assessment in an effective and beneficial way is to use students’ feedback to alter your own teaching. This requires a certain amount of flexibility, because the teacher must be willing to change her approach based on what the students already know. In younger grades, many teachers use graphic organizers such as a K-W-L chart to assess students’ preexisting knowledge. This involves filling in what the students already Know and what they Want to know prior to the lesson. The teacher then reevaluates his or her approach to the unit before proceeding to helping them Learn. Teachers also may consider using a pretest. This gives the teacher a sense of what the students already know and reveals any misconceptions the students hold regarding the information to be taught.

The term assessment is used often in education and usually has a negative connotation. It is important to learn effective ways to use assessment in the classroom and to understand how to employ formative assessment to guide your teaching. Also, it is imperative to stress to students that assessment should not be considered a punishment, but rather a way to evaluate what they have mastered, with the hope of better tailoring learning to their unique needs.

References

Clarke, S. (2001). Unlocking formative assessment: Practical strategies for enhancing pupils’ learning in the primary classroom. London, UK: Hodder & Stoughton.

Woolfolk, A. E. (2013). Educational psychology (12th ed.). Upper Saddle River, NJ: Pearson Education.

Dr. Cohen is an Assistant Professor of Psychology at St. Francis College in Brooklyn, New York. She teaches educational psychology, general psychology, and experimental courses. Her interests include examining methods to teach students content-specific terminology in science and determining the ability of students to assess their own knowledge, self-regulate, and adequately prepare for exams.
Help for employing formative assessment:
West Virginia Department of Education List of formative assessment types with links and examples bit.ly/WVaDOE6
Public Schools of North Carolina Vignettes and podcasts of formative assessment in practice bit.ly/NCassess
Formative Assessment Strategies Easy ideas to use for formative assessment bit.ly/FAStrat
10 Tips for Creating Formative Assessments By Bill Ferriter on Center for Teaching Quality Blog (http://bit.ly/10Tips4FormAss)
Here are ten tips taken from Common Formative Assessment: A Toolkit for PLCs at Work by Kim Bailey
and Chris Jakicic that can strengthen your assessment practices.
1. Remember that getting information quickly and easily is essential. Assessment data is only
valuable if you are actually willing and able to collect it and you can act on it in a timely manner.
That simple truth should fundamentally change the way that you think about assessments.
2. Write your assessments and scoring rubrics together even if that means you initially deliver fewer common assessments. Collaborative conversations about what to assess, how to assess, and what mastery looks like in action are just as valuable as student data sets.
3. Assess ONLY the learning outcomes that you identified as essential. Assessing nonessential
standards just makes it more difficult to get—and to take action on—information quickly and
easily.
4. Ask at least 3 questions for each learning outcome that you are trying to test. That allows
students to muff a question and still demonstrate mastery. Just as importantly, that means a poorly
written question won't ruin your data set.
5. Test mastery of no more than 3 or 4 learning targets per assessment. Doing so makes
remediation after an assessment doable. Can you imagine trying to intervene when an assessment
shows students who have struggled to master more than 4 learning outcomes?
6. Clearly tie every single question to an essential learning outcome. Doing so makes tracking
mastery by student and standard possible. Your data sets have more meaning when you can spot
patterns in mastery of the outcome—not the question.
7. Choose assessment types that are appropriate for the content or skills that you are trying to
measure. Using performance assessments to measure the mastery of basic facts is overkill.
Similarly, using a slew of multiple choice questions to measure the mastery of complex thinking
skills is probably going to come up short.
8. When writing multiple choice questions, use wrong answer choices to highlight common
misconceptions. The patterns found in the WRONG answers of well-written tests can tell you just
as much as the patterns found in the RIGHT answers. Fill your test with careless or comical
distractors and you are missing out on an opportunity to learn more about your students.
9. When writing constructed response questions, provide students with enough context to be able
to answer the question. Context plays a vital role in constructing a meaningful response to any
question. Need proof? A teenage daughter asks her parent, "Can I go to the mall with some
friends tonight?" Will the parent ask a few questions before saying yes?
10. Make sure that higher level questions ask students to apply knowledge and/or skills in new
situations. A higher level question that asks students to apply knowledge in the same way as they
have practiced before becomes a lower level question.
Thirty years ago in a report about ability testing, the National Research Council (NRC) concluded that although these forms of tests can be useful, “they are limited instruments and do not tell everything of importance about any individual” (Wigdor, 1982, p. 26). Concern over this limitation resulted in a movement in the 1990s away from multiple-choice assessments focused on minimum-competency toward new assessment formats focused on higher-order thinking skills such as student portfolios, hands-on performance, essays, and short-answer questions (Darling-Hammond, 1994; Hamilton & Koretz, 2002). However, assessment practices that provide information to improve student learning, known as “assessment for learning” (Hargreaves, 2005; Stiggins, 2002), continue to be dominated by “assessment of learning” practices that provide information about what students know (Sloane & Kelly, 2003). The use of assessments in public schools in the United States to link student test scores to school performance has arguably transformed assessments into accountability tools
(Airasian, 1987; Hamilton & Koretz, 2002; Sloane & Kelly, 2003).

Understanding the differences among traditional, alternative, and authentic assessment practices can help shift our focus from assessment of learning to assessment for learning.

Dennis S. Rosemartin, a former elementary classroom teacher, is currently a PhD candidate at the University of Arizona. His area of specialization is teacher preparation and environmental education, and his interests include assessment practices, curriculum theory, and environmental literacy.
There is a need to shift toward assessment for learning practices. To support this argument, I focus on four areas. I begin with an examination of the negative effects of using high-stakes assessments as the main accountability system for schools. I then discuss the underlying assumptions of learning that are inherent in different types of assessments. I follow with a comparison of the purposes of formative and summative assessments. I conclude with a framework and some examples to help teachers develop and implement assessment for learning practices.
KAPPA DELTA PI RECORD • JAN–MAR 2013

Unintended Effects of High-Stakes Assessments

The No Child Left Behind (NCLB) legislation, signed into law in 2002, mandated states to develop an accountability system based on the scores of a yearly assessment, with the ambitious goal of 100% proficiency in reading and math by 2014. The strong repercussions of this accountability system, such as closing or reconstituting schools that did not improve test scores, created an environment in public schools where raising test scores became the primary goal.
As a classroom teacher, I witnessed intervention specialists focusing on students who were close to meeting proficiency standards rather than on students far below proficiency standards because they had a better chance of bringing up the aggregated test scores. These students have been referred to as “bubble kids” because their test scores are on the bubble below the passing score; in one study at an elementary school in Texas, it was found that the majority of intervention programs focused only on these bubble kids (Booher-Jennings, 2005). After years of being abandoned by the school system, some high school students are encouraged to drop out of school (Nichols & Berliner, 2005).
Effect on Teaching and the Curriculum

Other negative effects of high-stakes assessments include narrowing of the curriculum and methods of instruction, less time for instruction, and declining teacher morale (Nichols & Berliner, 2005; Sadker & Zittleman, 2007; Smith, 1991). In the 1980s, critics of high-stakes assessments warned of the danger of teaching to the test (Sloane & Kelly, 2003), and recently there has been evidence that instructional time for subjects not on state-mandated assessments is decreasing while instructional time for the tested subjects is increasing. In a survey conducted by Education Week, 79% of teachers responded that the amount of time they spent teaching test-taking strategies was between “somewhat” and “a great deal” (Sadker & Zittleman, 2007). Moreover, findings in a case study that examined intervention reading lessons focused on test-taking strategies indicated that the intervention may actually have a negative effect on student test scores (Valli & Chambliss, 2007).
Corruption of Teachers and Administrators
To increase the chances of receiving federal funding through a competitive grant known as Race to the Top, established by the Obama
administration, some states have proposed linking scores on state assessments to student promotion, teacher pay, and a school’s label as performing or under-performing (Kossan, 2010). Pressure to improve scores may have led to the cheating by school personnel at 44 schools in Atlanta, Georgia and the possible cheating at 82 schools in Pennsylvania (Winerip, 2011). These incidents shed light on the intense pressure that many public schools face and, unfortunately, are not isolated events. In a report published by the Great Lakes Center for Educational Research and Practice, Nichols and Berliner (2005) cited 83 reports of cheating by school personnel in 33 states, dating as far back as 1990. Although such actions are intolerable, they prompt critical examination of current education reform policy focused on using assessments as an accountability tool.
Teacher and Student Morale

While many factors contribute to the stresses of teaching and being a student, high-stakes testing has often been cited by teachers and students as a main cause of anxiety (Jones, 2007; Nichols & Berliner, 2005; Smith, 1991). Numerous newspaper articles across the country have reported teachers feeling demoralized because the successes they had during the year were overshadowed by the results of end-of-year assessments (Nichols & Berliner, 2005). Studies have found evidence linking testing with negative effects on students and teachers such as anxiety, irritability, and loss of sleep (Jones, 2007).
The unintended negative effects of high-stakes assessments illustrate the principal premise of Campbell’s Law: “The more any quantitative social indicator is used for social decision-making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor” (Campbell, 1979, p. 85).
In Campbell’s (1979) view:

Achievement tests may well be valuable indicators of general school achievement under conditions of normal testing aimed at general competence. But when test scores become the goal of the teaching process, they both lose their value as indicators of educational status and distort the educational process in undesirable ways. (p. 85)
Teachers need to challenge the prevailing assessment practice because it has undermined equitable educational opportunities and stifled innovative teaching practices that enhance student performance (Neisworth & Bagnato, 2004; Stiggins, 2002; Wiggins, 1989). To shift toward assessment for learning practices, we should begin with an examination of the different types of assessments.
Traditional, Alternative, and Authentic Assessments

In discourse about assessments, the terms traditional, alternative, and authentic connote certain types of assessment practices. Traditional assessments typically refer to multiple-choice tests; alternative assessments include short-answer questions, essays, oral presentations, and portfolios; authentic assessments are considered relevant to real-world tasks outside of the classroom (Worthen, 1993). All forms of assessment yield information about what students know and can do; however, it has been argued that the underlying assumptions about what a student should learn and how a student learns are different in each assessment practice (Table 1). Traditional assessments reflect an essentialist philosophy of education, while alternative assessments reflect a constructivist philosophy of education (Anderson, 1998). Authentic assessments are considered a form of alternative assessment with the additional complexity that the assessment must be a worthwhile task or project that connects to the real world (Bergen, 1993–94; Wiggins, 1990).
Understanding that assessment practices are related to assumptions about learning is only part of the process of shifting from assessment of learning to assessment for learning practices. It is also important to understand that assessment practices can be used for formative or summative purposes and that assessments are not explicitly defined as formative or summative by their format, but rather by the way information is used to guide instruction (Wiliam & Black, 1996).
Table 1. Assessment Practices.

Traditional
• Assumes knowledge has universal meaning
• Treats learning as a passive process
• Separates process from product and focuses on mastery
• Focuses on mastering discrete, isolated bits of information
• Assumes the purpose of assessment is to document learning
• Believes that cognitive abilities are separate from affective and conative abilities
• Views assessment as objective, value-free, and neutral
• Embraces a hierarchical model of power and control
• Perceives learning as an individual enterprise

Alternative
• Assumes knowledge has multiple meanings
• Treats learning as an active process
• Emphasizes process and product
• Focuses on inquiry
• Assumes the purpose of assessment is to facilitate learning
• Recognizes a connection between cognitive, affective, and conative abilities
• Views assessment as subjective and value-laden, and embraces a shared model of power and control
• Perceives learning as a collaborative process

Authentic
• Requires students to be effective performers with acquired knowledge
• Presents students with the full array of tasks that mirror the priorities and challenges found in the best instructional activities
• Attends to whether students can craft polished, thorough, and justifiable answers, performances, or products
• Achieves validity and reliability by emphasizing and standardizing the appropriate criteria for scoring varied products
• Bases assessment validity in part on whether the test simulates real-world “tests” of ability
• Involves challenges and roles that help students rehearse for the complex ambiguities of the “game” of adult and professional life

Source: Adapted from Anderson, 1998, pp. 8–11; Wiggins, 1990, pp. 2–3.
The Purpose of Formative and Summative Assessments

The characteristics of formative and summative assessments are quite different (Table 2). The main purpose of a formative assessment is to give feedback about understanding and skill development to the teacher and student; that feedback is used to determine a strategy for improvement. The main purpose of a summative assessment is to describe what a student knows or has learned at a specific moment during the school year (Harlen & James, 1997).

The current trend in national educational reform focuses heavily on raising student achievement scores through summative assessment practices. Therefore, teachers should take an active role in the development and implementation of assessment for learning practices, because the dominant use of summative assessments “drives out learning at the same time it seeks to measure it” (Boud, 2000, p. 156).

Table 2. Formative and Summative Assessment Characteristics.

Formative Assessment
• Criterion-referenced
• Positive in intent, in that it is directed toward promoting learning
• Takes into account the progress of each individual (student-referenced)
• Values validity and usefulness over reliability
• Requires that students have a central part in it

Summative Assessment
• Norm-referenced
• Takes place when achievement has to be reported
• Relates to progression in learning according to public criteria
• Requires methods that are as reliable as possible
• Results for different students may be combined for various purposes

Assessment for Learning Practices

As a fourth-grade teacher, I spent a lot of time with students outside of the classroom during extracurricular clubs and field trips. Out of the confines of a classroom, I noticed that students were choosing a variety of ways to gather information or proceed in a task. While some students were learning through observation, others were learning by asking questions. I realized that many of the assessments I used in the classroom did not consider the complexity of student learning or give any insight to me or my students about how they could improve their learning.

My initial reaction was to blame the assessments that came with the curriculum, but then I recognized that to overcome this frustration, I needed to reflect, experiment, and advocate. Reflecting on how assessment practices serve the teacher and the students helps teachers select or design different types of assessments. Experimenting with different assessment practices helps teachers determine which ones best support student learning. Advocating assessment for learning practices in the school curriculum helps create dialogue among teachers about the purpose of assessments.

Assessment for learning practices I have used include self-evaluations, portfolios, and corrective instruction or feedback. These practices are not new and their implementation varies; Table 3 offers a brief summary.

Closing Thoughts

In summary, assessment for learning practices give students an opportunity to reflect on their work and make decisions about what and how to improve. Although these practices are guided by the teacher, they give the student an active role in the assessment and learning process. Assessment for learning practices are driven by the principle that “standards will be raised by improving student learning rather than by better measurement of limited learning” (Gibbs & Simpson, 2004–05, p. 3).

What the NRC criticized about ability testing three decades ago has reemerged in the current discourse of education reform. Criticism over the stringent punitive consequences attached to low aggregate scores on single statewide assessments has prompted Education Secretary Arne Duncan to propose including assessment of student growth in the accountability system for schools (Dillon, 2010). However, Duncan’s suggestion to use a pre–post test model to assess what students have learned in a school year still applies assessments as an accountability tool focused on what students know rather than an assessment tool focused on improving how students learn. I contend that including a summative role in assessment for learning practices can meet the need for accountability, while maintaining the critical focus on student learning through the assessment’s formative role.
References

Airasian, P. W. (1987). State mandated testing and educational reform: Context and consequences. American Journal of Education, 95(3), 393–412.
Anderson, R. S. (1998). Why talk about different ways to grade? The shift from traditional assessment to alternative assessment. New Directions for Teaching and Learning, 74(summer), 5–16.
Bergen, D. (1993–94). Authentic performance assessments. Childhood Education, 70(2), 99–102.
Booher-Jennings, J. (2005). Below the bubble: “Educational triage” and the Texas accountability system. American Educational Research Journal, 42(2), 231–268.
Boud, D. (2000). Sustainable assessment: Rethinking assessment for the learning society. Studies in Continuing Education, 22(2), 151–167.
Campbell, D. T. (1979). Assessing the impact of planned social change. Evaluation and Program Planning, 2(1), 67–90.
Darling-Hammond, L. (1994). Setting standards for students: The case for authentic assessment. The Educational Forum, 59(1), 14–21.
Dillon, S. (2010, February 2). Administration outlines proposed changes to ‘No Child’ law. The New York Times, p. A16.
Gibbs, G., & Simpson, C. (2004–05). Conditions under which assessment supports students’ learning. Learning and Teaching in Higher Education, 1(1), 3–31.
Hamilton, L. S., & Koretz, D. M. (2002). Tests and their use in test-based accountability systems. In L. S. Hamilton, B. M. Stechner, & S. P. Klein (Eds.), Making sense of test-based accountability in education (pp. 13–49). Santa Monica, CA: RAND Corporation.
Hargreaves, E. (2005). Assessment for learning? Thinking outside the black box. Cambridge Journal of Education, 35(2), 213–224.
Harlen, W., & James, M. J. (1997). Assessment and learning: Differences and relationships between formative and summative assessment. Assessment in Education, 4(3), 365–379.
Jones, B. D. (2007). The unintended outcomes of high-stakes testing. Journal of Applied School Psychology, 23(2), 65–86.
Kossan, P. (2010, January 20). State weighs major reform for education. Arizona Republic. Retrieved from http://www.azcentral.com
Neisworth, J. T., & Bagnato, S. J. (2004). The mismeasure of young children: The authentic assessment alternative. Infants & Young Children, 17(3), 198–212.
Nichols, S. L., & Berliner, D. C. (2005). The inevitable corruption of indicators and educators through high-stakes testing. Tempe, AZ: Education Policy Research Unit, Arizona State University.
No Child Left Behind Act of 2001. 20 U.S.C.A § 6301 et seq. (2002).
Sadker, D. M., & Zittleman, K. R. (2007). Teachers, schools, and society: A brief introduction to education. New York: McGraw-Hill.
Sloane, F. C., & Kelly, A. E. (2003). Issues in high-stakes testing programs. Theory Into Practice, 42(1), 12–17.
Smith, M. L. (1991). Put to the test: The effects of external testing on teachers. Educational Researcher, 20(5), 8–11.
Stiggins, R. J. (2002). Assessment crisis: The absence of assessment for learning. Phi Delta Kappan, 83(10), 758–765.
Valli, L., & Chambliss, M. (2007). Creating classroom cultures: One teacher, two lessons, and a high-stakes test. Anthropology & Education Quarterly, 38(1), 57–75.
Wigdor, A. K. (1982). Ability measurement: Uses, consequences, and controversies. Educational Measurement: Issues and Practice, 1(3), 6–26.
Wiggins, G. (1989). A true test: Toward more authentic and equitable assessment. Phi Delta Kappan, 70(9), 703–713.
Wiggins, G. (1990). The case for authentic assessment. ERIC Digest, ED 328 611.
Wiliam, D., & Black, P. (1996). Meanings and consequences: A basis for distinguishing formative and summative functions of assessment? British Educational Research Journal, 22(5), 537–548.
Winerip, M. (2011, August 1). Pennsylvania joins the list of states facing a school cheating scandal. The New York Times, p. A11.
Worthen, B. R. (1993). Critical issues that will determine the future of alternative assessment. Phi Delta Kappan, 74(6), 444–454.
Table 3. Examples of Assessment for Learning Practices.

Self-evaluation (alternative assessment)
Description: Checklist of content knowledge and skills in a subject area in which a student assesses his or her level as expert, good, or needs some help. The teacher compares the self-evaluation with student work for accuracy.
Formative role: Implement at the beginning of and throughout the school year so the student and teacher can discuss strengths and weaknesses in different subject areas and develop appropriate learning opportunities.
Summative role: This evaluation can show students, teachers, and parents the content knowledge and skills that have been mastered or need more work at various points during the year.

Portfolio (alternative, authentic assessment)
Description: Student-selected work reflects what has been done throughout the school year. The portfolio is an ongoing project that is continuously reviewed and discussed by the teacher and student.
Formative role: Student work is assessed by the teacher and student in a one-on-one meeting to discuss strengths and weaknesses. It is good to compare current and previous work to see what progress has been made. During the meeting, a plan to address areas that need improvement is made.
Summative role: Actual student artifacts can be used to assess student progress in a subject area at various times throughout the school year.

Corrective instruction or feedback
Description: An area that needs improvement is addressed through specific guidance that highlights what particular issues need correction or improvement. The skill or content area is assessed before and after this process to determine the efficacy of the intervention. This assessment can be done individually or with the class.
Formative role: The student is given the opportunity to learn through detailed instruction or feedback on a particular issue. The process should focus on positive reinforcement that leads to positive changes in the outcome.
Summative role: Not applicable
Impromptu Formative Assessments
Try some of these prompts to informally assess your students’ understanding:
• Identify at least three/five steps you need to take in order to solve math problems like these.
• How would you help a friend keep the differences between amphibians and reptiles (or DNA and RNA) clear in his or her mind?
• Write a paragraph of 3–5 sentences that uses a demonstrative pronoun (or conjunction or adverb) in each sentence and circle each example.
• In a quick paragraph, describe . . . .
• Create a web, mind map, or outline that captures what you learned today about . . .
• What is your definition of . . .
• Who was the most important character (and why) in . . . . (Could be a story, book, play, or historical event.)
• Solve these 3 (or 4 or 5) math problems.
• Draw a symbol that you think represents _____ in this book or story and tell why you chose the symbol.
• Record your answer to this question on your dry-erase board and hold it above your head for me to see.
• Prepare a rough draft of the letter you are going to write.
• Using this 3 x 5 card, write the 3 most important things you learned about a cell today and hand it to me as you leave.
• Draw a picture of the flower/animal in today’s story. (Could be a character that was described.) Draw a picture of a flower/healthy meal and label the parts.
• Write a sentence using one of the vocabulary words from today. (Could be on a dry-erase board.)
• Work with your seat partners to create a conversation that might occur in tomorrow’s reading. Then share it with the class.
• Work with your partner to answer this question (open-ended) . . . .
162 Kappa Delta Pi Record • Oct–Dec 2012
Classroom Assessments That Inform Instruction
Assessment Techniques
The accountability movement in education has caused school administrators and teachers to think differently about how they report, interpret, and use student assessment data. For example, legislative measures such as No Child Left Behind require school officials to report how all students are progressing toward established standards typically measured by state and district tests (Goertz and Duffy 2003). School officials may use results from such high-stakes tests to determine whether students should progress to the next grade, attend summer school, or earn a high school diploma (Deshler and Schumaker 2006); how district funds will be used (Fuchs, Fuchs, and Capizzi 2005); and how teachers will be evaluated (U.S. Department of Education 2003).
by Greg Conderman and Laura Hedin

Apply the many techniques suggested here to gather continuous formative student assessment data and adjust instruction accordingly.

In addition to analyzing student scores on state and district tests, teachers are revising their day-to-day classroom assessment practices. No longer can teachers wait until the conclusion of an instructional sequence or grading period to review student data, provide feedback to students, or inform parents about their child's progress. Waiting to conduct assessments until after an instructional period misses opportunities for parents to provide ongoing support regarding their child's learning; teachers to reflect critically about their instruction and make important instructional adjustments; and students to adjust their thinking processes, engage in self-assessment, and have multiple opportunities to improve and demonstrate their learning.
Because using information from ongoing assessments is so important, this article offers representative formative assessments that elementary, middle, and high school teachers can use in their classrooms to inform their instructional practices. Specifically, the authors illustrate assessments teachers can use before, during, and after instruction that will help them understand their students' learning and reflect upon their own instructional effectiveness. Even teachers who already are using some of these assessments may discover a wider variety of choices and uses available.

Greg Conderman is a Professor of Special Education at Northern Illinois University. His research interests include co-teaching, strategy instruction, and methods for students with disabilities. He is a former special education teacher and educational consultant. He can be reached at [email protected]. Laura Hedin is an Assistant Professor of Special Education at Northern Illinois University. Her research interests include co-teaching, literacy methods, and science instruction. She is a former elementary teacher. You can contact her at [email protected].
Importance and Types of Educational Assessment
Assessment is the process of gathering information or data on student performance to inform instructional decision-making (Nitko and Brookhart 2010). Teachers use assessment data for a variety of decision-making purposes, such as to determine students' existing knowledge or skills regarding an upcoming topic; group students according to skills, abilities, learning styles, or interests; analyze student errors; determine what or how to reteach; provide a grade or commentary that summarizes skill growth; or refer the child to a child study meeting for additional assessment or intervention. As decision makers, teachers need to be familiar with and use various types of assessments because no single measure provides sufficient information about student progress (Nolet and McLaughlin 2005). Consequently, teachers need to have available and base their decisions on data from various assessments.
Classroom assessments generally can be divided into summative and formative categories. Summative assessments, such as unit or final exams, large cumulative projects, state and district exams, and report card grades, have a sense of finality and are administered after a learning unit to provide feedback on how well students have mastered the content or learning objectives (Bahr and Garcia 2010). Summative assessments also are often used to evaluate the effectiveness of programs, school improvement goals, or curriculum alignment. However, because they are administered at the conclusion of instructional periods, summative assessments do not provide information for teachers to make instructional adjustments and interventions during the learning process. Formative assessments accomplish these goals (Garrison and Ehringhaus 2007).
In contrast, formative assessments usually are informal, teacher-made, and administered during the instructional cycle to provide feedback that allows teachers to adjust their ongoing instruction to improve students' learning (Perie, Marion, and Gong 2009). Observations, student interviews, journals, teacher questioning, student signaling, and short daily homework assignments generally are considered formative assessments (Bahr and Garcia 2010). Formative assessments are embedded within the learning activity, directly linked to the current unit of instruction, administered in a short period of time, and may be individualized (Perie et al. 2009). Data from these assessments provide feedback to students so they can check their understanding and improve their performance. These assessments also guide teacher decision-making about ways to differentiate instruction and thus improve student achievement (Dodge 2009). Therefore, data from formative assessments support the instruction-assessment feedback loop, as illustrated in Figure 1.
For example, after teaching students how to solve one-variable algebra problems (instruction), Mr. Marcos reviewed students' corresponding homework assignments and noticed that many students were making several errors he needed to address the next day (analysis and goal setting). He decided to reteach one-variable problems with a different instructional approach (e.g., by using visuals and manipulatives) to the majority of the class, while students who mastered the skill could work independently or in groups on an enrichment activity (planning materials and groups). Mr. Marcos also developed a new brief assessment containing one-variable algebra problems for students to complete and submit (assessment). In a balanced assessment system, both summative and formative assessments are an integral part of information gathering (Garrison and Ehringhaus 2007).

Figure 1. The Instruction-Assessment Cycle: a loop running from Instruction to Assessment, to Analysis and Goal Setting, to Planning (e.g., materials, groupings), and back to Instruction.
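The decision Mr. Marcos made from his homework review can be sketched as a small script. This is only an illustration: the student names, scores, and the 80 percent mastery cutoff are invented for the sketch, not values from the article.

```python
# A minimal sketch of the analysis-and-goal-setting step in the
# instruction-assessment cycle. The 0.8 mastery cutoff is an assumed
# value for illustration only.

MASTERY_CUTOFF = 0.8  # fraction of homework problems answered correctly

def plan_groups(scores):
    """Split students into a reteach group and an enrichment group."""
    reteach = [name for name, score in scores.items() if score < MASTERY_CUTOFF]
    enrichment = [name for name, score in scores.items() if score >= MASTERY_CUTOFF]
    return reteach, enrichment

# Hypothetical homework results from the one-variable algebra assignment.
homework = {"Ana": 0.55, "Ben": 0.90, "Cara": 0.70, "Dev": 0.85}
reteach, enrichment = plan_groups(homework)
print("Reteach with visuals/manipulatives:", reteach)
print("Independent enrichment activity:", enrichment)
```

However a teacher records scores, the point is the same: the assessment data, not intuition alone, drives the next day's grouping and planning.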
Formative Assessments
Teachers can use formative assessments at three points in time during the instructional cycle: before instruction, during instruction, and after instruction. Because each of these phases is unique, each is described separately here.
Before Instruction
Teachers consider several sources of formative assessment data to use before instruction. For example, they assess what students already know by using the What I Know (K) and What I Want to Learn (W) columns of a KWL chart (Ogle 1989), class discussions, pretests, anticipation guides, warm-ups, and admit slips. These group or individually administered assessments provide insight into what students already know, as well as their incorrect or faulty reasoning. Therefore, they provide teachers with important informal diagnostic information for guiding upcoming instruction.
While using the What I Know assessment, for example, first-grade teacher Mrs. Mae developed a chart with K, W, and L columns. She asked students what they already knew about planets, an upcoming unit. She discovered that many students in her class thought that Saturn had actual rings (e.g., jewelry). Rather than correct student misunderstandings at that moment, Mrs. Mae wrote the statement in the K column, but she added an asterisk to remind the class to return to that statement as the unit progressed. This activity helped Mrs. Mae realize that many students interpret language concretely and that she needed to be careful when explaining words with more than one meaning.
Students also can contribute to the What I Want to Learn (W) column as a pre-instruction activity. Documenting student responses in this column acknowledges their input, creates shared class learning goals, establishes purposes for the unit, and motivates students to seek resources to find answers to their pressing questions. Students in Mrs. Mae’s class decided they wanted to learn whether other planets have life, how long it would take to reach various planets, and what astronauts eat in their rockets. Although some of these outcomes were part of the curriculum, after soliciting student comments, Mrs. Mae could become especially purposeful in addressing these student-generated goals in the unit. To validate this assessment tool, Mrs. Mae’s class revisited the K and W columns frequently throughout their unit to add or revise their original statements.
Similar to the K and W columns, focused class discussions provide teachers with valuable information about students’ background knowledge. Admittedly, students in diverse classrooms may have different background experiences, language levels, and concept understandings (Echevarria, Short, and Powers 2006). Therefore, teachers may wish to start initial discussions with open-ended recognition questions to create curiosity or interest about the upcoming topic or skill. For example, before the new unit on “long division,” special educator Mr. Birky drew the division sign on the board and asked students whether they had ever seen it. Depending on their responses, Mr. Birky could ask students where they had observed the sign and whether they knew the meaning of the sign, introduce the sign and its meaning, or begin more advanced instruction. Teachers also may use focus groups, individual student interviews, or conferences to informally assess entry-level skills.
Information from pretests is valuable for both teachers and students. Because pretests parallel critical outcomes from the upcoming unit of study, student performance can guide teachers regarding their instructional emphasis. For example, pretest scores from Mr. Leonard's high school geography class revealed that many students could not identify Saudi Arabia, Turkey, or Greece on a map. Based on that information, he added additional map activities with those locations to his unit. Pretest scores also help teachers differentiate their instruction. Students who have mastered critical outcomes, as evidenced on their pretest, can extend their learning through research, service learning, independent projects, or other enrichment activities.
Another benefit for students is that after they have reviewed their pretest score (even though the score is not calculated toward their grade), they may be motivated to learn what they realize they do not know. Pretests signal to students the important outcomes of a unit, thereby removing the mystery of guessing what the teacher feels is important. When the same test is administered again as a post-assessment, teachers and students can document pretest-to-posttest gains as supportive evidence to include in student portfolios, share during parent-teacher conferences, and present to student-review committees.
When developing traditional paper-and-pencil pretests, teachers are encouraged to use principles from effective test design:
1. Write multiple-choice items as direct questions; place the bulk of the information in the question stem (rather than in the responses); include no more than four responses per question; make all responses about the same length; and use "all of the above" and "none of the above" responses sparingly.
2. Develop matching items only for homogeneous content (such as matching states to their capitals); write the longer phrases in the left column; and keep matching sets to no more than about 10 items.
3. Write true-false items as statements that include one and only one concept; avoid taking statements directly from the text; and avoid making an obviously true statement false simply by inserting the word "not."
4. Provide background information as context when writing short-answer or essay questions.
Further, avoid clues to any answers, because those will invalidate the assessments (Conderman and Koroghlanian 2002).
Additionally, teachers can use anticipation guides before a unit of instruction or before students view a video, PowerPoint™, or text. An anticipation guide consists of a list of statements related to the upcoming topic. While some statements may be clearly true or false, a good anticipation guide includes statements that provoke disagreement and challenge students' beliefs about the topic (Conner 2006). Before instruction, students indicate whether they agree or disagree with each statement. Then, as instruction unfolds, students can change their original responses based on new knowledge. The steps for using anticipation guides (Conner 2006) are:
1. Choose the material or content for the anticipation guide.
2. Write several statements that focus on the topic, that students can react to without prior knowledge, and that challenge their beliefs.
3. Have students complete the anticipation guide.
4. Lead a class discussion before presenting the information, to generate different viewpoints.
5. Present the material.
6. Revisit the anticipation guide by having students update their responses to reflect their new knowledge.
Figure 2 provides an example of an anticipation guide from an upcoming unit on the Gold Rush.
Figure 2. Example of an Anticipation Guide.
"The Gold Rush"
Directions: Silently read the five statements below about the Gold Rush. Indicate whether you agree or disagree with each statement by checking the appropriate column (Agree / Disagree). After we complete our reading for today, you will have an opportunity to change your responses.
1. The Gold Rush provided more advantages than disadvantages for families who moved out west.
2. The Gold Rush caused many people to become greedy.
3. The Gold Rush stimulated the discovery of the West.
4. Accumulating wealth leads to happiness.
5. America is the land of equal opportunity for anyone who is willing to work hard.
Anticipation guides not only pique students' desire to learn the content, but they also engage students in inquiry and problem solving, promote active participation, and offer immediate feedback. Students later can use them as study guides. Further, as a formative assessment tool, teachers compare students' pre- and post-responses to note changes in attitude or content knowledge (Kozen, Murray, and Windell 2006).
Another group of formative assessments occurs as students enter the classroom. Teachers and students refer to these as warm-ups, admit slips, sponges, quick writes, or bell ringers. Usually they consist of short (e.g., 2–5 minute) activities that teachers present on the overhead or board, which students complete immediately upon entering. These warm-up activities typically review the previous day's lesson or assess students' background knowledge before beginning a new lesson. The class can discuss answers to the warm-up activities as soon as they are completed, which provides immediate feedback, or teachers can collect and read them after class. Either way, student responses provide data for teachers to justify reteaching a skill or advancing to the next skill.
Similar to warm-ups, admit slips are written responses to open-ended questions or statements used as quick writes prior to the beginning of the lesson. These formative assessments help teachers check for student understanding or misunderstanding. Admit slips are especially advantageous for students who are willing to write responses or questions, but are reluctant to discuss or volunteer in class. Table 1 provides examples of these and other formative assessments for both elementary and secondary grades.
During Instruction
Teachers may use numerous formative assessments, such as unison responses, response cards, dry erase boards, or personal response systems, during instruction to determine whether students are acquiring critical skills or content.
Unison responses require all students to provide a verbal response on cue. The teacher’s cue might be a verbal request such as “What is . . ., everyone?” or a physical sign such as tapping the table or snapping fingers. Unison responses are best used when the question has only one correct response, such as reviewing math facts, orally spelling words, or answering factual questions. Though unison responses are designed to engage all students as a measure of formative assessment, teachers might have difficulty determining which students provided a correct or incorrect response, because some students might not respond, others might need more time to respond than others, and some students (and teachers) might be uncomfortable with this approach. If these conditions apply, teachers might instead use response cards or dry erase boards.
Table 1. Assessments for Various Grade Levels.
Warm-ups. Elementary: Solve five two-digit addition with regrouping problems displayed on the overhead. Secondary: Determine the area of each of three triangles drawn on the board (with base and height measurements).
Admit slips. Elementary: Write two facts that you already know about our state. Secondary: Write one question you still have about the chemistry lab we completed yesterday.
Unison responses. Elementary: Everyone, as I point to the letter in the word, say the letter sound. Secondary: Everyone, as I point to an abbreviation on the Periodic Table of Elements, say what the abbreviation represents.
Response cards. Elementary: As I read a sentence, hold up the card indicating which end punctuation should be used. Secondary: As I describe the type of road, find the card representing the appropriate speed limit.
Dry erase boards. Elementary: Write the spelling word as I say it. Secondary: In one sentence, write your favorite part of the novel.
Exit slips. Elementary: Draw three things you could do if your house caught on fire. Secondary: Provide your own examples of the three types of conflict we discussed today.

Response cards are cards containing answers or colored sheets of paper that students display in response to a teacher's question. For example, while studying the religions Islam, Hinduism, and Christianity, students in Mrs. Wachal's class displayed the preprinted card corresponding to the religion she described. After each question, Mrs. Wachal quickly scanned her class to assess which students responded correctly and incorrectly. As recommended by research, when more than 20 percent of her students made an error, Mrs. Wachal stopped to reteach or review the information (Friend and Bursuck 2012). On other occasions, Mrs. Wachal had students display the piece of colored paper associated with their level of concept understanding (i.e., red = Stop, I'm lost; yellow = I need a little clarification; green = I understand everything) or their indication of whether a statement she expressed was true (green) or false (red).
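The reteach rule Mrs. Wachal follows (stop and review when more than 20 percent of students err) is simple enough to sketch in a few lines. The roster and the answers below are invented for illustration.

```python
# Sketch of the "more than 20 percent wrong means reteach" rule described
# above. Student names and responses are hypothetical.

RETEACH_THRESHOLD = 0.20  # fraction of incorrect responses that triggers review

def should_reteach(responses, correct_answer):
    """Return True when the class error rate exceeds the threshold."""
    errors = sum(1 for answer in responses.values() if answer != correct_answer)
    return errors / len(responses) > RETEACH_THRESHOLD

cards = {"Ana": "Islam", "Ben": "Hinduism", "Cara": "Islam",
         "Dev": "Islam", "Eli": "Islam"}
# 1 of 5 students is wrong, which is exactly 20 percent, so no reteach yet.
print(should_reteach(cards, "Islam"))  # prints False
```

The same check works for any scan-the-room technique: count wrong displays, divide by the number of students, and compare against the threshold.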
Similar to response cards, students use individual dry erase white boards to display responses to teacher questions. Dry erase boards have more flexibility than response cards because students can use them to complete warm-up activities, write responses, show solutions to math or science problems, or complete analogies. As students write their answers, the teacher circulates to provide encouragement and corrective feedback, gaining immediate insight into students' knowledge. Teachers also can observe which students are off task or have not correctly processed the question (Conderman, Bresnahan, and Hedin 2011). Dry erase boards allow teachers to observe student responses and adjust instruction accordingly, making them an effective, inexpensive, low-tech formative assessment option for teachers at all levels.
In contrast, personal response systems (PRS) illustrate a high-tech formative assessment device. In classrooms with computer/projector or SMART™ Board access, teachers use these systems to informally assess students during the lesson. This technology allows teachers to incorporate multiple-choice or true-false questions into the lesson's PowerPoint presentation. When the question slide appears, students respond using their clicker remote-control unit. After all students have responded, the teacher can display a bar or line graph showing the group's responses. At that point, the teacher decides to move on to the next slide or to stop and reteach the concept if the class average was low. The software also allows teachers to later analyze each student's response to each question, which provides valuable individualized diagnostic information.
After Instruction
Teachers may use several formative assessments after instruction, such as exit slips, the L column of the KWL chart, homework assignments, drafts of writing assignments, or projects completed in steps.
Exit slips are short student responses collected by a teacher at the conclusion of a class. About one to five minutes prior to dismissal, teachers place a question on the board or overhead, or distribute a small piece of paper (exit slip) with the question for the day. Before students exit, they write their responses, with or without their names included. A version of exit slips is a 3-2-1 slip in which students write three new things they learned, two things they still want to learn, and one clarifying question.
If teachers previously used the K and W columns, they can now have students generate ideas for the L, or What I Learned column. Toward the end of the science unit, Mrs. Mae’s students completed the class KWL chart by adding the L column. Sometimes students learn different content—and much more content—than what teachers typically assess. Mrs. Mae was surprised to learn how much her class remembered from various videos and how well they integrated their learning. Her traditional assessments did not capture these outcomes. Although the L column typically is considered a summative assessment, teachers ask students what they have learned and record their responses at strategic points in the unit; in that way, they assess learning and adjust instruction accordingly before the unit concludes.
Teachers likewise use student data from homework assignments, writing drafts, and projects students complete in steps as formative assessments. For example, middle school math co-teachers Mr. Ginther and Ms. Knapp maintain a Microsoft® Excel spreadsheet to document math errors from select homework assignments. Based on student data, this team may decide to review with the whole class, or one teacher can reteach specific skills with individuals or small groups. Gathering data after several days of instruction (but before the conclusion of the unit) allows these teachers to catch many student errors before the summative exam.
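A rough stand-in for the kind of error log Mr. Ginther and Ms. Knapp keep might look like the following. The error categories, student names, and the 50 percent whole-class cutoff are assumptions for the sketch; the article does not describe the co-teachers' actual spreadsheet or decision rules in this detail.

```python
# Tally error types across homework papers and decide, per error type,
# between whole-class review and small-group reteaching. All data and
# the 0.5 cutoff are hypothetical.

from collections import Counter

def summarize_errors(error_log):
    """error_log maps each student to the error types found on their paper."""
    counts = Counter()
    for errors in error_log.values():
        counts.update(set(errors))  # count each error type once per student
    return counts

def grouping_decision(error_counts, n_students, whole_class_cutoff=0.5):
    """Recommend a grouping for each error type based on how widespread it is."""
    return {error: ("whole class" if n / n_students >= whole_class_cutoff
                    else "small group")
            for error, n in error_counts.items()}

log = {
    "Ana": ["sign error", "combining like terms"],
    "Ben": ["sign error"],
    "Cara": [],
    "Dev": ["sign error", "distribution"],
}
counts = summarize_errors(log)
print(grouping_decision(counts, len(log)))
```

Whether the tally lives in Excel or in code, the payoff is the same: widespread errors justify whole-class review, while isolated ones point to small-group or individual reteaching.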
Similarly, as English teacher Mrs. DeYarman conferences with individual students in her writing class, she records each student's writing goals and errors so that she can review skills once again before students complete their final drafts. Noting errors before students take their state or district high-stakes writing exam provides Mrs. DeYarman opportunities to tailor her instruction to the needs of her students.
Teachers can review projects students complete in steps to provide specific corrective feedback. After reviewing project drawings, for example, Woods/Industrial Arts teacher Mr. Cherif noticed that several students miscalculated a critical measurement, even though he directly taught this in class. Before he allowed students to estimate how much wood they needed, Mr. Cherif required students to recheck their measurements using the instructional module he created. This formative assessment step ensured students mastered a critical skill before making more costly errors.
Concluding Thoughts
Recent accountability measures emphasize student scores on summative assessments such as high-stakes state and district tests. However, student achievement on these tests is directly related to high-quality classroom instruction, which requires teachers to gather continuous formative student assessment data and adjust instruction accordingly. To that end, teachers use a variety of formative assessments before, during, and after instruction.
Before instruction, teachers may consider using the K and W columns of the KWL chart, focused class discussions, pretests, anticipation guides, warm-ups, or admit slips. During instruction, teachers may use unison responses, response cards, dry erase boards, or personal response systems. And after instruction (but before the conclusion of the unit), teachers may use the L column of the KWL chart and exit slips, as well as analyze select homework assignments, writing drafts, or projects students complete in parts. Incorporating these—or other—formative assessments as part of the instruction-assessment cycle provides timely feedback to students while helping teachers adjust their instruction so all students succeed.
References
Bahr, D. L., and L. A. de Garcia. 2010. Elementary mathematics is anything but elementary. Belmont, CA: Wadsworth Cengage Learning.
Conderman, G., and C. Koroghlanian. 2002. Writing test questions like a pro. Intervention in School and Clinic 38(2): 83–87.
Conderman, G., V. Bresnahan, and L. Hedin. 2011. Promoting active involvement in today’s classrooms. Kappa Delta Pi Record 47(4): 174–80.
Conner, J. 2006. Instructional reading strategy: Anticipation guides. Available at: www.indiana.edu/~l517/anticipation_guides.htm.
Deshler, D. D., and J. B. Schumaker, eds. 2006. Teaching adolescents with disabilities: Accessing the general education curriculum. Thousand Oaks, CA: Corwin Press.
Dodge, J. 2009. 25 quick formative assessments for the differentiated classroom: Easy, low-prep assessments that help you pinpoint students’ needs and reach all learners. New York: Scholastic Inc.
Echevarria, J., D. Short, and K. Powers. 2006. School reform and standards-based education: A model for English-language learners. Journal of Educational Research 99(4): 195–210.
Friend, M., and W. D. Bursuck. 2012. Including students with special needs: A practical guide for classroom teachers, 6th ed. Boston: Pearson.
Fuchs, L. S., D. Fuchs, and A. M. Capizzi. 2005. Identifying appropriate test accommodations for students with learning disabilities. Focus on Exceptional Children 37(6): 1–8.
Garrison, C., and M. Ehringhaus. 2007. Formative and summative assessments in the classroom. Westerville, OH: Association for Middle Level Education. Available at: www.nmsa.org/Publications/WebExclusive/Assessment/tabid/1120/Default.aspx.
Goertz, M., and M. Duffy. 2003. Mapping the landscape of high-stakes testing and accountability programs. Theory Into Practice 42(1): 4–11.
Kozen, A. A., R. K. Murray, and I. Windell. 2006. Increasing all students’ chance to achieve: Using and adapting anticipation guides with middle school learners. Intervention in School and Clinic 41(4): 195–200.
Nitko, A. J., and S. M. Brookhart. 2010. Educational assessment of students, 6th ed. Des Moines, IA: Prentice Hall.
Nolet, V., and M. J. McLaughlin. 2005. Accessing the general curriculum: Including students with disabilities in standards-based reform, 2nd ed. Thousand Oaks, CA: Corwin Press.
Ogle, D. M. 1989. The know, want to know, learn strategy. In Children’s comprehension of text: Research into practice, ed. K. D. Muth, 205–23. Newark, DE: International Reading Association.
Perie, M., S. Marion, and B. Gong. 2009. Moving toward a comprehensive assessment system: A framework for considering interim assessments. Educational Measurement: Issues and Practice 28(3): 5–13.
U.S. Department of Education. 2003. Questions and answers on No Child Left Behind. Washington, DC: ED. Available at: www2.ed.gov/nclb/accountability/schools/accountability.html.