Case Study 20: Sam Gonzales, Measuring Learning and Performance
EDTEC 795A Seminar
Sean Harkey, Denise Myers, Jeffrey Scott
Flight Attendant Training Program
Transcript
1. Flight Attendant Training Program

2. Context | Job Map
3. Context | DRIVERS
Introduce multimedia assessment method
Increase consistency between end-of-training assessments and on-the-job evaluations
Increase authenticity of assessment
Sam Gonzales, Director of Human Performance Technology
4. Context | DRIVERS
Current Assessment
Pencil-and-paper
Question types
Multiple choice
Short answer
Time consuming to score
New Assessment
Computer administered
Multiple-choice
Automatically scored
8 video clips, each followed by questions:
Identify the task shown
Were errors made?
Identify the errors or the critical aspect
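The computer-administered, automatically scored format above can be sketched as a simple answer-key lookup. The item structure and answer keys below are hypothetical, not taken from the case:

```python
# Minimal sketch of automatic scoring for the multimedia assessment.
# Clip names, item types, and keys are made up for illustration.

# Each video clip gets multiple-choice items: which task is shown,
# whether an error occurred, and what the error/critical aspect was.
ANSWER_KEY = {
    ("clip1", "task"): "B",
    ("clip1", "error_made"): "A",
    ("clip1", "error_type"): "D",
    # ... keys for the remaining clips would follow
}

def score(responses):
    """Return the number of correct responses.

    `responses` maps (clip, item) pairs to the chosen option letter.
    """
    return sum(
        1 for item, choice in responses.items()
        if ANSWER_KEY.get(item) == choice
    )

trainee = {("clip1", "task"): "B",
           ("clip1", "error_made"): "A",
           ("clip1", "error_type"): "C"}
print(score(trainee))  # 2 of the 3 answered items are correct
```

Because scoring is a pure key lookup, it is instantaneous and perfectly consistent, which addresses the "time consuming to score" complaint about the pencil-and-paper test.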
5. Context | Action taken
6. Context | Results of the test
[Chart: results split by frequent vs. unusual situations]
7. Context | Results of the test
8. Context | Results of the test
Correlation between end-of-training assessment and on-the-job performance evaluation was significantly improved in the experimental group
9. Key Stakeholders | Trainees
Concerned about the difficulty and fairness of video questions
10. Key Stakeholders | Instructors
Concerned about poor end-of-training performance reflecting on their own performance
11. Key Stakeholders | Supervisors
Concerned about the changing workforce and preparedness of new employees
12. Performance Issues
Facilitating preflight checks when unusual situations occur
Unusual situations preflight
Skills for facilitating preflight checks
13. Performance Issues
Dealing with difficult passengers when unusual situations occur
Difficult passengers increase task load and communication demand
Communication skills for passenger interaction
14. Context | Results of the test
[Chart: results split by frequent vs. unusual situations]
15. Solution | Eliminate confounding factor
[Chart: results split by frequent vs. unusual situations]
16. Solution | Add New Performance Goals
Redesign the course, assessment, and evaluation
Add learning content to include instruction, discussion, and additional practice handling unusual situations
Add traditional test questions to assess the learners' understanding of how to complete tasks in unusual situations
Prompt the evaluator to distinguish between frequent vs. unusual situations when completing observational evaluations of performance
17. Evaluation | Conduct a second study
Conduct a second study to evaluate the effectiveness of the added performance goals
Control group of 20 completes the original course and assessment but uses the redesigned evaluation
Experimental group of 20 is trained, assessed, and evaluated using all the redesigned tools
18. Evaluation | Analyze Results
Conduct reliability analysis for both training assessments without distinction between frequent and unusual performance goals
Conduct a second reliability analysis on the experimental results WITH distinction between frequent and unusual performance goals
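One way to run the reliability analysis on dichotomously scored items is Cronbach's alpha; the case names only "reliability analysis", so alpha is an assumption here, and the item scores below are fabricated for illustration:

```python
# Illustrative sketch of a reliability analysis via Cronbach's alpha.
# Data are fabricated: five trainees scored 0/1 on four items.
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each entry a trainee's 0/1 score."""
    k = len(item_scores)
    item_var = sum(pvariance(item) for item in item_scores)
    # Total score per trainee = sum across items.
    totals = [sum(vals) for vals in zip(*item_scores)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_var / total_var)

items = [
    [1, 1, 0, 1, 1],
    [1, 1, 0, 1, 0],
    [1, 0, 0, 1, 1],
    [0, 1, 0, 1, 1],
]
print(round(cronbach_alpha(items), 2))  # ≈ 0.70
```

Running the analysis once ignoring the frequent/unusual distinction and once with items split by goal type, as the slides propose, shows whether the two goal types behave as a single construct.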
19. Evaluation | Analyze Results
For each group, find the correlation between end-of-training assessment and on-the-job performance evaluations, without distinguishing between frequent and unusual performance goals
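A Pearson r is one reasonable reading of "find correlation" here; the case does not name the statistic, and the scores below are fabricated. A minimal sketch:

```python
# Sketch: Pearson correlation between each trainee's end-of-training
# assessment score and on-the-job evaluation score. Scores are made up.
from math import sqrt

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

assessment = [72, 85, 90, 66, 78]   # end-of-training scores
evaluation = [70, 88, 85, 60, 80]   # on-the-job evaluation scores
print(round(pearson_r(assessment, evaluation), 2))  # ≈ 0.94
```

Computing r separately for the control and experimental groups lets the team check whether the multimedia assessment tracks on-the-job performance more closely, which is the study's central claim.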
20. Evaluation | Analyze Results
Compare on-the-job performance evaluations between the two groups WITH distinction between frequent and unusual performance goals
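The between-group comparison might use a two-sample (Welch's) t statistic; this is an assumption, since the case does not name a test, and the scores below are fabricated (the actual study used groups of 20):

```python
# Sketch: Welch's t statistic comparing on-the-job evaluation scores
# on unusual-situation goals between groups. Scores are fabricated.
from statistics import mean, variance

def welch_t(a, b):
    """Two-sample t statistic without assuming equal variances."""
    va, vb = variance(a), variance(b)
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

control      = [61, 58, 65, 70, 55, 63, 60, 67]
experimental = [72, 78, 69, 75, 80, 74, 71, 77]
print(round(welch_t(experimental, control), 2))  # ≈ 5.56
```

A large positive t would support the claim that the added unusual-situation content improved on-the-job performance; a proper analysis would also report degrees of freedom and a p-value.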
21. Summary | Recommendations
Add content, assessment, and performance evaluation to support goals of completing tasks in unusual situations
Adopt multimedia test method
22. References
Ertmer, P. A., & Quinn, J. (2007). The ID casebook: Case studies in instructional design (3rd ed.). Upper Saddle River, NJ: Pearson Prentice Hall.
Fraenkel, J. R., & Wallen, N. E. (2008). How to design and evaluate research in education (7th ed.). New York: McGraw-Hill.
Hale, J. (2007). The performance consultant's fieldbook: Tools and techniques for improving organizations and people (2nd ed.). San Francisco: Pfeiffer.