Office of University Assessment Page 1 of 12
College of Fine Arts Student Learning Outcomes (SLO) Assessment
Review Brief
January 2015
Context: The University Assessment Council (UAC) is charged by the Provost to review student learning outcomes assessment reports and provide feedback to the colleges and other units on these reports. Each college and select other units at the institution have a representative on this council. These efforts are essential to the University's continuous improvement of student learning. Based on SACSCOC Standard 3.3.1.1,¹ all degree/certificate programs must have student learning outcomes and, hence, a student learning assessment report. To fulfill the charge stated above, at least once annually the UAC members undertake an evaluative review of all of these SLO reports.

Purpose of this Brief: Two briefs are prepared to provide input to each college and unit about: (1) institution-wide reporting and (2) the specific unit's report.

Review Process: On December 3rd and 4th, 2014, 23 UAC members evaluated 270 total SLO reports from academic year 2013-14, twenty-three of which were from the College of Fine Arts. The members used a previously validated rubric, the SLO Assessment Report Rubric, to evaluate the 2013-14 reports (Appendix A). The rubric contains five separate criteria: (1) relationship between assessment tools and outcomes, (2) data collection and research design integrity, (3) results, (4) interpretation of results, and (5) improvement action plan. During this review period, the members did not evaluate the 'benchmark/target' criterion because of a change in the reporting format. Each criterion was evaluated on a three-point scale of Meets Expectations, Emerging, or Does Not Meet. The equivalent categories for 2012 and prior evaluations are Comprehensive, Needs Improvement, and Absent/Does Not Meet, respectively. For both evaluations, UAC members were asked to provide comments to assist the units in making improvements to their student learning process/environment.
Due to the length of the comments, they are not included in this brief; however, they are provided within the evaluated rubric for each degree program.

Description of this Brief: This brief contains:
- Degree Programs Evaluation Status
- Number of Reports Evaluated by Level
- Degree Program Report Status and Compliance Rating
- 2013-2014 Evaluation Overall Scores
- 2013-2014 Evaluation by Rubric Criteria
- Five Year Overall Score Trends
- Five Year Overall Score Trends by Rubric Criteria
- Next Steps
- Appendix
1 SACSCOC Standard 3.3.1.1: The institution identifies expected outcomes, assesses the extent to which it achieves these outcomes, and provides evidence of improvement based on analysis of the results in educational programs, to include student learning outcomes (page 27, The Principles of Accreditation: Foundations for Quality Enhancement, 2012).
[Figure: five-year score trend charts (2010 through 2014) by rubric criterion: Relationship between assessment tools and outcomes; Data Collection and Research Design Integrity; Results]
NEXT STEPS
- Review the data presented in both review briefs
- Review the individual unit results and comments provided from the UAC evaluative review
- Share the results with department chairs and/or faculty
- Encourage departments/units to review the comments provided by the UAC members in order to improve their 2014-2015 SLO reports
- Remind departments/units to contact the Office of University Assessment for any guidance or assistance in revising their assessment processes and/or reporting efforts

Contact Tara Rose, Director of University Assessment, at [email protected], or Brandon Combs, Assistant Director of University Assessment, at [email protected].
Each category will be scored as Meets Expectations (2 points), Emerging (1 point), or Does Not Meet Expectations (0 points). The points for each category will be summed to provide an overall score for the Student Learning Outcome Report. Final scoring categories are as follows:

Meets Expectations: 10-12 points
Emerging: 6-9 points
Does Not Meet Expectations: 0-5 points
Scores will be reported to the Dean and the UAC liaisons for each college or division.
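The scoring arithmetic above can be sketched as a short script; the function name and the dictionary of per-criterion scores are illustrative assumptions, not part of the rubric itself, and the criterion labels are taken from the rubric categories in this appendix.

```python
# Illustrative sketch of the rubric scoring: each of the six categories
# receives 0, 1, or 2 points, and the summed total maps to a final rating.
# (Note: when the benchmark/target criterion is not scored, as in this
# review cycle, the maximum attainable total is 10 rather than 12.)

CRITERIA = [
    "Relationship between assessment tools and outcomes",
    "Data collection and research design integrity",
    "Specification of desired benchmark/target",
    "Results",
    "Interpretation of results",
    "Improvement action",
]

def overall_rating(points_by_criterion):
    """Sum per-criterion points and map the total to the final category."""
    total = sum(points_by_criterion.values())
    if total >= 10:
        category = "Meets expectations"
    elif total >= 6:
        category = "Emerging"
    else:
        category = "Does not meet expectations"
    return total, category

# Example: a report scoring 2 on four criteria and 1 on the other two.
scores = {c: 2 for c in CRITERIA}
scores["Results"] = 1
scores["Improvement action"] = 1
total, category = overall_rating(scores)
print(total, category)  # 10 Meets expectations
```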
I. Method(s)

A. Relationship between assessment tools and outcomes

Meets Expectations (2 points): A general explanation is provided about how the assessment tool relates to the outcome measured (e.g., the faculty wrote test items, essay questions, etc., to match the outcome, or the instrument was selected "because its general description appeared to match our outcome"). May include pass rates for licensure or certification exams. Assessment tools specified by a program's accrediting body are considered to meet expectations, but it must be made clear to the reader that the tool was chosen by the accrediting body. If more than one outcome is linked to any one assessment tool, an explanation is provided for how each outcome can be measured using only that one tool.

Emerging (1 point): At a superficial level, it appears the content assessed by the assessment tools matches the outcome, but no explanation is provided. Assessment tools are primarily indirect, including measures such as head counts and course pass rates.

Does Not Meet Expectations (0 points): Seemingly no relationship between the outcome and the assessment tools.
B. Data collection and research design integrity

Meets Expectations (2 points): Enough information is provided to understand the data collection process, such as a description of the sample, evaluation protocol, evaluation conditions, student motivation, and when and where the data were collected (e.g., were students sampled or was the population evaluated; adequate motivation; two or more trained raters for performance assessment; pre-post design to measure gain; a defended cutoff for performance vs. a criterion).

Emerging (1 point): Limited information is provided about data collection, such as who and how many took the assessment, but not enough to judge the veracity of the process (e.g., "thirty-five seniors took the test"). There appears to be a mismatch with the specification of desired results.

Does Not Meet Expectations (0 points): No information is provided about the data collection process, or data were not collected.
C. Specification of desired benchmark/target (not scored during this evaluation)

Meets Expectations (2 points): A desired benchmark/target is specified (e.g., our students will gain ½ standard deviation from junior to senior year; our students will score above a faculty-determined standard). "Gathering baseline data" is acceptable for this rating. Enough information was provided to understand how the benchmark was determined.

Emerging (1 point): A desired result is stated (e.g., student growth, comparison to the previous year's data, comparison to faculty standards, performance vs. a criterion), but it lacks specificity (e.g., students will grow; students will perform better than last year).

Does Not Meet Expectations (0 points): No a priori benchmarks/targets for outcomes.
II. Results

Meets Expectations (2 points): Results are present and directly relate to the outcomes. The desired benchmarks for the outcomes are clearly presented and were derived by appropriate analysis. If a rubric or grading scale was used, it is clear how many in the sample scored in each category.

Emerging (1 point): Results are present, but it is unclear how they relate to the outcomes or to the benchmark/target for the outcomes, or the presentation lacks clarity or is difficult to follow. Only aggregate totals are given (e.g., 80% of the students met the target).

Does Not Meet Expectations (0 points): No results presented.
III. Interpretation of Results

Meets Expectations (2 points): Interpretations of the results are reasonable inferences given the outcomes, benchmarks/targets, and methodology. They reflect a discussion of the results by pertinent parties. The position of the person or persons involved in the analysis is listed.

Emerging (1 point): Interpretation is attempted, but it does not refer back to the outcomes or the benchmarks/targets for the outcomes, or the interpretations are clearly not supported by the methodology and/or results. There is no mention of the person or persons who completed the analysis.

Does Not Meet Expectations (0 points): No interpretation attempted. The analysis simply repeats what was stated in the Results category.
IV. Improvement Action

Meets Expectations (2 points): Examples of improvements (or plans to improve) are documented and directly related to the findings of the assessment. These improvements are very specific (e.g., approximate dates of and person(s) responsible for implementation, and where in the curriculum/activities and department/program they will occur). If no improvements are found to be necessary, then the program must either increase the benchmark or explain why the benchmark does not need to be increased; state plans to focus on another area of concern for future assessments; and work to monitor and maintain the current level of success for this outcome.

Emerging (1 point): Examples of improvements are documented, but the link between them and the assessment findings is not clear. The improvements lack specificity.