James International Journal of Educational Technology in Higher Education (2016) 13:19 DOI 10.1186/s41239-016-0015-0
RESEARCH ARTICLE Open Access
Tertiary student attitudes to invigilated, online summative examinations
Rosalind James
Correspondence: [email protected]
University of New England, Armidale NSW, Australia
Abstract
The outcomes of a trial implementation of an invigilated, online examination at a regional university in Australia and their implications for online education providers are discussed. Students in a first-year online psychology course were offered the opportunity to complete their final examination task online, with invigilation conducted via webcam. About a quarter of the students (125) initially elected to complete the online examination; however, after they had undertaken a practice online examination, only 29 students (6.3 %) elected to continue in the trial and proceed to take the final exam online. The study concluded that many students have substantial challenges with the idea of major-stakes examinations being online. While lower associated costs and time requirements were motivations, many were challenged by the process due to technical difficulties and insufficient support. ICT infrastructure and reliable connectivity remain significant barriers to successful completion of online examinations under secure, proctored conditions.
online testing is rarely employed in summative assessment in higher education.
The lack of widespread use of online summative assessment is almost certainly associated with the perceived risks and security and authentication issues. Thus far, the
Where a commercial organization is engaged to provide and support the software used for online examinations, that commercial organization must provide high-quality support.
Valentine (2002) describes the quality of online instruction as being based on preparation and an understanding of the needs of students; this is especially important in
high stakes online assessment. Considering all the data from this study, particularly
participant comments, it is apparent that some problems could, or should, be addressed prior to exposing students to an online assessment environment. It may be
necessary to re-consider some aspects of exam design. For example, good practice
(British Standard 23988) for e-examinations suggests that no online exam should last
more than 90 min without a break and, if a longer exam is needed, it should be split
into two parts with a break between. Most essential is thorough testing of the assessment environment to ensure that technical and internet connectivity challenges are
identified and rectified prior to implementation. It is unacceptable to leave students in a remote location, in a high-stakes assessment situation, to deal with the challenges described in this study. It is necessary to ensure that appropriate design, procedures and pedagogies are developed and implemented before students are exposed to
online summative assessment. Students also need adequate training and support to
prepare for taking online examinations.
Until the reliability of ICT infrastructure improves, it is difficult to imagine wide-scale implementation of online, proctored, summative examinations in Australia. For
now, secure examination with identity authentication remains a labour-intensive and
costly pursuit. It may be time to stop searching for the elusive, fool-proof, automated authentication system and instead to consider other approaches: adopting different pedagogical models for assessing learning (Struyven et al., 2005; Weller, 2002), changing the culture of cheating, and lobbying and re-educating quality assurance agencies and accrediting organisations about appropriate alternatives to summative examinations as assessment of learning.
Competing interests
The author declares that she has no competing interests.
Authors’ information
Dr Rosalind James was Director of dehub: Online and Distance Education Research Network from 2011 to 2014. Dr James has worked at Australia’s University of New England (UNE) for many years, as a Research Fellow with the DEHub Project and Project 2012: Flexible and Online, and before that as an academic mentor for transitional students and a course co-ordinator and lecturer in the foundational pathway course at UNE’s Teaching and Learning Centre. Rosalind comes from a background as a consultant and lecturer in Archaeology and Environmental Science and has also worked in diverse companies and government departments around the world as a senior manager and technical consultant in the commercial information and communications technology (ICT) arena. Her current research and publication interests are in implementation and integration of ICT in learning, policy and quality assurance in online learning, employability skills, and academic professional development. Creativity and critical thinking are important avenues of enquiry that arose during her direction of a large collaborative project to develop a community education portal offering OER for lifelong learning. Dr James is an assessor for the Australian Government Office of Learning and Teaching and co-editor of the International Journal of Educational Technology in Higher Education (ETHE).
Received: 17 September 2015
Accepted: 9 December 2015
Published: 24 May 2016
References
Alexander, S. (2001). E-learning developments and experiences. Education and Training, 43(4/5), 240–248.
Barron, J., & Crooks, S. M. (2005). Academic integrity in web-based distance education. TechTrends, 49(2), 40–45.
Bartley, S., & Golek, J. (2004). Evaluating the cost effectiveness of online and face-to-face instruction. Educational Technology & Society, 7(4), 167–175.
Bates, T. (2014a). A review of MOOCs and their assessment tools. Online Learning and Distance Education Resources, November 8, 2014.
Bates, T. (2014b). The strengths and weaknesses of MOOCs: Part 2: learning and assessment. Online Learning and Distance Education Resources, November 7, 2014.
Bedford, D. W., Gregg, J. R., & Clinton, M. S. (2011). Preventing online cheating with technology: a pilot study of remote proctor and an update of its use. Journal of Higher Education Theory and Practice, 11(2), 41–58.
Boyle, A. (2005). Sophisticated tasks in E-Assessment: What are they? And what are their benefits? Paper presented at the 9th CAA Conference 2005. Retrieved from http://www.caaconference.com/pastConferences/2005/proceedings/BoyleA2.pdf
Brosnan, M. (1999). Computer anxiety in students: should computer-based assessment be used at all? In S. Brown, P. Race, & J. Bull (Eds.), Computer-assisted assessment in higher education (pp. 47–54). Birmingham: Kogan Page.
Brown, G., Bull, J., & Pendlebury, M. (1997). Assessing student learning in higher education. London: Routledge.
Caldarola, R., & MacNeil, T. (2009). Dishonesty deterrence and detection: How technology can ensure distance learning test security and validity. Proceedings of the European Conference on e-Learning (pp. 108–115).
Chapman, G. (2006). Acceptance and usage of e-Assessment for UK awarding bodies: A research study. Proceedings of the 10th CAA International Computer Assisted Assessment Conference (pp. 101–103). Loughborough University.
Chiesl, N. (2007). Pragmatic methods to reduce dishonesty in web-based courses. Quarterly Review of Distance Education, 8(3), 203–211.
Clesham, R. (2010). Changing assessment practices resulting from the shift towards on-screen assessment in schools. Doctor of Education thesis, University of Hertfordshire.
Engelbrecht, J., & Harding, A. (2004). Combining online and paper assessment in a web-based course in undergraduate mathematics. Journal of Computers in Mathematics and Science Teaching, 23(3), 217–231.
Englander, F., Fask, A., & Wang, Z. (2011). Comment on “The impact of online assessment on grades in community college distance education mathematics courses” by Ronald W. Yates and Brian Beaudrie. American Journal of Distance Education, 25(2), 114–120.
Foster, D., Mattoon, N., & Shearer, R. (2008). Using multiple online security measures to deliver secure course exams to distance education students: A white paper. Retrieved from https://www.ou.nl/Docs/Campagnes/ICDE2009/Papers/Final_Paper_101Walker.pdf
Fredericksen, E., Pickett, A., Shea, P., Pelz, W., & Swan, K. (2000). Student satisfaction and perceived learning with on-line courses: principles and examples from the SUNY learning network. Journal of Asynchronous Learning Networks, 4(2), 7–41.
Grijalva, T. C., Nowell, C., & Kerkvliet, J. (2006). Academic honesty and online courses. College Student Journal, 40(1), 180–185.
Harmon, O. R., Lambrinos, J., & Buffolino, J. (2010). Assessment design and cheating risk in online instruction. Online Journal of Distance Learning Administration, 13(3). Retrieved from http://www.westga.edu/~distance/ojdla/Fall133/harmon_lambrinos_buffolino133.html
Hembree, R. (1988). Correlates, causes, effects, and treatment of test anxiety. Review of Educational Research, 58(1), 47–77.
James, R. (2013). Kryterion Online Examination Software Trial: Evaluation of Student Experience. PO72 Online Examination Trial Project. Armidale: University of New England, dehub.
James, R., McInnis, C., & Devlin, M. (2002). Assessing Learning in Australian Universities. Canberra: Australian Universities Teaching Committee.
Johnson, G., & Davies, S. (2012). Unsupervised online constructed-response tests: Maximising student learning and results integrity. Paper presented at the ascilite Conference (pp. 400–408). Wellington.
Jung, I. (2003). Cost-effectiveness of online education. In M. Moore & W. Anderson (Eds.), Handbook of distance education (pp. 717–726). London: Lawrence Erlbaum Associates.
Khare, A., & Lam, H. (2008). Assessing student achievement and progress with online examinations: Some pedagogical and technical issues. International Journal on E-learning, 7(3), 383–402.
McCabe, D. L. (2005). Cheating among college and university students: A North American perspective. International Journal for Educational Integrity, 1(1). Retrieved from http://www.ojs.unisa.edu.au/index.php/IJEI/article/view/14
New, J. (2013a). MOOC students to be identified with webcams. ecampus news, September 17th, 2013. Retrieved from http://www.ecampusnews.com/top-news/students-mooc-webcams-018/
New, J. (2013b). Has Coursera solved the catch-22 of for-credit MOOCs? ecampus news, September 19th, 2013.
Ricketts, C., & Wilks, S. (2002). Improving student performance through computer-based assessment: insights from recent research. Assessment & Evaluation in Higher Education, 27(5), 475–479.
Sandeen, C. (2013). Assessment’s place in the new MOOC world. Research & Practice in Assessment, 8 (Summer), 5–12.
Schmelkin, L. P., Gilbert, K., Spencer, K. J., Pincus, H. S., & Silva, R. (2008). A multidimensional scaling of college students’ perceptions of academic dishonesty. The Journal of Higher Education, 79(5), 587–607.
Shaffer, S. (2012). Distance education assessment infrastructure and process design based on international standard 23988. Online Journal of Distance Learning Administration, 15(2). Retrieved from http://www.westga.edu/~distance/ojdla/summer152/shaffer152.html
Stobart, G. (2001). The validity of national curriculum assessment. British Journal of Educational Studies, 49(1), 26–39.
Struyven, K., Dochy, F., & Janssens, S. (2005). Students’ perceptions about evaluation and assessment in higher education: a review. Assessment & Evaluation in Higher Education, 30(4), 325–341.
Stuber-McEwen, D., Wiseley, P., & Hoggatt, S. (2009). Point, click, and cheat: frequency and type of academic dishonesty in the virtual classroom. Online Journal of Distance Learning Administration, 12(3), 1–10.
Trenholme, S. (2006–2007). A review of cheating in fully asynchronous online courses: A math or fact-based course perspective. Journal of Educational Technology Systems, 35(3), 281–300.
Valentine, D. (2002). Distance learning: Promises, problems, and possibilities. Online Journal of Distance Learning Administration, 5(3). Retrieved from http://www.westga.edu/~distance/ojdla/fall53/valentine53.html
Watson, G., & Sottile, J. (2010). Cheating in the digital age: Do students cheat more in online courses? Online Journal of Distance Learning Administration, 8(1), 1–12. Retrieved from http://www.westga.edu/~distance/ojdla/spring131/watson131.html
Weller, M. (2002). Assessment issues on a web-based course. Assessment & Evaluation in Higher Education, 27(2), 109–116.
Whitley, B. E. (1998). Factors associated with cheating among college students: a review. Research in Higher Education, 39(3), 235–274.
Wilkinson, S., & Rai, H. (2009). Mastering the online summative assessment life cycle. In R. Donnelly & F. McSweeney (Eds.), Applied e-learning and e-teaching in higher education (pp. 347–368). Hershey: IGI Global.
Winslow, J. (2002). Cheating an online test: methods and reduction strategies. In M. Driscoll & T. Reeves (Eds.), Proceedings of World Conference on E-Learning in Corporate, Government, Healthcare, and Higher Education 2002 (pp. 2404–2407). Chesapeake: AACE.
Yates, R., & Beaudrie, B. (2009). The impact of online assessment on grades in community college distance education mathematics courses. American Journal of Distance Education, 23(2), 62–70.