
Can we stay one step ahead of cheaters? A field experiment in proctoring online open book exams

As more institutions of higher learning expand their offerings of online courses, online assessment has become an important topic of discussion. Although online assessments can be very beneficial, cheating in the absence of a proctor poses a threat to academic integrity, and many proctoring solutions have been developed to address this challenge. This paper presents two field experiments analyzing the effects of proctoring on exam scores: one in a face-to-face class and the other in an online class. Two proctoring methods were compared: live proctors and web-based proctoring. In each class, best practices were used to minimize cheating, and students were informed in advance which exams would be proctored. Our results show that students whose exams were not proctored scored over 11% higher on average than those whose exams were proctored. However, the results varied significantly: live proctors in the face-to-face class had a much larger effect on test scores than web-based proctoring in the online class. We compare the two testing environments to uncover possible determinants of this difference, including the ease of collaboration, test anxiety, and information sharing over the testing period.

Introduction
Online assessments offer instructors several key advantages over comparable traditional in-class exams, including lower administrative costs, a greater variety of multimedia assessment tools, convenience for students, and faster analysis of results. However, the main hesitation instructors have about administering major exams online is the potential risk of academic dishonesty.
Many instructors believe that online assessments are an invitation to cheat and therefore avoid these formats. Although anecdotal evidence suggests that cheating is pervasive in online exams, the literature has largely focused on traditional in-class exams rather than on cheating in an online setting. This gap is especially relevant as institutions of higher learning dramatically expand their online course and degree offerings.
In this paper, we analyze cheating behavior in online exams by conducting two field experiments. Both are randomized controlled trials in large-enrollment microeconomic principles classes in which students completed all exams online in a learning management system, one of which was proctored. The first experiment took place in a face-to-face class where students took their proctored exam on their laptops but in the presence of a live proctor in the classroom. The second experiment took place in a fully online class where students took their proctored exam in any location using a web-based proctor. The exam to be proctored was randomized, and students were informed of it in advance.
Our results show that students who took exams without supervision scored over 11% higher on average than those taking the exam under supervision. However, we find that the difference in scores between proctored and non-proctored exams is much smaller when a web-based proctoring tool is used instead of an in-class live proctor. We explore several mechanisms that could potentially explain our results, including test anxiety, information sharing, and collaboration on exams. Although our data do not rule out the possibility of anxiety explaining these results, we present evidence that collaboration among peers is the primary explanation for the differences in scores between proctored and non-proctored exams. Therefore, in our setting in which students have full access to course materials during exams, collaboration is how students cheat despite rules prohibiting these actions during exams.
Moreover, we find larger proctoring effects on performance among lower-achieving students and among underclassmen, but no significant differences by gender. The negative relationship between cheating and measures of student ability builds upon recent experimental research on student cheating by Yaniv et al. (2017), which examines the costs and benefits students place on external rewards (e.g., grades) versus internal rewards (e.g., the satisfaction of arriving at answers themselves).
This paper explores various factors and student characteristics to gain insights into the nature of cheating behavior in online exams. The remainder of the paper is organized as follows: Section 2 reviews the literature on online testing and cheating behavior. Section 3 describes the field experiment design. Section 4 presents the empirical model. Section 5 presents the baseline results, while Section 6 examines heterogeneous responses by student characteristics. Section 7 concludes and describes extensions for further research.

Background on online assessments
The implementation of online assessments became a necessity as institutions expanded distance learning and other online-enhanced courses over the past two decades. Even prior to the COVID-19 crisis in early 2020 that forced nearly all education to be moved online, most institutions of higher learning had already invested in tools and training to offer more courses and degrees online. Part of this phenomenon is explained by increased competition for students and space restrictions on campus.

Experiment design
This study was conducted in two large enrollment sections of microeconomic principles at a large public university in Illinois. One was a face-to-face class that took place in spring 2017 and the other was an online class in the prior winter term. The two classes covered the same content but differed in the length of the term, number of students enrolled, and number of exams.
The face-to-face class lasted 16 weeks. Students were required to take three non-cumulative midterm exams (one of which was proctored).

Empirical model
To identify the average causal effect of taking the exam with a proctor, we estimate the following fixed effects model for the overall sample:

Score_ij = β0 + β1 Proctor_ij + μ_i + τ_j + ε_ij

where Score_ij is the score of student i on exam j, Proctor_ij is an indicator variable that takes the value of 1 if the student took the exam with a proctor, μ_i and τ_j are student and exam fixed effects, and ε_ij is the error term. Although we have a randomized setup, we address the potential concern that student performance on an exam may be correlated with the presence of a proctor (i.e.,
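As a rough illustration of this identification strategy, the sketch below simulates toy data (the class sizes, score levels, and the -8 point proctoring effect are all hypothetical, not the paper's data) and recovers the proctoring effect from within-exam mean differences, which is the role β1 plays under random assignment:

```python
import random
from statistics import mean

random.seed(0)

# Toy data: 300 students, 3 exams; which exam is proctored is randomized
# per student, mirroring the experimental design. All numbers are made up.
records = []  # (student, exam, proctored, score)
for student in range(300):
    proctored_exam = random.randrange(3)
    ability = random.gauss(75, 8)  # persistent student effect (mu_i)
    for exam in range(3):
        proctored = exam == proctored_exam
        score = ability + random.gauss(0, 5) - (8 if proctored else 0)
        records.append((student, exam, proctored, score))

# Under random assignment, averaging the proctored-vs-unproctored gap
# within each exam recovers the proctoring effect.
gaps = []
for exam in range(3):
    proc = [s for (_, e, p, s) in records if e == exam and p]
    unproc = [s for (_, e, p, s) in records if e == exam and not p]
    gaps.append(mean(proc) - mean(unproc))

effect = mean(gaps)
print(round(effect, 1))  # negative, close to the simulated -8
```

This within-exam comparison is a simplification of a full fixed-effects regression, but it conveys why randomizing the proctored exam lets the mean gap be read as a causal effect.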

Results
We begin by exploring the effects of taking the midterm exam with a proctor on the average exam score itself. Table 3 presents the mean score by proctoring condition. Students who took the exam with a live proctor on Exam 1 scored on average 11.1% lower compared to students who took the exam without a proctor. This difference increased to 11.2% on Exam 2 and 15.3% on Exam 3. In the online class, which used a web-based proctor, the difference between students who took the exam with a proctor and
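A minimal sketch of how such a gap can be computed, assuming the percentages are expressed relative to the unproctored mean (the two mean scores below are hypothetical, not values from Table 3):

```python
# Hypothetical exam means (not the paper's data), illustrating the
# percentage-gap calculation relative to the unproctored mean.
mean_unproctored = 81.0
mean_proctored = 72.0

gap_pct = (mean_unproctored - mean_proctored) / mean_unproctored * 100
print(f"Proctored students scored {gap_pct:.1f}% lower")  # 11.1% lower
```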

Heterogeneity
The average effect of a proctor could mask substantial heterogeneity. To explore heterogeneity, we first focus on ACT scores, which serve as a proxy for student ability. We divide students into three groups according to their ACT scores: below 26 (Low), from 26 to 30 (Medium), and greater than 30 (High). These cutoffs roughly correspond to the 80th percentile and below, the 80th to 95th percentile, and above the 95th percentile, respectively, based on national percentiles reported by ACT.
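The binning described above can be written directly from the stated cutoffs (the function name is illustrative, not from the paper):

```python
def act_group(act_score: int) -> str:
    """Bin an ACT score into the three ability groups from the text:
    below 26 -> Low, 26 to 30 -> Medium, above 30 -> High."""
    if act_score < 26:
        return "Low"
    if act_score <= 30:
        return "Medium"
    return "High"

print([act_group(s) for s in (22, 26, 30, 33)])
# ['Low', 'Medium', 'Medium', 'High']
```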

Conclusion
Despite significant anecdotal evidence of collaboration and other cheating behavior among college students during exams, little empirical evidence has been collected beyond self-reported survey data. This paper presents the results of two randomized field experiments assessing the effect of proctoring on exam scores in face-to-face and online settings. We find that students who were not subject to proctoring scored on average 11% higher compared to those who were required to use a proctor.
