I Didn’t Know That Was Cheating: Student and Instructor Views on Academic Integrity in Computer Science

Jaime Paredes Paez, Michelle Cheatham, and Leanne Wu
jaime.paredespez@ucalgary.ca, michelle.cheatham@ucalgary.ca, lewu@ucalgary.ca
University of Calgary, Calgary, Alberta, Canada

Abstract
Cheating has been a perennial problem in universities for centuries, and the rise of information technologies that ease the process is exacerbating the situation. In some fields, such as computer science, practitioners routinely build on code or other resources available online in the course of their normal workflow. The view taken on this type of action in an academic context may vary from instructor to instructor and course to course. This creates a potentially confusing environment for students. This work surveys a group of instructors and students on their opinions regarding the level of academic dishonesty inherent in 13 different scenarios, as well as the reasons that each group believes underlie students’ decisions to cheat or to refrain from doing so.

Keywords: computer science education, academic integrity, academic dishonesty, cheating, plagiarism

ACM Reference Format: Jaime Paredes Paez, Michelle Cheatham, and Leanne Wu. 2025. I Didn’t Know That Was Cheating: Student and Instructor Views on Academic Integrity in Computer Science. In The 27th Western Canadian Conference on Computing Education (WCCCE’25), April 28–29, 2025, Calgary, AB, Canada. 5 pages. https://doi.org/10.60770/a8cy-p868

1 Introduction
Ask any university professor about their experiences with academic integrity violations, and you will almost certainly be on the receiving end of a lengthy rant about the issue. The prevalence of academic dishonesty in higher education has consistently been found to be very high. Surveys have found dishonesty rates as high as 70% [10], 85% [11], or even 95% [7] at some universities.
The frequency of academic dishonesty, likely combined with the sensitivity and immediacy of the topic for those in the research and education professions, has resulted in a large body of research into many aspects of the issue. For example, existing work has considered how the prevalence of integrity violations varies by gender [5], ethnicity [3], year of study [11], and field of study [6]. Other efforts have considered the reasons why students cheat [1, 14] and what might be done to prevent it [2, 8]. Some issues of note in this research are:
• It is difficult to reproduce the results in many cases. For example, some results [5] show a significant difference in the rates of academic integrity violations between males and females, while others, e.g. [7], show no such difference. Results also vary based on field of study and the type of institution at which a survey was conducted. Much published work includes a quote such as the following: “Future studies may include replications of our findings in different types of institutions” [3], with “types of institutions” replaced by “field of study”, “cohort of students”, etc. The difficulty in replication may be because researchers have yet to identify the key variables that are tied to the likelihood of academic dishonesty. This is of particular concern because many instructors refer to university-level guidelines when discussing academic integrity in their syllabus or other course documents, but the one-size-fits-all nature of those guidelines is unlikely to be sufficient.
• Defining terms is challenging. For a survey containing questions like “have you ever committed an academic integrity violation?”
to produce meaningful results, both the researchers conducting the survey and the participants must have a shared understanding of the meaning of that term. However, there are often significant differences of opinion on what constitutes an academic integrity violation among instructors in different fields of study and even among colleagues in the same department [2]. This confusion leads to the common student complaint that “I didn’t know that (what I did) was wrong.” It may also be a factor in instructors deciding not to report an academic integrity incident, since they may be unsure that their colleagues will agree with their assessment of the behavior as dishonest [8]. One approach to dealing with this problem that has emerged in the literature is to ask about particular scenarios [3, 6, 13]. For example, “what percentage of your classmates do you think would share their homework solution with another student if asked?” rather than “what percentage of your classmates do you think cheat on their homework?”
Computer science may be a particularly challenging field with respect to academic integrity issues. Factors specific to computer science, such as a reliance on online code repositories [6], may combine with student and instructor views on digital versus analog cheating to make academic integrity issues more confounding for students in this field [3]. Section 2 of this paper goes into more detail on these issues.
In this work, we survey computer science students and instructors regarding potentially dishonest academic practices relevant to the field. Our goal is to answer the following research questions:
• Do students and instructors, both within and between groups, share a common understanding of what types of actions constitute an academic integrity violation?
• What types of actions generate the most uncertainty regarding academic integrity? Does this vary by group?
• Are the above results significantly impacted by demographic characteristics such as gender, years of experience, etc.?

2 Related Work
As mentioned previously, there is a great deal of existing work on the topic of academic dishonesty. A survey of the literature in this area can be found in [9], and a collection of recent articles on the subject that illustrates the variety of viewpoints being considered can be found in [4]. Because our current work focuses specifically on the views of computer science students and instructors regarding academic integrity violations, we focus narrowly on that topic in this section.
A key aspect of computer science that potentially distinguishes it from other fields of study and practice is its commonplace reliance on digital tools, digital artifacts, and public information stores. Programming today is frequently done within an integrated development environment that supports auto-completion of coding statements, seamless lookup of online documentation and “cookbook” code snippets, and generative AI-powered code suggestions. Additionally, there are many online learning and help sites devoted to helping people learn to program and fix bugs in their code. Popular examples include StackOverflow and GeeksForGeeks. StackOverflow in particular is a common venue for students to post non-working code and receive feedback and/or corrected code. Another characteristic of modern software development is the avoidance of re-inventing the wheel. Much application development involves piecing together existing libraries and other pieces of code in novel ways, rather than coding the entire application from scratch. As a result, a large amount of functional source code is freely available on public repositories such as GitHub. There is some speculation that students “perceive anything that can be found on the internet to be common knowledge” and that students therefore do not see any issue with using the material without attribution [12].
Computer science’s reliance on digital artifacts may impact students’ and instructors’ views of what constitutes an academic integrity violation. In 2020, Ina Blau and her colleagues at the Open University of Israel conducted a survey of 1482 students and 42 faculty members regarding the appropriateness and severity of actual penalties imposed by that university’s disciplinary committee [3]. Study participants were from the “humanities, social sciences, life and natural sciences, and exact sciences (including computer science).” The authors found that faculty perceived academic integrity violations to be more grave than did students. In most cases, both groups found analog integrity violations to be worse than digital violations, with the exception of “digital facilitation” (assisting someone else in cheating, plagiarizing, or fabricating data). Both groups suggested more severe penalties for analog versus digital offenses, including for facilitation.
Recently, Banson et al. conducted a scenario-based survey specific to computer science students [6]. They surveyed 160 students (no instructors) from 15 undergraduate computer science courses. Participants were presented with 15 scenarios involving “collaboration or help seeking” in CS courses and asked to rank their acceptability on a scale of 0 to 100. The authors point out that standard dishonesty policies are typically black and white, and yet students frequently rated some scenarios as much more acceptable than others. Furthermore, some scenarios, such as “asks a friend about a homework assignment using their actual code as a reference, and the friend gives tips based on the student’s questions”, generated significant disagreement amongst the respondents, suggesting some uncertainty regarding expectations.
This uncertainty was present even in scenarios that most computer science professors explicitly forbid, such as “asks a friend about a homework assignment, and the friend gives tips by revealing their code.” The next two sections outline our own scenario-based survey administered to computer science students and instructors and its results, which we will contextualize in relation to the previous work described here.

3 Methodology
Along with several other researchers, we have adopted the approach of asking about specific scenarios rather than abstract questions to explore individuals’ attitudes towards academic integrity issues [3, 6, 13]. This allows us to avoid issues arising from differences between respondents in their understanding of terms like “academic integrity violation” and “plagiarism”. It also enables us to focus on concrete circumstances in which computer science students routinely find themselves. The research team collaboratively developed 13 hypothetical scenarios by drawing on critical issues that have been frequently observed within the Faculty of Computer Science. For example, one scenario is “While working on a programming assignment, you find a website with material that addresses the brief for the programming assignment. You copy two or three important lines of code from the website and use them in your code. You do not acknowledge the source of this code.” These scenarios were often highlighted by instructors in one-on-one discussions, faculty meetings, and department workshops. They largely overlap with those found in [6] but are tailored to our institution and department. Both students and instructors were asked about this set of scenarios, with minor wording changes for clarity.
Students rated how acceptable each action is on a 5-point Likert scale from “acceptable” to “not acceptable”, while instructors were asked if they consider the action a breach of academic integrity (on a 5-point scale from “definitely yes” to “definitely no”). Both groups were also asked about common reasons behind academic integrity violations. These reasons again reflect the experiences and hypotheses of faculty in our department but largely overlap with those discussed in the literature, such as [6]. Each group was given a list of the same 15 potential reasons that a student might cheat (e.g. “not enough time”, “everyone does it”) and ten potential reasons a student might refrain from cheating (e.g. “fear of being found out”, “fairness to other students”) and asked to assess their likelihood on a 5-point scale. The surveys also contained questions about related issues, such as students’ perceptions of the efforts made by instructors and the university to prevent cheating and instructors’ approaches to discouraging academic integrity incidents; however, due to space limitations, these elements will be discussed in future publications. Unfortunately, the student survey went through internal review board approval prior to the rise of generative AI tools such as ChatGPT, so that topic was not covered in this study.
We recruited a total of 137 undergraduate students and 19 instructors from the Computer Science faculty to participate in our study. The student cohort included 112 participants from an in-person 200-level theory class who completed the survey in November 2022 and 25 participants from a 300-level online theory class who completed it in March 2023. Instructors were recruited during a departmental meeting in February 2024 and were given a month to complete the survey.
All participation was voluntary, and students were given the option to enter a raffle for a $20 bookstore gift card as an incentive.

4 Results
Table 1 shows the mean and standard deviation among the instructors and students regarding the acceptability of each scenario, while Tables 2 and 3 show both groups’ views on the applicability of a set of reasons to cheat and not to cheat, respectively.

4.1 Scenarios
There was fairly wide variation in the instructor responses regarding the acceptability of different actions: the standard deviation was approximately a full Likert scale level on six of the 13 scenarios. The disagreement covered both actions considered generally acceptable, like modifying an instructor’s solution to a similar problem and discussing an assignment in a group but writing it up alone, and those considered unacceptable, such as using two or three lines of code from a website without attribution and showing a friend their solution after obtaining a promise from them not to copy it. There was total agreement among instructors that paying someone else to solve an assignment is definitely cheating.
The variation among the students’ responses was higher than among the instructors, but generally involved the same scenarios. For instance, three of the four highest deviations among student responses were also among the four most variable instructor responses. The exception was that students disagreed more on the acceptability of showing a friend their solution after obtaining a promise not to copy it and much less on using a friend’s solution from a previous term. Students did not come close to total agreement amongst themselves on any scenario.
Opinions on the acceptability of various actions aligned fairly well across instructors and students. Agreement was strongest on the acceptability of going to office hours (considered fine) and posting an assignment solution to GitHub prior to the deadline (considered unacceptable / probably cheating).
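The per-scenario figures in Table 1 are simple Likert aggregates. As an illustration only (the response values below are hypothetical, not the study’s data), the mean and standard deviation for a scenario can be computed as:

```python
import statistics

# Hypothetical Likert responses (1 = not acceptable ... 5 = acceptable);
# these are illustrative values, not the study's actual data.
responses = {
    "Go to office hours": [5, 5, 4, 5, 5],
    "Buy solution on contracting website": [1, 1, 1, 1, 1],
}

for scenario, scores in responses.items():
    mean = statistics.mean(scores)
    # The paper does not state whether sample or population standard
    # deviation was reported; stdev() below is the sample version.
    sd = statistics.stdev(scores)
    print(f"{scenario}: mean={mean:.2f}, sd={sd:.2f}")
```

A standard deviation of 0.00, as the instructors produced for the contract-cheating scenario, corresponds to unanimous responses.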
Opinions differed most strongly between the two groups on using a friend’s solution for a similar assignment from a previous term and showing a friend their solution after the friend promises not to copy it. In both of these cases, instructors considered the action “definitely cheating”, while students considered it merely “unacceptable” rather than “not acceptable at all”. Interestingly, instructors were more lenient than students regarding the acceptability of asking for an 84.4% to be rounded up to an A (4.63 versus 3.88). A couple of scenarios (using a friend’s solution from a previous term and buying a solution on a contracting website) had substantially more variation amongst the student responses than those of the instructors, but in general, situations that showed division amongst the instructors did so for students as well, and vice versa.

4.2 Reasoning
The standard deviations indicated that there was less consensus, both within and between the instructor and student groups, regarding the impetus for students to cheat or to refrain from doing so. However, there was agreement between the two groups regarding the top three reasons for cheating (not enough time, workload too high, and to avoid failing) and on two of the three least likely reasons (monetary reasons and “everyone does it”). The only difference amongst these results was that few instructors felt that health issues led students to cheat, while few students believed that laziness was a common cause. Likewise, there was a fair amount of overlap regarding the reasons students decide not to cheat, with the ability to get good marks without cheating and pride in work topping both lists, and religious beliefs, fairness to other students, and not knowing how to go about cheating at the bottom for both.
Interestingly, instructors felt that moral values played a role in not cheating for many students, while the students suggested that the penalties they would face if caught were a more significant factor.

4.3 Demographics
The data depicts a young, predominantly male, largely domestic, full-time student body. Among the 137 student respondents, the two most common birth years were 2001 and 2004 (25.5% and 26.3%, respectively). Approximately two-thirds (65.7%) of the respondents are male, 81% are domestic students, and 98.5% are enrolled full-time. First-year students dominate the sample (73%), with progressively fewer students in higher years. Language diversity is evident, with 40.1% of students speaking only English, while 48.9% speak at least one language besides English. Motivations for enrollment vary widely: 44.5% express a keen interest in the subject matter, 19.7% are driven by career prospects, and 15.3% seek to develop coding skills.
Of the 19 instructors surveyed, 21% completed their Ph.D. between 1990 and 1995, and another 21% between 2012 and 2015. A significant proportion (63%) had not taught at any other institution in the past five years. The range of post-secondary teaching experience varied: 32% had less than 10 years of experience, 21% had more than 10 years, and another 21% had over 20 years of teaching experience. The majority of instructors held the rank of full professor.
We ran ANOVA tests to look at a wide variety of demographic variables for both instructors and students, including rank (assistant, associate, full, limited, sessional), track (teaching or research), years taught (less than 10 to greater than 30 years), gender (self-disclosed as masculine or feminine), status (Canadian/domestic or international), language, full- versus part-time enrollment, online versus in-person attendance, and student motivation for enrollment, and discovered no statistically significant difference amongst these groups (p > 0.05) regarding perceptions of the acceptability of the various scenarios.

Scenario                                          Instructors       Students
                                                  Mean   Std Dev    Mean   Std Dev
Round 84.4% up                                    4.63   0.76       3.88   1.08
Parent helps with online test                     1.32   0.48       1.64   0.87
Parent advises on group project                   4.11   1.05       4.27   0.98
Modify instructor solution                        3.89   0.90       4.05   1.07
Use 2 lines of code from website                  2.11   1.45       1.84   1.12
Discuss assignment in group, write up alone       3.95   1.08       3.69   1.19
Ask for deadline extension                        4.83   0.51       4.58   0.78
Buy solution on contracting website               1.00   0.00       1.20   0.66
Use friend’s solution from previous term          1.21   0.42       2.30   1.18
Use 2 lines of code from TA                       4.84   0.50       4.44   0.85
Show friend solution after promise not to copy    1.63   1.07       2.57   1.24
Go to office hours                                4.89   0.46       4.82   0.65
Post solution on GitHub before deadline           1.84   0.96       1.80   1.04
Table 1: Assessment of academic integrity concerns of hypothetical scenarios on a scale of 1 (unacceptable) to 5 (acceptable).

Reason to cheat                                   Instructors       Students
                                                  Mean   Std Dev    Mean   Std Dev
Not enough time                                   4.42   1.02       2.74   1.29
Workload too high                                 4.37   0.83       2.90   1.37
Else will fail assignment                         4.21   0.63       3.26   1.39
Else will fail course                             4.37   0.83       3.56   1.39
Lazy                                              3.05   1.13       1.71   0.96
Everyone does it                                  2.89   1.24       1.94   1.12
To get better marks                               3.79   0.98       2.58   1.28
Parental pressure                                 3.22   0.81       2.01   1.31
Cannot afford to fail                             3.83   0.86       3.22   1.41
Assignment too difficult                          3.42   1.12       2.55   1.31
To help a friend                                  3.47   0.91       2.27   1.23
Health issues                                     2.58   1.26       2.58   1.36
Exams too hard                                    2.95   1.08       2.54   1.32
Afraid of failing                                 4.05   0.71       2.74   1.39
Monetary reasons                                  2.21   1.18       1.83   1.21
Table 2: Assessment of the prevalence of reasons behind cheating, from 1 (not at all likely) to 5 (highly likely).

Reason not to cheat                               Instructors       Students
                                                  Mean   Std Dev    Mean   Std Dev
Want accurate assessment                          3.05   1.08       4.00   1.14
Pride in work                                     4.05   0.97       4.10   1.07
Can get good marks                                4.42   0.61       4.41   0.92
Against moral values                              4.11   0.81       3.80   1.17
Against religious beliefs                         2.74   1.10       2.55   1.52
Fear getting caught                               3.84   0.69       3.83   1.25
Never considered it                               3.00   0.94       3.23   1.27
Don’t know how to                                 1.68   0.75       2.71   1.36
Fairness to other students                        2.37   0.96       3.00   1.41
High penalty if caught                            3.11   1.24       4.19   1.10
Table 3: Assessment of the prevalence of reasons behind refraining from cheating, from 1 (not at all likely) to 5 (highly likely).

5 Discussion
Returning to our research questions, the results of this study do show a common understanding between instructors and students regarding the acceptability of most of the hypothetical actions from an academic integrity standpoint and the most common reasons that students do or do not cheat, in that they generally consider the same actions acceptable/unacceptable and the same reasons to cheat or not cheat to be likely/unlikely. These results differ from those presented in [3], which did find large differences in the perceived severity of incidents between instructors and students; however, that work focused on academic integrity incidents that involved digital resources (e.g. taking code from online rather than sharing code with a friend), and that may have revealed a stronger generational gap than the scenarios we used in this study. None of our results showed a statistically significant difference for any of the demographic traits considered. This differs from the work in [11], which found a difference in viewpoint based on year of study, and [3], which found significant but small differences based on gender, ethnicity, and language (Hebrew versus Arabic speaking).
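The paper reports the ANOVA results but not the implementation used. As a minimal sketch under that caveat, the F-statistic underlying a one-way ANOVA across demographic groups can be computed directly; in practice the p-value would then be taken from the F distribution with k-1 and n-k degrees of freedom (e.g. via scipy.stats.f.sf).

```python
def one_way_anova_f(groups):
    """F-statistic for a one-way ANOVA.

    `groups` is a list of lists of Likert ratings, one list per
    demographic category (e.g. domestic vs. international students).
    """
    all_vals = [x for g in groups for x in g]
    n, k = len(all_vals), len(groups)
    grand_mean = sum(all_vals) / n
    # Between-group sum of squares: distance of each group mean from
    # the grand mean, weighted by group size.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of ratings inside each group.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical ratings for one scenario, split by a demographic variable.
f_stat = one_way_anova_f([[4, 5, 4, 5], [4, 4, 5, 5]])
```

A small F-statistic (group means close together relative to within-group spread) yields a large p-value, which is the pattern behind the p > 0.05 results reported above.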
Both of these studies had a wider variation among the students surveyed in terms of years of study and major subject area than the work presented here, which may account for some of this difference.
While the results showed general overall agreement on the relative acceptability of various actions, the standard deviations amongst the responses for both groups indicate that much of this issue is far from clear cut for either instructors or students. This matches prior results in the literature, including [2, 6]. Discussion amongst faculty members in our department brought up the point that what is acceptable in one course or for one assignment might be a clear academic integrity violation in a different context. For example, if a student writing a program for a game development course uses a depth-first maze navigation algorithm as a small part of the overall work, that might be considered more acceptable than if a student in a data structures and algorithms course used the same code in their assignment, since the latter might directly thwart the learning objective of the assignment. This idea is also mentioned in [2]. Because of issues like these, it is very important that instructors clearly explain the bounds of acceptable behavior, at the level of individual assignments or at least individual courses. While more work remains to be done regarding the most effective way to convey these boundaries to students, it is clear that a one-size-fits-all approach, such as linking to the generic university academic integrity guidelines in the course outline or syllabus, is unlikely to be sufficient [2]. As noted previously, students viewed reusing a friend’s solution on a similar assignment from a previous term as substantially more acceptable than instructors did.
This is in line with prior results in the literature, such as [11], in which students indicated that instructors should develop entirely new assignments every term (and even that not doing so may be indicative of laziness on the instructor’s part). This suggests that it may be helpful for instructors to specifically state that this behavior is unacceptable, as well as to explain the pedagogical reasons for sometimes reusing the same or similar assignments across terms. Both groups indicated that lack of time and high workload are common reasons that students cheat; this may imply that seminars or other resources regarding effective time management and organizational skills could be useful for students, along with a clear policy regarding deadline extensions [2]. One particularly interesting aspect of these results is that students considered high penalties for getting caught to be among the top three likely reasons to avoid cheating, but instructors did not, while instructors considered a student’s moral values to be in this set but students did not. This is perhaps a bit too convenient. It is well established that instructors regularly skip reporting academic integrity violations [2, 6, 11]. Our results indicate that instructors may view following through on penalties as unnecessary or ineffective, believing that they can instead rely on students’ moral compasses, but the student responses indicate that this may be wishful thinking.

6 Conclusions
The survey results presented here indicate that while the participating instructors and students had some internal variability in their views regarding academic integrity issues, they generally agreed on both the overall assessment of the acceptability of various common academic scenarios and the likelihood of the various reasons underlying a student’s decision to cheat or not. These results did not significantly differ based on demographics.
The outcome of this analysis reflects the need for instructors to clearly communicate what is considered acceptable versus out of bounds on an assignment-by-assignment, or at least course-by-course, basis. Instructors should consider making it particularly clear when and why an assignment from a previous term is being reused. Seminars or other resources regarding time management and organizational skills may be particularly helpful for students, along with clear policies and procedures regarding deadline extensions. Finally, it is important for instructors to report academic integrity violations when they occur, in order to deter more such activity in the future.

Acknowledgments
The authors sincerely thank the many individuals within the University of Calgary who assisted in this work, including members of the CPSC Academic Integrity Working Group: Nathaly Verwaal, Nelson Wong, Jonathan Hudson, Richard Zhao, Pavol Federl, Wayne Eberly, and Lora Oehlberg; Sarah Elaine Eaton of the Werklund School of Education; Bronwen Wheatley of the Department of Chemistry/Earth, Energy, Environment; Nancy Chibry of the Faculty of Science; and recent graduates Alexanna Little, Samuel Osweiler, and Matthew Buhler. We also thank the University of Calgary Taylor Institute for Teaching and Learning for financial support (Grant 2302-27).

References
[1] Meital Amzalag, Noa Shapira, and Niva Dolev. 2021. Two sides of the coin: lack of academic integrity in exams during the corona pandemic, students’ and lecturers’ perceptions. Journal of Academic Ethics (2021), 1–21.
[2] Susan L Bens. 2022. Helping students resolve the ambiguous expectations of academic integrity. In Academic integrity in Canada: An enduring and essential challenge. Springer International Publishing, Cham, 377–392.
[3] Ina Blau, Shira Goldberg, Adi Friedman, and Yoram Eshet-Alkalai. 2021.
Violation of digital and analog academic integrity through the eyes of faculty members and students: Do institutional role and technology change ethical perspectives? Journal of Computing in Higher Education 33 (2021), 157–187.
[4] Sarah Elaine Eaton and Julia Christensen Hughes. 2022. Academic integrity in Canada: An enduring and essential challenge. Springer Nature.
[5] Adi Friedman, Ina Blau, and Yoram Eshet-Alkalai. 2016. Cheating and Feeling Honest: Committing and Punishing Analog versus Digital Academic Dishonesty Behaviors in Higher Education. Interdisciplinary Journal of E-Learning & Learning Objects 12 (2016).
[6] Banson et al. 2022. Perceptions of Computing Students on Academic Dishonesty. International Society for Technology in Education (ISTE’22).
[7] Bob Ives, Madalina Alama, Liviu Cosmin Mosora, Mihaela Mosora, Lucia Grosu-Radulescu, Aurel Ion Clinciu, Ana-Maria Cazan, Gabriel Badescu, Claudiu Tufis, Mihaela Diaconu, et al. 2017. Patterns and predictors of academic dishonesty in Romanian university students. Higher Education 74 (2017), 815–831.
[8] Cheryl A Kier and Cindy Ives. 2022. Recommendations for a balanced approach to supporting academic integrity: perspectives from a survey of students, faculty, and tutors. International Journal for Educational Integrity 18, 1 (2022), 22.
[9] Bruce Macfarlane, Jingjing Zhang, and Annie Pun. 2014. Academic integrity: a review of the literature. Studies in Higher Education 39, 2 (2014), 339–358.
[10] Lori Olafson, Gregory Schraw, Louis Nadelson, Sandra Nadelson, and Nicolas Kehrwald. 2013. Exploring the judgment–action gap: College students and academic dishonesty. Ethics & Behavior 23, 2 (2013), 148–162.
[11] Kelley A Packalen and Kate Rowbotham. 2022. Student insight on academic integrity. In Academic integrity in Canada: An enduring and essential challenge. Springer International Publishing, Cham, 353–375.
[12] Jay Parkes and Dawn Zimmaro. 2017.
The college classroom assessment compendium: a practical guide to the college instructor’s daily assessment life. Routledge.
[13] Lee-Ann Penaluna and Roxanne Ross. 2022. How to talk about academic integrity so students will listen: addressing ethical decision-making using scenarios. In Academic integrity in Canada: An enduring and essential challenge. Springer International Publishing, Cham, 393–409.
[14] Hongwei Yu, Perry L Glanzer, and Byron R Johnson. 2021. Examining the relationship between student attitude and academic cheating. Ethics & Behavior 31, 7 (2021), 475–487.