Written by VSTE Professional Services member Dr. Tysha Batts (Instagram: tyshsan; LinkedIn: Tysha Batts)
The fear that AI will destroy academic integrity is real. Many educators raise the concern that AI allows students to cheat rather than do their own work. Researchers are even studying student behavior before and after the release of ChatGPT. Lee et al. (2024) found that students at both private and public high schools were using ChatGPT to generate ideas for papers and assignments and to explain new concepts.
However, the real discussion is how we, as educators, help students use AI ethically. Used ethically, AI can serve as a co-pilot. As we know, AI cannot build and refine itself yet; it is still trained and guided by a human in the loop. This human-in-the-loop model is essential when students use AI in their assignments. In a student's workflow, it means the student remains the editor, evaluator, and final authority on the content. AI does not replace thought; it accelerates brainstorming, first drafting, and the search for alternative perspectives.
So how can this be done? It starts with guiding our students. We need to determine how AI will support learning, and students need to understand AI's strengths and weaknesses. We also need to support academic integrity by giving students clear guidelines on when they can and cannot use AI in their assignments. In co-pilot mode, acceptable uses might include summarizing long texts, generating alternative titles for an essay, or debugging code. In restricted use (integrity mode), students need to know, for example, that AI is prohibited for generating a final, graded response to a prompt. Students also need a "cite and reflect" policy in which they include a short AI-use reflection statement at the end of each assignment. Finally, educators must model this approach by demonstrating AI use in their own planning, material creation, and research.
We know that AI detectors are inconsistent and unreliable. The focus should not be on surveillance but on reimagining assignments. Assignments need to promote the skills of an AI-ready graduate (ASCD, n.d.): we want students to be learners, researchers, synthesizers of information, storytellers, ideators, and connectors. To achieve this goal, we need assignments that demand critical thinking, synthesis of real-world or personal data, and unique perspectives that AI cannot provide.
Lee, V. R., Pope, D., Miles, S., & Zárate, R. C. (2024). Cheating in the age of generative AI: A high school survey study of cheating behaviors before and after the release of ChatGPT. Computers and Education: Artificial Intelligence, 7, 100253. https://doi.org/10.1016/j.caeai.2024.100253
ASCD. (n.d.). Profile of an AI-ready graduate. https://www.ascd.org/blogs/profile-of-an-ai-ready-graduate