Prof. Gregory S. Ching

National Chengchi University, Taiwan
Tutorial Title: Rethinking Generative AI in Student Assessment
Abstract
With increasingly accessible platforms, students are relying on generative AI (GAI) tools to generate essays and solve problems, not merely as aids but as primary means of completing assessments, raising concerns about academic integrity and authentic learning outcomes (Cotton et al., 2023). The misuse of GAI tools can lead to over-reliance, reduced critical thinking, and compromised assessment validity. Detecting AI-generated content presents a significant challenge, as traditional plagiarism detection tools are often ineffective against such text. Newer tools like GPTZero, Turnitin AI detection, and open-source detectors show promise, though their reliability and accuracy remain debated (Khalil & Er, 2023).
A balanced approach requires not only refining detection methods but also rethinking assessment design. Educators must adopt a dual approach: first, implementing robust detection and verification systems, and second, redesigning assessments to encourage authentic student engagement. This could include oral defences, process-based submissions, peer reviews, and critical reflection components.
It is important to explore pedagogical frameworks that integrate GAI ethically into the learning process, positioning it as an assistive tool rather than a shortcut to bypass effort (Zawacki-Richter et al., 2022). Policy development and AI literacy training for both staff and students are also crucial to ensure equitable and transparent use. This workshop will explore these issues, with participants working towards a framework that views GAI as a supportive tool rather than a substitute for learning.
This tutorial will critically examine the implications of GAI use in student assessments. It begins by defining the nature and capabilities of GAI, then explores how students use these tools and the growing difficulty of detecting such use. Current detection strategies and their limitations are then discussed, including AI detection tools (e.g., Turnitin AI, GPTZero), stylometric analysis, and unexpected shifts in writing quality or structure. The tutorial then turns to rethinking assessment design, shifting towards authentic, process-based tasks that emphasize reflection, creativity, and originality.
This workshop argues that the solution is not only technical but deeply pedagogical. We propose a balanced, dual approach: (1) refining AI detection tools and protocols, and (2) redesigning assessment to foreground process, creativity, and reflective thinking. Techniques such as oral defences, process-based portfolios, peer reviews, and critical reflection essays can re-centre students as active learners, not passive AI users. Drawing from current studies and institutional practices, including innovative hands-on assessment approaches like those at Nanyang Technological University (NTU) Singapore, the session will explore how formative, collaborative, and presentation-based tasks can better reflect true learning.
To structure AI integration, participants will engage with the AI Assessment Scale (Perkins et al., 2024), a five-level framework ranging from “No AI” to “AI Exploration,” allowing instructors to align tasks with pedagogical goals and the desired level of AI involvement. Bloom’s Revised Taxonomy will also be used to emphasize higher-order skills, such as analyzing, evaluating, and creating, that are less likely to be replaced by GAI tools and are essential for future-ready learners.
Participants will then review emerging AI detection tools, their capabilities, and limitations; analyse real-world assessment cases using the AI Assessment Scale; engage in the collaborative redesign of one of their own assessment tasks; and take part in small-group discussions to propose holistic solutions that address the technical, pedagogical, and ethical dimensions of GAI integration.
Participants in the workshop will discuss research and explore pedagogical strategies that integrate GAI as a support mechanism rather than a substitute. This includes AI literacy education, co-creation models, transparent AI usage policies, and restructured assessments that evaluate critical engagement with AI outputs rather than passive use. Participants will work in groups to discuss and propose solutions to address each of these areas.
By the end of the session, participants will have co-developed a preliminary framework for managing the rise of GAI in education, balancing innovation, equity, integrity, and student agency. They will leave with adaptable strategies, model assessment designs, and policy considerations for fostering an educational culture where generative AI is positioned as a supportive partner, not a substitute for learning.
Bio
Dr. Gregory Ching
Dr. Gregory Ching is an Associate Professor at the Graduate Institute of Educational Administration and Policy, National Chengchi University (NCCU), Taiwan. His research interests include educational technology, student assessment, and higher education development. He is currently exploring the implications of generative AI in academic settings, with a focus on assessment design, academic integrity, and student engagement. Dr. Ching has published and presented widely in the fields of education and interdisciplinary studies.