Transformation from Traditional Assessments to AI-Adapted Assessments | Teaching and Learning Centre, Lingnan University


Transformation from Traditional Assessments to AI-Adapted Assessments


This page outlines the transition from traditional assessments to AI-adapted assessments, highlighting the integration of generative AI (GenAI) tools that provide real-time feedback and personalised learning plans. On their own, traditional assessments are increasingly inadequate because of their limited adaptability and lack of contextual flexibility. By leveraging GenAI tools designed for today’s AI-driven world, educators can deliver real-time feedback and tailor learning plans to individual needs with greater precision.

Traditional Assessments
Individual Written Report
  • Heavy weighting on the written report
  • Validity drift in learning measurement
  • Authorship and integrity uncertainty
  • Limited visibility of learning processes
  • Reduced quality and diagnostic value of feedback
  • Grade inflation and overestimation of mastery
AI-Adapted Assessments
AI-Adapted Presentation
  • Development of real‑world communication skills
  • Higher-order thinking
  • Reduction of plagiarism risk
AI‑Informed Reflective Written Report
  • Carrying reduced weighting
  • Emphasising reflections, oral defences, or mini-vivas
  • Providing real-time feedback on clarity, logic, and structure
  • Supporting writers with language scaffolding
  • Recording AI use in a disclosure section
Traditional Assessments
Capstone Project
  • Validity drift in assessing capstone learning outcomes
  • Authorship, integrity, and contribution ambiguity
  • Limited visibility of the learning and design process
  • Reduced quality and specificity of supervisory feedback
  • Grade inflation and overestimation of capstone mastery
  • Equity and access disparities in AI use
AI-Adapted Assessments
AI-Adapted Capstone Project
  • Process-rich evidence of learning
  • Authenticity-centred task design
  • AI support, not substitution
Traditional Assessments
Participation
  • Validity drift in measuring learning outcomes
  • Authorship, integrity, and accountability gaps
  • Reliance on passive or product‑based participation
  • Reduced visibility into learning processes
  • Grade inflation and surface‑level achievement
  • Equity and access concerns in AI use
AI-Adapted Assessments
Enhanced Participation
  • Deepening of discussion and critical thinking
  • Student engagement via text, voice, or assisted modes
  • Early flagging of disengaged students
Traditional Assessments
Questionnaire Customisation
  • Time‑consuming and resource‑intensive
  • Validity limitations in measuring learning outcomes
  • Limited insight into authorship and response authenticity
  • Low diagnostic value and generic feedback
  • Inflexibility regarding learner context and emerging knowledge
  • Overestimation of understanding
AI-Adapted Assessments
AI‑Adapted Questionnaire Customisation
  • Dynamic personalisation
  • Context-aware language
  • Rapid content generation
  • Enhanced open-ended question handling
  • Integration of emerging trends
Traditional Assessments
Mid-term Examination
  • Difficulty in assessing higher‑order thinking
  • Validity drift in assessing intended learning outcomes
  • Authorship and integrity challenges
  • Limited visibility of learning processes
  • Grade inflation and misrepresentation of mastery
  • High risk of AI-assisted cheating in unsupervised settings
AI-Adapted Assessments
AI‑Adapted Individual Mid‑Term Examination
  • AI-resistant, process-based assessments (oral, live, multi-step)
  • Emphasis on higher‑order thinking over recall
  • Clear expectations and integrity guidelines for AI use
Traditional Assessments
Final Examination
  • Vulnerability to AI‑assisted cheating in online formats
  • Validity drift in measuring intended learning outcomes
  • Surface learning and memorisation bias
  • Misalignment with real‑world and graduate skills
  • Limited feedback and learning value
  • Grade inflation and misrepresentation of competence
AI-Adapted Assessments
AI-Adapted Final Examination
  • Authenticity‑focused exam design
  • Increased focus on higher‑order thinking
  • Process‑based assessment element
Traditional Assessments
Group Written Report
  • Heavy weighting of written reports
  • Validity drift in measuring group and individual learning
  • Authorship, integrity, and contribution ambiguity
  • Difficulty in assessing individual understanding
  • Loss of visibility of collaborative processes
  • Grade inflation and overestimation of team performance
AI-Adapted Assessments
AI-Adapted Group Written Report
  • Reduced weighting for written reports
  • Emphasising reflections, presentations, or oral defences
  • Contribution logs or reflections to ensure authentic understanding
  • Group documentation of when and how AI is used
  • Human-led final analysis with AI assistance
AI-Adapted Group Presentation
  • Preventing AI substitution through personalised, live, contextual tasks
  • Documenting AI tool usage by groups
  • Including individual follow-up questions or reflections
