Transformation from Traditional Assessments to AI-Adapted Assessments
This page outlines the transition from traditional assessments to AI-adapted assessments, highlighting the integration of generative AI (GenAI) tools that provide real-time feedback and personalised planning. Traditional assessments are increasingly inadequate because of their limited adaptability and lack of contextual flexibility. By leveraging GenAI tools designed for today's AI-driven world, educators can deliver real-time feedback and tailor learning plans to individual needs with greater precision.
- Heavy weighting on the written report
- Validity drift in learning measurement
- Authorship and integrity uncertainty
- Limited visibility of learning processes
- Reduced quality and diagnostic value of feedback
- Grade inflation and overestimation of mastery
- Development of real‑world communication skills
- Higher-order thinking
- Reduction of plagiarism risk
- Validity drift in assessing capstone learning outcomes
- Authorship, integrity, and contribution ambiguity
- Limited visibility of the learning and design process
- Reduced quality and specificity of supervisory feedback
- Grade inflation and overestimation of capstone mastery
- Equity and access disparities in AI use
- Validity drift in measuring learning outcomes
- Authorship, integrity, and accountability gaps
- Reliance on passive or product‑based participation
- Reduced visibility into learning processes
- Grade inflation and surface‑level achievement
- Equity and access concerns in AI use
- Time‑consuming and resource‑intensive
- Validity limitations in measuring learning outcomes
- Limited insight into authorship and response authenticity
- Low diagnostic value and generic feedback
- Inflexibility regarding learner context and emerging knowledge
- Overestimation of understanding
- Difficulty in assessing higher‑order thinking
- Validity drift in assessing intended learning outcomes
- Authorship and integrity challenges
- Limited visibility of learning processes
- Grade inflation and misrepresentation of mastery
- High risk of AI-assisted cheating in unsupervised settings
- Vulnerability to AI‑assisted cheating in online formats
- Validity drift in measuring intended learning outcomes
- Surface learning and memorisation bias
- Misalignment with real‑world and graduate skills
- Limited feedback and learning value
- Grade inflation and misrepresentation of competence
- Heavy weighting of written reports
- Validity drift in measuring group and individual learning
- Authorship, integrity, and contribution ambiguity
- Difficulty in assessing individual understanding
- Loss of visibility of collaborative processes
- Grade inflation and overestimation of team performance
Suggested adaptations to address these limitations include:
- Reduced weighting for written reports
- Emphasising reflections, presentations, or oral defences
- Individual contribution logs or reflections to evidence authentic understanding
- Group documentation of when and how AI is used
- Human-led final analysis, with AI used only in an assistive role