A Framework for Responsible Development of Automated Student Feedback with Generative AI

29 Aug 2023  ·  Euan D Lindsay, Aditya Johri, Johannes Bjerva

Providing rich feedback to students is essential for supporting student learning. Recent advances in generative AI, particularly in large language models (LLMs), make it possible to deliver repeatable, scalable and instant automatically generated feedback to students, making abundant a previously scarce and expensive learning resource. Such an approach is technically feasible thanks to these recent advances in Artificial Intelligence (AI) and Natural Language Processing (NLP); but while the potential upside is a strong motivator, deploying it introduces a range of ethical issues that must be considered as we apply these technologies. The attractiveness of AI systems is that they can effectively automate the most mundane tasks; but this risks introducing a "tyranny of the majority", where the needs of minorities in the long tail are overlooked because they are difficult to automate. Developing machine learning models that can generate valuable and authentic feedback requires the input of human domain experts. The choices we make in capturing this expertise -- whose, which, when, and how -- will have significant consequences for the nature of the resulting feedback. How we maintain our models will affect how that feedback remains relevant amid temporal changes in context, theory, and the prior learning profiles of student cohorts. These questions are important from an ethical perspective, but they are also important from an operational perspective: unless they can be answered, AI-generated feedback systems will lack the trust necessary for them to be useful features in the contemporary learning environment. This article outlines the frontiers of automated feedback, identifies the ethical issues involved in its provision, and presents a framework to assist academics in developing such systems responsibly.
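As a purely illustrative sketch (not taken from the paper), the snippet below shows one way rubric-grounded feedback generation of the kind the abstract describes might be wired up. It assumes the OpenAI Python client (v1 interface) with an API key in the environment; the model name, rubric text, and prompt wording are hypothetical placeholders, and a real deployment would add the expert input, maintenance, and human-oversight safeguards the framework argues for.

```python
# Illustrative sketch only: generating rubric-grounded draft feedback with an LLM.
# Assumes the OpenAI Python client (pip install openai) and OPENAI_API_KEY set in
# the environment; the model name, rubric, and prompts are hypothetical placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = """\
1. Correctness: does the solution meet the stated requirements?
2. Reasoning: is the approach justified and clearly explained?
3. Communication: is the writing clear and well structured?"""

SYSTEM_PROMPT = (
    "You are a teaching assistant. Give constructive, specific feedback on a "
    "student submission using the rubric below. Do not assign a grade; point "
    "to concrete passages and suggest one improvement per criterion.\n\n"
    f"Rubric:\n{RUBRIC}"
)

def generate_feedback(submission: str, model: str = "gpt-4o-mini") -> str:
    """Return draft feedback for one submission, intended for instructor review
    rather than release to students without a human in the loop."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": submission},
        ],
        temperature=0.3,  # lower temperature for more repeatable feedback
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(generate_feedback("The student's submitted answer goes here."))
```

Even in this trivial form, the design choices the abstract highlights are visible: whose rubric is encoded, how the prompt frames the task, and how the model is maintained over time all shape the feedback students ultimately receive.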
