A Transformer for SAG: What Does it Grade?
DOI: https://doi.org/10.3384/ecp190012

Keywords: Short-Answer Grading, Transformer-based models, Adversarial Attacks

Abstract
Automatic short-answer grading (ASAG) aims to predict human grades for short free-text answers to test questions, in order to support or replace human grading. Despite active research, there is to date no widespread use of ASAG in real-world teaching. One reason is the lack of transparency of popular methods like Transformer-based deep neural networks, which means that students and teachers cannot know how much to trust automated grading. We probe one such model using the adversarial attack paradigm to better understand its reliance on syntactic and semantic information in the student answers, and its vulnerability to the (easily manipulated) answer length. We find that the model is, reassuringly, likely to reject answers with missing syntactic and semantic information, but that it picks up on the correlation between answer length and correctness in standard training. Thus, real-world applications have to safeguard against exploitation of answer length.
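The length-exploitation risk described above can be illustrated with a small toy sketch. The function below is a hypothetical stand-in grader (not the paper's Transformer model): it scores token overlap with a reference answer but also carries a length-correlated term, mimicking the correlation the abstract reports. Padding a content-free answer with filler then raises its score.

```python
# Illustrative toy only: a stand-in "grader" whose score partly tracks
# answer length, mimicking the length/correctness correlation reported
# for Transformer-based SAG models. All names here are hypothetical.

def toy_grade(reference: str, answer: str) -> float:
    """Score = token overlap with the reference plus a small length bonus."""
    ref_tokens = set(reference.lower().split())
    ans_tokens = answer.lower().split()
    overlap = len(ref_tokens & set(ans_tokens)) / max(len(ref_tokens), 1)
    length_bonus = min(len(ans_tokens) / 20, 1.0) * 0.3  # length-correlated term
    return 0.7 * overlap + length_bonus

reference = "the mitochondrion produces ATP through cellular respiration"
weak = "it makes energy"
padded = weak + " " + " ".join(["furthermore"] * 17)  # filler, no new content

print(toy_grade(reference, weak))
print(toy_grade(reference, padded))  # higher score despite no added content
```

A real-world safeguard, as the abstract suggests, would be to check that a grader's predictions are invariant under this kind of content-free padding.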
License
Copyright (c) 2022 Nico Willms, Ulrike Padó
This work is licensed under a Creative Commons Attribution 4.0 International License.