A Transformer for SAG: What Does it Grade?

Authors

  • Nico Willms, Hochschule für Technik, Stuttgart
  • Ulrike Padó, Hochschule für Technik, Stuttgart

DOI:

https://doi.org/10.3384/ecp190012

Keywords:

Short-Answer Grading, Transformer-based models, Adversarial Attacks

Abstract

Automatic short-answer grading (ASAG) aims to predict human grades for short free-text answers to test questions, in order to support or replace human grading. Despite active research, there is to date no widespread use of ASAG in real-world teaching. One reason is the lack of transparency of popular methods like Transformer-based deep neural networks, which means that students and teachers cannot know how much to trust automated grading. We probe one such model using the adversarial attack paradigm to better understand its reliance on syntactic and semantic information in the student answers, and its vulnerability to the (easily manipulated) answer length. We find that the model is, reassuringly, likely to reject answers with missing syntactic and semantic information, but that it picks up on the correlation between answer length and correctness in standard training. Thus, real-world applications have to safeguard against exploitation of answer length.

Published

2022-12-02