Bringing Automatic Scoring into the Classroom – Measuring the Impact of Automated Analytic Feedback on Student Writing Performance

Authors

  • Andrea Horbach CATALPA, FernUniversität in Hagen
  • Ronja Laarmann-Quante Ruhr-Universität Bochum
  • Lucas Liebenow Leibniz-Institut für die Pädagogik der Naturwissenschaften und Mathematik, Kiel
  • Thorben Jansen Leibniz-Institut für die Pädagogik der Naturwissenschaften und Mathematik, Kiel
  • Stefan Keller Pädagogische Hochschule Zürich
  • Jennifer Meyer Leibniz-Institut für die Pädagogik der Naturwissenschaften und Mathematik, Kiel
  • Torsten Zesch CATALPA, FernUniversität in Hagen
  • Johanna Fleckenstein Leibniz-Institut für die Pädagogik der Naturwissenschaften und Mathematik, Kiel and Universität Hildesheim

DOI:

https://doi.org/10.3384/ecp190008

Keywords:

automatic scoring, writing feedback, intervention study

Abstract

While many methods for automatically scoring student writing have been proposed, few studies have examined whether such scores constitute effective feedback that improves learners’ writing quality. In this paper, we use an EFL email dataset annotated according to five analytic assessment criteria to train a classifier for each criterion, reaching human-machine agreement values (kappa) between .35 and .87. We then conduct an intervention study with 112 lower secondary students in which participants in the feedback condition received stepwise automatic feedback for each criterion, while students in the control group received only a description of the respective scoring criterion. We manually and automatically score the resulting revisions to measure the effect of automated feedback and find that students in the feedback condition improved more than those in the control group for 2 out of 5 criteria. Our results are encouraging, as they show that even imperfect automated feedback can be successfully used in the classroom.
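The human-machine agreement values reported in the abstract are Cohen's kappa scores, which correct raw agreement for the agreement expected by chance. As a minimal illustration (the label sequences below are invented, not data from the study), kappa can be computed in pure Python as follows:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two label sequences of equal length."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # observed agreement: fraction of items both raters labeled identically
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # expected agreement: chance overlap given each rater's label distribution
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    p_e = sum(counts_a[label] * counts_b[label] for label in counts_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# hypothetical human and machine scores on a 3-point analytic criterion
human   = [0, 1, 2, 1, 0, 2, 1, 1]
machine = [0, 1, 2, 2, 0, 1, 1, 1]
print(round(cohens_kappa(human, machine), 2))  # → 0.6
```

A kappa of 0 means agreement is no better than chance and 1 means perfect agreement, which puts the reported range of .35 to .87 in context: the criteria differ considerably in how reliably the classifiers reproduce human scores.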

Published

2022-12-02