EliCoDe at MultiGED2023: fine-tuning XLM-RoBERTa for multilingual grammatical error detection
DOI:
https://doi.org/10.3384/ecp197003
Keywords:
Grammatical Error Detection, Transformers, Multilingual Shared Task
Abstract
In this paper we describe the participation of our team, ELICODE, in the first shared task on Multilingual Grammatical Error Detection, MultiGED, organised within the workshop series on Natural Language Processing for Computer-Assisted Language Learning (NLP4CALL). The multilingual shared task covers five languages: Czech, English, German, Italian and Swedish. The task is framed as binary classification at the token level, aiming to label each token in the provided sentences as correct or incorrect. The submitted system is a token classifier based on the XLM-RoBERTa language model. We fine-tuned five different models, one for each language in the shared task. We devised two experimental settings: in the first, we trained the models only on the provided training set, using the development set to select the model achieving the best performance across the training epochs; in the second, we trained each model jointly on the training and development sets for 10 epochs, retaining the 10-epoch fine-tuned model. Our submitted systems, evaluated using the F0.5 score, achieved the best performance on all test sets except the English REALEC data set, where our system ranked second. Code and models are publicly available at https://github.com/davidecolla/EliCoDe.
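As context for the approach described above, the sketch below shows a minimal binary token-classification setup with XLM-RoBERTa using Hugging Face Transformers. The checkpoint name, the label scheme (0 = correct, 1 = incorrect) and the label-alignment details are illustrative assumptions, not the authors' released code; see the GitHub repository for the actual implementation.

```python
# Minimal sketch: binary token classification with XLM-RoBERTa.
# Assumptions: base-size checkpoint, label 0 = correct, label 1 = incorrect.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "xlm-roberta-base"  # assumed checkpoint, one model per language
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=2)

def encode(words, word_labels):
    """Tokenize a pre-split sentence and align word-level labels to subwords."""
    enc = tokenizer(words, is_split_into_words=True, truncation=True, return_tensors="pt")
    aligned = [
        -100 if wid is None else word_labels[wid]  # -100 is ignored by the loss
        for wid in enc.word_ids(batch_index=0)
    ]
    enc["labels"] = torch.tensor([aligned])
    return enc

# Hypothetical MultiGED-style example: label 1 marks an erroneous token.
batch = encode(["He", "go", "to", "school", "yesterday", "."], [0, 1, 0, 0, 0, 0])
loss = model(**batch).loss  # fine-tune by back-propagating this loss
```

In this setup, model selection on the development set (the first experimental setting) or joint training on train and dev data for a fixed number of epochs (the second setting) would be handled by the surrounding training loop.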
Published
2023-05-16
License
Copyright (c) 2023 Davide Colla, Matteo Delsanto, Elisa Di Nuovo
This work is licensed under a Creative Commons Attribution 4.0 International License.